id | title | abstract | authors | published_date | link | markdown
---|---|---|---|---|---|---|
2302.01547 | Evidence to disfavour dual core system leading to double-peaked narrow
emission lines | In this manuscript, an interesting method is proposed to test the dual core
system explanation for double-peaked narrow emission lines, using a precious dual core
system with double-peaked narrow Balmer lines in one nucleus (in the main galaxy) but
with single-peaked narrow Balmer lines in the other nucleus (in the companion galaxy).
Under a dual core system, considering narrow Balmer (H$\alpha$ and H$\beta$)
emissions ($f_{e,~\alpha}$ and $f_{e,~\beta}$) from companion galaxy but
covered by SDSS fiber for the main galaxy and narrow Balmer emissions
($f_{c,~\alpha}$ and $f_{c,~\beta}$) from the companion galaxy covered by SDSS
fiber for the companion galaxy, the same flux ratios
$f_{e,~\alpha}/f_{c,~\alpha}=f_{e,~\beta}/f_{c,~\beta}$ can be expected, due to
totally similar physical conditions of each narrow Balmer emission region.
Then, the precious dual core system in SDSS J2219-0938 is discussed. After
subtracting pPXF code determined stellar lights, double-peaked narrow Balmer
emission lines are confirmed in the main galaxy with confidence level higher
than $5\sigma$, but single-peaked narrow Balmer emission lines in the companion
galaxy. Through measured fluxes of emission components,
$f_{e,~\alpha}/f_{c,~\alpha}$ is around 0.82, different from
$f_{e,~\beta}/f_{c,~\beta}\sim0.52$, to disfavour a dual core system for the
double-peaked narrow Balmer emission lines in SDSS J2219-0938. | Zhang XueGuang, Zheng Qi | 2023-02-03T04:53:31Z | http://arxiv.org/abs/2302.01547v1 | # Evidence to disfavour dual core system leading to double-peaked narrow emission lines
###### Abstract
In this manuscript, an interesting method is proposed to test the dual core system explanation for double-peaked narrow emission lines, using a precious dual core system with double-peaked narrow Balmer lines in one nucleus (in the main galaxy) but single-peaked narrow Balmer lines in the other nucleus (in the companion galaxy). Under a dual core system, considering the narrow Balmer (H\(\alpha\) and H\(\beta\)) emissions (\(f_{e,\ \alpha}\) and \(f_{e,\ \beta}\)) from the companion galaxy that are covered by the SDSS fiber for the main galaxy, and the narrow Balmer emissions (\(f_{c,\ \alpha}\) and \(f_{c,\ \beta}\)) from the companion galaxy covered by the SDSS fiber for the companion galaxy, the same flux ratios \(f_{e,\ \alpha}/f_{c,\ \alpha}=f_{e,\ \beta}/f_{c,\ \beta}\) can be expected, because the narrow Balmer emission regions share the same physical conditions. Then, the precious dual core system in SDSS J2219-0938 is discussed. After subtracting the stellar light determined by the pPXF code, double-peaked narrow Balmer emission lines are confirmed in the main galaxy with a confidence level higher than 5\(\sigma\), but single-peaked narrow Balmer emission lines are found in the companion galaxy. Through the measured fluxes of the emission components, \(f_{e,\ \alpha}/f_{c,\ \alpha}\) is around 0.82, different from \(f_{e,\ \beta}/f_{c,\ \beta}\sim 0.52\), which disfavours a dual core system as the origin of the double-peaked narrow Balmer emission lines in SDSS J2219-0938.
keywords: galaxies:nuclei - galaxies:emission lines - galaxies:individual (SDSS J2219-0938)
## 1 Introduction
Dual core systems on scales of dozens to thousands of parsecs, down to supermassive binary black hole (BBH) systems on sub-parsec scales, are commonly expected products of galaxy mergers, an essential process of galaxy formation and evolution (Begelman et al., 1980; Kauffmann et al., 1993; Silk & Rees, 1998; Merritt, 2006; Mayer et al., 2010; Rodriguez-Gomez et al., 2017; Bottrell et al., 2019; Martin et al., 2021; Mannerkoski et al., 2022; Yoon et al., 2022). Many techniques have been proposed to detect dual core systems and BBH systems. Applications of double-peaked features of broad and/or narrow emission lines can be found in Komossa et al. (2008); Boroson & Lauer (2009); Shen & Loeb (2010); Popovic (2012); Comerford et al. (2013); Liu et al. (2016); De Rosa et al. (2019). Applications of spatially resolved image properties of central regions have been reported in Komossa et al. (2003); Rodriguez et al. (2009); Piconcelli et al. (2010); Nardini (2017); Kollatschny et al. (2020). Applications of different line profiles of broad Balmer emission lines can be found in Zhang (2021d). Applications of long-standing Optical Quasi-Periodic Oscillations have been reported in Graham et al. (2015); Kovacevic et al. (2019); Serafinelli et al. (2020); Liao et al. (2021); Zhang (2022).
Among the proposed techniques to detect dual core systems, the application of double-peaked narrow emission lines is a very interesting topic. Zhou et al. (2004) first reported a dual core system in SDSS J1048+0055 through double-peaked narrow emission lines combined with radio properties. Then, Gerke et al. (2007) have reported a dual core system in EGSD2 J1420+5259 through double-peaked [O iii] emission features combined with multi-band wavelength properties. Xu & Komossa (2009) have reported a dual core system in SDSS J1316+1753 through all narrow emission lines having double-peaked features. Liu et al. (2010) have reported dual core systems in four objects through double-peaked [O iii] emission features combined with properties of deep near-infrared images. McGurk et al. (2011) have reported a dual core system in SDSS J0952+2552 through its double-peaked [O iii] lines combined with properties of resolved near-infrared images. Fu et al. (2011) have reported a dual core system in SDSS J1502+1115 through double-peaked [O iii] emission features and resolved radio images. Barrows et al. (2012) have shown a dual core system in CXOXB J1426+3533 through double-peaked narrow emission line features combined with properties of near-infrared adaptive optics imaging. Barrows et al. (2013) have shown that dual core systems are favoured as the origin of the double-peaked high-ionization narrow emission lines through a sample of 131 quasars with \(0.8<z<1.6\). Woo et al. (2014) have reported a dual core system through double-peaked narrow emission lines combined with Hubble Space Telescope imaging. Severgnini et al. (2021) have reported a favoured dual core system in SDSS J1431+4358 with double-peaked narrow emission lines. Besides the discussed individual objects, there are large samples of objects with double-peaked narrow emission lines reported in Smith et al. (2010); Ge et al. (2012); Wang et al. (2019), etc.
However, there are some further reports to disfavour the double-peaked narrow emission lines as efficient signs of dual core systems. Liu et al. (2010) have shown double-peaked features due to narrow-line region kinematics or dual core systems. Rosario et al. (2010)
have shown double-peaked narrow emission lines due to radio-jet driven outflows. Fu et al. (2011) have shown scenarios involving a single AGN leading to the same double-peaked narrow emission lines. Shen et al. (2011) have shown that a kinematics scenario with a single AGN can be commonly applied for the majority of double-peaked [O iii] lines in Type-2 AGN. Fu et al. (2012) have discussed that probably only 1% of dual AGN can lead to double-peaked narrow emission lines. Zhang (2015) has reported a non-kinematic model for double-peaked narrow H\(\alpha\). McGurk et al. (2015) have shown that only one dual core system is detected among 12 candidates with double-peaked narrow emission lines, followed by Muller-Sanchez et al. (2015); Nevin et al. (2016). Zhang & Feng (2016) have shown that dual core systems are not statistically preferred for double-peaked narrow emission lines, through virial BH mass comparisons of broad line AGN with and without double-peaked narrow emission lines. Liu et al. (2018) have shown that radio-loud double-peaked narrow emission line AGN should be related to jets.
In this manuscript, an interesting method is proposed to test the dual core system explanation for double-peaked narrow emission lines. Section 2 presents our main hypotheses. Section 3 presents the main results and necessary discussions for the precious dual core system in SDSS J221924.98-093821.6 (=SDSS J2219-0938). Section 4 gives our final conclusions. Throughout, the cosmological parameters \(H_{0}=70\rm{km\cdot s^{-1}Mpc^{-1}}\), \(\Omega_{\Lambda}=0.7\) and \(\Omega_{\rm{m}}=0.3\) are adopted.
## 2 Main hypotheses
The main consideration is that the narrow Balmer emission line regions covered by the SDSS fibers have the same physical conditions. Then, a precious kind of dual core system, with one core showing double-peaked narrow Balmer emission lines in the main galaxy and the other core showing single-peaked narrow Balmer emission lines in the companion galaxy, can be well discussed.
For the double-peaked narrow Balmer emission lines (H\(\alpha\) and H\(\beta\)) in the main galaxy, the blue-shifted (or red-shifted) components of the double-peaked narrow Balmer lines definitely include contributions of the narrow Balmer emissions from the companion galaxy that are covered by the fiber for the main galaxy, under the framework of a dual core system. In other words, from the intrinsic narrow Balmer emission line regions of the companion galaxy, one region (ext-emission region) is covered by the SDSS fiber for the main galaxy, and the other region (comp-emission region) is covered by the SDSS fiber for the companion galaxy. Considering \(p1\) and \(p2\) as the ratios of the narrow Balmer emissions in the ext-emission region and in the comp-emission region to the intrinsic total narrow Balmer emissions in the companion galaxy, we have
\[\frac{f_{e,\ \alpha}}{f_{T,\ \alpha}}\ =\ p1,\ \ \frac{f_{e,\ \beta}}{f_{T,\ \beta}}=p1,\ \ \frac{f_{c,\ \alpha}}{f_{T,\ \alpha}}\ =\ p2,\ \ \frac{f_{c,\ \beta}}{f_{T,\ \beta}}=p2 \tag{1}\]
with \(f_{e,\ \alpha}\) and \(f_{e,\ \beta}\) (\(f_{c,\ \alpha}\) and \(f_{c,\ \beta}\)) as line fluxes of narrow H\(\alpha\) and narrow H\(\beta\) from the ext-emission region (the comp-emission region), and \(f_{T,\ \alpha}\) and \(f_{T,\ \beta}\) as intrinsic total line fluxes of narrow H\(\alpha\) and narrow H\(\beta\) of the companion galaxy. Although \(p1\) and \(p2\) are unknown parameters, the equations above can well lead to
\[\frac{p1}{p2}=\frac{f_{e,\ \alpha}}{f_{c,\ \alpha}}=\frac{f_{e,\ \beta}}{f_{c,\ \beta}}=R_{ec} \tag{2}\]
Therefore, properties of \(R_{ec}\) can be well applied to test the assumed dual core system for double-peaked narrow emission lines.
Certainly, due to different locations of ext-emission region and comp-emission region, different dust obscurations should have effects on the flux ratio \(R_{ec}\). However, considering intrinsic narrow Balmer decrement (flux ratio of narrow Balmer emission lines) can be totally applied to correct the effects of dust obscurations. Then, in the manuscript, an interesting target SDSS J2219-0938 is collected and discussed, due to its double-peaked narrow Balmer emission lines in the main galaxy but single-peaked narrow Balmer emission lines in the companion galaxy.
Figure 1: Left panel shows the 25.8′′\(\times\)25.8′′ colorful photometric image of SDSS J2219-0938, right panel shows the corresponding brightness contours of the photometric image. In the left panel, the two red circles with radii of 1.5′′ represent the covering regions of the SDSS fibers, and the two red pluses represent the pointing positions (central positions of the two nuclei) of the SDSS fibers. In the right panel, the two red pluses mark the central positions of the two nuclei.
## 3 Photometric and spectroscopic results in SDSS J2219-0938
SDSS J2219-0938 at redshift 0.0948 was first reported to show double-peaked narrow emission lines in Ge et al. (2012). Moreover, not only are there double-peaked narrow emission lines and an apparent dual core in the photometric image, but SDSS also provides high quality spectroscopic results for both nuclei, offering the best chance to test the dual core system explanation for the double-peaked narrow emission lines in SDSS J2219-0938, through the properties of \(R_{\rm ec}\) discussed above.
The left panel of Fig. 1 shows the 25.8''\(\times\)25.8'' colorful photometric image centered at RA=22:19:24.98 and DEC=-09:38:21.6. The right panel of Fig. 1 shows the brightness contours of the image, revealing two apparent cores with central positions (brightness peak positions) of (RA=334.8548, DEC=-9.63938) and (RA=334.8541, DEC=-9.63932) marked as red pluses, corresponding to a projected separation of about 2.6'' (5480pc at redshift 0.0948) between the two nuclei. Meanwhile, the two nuclei have SDSS spectra with plate-mjd-fiberid of 0720-52206-0299 (covered by the fiber for the companion galaxy) and 0719-52203-0007 (for the main galaxy). The SDSS fiber covered areas are marked as red circles in the left panel of Fig. 1, with the central positions of the two circles as the pointing positions (essentially coincident with the central positions of the two nuclei) of the SDSS fibers.
Fig. 2 shows the SDSS spectra of the two nuclei, with signal-to-noise ratios of about 22-25. In order to reveal clear narrow emission line features, the pPXF (penalized pixel-fitting) code (Cappellari, 2017), a commonly applied SSP (Simple Stellar Population) method (Bruzual & Charlot, 2003; Kauffmann et al., 2003), is used to determine the stellar contributions. The pPXF code is applied with 224 SSP templates from the MILES stellar library (Falcon-Barroso et al., 2011), with 32 stellar ages from 0.06Gyrs to 17.78Gyrs and 7 metallicities from -2.32 to 0.22. With the popular regularization method, the pPXF code gives reliable and smoother star-formation histories. Fig. 2 shows the determined stellar light in the SDSS spectra of the two nuclei. The pPXF-determined velocity shifts of the stellar templates are about \(-17\pm 11\rm km/s\) and \(-230\pm 14\rm km/s\) for the stellar features in the main galaxy and in the companion galaxy, respectively. The absorption features in the main galaxy are used to determine the velocity shifts of the emission lines in the main galaxy and in the companion galaxy.
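The stellar-light subtraction described above can be sketched with the public Python implementation of pPXF. The snippet below is only an illustrative outline, not the script used in this work: the input file names, the velocity scale, the noise level, and the polynomial and regularization settings are hypothetical placeholders, and we assume the package's standard call pattern `ppxf(templates, galaxy, noise, velscale, start, ...)`.

```python
# Minimal sketch of a pPXF stellar-continuum fit (file names, velscale, noise
# and keyword settings are placeholders, not the values used in this work).
import numpy as np
from ppxf.ppxf import ppxf

# galaxy: log-rebinned rest-frame spectrum; noise: its 1-sigma errors;
# templates: SSP templates log-rebinned to the same velocity scale (2D array).
galaxy = np.loadtxt("sdss_j2219_main_logrebinned.txt")          # hypothetical file
noise = np.full_like(galaxy, 0.05)                               # placeholder errors
templates = np.loadtxt("miles_ssp_templates_logrebinned.txt")    # hypothetical file
velscale = 69.0           # km/s per pixel, placeholder
start = [0.0, 150.0]      # initial (V, sigma) guess in km/s

pp = ppxf(templates, galaxy, noise, velscale, start,
          moments=2,      # fit stellar velocity and velocity dispersion
          degree=-1,      # no additive polynomial
          mdegree=10,     # multiplicative polynomial, placeholder order
          regul=100)      # regularization for a smoother star-formation history

stellar_model = pp.bestfit                 # host-galaxy (stellar) contribution
emission_only = galaxy - stellar_model     # pure emission-line spectrum
print("best-fit stellar V, sigma:", pp.sol)
```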
After subtracting the host galaxy contributions and correcting for the pPXF-determined velocity shifts, the narrow Balmer emission lines can be well measured by multiple Gaussian functions, similar to what we have recently done in Zhang (2021a,b, 2022a,b,c). Emission lines around H\(\alpha\) with rest wavelengths from 6520 to 6620A are discussed first, including narrow H\(\alpha\) and the [N ii] doublet.
Figure 3: Top two panels show the best-fitting results (solid red line for model B, dashed red line for model A) to the emission lines around H\(\alpha\) (in dark green) in the main galaxy and in the companion galaxy. Bottom two panels show corresponding results to the emission lines around H\(\beta\). In top two panels, dotted blue line and dotted green lines show the model A determined narrow H\(\alpha\) and [N ii] doublet, solid blue lines, solid green lines and solid pink lines show the model B determined Gaussian components in narrow H\(\alpha\) and [N ii] doublet. In bottom two panels, dotted blue line shows the model A determined narrow H\(\beta\), solid blue lines show the model B determined Gaussian components in narrow H\(\beta\).
Figure 2: SDSS spectra (in dark green) of the two nuclei and the pPXF code determined host galaxy contributions (in red). The determined \(\chi^{2}/dof\) related to the best descriptions is marked in title of each panel.
In order to confirm the double-peaked narrow emission line features, two different model functions are applied. In model A, each narrow emission line is described by one Gaussian function. In model B, each narrow emission line is described by two Gaussian functions. When the model functions in both model A and model B are applied, the following two criteria are adopted. First, each Gaussian component has an emission flux not smaller than zero. Second, the components of the [N ii] doublet have the same redshift, the same line width in velocity space, and a flux ratio fixed to the theoretical value of 3. Then, through the Levenberg-Marquardt least-squares minimization technique (MPFIT package), the narrow H\(\alpha\) and [N ii] doublet can be well measured. The best fitting results are shown in the top two panels of Fig. 3 with \(\chi^{2}_{A}/dof_{A}=1026.4/58\sim 17.7\), \(\chi^{2}_{B}/dof_{B}=84.4/52\sim 1.6\) and \(\chi^{2}_{A}/dof_{A}=380.8/59\sim 6.5\), \(\chi^{2}_{B}/dof_{B}=87.4/53\sim 1.6\), for the applications of model A and model B to the lines in the spectra of the main galaxy and of the companion galaxy, respectively. Parameters of the emission components are listed in Table 1. In order to show clearer results in Fig. 3, the plots are limited to the wavelength range from 6530 to 6600A.
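To make the model comparison concrete, the following minimal sketch fits a single line window (here the narrow H\(\beta\) region) once with one Gaussian (model A) and once with two Gaussians (model B) and reports the resulting \(\chi^{2}/dof\). It uses scipy's generic least-squares fitting rather than the IDL MPFIT package named above, and the input file is a hypothetical placeholder for the continuum-subtracted spectrum.

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss(x, flux, center, sigma):
    """Gaussian emission component parameterized by its total flux."""
    return flux / (np.sqrt(2 * np.pi) * sigma) * np.exp(-0.5 * ((x - center) / sigma) ** 2)

def model_A(x, f, c, s):                     # one Gaussian for narrow H-beta
    return gauss(x, f, c, s)

def model_B(x, f1, c1, s1, f2, c2, s2):      # two Gaussians for narrow H-beta
    return gauss(x, f1, c1, s1) + gauss(x, f2, c2, s2)

# Placeholder spectrum: rest-frame wavelength (A), flux, flux error.
wave, flux, err = np.loadtxt("hbeta_window_main_galaxy.txt", unpack=True)  # hypothetical
mask = (wave > 4830) & (wave < 4900)
x, y, e = wave[mask], flux[mask], err[mask]

# Non-negative fluxes are enforced through the parameter bounds (first criterion).
pA, _ = curve_fit(model_A, x, y, sigma=e, p0=[30, 4862, 2],
                  bounds=([0, 4830, 0.1], [np.inf, 4900, 10]))
pB, _ = curve_fit(model_B, x, y, sigma=e, p0=[20, 4860, 2, 20, 4864, 2],
                  bounds=([0, 4830, 0.1, 0, 4830, 0.1],
                          [np.inf, 4900, 10, np.inf, 4900, 10]))

def chi2(model, p):
    return np.sum(((y - model(x, *p)) / e) ** 2)

dofA, dofB = len(x) - 3, len(x) - 6
print("model A: chi2/dof =", chi2(model_A, pA) / dofA)
print("model B: chi2/dof =", chi2(model_B, pB) / dofB)
```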
Similar as what we have recently done in Zhang (2022c), the F-test statistical technique is applied to determine whether the functions (double-peaked features) in model B are preferred in the main galaxy. Based on the different \(\chi^{2}/dof\) values for model A and Model B, the calculated \(F_{p}\) value is about
\[F_{p}=\frac{(\chi^{2}_{A}-\chi^{2}_{B})/(dof_{A}-dof_{B})}{\chi^{2}_{B}/dof_{B}}\sim 97 \tag{3}\]
With \(dof_{A}-dof_{B}\) and \(dof_{A}\) as the numbers of degrees of freedom of the F-distribution numerator and denominator, the critical value expected from the statistical F-test at the 5\(\sigma\) confidence level is only about 10 (much smaller than 97). Therefore, the confidence level is higher than 5\(\sigma\) in support of the double-peaked narrow H\(\alpha\) and [N ii] doublet. A similar procedure is applied to the model-determined results in the companion galaxy, also leading to a higher than 5\(\sigma\) confidence level in support of the model B results. However, as shown by the parameters listed in Table 1 for the emission lines in the companion galaxy, there are no reliable measurements of double-peaked features in the [N ii] doublet, because the measured emission intensity is smaller than the corresponding uncertainty. Therefore, the applied model B in the companion galaxy leads to an extended component in narrow H\(\alpha\), but no double-peaked features.
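As a quick numerical cross-check (not part of the original analysis), the 5\(\sigma\) critical value of the F distribution can be computed with scipy, using the degrees of freedom quoted above for the H\(\alpha\) fits in the main galaxy; it should be close to the value of about 10 mentioned in the text.

```python
from scipy import stats

# One-sided tail probability corresponding to 5 sigma of a normal distribution.
p_5sigma = stats.norm.sf(5)             # ~2.87e-7

# Degrees of freedom from the H-alpha fits in the main galaxy (model A vs. model B).
dof_A, dof_B = 58, 52
dfn = dof_A - dof_B                      # numerator dof of the F distribution
dfd = dof_A                              # denominator dof, as used in the text

chi2_A, chi2_B = 1026.4, 84.4
F_p = ((chi2_A - chi2_B) / dfn) / (chi2_B / dof_B)
F_crit = stats.f.ppf(1 - p_5sigma, dfn, dfd)

print(f"F_p = {F_p:.1f}, 5-sigma critical value = {F_crit:.1f}")
# F_p far above the critical value supports the double-peaked (model B) profiles.
```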
Similar procedures are applied to measure the narrow H\(\beta\) within the rest wavelength range from 4830 to 4900A. Only one Gaussian component is applied in model A, and two Gaussian functions are applied in model B. Then, through the Levenberg-Marquardt least-squares minimization technique, the narrow H\(\beta\) can be well measured. The best fitting results are shown in Fig. 3 with \(\chi^{2}_{A}/dof_{A}=124.3/57\sim 2.2\), \(\chi^{2}_{B}/dof_{B}=50.6/54\sim 0.94\) and \(\chi^{2}_{A}/dof_{A}=60.7/58\sim 1.0\), \(\chi^{2}_{B}/dof_{B}=47.3/55\sim 0.86\), for the applications of model A and model B to the lines in the spectra of the main galaxy and the companion galaxy, respectively. In order to show clearer results in Fig. 3, the plots are limited to the wavelength range from 4850 to 4875A. The parameters of the emission components are also listed in Table 1. Furthermore, through the F-test statistical technique and the different \(\chi^{2}/dof\) values of model A and model B, a higher than 5\(\sigma\) confidence level can be determined in support of the model B results in the main galaxy, indicating reliable double-peaked narrow H\(\beta\). Meanwhile, only a lower than 3\(\sigma\) confidence level can be determined in support of the model B results in the companion galaxy; moreover, considering that the measured emission intensity is smaller than 2 times the corresponding uncertainty, the model A results are preferred in the companion galaxy, indicating single-peaked narrow H\(\beta\), similar to the single-peaked narrow H\(\alpha\) in the companion galaxy.
Based on the well measured double-peaked narrow Balmer emission lines in the main galaxy and single-peaked narrow Balmer emission lines in the companion galaxy, and after considering the pPXF-determined intrinsic velocity shifts of the stellar features, the properties of \(R_{ec}\) can be well checked. The \(R_{ec}\) in narrow H\(\alpha\) is about \((113\pm 5)/(136\pm 5)\sim 0.82\pm 0.07\), whereas the \(R_{ec}\) in narrow H\(\beta\) is about \((15\pm 3)/(29\pm 2)\sim 0.52\pm 0.14\). The quite different values of \(R_{ec}\) in narrow H\(\alpha\) and in narrow H\(\beta\) strongly disfavour the assumption that the double-peaked narrow Balmer emission lines in the main galaxy are tightly related to emission regions belonging to the central dual core system.
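The two \(R_{ec}\) values follow from simple flux ratios; the short sketch below redoes the arithmetic with a first-order error propagation that assumes independent Gaussian uncertainties (the uncertainties quoted above may be based on a different propagation), reproducing central values close to the quoted 0.82 and 0.52.

```python
import numpy as np

def ratio_with_error(a, da, b, db):
    """Return a/b with first-order error propagation for independent errors."""
    r = a / b
    return r, r * np.sqrt((da / a) ** 2 + (db / b) ** 2)

# Component fluxes in units of 1e-16 erg/s/cm^2, as quoted in the text:
# ext-emission vs. comp-emission for narrow H-alpha and narrow H-beta.
r_alpha = ratio_with_error(113.0, 5.0, 136.0, 5.0)
r_beta = ratio_with_error(15.0, 3.0, 29.0, 2.0)

print("R_ec(H-alpha) = %.2f +/- %.2f" % r_alpha)   # close to the ~0.82 quoted above
print("R_ec(H-beta)  = %.2f +/- %.2f" % r_beta)    # close to the ~0.52 quoted above
```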
Furthermore, the effects of dust obscuration can be considered as follows. The narrow Balmer emission lines have a flux ratio of \(5.1^{+2.1}_{-1.3}\)\(((159\pm 15)/(31\pm 7))\) in the line spectrum of the companion galaxy. The narrow Balmer emission lines in the red-shifted components of the double-peaked narrow Balmer lines in the main galaxy have a flux ratio of \(7.5^{+2.3}_{-1.5}\)\(((113\pm 5)/(15\pm 3))\). The similar narrow Balmer decrements in the comp-emissions and in the ext-emissions strongly indicate similar effects of obscuration on the properties of \(R_{ec}\). Therefore, even after dust obscuration is taken into account, the different \(R_{ec}\) values are re-confirmed, providing clues to disfavour two emission regions of a dual core system as the origin of the double-peaked narrow Balmer emission lines in the main galaxy.
## 4 Conclusions
Rotating dual narrow emission line regions in a dual core system can be applied to explain double-peaked narrow emission lines. However, there are more and more reports to support that double-peaked
Table 1: Line parameters of the emission components. Notice: the center wavelength \(\lambda_{0}\) is in units of Å, the line width (second moment) \(\sigma\) in units of Å, and the line flux in units of \(10^{-16}\) erg\(/\)s\(/\)cm\({}^{2}\).
narrow emission lines are not efficient indicators for dual core systems. Therefore, in this manuscript, an interesting and independent method is proposed to test whether the dual core system can be applied to explain double-peaked narrow emission lines, using precious dual core systems in which one nucleus shows double-peaked narrow emission lines while the other nucleus shows single-peaked narrow emission lines. Accepting a dual core system as the origin of the double-peaked narrow emission lines, based on the measured narrow Balmer emissions (\(f_{e,\ \alpha}\), \(f_{e,\ \beta}\)) from the emission regions of the companion galaxy that are covered by the SDSS fiber for the main galaxy and the measured narrow Balmer emissions (\(f_{c,\ \alpha}\), \(f_{c,\ \beta}\)) from the emission regions of the companion galaxy covered by the SDSS fiber for the companion galaxy, it is expected that \(f_{e,\ \alpha}/f_{c,\ \alpha}=f_{e,\ \beta}/f_{c,\ \beta}\). Then, SDSS J2219-0938 (the main galaxy) is selected, due to its double-peaked narrow Balmer emission lines and the single-peaked narrow Balmer emission lines in its companion galaxy. After the narrow emission lines are well measured, the double-peaked narrow Balmer emission lines are confirmed in the main galaxy with a confidence level higher than \(5\sigma\). Moreover, through the measured emission components of the narrow Balmer emission lines in the main galaxy and in the companion galaxy, the flux ratio \(f_{e,\ \alpha}/f_{c,\ \alpha}\) is about 0.82, while the flux ratio \(f_{e,\ \beta}/f_{c,\ \beta}\) is about 0.52. The results indicate that the double-peaked narrow Balmer emission lines in SDSS J2219-0938 are not mainly caused by narrow Balmer emission line regions related to the observed dual core system. A sample of such precious dual core systems will be discussed in the near future to provide further insights on dual core systems applied to explain double-peaked narrow emission lines.
## Acknowledgements
Zhang & Zheng gratefully acknowledge the anonymous referee for giving us constructive comments and suggestions to greatly improve our paper. Zhang gratefully acknowledges the kind grant support from NSFC-12173020. This paper has made use of the data from the SDSS projects, [http://www.sdss3.org/](http://www.sdss3.org/), managed by the Astrophysical Research Consortium for the Participating Institutions of the SDSS-III Collaboration, and the data from MILES library [http://miles.iac.es/](http://miles.iac.es/). This paper has made use of the MPFIT package [https://pages.physics.wisc.edu/~craigm/idl/cmprit.html](https://pages.physics.wisc.edu/~craigm/idl/cmprit.html) and the emcee package [https://emcee.readthedocs.io/en/stable/](https://emcee.readthedocs.io/en/stable/). This research has made use of the NASA/IPAC Extragalactic Database (NED, [http://ned.ipac.caltech.edu](http://ned.ipac.caltech.edu)).
## Data Availability
The data underlying this article will be shared on reasonable request to the corresponding author ([email protected]).
|
2308.14677 | Twin-width of graphs with tree-structured decompositions | The twin-width of a graph measures its distance to co-graphs and generalizes
classical width concepts such as tree-width or rank-width. Since its
introduction in 2020 (Bonnet et. al. 2020), a mass of new results has appeared
relating twin width to group theory, model theory, combinatorial optimization,
and structural graph theory.
We take a detailed look at the interplay between the twin-width of a graph
and the twin-width of its components under tree-structured decompositions: We
prove that the twin-width of a graph is at most twice its strong tree-width,
contrasting nicely with the result of (Bonnet and D\'epr\'es 2022), which
states that twin-width can be exponential in tree-width. Further, we employ the
fundamental concept from structural graph theory of decomposing a graph into
highly connected components, in order to obtain an optimal linear bound on the
twin-width of a graph given the widths of its biconnected components. For
triconnected components we obtain a linear upper bound if we add red edges to
the components indicating the splits which led to the components. Extending
this approach to quasi-4-connectivity, we obtain a quadratic upper bound.
Finally, we investigate how the adhesion of a tree decomposition influences the
twin-width of the decomposed graph. | Irene Heinrich, Simon Raßmann | 2023-08-28T16:13:24Z | http://arxiv.org/abs/2308.14677v3 | # Twin-width of graphs with tree-structured decompositions
###### Abstract
The twin-width of a graph measures its distance to co-graphs and generalizes classical width concepts such as tree-width or rank-width. Since its introduction in 2020 [13, 12], a mass of new results has appeared relating twin width to group theory, model theory, combinatorial optimization, and structural graph theory.
We take a detailed look at the interplay between the twin-width of a graph and the twin-width of its components under tree-structured decompositions: We prove that the twin-width of a graph is at most twice its strong tree-width, contrasting nicely with the result of [7, 6], which states that twin-width can be exponential in tree-width. Further, we employ the fundamental concept from structural graph theory of decomposing a graph into highly connected components, in order to obtain optimal linear bounds on the twin-width of a graph given the widths of its biconnected components. For triconnected components we obtain a linear upper bound if we add red edges to the components indicating the splits which led to the components. Extending this approach to quasi-4-connectivity, we obtain a quadratic upper bound. Finally, we investigate how the adhesion of a tree decomposition influences the twin-width of the decomposed graph.
twin-width, quasi-4-connected components, strong tree-width

## 1 Introduction
In [13, 12] the authors show that twin-width generalizes other width parameters such as rank-width, and, hence also clique-width and tree-width. Furthermore, given a graph \(H\), the class of \(H\)-minor free graphs has bounded twin-width and FO-model checking is FPT on classes of bounded twin-width, see [13, 12]. Many combinatorial problems which are NP-hard in general allow for improved algorithms if the twin-width of the input graph is bounded from above and the graph is given together with a width-minimal contraction sequence [9, 8].
Motivation. To decompose a graph into smaller components and estimate a certain parameter of the original graph from the parameters of its components is an indispensable approach of structural graph theory; it helps in taming branch-and-bound trees as well as in theoretical considerations, because it allows for stronger assumptions (i.e., high connectivity) on the considered graphs. There are various ways to decompose a graph, e.g., bi-, tri-, or quasi-4-connected components, tree decompositions of small adhesion (the maximum cardinality of the intersection of two adjacent bags), modular decomposition, or decomposition into the factors of a graph product.
So far, there is no detailed analysis of the relation between the twin-width of a graph and the twin-width of its biconnected, triconnected, or quasi-4-connected components. The only result towards (\(k\)-)connected components is the basic observation that the twin-width of a graph is the maximum of the twin-widths of its (1-connected) components. While there already exists a strong analysis of the interplay of tree-width and twin-width (cf. [18, 19]), it is still open how twin-width behaves with respect to the adhesion of a given tree decomposition, which can be significantly smaller than the tree-width of a graph (as an example, consider a graph whose biconnected components are large cliques - the adhesion is 1 whereas the tree-width is the maximum clique size). Further, there exist many variants of tree-width, for example strong tree-width [22, 16], for which the interplay with twin-width has not yet been discussed in the literature.
Our results. We prove the following bound on the twin-width of a graph:

**Theorem 1**.: _If \(G\) is a graph of strong tree-width \(k\), then_
\[\operatorname{tww}(G)\leq\frac{3}{2}k+1+\frac{1}{2}(\sqrt{k+\ln k}+\sqrt{k}+2 \ln k).\]
This contrasts strongly with the result of [7, 6] that twin-width can be exponential in tree-width. We further provide a class of graphs whose twin-width asymptotically equals its strong tree-width. Further, we investigate how to bound the twin-width of a graph in terms of the twin-width of its highly connected components, starting with biconnected components.
**Theorem 2**.: _If \(G\) is a graph with biconnected components \(C_{1},C_{2},\ldots,C_{\ell}\), then_
\[\max_{i\in[\ell]}\operatorname{tww}(C_{i})\leq\operatorname{tww}(G)\leq\max _{i\in[\ell]}\operatorname{tww}(C_{i})+2.\]
Next, we consider decompositions into triconnected components:
**Theorem 3**.: _Let \(C_{1},C_{2},\ldots,C_{\ell}\) be the triconnected components of a 2-connected graph \(G\). For \(i\in[\ell]\) we construct a trigraph \(\overline{C}_{i}\) from \(C_{i}\) as follows: all virtual edges of \(C_{i}\) are colored red and all other edges remain black. If \(C_{i}\) contains parallel edges, then we remove all but one of the parallel edges such that the remaining edge is red whenever one of the parallel edges was red. Then_
\[\operatorname{tww}(G)\leq\max\left(8\max_{i\in[\ell]}\operatorname{tww}( \overline{C}_{i})+6,18\right).\]
Similarly clean decompositions into \(k\)-connected graphs with \(k>3\) cannot exist [14, 15]; but we move on one more step and consider the twin-width of a graph with respect to its quasi-4 connected components, introduced by [14, 15].
**Theorem 4**.: _Let \(C_{1},C_{2},\ldots,C_{\ell}\) be the quasi-4-connected components of a 3-connected graph \(G\)._
1. For \(i\in[\ell]\) we construct a trigraph \(\widehat{C}_{i}\) by adding for every 3-separator \(S\) in \(C_{i}\) along which \(G\) was split a vertex \(v_{S}\) which we connect via red edges to all vertices in \(S\). Then \[\operatorname{tww}(G)\leq\max\left(8\max_{i\in[\ell]}\operatorname{tww}( \widehat{C}_{i})+14,70\right).\]
2. For \(i\in[\ell]\), we construct a trigraph \(\overline{C}_{i}\) by coloring all edges in 3-separators in \(C_{i}\) along which \(G\) was split red. Then \[\operatorname{tww}(G)\leq\max\left(4\max_{i\in[\ell]}\left(\operatorname{tww} (\overline{C}_{i})^{2}+\operatorname{tww}(\overline{C}_{i})\right)+14,70 \right).\]
For the general case of tree decompositions of bounded adhesion, we get the following:
**Theorem 5**.: _For every \(k\in\mathbb{N}\) there exist explicit constants \(D_{k}\) and \(D^{\prime}_{k}\) such that for every graph \(G\) with a tree decomposition of adhesion \(k\) and parts \(P_{1},P_{2},\ldots,P_{\ell}\), the following statements are satisfied:_
1. _For each_ \(P_{i}\)_, we construct a trigraph_ \(\widehat{P}_{i}\) _by adding for each adhesion set_ \(S\) _in_ \(P_{i}\) _a new vertex_ \(v_{S}\) _which we connect via red edges to all vertices in_ \(S\)_. Then_ \[\operatorname{tww}(G)\leq 2^{k}\max_{i\in[\ell]}\operatorname{tww}(\widehat{P}_{i })+D_{k}.\]
2. _Assume_ \(k\geq 3\)_. For each_ \(P_{i}\)_, we construct the torso_ \(\overline{P}_{i}\) _by completing every adhesion set in_ \(P_{i}\) _to a red clique. Then_ \[\operatorname{tww}(G)\leq\frac{2^{k}}{(k-1)!}\max_{i\in[\ell]}\operatorname{ tww}(\overline{P}_{i})^{k-1}+D^{\prime}_{k}.\]
Finally, we refine the result of [18, 19], where the authors bound the twin-width of a graph given its tree-width.
**Theorem 6**.: _Let \(G\) be a graph with a tree decomposition of width \(w\) and adhesion \(k\). Then_
\[\operatorname{tww}(G)\leq 3\cdot 2^{k-1}+\max(w-k-2,0).\]
Bounding the red degree of decomposition trees. The underlying structure of all the decompositions that we consider in this paper is a tree. We generalize the optimal contraction sequence (cf. [3, 2]) for trees, which works as follows: choose a root for the tree. If possible, choose two sibling leaves and contract them (which implies a red edge from the new vertex to its parent). Whenever a parent is joined to two of its leaf-children via red edges, these
two children are merged. This ensures that the red degree of any parent throughout the whole sequence never exceeds 2. If there are no sibling leaves, then choose a leaf of highest distance to the root and contract it with its parent. This yields a red edge between the new merged vertex and the former grandparent. Repeat this until we end up with a singleton. We preserve this idea in our proofs to ensure that at no point in time three distinct bag-siblings contribute to the red degree of the vertices in their parent bag.
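As an illustration of this strategy, the following sketch (our own illustrative code, not taken from the cited works) runs the leaf-first contraction order on a small, hypothetical rooted tree, applying the trigraph contraction rule (common black neighbours stay black, all other incident edges turn red) and checking that the red degree never exceeds 2 along the way.

```python
from collections import defaultdict

def contract(black, red, x, y):
    """Contract y into x: common black neighbours stay black, the rest turns red."""
    nbrs_x, nbrs_y = black[x] | red[x], black[y] | red[y]
    new_black = (black[x] & black[y]) - {x, y}
    new_red = ((nbrs_x | nbrs_y) - {x, y}) - new_black
    for z in (nbrs_x | nbrs_y) - {x, y}:          # detach x and y from their neighbours
        black[z].discard(x); black[z].discard(y)
        red[z].discard(x); red[z].discard(y)
    del black[y], red[y]
    black[x], red[x] = new_black, new_red
    for z in new_black:
        black[z].add(x)
    for z in new_red:
        red[z].add(x)

def tree_contraction_demo(edges, root):
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v); adj[v].add(u)
    parent, depth, children = {root: None}, {root: 0}, defaultdict(set)
    stack = [root]
    while stack:                                   # root the tree
        u = stack.pop()
        for w in adj[u]:
            if w not in parent:
                parent[w], depth[w] = u, depth[u] + 1
                children[u].add(w); stack.append(w)

    black = {v: set(adj[v]) for v in parent}
    red = {v: set() for v in parent}
    alive, max_red = set(parent), 0

    while len(alive) > 1:
        pair = None
        for p in alive:                            # prefer two leaf children of one parent
            leaves = [c for c in children[p] if not children[c]]
            if len(leaves) >= 2:
                reds = [c for c in leaves if c in red[p]]
                pair = (p,) + tuple((reds if len(reds) >= 2 else leaves)[:2])
                if len(reds) >= 2:                 # two red leaf children: merge them first
                    break
        if pair:
            p, u, v = pair
            contract(black, red, u, v)             # merge sibling leaves
            children[p].discard(v); alive.discard(v)
        else:                                      # no sibling leaves: merge a deepest leaf up
            c = max((v for v in alive if not children[v] and parent[v] is not None),
                    key=depth.get)
            p = parent[c]
            contract(black, red, p, c)
            children[p].discard(c); alive.discard(c)
        max_red = max(max_red, max(len(red[v]) for v in alive))
    return max_red

# A small hypothetical example tree.
edges = [(0, 1), (0, 2), (1, 3), (1, 4), (2, 5), (5, 6), (5, 7)]
print("maximum red degree observed:", tree_contraction_demo(edges, root=0))  # expected <= 2
```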
Further related work. A standard reference on tree-width is [5]. For the basics on graph connectivity and decomposition, we refer to textbooks on graph theory such as [23]. The twin-width of a graph given the twin-width of its modular decomposition factors (and in particular, also the twin-width given the width of the factors of a lexicographical product) has already been investigated in [11, 10]. In contrast to the linear-time solvable tree-width decision problem [4] (for a fixed \(k\): is the tree-width of the input graph at most \(k\)?), deciding whether the twin-width of a graph is at most 4 is already NP-complete [3, 2]. The twin-width of a graph in terms of its biconnected components has already been considered in [20], where the author obtains a slightly weaker upper bound than Theorem 2.
Organization of the paper. We provide the preliminaries in Section 2. Our results on strong tree-width can be found in Section 3. In Section 4, we prove new bounds on the twin-width of a graph given the twin-widths of its highly connected components, and we generalize our approach to graphs which allow for a tree decomposition of small adhesion.
## 2 Preliminaries
For a natural number \(n\), we denote by \([n]\) the \(n\)-element set \(\{1,\ldots,n\}\). For a set \(A\), we write \(\mathcal{P}(A)\) for the power set of \(A\). For a natural number \(k\leq|A|\), we write \(\binom{A}{k}\) for the set of \(k\)-element subsets of \(A\).
Graphs and trigraphs.All graphs in this paper are finite, undirected and contain no loops. For a graph \(G\), we denote its vertex set by \(V(G)\) and its edge set by \(E(G)\). We write \(|G|\coloneqq|V(G)|\) for the order of \(G\).
A _trigraph_ is an undirected, edge-colored graph \(G\) with disjoint sets \(E(G)\) of _black edges_ and \(R(G)\) of _red edges_. We can interpret every graph as a trigraph by setting \(R(G)=\emptyset\). For a vertex subset \(A\) of a trigraph \(G\), we denote by \(G[A]\) the _subgraph induced on \(A\)_ and by \(G-A\) the subgraph induced on \(V(G)\setminus A\). For a vertex \(v\in V(G)\), we also write \(G-v\) instead of \(G-\{v\}\). If \(G\) is a graph, then the _degree_ of a vertex \(v\in V(G)\) is denoted by \(d_{G}(v)\) (or \(d(v)\) if \(G\) is clear from context). For trigraphs, we write \(\operatorname{red-deg}_{G}(v)\) for the _red degree_ of \(v\), i.e., the degree of \(v\) in the graph \((V(G),R(G))\). We write \(\Delta(G)\) or \(\Delta_{\operatorname{red}}(G)\) for the maximum (red) degree of a (tri-)graph \(G\).
A _multigraph_ is a graph where we allow multiple edges between each pair of vertices.
Twin-width.Let \(G\) be a trigraph and \(x,y\in V(G)\) two distinct, not necessarily adjacent vertices of \(G\). We _contract \(x\) and \(y\)_ by merging the two vertices to a common vertex \(z\), leaving all edges not incident to \(x\) or \(y\) unchanged, connecting \(z\) via a black edge to all common black neighbors of \(x\) and \(y\), and via a red edge to all red neighbors of \(x\) or \(y\) and to all vertices which are connected to precisely one of \(x\) and \(y\). We denote the resulting trigraph by \(G/xy\). A _partial contraction sequence_ of \(G\) is a sequence of trigraphs \((G_{i})_{i\in[k]}\) where \(G_{1}=G\) and \(G_{i+1}\) can be obtained from \(G_{i}\) by contracting two distinct vertices \(x_{i},y_{i}\in V(G_{i})\). By abuse of
notation, we also call the sequence \((x_{i}y_{i})_{i<|G|}\) of contraction pairs a partial contraction sequence. The width of a partial contraction sequence is the maximal red degree of all graphs \(G_{1},\ldots,G_{k}\). If the width of a sequence is at most \(d\), we call it a _\(d\)-contraction sequence_. A _(complete) contraction sequence_ is a partial contraction sequence whose final trigraph is the singleton graph on one vertex. The minimum width over all complete contraction sequences of \(G\) is called the _twin-width_ of \(G\) and is denoted by \(\operatorname{tww}(G)\). We often identify a vertex \(v\in V(G)\) with the vertices in the graphs \(G_{i}\) that \(v\) gets contracted to and sets of vertices with the sets of vertices they get contracted to.
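For a small worked example, consider the path \(P_{3}\) on vertices \(x\), \(z\), \(y\) with black edges \(xz\) and \(zy\). Contracting the two endpoints \(x\) and \(y\) keeps a black edge to their common black neighbor \(z\), so no red edge appears, and the subsequent contraction of the two remaining vertices yields a singleton; hence \(\operatorname{tww}(P_{3})=0\). Contracting \(x\) with the center \(z\) instead would create one red edge to \(y\), since \(y\) is adjacent to \(z\) but not to \(x\).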
Twin-width has many nice structural properties. For example, it is monotone with respect to induced subgraphs: for every induced subgraph \(H\subseteq G\) it holds that \(\operatorname{tww}(H)\leq\operatorname{tww}(G)\). Moreover, the twin-width of a disconnected graph is just the maximum twin-width of its connected components.
Tree decompositions and tree-width.Let \(G\) be a graph. A _tree decomposition_ of \(G\) is a pair \(\mathcal{T}=(T,\{B_{i}\colon i\in V(T)\})\) consisting of a tree \(T\) and a family \((B_{i})_{i\in V(T)}\) of subsets of \(V(G)\), called _bags_ satisfying the following conditions
1. every vertex of \(G\) is contained in some bag,
2. for every vertex \(v\in V(G)\), the set of tree vertices \(i\in V(T)\) such that \(v\in B_{i}\) forms a subtree of \(T\),
3. for every edge \(e\in E(G)\), there exists some bag which contains both endpoints of \(e\).
The subgraphs \(G[B_{i}]\) are called the _parts_ of the tree decomposition. The width of a tree-decomposition is \(\max_{i\in V(T)}|B_{i}|-1\) and the minimum width over all tree decompositions of \(G\) is the _tree-width_ of \(G\) and is denoted by \(\operatorname{tw}(G)\).
For an edge \(ij\in E(T)\), the sets \(B_{i}\cap B_{j}\) are the _adhesion sets_ or _separators_ of \(\mathcal{T}\) and the maximal size of an adhesion set is the _adhesion_ of \(\mathcal{T}\). The graph obtained from a part \(G[B_{i}]\) by completing all adhesion sets \(B_{i}\cap B_{j}\) to cliques is called the _torso_ of \(G[B_{i}]\).
Strong tree-width.Strong tree-width, which is also called tree-partition width, is a graph parameter independently introduced by [22] and [16]. A _strong tree decomposition_ of a graph \(G\) is a tuple \((T,\{B_{i}\colon i\in V(T)\})\) where \(T\) is a tree and \(\{B_{i}\colon i\in V(T)\}\) is a set of pairwise disjoint subsets of \(V(G)\), one for each node of \(T\) such that
1. \(V(G)=\bigcup_{i\in V(T)}B_{i}\) and
2. for every edge \(uv\) of \(G\) there either exists a node \(i\in V(T)\) such that \(\{u,v\}\subseteq B_{i}\) or there exist two adjacent nodes \(i\) and \(j\) in \(T\) with \(u\in B_{i}\) and \(v\in B_{j}\).
The sets \(B_{i}\) are called _bags_ and \(\max_{i\in V(T)}|B_{i}|\) is the _width_ of the decomposition. The minimum width over all strong tree decompositions of \(G\) is the _strong tree-width_\(\operatorname{stw}(G)\) of \(G\).
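As a small sketch (our own illustration, not from the cited works), the two conditions of a strong tree decomposition can be checked mechanically: the bags must partition \(V(G)\), and every edge must live inside one bag or between two bags that are adjacent in \(T\).

```python
def is_strong_tree_decomposition(graph_edges, vertices, tree_edges, bags):
    """bags: dict mapping tree node -> set of graph vertices (pairwise disjoint)."""
    # Vertex condition: the bags partition V(G).
    assignment = {}
    for node, bag in bags.items():
        for v in bag:
            if v in assignment:            # bags must be pairwise disjoint
                return False
            assignment[v] = node
    if set(assignment) != set(vertices):
        return False
    # Edge condition: endpoints lie in one bag or in two adjacent bags.
    adjacent = {frozenset(e) for e in tree_edges}
    for u, v in graph_edges:
        i, j = assignment[u], assignment[v]
        if i != j and frozenset({i, j}) not in adjacent:
            return False
    return True

# Hypothetical example: a 4-cycle a-b-c-d with bags {a,b} and {c,d} on one tree edge.
print(is_strong_tree_decomposition(
    graph_edges=[("a", "b"), ("b", "c"), ("c", "d"), ("d", "a")],
    vertices=["a", "b", "c", "d"],
    tree_edges=[(1, 2)],
    bags={1: {"a", "b"}, 2: {"c", "d"}},
))  # True: a width-2 strong tree decomposition of the 4-cycle
```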
The strong tree-width of a graph is bounded in its tree-width via \(\operatorname{tw}(G)\leq 2\operatorname{stw}(G)-1\), see [24]. In the other direction, there is no bound: the strong-tree width of a graph is unbounded in its tree-width [24]. However, it holds that \(\operatorname{stw}(G)\in O(\Delta(G)\cdot\operatorname{tw}(G))\), see [24]. Thus, for graphs of bounded degree, the two width notions are linearly equivalent.
In general, the strong tree-width is unbounded in the twin-width of a graph. For example, consider a complete graph on \(2n\) vertices. A width-minimal strong tree decomposition of this graph has two bags, each containing \(n\) vertices. However the twin-width of a complete graph is \(0\).
Highly connected components. A _cut vertex_ of a graph \(G\) is a vertex \(v\in V(G)\) such that \(G-v\) contains more connected components than \(G\). A maximal connected subgraph of \(G\) that has no cut vertex is a _biconnected component_ of \(G\). The _block-cut-tree_ of \(G\) is a
bipartite graph where one part is the set of biconnected components of \(G\) and the other part is the set of cut vertices of \(G\) and a biconnected component is joined to a cut vertex precisely if the vertex is contained in the component. This graph is a forest, and even a tree if \(G\) is connected [23]. In terms of tree decomposition this can be rephrased as follows: For every connected graph \(G\) there exists a tree decomposition \(\mathcal{T}\) of \(G\) such that \(\mathcal{T}\) has adhesion at most \(1\), and every part is either \(2\)-connected or a complete graph of order \(2\). Moreover, the set of bags of this tree decomposition is isomorphism-invariant.
With respect to separators of order \(2\), the following result holds (see [17]): For every \(2\)-connected graph, there exists a tree decomposition \(\mathcal{T}\) of \(G\) such that \(\mathcal{T}\) has adhesion at most \(2\), and the torso of every bag is either \(3\)-connected, a cycle, or a complete graph of order \(2\). Moreover, the set of bags of this tree decomposition is isomorphism-invariant.
The _triconnected components of \(G\)_ are multigraphs constructed from the torsos of this tree decomposition. In this work, these multigraphs are not important and we also call the torsos themselves _triconnected components_.
A similarly clean decomposition into \(4\)-connected components arranged in a tree-like fashion does not exist [14, 15]. This motivated Grohe to introduce the notion of quasi-\(4\)-connectivity [14, 15]: A graph \(G\) is _quasi-4-connected_ if it is \(3\)-connected and all \(3\)-separators split off at most a single vertex. That is, for every separator \(S\) of size \(3\), the graph \(G-S\) splits into exactly two connected components, at least one of which consists of a single vertex. The prime example of quasi-\(4\)-connected graphs which are not \(4\)-connected are hexagonal grids. Also for quasi-\(4\)-connectivity there is a tree-like decomposition into components:
[[14, 15]] For every \(3\)-connected graph \(G\), there exists a tree decomposition \(\mathcal{T}\) of \(G\) such that \(\mathcal{T}\) has adhesion at most \(3\), and the torso of every bag is either quasi-\(4\)-connected or of size at most \(4\).
The torsos of this tree decomposition are called _quasi-4-connected components of \(G\)_.
## 3 Twin-width of graphs of bounded strong tree-width
If \(G\) is a graph of strong tree-width \(k\), then
\[\operatorname{tww}(G)\leq\frac{3}{2}k+1+\frac{1}{2}(\sqrt{k+\ln k}+\sqrt{k}+2 \ln k).\]
Proof.: For a graph \(H\) and a vertex subset \(U\subseteq V(H)\) a partial contraction sequence \(s\) of \(H\) is a _\(U\)-contraction sequence_ if only vertices of \(U\) are involved in the contractions in \(s\) and \(s\) is of length \(|U|\), that is, performing all contractions of \(s\) yields a partition of \(V(H)\) where \(U\) forms one part and the rest of the parts are singletons. We denote the minimum width over all \(U\)-contraction sequences of \(H\) by \(\operatorname{tww}_{U}(H)\).
Let \(\mathcal{T}=(T,\{B_{i}\colon i\in V(T)\})\) be a strong tree decomposition of \(G\) of width \(k\). Fix \(r\in V(T)\) and consider \(T\) to be a rooted tree with root \(r\) from now on. If a bag \(B_{i}\) contains only one vertex \(v\), then we set \(v_{i}\coloneqq v\). We label all nodes \(i\) of \(T\) with \(|B_{i}|=1\) as _merged_. All other nodes of \(T\) are labeled as _unmerged_. A node \(p\) of \(T\) is a _leaf-parent_ if all of its children are leaves. If \(B_{i}\) is a bag of \(\mathcal{T}\), then _contracting_\(B_{i}\) means to apply a width-minimal \(B_{i}\)-contraction sequence and then relabel \(i\) as _merged_. After a contraction of two vertices \(u\) and \(v\) to a new vertex \(x\) we _update_ the strong tree decomposition \(\mathcal{T}\), that is, if \(u\) and \(v\) were contained in the same bag, then we simply replace \(u\) and \(v\) by \(x\). If, otherwise, \(u\) and \(v\) are contained in adjacent bags, then we remove \(u\) and \(v\) from its bags and insert \(x\) to the bag which is closer to the root. If this causes an empty bag, we remove the bag as well as the corresponding tree-vertex. Observe that updating preserves the strong tree-width. We claim
that the algorithm Contract merges \(G\) into a single vertex via a contraction sequence of the required width.
First, we check that the algorithm terminates. Observe that the root \(r\) is not part of any of the contractions in the while-loop. In particular, as long as the loop is executed, there exists at least one leaf-parent. In every iteration of the loop at least one of the if-conditions is satisfied and hence, \(|V(G)|\) shrinks with every iteration, which proves that the algorithm terminates with a singleton graph, that is, it provides a contraction sequence.
It remains to bound the width of the sequence. For \(a\in\mathbb{N}\) we set
\[f(a)\coloneqq(a+\sqrt{a+\ln a}+\sqrt{a}+2\ln a)/2.\]
We will exploit the result of [18, 19] that an \(a\)-vertex graph has twin-width at most \(f(a)\).
Let \((G_{i})_{i\leq|G|}\) be the contraction obtained by the algorithm. Fix \(i\in[|G|]\) and \(v\in G_{i}\) and let \(\mathcal{T}_{i}=(T_{i},\mathcal{B}_{i})\) be the strong tree decomposition corresponding to \(G_{i}\) and \(B_{j}\) the bag containing \(v\) in \(\mathcal{T}\).
If \(j\) is neither a leaf, nor a leaf-parent, nor the parent of a leaf-parent in \(T_{i}\), then \(\operatorname{red-deg}(v)=0\).
Assume that \(j\) is a leaf of \(T_{i}\). Then all red edges incident to \(v\) are either internal edges of \(B_{j}\) or join \(v\) with a vertex of \(B_{p}\), where \(p\) is the parent of \(j\) in \(T_{i}\). Since \(\operatorname{stw}(G)\leq k\), there are at most \(k\) red edges of the latter form. Internal red edges of a bag may only arise during the contraction of this leaf-bag in Line 10. Since the corresponding partial contraction sequence is chosen to be width-minimal, the bound of [18, 19] yields \(\operatorname{red-deg}_{G_{i}}(v)\leq k+f(k)\).
Now assume that \(j\) is a leaf-parent in \(T_{i}\). If the bag \(B_{j}\) of \(\mathcal{T}_{i}\) was already contained in \(\mathcal{T}\), then there are no internal red edges in \(B_{j}\), and the only red edges incident to \(v\) lead to the vertices of precisely one leaf-bag, or to the vertices of precisely two leaf-bags one of which is merged. In each of the two cases, the red degree of \(v\) in \(G_{i}\) is bounded by \(k+1\). Otherwise, \(B_{j}\) was obtained during the contraction in Line 6. In this case, \(j\) has precisely one child \(\ell\) in \(T_{i}\) and \(\ell\) is merged. Hence, \(v\) has at most \(k+f(k)+1\) red neighbors.
Finally, assume that \(j\) is neither a leaf nor a leaf-parent but parent of a leaf-parent in \(T_{i}\). Let \(j_{1},\ldots,j_{h}\) be the children of \(j\) in \(T_{i}\). Observe that there are at most two children of \(j\)
say \(j_{1}\) and \(j_{2}\), such that \(v\) is joined to vertices of the corresponding bags, and there are no internal red edges in \(B_{j}\). The only red edges incident to \(v\) arise during the contraction of \(B_{j_{1}}\) or \(B_{j_{2}}\) in Line 6. Since one of the two children is first contracted to a single vertex before any contraction in the other bag happens, the red degree of \(v\) is bounded by \(k+1\).
There exists a family of graphs \((H_{n})_{n\in\mathbb{N}}\) such that \(\lim_{n\to\infty}\frac{\operatorname{tww}(H_{n})}{\operatorname{stw}(H_{n})} \geq 1\).
Proof.: For each \(n\in\mathbb{N}\) let \(H_{n}\) be the \(n\)-th Paley graph. Fix \(n\in\mathbb{N}\). It is known that \(\operatorname{tww}(H_{n})=\frac{|V(H_{n})|-1}{2}\), see [18, 19]. By distributing the vertices of \(H_{n}\) to two bags, one of cardinality \(\frac{|V(H_{n})|+1}{2}\), the other one of cardinality \(\frac{|V(H_{n})|-1}{2}\), we obtain a strong tree decomposition of \(H_{n}\) of width \(\frac{|V(H_{n})|+1}{2}\).
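The construction from the proof is easy to reproduce; the following sketch (plain Python with networkx, helper names are ours) builds the Paley graph for a prime \(q\equiv 1\pmod 4\) and the two bags of the strong tree decomposition used above.

```python
import networkx as nx

def paley_graph(q: int) -> nx.Graph:
    # q must be a prime with q = 1 (mod 4) so that the adjacency relation is symmetric
    squares = {(x * x) % q for x in range(1, q)}
    G = nx.Graph()
    G.add_nodes_from(range(q))
    G.add_edges_from((x, y) for x in range(q) for y in range(x + 1, q)
                     if (x - y) % q in squares)
    return G

q = 13
H = paley_graph(q)
# Two bags as in the proof: a strong tree decomposition of width (q + 1) / 2
bag_1 = set(range((q + 1) // 2))
bag_2 = set(range((q + 1) // 2, q))
print(len(bag_1), len(bag_2))   # 7 6
```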
## 4 Twin-width of graphs with small separators
### Biconnected components
We start our investigation of graphs of small adhesion by proving a bound on the twin-width of graphs in terms of the twin-width of their biconnected components. This proof contains many of the ideas we will generalize later to deal with tri- and quasi-4-connected components as well as general graphs with a tree decomposition of bounded adhesion.
The main obstacle to constructing contraction sequences of a graph from contraction sequences of its biconnected components is that naively contracting one component might increase the red degree of the incident cut vertices in the neighboring components arbitrarily. Thus, we need to find contraction sequences of the biconnected components not involving the incident cut vertices.
Let \(G\) be a trigraph and \(\mathcal{P}\) be a partition of \(G\). Denote by \(G/\mathcal{P}\) the trigraph obtained from \(G\) by contracting each part of \(\mathcal{P}\) into a single vertex. For a vertex \(v\in V(G)\) we denote by \(\mathcal{P}(v)\) the part of \(\mathcal{P}\) that contains \(v\). If \(\mathcal{P}(v)\neq\{v\}\), then we obtain a refined partition \(\mathcal{P}_{v}\) by replacing \(\mathcal{P}(v)\) in \(\mathcal{P}\) by the two parts \(\mathcal{P}(v)\) and \(\{v\}\). Otherwise, we set \(\mathcal{P}_{v}=\mathcal{P}\). Since \(G/\mathcal{P}\) can be obtained from \(G/\mathcal{P}_{v}\) by at most one contraction, and one contraction of a trigraph reduces the maximum red degree by at most \(1\) we have
\[\Delta_{\operatorname{red}}(G/\mathcal{P}_{v})\leq\Delta_{\operatorname{red} }(G/\mathcal{P})+1. \tag{1}\]
For every trigraph \(G\) and every vertex \(v\in V(G)\),
\[\operatorname{tww}_{V(G-v)}(G)\leq\operatorname{tww}(G)+1.\]
Proof.: Let \((\mathcal{P}^{(i)})_{i\in[|G|]}\) be a sequence of partitions corresponding to a width-minimal contraction sequence of \(G\). Further, let \(j\) be the maximal index with \(\{v\}\in\mathcal{P}^{(j)}\). Then \((\mathcal{P}^{(i)})_{i\in[j]}\) is a partial \(\operatorname{tww}(G)\)-contraction sequence which does not involve \(v\), and by (1) the sequence \((\mathcal{P}^{(i)}_{v})_{i\in[|G|]\setminus[j+1]}\) is a partial \((\operatorname{tww}(G)+1)\)-contraction sequence which contracts the resulting trigraph until \(v\) and one further vertex remain. Combining these two sequences yields the claim.
If \(G\) is a graph with biconnected components \(C_{1},C_{2}\ldots,C_{\ell}\), then
\[\max_{i\in[\ell]}\operatorname{tww}(C_{i})\leq\operatorname{tww}(G)\leq\max_ {i\in[\ell]}\operatorname{tww}(C_{i})+2.\]
Proof.: The lower bound follows from the fact that all biconnected components are induced subgraphs of \(G\) together with the monotonicity of twin-width.
For the upper bound, we may assume that \(G\) is connected since the twin-width of a disconnected graph is the maximum twin-width of its connected components [13, 12]. Consider the block-cut-tree of \(G\), i.e., the tree \(T\) whose vertex set is the union of the biconnected components of \(G\) and the cut vertices of \(G\), where every cut vertex is joined to precisely those biconnected components containing it. In particular, the biconnected components and the cut vertices form a bipartition of \(T\).
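For readers who want to experiment, the block-cut-tree is readily available from standard graph libraries; a minimal sketch based on networkx (the helper function and the "kind" attribute are our own naming) is the following.

```python
import networkx as nx

def block_cut_tree(G: nx.Graph) -> nx.Graph:
    # One node per biconnected component (a frozenset of vertices) and one node
    # per cut vertex; a cut vertex is joined to every block containing it.
    T = nx.Graph()
    cuts = set(nx.articulation_points(G))
    blocks = [frozenset(block) for block in nx.biconnected_components(G)]
    T.add_nodes_from(blocks, kind="block")
    T.add_nodes_from(cuts, kind="cut")
    T.add_edges_from((c, B) for B in blocks for c in B & cuts)
    return T
```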
We choose a cut vertex \(r\) as a root of \(T\). For every biconnected component \(C\in V(T)\), we let \(v_{C}\) be the parent of \(C\) in \(T\).
To make our argument simpler, let \(\widehat{G}\) be the graph obtained from \(G\) by joining a new vertex \(r_{v}\) to every vertex \(v\in V(G)\) via a red edge. Similarly, for a biconnected component \(C\), we let \(\widehat{C}\) be the graph obtained from \(C\) by attaching a new vertex \(r_{v}\) to every vertex \(v\) of \(C\). For each cut vertex \(c\), we let \(\widehat{G}_{c}\) be the graph induced by \(\widehat{G}\) on the union of all blocks in the subtree \(T_{c}\) of \(T\) rooted at \(c\) together with all vertices \(r_{v}\) adjacent to these blocks.
We show that \(\operatorname{tww}(\widehat{G})\leq\max_{i\in[\ell]}\operatorname{tww}(C_{i} )+2\). The claim then follows since \(G\) is an induced subgraph of \(\widehat{G}\).
For every biconnected component \(C\) of \(G\),
\[\operatorname{tww}_{V(\widehat{C}-v_{C})}(\widehat{C})\leq\operatorname{tww} (C)+2.\]
Proof of the Claim.: By applying Lemma 10 to \(C\) and \(v_{C}\), we find a \(V(C-v_{C})\)-contraction sequence \(S\) of \(C\) of width at most \(\operatorname{tww}(C)+1\). We show how this contraction sequence can be adapted to also contract the vertices \(r_{v}\) for all cut vertices \(v\) incident to \(C\). Indeed, before every contraction \(vw\) of \(S\), we insert the contraction of \(r_{v}\) and \(r_{w}\). This keeps the invariant that we never contract a vertex from \(C\) with a vertex \(r_{v}\), and further, every vertex of \(C\) is incident to at most one vertex \(r_{v}\) (or a contraction of those vertices). Moreover, the red degree among the vertices \(r_{v}\) also stays bounded by \(2\). The entire partial contraction sequence constructed so far thus has width at most \(\operatorname{tww}(C)+2\).
After applying this sequence, we end up with at most four vertices: \(v_{C}\), \(r_{v_{C}}\), the contraction of \(C-v_{C}\) and the contraction of all vertices \(r_{v}\) for vertices \(v\neq v_{C}\). As \(r_{v_{C}}\) is only connected to \(v_{C}\) and the contraction of all other vertices \(r_{v}\) is not connected to \(v_{C}\), these four vertices form a path on four vertices. Thus, the contraction sequence can be completed with trigraphs of width at most \(2\). \(\lhd\)
Now, consider again the whole graph \(\widehat{G}\) and choose a leaf block \(C\) of \(T\). We can apply the partial contraction sequence given by the previous claim to \(\hat{C}\) in \(\widehat{G}\). Because we never contract \(v_{C}\) with any other vertex, this does not create red edges anywhere besides inside \(\hat{C}\). Thus, it is still a partial \((\operatorname{tww}(C)+2)\)-contraction sequence of \(\widehat{G}\). Moreover, the resulting trigraph is isomorphic to \(\widehat{G}-V(\widehat{C}-v_{C})\), i.e., the graph obtained from \(\widehat{G}\) by just removing the biconnected component \(C\) (but leaving the cut vertex \(v_{C}\)). By iterating this, we can remove all biconnected components one after the other using width at most \(\max_{i\in[\ell]}\operatorname{tww}(C_{i})+2\). Finally, we end up with just two vertices: The root cut vertex \(r\), together with its red neighbor \(r_{r}\), which we can simply contract.
Note that the bounds in Theorem 2 are sharp even on the class of trees: the biconnected components of a tree are just its edges which have twin-width \(0\). As there are trees both of twin-width \(0\) and of twin-width \(2\), both the upper and the lower bound can be obtained.
Let \(\mathcal{C}\) be a graph class closed under taking biconnected components. Then \(\mathcal{C}\) has bounded twin-width if and only if the subclass of \(2\)-connected graphs in \(\mathcal{C}\) has.
Proof.: Since every biconnected component is either \(2\)-connected or a bridge, we can bound the twin-width of every graph via the maximum twin-width of a \(2\)-connected biconnected component.
Moreover, Theorem 2 also reduces the algorithmic problem of computing or approximating the twin-width of a graph to within some factor to the corresponding problem on \(2\)-connected graphs.
Note that in contrast to the case of connected components, where the twin-width of a graph is determined by the twin-widths of its components, the twin-width of a connected graph is not determined by the multiset of its biconnected components. As an example, consider a tree of twin-width \(2\) and a star with the same number of edges of twin-width \(0\). Then both graphs have the same biconnected components, but different twin-width.
### Apices and contractions respecting subsets
To deal with adhesion sets of size at least \(2\), it no longer suffices to find contraction sequences of the parts that just do not contract vertices in the adhesion sets. Indeed, as those vertices can appear in parts corresponding to a subtree of unbounded depth, this could create an unbounded number of red edges incident to vertices in adhesion sets. Instead, we want contraction sequences that create no red edges incident to any vertices of adhesion sets.
For a trigraph \(G\) and a set of vertices \(A\subseteq V(G)\) of red degree \(0\), we say that a partial sequence of \(d\)-contractions \(G=G_{0},G_{1},\ldots,G_{\ell}\)_respects_\(A\) if \(G_{i}[A]=G[A]\) and red-\(\deg_{G_{i}}(a)=0\) for all \(i\leq\ell\) and \(a\in A\). Thus, for every contraction \(xy\) in the sequence, we have \(x,y\notin A\) and \(N(x)\cap A=N(y)\cap A\), which implies that the vertices in \(A\) are not incident to any red edges all along the sequence.
A _complete \(d\)-contraction sequence respecting \(A\)_ is a sequence of \(d\)-contractions that respects \(A\) of maximal length, i.e., one whose resulting trigraph \(G_{\ell}\) does not allow a further contraction respecting \(A\). This is equivalent to no two vertices in \(V(G_{\ell})\setminus A\) having the same neighborhood in \(A\). In particular, a complete contraction sequence respecting \(A\) leaves at most \(2^{|A|}\) vertices besides \(A\).
We write \(\operatorname{tww}(G,A)\) for the minimal \(d\) such that there exists a complete \(d\)-contraction sequence respecting \(A\). For a single vertex \(v\in V(G)\), we also write \(\operatorname{tww}(G,v)\) for \(\operatorname{tww}(G,\{v\})\). Note that \(\operatorname{tww}(G)=\operatorname{tww}(G,\emptyset)\).
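Ignoring the red-degree bookkeeping, the end state of a complete contraction sequence respecting \(A\) is determined by grouping the remaining vertices according to their neighborhoods inside \(A\); the small plain-Python sketch below (names ours) illustrates why at most \(2^{|A|}\) vertices besides \(A\) can remain.

```python
from collections import defaultdict

def classes_respecting(A, neighbours):
    # neighbours: dict mapping each vertex to its set of neighbours.
    # Vertices outside A are grouped by their neighbourhood inside A;
    # a complete contraction sequence respecting A merges each group into
    # a single vertex, so at most 2^|A| groups can remain besides A.
    groups = defaultdict(list)
    for v in neighbours:
        if v not in A:
            groups[frozenset(neighbours[v] & A)].append(v)
    return list(groups.values())
```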
It was proven in [13, 12, Theorem 2] that adding a single apex to a graph of twin-width \(d\) raises the twin-width to at most \(2d+2\). The proof given there readily works in our setting without any modifications.
Let \(G\) be a trigraph, \(v\in V(G)\) a vertex not incident to any red edges and \(A\subseteq V(G)\setminus\{v\}\) a set of vertices. Then
\[\operatorname{tww}(G,A\cup\{v\})\leq 2\operatorname{tww}(G-v,A)+2.\]
Let \(G\) be a trigraph and \(A\subseteq V(G)\) a subset of vertices with \(\operatorname{red-deg}(a)=0\) for all vertices \(a\in A\). Then
\[\operatorname{tww}(G,A)\leq 2^{|A|}\operatorname{tww}(G)+2^{|A|+1}-2.\]
Proof.: We proceed by induction on \(|A|\). For \(|A|=0\), the claim is immediate. Thus, write \(A=A_{0}\mathbin{\dot{\cup}}\{a\}\) and assume the claim is true for \(A_{0}\). Theorem 4 and the induction hypothesis
yield
\[\operatorname{tww}(G,A)=\operatorname{tww}(G,A_{0}\cup\{a\}) \leq 2\operatorname{tww}(G-a,A_{0})+2\] \[\leq 2\left(2^{|A_{0}|}\operatorname{tww}(G-A)+2^{|A_{0}|+1}-2\right)+2\] \[=2^{|A|}\operatorname{tww}(G-A)+2^{|A|+1}-2\] \[\leq 2^{|A|}\operatorname{tww}(G)+2^{|A|+1}-2.\qed\]
### Tree decompositions of small adhesion
We are now ready to generalize the linear bound on the twin-width of a graph in terms of its biconnected components to allow for larger separators of bounded size. This is most easily expressed in terms of tree decompositions of bounded adhesion.
Throughout the following two sections, let \(G\) be a graph and \(\mathcal{T}=((T,r),\{B_{t}\colon t\in V(T)\})\) a rooted tree decomposition with adhesion \(k\geq 1\).
For a vertex \(t\in V(T)\), we write \(P_{t}\coloneqq G[B_{t}]\) for the part associated to \(t\). For a vertex \(t\in V(T)\) with parent \(s\in V(T)\) we write \(S_{t}\coloneqq B_{t}\cap B_{s}\) and call \(S_{t}\) the _parent separator_ of \(P_{t}\) or a _child separator_ of \(P_{s}\). Moreover, we set \(S_{r}\coloneqq\emptyset\) to be the _root separator_. For a tree vertex \(t\in V(T)\), we write \(T_{t}\) for the subtree of \(T\) with root \(t\), \(G_{t}\coloneqq G[\bigcup_{s\in V(T_{t})}B_{s}]\) for the corresponding subgraph and \(\mathcal{T}_{t}\) for the corresponding tree decomposition of \(G_{t}\).
We can assume w.l.o.g. that every two vertices \(s,t\in V(T)\) with \(S_{s}=S_{t}\) are siblings. Indeed, if they are not, let \(s^{\prime}\) be a highest vertex in the tree with parent separator \(S_{s}\) and construct another tree decomposition by attaching all vertices \(t\) with \(S_{t}=S_{s}\) directly to the parent of \(s^{\prime}\) instead of their old parent. By repeating this procedure if necessary, we obtain the required property.
For a vertex \(t\in T\) with children \(c_{1},\ldots,c_{\ell}\), we set \(N_{c_{i}}\coloneqq\{N(v)\cap S_{c_{i}}\colon v\in V(G_{c_{i}})\setminus S_{c_ {i}}\}\) to be the set of (possibly empty) neighborhoods that vertices in \(G_{c_{i}}-S_{c_{i}}\) have in the separator \(S_{c_{i}}\) (and thus in \(P_{t}\)). We now define a trigraph \(\widetilde{P}_{t}\) with vertex set
\[V(\widetilde{P}_{t})\coloneqq V(P_{t})\mathbin{\dot{\cup}}\{s_{M}^{c_{i}} \colon i\in[\ell],M\in N_{c_{i}}\}.\]
We will often abuse notation and also denote the set \(\{s_{M}^{c_{i}}\colon M\in N_{c_{i}}\}\) by \(N_{c_{i}}\).
We define the edge set of \(\widetilde{P}_{t}\) such that
1. \(\widetilde{P}_{t}[V(P_{t})]=P_{t}\),
2. \(\widetilde{P}_{t}[N_{c_{i}}]\) is a red clique for every \(i\),
3. \(s_{M}^{c_{i}}\) is connected via black edges to all vertices in \(M\), and there are no further red or black edges.
Note that in \(\widetilde{P}_{t}\), there are no red edges incident to any vertices in \(P_{t}\) and thus in particular not to any vertices in \(S_{t}\). A drawing of the gadget attached to \(S_{c_{i}}\) in \(\widetilde{P}_{t}\), in comparison with the simpler gadgets we will reduce to later, can be seen in Figure 1.
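For illustration only, the following Python sketch (using networkx; the helper name and the edge attribute "color" are our own conventions for distinguishing red from black edges) attaches the gadget of a single child separator: it collects the distinct neighborhoods \(N_{c_{i}}\), adds one vertex per neighborhood with black edges into the separator, and makes these new vertices a red clique. Starting from \(P_{t}\) itself, with all its edges black, and applying this once per child yields \(\widetilde{P}_{t}\).

```python
import networkx as nx
from itertools import combinations

def add_separator_gadget(Pt, G, child_vertices, separator, tag):
    # Pt: the trigraph under construction; edges carry a "color" attribute.
    # The distinct (possibly empty) neighbourhoods of G_c - S_c inside S_c:
    nbhds = {frozenset(set(G[v]) & set(separator))
             for v in child_vertices if v not in separator}
    gadget = [(tag, M) for M in nbhds]           # one vertex s_M per neighbourhood
    Pt.add_nodes_from(gadget)
    for t_, M in gadget:
        # black edges from s_M into its neighbourhood M
        Pt.add_edges_from((((t_, M), u) for u in M), color="black")
    # the gadget vertices of this child form a red clique
    Pt.add_edges_from(((a, b) for a, b in combinations(gadget, 2)), color="red")
    return Pt
```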
Let \(G\) and \(\mathcal{T}\) be as above. For every \(t\in V(T)\), it holds that
\[\operatorname{tww}(G_{t},S_{t})\leq\max_{s\in V(T_{t})}\operatorname{tww}( \widetilde{P}_{s},S_{s}).\]
In particular, \(\operatorname{tww}(G)\leq\max_{s\in V(T)}\operatorname{tww}(\widetilde{P}_{s },S_{s})\leq 2^{k}\max_{s\in V(T)}\operatorname{tww}(\widetilde{P}_{s})+2^{k+1}-2\).
Proof.: We proceed by induction on the height of \(t\) in the tree \(T\). If \(t\) is a leaf, then \(G_{t}=P_{t}=\widetilde{P}_{t}\) and we can use a contraction sequence of \(\widetilde{P}_{t}\) respecting \(S_{t}\).
Thus, assume that \(t\) is not a leaf and assume the claim is true for all vertices of \(T\) below \(t\). Let \(c_{1},\ldots,c_{\ell}\) be the children of \(t\) in \(T\). By the induction hypothesis,
\[\operatorname{tww}(G_{c_{i}},S_{c_{i}})\leq\max_{s\in V(T_{c_{i}})} \operatorname{tww}(\widetilde{P}_{s},S_{s})\leq\max_{s\in V(T_{t})} \operatorname{tww}(\widetilde{P}_{s},S_{s}).\]
Thus, we find complete contraction sequences of all \(G_{c_{i}}\) respecting \(S_{c_{i}}\) within the required width bounds. Because the graphs \(G_{c_{i}}\) and \(G_{c_{j}}\) for distinct \(i\) and \(j\) only intersect in \(S_{c_{i}}\cap S_{c_{j}}\), which are not contracted, we do not create red edges between the different graphs \(G_{c_{i}}\) nor between these graphs and their parent separators. Thus, we can combine these contraction sequences without exceeding our width bounds. This results in the graph \(\widetilde{P}_{t}\), where possibly some red edges are still black. We can thus in a final step apply a contraction sequence of \(\widetilde{P}_{t}\) respecting \(S_{t}\).
The second claim follows by applying the first claim to \(t=r\) and then applying Corollary 15.
If \(\mathcal{T}\) has bounded width, we can proceed as in [18, 19, Lemma 3.1] to bound the twin-width of the graphs \(\widetilde{P}_{t}\) and thus the twin-width of \(G\):
Let \(G\) and \(\mathcal{T}\) be as above and additionally assume that \(\mathcal{T}\) has width at most \(w\). For every \(t\in V(T)\), it holds that
\[\operatorname{tww}(\widetilde{P}_{t},S_{t})\leq 3\cdot 2^{k-1}+\max(w-k-2,0).\]
Proof.: We first note that the red degree of \(\widetilde{P}_{t}\) itself is bounded by \(2^{k}-1\).
Now, let \(c_{1},\ldots,c_{\ell}\) be the (possibly empty) list of children of \(t\) in \(T\). We first find a contraction sequence of \(\bigcup_{i=1}^{\ell}N_{c_{i}}\) respecting \(S_{t}\). For this, we argue by induction that \(\bigcup_{i=1}^{j-1}N_{c_{i}}\) can be contracted while preserving the required width. This claim is trivial for \(j=1\). Hence, assume we have already contracted \(\bigcup_{i=1}^{j-1}N_{c_{i}}\) to a set \(B_{j-1}\) of size at most \(2^{|S_{t}|}\). The vertices of \(B_{j-1}\) may be connected via red edges to vertices in \(B_{j-1}\) itself and in \(P_{t}\setminus S_{t}\). Thus, the red degree of vertices in \(B_{j-1}\) is bounded by
\[|B_{j-1}|-1+|P_{t}|-|S_{t}|\leq 2^{|S_{t}|}+|P_{t}|-|S_{t}|-1\leq 2^{k}+w-k-1,\]
while the red degree of vertices in \(P_{t}\setminus S_{t}\) is bounded by \(|B_{j}|\leq 2^{|S_{t}|}\leq 2^{k}\).
Figure 1: A separator \(S_{t^{\prime}}\) on three (square) vertices together with the three versions of gadgets we attach to it. Dashed edges represent either edges or non-edges. In \(\widetilde{P}_{t}\), we add a red clique consisting of one vertex for every neighborhood of vertices in \(G_{t^{\prime}}-S_{t^{\prime}}\) in \(S_{t^{\prime}}\). In \(\widehat{P}_{t}\), we only add a single vertex with red edges to all vertices in \(S_{t^{\prime}}\). In \(\overline{P}_{t}\), we add no new vertices but complete all child separators to red cliques.
Now, we first apply a maximal contraction sequence of \(N_{c_{j}}\) respecting \(S_{t}\), resulting in a quotient \(\bar{N}_{c_{j}}\). Because of our assumption on the tree decomposition \(\mathcal{T}\), we know that \(S_{t}\neq S_{c_{j}}\) for all \(j\in[\ell]\). In particular, this implies that \(|S_{c_{j}}\cap S_{t}|<k\), and thus \(|\bar{N}_{c_{j}}|\leq 2^{k-1}\).
During this contraction sequence, there can appear red edges between the contracted vertices of \(N_{c_{j}}\) and vertices in \(P_{t}\setminus S_{t}\). The vertices of \(N_{c_{j}}\) thus have red degree bounded by \(|N_{c_{j}}|-1+|P_{t}|-|S_{t}|\leq 2^{k}+w-k-1\). Every red neighbor of vertices in \(P_{t}\setminus S_{t}\) in a quotient of \(N_{c_{j}}\) must be the contraction of at least two vertices of \(N_{c_{j}}\). Thus, the red degree of these vertices is bounded by
\[|B_{j-1}|+|N_{c_{j}}|/2\leq 2^{k}+2^{k-1}=3\cdot 2^{k-1}.\]
Next, we contract vertices from \(B_{j-1}\) and \(\bar{N}_{c_{j}}\) which have equal neighborhoods in \(S_{t}\). As our bounds already allow every vertex in \(P_{t}\setminus S_{t}\) to be connected via red edges to all of \(B_{j-1}\cup\bar{N}_{c_{j}}\), it suffices to argue that this keeps the red degree of vertices in \(B_{j-1}\cup\bar{N}_{c_{j}}\) within our bounds. But this set has size at most \(3\cdot 2^{k-1}\). Hence, after one contraction, the red degree is bounded by
\[|B_{j-1}|+|\bar{N}_{c_{j}}|-2+|P_{t}|-|S_{t}|\leq 3\cdot 2^{k-1}+w-k-2.\]
We have now successfully contracted \(N_{c_{j}}\) into \(B_{j-1}\) while keeping the red degree bounded by
\[\max(2^{k}+w-k-1,2^{k},3\cdot 2^{k-1},3\cdot 2^{k-1}+w-k-2)=3\cdot 2^{k-1}+ \max(w-k-2,0).\]
By repeating this procedure for all \(j\in[\ell]\), we find a contraction sequence of \(\bigcup_{i=1}^{\ell}N_{c_{i}}\) respecting \(S_{t}\) within this width. The resulting graph thus consists of \(S_{t}\), the vertices of \(P_{t}\setminus S_{t}\) and the vertices from \(B_{\ell}\). In total, these are at most \(2^{|S_{t}|}+|P_{t}|-|S_{t}|\leq 2^{k}+w-k\) vertices besides those in \(S_{t}\). These can further be contracted while keeping the red degree bounded by
\[2^{k}+w-k-1\leq 3\cdot 2^{k-1}+w-k-2.\]
In total, our contraction sequence thus has width at most \(3\cdot 2^{k-1}+\max(w-k-2,0)\), proving the claim.
By combining Lemma 17 with Lemma 16, we obtain a general bound on the twin-width of graphs admitting a tree decomposition of bounded width and adhesion:
Let \(G\) be a graph with a tree decomposition of width \(w\) and adhesion \(k\). Then
\[\operatorname{tww}(G)\leq 3\cdot 2^{k-1}+\max(w-k-2,0).\]
This upper bound sharpens the bound given in [18, 19] by making explicit the dependence on the adhesion of the tree decomposition. Our bound shows that, while the twin-width in general can be exponential in the tree-width [7, 6], the exponential dependence comes from the adhesion of the tree decomposition and not from the width itself.
Moreover, our bound is asymptotically sharp. As already mentioned, it is known that there are graphs whose twin-width is exponential in the adhesion of some tree decomposition [7, 6]. By adding into some bag a Paley graph whose twin-width is linear in its size [1], we also achieve asymptotic sharpness in the linear width term.
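In concrete numbers, the bound of the preceding theorem is easy to evaluate; the trivial helper below (our naming) makes the interplay of width and adhesion explicit.

```python
def tww_bound(width: int, adhesion: int) -> int:
    # tww(G) <= 3 * 2^(k-1) + max(w - k - 2, 0) for a tree decomposition
    # of width w and adhesion k
    w, k = width, adhesion
    return 3 * 2 ** (k - 1) + max(w - k - 2, 0)

print(tww_bound(width=10, adhesion=2), tww_bound(width=10, adhesion=5))  # 12 51
```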
### Simplifying the parts
Before we apply this general lemma to the special case of the tree of bi-, tri- or quasi-4-connected components, we show that we can simplify the gadgets attached in the graphs \(\widetilde{P}\) to all separators while raising the twin-width by at most a constant factor.
In a first step, we replace the sets \(N_{c_{i}}\) from the definition of the parts \(\widetilde{P}_{t}\) by a single common red neighbor for every separator. For every vertex \(t\in V(T)\) with children \(c_{1},\ldots,c_{\ell}\), we define the trigraph \(\widehat{P}_{t}\) as follows: we set \(\mathcal{S}(t)\coloneqq\{S_{c_{i}}\colon i\in[\ell],S_{c_{i}}\not\subsetneq S_{c_{j}}\text{ for all }j\in[\ell]\}\) to be the set of subset-maximal child separators of \(P_{t}\). Now, we take a collection of fresh vertices \(V_{S}\coloneqq\{v_{S}\colon S\in\mathcal{S}(t)\}\) and set
\[V(\widehat{P}_{t})\coloneqq V(P_{t})\mathbin{\dot{\cup}}V_{S}.\]
The subgraph induced by \(\widehat{P}_{t}\) on \(V(P_{t})\) is just \(P_{t}\) itself. The vertex \(v_{S}\) is connected via red edges to all vertices in \(S\) and has no further neighbors. A drawing of the gadget attached to \(S_{c_{i}}\) in \(\widehat{P}_{t}\) can be found in Figure 1.
Let \(G\) and \(\mathcal{T}\) be as before. Then for every \(t\in V(T)\), it holds that
\[\operatorname{tww}(\widetilde{P}_{t},S_{t})\leq\max(2^{k}\operatorname{tww}( \widehat{P}_{t})+2^{k+1}-2,4^{k}+2^{k}-2).\]
In particular, \(\operatorname{tww}(G)\leq\max(2^{k}\max_{t\in V(T)}\operatorname{tww}( \widehat{P}_{t})+2^{k+1}-2,4^{k}+2^{k}-2)\).
Proof.: Consider a separator \(S\in\mathcal{S}(t)\) in the graph \(\widetilde{P}\) and assume \(t\) has more than one child \(c\) with \(S_{c}\subseteq S\). If \(c\) and \(c^{\prime}\) are such children, we can contract vertices from \(N_{c}\) and \(N_{c^{\prime}}\) with the same neighborhood in \(S\) as long as these exist. By doing this for all children whose parent separator is contained in \(S\), we can reduce to the case that there is only a single such child using a contraction sequence of width at most \(2^{k+1}-2\). Thus, in the following assume that \(S_{c}\not\subseteq S_{c^{\prime}}\) for every two distinct children \(c\) and \(c^{\prime}\) of \(t\).
By Corollary 3, it holds that \(\operatorname{tww}(\widetilde{P}_{t},S_{t})\leq 2^{k}\operatorname{tww}( \widetilde{P}_{t})+2^{k+1}-2\). Thus, we want to bound \(\operatorname{tww}(\widetilde{P}_{t})\).
For this, let \(c_{1},\ldots,c_{\ell}\) be the children of \(t\) and choose a vertex \(x_{i}\in N_{c_{i}}\) for every \(i\in[\ell]\). We show that we can contract all vertices of \(N_{c_{i}}\) into \(x_{i}\) one after the other in such a way that the red degree of \(x_{i}\) stays bounded by \(2^{k}-1\) while we do not create red edges between vertices of \(P\) and vertices of \(N_{c_{i}}\) besides \(x_{i}\).
Indeed, let \(S_{c_{i}}=\{s_{1},\ldots,s_{k^{\prime}}\}\). If \(k^{\prime}<k\), then we can contract the vertices of \(N_{c_{i}}\) into \(x_{i}\) in an arbitrary way while keeping the red degree bounded by \(2^{k^{\prime}}+k^{\prime}\leq 2^{k}-1\). Hence, assume \(k^{\prime}=k\). Now, let \(M\coloneqq N(x_{i})\cap S_{c_{i}}\) be the neighborhood of \(x_{i}\) in the separator \(S_{c_{i}}\) and define \(M_{j}\coloneqq M\bigtriangleup\{s_{1},\ldots,s_{j}\}\). For all \(j\in[k]\), there exists at most one vertex \(y_{j}\coloneqq s_{M_{j}}^{c_{i}}\in N_{c_{i}}\) such that \(N(y_{j})\cap S_{c_{i}}=M_{j}\). We first show that we can contract the vertices \(y_{1},\ldots,y_{k}\) into \(x_{i}\) one after the other without exceeding our bound of \(2^{k}-1\). For this, we note that after having contracted \(y_{1},\ldots,y_{j}\) into \(x_{i}\), the remaining red neighbors of \(x_{i}\) are among \(\{s_{1},\ldots,s_{j}\}\cup(N_{c_{i}}\setminus\{x_{i},y_{1},\ldots,y_{j}\})\). Now, whether or not \(y_{j}\) exists, the exclusion of \(y_{j}\) removes one of the possible \(2^{|S_{c_{i}}|}\) many vertices in \(N_{c_{i}}\). Thus, \(x_{i}\) has at most \((2^{k}-1)\)-many red neighbors.
After having merged \(y_{1},\ldots,y_{k}\) into \(x_{i}\), this possible neighborhood includes all the vertices in \(S_{c_{i}}\). Thus, we can now contract the remaining vertices of \(N_{c_{i}}\) in an arbitrary way without exceeding the bound of \(2^{k}-1\) on the red degree of \(x_{i}\).
By applying this procedure for all \(i\in[\ell]\), we find a contraction sequence which contracts \(\widetilde{P}_{t}\) to \(\widehat{P}_{t}\) using width at most \(2^{k}-1\). Thus, we have
\[\operatorname{tww}(\widetilde{P}_{t})\leq\max(2^{k}-1,\operatorname{tww}( \widehat{P}_{t})).\]
In total, we thus have
\[\operatorname{tww}(\widetilde{P}_{t},S_{t}) \leq\max\left(2^{k+1}-2,2^{k}\operatorname{tww}(\widetilde{P}_{t})+2^{k+1}-2\right)\] \[\leq\max\left(2^{k+1}-2,2^{k}\max(2^{k}-1,\operatorname{tww}(\widehat{P}_{t}))+2^{k+1}-2\right)\] \[=\max(2^{k+1}-2,4^{k}+2^{k}-2,2^{k}\operatorname{tww}(\widehat{P}_{t})+2^{k+1}-2)\] \[=\max(4^{k}+2^{k}-2,2^{k}\operatorname{tww}(\widehat{P}_{t})+2^{k+1}-2).\]
The second claim follows from Lemma 16 together with the observation that \(\operatorname{tww}(G)=\operatorname{tww}(G,S_{r})\) for the root separator \(S_{r}=\emptyset\) of \(\mathcal{T}\).
Next, we want to define a version \(\overline{P}_{t}\) of the parts which does not need extra vertices in \(P_{t}\) but instead marks the separators via red cliques. Indeed, let \(\overline{P}_{t}\) be the trigraph obtained from \(P_{t}\) by completing each of the sets \(S\in\mathcal{S}_{t}\) to a red clique. Thus, the underlying graphs of the trigraphs \(\overline{P}_{t}\) are just the _torsos_ of the tree decomposition. We thus call the graphs \(\overline{P}_{t}\) the _red torsos_ of the tree decomposition \(\mathcal{T}\).
In order to obtain a bound on the twin-width of \(G\) in terms of the twin-width of the torsos \(\overline{P}_{t}\), we need one combinatorial lemma, which is a variant of Sperner's theorem [21].
Let \(\mathcal{F}\subseteq\mathcal{P}([n])\) be a family of subsets of \([n]\) such that every set has size at most \(k\) and no set is contained in another.
1. If \(k\leq\lfloor n/2\rfloor\), then \(|\mathcal{F}|\leq\binom{n}{k}\).
2. If \(k>\lfloor n/2\rfloor\), then \(|\mathcal{F}|\leq\binom{n}{\lfloor n/2\rfloor}\leq\binom{2k-1}{k}\).
Proof.: By the LYM-inequality [21], it holds that
\[\sum_{A\in\mathcal{F}}\frac{1}{\binom{n}{|A|}}\leq 1.\]
Furthermore, we also know that
\[\sum_{A\in\mathcal{F}}\frac{1}{\binom{n}{|A|}}\geq|\mathcal{F}|\cdot\frac{1}{\max_{i\leq k}\binom{n}{i}}=\begin{cases}\frac{|\mathcal{F}|}{\binom{n}{k}}&\text{if }k\leq\lfloor n/2\rfloor,\\ \frac{|\mathcal{F}|}{\binom{n}{\lfloor n/2\rfloor}}&\text{if }k>\lfloor n/2\rfloor.\end{cases}\]
Combining these two inequalities yields everything but the very last inequality of the claim. This inequality follows from the fact that when \(k>\lfloor n/2\rfloor\), then we also have \(k>n/2\) and thus \(n\leq 2k-1\).
Let \(H\) be a graph, \(\mathcal{F}\subseteq\mathcal{P}(V(H))\) and \(k\geq 2\) such that
1. every set in \(\mathcal{F}\) contains at most \(k\) vertices,
2. no two sets in \(\mathcal{F}\) are contained in each other, and
3. for each \(A\in\mathcal{F}\), the graph \(H[A]\) is a clique.
Then every \(x\in V(H)\) is contained in at most \(\max\left(\binom{\Delta(H)}{k-1},\binom{2k-3}{k-1}\right)\) sets of \(\mathcal{F}\).
Proof.: Consider a vertex \(x\in V(H)\). If \(\{x\}\in\mathcal{F}\), then \(x\) cannot be contained in any further sets of \(\mathcal{F}\), and is thus only contained in \(1\leq\binom{2k-3}{k-1}\) sets.
Thus, assume that \(\{x\}\notin\mathcal{F}\) and consider the family \(\mathcal{F}_{-x}\coloneqq\{A\setminus\{x\}\colon A\in\mathcal{F},x\in A\}\), whose cardinality is the number we want to bound. All sets in \(\mathcal{F}_{-x}\) are subsets of \(N_{H}(x)\), have size at most \(k-1\) and do not contain any other set in \(\mathcal{F}_{-x}\). Thus, Lemma 19 yields that
\[|\mathcal{F}_{-x}|\leq\max\left(\binom{|N_{H}(x)|}{k-1},\binom{2k-3}{k-1} \right)\leq\max\left(\binom{\Delta(H)}{k-1},\binom{2k-3}{k-1}\right).\qed\]
**Lemma 21**: _For every \(t\in V(T)\), it holds that_
\[\operatorname{tww}(\widehat{P}_{t})\leq\max\left(k+1,\operatorname{tww}( \overline{P}_{t})+\binom{\operatorname{tww}(\overline{P}_{t})}{k-1}, \operatorname{tww}(\overline{P}_{t})+\binom{2k-3}{k-1}\right).\]
_In particular,_
\[\operatorname{tww}(G)\leq\max\left(\begin{array}{l}2^{k}\max_{t\in V(T)}\left(\operatorname{tww}(\overline{P}_{t})+\binom{\operatorname{tww}(\overline{P}_{t})}{k-1}\right)+2^{k+1}-2,\\ 2^{k}\max_{t\in V(T)}\operatorname{tww}(\overline{P}_{t})+2^{k}\binom{2k-3}{k-1}+2^{k+1}-2,\\ 4^{k}+2^{k}-2\end{array}\right)\]
Proof.: For \(k=1\) the claim is true. Thus, we assume in the following that \(k\geq 2\). Let \((x_{i}y_{i})_{i<|\overline{P}_{t}|}\) be a minimal contraction sequence of \(\overline{P}_{t}\), which we can also interpret as a contraction sequence of \(P_{t}\). We want to construct from the sequence \((x_{i}y_{i})_{i<|\overline{P}_{t}|}\) a contraction sequence of \(\widehat{P}_{t}\), which can be found in Algorithm 2.
```
1 for \(i<|\overline{P}_{t}|\) do
2     while there exist distinct vertices \(v_{S}\) and \(v_{S^{\prime}}\) in the quotient of \(V_{S}\) such that \(N(v_{S})/x_{i}y_{i}\subseteq N(v_{S^{\prime}})/x_{i}y_{i}\) do
3         contract \(v_{S}\) and \(v_{S^{\prime}}\)
4     contract \(x_{i}\) and \(y_{i}\)
5 contract the remaining two vertices
```
**Algorithm 2** Contract\((\widehat{P}_{t},\,(x_{i}y_{i})_{i<|\overline{P}_{t}|})\)
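A schematic Python version of Algorithm 2 is sketched below (all names are ours); it tracks only the neighborhoods of the gadget vertices \(v_{S}\) inside the current quotient of \(P_{t}\), which is the bookkeeping the correctness argument below relies on, and does not recompute red degrees.

```python
def contract_gadgets(separator_nbhds, contractions):
    # separator_nbhds: for each gadget vertex v_S, its neighbourhood inside P_t
    # contractions: the sequence (x_i, y_i); y_i is merged into x_i (Line 4)
    nbhds = [set(N) for N in separator_nbhds]
    for x, y in contractions:
        def quot(N, x=x, y=y):
            # neighbourhood after identifying x_i and y_i
            return {x if v == y else v for v in N}
        # Lines 2-3: merge gadget vertices with comparable quotient neighbourhoods
        merged = True
        while merged:
            merged = False
            for i in range(len(nbhds)):
                for j in range(len(nbhds)):
                    if i != j and quot(nbhds[i]) <= quot(nbhds[j]):
                        nbhds[j] |= nbhds[i]   # contracted vertex keeps the union
                        del nbhds[i]
                        merged = True
                        break
                if merged:
                    break
        # Line 4: perform the contraction x_i y_i inside P_t
        nbhds = [quot(N) for N in nbhds]
    return nbhds   # Line 5 contracts the two remaining vertices
```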
To see that this contraction sequence does indeed contract \(\widehat{P}_{t}\) to a single vertex, note that after having exited the for-loop, we have applied all contractions \(x_{i}y_{i}\), which means that \(P_{t}\) was contracted to a single vertex. But then each two vertices \(v_{S}\) and \(v_{S^{\prime}}\) in the quotient of \(V_{S}\) have identical neighborhood in \(V(P_{t})\), which means that they were contracted before.
Now, in order to bound the width of this sequence, we first argue that we preserve the loop-invariant that at the beginning of Line 2, no two vertices \(v_{S}\) and \(v_{S^{\prime}}\) have neighborhoods that contain each other. In the uncontracted graph \(\widehat{P}_{t}\), this is true by construction, as we only added a vertex \(v_{S}\) for every subset-maximal child separator \(S\). If the invariant is true at the start of the \(i\)-th iteration, then the process in the while-loop precisely ensures that the property is also true at the start of the \((i+1)\)-th iteration.
Next, we argue that the red-degree of all vertices \(v_{S}\) is bounded by \(k\) at the beginning of each iteration and bounded by \(k+1\) during the whole sequence. Moreover, at the start of each iteration, the neighborhood of each vertex \(v_{S}\) is a red clique. Again, this is true by construction in the uncontracted graph \(\widehat{P}_{t}\). If this invariant is true at the beginning of the \(i\)-th iteration, we show that it is also true at the beginning of the \((i+1)\)-th iteration. For this, we note that whenever we contract a vertex \(v_{S}\) into a vertex \(v_{S^{\prime}}\) because \(N_{v_{S}}/x_{i}y_{i}\subseteq N_{v_{S^{\prime}}}/x_{i}y_{i}\), then both neighborhoods contain either \(x_{i}\) or \(y_{i}\) (or both) and both \(x_{i}\) and \(y_{i}\) are contained in one of the two neighborhoods. Thus, the neighborhood of the contracted vertex contains both \(x_{i}\) and \(y_{i}\) and its size is bounded by \(|N_{v_{S^{\prime}}}\cup\{x_{i},y_{i}\}|\leq k+1\). Moreover, when contracting the vertices \(x_{i}\) and \(y_{i}\) in \(N_{v_{S^{\prime}}}\cup\{x_{i},y_{i}\}\), then we get a red clique.
It remains to bound the red degree of the vertices in (the quotients of) \(P_{t}\). For this, we note that during the while-loop of Algorithm 2, the red degree of vertices in \(P_{t}\) can only decrease. Thus, it suffices to bound the red degree at the start of each iteration. The red
neighbors of vertices in \(P_{t}\) come in two sorts: red neighbors in \(P_{t}\) itself, stemming from the contraction sequence \(\left(x_{i}y_{i}\right)_{i<|\overline{P}_{t}|}\), and red neighbors among the vertices in \(V_{S}\).
The red degree that a vertex \(x\) in \(P_{t}\) can obtain within \(P_{t}\) is bounded by the width of the sequence \(\left(x_{i}y_{i}\right)_{i<|\overline{P}_{t}|}\) which is \(\operatorname{tww}(\overline{P}_{t})\). To bound the red degree that a vertex \(x\) in \(P_{t}\) can get from vertices in \(V_{S}\), let \(Q_{i}\) be the partially contracted graph that we have at the start of the \(i\)-th iteration and set \(\mathcal{F}_{i}\coloneqq\{N_{Q}(v_{S})\colon v_{S}\in V_{S}\}\). By Corollary 20, the number of red neighbors of \(x\) among the vertices \(v_{S}\) is bounded by
\[\max\left(\binom{\Delta_{\operatorname{red}}(Q_{i})}{k-1},\binom{2k-3}{k-1} \right)\leq\max\left(\binom{\operatorname{tww}(\overline{P}_{t})}{k-1},\binom {2k-3}{k-1}\right).\]
Combining these bounds, we get that the red degree of the contraction sequence of \(\widehat{P}_{t}\) is bounded by
\[\max\left(k+1,\operatorname{tww}(\overline{P}_{t})+\binom{\operatorname{tww} (\overline{P}_{t})}{k-1},\operatorname{tww}(\overline{P}_{t})+\binom{2k-3}{k -1}\right).\]
The second claim follows by inserting this bound into the bound in Lemma 18.
Combining Lemma 18 and Lemma 21, we get the following two asymptotic bounds on the twin-width of a graph admitting a tree decomposition of small adhesion.
For every \(k\in\mathbb{N}\) there exist explicit constants \(D_{k}\) and \(D^{\prime}_{k}\) such that for every graph \(G\) with a tree decomposition of adhesion \(k\) and parts \(P_{1},P_{2},\ldots,P_{\ell}\), the following statements are satisfied:
1. \(\operatorname{tww}(G)\leq 2^{k}\max_{i\in[\ell]}\operatorname{tww}(\widehat{P}_{i})+ D_{k}\),
2. if \(k\geq 3\), then \(\operatorname{tww}(G)\leq\frac{2^{k}}{(k-1)!}\max_{i\in[\ell]} \operatorname{tww}(\overline{P}_{i})^{k-1}+D^{\prime}_{k}\).
### Tri- and quasi-4-connected components
We now want to apply these general results on the interplay between twin-width and tree decompositions of small adhesion to obtain bounds on the twin-width of graphs in terms of the twin-width of their tri- and quasi-4-connected components.
Let \(C_{1},C_{2},\ldots,C_{\ell}\) be the triconnected components of a 2-connected graph \(G\). If we write \(\overline{C}_{i}\) for the red torsos of the triconnected components \(C_{i}\), then
\[\operatorname{tww}(G)\leq\max\left(8\max_{i\in[\ell]}\operatorname{tww}( \overline{C}_{i})+6,18\right).\]
Proof.: This follows from Lemma 21 applied to the tree of triconnected components of \(G\) together with the observation that for \(k=2\), the second term in the maximum in Lemma 21 is always bounded by the maximum of the first and third term.
Note that in Theorem 3 we cannot hope for a lower bound similar to the lower bound in Theorem 2 without dropping the virtual edges. Indeed, consider a 3-connected graph \(G\) of large twin-width (e.g. Paley graphs or Rook's graphs). By [3, 2], a \((2\lceil\log(|G|)\rceil-1)\)-subdivision \(H\) of \(G\) has twin-width at most 4, but its triconnected components are \(G\) and multiple long cycles. Thus, there exist graphs of twin-width at most 4 with triconnected components of arbitrarily large twin-width.
Moreover, the red virtual edges in each separator can also not be replaced by black edges.
**Lemma 22**.: _There exists a family of graphs \((G_{n})_{n\in\mathbb{N}}\) with unbounded twin-width such that the twin-width of the class of triconnected components of \(G_{n}\) with black virtual edges is bounded._
Proof.: Let \(G_{n}\) be the graph obtained from a clique \(K_{n}\) by subdividing every edge once. The triconnected components of this graph are the \(K_{n}\) and a \(K_{3}\) for every edge of the \(K_{n}\), which all have twin-width \(0\).
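The graphs \(G_{n}\) are easy to generate explicitly; a short networkx sketch (helper name ours) is the following.

```python
import networkx as nx

def subdivided_clique(n: int) -> nx.Graph:
    # G_n: the complete graph K_n with every edge subdivided once
    G = nx.complete_graph(n)
    H = nx.Graph()
    H.add_nodes_from(G)
    for u, v in G.edges():
        s = ("sub", u, v)            # the subdivision vertex of edge uv
        H.add_edges_from([(u, s), (s, v)])
    return H

G5 = subdivided_clique(5)
print(G5.number_of_nodes(), G5.number_of_edges())   # 15 20
```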
In order to show that the twin-width of the family \((G_{n})_{n\in\mathbb{N}}\) is unbounded, we show that for every \(d\geq 2\) and \(n\geq(d+1)\binom{d}{2}+1\), we have \(\operatorname{tww}(G_{n})>d\). For this, consider any \(d\)-contraction sequence of \(G_{n}\) for such \(n\) and let \(\mathcal{P}\) be the partition of \(V(G_{n})\) right before the first contraction in the sequence that does not contract two subdivision vertices. We show that every partition class \(P\in\mathcal{P}\) has size at most \(\binom{d}{2}\). As no subdivision vertices were contracted so far, we only need to consider classes of subdivision vertices. Thus, let \(P=\{v_{e_{1}},\ldots,v_{e_{\ell}}\}\) be such a class, where \(e_{1},\ldots,e_{\ell}\in\binom{V(K_{n})}{2}\) are edges of the original \(K_{n}\). If the edges \(e_{i}\) all have a common endpoint, then \(P\) has red edges to all \(\ell\) other endpoints of these edges, meaning that \(\ell\leq d\leq\binom{d}{2}\). Otherwise, \(P\) has red edges to all endpoints of all \(e_{i}\). If \(\ell>\binom{d}{2}\), these have to be more than \(d\), which is a contradiction. Thus, \(|P|=\ell\leq\binom{d}{2}\).
Now, let \(xy\) be the next contraction in the sequence. If neither \(x\) nor \(y\) is a subdivision vertex, then \(G_{n}\) contains precisely \(2(n-2)\)-many vertices which are connected to either \(x\) or \(y\) but not both. In the contracted graph, the contraction would thus create at least \(\frac{2(n-2)}{\binom{d}{2}}\geq d+1\) red edges incident to the contracted vertex. If, on the other hand, either \(x\) or \(y\) is a subdivision vertex but the other is not, then \(x\) and \(y\) have no common neighbors. But as non-subdivision vertices have degree \(n-1\) in \(G_{n}\), contracting these two would create at least \(\frac{n-1}{\binom{d}{2}}\geq d+1\) red edges incident to the contracted vertex. Thus, no further contraction keeps the red degree of the sequence bounded by \(d\), which implies \(\operatorname{tww}(G_{n})>d\).
In the case of separators of size 3, we get two bounds on the twin-width of a graph in terms of its quasi-4-connected components: one linear bound in terms of the subgraphs induced on the quasi-4-connected components together with a common red neighbor for every 3-separator along which the graph was split, and one quadratic bound in terms of the (red) torsos of the quasi-4-connected components.
**Theorem 4**.: _Let \(C_{1},C_{2},\ldots,C_{\ell}\) be the quasi-4-connected components of a 3-connected graph \(G\)._
1. _For_ \(i\in[\ell]\) _we construct a trigraph_ \(\widehat{C}_{i}\) _by adding for every 3-separator_ \(S\) _in_ \(C_{i}\) _along which_ \(G\) _was split a vertex_ \(v_{S}\) _which we connect via red edges to all vertices in_ \(S\)_. Then_ \[\operatorname{tww}(G)\leq\max\left(8\max_{i\in[\ell]}\operatorname{tww}( \widehat{C}_{i})+14,70\right).\]
2. _For_ \(i\in[\ell]\)_, denote by_ \(\overline{C}_{i}\) _the_ red torso _of the quasi-4-connected component_ \(C_{i}\)_. Then_ \[\operatorname{tww}(G)\leq\max\left(4\max_{i\in[\ell]}\left(\operatorname{tww} (\overline{C}_{i})^{2}+\operatorname{tww}(\overline{C}_{i})\right)+14,70 \right).\]
Proof.: The two claims follow from Lemma 18 and Lemma 21 applied to the tree of quasi-4-connected components of \(G\)[14, 15] together with the observation that also for \(k=3\), the second term in the maximum in Lemma 21 is always bounded by the maximum of the first and third term.
## 5 Conclusion and further research
We proved that \(\operatorname{tww}(G)\leq\frac{3}{2}k+1+\frac{1}{2}(\sqrt{k+\ln k}+\sqrt{k}+2\ln k)\) if \(G\) is a graph of strong tree-width at most \(k\) (Theorem 1). Moreover, we demonstrated that asymptotically the twin-width of a Paley graph agrees with its strong tree-width (Lemma 9).
We provided a detailed analysis of the relation between the twin-width of a graph and the twin-width of its highly connected components. Concerning 2-connected graphs, there is a tight linear upper bound on the twin-width of a graph given the twin-width of its biconnected components (Theorem 2). There is a linear upper bound for a slightly modified version of triconnected components (Theorem 3). By further providing a quadratic upper bound on the twin-width of a graph given the twin-widths of its modified quasi-4-connected components (Theorem 4), we took one important step further to complete the picture of the interplay of the twin-width of a graph with the twin-width of its highly connected components. As a natural generalization of the above decompositions we considered graphs allowing for a tree decomposition of small adhesion (Theorem 5 and Theorem 6).
It seems worthwhile to integrate our new bounds for practical twin-width computations, for example, with a branch-and-bound approach. |
2303.16312 | Kernel based quantum machine learning at record rate : Many-body
distribution functionals as compact representations | The feature vector mapping used to represent chemical systems is a key factor
governing the superior data-efficiency of kernel based quantum machine learning
(QML) models applicable throughout chemical compound space. Unfortunately, the
most accurate representations require a high dimensional feature mapping,
thereby imposing a considerable computational burden on model training and use.
We introduce compact yet accurate, linear scaling QML representations based on
atomic Gaussian many-body distribution functionals (MBDF), and their
derivatives. Weighted density functions (DF) of MBDF values are used as global
representations which are constant in size, i.e.~invariant with respect to the
number of atoms. We report predictive performance and training data efficiency
that is competitive with state of the art for two diverse datasets of organic
molecules, QM9 and QMugs. Generalization capability has been investigated for
atomization energies, HOMO-LUMO eigenvalues and gap, internal energies at 0 K,
zero point vibrational energies, dipole moment norm, static isotropic
polarizability, and heat capacity as encoded in QM9. MBDF based QM9 performance
lowers the optimal Pareto front spanned between sampling and training cost to
compute node minutes,~effectively sampling chemical compound space with
chemical accuracy at a sampling rate of $\sim 48$ molecules per core second. | Danish Khan, Stefan Heinen, O. Anatole von Lilienfeld | 2023-03-28T21:11:33Z | http://arxiv.org/abs/2303.16312v2 | # Kernel based quantum machine learning at record rate:
###### Abstract
The feature vector mapping used to represent chemical systems is a key factor governing the superior data-efficiency of kernel based quantum machine learning (QML) models applicable throughout chemical compound space. Unfortunately, the most accurate representations require a high dimensional feature mapping, thereby imposing a considerable computational burden on model training and use. We introduce compact yet accurate, linear scaling QML representations based on atomic Gaussian many-body distribution functionals (MBDF), and their derivatives. Weighted density functions (DF) of MBDF values are used as global representations which are constant in size, i.e. invariant with respect to the number of atoms. We report predictive performance and training data efficiency that is competitive with state of the art for two diverse datasets of organic molecules, QM9 and QMugs. Generalization capability has been investigated for atomization energies, HOMO-LUMO eigenvalues and gap, internal energies at 0 K, zero point vibrational energies, dipole moment norm, static isotropic polarizability, and heat capacity as encoded in QM9. MBDF based QM9 performance lowers the optimal Pareto front spanned between sampling and training cost to compute node minutes, effectively sampling chemical compound space with chemical accuracy at a sampling rate of \(\sim 48\) molecules per core second.
## I Introduction
Modern data-driven statistical Machine Learning (ML) models have emerged as powerful tools over the past decade for inferring quantum mechanical observables throughout chemical compound space, without explicitly solving electronic Schrodinger equations [1; 2; 3]. Similar success was obtained for ML based interatomic potentials and force-fields [4; 5; 6; 7; 8; 9] as well as electronic structure modeling throughout Chemical Compound Space (CCS) [10; 11]. For an entire set of extensive in-depth reviews on these and other related ML applications, we refer the reader to the recent special issue in Chemical Reviews [12; 13]. Various aspects in the development of ML model architecture and training protocols have proven to be essential for data-efficiency. In particular, the molecular representation is known to have a strong impact on the performance of similarity based ML models, such as kernel ridge regression (KRR) [14; 15; 16]. This is not surprising as the representation controls the information about the systems, how it is weighted, and the consistency of these Quantum Machine Learning (QML) models with _ab initio_ methods [17]. These representations are non-linear mappings of the atomistic systems to a suitable Hilbert Space where a statistical regression model can easily be applied. The Hilbert Space constraint applies due to the requirement of measuring similarity in terms of an inner product [18]. These mappings should have some desirable features, of which the most important are i) uniqueness such that systems with different properties necessarily possess different representations [14] and ii) invariance with respect to transformations that leave the target property invariant, such as global translations, rotations, and atomic index permutations of the same chemical elements. Other desirable features include iii) an analytical and continuous form of the representation function, iv) differentiability with respect to nuclear coordinates, charges, number of electrons, and number of atoms, v) as general as the Hamiltonian, vi) computationally efficient evaluation, vii) compact or even constant size, limiting the computational cost for larger systems [19].
Due to their critical role, many representations have been introduced and investigated within the context of atomistic simulation [20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31; 32]. For recent comprehensive reviews, the reader is referred to Refs. [16; 33]. These representations can either describe the molecule as a whole (global) or each atom (local or atomic) separately. For the sake of brevity we restricted most of our comparative benchmarks within this study to the following representations which are commonly used to train QML models throughout CCS: Faber-Christensen-Huang-Lilienfeld (FCHL19) representation [34; 35], Smooth Overlap of Atomic Positions (SOAP) [36], spectrum of London and Axilrod-Teller-Muto potentials (SLATM) [37], atom index sorted Coulomb matrix (CM) [1], and its vectorized form, Bag of Bonds (BOB) [38]. Other representations/models tested are mentioned in the data and code section.
While these representations satisfy most of the aforementioned qualities, given the immense size of CCS, a
more compact and scalable representation would still be desirable. Formally, the number of degrees of freedom of any material or molecule would prescribe usage of a \(4M\) dimensional feature vector (3 spatial coordinates and one nuclear charge coordinate for each of the atoms). However, all aforementioned representations when using optimal hyperparameters require a higher dimensional feature vector mapping in order to be accurate and training-data-efficient at regression tasks, and some (e.g. CM or BOB) even scale quadratically with \(M\). While verbosity facilitates the inclusion of invariances, the \(4M\) degrees of freedom suggest that the same performance can be obtained using more compact representations. This is especially an issue for kernel based ML models, where the size of the representation directly affects the distance/kernel evaluation time [18; 34]. Although the scaling for kernel inversion [39] is larger (\(\propto\mathcal{O}(n^{3})\) for Cholesky solvers), for highly data-efficient (i.e. efficient in training data) QML models it is the kernel generation and evaluation that consumes the most compute time as demonstrated later. The kernel evaluation pre-factor becomes even worse when using atomic (or local) representations in conjunction with a suitable local kernel [40]. Obvious solutions by reducing finite local cutoffs within the representation come at the expense of reducing the predictive power, or, conversely, increasing training data needs. As such, a computationally efficient yet accurate solution is desirable as was shown in the discretization of the FCHL18 representation [34; 35]. Other solutions to this problem such as sparse kernel models [41] and the recently introduced conjugate gradients based iterative approach for training kernel based models [42] would also be well complemented by a compact molecular representation.
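To make the cost trade-off explicit, consider a generic kernel ridge regression workflow with a Gaussian kernel (a minimal NumPy/SciPy sketch, not the authors' implementation; hyperparameter values are placeholders): the pairwise kernel evaluation scales with the square of the training set size times the representation length, while the subsequent Cholesky solve scales cubically but is independent of the representation size.

```python
import numpy as np
from scipy.spatial.distance import cdist
from scipy.linalg import cho_factor, cho_solve

def train_krr(X_train, y_train, sigma=100.0, lam=1e-8):
    # Kernel evaluation: cost ~ n^2 times the representation length
    K = np.exp(-cdist(X_train, X_train, "sqeuclidean") / (2 * sigma ** 2))
    K[np.diag_indices_from(K)] += lam
    # Kernel inversion: O(n^3) Cholesky solve, independent of representation size
    alphas = cho_solve(cho_factor(K), y_train)
    return alphas

def predict_krr(X_test, X_train, alphas, sigma=100.0):
    K_test = np.exp(-cdist(X_test, X_train, "sqeuclidean") / (2 * sigma ** 2))
    return K_test @ alphas
```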
Herein, we propose a methodology for generating representations that minimize feature size. We use functionals of many-body distributions (MBDFs) and their derivatives to encode any local chemical environment of any atom in an integrated compact fashion. The representations thus generated preserve the system's various symmetries (translation, rotation, atom index invariance), and can be tailored to the physical property of interest through scaling functions. MBDFs are easily extendable to include higher order many-body interactions with minimal increase in size. In the current formulation, while including three-body interactions, MBDF scales as \(5M\). We further tackle the issue of storing this information in a manner that remains invariant to the number of atoms in a molecule. We do this by generating a discretized density (DF) function of MBDF values, and using it as a global molecular representation. Using two diverse datasets, the performance of MBDF is tested against the aforementioned SOAP and FCHL19, which are commonly used and state-of-the-art, as well as SLATM, BOB, CM representations and a few other QML models mentioned later. Lastly, we explore the bottleneck cross-over from kernel evaluation to inversion.
## II Theory and Discussion
### Many-Body Distribution Functionals
We begin the discussion of our local representation using distribution functions over the internal coordinates which can be constructed using the atomic coordinates.
An analytical and continuous distribution over the pair-wise internal coordinate, defined as the inter-atomic distances (pair correlation function), is easily built using Gaussian probability density functions (PDFs) centered at each inter-atomic distance with respect to an atom \(i\):
\[\rho_{i}(r,R_{ij})=\frac{1}{\sqrt{2\pi\sigma_{r}^{2}}}\sum_{j\neq i}^{M}Z_{j} \exp\left(-\frac{(r-R_{ij})^{2}}{\sigma_{r}}\right) \tag{1}\]
where \(\rho_{i}(r,R_{ij})\) is the normalized distribution with atom \(i\) as the origin, \(\sigma_{r}\) is the Gaussian length-scale (or variance) parameter, \(M\) denotes the total number of atoms in the system (or within a radial cutoff if employed), \(R_{ij}\) denotes the inter-atomic distance between atoms \(i\) and \(j\) and \(Z_{j}\) is the nuclear charge. Scaling by nuclear charges defines elemental identities, but could also be done in other ways e.g. having different length-scale parameters \(\sigma_{r}\) for each unique chemical element, or multiple dimensions such as period and group specifications [43]. In a similar fashion, a continuous distribution (triplet correlation function) over the 3-body internal coordinate, inter-atomic angles \(\theta\), is defined as:
\[\begin{split}\rho_{i}(\theta,\theta_{jik})=\frac{1}{\sqrt{2\pi\sigma_{\theta}^{2}}}\sum_{j\neq i}^{M}\sum_{k\neq j}^{M}Z_{j}Z_{k}\ \exp\bigg{(}-\frac{(\theta-\theta_{jik})^{2}}{\sigma_{\theta}}\bigg{)},\\ \theta_{jik}=\cos^{-1}\frac{(\mathbf{R}_{i}-\mathbf{R}_{j})^{T}(\mathbf{R}_{i}-\mathbf{R}_{k})}{R_{ij}R_{ik}}\end{split} \tag{2}\]
where \(\theta_{jik}\) is the inter-atomic angle centered on atom \(i\). This can be generalized to define a continuous distribution, or correlation function, over any \(m\)-body internal coordinate \(\tau\).
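As an illustration, the 2-body distribution of Eq. (1) can be evaluated on a radial grid with a few lines of NumPy (function and parameter names are ours; the exponent is written exactly as in Eq. (1)); the angular distribution of Eq. (2) is obtained analogously with a double loop over atom pairs.

```python
import numpy as np

def two_body_distribution(i, coords, charges, r_grid, sigma_r=0.2):
    # Eq. (1): Gaussian-smeared, charge-weighted distribution of distances to atom i
    R = np.linalg.norm(coords - coords[i], axis=1)
    rho = np.zeros_like(r_grid)
    for j, Rij in enumerate(R):
        if j == i:
            continue
        rho += charges[j] * np.exp(-(r_grid - Rij) ** 2 / sigma_r)
    return rho / np.sqrt(2 * np.pi * sigma_r ** 2)
```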
Such analytical distribution functions of the internal coordinates have been used as descriptors of atomic environments in Atom Centered Symmetry Functions [6; 20] (ACSF), their weighted variants wACSFs [23] and other similar methods. Instead of using these distribution functions as atomic descriptors, we define the MBDF representation as functionals of these \(m\)-body distributions which leads to a more compact descriptor and allows inclusion of higher order terms with minimal change in size (a single scalar for each \(m\)-body term). With the 2- and 3-body distributions defined above, each atom can then be represented by the two zeroth-order functionals:
\[F_{0}^{(2)}[i]=\int_{0}^{r_{c}}dr\ g_{0}(r,R_{ij})\ \rho_{i}(r,R_{ij}) \tag{3}\]
\[F_{0}^{(3)}[i]=\int_{0}^{\pi}d\theta\ g_{0}(\theta,R_{ij},R_{jk},R_{ik})\ \rho_{i}( \theta,\theta_{jik}) \tag{4}\]
where \(r_{c}\) denotes the radial cut-off distance, and \(g_{0}(r,R_{ij})\) and \(g_{0}(\theta,R_{ij},R_{jk},R_{kj})\) denote 2- and 3-body weighting functions. Note that when the weighting functions \(g_{0}(r,R_{ij})\) and \(g_{0}(\theta,R_{ij},R_{jk},R_{kj})\) correspond to suitable 2 and 3-body potentials, the functionals \(F_{0}^{(2)}\), \(F_{0}^{(3)}\) become the average of the corresponding 2, 3-body inter-atomic interactions weighted by the pair and triplet correlation functions \(\rho_{i}(r,R_{ij})\), \(\rho_{i}(\theta,\theta_{jik})\), respectively. These functionals then form a coarse approximation to the average 2, 3-body interactions experienced by a chemical species. Furthermore, we exploit the advantage of using the infinitely differentiable Gaussian PDFs to define higher order functionals such as:
\[F_{1}^{(2)}[i]=\int_{0}^{r_{c}}dr\ g_{1}(r,R_{ij})\ \frac{\partial\rho_{i}}{ \partial r}(r,R_{ij}) \tag{5}\]
\[F_{1}^{(3)}[i]=\int_{0}^{\pi}d\theta\ g_{1}(\theta,R_{ij},R_{jk},R_{ik})\ \frac{\partial\rho_{i}}{\partial\theta}(\theta,\theta_{jik}) \tag{6}\]
with potentially different weighting functions \(g_{1}(r)\), \(g_{1}(\theta,R_{ij},R_{jk},R_{kj})\). The derivative functionals are useful since the functional of any arbitrary distribution is not unique. These functionals also encode the change in the \(m\)-body distribution in an atom's local neighborhood and have not been used in previous works involving internal coordinate distribution functions. For any \(n\)-th derivative of the 2-body distribution we can define the functional:
\[F_{n}^{(2)}[i]=\int_{0}^{r_{c}}dr\ g_{n}(r)\ \partial_{r}^{n} \rho_{i}(r,R_{ij}),\] \[\partial_{r}^{n}\rho_{i}(r)=\frac{\partial^{n}\rho_{i}}{\partial r ^{n}}(r,R_{ij}) \tag{7}\]
where \(g_{n}(r)\) is, again, a suitable radial weighting function. Generalizing this to all internal coordinates, a functional \(F_{n}^{(m)}[i]\) can be defined over the \(n\)-th derivative of any \(m\)-body distribution function centered at atom \(i\):
\[F_{n}^{(m)}[i]=\int_{0}^{r_{c}}d\tau\ g_{n}(\tau)\ \partial_{ \tau}^{n}\rho_{i}(\tau,\tau_{i_{1}i_{2}..i_{m}}),\\ \partial_{\tau}^{n}\rho_{i}(\tau,\tau_{i_{1}i_{2}..i_{m}})=\sum_{ i_{1}<i_{2}..<i_{m}}^{M}H_{n}(\tau-\tau_{i_{1}i_{2}..i_{m}})\\ \times\mathcal{N}(\tau_{i_{1}i_{2}..i_{m}},\sigma_{\tau}^{2}) \prod_{j=i_{1}}^{i_{m}}Z_{j} \tag{8}\]
where \(\tau\) denotes the \(m\)-body internal coordinate, \(\rho_{i}\) is the \(m\)-body distribution function w.r.t atom \(i\), \(g_{n}\) is the weighting function for the \(n\)-th derivative of \(\rho_{i}\), \(H_{n}\) denotes the Hermite polynomial of degree \(n\) and \(\mathcal{N}(\tau_{i_{1}i_{2}..i_{m}},\sigma_{\tau}^{2})\) denotes the normalized Gaussian distribution. The Hermite polynomials arise due to the use of Gaussian PDFs and allow convenient computation of \(n\) derivatives of the distribution function at any point \(\tau\).
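A numerical sketch of the functionals in Eqs. (5)-(8) is straightforward because derivatives of Gaussians are Hermite polynomials multiplied by the Gaussian itself. The NumPy code below (function names and the specific normalization convention are ours, and the weighting function \(g_{n}\) is left as a user-supplied callable) evaluates \(\partial_{r}^{n}\rho_{i}\) on a radial grid and the corresponding functional by trapezoidal quadrature.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermeval

def gaussian_derivative(r_grid, center, sigma, n):
    # d^n/dr^n of a normalised Gaussian via probabilists' Hermite polynomials:
    # d^n/du^n exp(-u^2/2) = (-1)^n He_n(u) exp(-u^2/2), with u = (r - center)/sigma
    u = (r_grid - center) / sigma
    He_n = hermeval(u, [0] * n + [1])
    gauss = np.exp(-u ** 2 / 2) / (sigma * np.sqrt(2 * np.pi))
    return (-1) ** n * He_n * gauss / sigma ** n

def functional_F(r_grid, centers, weights, g_n, n, sigma=0.2):
    # centers: inter-atomic distances R_ij, weights: nuclear charges Z_j
    # F_n^(2)[i] ~ integral of g_n(r) * d^n rho_i / dr^n over the radial grid
    drho = sum(w * gaussian_derivative(r_grid, R, sigma, n)
               for w, R in zip(weights, centers))
    return np.trapz(g_n(r_grid) * drho, r_grid)
```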
We note here that an alternative way to describe a (bounded) distribution in a compact form is through a moment expansion of the form:
\[G^{(m)}[i]=\int_{0}^{r_{c}}d\tau\ (\tau-\tau_{i_{1}i_{2}..i_{m}})^{m}g _{m}(\tau)\rho_{i}(\tau,\tau_{i_{1}i_{2}..i_{m}}) \tag{9}\]
where \(G^{(m)}[i]\) denotes the \(m\)-th moment of the distribution centered at atom \(i\). The set of \(m\) moments \(G^{(m)}[i]\) would then form the local representation of the atom \(i\). These moments can also be evaluated by placing a set of Gaussians (or any basis functions) on each atom \(i\), and then evaluating the moments of the atomic density \(\rho_{i}(\mathbf{r})\) w.r.t each atomic position within a radial cutoff \(\mathbf{r}_{c}\):
\[G^{(m)}[i]=\int_{0}^{\mathbf{r}_{c}}d\mathbf{r}\ | \mathbf{r}|^{m}g_{m}(|\mathbf{r}|)\rho_{i}(\mathbf{r}),\\ \rho_{i}(\mathbf{r})=\frac{1}{\sqrt{2\pi\sigma^{2}}}\sum_{j}\exp \left(-\frac{||\mathbf{r}-\mathbf{R}_{ij}||_{2}^{2}}{2\sigma^{2}}\right) \tag{10}\]
where \(|.|\) is any metric. This form has the advantage of being independent of many-body orders and the computational cost of evaluating these moments scales solely with the number of atoms within the cutoff radius \(\mathbf{r}_{c}\). The integral can be simplified in spherical polar coordinates by expanding the density \(\rho_{i}(\mathbf{r})\) in a basis set composed of spherical harmonics \(Y_{l}^{m^{\prime}}\) and orthogonal radial functions \(u_{n}\) (which is a common practice [36]):
\[G^{(m)}[i]=\int_{0}^{r_{c}}\int_{0}^{2\pi}\int_{0}^{\pi}dr\ d\theta\ d\phi\ r^{m+2}\ g_{m}(r)\ \sin(\phi)\\ \times\sum_{nlm^{\prime}}c_{nlm^{\prime}}^{i}\ u_{n}(r)\ Y_{l}^{m^{\prime}}(\theta,\phi),\\ c_{nlm^{\prime}}^{i}=\langle\rho_{i}(\mathbf{r})|u_{n}(r)\ Y_{l}^{m^{\prime}}(\theta,\phi)\rangle \tag{11}\]
Throughout our work we use the derivative formalism from eq. (8) since our numerical results indicated superior performance compared to the moment expansion in the internal coordinates (eq. (9)). However, the moment expansion in eq. (10), being independent of many-body terms, offers a promising alternative and could be the subject of a future work.
We have tested multiple weighting functions to identify the best combination. In particular, since these functionals correspond to correlation function averages of \(m\)-body interactions, we have tested the Harmonic, Morse [44], Lennard-Jones [45] potentials and simple decaying functions (power laws, exponential, Gaussian decays
and their combinations) for 2-body terms. For 3-body terms we have tested Cosine Harmonic, Axilrod-Teller [46], Stillinger-Weber [47] potentials and a scaled Fourier series. Through trial and error combined with cross-validated hyper-parameter optimization, we have identified the following 5 suitable functionals corresponding to 2 and 3-body distributions:
\[F_{0}^{(2)}[i]= \int_{0}^{r_{c}}dr\ \left[e^{-\eta r}-\frac{\sqrt{2}\pi}{10(r+1)^{3} }\right]\ \rho_{i}(r,R_{ij}) \tag{12}\] \[F_{1}^{(2)}[i]= \frac{\sqrt{2}\pi}{10}\int_{0}^{r_{c}}dr\ \frac{\partial_{r}\rho_{i}(r,R_{ij})}{(r+1)^{6}}\] (13) \[F_{2}^{(2)}[i]= \int_{0}^{r_{c}}dr\ e^{-\alpha r}\ \partial_{r}^{2}\rho_{i}(r,R_{ij})\] (14) \[F_{0}^{(3)}[i]= \int_{0}^{\pi}d\theta\ \frac{\sum_{n=0}^{3}a_{n}\cos(n\theta)}{(R_{ ij}R_{jk}R_{ik})^{4}}\ \rho_{i}(\theta,\theta_{jik})\] (15) \[F_{1}^{(3)}[i]= \int_{0}^{\pi}d\theta\ \left[\frac{1+\cos(\theta)\cos(\theta_{kji}) \cos(\theta_{ikj})}{(R_{ij}R_{jk}R_{ik})^{4}}\right]\] (16) \[\times\partial_{\theta}\rho_{i}(\theta,\theta_{jik})\]
where \(\eta\), \(\alpha\), \(a_{n}\) and the various power laws are all hyper-parameters of the representation. Note the scaling functions of MBDF contributions \(F_{0}^{(2)},F_{1}^{(2)},F_{0}^{(3)},F_{1}^{(3)}\) being respectively reminiscent of Buckingham type-potential, softened London dispersion potential, Fourier series scaled by Lennard-Jones repulsion and Axilrod-Teller-Muto potential scaled by Lennard-Jones repulsion. The specific reason as to why this particular selection of weighting functions has proven advantageous will be subject of future research.
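As a concrete illustration of the 3-body term, the sketch below evaluates a functional of the form of eq. (15) for one atom by looping over neighbour triplets; the coefficients \(a_{n}\) are taken from Table 1, but the triplet screening, grid and normalization choices are simplifying assumptions and may differ from our actual implementation.

```python
import numpy as np
from itertools import combinations

def three_body_functional(coords, charges, i, a=(3.0, 100.0, -200.0, -164.0),
                          sigma_theta=2.0, r_cut=6.0, n_grid=400):
    """Sketch of a zeroth-order 3-body functional in the spirit of eq. (15) for atom i."""
    theta = np.linspace(0.0, np.pi, n_grid)
    norm = 1.0 / np.sqrt(2.0 * np.pi * sigma_theta**2)
    F = 0.0
    others = [n for n in range(len(charges)) if n != i]
    for j, k in combinations(others, 2):
        Rij = np.linalg.norm(coords[i] - coords[j])
        Rik = np.linalg.norm(coords[i] - coords[k])
        Rjk = np.linalg.norm(coords[j] - coords[k])
        if max(Rij, Rik) > r_cut:
            continue
        # angle theta_jik at the central atom i
        cos_t = np.dot(coords[j] - coords[i], coords[k] - coords[i]) / (Rij * Rik)
        t_jik = np.arccos(np.clip(cos_t, -1.0, 1.0))
        # Gaussian-broadened angular distribution, weighted by Z_j * Z_k
        gauss = charges[j] * charges[k] * norm * np.exp(-(theta - t_jik) ** 2 / (2.0 * sigma_theta**2))
        # cosine series weighting scaled by the triple-distance factor of eq. (15)
        weight = sum(a_n * np.cos(n * theta) for n, a_n in enumerate(a)) / (Rij * Rjk * Rik) ** 4
        F += np.trapz(weight * gauss, theta)
    return F

# toy usage: a bent 3-atom fragment with the heavy atom as the centre
coords = np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 1.1], [1.0, 0.0, -0.4]])
print(three_body_functional(coords, [6.0, 1.0, 1.0], i=0))
```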
Figure 1 shows the effect of each functional on the learning capacity of MBDF for the task of predicting atomization energies of the QM9 [48] dataset. It is apparent that both the derivative and many-body terms improve the learning capacity, albeit by different magnitudes.
Throughout our testing on the QM7 [49], QM9 [48] and QMugs [50] datasets, we have found these 2, 3-body functionals to be sufficient at discriminating between all of the molecular structures. Cases where the 2-body information does not suffice include homometric pairs, as already discussed many years ago [14]. And even 3- and 4-body information does not suffice for some cases, as recently discussed in Ref. [51]. We note here that, whenever necessary, arbitrarily higher order derivative and many-body information could also be included in MBDFs at minimal increase in size, i.e. one additional term per order (See eq. (8)). In particular, we believe that the inclusion of the 4-body term as a functional of the dihedrals could further improve the learning capacity for conformational isomers. Inclusion of 4-body information has been shown to result in further improvements of learning curves [52]. Also note that the size of MBDF is invariant to the cutoffs used which can be raised to arbitrarily higher values while employing a suitable long-range functional (hence increasing the farsightedness of the representation) without affecting the kernel evaluation cost. We further note that other weighting functions and response terms could also be useful for QML models of physical observables such as dipole moments, vibrational frequencies, heat capacities etc.
### Density of functionals
MBDF is a local representation and its size scales linearly with the number of atoms in the system. In order to eliminate this scaling we can transform to the frequency space of MBDF functional values. The frequencies can be evaluated by normalizing the functional values to lie within an arbitrary range and then using, e.g., kernel density estimation [53]. For a finite set of MBDF functional values \(\{X_{i}\}_{i=1}^{5M}\) over the range \([a,b]\), a "smooth histogram" of their frequencies can be constructed by
Figure 1: MBDF based QML learning curves using concatenated increasingly higher order and body functionals (Eqs. (12-16)). Mean absolute error (MAE) of predicting atomization energies of the QM9 [48] dataset is shown as a function of training set size \(N\).
placing a set of kernel functions \(K\) at each point \(X_{i}\):
\[f(u)=\frac{1}{5M}\ \sum_{i=1}^{5M}\ K(u,X_{i}) \tag{17}\]
where \(f(u)\) gives the density of the samples at any point \(u\in[a,b]\). If the set \(\{X_{i}\}_{i=1}^{5M}\) are the MBDF functional values for any molecule, their distribution density can be evaluated using eq. (17). The density \(f(u)\) can then be used as a global molecular representation whose size is independent of the number of MBDF functional values, and number of atoms by extension. The (dis-)similarity between two molecules \(A\) and \(B\) can be evaluated as, e.g., the l2-distance:
\[d(A,B)^{2}=\int_{-c}^{c}du\ |f_{A}(u)-f_{B}(u)|^{2} \tag{18}\]
where \([-c,c]\) is the normalization range chosen as \([-10,10]\) in our work. The form of the density function used in our work is:
\[f_{A}(u)=\frac{1}{5M}\ \sum_{i=1}^{5M}\ \frac{\sqrt{|X_{i}|}}{\sqrt{2\pi}\sigma _{b}}\ \exp\left(-\frac{(u-X_{i})^{2}}{2\sigma_{b}^{2}}\right) \tag{19}\]
where \(X_{i}\) are the MBDF functionals for molecule \(A\), and \(\sigma_{b}\) is the width (standard deviation) of the Gaussian function and is a hyperparameter (also called the bandwidth). Comparing with eq. (17), the function \(K\) is defined as:
\[K(u,X_{i}):=\frac{\sqrt{|X_{i}|}}{\sqrt{2\pi}\sigma_{b}}\ \exp\left(-\frac{(u-X_{i})^{2}}{2 \sigma_{b}^{2}}\right) \tag{20}\]
Note that this is a divergence [54] but not a kernel function because it is asymmetric:
\[K(x,y)= \frac{\sqrt{|y|}}{\sqrt{2\pi}\sigma_{b}}\ \exp\left(-\frac{(x-y)^{2}}{2 \sigma_{b}^{2}}\right)\] \[\neq \frac{\sqrt{|x|}}{\sqrt{2\pi}\sigma_{b}}\ \exp\left(-\frac{(x-y)^{2}}{2 \sigma_{b}^{2}}\right)=K(y,x) \tag{21}\]
This function is used because it weights the MBDF functional frequencies by the functional value itself resulting in the distance measurement (eq. (18)) being weighted by the difference in functional values. Another advantage of this function is that it eliminates the frequency of null values (or "ghost atoms") within the MBDF representation which might be present due to the procedure of zero-padding [1].
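A minimal sketch of the resulting DF construction, using the grid range \([-10,10]\), grid spacing 0.2 and bandwidth \(\sigma_{b}=0.07\) quoted in this work (the input functional values are assumed to have been normalized beforehand):

```python
import numpy as np

def df_fingerprint(X, grid_min=-10.0, grid_max=10.0, spacing=0.2, sigma_b=0.07):
    """Weighted kernel density estimate of eq. (19) for a set of normalized MBDF values X."""
    u = np.arange(grid_min, grid_max + spacing, spacing)
    X = np.asarray(X, dtype=float)
    f = np.zeros_like(u)
    for x in X:
        f += np.sqrt(abs(x)) * np.exp(-(u - x) ** 2 / (2.0 * sigma_b**2)) / (np.sqrt(2.0 * np.pi) * sigma_b)
    return f / len(X)

def df_distance_sq(fA, fB, spacing=0.2):
    """Discretized l2-distance of eq. (18) between two DF fingerprints."""
    return np.trapz((fA - fB) ** 2, dx=spacing)

# toy usage with two small sets of (already normalized) functional values
fA = df_fingerprint([-3.2, 0.5, 4.1])
fB = df_fingerprint([-3.0, 0.7, 4.0])
print(df_distance_sq(fA, fB))
```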
In our work we generate a separate density function \(f(u)\) for each of the 5 MBDF functionals in eqs. (12)-(16), and for each unique chemical element present in the dataset. These are then concatenated to form the global representation of the molecule. Alternatively, it could be done by using multivariate Gaussian functions for the density estimation. Let \(\mathbf{x}_{i}\) denote the 5-dimensional vector of MBDF functional values (eqs. (12)-(16)) for any atom \(i\) in molecule \(A\). Then the multivariate density function \(f_{A}(\mathbf{u})\) of this molecule can be evaluated as:
\[f_{A}(\mathbf{u})=\frac{1}{M}\ \sum_{i=1}^{M}\ K(\mathbf{u},\mathbf{x}_{i}) \tag{22}\]
\[f_{A}(\mathbf{u})=\frac{1}{M}\sum_{i=1}^{M}\frac{(\mathbf{x}_{i}^{T}\mathbf{x }_{i})^{1/4}}{\sqrt{2\pi}\sigma_{b}}\exp\left(-\frac{(\mathbf{u}-\mathbf{x}_{i })^{T}(\mathbf{u}-\mathbf{x}_{i})}{2\sigma_{b}^{2}}\right) \tag{23}\]
The l2-distance between molecules \(A\) and \(B\) then takes the form:
\[d(A,B)^{2}=\int d\mathbf{u}\ (f_{A}(\mathbf{u})-f_{B}(\mathbf{u}))^{T}( f_{A}(\mathbf{u})-f_{B}(\mathbf{u})) \tag{24}\]
where the integral is over the normalization region. Since the former method generates a more compact representation we have chosen to work with it. The abbreviation DF (Density of functionals) will be used throughout for this global representation.
### Numerical analysis
The DF method allows generating a representation that does not scale with the number of atoms in the system. However, in order to use it as a feature vector the density functions have to be discretized. Through convergence testing we have set the grid spacing to 0.2 throughout our work. However, we note that this grid spacing could be changed for a different data-set in order to achieve the desirable accuracy vs computational cost trade-off. Furthermore, DF corresponds to a flattened feature vector which can be used with global kernels (or other ML methods), and which exhibits superior performance when compared to a straightforward concatenation of all MBDF rows (see Figure 2). The flattened MBDF representation is generated by sorting the MBDF matrix of each molecule by the row norm, and then flattening the matrix by concatenation of the rows to vectorize it [1].
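The flattening procedure just described can be sketched as follows (whether rows are sorted in ascending or descending norm order is an implementation detail):

```python
import numpy as np

def flatten_mbdf(mbdf, max_atoms):
    """Sort per-atom MBDF rows by row norm, zero-pad to max_atoms, and flatten."""
    order = np.argsort(np.linalg.norm(mbdf, axis=1))[::-1]  # largest-norm rows first
    padded = np.zeros((max_atoms, mbdf.shape[1]))
    padded[: mbdf.shape[0]] = mbdf[order]
    return padded.ravel()

# toy usage: a 3-atom molecule with 5 functionals per atom, padded to 5 atoms
print(flatten_mbdf(np.random.rand(3, 5), max_atoms=5).shape)  # (25,)
```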
Figure 3 shows molecular fingerprints generated using the 1 and 5 functional DF representations for three diverse and relevant organic molecules (glucose, uric acid, and testosterone) on the same grid. For each molecule, a distinct fingerprint is obtained, with peak-positions depending on the local chemical environment of each atom. Consequently, peaks of atoms with chemically similar environments are located closer to each other. Peak heights encode both number and type (because of the density estimate being weighted) of chemical environments [See Eq. 20]. Figure 3 demonstrates that for molecules with increasing size, corresponding DF based fingerprints will
grow in magnitude, not in size. In the SI, we also show how DF fingerprints distinguish conformational isomers, as exemplified for the chair and boat conformations of cyclohexane.
## III Methods and data
### Kernel ridge regression
The ML method that we focus on, and use throughout this work, is the supervised learning method called Kernel ridge regression [39; 18] (KRR). This method has been covered extensively earlier [34; 35; 1; 2; 3] so we skip the details here.
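For completeness, the basic KRR workflow amounts to solving a regularized linear system with the training kernel; the following generic sketch (not the QMLcode implementation, and with an arbitrary toy kernel) illustrates the training and prediction steps:

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def krr_train(K_train, y_train, lam=1e-8):
    """Solve (K + lam*I) alpha = y for the regression coefficients via Cholesky."""
    L = cho_factor(K_train + lam * np.eye(len(y_train)))
    return cho_solve(L, y_train)

def krr_predict(K_test_train, alpha):
    """Predictions are the kernel rows between test and training points times alpha."""
    return K_test_train @ alpha

# toy usage on random 1-D data with a Gaussian kernel
x = np.random.rand(50, 1); y = np.sin(3 * x).ravel()
K = np.exp(-(x - x.T) ** 2 / (2 * 0.3**2))
alpha = krr_train(K, y)
print(krr_predict(K[:5], alpha))
```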
The kernel functions [55; 56] we use in our work along with global representations are the Gaussian kernel,
\[k(\mathbf{x}_{I},\mathbf{x}_{J})=\exp\left(-\frac{||\mathbf{x}_{I}-\mathbf{x} _{J}||_{2}^{2}}{2\sigma^{2}}\right) \tag{25}\]
and Laplacian kernel,
\[k(\mathbf{x}_{I},\mathbf{x}_{J})=\exp\left(-\frac{||\mathbf{x}_{I}-\mathbf{x} _{J}||_{1}}{\sigma}\right) \tag{26}\]
where \(\mathbf{x}_{I}\) denotes the representation vector of molecule \(I\).
The kernel function used for the local representations FCHL19, SOAP and MBDF is a summation of atomic kernels:
\[k(\mathbf{M}_{I},\mathbf{M}_{J})=\sum_{a\in I}\sum_{b\in J}k^{l}(\mathbf{x}_{Ia},\mathbf{x}_{Jb}) \tag{27}\]
with the local Gaussian kernel:
\[k^{l}(\mathbf{x}_{Ia},\mathbf{x}_{Jb})=\delta_{Z_{a},Z_{b}}\exp\left(-\frac{|| \mathbf{x}_{Ia}-\mathbf{x}_{Jb}||_{2}^{2}}{2\sigma^{2}}\right) \tag{28}\]
the local Laplacian kernel:
\[k^{l}(\mathbf{x}_{Ia},\mathbf{x}_{Jb})=\delta_{Z_{a},Z_{b}}\exp\left(-\frac{|| \mathbf{x}_{Ia}-\mathbf{x}_{Jb}||_{1}}{\sigma}\right) \tag{29}\]
or the local exponential kernel with the Euclidean norm:
\[k^{l}(\mathbf{x}_{Ia},\mathbf{x}_{Jb})=\delta_{Z_{a},Z_{b}}\exp\left(-\frac{|| \mathbf{x}_{Ia}-\mathbf{x}_{Jb}||_{2}}{\sigma}\right) \tag{30}\]
where \(\mathbf{M}_{I}\) denotes the representation matrix of molecule \(I\), \(\mathbf{x}_{Ia}\) denotes the representation vector of atom \(a\) within molecule \(I\) and \(\delta_{Z_{a},Z_{b}}\) denotes a Kronecker Delta over the nuclear charges \(Z_{a},Z_{b}\) which restricts the similarity measurement to atoms of the same chemical element [34]. Other kernel functions will also be tested in the future [57; 40; 58].
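A simple reference implementation of the local kernel of eqs. (27) and (28), with the Kronecker delta realized as an element match between atoms (the optimized implementations used in this work vectorize this double loop):

```python
import numpy as np

def local_gaussian_kernel(XA, ZA, XB, ZB, sigma):
    """Local kernel of eqs. (27)-(28): sum of atomic Gaussians over same-element pairs.

    XA, XB : per-atom representation matrices (n_atoms x n_features)
    ZA, ZB : nuclear charges of the atoms of the two molecules
    """
    k = 0.0
    for xa, za in zip(XA, ZA):
        for xb, zb in zip(XB, ZB):
            if za == zb:  # Kronecker delta over chemical elements
                k += np.exp(-np.sum((xa - xb) ** 2) / (2.0 * sigma**2))
    return k

# toy usage: two "molecules" with 2 and 3 atoms, 5 features per atom
A, B = np.random.rand(2, 5), np.random.rand(3, 5)
print(local_gaussian_kernel(A, [6, 1], B, [6, 1, 1], sigma=2.0))
```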
Throughout this study, we evaluate performance of ML methods through learning curves for the task of predicting physical properties of molecular systems. Learning curves quantify the model prediction error \(\varepsilon\) (often measured as mean absolute error (MAE)) against the number of training samples \(N\) and are key to understand the efficiency of ML models. It is generally known [59; 60; 18] that they are linear on a log-log scale,
\[\log\left(\varepsilon\right)\approx I+S\log\left(N\right) \tag{31}\]
where \(I\) is the initial error and \(S\) is the slope indicating model improvement given more training data. We also note that according to the central limit theorem the distribution of the errors \(\epsilon\) approaches the normal distribution with standard deviation \(\frac{\sigma}{\sqrt{N}}\), and mean 0 as
Figure 2: QML learning curves for MBDF with local kernel, DF (global), and flattened MBDF (global and sorted by row norm). Mean absolute error (MAE) of predicting atomization energies of the QM9 [48] data-set as a function of training set size N.
\(N\rightarrow\infty\). Hence eq. (31) becomes :
\[\log\left(\varepsilon\right)\approx\log(\sigma)-\frac{1}{2}\log\left(N\right) \tag{32}\]
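In practice, the offset \(I\) and slope \(S\) of eq. (31) can be extracted from computed learning curves by a least-squares fit on the log-log scale, e.g.:

```python
import numpy as np

def fit_learning_curve(train_sizes, maes):
    """Least-squares fit of log(MAE) = I + S log(N), returning (I, S) of eq. (31)."""
    S, I = np.polyfit(np.log(train_sizes), np.log(maes), 1)
    return I, S

# toy usage: errors halving for every fourfold increase in data (slope ~ -1/2)
print(fit_learning_curve([1000, 4000, 16000], [4.0, 2.0, 1.0]))
```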
### Hyperparameter Optimization
The current form of the representations has been optimized for Kernel based learning models. It depends on the weighting functions used and a number of hyperparameters which include variances \((\sigma_{r},\sigma_{\theta})\) of the Gaussian PDFs, weighting function hyperparameters mentioned in eq. (12)-(16), and bandwidth \(\sigma_{b}\) for the DF representation. The hyperparameter optimization was done on a random subset of two thousand molecules from the QM7 dataset [49], and then kept fixed for all other data-sets. We note that further improvements might be possible if they had been optimized simultaneously on all datasets. The weighting functions \(g_{n}(\tau)\) in eq. (8) were chosen by straightforward screening of the functions mentioned earlier. The optimization minimized the atomization energy prediction errors on the QM7 subset using Gaussian Process (GP) based Bayesian Optimization (BOpt) [39]. Starting with a Gaussian prior, the method fits a posterior distribution over the objective function using successive function evaluations. The posterior distribution is then used to construct an efficient acquisition function which can be optimized using, for instance, a quasi-Newton method to determine the next query point. Table 1 shows the optimized hyperparameter values used throughout this work.
We used the scikit-optimize [61] implementation of GP based BOpt using the default Matern kernel with unit variance and the limited memory BFGS optimizer [62] for the acquisition function. In order to enable comparison to other representations (such as FCHL or SOAP) that rely on distance cutoffs, we have chosen to set the interatomic distance cutoff \(r_{c}\) for MBDF to 6 Å throughout this work. We note that larger cutoffs for MBDF would not change its size.
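A hedged sketch of such an optimization loop with scikit-optimize is shown below; the search ranges and the dummy objective are placeholders standing in for the cross-validated MAE evaluation actually used.

```python
from skopt import gp_minimize
from skopt.space import Real

def objective(params):
    # placeholder for the cross-validated atomization-energy MAE on the QM7 subset;
    # a smooth dummy function is used here so that the snippet runs stand-alone
    sigma_r, sigma_theta, eta, alpha = params
    return (sigma_r - 1.0) ** 2 + (sigma_theta - 2.0) ** 2 \
        + 0.01 * (eta - 10.8) ** 2 + (alpha - 1.5) ** 2

space = [Real(0.1, 5.0, name="sigma_r"),
         Real(0.1, 5.0, name="sigma_theta"),
         Real(0.1, 20.0, name="eta"),
         Real(0.1, 5.0, name="alpha")]

result = gp_minimize(objective, space, n_calls=30, acq_optimizer="lbfgs", random_state=0)
print(result.x, result.fun)
```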
All MBDF functionals were evaluated using the trapezoidal numerical integration method. The grid spacing for discretizing DF densities has been set to 0.2 throughout our work as noted earlier. The bandwidth \(\sigma_{b}=0.07\) was found to work well on the QM7 subset; however, it is recommended to screen it once (in the range \(\sim\)[0.01, 1]) along with the grid spacing when working with other datasets. The Numpy [53] and Numba [64] libraries are used in the representation generation code.
### Data and Code
The QM9 dataset [48] consists of \(\sim\)134k small organic molecules with up to 9 heavy atoms (C, N, O, F). The calculations were performed at the B3LYP/6-31G(2df,p) [65; 66; 67] level of theory.
QMugs is a dataset containing \(\sim\)665k biologically and pharmacologically relevant drug-like molecules. It consists of larger molecules than QM9 with up to 100 heavy atoms (C, N, O, F, P, S, Cl, Br, or I) per molecule. The training and predictions were performed on the DFT (\(\omega\)B97X-D/def2-SVP) [68; 69] values reported in the dataset. The QMugs subsets we used for Figure 5 were drawn at random and consist of 20k molecules. Throughout, we used zero-padding for all representations studied in order to accommodate training and test molecules smaller than the maximum present in the data.
In order to keep the FCHL19 and SOAP kernel evaluations computationally tractable, we have (a) restricted ourselves to a maximum 100 atoms per QMugs-molecule, and (b) reduced the default hyperparameters of the
| Parameter | Value |
| --- | --- |
| \(\sigma_{r}\) | 1 |
| \(\sigma_{\theta}\) | 2 |
| \(\eta\) | 10.8 |
| \(\alpha\) | 1.5 |
| \(a_{0},a_{1},a_{2},a_{3}\) | 3, 100, -200, -164 |
| \(\sigma_{b}\) | 0.07 |
| \(r_{c}\) (Å) | 6 |
Table 1: MBDF and DF hyperparameters after optimization on atomization energies of QM7 [49] subset.
Figure 3: Two-body (left) and three-body (right) DF versions (Eqs. 12, 15) representations for Glucose, Uric Acid, and Testosterone.
FCHL19 and SOAP representations to nRs2 = 12, nRs3 = 10, \(r_{cut}\) = 6 Å and \(n_{max}\) = \(l_{max}\) = 3, \(\sigma\) = 0.1, \(r_{cut}\) = 6 Å, respectively. For consistency, we used the same parameters for all other results reported in this article. These two versions of the representations with the reduced basis sets are denoted as FCHL19* and SOAP* in all reported figures. Note that the latter choice of hyperparameters negligibly deteriorates the predictive accuracy for QM9 (as assessed below and when comparing to the published prediction errors on QM9 for FCHL19 [34] and SOAP [36]). For the FCHL19 and SOAP based prediction errors reported herein for QMugs, the accuracy might still improve further if these parameters were optimized.
Figure 4 also includes results for QML models based on the 2- (k = 2) and 3-body (k = 3) Many-Body Tensor Representations (MBTR) [22] and a variant [70] of the Atom Centered Symmetry Functions [20] (ACSF) as implemented in the QMLcode library [71]. The MBTR representations were generated with the same hyperparameters as those used in Ref. [33] for the 10k training point on the QM9 dataset.
Throughout our work, the FCHL19, SLATM and ACSF representations were generated using the QMLcode library [71], SOAP was generated using the Dscribe library [72] with the default gaussian type radial basis functions, MBTR was generated using the qmmlpack library [73] and SchNet [74], PaiNN [75] were built and trained using the SchNetPack [76] library. Allegro [77] was built and trained using the nequip [78] code [79] based on the E(3)-NN framework [80; 81].
For FCHL19, SOAP and ACSF we employed the local Gaussian (eq. (28)), for SLATM, MBTR and DF we use the global Gaussian (eq. (25)) and for CM, BOB we use the global Laplacian (eq. (26)) kernels respectively. For MBDF we used the local Gaussian kernel on the MD17 dataset and the local exponential (eq. (30)) kernel for all other models. These choices were made based on the best performances for each representation. All kernel evaluations were performed using the QMLcode library.
## IV Results
### Atomization Energies
#### QM9
Figure 4 a) shows learning curves for QM9 and the size (dimension of feature vector) of the representation arrays in the legend. For the task of predicting atomization energies, local representations have previously been shown to be more efficient [34; 35]. Results for the local representations FCHL19 and SOAP are closely reproduced, and reach chemical accuracy after training on less than 10k molecules [34]. Among the global representations, SLATM has previously also been shown [34; 37] to perform quite well reaching chemical accuracy after training on \(\sim\)9k molecules although it shows a smaller slope at larger training sizes. This is closely followed by MBDF which reaches chemical accuracy after training on \(\sim\)10k molecules (less than 10% of the dataset). The global DF representation also performs decently well reaching chemical accuracy at \(\sim\)32k training data. The local ACSF representation shows a larger offset but a better slope and it reaches chemical accuracy at \(\sim\)50k training set size. We note here that for consistency with the other representations used in this work, we did not optimize the hyperparameters of the MBTR representation on every training point, but rather kept them fixed throughout. Only the KRR hyperparameters were optimized at each training set size as with all other representations used here. The 3-body MBTR reaches chemical accuracy at \(\sim\)60k training set size while the 2-body MBTR performs better than the other 2-body representations, BOB and CM. We have also included the recently introduced constant size Persistence Images [82](PI) representation for comparison.
Note that MBDF has the smallest size, requiring only 5 numbers per atom (145 dimensions for QM9 molecules). By contrast, other local representations such as FCHL19, SOAP require \(\sim\)400 numbers per atom, while ACSF uses a 150 dimensional feature vector per atom. Encouragingly, and despite its compact size, MBDF outperforms most of the other larger representations with the exception of SOAP and FCHL. We note here that while the size of the global DF representation is larger than MBDF, it utilises a global kernel implying training and prediction cost invariance with respect to system size.
This compactness of the representation translates into faster ML model evaluation timings. This is shown in Figure 4 b) which plots the trade-off between training and prediction timings vs. training data needs for reaching mean absolute prediction errors of atomization energies of 1 kcal/mol. We note that there are only two representations located on the Pareto-front, FCHL18 [35] and MBDF [this work]. We also point out that currently the best performing model on the QM9 dataset is the recently proposed Wigner Kernels method [83] which is not included in this study.
As noted earlier, local kernels based on representations such as FCHL18/19, SOAP, or ACSF exhibit very good training data efficiency, but this comes at the expense of a larger computational overhead. The exception is the local MBDF based kernel which achieves the fastest training timing of \(\sim\)0.07 compute node minutes (14k training molecules) due to its compact size. Predictions on 100k QM9 molecules using the local MBDF kernel are made in \(\sim\)1.46 compute node minutes which translates to an unprecedented chemically accurate navigation rate of \(\sim\)1140 molecules/second. SLATM, which lies close to the Pareto front, and the DF representation both represent fast global kernel based KRR models. While requiring more training data than SLATM in order to reach chemical accuracy, DF has the advantage that it is largely invariant with respect to system size (see below). For the sake of completeness, Fig. 4 b) also includes results for the deep learning based models SchNet [74], PaiNN [75]
and the current state-of-the-art, Allegro[77] which were all trained on a GPU. The reported timing for SchNet refers to 3000 epochs of training on 20k molecules and predictions of 100k molecules taking about 3 h: 27 min and 7 sec respectively. For PaiNN the reported timing corresponds to 1000 epochs of training on 7k molecules which took 0 h: 56 mins and a prediction time of 9 sec. Allegro reached chemical accuracy after training on 4500 molecules for 728 epochs, with early stopping, which took 17 h: 8 min while the prediction on 100k molecules took \(\sim\)7 mins (using a maximum possible batch size of 5 on the used GPU).
MBDF achieves the fastest training time out of all models and the fastest prediction rate among the kernel based models. Numerical results for porting FCHL19 based KRR to GPUs [84] suggest that the prediction rate is likely to increase significantly once MBDF is reimplemented in CUDA.
#### QMugs
We have tested the generalizability of our method to larger systems and more diverse chemistries using the
Figure 4: MBDF/DF performance and comparison on atomization energies from QM9 data set (drawn at random from \(\sim\)134k organic molecules)[48]. a) (Left) Mean absolute error (MAE) of prediction as a function of training set size for representations CM[1], BOB[38], MBTR[22], SLATM[37], PI[82], ACSF[20, 70], FCHL19[34], SOAP[36]. Numbers in legend denote representation size (feature vector dimensions), G and L denote Global and Local kernels, respectively. b) (Right) Timing for training and testing as a function of training set size required for making chemically accurate (MAE = 1 kcal/mol) predictions on 100k molecules. Blue, red, and green points indicate local kernels, global kernels, and neural network, respectively. Dashed gray line corresponds to the optimal Pareto front. For SchNet[74], PaiNN[75], Allegro[77] an Nvidia RTX 3070 GPU has been used. All other timings were evaluated on a compute node equipped with a 24-core AMD EPYC 7402P @ 3.3 GHz CPU and 512 GB RAM. Timings for FCHL18[35], BOB, CM are estimated using smaller kernels (not taking into account kernel inversion). Asterisk denotes representations with reduced hyperparameters used in this work. \(N\) values for ACSF, MBTR, BOB, CM estimated via extrapolation. Numbers in brackets denote year of publication.
QMugs[50] data set. Figure 5 shows the atomization energy learning curves. Due to the large variety in the dataset, the predictive error is larger for all representations than their QM9 counterparts even when predicting on a much smaller test set. MBDF reaches \(\sim\)2 kcal/mol prediction error after training on 16k molecules. This is better than the QML based neural network predictions published in Ref. [85], and similar to the \(\Delta\)-QML numbers they also report. In terms of speed, generating the local MBDF kernel for training and testing on 20k molecules on this dataset takes \(\sim\)1.8 compute node mins (see below) which corresponds to a navigation rate of \(\sim\)185 molecules/second. By comparison, this is substantially faster than the GPU based prediction rates of approximately 50 and 5 molecules per second for the direct and \(\Delta\)-learning (using GFN2-xTB[86]) based ML models, respectively, using the convolutional neural network reported in Ref. [85]. Only SLATM and FCHL19 exhibit a lower offset than MBDF, while the performance for SOAP and DF is similar, albeit slightly worse than MBDF. As mentioned before, however, in order to make FCHL19 and SOAP tractable, we have dramatically reduced the hyperparameters. In particular, we believe that the learning efficiency of SOAP for QMugs is being reduced due to the use of small basis sets (\(n_{max}\) = \(l_{max}\) = 3). Note that no representation reaches chemical accuracy within 16k training molecules, indicating that QMugs possesses substantially more chemical diversity than QM9.
In terms of representation sizes, MBDF again remains the smallest representation since it still requires only 5 dimensions per atom regardless of the chemical composition. However, being a local representation, on average its size increases \(\sim\) 3.4 times compared to QM9. FCHL19 and SOAP, on the other hand, now require more than 1000 dimensions to represent each atom for this larger dataset. CM, BOB, FCHL19, and SOAP show larger than 10 fold increase in the representation size compared to QM9, followed by SLATM which shows an increase of \(\sim\) 6.6 times. This results in a considerable increase in the train/test time (_vide infra_) which precludes the straightforward application of these representations to the entire QMugs dataset. The DF representation changes the least in size since it does not formally scale with number of atoms but only with number of different chemical elements. Consequently, its size doubles compared to QM9 since a separate density function is generated for each unique chemical element in the dataset using eq. (19) as mentioned earlier.
#### QM7b and MD17
Figure 2 in the SI shows learning curves for the QM7b[87; 88] atomization energies and the size (dimension of feature vector) of the representation arrays in the legend. Similar trends in performance and representation size noted so far are observed on this dataset as well.
Figure 3 in the SI shows learning curves of energies for a few molecules from the revised[89; 90] MD17[91; 92] molecular dynamics dataset. Although the comparative performance trend on this dataset is similar to the others, we note that the chosen functionals for MBDF and the current representation hyperparameters are not optimized for potential energy surface learning. Furthermore, the implementation of gradients for MBDF to enable force based learning should significantly enhance the performance for such tasks as was shown[89] for the FCHL19 representation. However, the relatively larger difference in performance between MBDF and FCHL19 on this dataset might indicate that compact representations are
Figure 5: MBDF/DF performance and comparison to CM\({}^{1}\), BOB[38], SLATM[37], FCHL19[34], and SOAP[56] representation based atomization energy estimates using the QMugs (\(\sim\)667k organic molecules with up to one hundred heavy atoms) data set[50]. Training and testing data drawn at random. Prediction mean absolute errors (MAE) on holdout test set of 4k molecules shown as a function of training set size. Numbers in legend denote representation size (feature vector dimensions), G and L denote Global and Local kernels respectively. Shaded region denotes the standard deviation across 4 different learning curves (except for FCHL19 and SOAP for which only one learning curve was tractable). Asterisk denotes representations with reduced hyperparameters used in this work.
sufficient for regression through equilibrium structures across CCS while more verbose representations are required for potential energy surface learning.
### Timings
Figure 6 shows scaling plots of kernel evaluation timings across both QM9 and QMugs datasets for various representations and as a function of training set size. As one would expect, the offset increases systematically with increasing representation and training molecule size. More specifically, for the larger molecules of QMugs, the FCHL19 and SOAP kernel evaluations become intractable very quickly, with the 16k kernel (Fig. 5) already taking an entire compute node day. Encouragingly and in stark contrast, DF, being a size invariant representation, shows hardly any change in computational overhead when moving from the small QM9 molecules to the much larger QMugs molecules.
For context, the time required in the kernel inversion step for each training kernel is shown as well. The bottleneck crossover from kernel generation (\(\mathcal{O}(n^{2})\)) to the inversion step (\(\mathcal{O}(n^{3})\)) occurs rather late. When using Cholesky decomposition it occurs for MBDF at \(N\sim 64\)k training molecules. For less compact representations (for SLATM and larger) the same cross-over occurs at training set sizes that exceed \(\sim\)1 M. As demonstrated above, chemical accuracy is already achieved at substantially smaller training set sizes. Consequently and contrary to popular belief, for any of the more modern and accurate representations, kernel inversion does not constitute a bottleneck.
Table 2 reports the kernel evaluation and representation timings for both the QM9 dataset (130k molecules) and the QMugs subset (20k molecules) used in our work. It can be seen that MBDF reduces the local kernel evaluation timings from days to a few minutes for both small and large molecules. For the representation generation step we note that our code is currently written in Python and uses the Numba[64] library and could be further optimized with a low-level implementation. However, the current timings as well do not affect the overall QML model cost much given the kernel evaluation bottleneck.
### Performance for molecular quantum properties
We assessed the generalization capacities of MBDF/DF on physical properties other than atomization energies. Figure 7 shows the learning curves for the task of predicting 8 important molecular quantum properties from the QM9 dataset. These include the highest occupied molecular orbital (HOMO), lowest unoccupied molecular orbital (LUMO) eigenvalues and the HOMO-LUMO gap, internal energy at 0 K (\(U_{0}\)), dipole moment norm (\(|\mathbf{\mu}|\)), static isotropic polarizability (\(\alpha\)), zero point vibrational energy (ZPVE) and heat capacity (\(C_{v}\)). Due to substantial computational costs, the KRR hyper-parameters were not optimized at each training size for the FCHL19 and SOAP representations, and we picked the same parameters as those used for learning atomization energies in figure 4. We reproduce earlier trends among intensive and extensive properties when using local/global kernels[93; 3]. MBDF and DF match this trend: They perform better on extensive and on intensive properties, respectively.
Again, note that the performance on these properties of MBDF/DF could be further improved by including different functionals suited to the property, or by augmenting them with response terms as was done for the FCHL19 representation[93]. It would also be interesting
Figure 6: Compute node time required for kernel inversion and evaluation as a function training set sizes \(N\) drawn from QM9 (squares) and QMugs (diamonds) datasets. QML results shown for SOAP[36], FCHL19[34], SLATM[37], and MBDF and DF. Dotted lines indicate extrapolation (using quadratic (kernel evaluation) and cubic (kernel inversion) polynomial fits). G and L denote Global and Local kernels, respectively. Compute node: 24-core AMD EPYC 7402P @ 3.3 GHz CPU with 512 GB RAM.
to see how the learning capacities across these different physical properties are affected by the inclusion of higher order functional terms.
## V Conclusion
We have introduced ultra-compact atomic local many body distribution functional (MBDF) and global density of functionals (DF) representations for use within kernel ridge regression based Quantum Machine Learning (QML) models for rapid sampling of chemical compound space. MBDF/DF can accurately describe any atom/molecule using a minimal number of discrete elements and thereby reduce QML training and prediction times by multiple orders of magnitude for small and large molecules. MBDF and DF correspond to functionals of analytical weighted many body distributions in interatomic distances and angles, as well as their derivatives. Chemical identity is encoded as a prefactor to the atomic functionals. DF is a weighted density estimation of the MBDF values, i.e. a global molecular fingerprint that is invariant with respect to the number of atoms (though not the number of chemical elements).
We have demonstrated predictive power competitive with the state-of-the-art for a variety of quantum physical properties, as encoded in the QM9 dataset [48]. On the QM9 dataset MBDF reaches a MAE of atomization energies of only 0.69 kcal/mol after training on 32k molecules while using only 5 dimensions to describe an atom. Regarding different molecular properties, it is beneficial to use DF representation along with a global kernel for in
| Representation | QM9 130k: \(t_{\text{rep}}\) [min] | QM9 130k: \(t_{\text{kernel}}\) [min] | QM9 130k: Dimension | QMugs 20k: \(t_{\text{rep}}\) [min] | QMugs 20k: \(t_{\text{kernel}}\) [min] | QMugs 20k: Dimension |
| --- | --- | --- | --- | --- | --- | --- |
| CM\(^{a}\) (G) | 0.186 | 2.862 | 435 | 0.012 | 1.146 | 5050 |
| BoB\(^{a}\) (G) | 0.216 | 7.296 | 1128 | 1.362 | 3.396 | 18528 |
| SLATM\(^{a}\) (G) | 18.60 | 86.32 | 11960 | 15.76 | 14.58 | 79045 |
| FCHL19\(^{a}\) (L) | 0.846 | 1071 | 10440 | 1.764 | 1566 | 122000 |
| SOAP\(^{b}\) (L) | 0.216 | 1873 | 13920 | 0.246 | 2925 | 186000 |
| **MBDF (L)** | 1.626 | 11.81 | 145 | 4.182 | 1.848 | 500 |
| **DF (G)** | 2.262 | 12.16 | 2500 | 2.442 | 0.996 | 5000 |
Table 2: Compute node times for generating representations (\(t_{\text{rep}}\)) and kernel matrices (\(t_{\text{kernel}}\)) for 130k molecules from the QM9 dataset and 20k molecules from the QMugs dataset. Global and Local kernels are again denoted by (G) and (L) respectively. Representations with superscript \(a\) were generated with the QMLcode [71] library and \(b\) with the Dscribe [72] library. Compute node: 24-core AMD EPYC 7402P @ 3.3 GHz CPU and 512 GB RAM.
Figure 7: Learning curves for various representations in QML models of highest occupied molecular orbital (HOMO), lowest unoccupied molecular orbital (LUMO) eigenvalues, HOMO-LUMO gap (\(\Delta\epsilon\)), internal energy at 0 K (\(U_{0}\)), dipole moment norm (\(|\mathbf{\mu}|\)), static isotropic polarizability (\(\alpha\)), zero point vibrational energy (ZPVE), and heat capacity (\(C_{v}\)) from the QM9 dataset. G and L denote Global and Local kernels, respectively. Asterisk denotes representations with reduced hyperparameters used in this work.
tensive properties, and MBDF along with a local kernel for the extensive properties. MBDF and DF also generalize well to other compound spaces, as evinced for the chemically more diverse QMugs dataset [50] consisting of substantially larger molecules: after training on 16k molecules, MBDF reaches an MAE for atomization energies of 1.97 kcal/mol while still using only 5 dimensions per atom. The corresponding training and prediction times for both data-sets are \(\sim\)2 compute node minutes for MBDF based models. Furthermore, our results indicate that using MBDF/DF brings the train/test timings of kernel based ML models to their "lower-bound" as imposed by the kernel inversion bottleneck for both small and large molecules.
We have analyzed the comparative performance for the sampling cost vs. training set size needs to reach chemical accuracy for predicting atomization energies in the QM9 data-set. MBDF has extended the corresponding optimal Pareto front towards minimal time requirements.
While the numerical evidence collected is indicative of a surprising effectiveness of MBDF, it is clear that the truncations used in this work may lead to lack of uniqueness. However, neither within QM9 nor within the QMugs subset we sampled have we encountered a case where using this small set of functionals maps two different molecular structures to the same feature vector. Furthermore, the likelihood of uniqueness can easily be increased by inclusion of higher order derivatives and many-body terms. More rigorous numerical analysis would be desirable to quantify this trend.
Thanks to its numerical efficiency, we believe that this approach holds great promise for further accelerating the virtual discovery of novel molecules and materials. Furthermore, this framework provides a possible solution to the general problem of unfavourable scaling due to i) inclusion of higher order many body and long range physics and ii) applying these ML models to larger molecules with greater chemical diversity. Future work will deal with extension of the representations as described to deal with various types of conformers, and an assessment of its sensitivity to changes in molecular structures.
## VI Acknowledgments
D.K. is thankful for discussions with B. Huang, S. Krug, D. Lemm, M. Sahre, and J. Weinreich. O.A.v.L. has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No. 772834). O.A.v.L. has received support as the Ed Clark Chair of Advanced Materials and as a Canada CIFAR AI Chair.
## VII Supplementary Material
See supplementary material for QM7b [87], MD17 [89; 90; 91; 92] learning curves and a figure showing the transition of cyclohexane from chair to boat conformation alongside the response by its DF fingerprint. It also contains heat maps showing ideal KRR hyperparameters to be used with the MBDF representation and kernel PCAs of some molecules from the QM7b dataset.
## VIII Code and Data Availability
Python script for generating the MBDF and DF representations is available at [https://github.com/dkhan42/MBDF](https://github.com/dkhan42/MBDF). Train/test split of the QM7 dataset used for optimizing representation hyperparameters, other data and code for reproducing the results in this study and some tools for performing KRR are openly available at [https://github.com/dkhan42/QML](https://github.com/dkhan42/QML).
|
2309.03022 | A Rigorous Study of Hawking Radiation on Collapsing Charged Spherically
Symmetric Spacetimes | In this paper, we give a rigorous mathematical treatment of the late time
Hawking radiation of massless bosons emitted by a class of collapsing,
spherically symmetric, charged models of black hole formation, including both
extremal and sub-extremal black holes. We will also prove a bound on the rate
at which the radiation emitted approaches this late time limit. This includes
an integrable decay rate of radiation emitted by extremal black holes, for
which the late time limit vanishes. Thus, we show that the total expected
quantity of any massless boson emitted by an extremal Reissner-Nordstrom black hole is
finite. | Frederick Alford | 2023-09-06T14:05:01Z | http://arxiv.org/abs/2309.03022v1 | # A Rigorous Study of Hawking Radiation on Collapsing Charged Spherically Symmetric Spacetimes
###### Abstract
In this paper, we give a rigorous mathematical treatment of the late time Hawking radiation of massless bosons emitted by a class of collapsing, spherically symmetric, charged models of black hole formation, including both extremal and sub-extremal black holes. We will also prove a bound on the rate at which the radiation emitted approaches this late time limit. This includes an integrable decay rate of radiation emitted by extremal black holes, for which the late time limit vanishes. Thus, we show that the total expected quantity of any massless boson emitted by an extremal Reissner-Nordstrom black hole is finite.
###### Contents
* 1 Introduction
* 1.1 Overview
* 1.2 Acknowledgements
* 2 Previous Work
* 2.1 Mathematical Works
* 2.2 Selected Works Within the Physics Literature
* 2.3 Further Remarks
* 3 RNOS Models
* 3.1 Reissner-Nordstrom Spacetimes
* 3.2 RNOS Definition
* 3.3 Double Null Coordinates
* 3.4 Killing Fields
* 4 Notation
* 5 Results in Pure Reissner-Nordstrom Spacetimes
* 5.1 Scattering Results on Pure Reissner-Nordstrom
* 5.2 Further Properties of Reissner-Nordstrom
* 6 Classical Scattering in RNOS spacetimes
* 7 Convergence Rates of the \(\dot{H}^{1/2}\) Norm Controlled by Integrated Error Terms (\(I.E.\))
* 7.1 The Set-up and the Reduction to Fixed Spherical Harmonic \(l\)
* 7.2 Summary of the Proof of Convergence
* 7.3 Evolution in Pure Reissner-Nordstrom
* 7.4 Reflection off the Matter Cloud
* 7.5 High Frequency Transmission
* 7.6 Proof of Convergence of \(H^{1/2}\) Norm
* 8 Treatment of Error Terms
* 9 Proof of the Main Result
## 1 Introduction
Classically, black holes, once formed, are permanent. The discovery of Hawking Radiation was therefore a major breakthrough in understanding how, in the context of quantum theories, black holes can in fact disintegrate over time, as it describes a mechanism to decrease the mass of black holes and potentially cause them to evaporate entirely. Since Hawking proposed this phenomenon in 1974 [1, 2], there have been hundreds of papers on the topic within the physics literature. For an overview of the physical aspects of Hawking radiation, we refer the reader to [3]. Concerning a mathematically rigorous treatment of Hawking radiation, however, there are substantially fewer works (see already [4] and Section 2 for a discussion of further references), and the mathematical status of Hawking radiation still leaves much to be desired. As a result, it has not been possible yet to ask more quantitative questions about Hawking radiation, which are necessary if one wants to eventually understand this phenomenon in the non-perturbative setting. This paper contributes towards solving this problem by giving a new physical space approach to Hawking's calculation allowing one to also obtain a rigorous bound on the rate at which emission approaches black body radiation.
The mathematical problem of Hawking radiation for massless zero-spin bosons can be formulated as follows:
We first consider the exteriors of collapsing spherically symmetric spacetimes. In these exteriors, the spacetimes are a subset of Reissner-Nordstrom spacetime [5]. Therefore we have coordinates \(t^{*}\), \(r\), \(\theta\), \(\varphi\) for which the metric takes the form
\[g=-\left(1-\frac{2M}{r}+\frac{q^{2}M^{2}}{r^{2}}\right)d{t^{*}}^{2}+2\left(\frac{2M}{r}-\frac{q^{2}M^{2}}{r^{2}}\right)d{t^{*}}d{r}+\left(1+\frac{2M}{r}-\frac{q^{2}M^{2}}{r^{2}}\right)d{r}^{2}+r^{2}g_{S^{2}} \tag{1.1}\]
\[t^{*}\in\mathbb{R}\qquad r\in[\tilde{r}_{h}(t^{*}),\infty),\]
where \(g_{S^{2}}\) is the metric on the unit 2-sphere. Here \(\tilde{r}_{h}(t^{*})\) is initially the area radius of the boundary of the collapsing matter cloud as a function of \(t^{*}\), and then becomes \(r_{+}\) (defined in (3.1.3)) for sufficiently late time - see Section 3 for further details. Note that this problem requires a collapsing setting, as the problem is trivial (and physically incorrect) in the non-collapsing case. We refer to \(M\) as the mass of the underlying Reissner-Nordstrom spacetime, and \(q\in[-1,1]\) as the charge to mass ratio. In particular, we are allowing the extremal case, \(|q|=1\).
These collapsing charged model exteriors will include the exterior of the Oppenheimer-Snyder Model [6] (for which \(q=0\)), and we will refer to these more general models as Reissner-Nordstrom Oppenheimer-Snyder (RNOS) models [7].
We now consider a function \(\psi_{+}\) on future null infinity, \(\mathcal{I}^{+}\). Let \(\phi\) be the solution to the linear wave equation
\[\Box_{g}\phi:=\frac{1}{\sqrt{-g}}\partial_{a}\left(\sqrt{-g}g^{ab}\partial_{b}\phi\right)=0 \tag{1.2}\]
which vanishes on \(\mathcal{H}^{+}\), and has future radiation field equal to \(\psi_{+}(u-u_{0},\theta,\varphi)\). We will be imposing Dirichlet conditions on the boundary of the matter cloud, i.e. \(\phi=0\) on \(\{r=r_{h}(t^{*})\}\). Note that we have existence of such a solution thanks to our companion paper [7]. Let us denote the past radiation field of \(\phi\) by \(\psi_{-,u_{0}}\). This is a function on past null infinity, \(\mathcal{I}^{-}\). The Hawking radiation calculation is to determine
\[\lim_{u_{0}\rightarrow\infty}\left(\int_{\omega=-\infty}^{0}\int_{\varphi=0}^{2\pi}\int_{\theta=0}^{\pi}|\omega||\hat{\psi}_{-,u_{0}}(\omega,\theta,\varphi)|^{2}\sin\theta d\theta d\varphi d\omega\right), \tag{1.3}\]
where \(\hat{\psi}_{-,u_{0}}\) corresponds to the Fourier transform of \(\psi_{-,u_{0}}\) with respect to the advanced time \(v\) coordinate.
Note that here, \(\psi_{-,u_{0}}\) will depend on \(u_{0}\). While there is no immediately obvious classical reason that this limit should exist, in [1], Hawking made a heuristic argument for both the existence and value of this limit.
The main Theorem of this paper verifies Hawking's prediction for the value of (1.3), and is stated below.

**Theorem 1** (Late Time Emission of Hawking Radiation). Let \(\psi_{+}(u,\theta,\varphi)\) be a Schwartz function on \(\mathbb{R}\times S^{2}\), with \(\hat{\psi}_{+}\) only supported on positive frequencies \(([0,\infty)\times S^{2})\). Fix \(\mathcal{M}\) an RNOS spacetime (see Section 3) with related \(M>0\), \(q\in[-1,1]\) and \(r_{h}\). Let \(\phi\) be the solution of (1.2), as given by Theorem 7.2 of [7], such that
\[\lim_{v\rightarrow\infty}r(u,v)\phi(u,v,\theta,\varphi)=\psi_{+}(u-u_{0},\theta,\varphi) \tag{1.4}\] \[\lim_{u\rightarrow\infty}r(u,v)\phi(u,v,\theta,\varphi)=0\quad\forall v\geq v_{c}, \tag{1.5}\]
Define the function \(\psi_{-,u_{0}}\) by
\[\lim_{u\rightarrow-\infty}r(u,v)\phi(u,v,\theta,\varphi)=\psi_{-,u_{0}}(v,\theta,\varphi). \tag{1.6}\]
Then in the case \(|q|<1\), \(n\in\mathbb{N}\), there exist constants \(A_{n}(\mathcal{M},\psi_{+})\) such that
\[\left|\int_{\omega=-\infty}^{0}\int_{\varphi=0}^{2\pi}\int_{\theta=0}^{\pi}|\omega||\hat{\psi}_{-,u_{0}}(\omega,\theta,\varphi)|^{2}\sin\theta d\omega d\theta d\varphi-\int_{\omega=-\infty}^{\infty}\int_{\varphi=0}^{2\pi}\int_{\theta=0}^{\pi}\frac{|\omega||\hat{\psi}_{h^{-}}(\omega,\theta,\varphi)|^{2}}{e^{\frac{2\pi|\omega|}{\kappa}}-1}\sin\theta d\omega d\theta d\varphi\right|\leq A_{n}u_{0}^{-n}, \tag{1.7}\]
for sufficiently large \(u_{0}\).
Here, \(\hat{f}\) is the Fourier transform of \(f\) with respect to its non-angular coordinate, \(\psi_{h^{-}}\) is the transmission of \(\psi_{+}\) in pure Reissner-Nordstrom spacetime (see Theorem 5.1.1), and \(\kappa\) is the surface gravity of the Reissner-Nordstrom black hole, i.e.
\[\kappa=\frac{\sqrt{1-q^{2}}}{\left(2+2\sqrt{1-q^{2}}-q^{2}\right)M}. \tag{1.8}\]
Figure 1: Penrose Diagram of RNOS Model, with null hyper surface \(\Sigma_{v}\).
In the case \(|q|=1\), there exists a constant \(A(\mathcal{M})\) such that
\[\left|\int_{\omega=-\infty}^{0}\int_{\varphi=0}^{2\pi}\int_{\theta=0}^{\pi}| \omega||\hat{\psi}_{-,u_{0}}(\omega,\theta,\varphi)|^{2}\sin\theta d\omega d \theta d\varphi\right|\leq\frac{A}{u_{0}^{3/2}}, \tag{1.9}\]
for sufficiently large \(u_{0}\).
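For the reader's convenience, the value (1.8) can be checked directly from the Reissner-Nordstrom horizon radii \(r_{\pm}=M\left(1\pm\sqrt{1-q^{2}}\right)\) (see Section 3), together with the standard expression \(\kappa=(r_{+}-r_{-})/(2r_{+}^{2})\) for the surface gravity of the event horizon:

\[\kappa=\frac{r_{+}-r_{-}}{2r_{+}^{2}}=\frac{2M\sqrt{1-q^{2}}}{2M^{2}\left(1+\sqrt{1-q^{2}}\right)^{2}}=\frac{\sqrt{1-q^{2}}}{\left(2+2\sqrt{1-q^{2}}-q^{2}\right)M},\]

using \(\left(1+\sqrt{1-q^{2}}\right)^{2}=2+2\sqrt{1-q^{2}}-q^{2}\).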
The proof of Theorem 1 will rely on certain scattering results for solutions of (1.2), which are proven in our companion paper [7]. It will also make use of scattering results in pure Reissner-Nordstrom spacetime, some rigorous high frequency approximations, and finally will use an \(r^{p}\) weighted energy estimate, based very closely on the estimates given in [8].
The physical interpretation of this is the following: Given a single particle on future null infinity, \(\mathcal{I}^{+}\), this will have a quantum state given by applying a creation operator to the vacuum state. This creation operator will have a corresponding _positive frequency_ function on \(\mathcal{I}^{+}\), which will be the \(\psi_{+}\) in the statement of the theorem. By considering the quantum calculation done in [1, 2] (or see [9] for a more in depth explanation), we know that the expected number of these particles emitted by the formation of the black hole is given by
\[\int_{\omega=-\infty}^{0}\int_{\varphi=0}^{2\pi}\int_{\theta=0}^{\pi}|\omega ||\hat{\psi}_{-}(\omega,\theta,\varphi)|^{2}\sin\theta d\theta d\varphi d\omega, \tag{1.10}\]
where again \(\hat{\psi}_{-}\) corresponds to the Fourier transform of \(\psi_{-}\) with respect to the advanced time \(v\) coordinate.
In this paper, we are concerned with the particles emitted at late times in the formation of the black hole. Thus instead of a fixed future radiation field, we will consider the family of future radiation fields given by \(\psi_{+,u_{0}}(u,\theta,\varphi):=\psi_{+}(u-u_{0},\theta,\varphi)\), parametrised by \(u_{0}\). The integral (1.3) therefore represents the late time limit of the expected number of particles (associated with the function \(\psi_{+}\)) emitted by the collapsing black hole.
The limits (1.7) and (1.9) have the physical interpretation that a sub-extremal Reissner-Nordstrom black hole forming in the collapse of a matter cloud gives off radiation approaching that of a black body with temperature \(\frac{\kappa}{2\pi}\) (in units where \(\hbar=G=c=1\)). Thus sub-extremal Reissner-Nordstrom black holes will emit infinitely many particles in the future. In the extremal case, however, this limit suggests that the amount of radiation emitted by a forming, extremal Reissner-Nordstrom black hole tends towards \(0\). This is therefore a rigorous result confirming Hawking's original calculation in both extremal and subextremal settings.
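As a quick check of this interpretation, evaluating (1.8) in the two limiting cases gives

\[\kappa\big|_{q=0}=\frac{1}{4M},\qquad\kappa\big|_{|q|=1}=0,\]

so for \(q=0\) the limiting temperature \(\kappa/2\pi\) reduces to the familiar Schwarzschild value \(1/8\pi M\), while it vanishes in the extremal limit \(|q|=1\).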
Equations (1.7) and (1.9) also give estimates for the rate at which the limit predicted by Hawking is approached. In the case of (1.7), this rate is very fast. In the case of (1.9), though the bound decays more slowly, it manages to be integrable. This has the physical interpretation that, as the 'final' temperature is zero, the total quantity of any given particle of radiation emitted by an extremal Reissner-Nordstrom black hole that forms from collapse is finite. It remains an important open problem to find the mathematical representation of the total radiation (all such particles) emitted by an extremal Reissner-Nordstrom black hole, and determine if this is indeed finite, and thus, whether extremal black holes are indeed stable to Hawking radiation.
### Overview
Before proving Theorem 1, we will first discuss previous mathematical works on Hawking radiation in Section 2. We will then discuss the background "RNOS" spacetime models in Section 3, and the notation used in the rest of this paper in Section 4. Lastly before the proof, we will discuss previous scattering results on Reissner-Nordstrom spacetimes (Section 5) and on RNOS models (Section 6).
The proof of Theorem 1 will be split up into three main parts. Firstly, in Section 7 we calculate a very similar limit to (1.3), but integrating over all frequencies (rather than only negative frequencies). The bounds on the rate at which this limit is approached will be in terms of weighted energies of \(\psi_{+}\). Secondly, in Section 8, we bound the decay rates of these weighted energies. Finally, in Section 9, we bring the results of Sections 7 and 8 together, and combine them with a standard conserved quantity to obtain the final result.
### Acknowledgements
We would like to thank Mihalis Dafermos for many insightful comments and for proof reading the manuscript. We would like to thank Owain Salter Fitz-Gibbon for many insightful discussions. We would also like to thank Dejan Gajic for useful discussions of the extremal case. Last but by no means least, we would like to thank Claude Warnick and Bernard Kay for their comments and suggestions.
This work was part funded by EPSRC DTP, [1936235]. This work was supported by the Additional Funding Programme for Mathematical Sciences, delivered by EPSRC (EP/V521917/1) and the Heilbronn Institute for Mathematical Research.
## 2 Previous Work
Hawking radiation on collapsing spacetimes has been mathematically studied in several other settings, for example [10, 11, 12, 13]. Each of these papers works primarily in frequency space, and each works in a different context from this paper. Let us discuss some of these differences in more detail.
### Mathematical Works
The original mathematical study of Hawking radiation by Bachelot, [4], considered Hawking radiation of massive or massless non-interacting bosons for a spherically symmetric uncharged collapsing model, and performed this calculation almost entirely in frequency space. That paper obtains what can be viewed as a partial result towards Theorem 1, in which the surface of the collapsing star is assumed to remain at a fixed radius for all sufficiently early times, and no rate at which the limit is approached is calculated.
In [11], Haffner studies Hawking radiation of fermions for sub-extremal charged, rotating black holes, described by the Dirac equation rather than the wave equation. This was the first work on black holes outside spherical symmetry. The Dirac
equation itself has a \(0^{\text{th}}\) order conserved current, which avoids many of the difficulties of considering the linear wave equation, for which no such current exists.
In the paper [12], Drouot considers the full Klein-Gordon equation, but on the sub-extremal Schwarzschild de-Sitter metric. This is also the first paper in this setting to obtain a rate at which the limit is approached, independent of the angular mode. This setting is easier than the asymptotically flat case considered in Hawking's original work, since the asymptotically flat case lacks a cosmological horizon at a finite radius.
The paper [14] looks at calculating the Hawking radiation of extremal and subextremal Reissner-Nordstrom black holes in one fewer dimension, with no rate obtained. This paper also considers Hawking radiation in the context of the Unruh-type vacuum rather than Hawking radiation generated from a collapsing spacetime.
There are two ways to obtain an analogue of Hawking radiation by considering quantum states on a (non-collapsing) Reissner-Nordstrom background. (Note that an understanding of these is not required to understand the rest of this paper). Firstly, one can construct the Unruh state [15]. If one considers quantum states on the Reissner-Nordstrom black hole, one can show that there is a unique state that coincides with the vacuum state on \(\mathcal{I}^{-}\) and is well behaved at \(\mathcal{H}^{+}\) (i.e. is a Hadamard state). This state evaluated on \(\mathcal{I}^{+}\) is a thermal state of temperature \(\kappa/2\pi\) (see [16], for example).
The second analogue can be obtained by constructing the Hartle-Hawking-Israel state. If one again considers quantum states on the permanent Reissner-Nordstrom black hole, one can show that there is a unique state that is well behaved (Hadamard) at \(\mathcal{I}^{+}\), \(\mathcal{I}^{-}\), \(\mathcal{H}^{+}\), \(\mathcal{H}^{-}\)[17]. This state is that of a thermal black body, again of temperature \(\kappa/2\pi\). The interpretation of this is that the black hole is in equilibrium with this level of thermal radiation, and is therefore emitting the radiation of a black body of this temperature. This result has been considered in a mathematically rigorous manner on Schwarzschild [18, 16], and more recently in a more general setting [19, 20]. The present paper, however, will be focused on the collapsing setting, as it is believed that the collapsing spacetime method will generalise more readily, as the Hartle-Hawking-Israel state has been shown not to exist in Kerr spacetimes [17].
### Selected Works Within the Physics Literature
The physics literature on this topic is vast, so we will only mention some select results here.
Hawking radiation on a charged background has been considered in several other papers in the physics literature, the most relevant being [21, 22]. The second of these, [22], considers Hawking radiation emitted by extremal black holes in the style of Hawking's original paper. Many papers also make use of the surprising fact that the extremal Reissner-Nordstrom Hawking radiation calculation is very similar to an accelerated mirror in Minkowski space [23].
A more thorough discussion of the physical derivation of Hawking radiation in general, along with a full explanation of Hawking's original method for the calculation, can be found in chapter 14.4 of General Relativity by Wald, [24].
### Further Remarks
In contrast to many of the above works, the considerations of this paper are largely in physical space, despite the fact that the final statement involves the Fourier transform. We will be using the Friedlander radiation formalism, [25], for the radiation field, and we hope this will make the proof more transparent to the reader.
While we will make use of results on the classical scattering map on Reissner-Nordstrom spacetimes, we will not here discuss previous works on this classical scattering map. For an in depth discussion of scattering, we refer the reader to our previous works [26, 7] and references therein.
## 3 RNOS Models
In this section, we introduce our background models of spherically symmetric charged matter cloud collapse. Note these models only cover the exterior of the matter clouds, as we will be imposing reflective boundary conditions on the surface of these clouds. (We are thus not modelling the cloud per se, but only its boundary.) For a more in depth discussion of these models, we refer the reader to our companion paper [7]. Here, we will just state the properties of these Reissner-Nordstrom Oppenheimer-Snyder type models (RNOS models) that we will be using.
The RNOS models are a class of collapsing spacetime exteriors, which can individually be viewed as a submanifold of a member of the 2 parameter family of exterior Reissner-Nordstrom spacetimes, or "pure Reissner-Nordstrom spacetimes".
### Reissner-Nordstrom Spacetimes
For mass parameter \(M>0\) and charge to mass ratio parameter \(q\in[-1,1]\), the pure Reissner-Nordstrom spacetime takes the form:
\[\mathcal{M}_{RN}=\mathbb{R}\times[r_{+},\infty)\times S^{2} \tag{3.1.1}\]
\[g=-\left(1-\frac{2M}{r}+\frac{q^{2}M^{2}}{r^{2}}\right)dt^{*2}+2\left(\frac{2M}{r}-\frac{q^{2}M^{2}}{r^{2}}\right)dt^{*}dr+\left(1+\frac{2M}{r}-\frac{q^{2}M^{2}}{r^{2}}\right)dr^{2}+r^{2}g_{S^{2}} \tag{3.1.2}\]
where \(g_{S^{2}}\) is the metric on the unit 2-sphere, and
\[r_{+}:=M(1+\sqrt{1-q^{2}}). \tag{3.1.3}\]
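For later reference (a standard identity, used implicitly whenever the horizon is discussed), the metric coefficient factorises as

\[1-\frac{2M}{r}+\frac{q^{2}M^{2}}{r^{2}}=\frac{(r-r_{+})(r-r_{-})}{r^{2}},\qquad r_{\pm}=M\left(1\pm\sqrt{1-q^{2}}\right),\]

so it has a simple zero at \(r=r_{+}\) when \(|q|<1\), and a double zero at \(r_{+}=r_{-}=M\) in the extremal case \(|q|=1\). This is the source of the different behaviours, exponential versus polynomial, appearing throughout the paper.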
The \(t^{*}\) coordinate can be written in terms of the "usual" \(t\) coordinate by

\[t^{*}=t+\int\left(\left(1-\frac{2M}{r}+\frac{q^{2}M^{2}}{r^{2}}\right)^{-1}-1\right)dr. \tag{3.1.4}\]
We use \(t^{*}\) in this paper as it extends to the future event horizon \(\mathcal{H}^{+}:=\{(t^{*},r_{+},\theta,\varphi)\in\mathcal{M}_{RN}\}\), where the usual \((t,r)\) coordinate system degenerates.
Figure 2: Penrose diagram of pure Reissner–Nordström spacetimes
**Remark 3.1.1** (Inclusion of \(\mathcal{H}^{-}\)): _As written, our manifold \(\mathcal{M}_{RN}\) does not include the past event horizon, \(\mathcal{H}^{-}\). If we wished to, we could attach \(\mathcal{H}^{-}\) as a limit as \(t^{*}+r\to-\infty\). In the sub-extremal case, we would then also want to attach the bifurcation sphere, \(\mathcal{B}\), as the "corner" between the future directed limit of \(\mathcal{H}^{-}\) and the past directed limit of \(\mathcal{H}^{+}\). However, in practice, we will only be considering \(\mathcal{H}^{-}\) as a limit, so it is not strictly necessary to include this in our manifold._
### RNOS Definition
As mentioned previously, the RNOS models will be a submanifold of \(\mathcal{M}_{RN}\), given by restricting \(\mathcal{M}_{RN}\) to a region \(\{r\geq\tilde{r}_{b}(t^{*})\}\). Thus, the RNOS models have the same mass parameter \(M>0\) and charge to mass ratio \(q\in[-1,1]\) and also an additional dependence on the \(H^{2}_{loc}\) function \(r_{b}:(-\infty,t^{*}_{c}]\to[0,\infty)\). We impose the following conditions on the function \(r_{b}\):
\[\dot{r}_{b}(t^{*}):=\frac{dr_{b}}{dt^{*}}\in(-1,0] \tag{3.2.1}\] \[\exists t^{*}_{c}\text{ s.t. }r_{b}(t^{*}_{c})=r_{+},\quad r_{b}(t^{*})>r_{+}\quad\forall t^{*}<t^{*}_{c}\] (3.2.2) \[(1,\dot{r}_{b}(t^{*}),0,0)\in T(\mathcal{M})\text{ is timelike for all }t^{*}\in(-\infty,t^{*}_{c}], \tag{3.2.3}\]
where \(r_{+}\) is the black hole horizon for the Reissner-Nordstrom spacetime given by (3.1.3). We then define
\[\tilde{r}_{b}(t^{*}):=\begin{cases}r_{b}(t^{*})&t^{*}\leq t^{*}_{c}\\ r_{+}&t^{*}>t^{*}_{c}\end{cases}. \tag{3.2.4}\]
We also allow 2 possible past asymptotic behaviours for \(r_{b}\). Firstly,
\[\int_{-\infty}^{t^{*}_{c}}|\dot{r}_{b}(t^{*})|+|\ddot{r}_{b}(t^{*})|dt^{*}<\infty. \tag{3.2.5}\]
This is known as the 'fixed boundary' case, as it requires \(r_{b}(t^{*})\to r_{-}\) as \(t^{*}\to-\infty\) for some \(r_{-}\).
The second allowed past asymptotic behaviour will be referred to as the 'expanding boundary' case, and requires:
\[\int_{-\infty}^{t^{*}_{c}}\frac{1}{r_{b}(t^{*})^{2}}dt^{*}<\infty \tag{3.2.6}\] \[\dot{r}_{b}(t^{*})\in[-1+\epsilon,0]\text{ for some }\epsilon>0.\]
This model includes any past boundary condition for which \(\dot{r}_{b}\to a\in(-1,0)\), and also includes the Oppenheimer-Snyder model, as this has \(r_{b}(t^{*})\sim(-t^{*})^{2/3}\). Note this case requires \(r_{b}\to\infty\) as \(t^{*}\to-\infty\).
The RNOS manifolds are given by:
\[\mathcal{M}=\bigcup_{t^{*}\in\mathbb{R}}\{t^{*}\}\times[\tilde{r }_{b}(t^{*}),\infty)\times S^{2}\subset\mathcal{M}_{RN} \tag{3.2.7}\] \[g=-\left(1-\frac{2M}{r}+\frac{q^{2}M^{2}}{r^{2}}\right)dt^{*2}+2 \left(\frac{2M}{r}-\frac{q^{2}M^{2}}{r^{2}}\right)dt^{*}dr+\left(1+\frac{2M}{ r}-\frac{q^{2}M^{2}}{r^{2}}\right)dr^{2}+r^{2}g_{S^{2}}. \tag{3.2.8}\]
Here, the range of the second coordinate, \(r\), depends on the first coordinate, \(t^{*}\), and \(g_{S^{2}}\) is the flat metric on the unit sphere. The RNOS models have the same exterior Penrose diagram as the original Oppenheimer-Snyder model (see Figure 1, derived in [26], for example).
### Double Null Coordinates
For both RNOS and pure Reissner-Nordstrom spacetimes, we will use double null coordinates given by:
\[u =t^{*}-\int_{s=3M}^{r}\frac{1+\frac{2M}{s}-\frac{q^{2}M^{2}}{s^{2}}}{1-\frac{2M}{s}+\frac{q^{2}M^{2}}{s^{2}}}ds+C_{u} \tag{3.3.1}\] \[v =t^{*}+r+C_{v}\] (3.3.2) \[\partial_{u} =\frac{1}{2}\left(1-\frac{2M}{r}+\frac{q^{2}M^{2}}{r^{2}}\right)\left(\partial_{t^{*}}-\partial_{r}\right)\] (3.3.3) \[\partial_{v} =\frac{1}{2}\left(\left(1+\frac{2M}{r}-\frac{q^{2}M^{2}}{r^{2}}\right)\partial_{t^{*}}+\left(1-\frac{2M}{r}+\frac{q^{2}M^{2}}{r^{2}}\right)\partial_{r}\right)\] (3.3.4) \[g =-\left(1-\frac{2M}{r}+\frac{q^{2}M^{2}}{r^{2}}\right)dudv+r(u,v)^{2}g_{S^{2}}\] (3.3.5) \[r^{*} =\int_{s=3M}^{r}\left(1-\frac{2M}{s}+\frac{q^{2}M^{2}}{s^{2}}\right)^{-1}ds. \tag{3.3.6}\]
In the definition of \(u\), \(v\) and \(r^{*}\), there is an arbitrary choice of constant. Here, we have chosen \(r^{*}\) to vanish at \(r=3M\). In the extremal case, we will fix \(C_{u}\) when determining the behaviour of the boundary of the matter cloud below, but will otherwise leave these constants arbitrary.
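As a quick consistency check on these definitions (a routine computation, adding no new content), one has

\[v-u=r+\int_{s=3M}^{r}\left(\frac{2}{1-\frac{2M}{s}+\frac{q^{2}M^{2}}{s^{2}}}-1\right)ds+C_{v}-C_{u}=2r^{*}+3M+C_{v}-C_{u},\]

so, up to the additive constants, \(\frac{1}{2}(v-u)\) agrees with the tortoise coordinate \(r^{*}\). This is consistent with the later convention \(r_{0}^{*}=\frac{1}{2}(v-u)\) on \(r=r_{0}\), for an appropriate choice of the constants.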
Much of the discussion will be concerning \(u\) and \(v\) coordinates. Therefore, we will find it useful to parameterise the surface of the cloud by \(u\) and \(v\). That is, given any \(u\), define \(v_{b}(u)\) to be the unique solution to
\[r(u,v_{b}(u))=r_{b}(t^{*}(u,v_{b}(u))), \tag{3.3.7}\]
We will also define \(u_{b}\) in the domain \(v\leq v_{c}\) as the inverse of \(v_{b}\), _i.e._
\[u_{b}(v):=v_{b}^{-1}(v),\quad i.e.\quad r(u_{b}(v),v)=r_{b}(t^{*}(u_{b}(v),v)) \tag{3.3.8}\]
We will be making use of the following properties of \(v_{b}\):
\[v_{b}(u)\to v_{c}:=v(t^{*}_{c},r_{+})\qquad\text{as }u\to\infty \tag{3.3.9}\] \[v_{c}-v_{b}(u)=\begin{cases}Ae^{-\kappa u}+O(e^{-2\kappa u})&|q|<1\\ \frac{A}{u}+O(u^{-3})&|q|=1\end{cases}\] (3.3.10) \[v^{\prime}_{b}(u)=\begin{cases}Ae^{-\kappa u}+O(e^{-2\kappa u})&|q|<1\\ \frac{A}{u^{2}}+O(u^{-4})&|q|=1\end{cases}, \tag{3.3.11}\]
for constants \(A=A(\mathcal{M})>0\) depending on the choice of RNOS spacetime.
These are straightforward calculations, once we note that in the extremal case we can choose the constant \(C_{u}\) in (3.3.1) to remove the \(u^{-2}\) term in the expansion of \(v_{c}-v_{b}\). Here, \(\kappa\) is the surface gravity of the Reissner-Nordstrom black hole that our cloud is forming, as given by (1.8).
### Killing Fields
For both pure Reissner-Nordstrom and RNOS spacetimes, we will make use of the existence of Killing vector fields.
In the interior of pure Reissner-Nordstrom, there exists a timelike (becoming null on \(\mathcal{H}^{+}\)) Killing vector field, given in our coordinates by \(\partial_{t^{*}}\). This is tangent to the event horizon \(\mathcal{H}^{+}\), and \(\kappa\) defined above obeys the usual equation for the surface gravity of a black hole:
\[\partial_{t^{*}}^{\alpha}\nabla_{\alpha}\partial_{t^{*}}^{\beta}=\kappa\,\partial_{t^{*}}^{\beta}\qquad\text{on }\mathcal{H}^{+}. \tag{3.4.1}\]
While this is a Killing field in the interior of RNOS spacetimes, it is not tangent to the boundary of the matter cloud \(\{r=r_{b}\}\) and thus does not provide an energy conservation law. However, it will still be very useful when studying RNOS models.
Finally, we have three independent angular Killing vector fields in both Reissner-Nordstrom and RNOS spacetimes, labelled \(\{\Omega_{i}\}_{i=1}^{3}\), which span all angular derivatives and are tangent to \(\{r=r_{b}\}\). When given in \(\theta\), \(\varphi\) coordinates, these take the form:
\[\Omega_{1}=\partial_{\varphi}\] \[\Omega_{2}=\cos\varphi\partial_{\theta}-\sin\varphi\cot\theta \partial_{\varphi} \tag{3.4.2}\] \[\Omega_{3}=-\sin\varphi\partial_{\theta}-\cos\varphi\cot\theta \partial_{\varphi}.\]
## 4 Notation
We will be considering the following hypersurfaces in the manifolds \(\mathcal{M}_{RN}\) and \(\mathcal{M}\), equipped with normals and volume forms induced by these normals. **Note these normals will not necessarily be unit normals** (and hence the volume forms are non-standard), but have been chosen such that divergence theorem can still be applied without introducing extra factors.
\[\Sigma_{u_{0}}:=\{(t^{*},r,\theta,\varphi):u(t^{*},r)=u_{0}\}\qquad dV=\left(1-\frac{2M}{r}+\frac{q^{2}M^{2}}{r^{2}}\right)r^{2}\sin\theta d\theta d\varphi dv\qquad dn=-du \tag{4.1}\] \[\Sigma_{v_{0}}:=\{(t^{*},r,\theta,\varphi):v(t^{*},r)=v_{0}\}\qquad dV=\left(1-\frac{2M}{r}+\frac{q^{2}M^{2}}{r^{2}}\right)r^{2}\sin\theta d\theta d\varphi du\qquad dn=-dv\] (4.2) \[\Sigma_{t^{*}_{0}}:=\{(t^{*}_{0},r,\theta,\varphi)\}\qquad dV=r^{2}\sin\theta d\theta d\varphi dr\qquad dn=-dt^{*}\] (4.3) \[\bar{\Sigma}_{t^{*}_{0},R}:=\left(\Sigma_{u=t^{*}_{0}+R}\cap\{r^{*}\leq-R\}\right)\cup\left(\Sigma_{t^{*}_{0}}\cap\{r^{*}\in[-R,R]\}\right)\cup\left(\Sigma_{v=t^{*}_{0}+R}\cap\{r^{*}\geq R\}\right). \tag{4.4}\]
The volume form of \(\bar{\Sigma}_{t^{*}_{0},R}\) matches that of \(\Sigma_{u_{0}}\), \(\Sigma_{v_{0}}\) and \(\Sigma_{t^{*}_{0}}\) on each piece.
When considering these surfaces in the RNOS model, we will also impose that \(r\geq\tilde{r}_{b}(t^{*})\).
We define future/past null infinity as abstract sets by:
\[\mathcal{I}^{+}:=\mathbb{R}\times S^{2}\qquad dV=\sin\theta d\theta d\varphi du \qquad\mathcal{I}^{-}:=\mathbb{R}\times S^{2}\qquad dV=\sin\theta d\theta d \varphi dv. \tag{4.5}\]
Past null infinity can be viewed as the limit of \(\Sigma_{u_{0}}\) as \(u_{0}\to-\infty\). For an appropriate function \(f(u,v,\theta,\varphi)\), we will define the function "evaluated on \(\mathcal{I}^{-}\)" to be
\[f(v,\theta,\varphi)|_{\mathcal{I}^{-}}:=\lim_{u\to-\infty}f(u,v,\theta,\varphi). \tag{4.6}\]
Similarly, \(\mathcal{I}^{+}\) can be viewed as the limit of \(\Sigma_{v_{0}}\) as \(v_{0}\to\infty\). For an appropriate function \(f(u,v,\theta,\varphi)\), we will define the function "evaluated on \(\mathcal{I}^{+}\)" to be
\[f(u,\theta,\varphi)|_{\mathcal{I}^{+}}:=\lim_{v\to\infty}f(u,v,\theta,\varphi). \tag{4.7}\]
Given a vector field \(X\) and a scalar \(w\), we will be considering their associated energy currents, given by:
\[T_{\mu\nu}(\phi) =\nabla_{\mu}\phi\nabla_{\nu}\bar{\phi}-\frac{1}{2}g_{\mu\nu} \nabla^{\rho}\phi\nabla_{\rho}\bar{\phi} \tag{4.8}\] \[J^{x}_{\mu} =X^{\nu}T_{\mu\nu}\] (4.9) \[K^{X} =\nabla^{\mu}J^{x}_{\nu}\] (4.10) \[J^{x,w}_{\mu} =X^{\nu}T_{\mu\nu}+w\nabla_{\mu}(|\phi|^{2})-|\phi|^{2}\nabla_{ \mu}w\] (4.11) \[K^{X,w} =\nabla^{\nu}J^{x,w}_{\nu}=K^{X}+2w\nabla_{\mu}\bar{\phi}\nabla^{ \mu}\phi-|\phi|^{2}\Box_{\bar{\phi}}w\] (4.12) \[X\text{-energy}(\phi,S) =\int_{S}dn(J^{X}), \tag{4.13}\]
where \(\bar{\phi}\) is the complex conjugate of \(\phi\).
For \(\psi\) a Schwartz function on \(\mathcal{I}^{k}\), we define
\[\|\psi\|_{L^{2}(\mathcal{I}^{+})}^{2}:=\int_{\mathcal{I}^{+}}|\psi|^{2}\sin \theta d\theta d\varphi dv \tag{4.14}\]
\[\|\psi\|_{L^{2}(\mathcal{I}^{-})}^{2}:=\int_{\mathcal{I}^{-}}|\psi|^{2}\sin \theta d\theta d\varphi du \tag{4.15}\]
\[\|\psi\|_{B^{1}(\mathcal{I}^{+})}^{2}:=\int_{\mathcal{I}^{+}}|\partial_{\varphi }\psi|^{2}\sin\theta d\theta d\varphi dv \tag{4.16}\]
\[\|\psi\|_{B^{1}(\mathcal{I}^{-})}^{2}:=\int_{\mathcal{I}^{-}}|\partial_{\varphi }\psi|^{2}\sin\theta d\theta d\varphi du. \tag{4.17}\]
We will be using Fourier transforms, for which we will use the following convention: Let \(f:\mathbb{R}\times S^{2}\to\mathbb{C}\). Then
\[\hat{f}(\omega):=\int_{x=-\infty}^{\infty}e^{-i\omega x}f(x)dx. \tag{4.18}\]
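With this convention, the inversion and Plancherel identities take the form below; these are used implicitly whenever radiation fields are reassembled from their Fourier transforms, as in (5.1.6)-(5.1.7) below.

\[f(x)=\frac{1}{2\pi}\int_{\omega=-\infty}^{\infty}e^{i\omega x}\hat{f}(\omega)d\omega,\qquad\int_{-\infty}^{\infty}|f(x)|^{2}dx=\frac{1}{2\pi}\int_{-\infty}^{\infty}|\hat{f}(\omega)|^{2}d\omega.\]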
## 5 Results in Pure Reissner-Nordstrom Spacetimes
As large portions of the Hawking radiation calculation take place in subsets of pure Reissner-Nordstrom, we will be making use of several key results already proven on this family of spacetimes.
### Scattering Results on Pure Reissner-Nordstrom
We will first consider some basic scattering results in pure Reissner-Nordstrom:
**Theorem 5.1.1** (Existence of Scattering Solutions in pure Reissner-Nordstrom): _Let \(\psi_{+}(u,\theta,\varphi)\) be a smooth function, compactly supported in \([u_{-},u_{+}]\times S^{2}\) on the 3-cylinder. Then there exists a unique finite \(\partial_{t^{*}}\)-energy smooth solution, \(\phi(u,v,\theta,\varphi)\) to (1.2) in pure Reissner-Nordstrom spacetime \(\mathcal{M}_{RN}\) such that_
\[\lim_{v\to\infty}r(u,v)\phi(u,v,\theta,\varphi)=\psi_{+}(u, \theta,\varphi) \tag{5.1.1}\] \[\lim_{u\to\infty}r(u,v)\phi(u,v,\theta,\varphi)=0. \tag{5.1.2}\]
_There exist functions \(\psi_{RN}\) and \(\psi_{\mathcal{H}^{-}}\) such that_
\[\lim_{v\to-\infty}r(u,v)\phi(u,v,\theta,\varphi)=\psi_{\mathcal{H}^{-}}(u,\theta,\varphi) \tag{5.1.3}\] \[\lim_{u\to-\infty}r(u,v)\phi(u,v,\theta,\varphi)=\psi_{RN}(v,\theta,\varphi). \tag{5.1.4}\]
_Furthermore, let us consider separating \(\psi_{+}\) into spherical harmonics \(Y_{l,m}\):_
\[\psi_{+}(u,\theta,\varphi)=\sum_{l\geq 0}\sum_{m\in\mathbb{Z}}\psi_{+,l,m}(u)Y_ {l,m}(\theta,\varphi). \tag{5.1.5}\]
_Then \(\psi_{\mathcal{H}^{-}}\) and \(\psi_{RN}\) can be expressed in Fourier space as_
\[\psi_{\mathcal{H}^{-}}(u) =\frac{1}{2\pi}\sum_{l,m}\int_{\omega=-\infty}^{\infty}\hat{\psi}_{+,l,m}(\omega)\tilde{T}_{\omega,l,m}e^{i\omega u}d\omega \tag{5.1.6}\] \[\psi_{RN}(v) =\frac{1}{2\pi}\sum_{l,m}\int_{\omega=-\infty}^{\infty}\hat{\psi}_{+,l,m}(\omega)\tilde{R}_{\omega,l,m}e^{i\omega v}d\omega, \tag{5.1.7}\]
_where \(\tilde{T}_{\omega,l,m}\) and \(\tilde{R}_{\omega,l,m}\) are transmission and reflection coefficients defined by (5.1.12), and \(\hat{\psi}_{+,l,m}\) are the Fourier transforms of \(\psi_{+,l,m}\)._
_Finally, \(\phi(u,v,\theta,\varphi)=0\) for all \(u\geq u_{+}\)._
_Sketch of Proof._ For the existence of the radiation fields, the sub-extremal case can be deduced from the harder sub-extremal Kerr case [27] and the extremal case can be deduced from the scattering theory in [8].
We will now outline the proof of (5.1.6),(5.1.7), the existence of reflection and transmission coefficients. For a more in depth discussion of these, we refer the reader to [27, 28].
We will define the transmission and reflection coefficients in the same way as [27]. We first change coordinates to the tortoise radial function, \(r^{*}\), and then consider fixed frequency solutions of the wave equation, \(\psi=e^{i\omega t}u_{\omega,m,l}(r^{*})Y_{l,m}(\theta,\varphi)\). The equation obeyed by this \(u_{\omega,l,m}(r^{*})\) is
\[u^{\prime\prime}+(\omega^{2}-V_{l})u=0, \tag{5.1.8}\]
where
\[V_{l}(r)=\frac{1}{r^{2}}\left(l(l+1)+\frac{2M}{r}\left(1-\frac{q^{2}M}{r}\right)\right)\left(1-\frac{2M}{r}+\frac{q^{2}M^{2}}{r^{2}}\right). \tag{5.1.9}\]
Considering asymptotic behaviour of possible solutions, there exist unique solutions \(U_{hor}\) and \(U_{inf}\)[28], characterised by
\[U_{hor}\sim e^{-i\omega r^{*}}\text{ as }r^{*}\to-\infty \tag{5.1.10}\] \[U_{inf}\sim e^{i\omega r^{*}}\text{ as }r^{*}\to\infty. \tag{5.1.11}\]
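These prescribed asymptotics are consistent with the decay of the potential at both ends; a rough check (standard, and implicit above):

\[V_{l}(r)=\frac{l(l+1)}{r^{2}}+O(r^{-3})\to 0\quad\text{as }r^{*}\to\infty,\qquad V_{l}(r)\to 0\quad\text{as }r^{*}\to-\infty,\]

since the factor \(1-\frac{2M}{r}+\frac{q^{2}M^{2}}{r^{2}}\) vanishes at \(r=r_{+}\). Thus (5.1.8) is, at both ends, a perturbation of \(u^{\prime\prime}+\omega^{2}u=0\), whose solutions oscillate as \(e^{\pm i\omega r^{*}}\).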
We can also see that \(\tilde{U}_{hor}\) and \(\tilde{U}_{inf}\) are solutions to (5.1.8). Moreover \(U_{inf}\) and \(\tilde{U}_{inf}\) are linearly independent, so we can write \(U_{hor}\) in terms of \(U_{inf}\) and \(\tilde{U}_{inf}\):
\[\tilde{T}_{\omega,l,m}U_{hor}=\tilde{R}_{\omega,l,m}U_{inf}+\tilde{U}_{inf}. \tag{5.1.12}\]
Here \(\tilde{R}\) and \(\tilde{T}\) are what we refer to as the reflection and transmission coefficients, respectively.
Now we return to physical space. For a Schwartz function \(\psi_{+}(u)\), we can impose the future radiation field \(\psi_{+}(u)Y_{l,m}(\theta,\varphi)\) on \(\mathcal{I}^{+}\), and \(0\) on \(\mathcal{H}^{+}\). Therefore the solution of the wave equation is of the form \(\phi=\frac{Y_{l,m}(\theta,\varphi)}{r}\psi\). We then rewrite the wave equation in terms of \(\psi\).
Using that \(\psi_{\mathcal{H}^{+}}=0\) and \(\psi_{\mathcal{I}^{+}}=\psi_{+}(u)\), we can formally write
\[\hat{\psi}(r^{*},\omega)=\hat{\psi}_{+}(\omega)\tilde{T}_{\omega,l,m}U_{hor}(r ^{*})=\hat{\psi}_{+}(\omega)\left(\tilde{R}_{\omega,l,m}U_{inf}(r^{*})+\tilde{ U}_{inf}(r^{*})\right). \tag{5.1.13}\]
Therefore, given appropriate convergence of \(\psi_{+}\), we can obtain an expression of \(\psi\) on \(\mathcal{H}^{-}\) and \(\mathcal{I}^{-}\):
\[\psi_{\mathcal{H}^{-}}(u)=\frac{1}{2\pi}\int_{\omega=-\infty}^{\infty}\hat{\psi}_{+}(\omega)\tilde{T}_{\omega,l,m}e^{i\omega u}d\omega \tag{5.1.14}\] \[\psi_{RN}(v)=\frac{1}{2\pi}\int_{\omega=-\infty}^{\infty}\hat{\psi}_{+}(\omega)\tilde{R}_{\omega,l,m}e^{i\omega v}d\omega. \tag{5.1.15}\]
The final result follows from orthogonality of \(Y_{l,m}\).
We will use two properties of \(\tilde{R}\) and \(\tilde{T}\), given by the following proposition
**Proposition 5.1.2** (Boundedness and Decay of Reflection and Transmission Coefficients): _Let \(\tilde{R}_{\omega,l,m}\) and \(\tilde{T}_{\omega,l,m}\) be defined as in Theorem 5.1.1. Then_
\[|\tilde{R}_{\omega,l,m}|^{2}+|\tilde{T}_{\omega,l,m}|^{2}=1. \tag{5.1.16}\]
\[|\tilde{R}_{\omega,l,m}|^{2}\leq\frac{C(l+1)^{2}}{1+M^{2}\omega^{2}}. \tag{5.1.17}\]
Proof.: The first result (5.1.16) can be deduced easily from \(T\)-energy conservation. The second result is proven in Appendix A in [29].
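For the reader's convenience, (5.1.16) can also be seen from a direct Wronskian computation, provided one reads \(\tilde{U}_{hor}\), \(\tilde{U}_{inf}\) as the complex conjugates \(\overline{U}_{hor}\), \(\overline{U}_{inf}\) (a reading suggested by, though not stated in, the asymptotics (5.1.10)-(5.1.11)). Since the Wronskian \(W[f,g]:=fg^{\prime}-f^{\prime}g\) of two solutions of (5.1.8) is independent of \(r^{*}\),

\[|\tilde{T}_{\omega,l,m}|^{2}\,W\big{[}U_{hor},\overline{U_{hor}}\big{]}=W\big{[}\tilde{T}_{\omega,l,m}U_{hor},\overline{\tilde{T}_{\omega,l,m}U_{hor}}\big{]}=W\big{[}\tilde{R}_{\omega,l,m}U_{inf}+\overline{U_{inf}},\,\overline{\tilde{R}_{\omega,l,m}}\,\overline{U_{inf}}+U_{inf}\big{]},\]

\[2i\omega\,|\tilde{T}_{\omega,l,m}|^{2}=2i\omega\left(1-|\tilde{R}_{\omega,l,m}|^{2}\right),\]

where the left hand side is evaluated as \(r^{*}\to-\infty\) and the right hand side as \(r^{*}\to+\infty\) using (5.1.10)-(5.1.11); cancelling \(2i\omega\) (for \(\omega\neq 0\)) gives (5.1.16).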
### Further Properties of Reissner-Nordstrom
This section introduces known results concerning solutions of the wave equation on Reissner-Nordstrom which we will make use of.
**Proposition 5.2.1** (\(T\)-energy Conservation): _Let \(T=\partial_{t^{*}}\) be the vector field in \(\mathcal{M}_{RN}\). Let \(\Omega\subset\mathcal{M}_{RN}\) be a compact region with a regular boundary \(\partial\Omega\). Let \(\phi\) be a solution to (1.2) on \(\mathcal{M}_{RN}\). Then_
\[T\text{-energy}(\phi,\partial\Omega)=0. \tag{5.2.1}\]
_where all the normals used in the definition of \(T\)-energy (4.13) are outward pointing._
Proof.: This is an immediate application of divergence theorem (or Generalised Stokes' Theorem).
This result also holds for sufficiently well behaved non-compact regions, if one then includes \(\mathcal{I}^{\pm}\) in the boundary.
**Proposition 5.2.2** (Scattering \(T\)-energy Conservation): _Let \(T=\partial_{t^{*}}\) be the vector field in \(\mathcal{M}_{RN}\). Let \(\Omega\subset\mathcal{M}_{RN}\) be a non-compact region with a regular boundary \(\partial\Omega\), equal to a finite union of regions of \(\mathcal{I}^{\pm}\), \(\mathcal{H}^{\pm}\), \(\Sigma_{t^{*}}\), \(\Sigma_{u}\) and \(\Sigma_{v}\). Let \(\phi\) be a solution to (1.2) on \(\mathcal{M}_{RN}\). Then_
\[T\text{-energy}(\phi,\partial\Omega)=0. \tag{5.2.2}\]
_where all the normals used in the definition of \(T\)-energy (4.13) are outward pointing._
Proof.: This is a consequence of the scattering map [27]. \(T\)-energy conservation allows us to obtain this result, provided that there is some decay on the energy of our solutions towards \(i^{\pm}\).
**Theorem 5.2.3** (Domain of Dependence of the wave equation): _Let \(\phi(t_{0},r,\theta,\varphi)\) be a smooth solution of (1.2) on pure Reissner-Nordstrom spacetime \(\mathcal{M}_{RN}\), such that on the surface \(\Sigma_{t_{0}}\), \(\phi(t_{0},r,\theta,\varphi)\) and \(\nabla\phi(t_{0},r,\theta,\varphi)\) are supported on \(r\in[r_{0},r_{1}]\)._
_Then \(\phi\) vanishes in the \(4\) regions \(\{t>t_{0}\}\cap\{v\leq v(t_{0},r_{0})\}\), \(\{t>t_{0}\}\cap\{u\leq u(t_{0},r_{1})\}\), \(\{t<t_{0}\}\cap\{v\geq v(t_{0},r_{1})\}\) and \(\{t<t_{0}\}\cap\{u\geq u(t_{0},r_{0})\}\)._
In this case, this result is a trivial consequence of \(T\)-energy conservation. However, it can also be derived from more general domain of dependence results concerning hyperbolic PDEs.
The final result we will be using is (part of) Proposition 7.4 in [8], which we will restate here:
**Proposition 5.2.4**: _Let \(\phi\) be a sufficiently well behaved solution to (1.2) in an extremal Reissner-Nordstrom spacetime \(\mathcal{M}_{RN}\), with \(\psi=r\phi\). Let \(M<r_{0}<2M\). Then there exists a constant, \(C=C(M,r_{0})>0\) such that_
\[\int_{\Sigma_{u_{1}}\cap\{r\leq r_{0}\}}\left(1-\frac{M}{r}\right)^{-2}|\partial_{v}\psi|^{2}\sin\theta d\theta d\varphi dv+\int_{\Sigma_{v=u_{1}+2r_{0}^{*}}\cap\{r\geq r_{0}\}}|\partial_{u}\psi|^{2}\sin\theta d\theta d\varphi du\] \[\leq C\int_{\mathcal{H}^{-}\cap\{u\leq u_{1}\}}\left(M^{2}+(u-u_{1})^{2}\right)\left(|\partial_{u}\psi|^{2}+\left|\hat{\not{\nabla}}\psi\right|^{2}\right)\sin\theta d\theta d\varphi du \tag{5.2.3}\] \[\qquad\qquad+C\int_{\mathcal{I}^{-}\cap\{v\leq u_{1}+2r_{0}^{*}\}}\left(M^{2}+(v-u_{1}-2r_{0}^{*})^{2}\right)\left(|\partial_{v}\psi|^{2}+\left|\hat{\not{\nabla}}\psi\right|^{2}\right)\sin\theta d\theta d\varphi dv\]
_for all \(u_{1}\in\mathbb{R}\)._
Proof.: To prove this, we have taken the \(r_{\mathcal{I}^{+}}\) in the original statement of the proposition in [8] to be where \(r^{*}=0\).
Section 8 will be concerned with a sub-extremal equivalent of this result, proven in a very similar manner to [8].
Figure 3: The Domain of Dependence
## 6 Classical Scattering in RNOS Spacetimes
We now return to our collapsing exterior spacetime, \(\mathcal{M}\). In order to discuss properties of solutions to the linear wave equation (1.2), we first need a result on the existence of such solutions, for which we will use a result from [7]:
**Theorem 6.1** (Existence of Smooth Solutions to the Linear Wave Equation on RNOS): _Let \(\psi_{+}(u,\theta,\varphi)\) be a smooth, compactly supported function on \(\mathcal{I}^{+}\). Fix an RNOS spacetime, \((\mathcal{M},g)\). Then there exists a unique finite \(\partial_{t^{*}}\)-energy smooth solution \(\phi\) to (1.2) on \(\mathcal{M}\) which vanishes on the boundary \(\{r=r_{b}(t^{*})\}\), has radiation field \(\psi_{+}\) on \(\mathcal{I}^{+}\), and has vanishing radiation field on \(\mathcal{H}^{+}\)._
_where the error and bulk terms \(I.E.\) and \(I.T.\) (weighted integrals of the tails of \(\psi_{+}\), \(\psi_{\mathcal{H}^{-}}\) and \(\psi_{RN}\)) are given by_
\[I.E.[\psi_{+},v_{c},u_{1},u_{0}]= \int_{-\infty}^{u_{1}}\int_{S^{2}}\left[(M+u_{0}-v_{c})(M+u_{0}-u_{1})|\partial_{u}\psi_{\mathcal{H}^{-}}|^{2}+|\psi_{\mathcal{H}^{-}}|^{2}\right]\sin\theta d\theta d\varphi du\] \[+(M+u_{0}-u_{1})\int_{-\infty}^{v_{c}}\int_{S^{2}}\left[(M^{2}+(v_{c}-u)^{2})|\partial_{u}\psi_{\mathcal{H}^{-}}|^{2}\right]\sin\theta d\theta d\varphi du \tag{7.6}\] \[+(M+u_{0}-u_{1})\int_{-\infty}^{v_{c}}\int_{S^{2}}\left[(M^{2}+M(u_{0}-v_{c})+(v_{c}-v)^{2})|\partial_{v}\psi_{RN}|^{2}+|\hat{\not{\nabla}}\psi_{RN}|^{2}\right]\sin\theta d\theta d\varphi dv\] \[+\left[\int_{u=u_{0}-u_{1}}^{\infty}\int_{S^{2}}(M^{2}+(u-u_{0}+u_{1})^{2})|\partial_{u}\psi_{+}|^{2}\sin\theta d\theta d\varphi du\right]^{*}\] \[I.T.[\psi_{+}]= \int_{-\infty}^{\infty}\int_{S^{2}}(M^{2}+u^{2})\left(1+\left|\hat{\not{\nabla}}\right|^{4}\right)\left(|\partial_{u}\psi_{+}|^{2}+|\psi_{+}|^{2}\right)\sin\theta d\theta d\varphi du. \tag{7.7}\]
_Here, \(\hat{\not{\nabla}}\) is the derivative on the unit sphere, and we write \(\left|\hat{\not{\nabla}}\right|^{4}|f|^{2}\) to mean \(\left|\hat{\not{\nabla}}^{2}f\right|^{2}\). Note that \(I.T.[\psi_{+}]\) controls similarly weighted norms of \(\psi_{\mathcal{H}^{-}}\) and \(\psi_{RN}\), thanks to the reflection and transmission coefficients being bounded above by \(1\) (see Section 5). Finally, it should be noted that the final term in \(I.E.\) (marked by \([\,]^{*}\)) is only required in the extremal (\(|q|=1\)) case._
_In the case \(|q|=1\), there exists a constant \(A(\mathcal{M})\) such that_
\[\left|\int_{\omega=-\infty}^{\infty}\int_{S^{2}}|\omega||\hat{\psi}_{-}|^{2}\sin\theta d\theta d\varphi d\omega-\int_{\omega=-\infty}^{\infty}\int_{S^{2}}|\omega||\hat{\psi}_{+}|^{2}\sin\theta d\theta d\varphi d\omega\right|\leq A\left(\frac{I.T.[\psi_{+}]}{u_{0}^{3/2}}+u_{0}^{5/2}I.E.[\psi_{+},v_{c},u_{0}-\sqrt{Mu_{0}},u_{0}]\right), \tag{7.8}\]
_for sufficiently large \(u_{0}\)._
_Furthermore, let us fix \(\delta>0\). Suppose that \(|q|<1\), and that \(\psi_{+}\) is such that all the \(I.E.[\psi_{+},v_{c},(1-\delta)u_{0},u_{0}]\) terms decay faster than \(e^{-3\kappa(1-\delta)u_{0}}\). Then there exists a constant \(B(\mathcal{M},\psi_{+},\delta)\) such that_
\[\left|\int_{\omega=-\infty}^{\infty}\int_{S^{2}}|\omega||\hat{\psi}_{-}|^{2}\sin\theta d\theta d\varphi d\omega-\int_{\omega=-\infty}^{\infty}\int_{S^{2}}\left(|\omega|\coth\left(\frac{\pi}{\kappa}|\omega|\right)|\hat{\psi}_{\mathcal{H}^{-}}|^{2}+|\omega||\hat{\psi}_{RN}|^{2}\right)\sin\theta d\theta d\varphi d\omega\right|\leq Be^{-\kappa(1-\delta)u_{0}}, \tag{7.9}\]
_for sufficiently large \(u_{0}\)._
Section 8 will then establish decay rates of the \(I.E.\) terms.
The eventual choice of \(u_{1}\) in this theorem will be made to obtain decay of both the error terms \(I.E.\) and the bulk term \(I.T.\), despite the growing exponential factor applied to \(I.E.\)
### The Set-up and the Reduction to Fixed Spherical Harmonic \(l\)
We will prove Theorem 7.1 by first restricting to the case \(\psi_{+}(u-u_{0},\theta,\varphi)=\psi_{+}(u-u_{0})Y_{l,m}(\theta,\varphi)\), where moreover \(\psi_{+}(x)\) is smooth and compactly supported. Here \(Y_{l,m}\) is a spherical harmonic, see for example [30]. The full result will then follow from orthogonality of \(Y_{l,m}\) and the fact that we will track dependence of constants on \(l,m\). Let \(\phi\) be the solution to (1.2), subject to \(\psi=0\) on \(r=r_{b}(t^{*})\), with future radiation field \(Y_{l,m}(\theta,\varphi)\psi_{+}(u-u_{0})\), and \(\psi=0\) on \(\mathcal{H}^{+}\), as given by Theorem 6.1.
We will generally be considering \(\psi(u,v)\), given by
\[\psi(u,v)Y_{l,m}(\theta,\varphi):=r(u,v)\phi(u,v,\theta,\varphi) \tag{7.1.1}\]
rather than \(\phi\) itself. Note \(\psi(u,v)\) is independent of \(\theta,\varphi\), as spherical symmetry of our system implies that if we restrict scattering data in Theorems 5.1.1 and 6.1 to one spherical harmonic, the solution will also be restricted to that harmonic.
Re-writing the wave equation for fixed \(l,m\) in terms of \(\psi\), we obtain:
\[4\partial_{u}\partial_{v}\psi=-\frac{1}{r^{2}}\left(1-\frac{2M}{r}+\frac{q^{2}M^{2}}{r^{2}}\right)\left(l(l+1)+\frac{2M}{r}\left(1-\frac{q^{2}M}{r}\right)\right)\psi=:-4V(r)\psi. \tag{7.1.2}\]
\[\psi(u,v_{b}(u))=0,\]
where \(v_{b}\) is as given in (3.3.7).
Note that \(\psi\), \(\phi\), \(\psi_{-}\) all depend on \(u_{0}\).
### Summary of the Proof of Convergence
This proof will be broken up into 4 sections.
1. Firstly (Section 7.3) we will consider the evolution of the solution determined by scattering data \((0,\psi_{+}Y_{l,m})\) on \(\mathcal{H}^{+}\cup\mathcal{I}^{+}\) through the region \(R_{1}:=\{v\in[v_{c},\infty)\}\), where \(v_{c}\) is the value of the \(v\) coordinate at \((t^{*}_{c},2M)\) given by (3.3.2). Note that evolution in \(R_{1}\) is entirely within a region of Reissner-Nordstrom spacetime, so is relatively easy to estimate.
We will obtain that
\[\hat{\psi}(\omega,v_{c})\approx\tilde{T}_{\omega,l,m}\hat{\psi}_{+}(\omega), \tag{7.2.1}\]
where \(\tilde{T}_{\omega,l,m}\) is the transmission coefficient (Section 5) from \(\mathcal{I}^{+}\) back to \(\mathcal{H}^{-}\) in Reissner-Nordstrom for the spherical harmonic \(Y_{l,m}\) (again see [30]). Here \(\hat{\ }\) denotes the Fourier transform with respect to \(u\). Here, when we say "\(\approx\)", we mean to leading order for large \(u_{0}\), and the exact nature of these error terms will be covered in more detail in the full statement of Corollary 7.3.3.
Figure 4: The set-up for the Hawking radiation calculation
Figure 5: The regions we will consider in the Hawking radiation calculation
2. Secondly (Section 7.4) we will consider the reflection of the solution off the surface of the matter cloud. This will occur in the region \(R_{2}:=\{v\leq v_{c},u\geq u_{1}\}\cap\{r\geq r_{b}\}\) for the same \(u_{0}\) in the definition of \(\psi_{+}\). We will obtain that, for \(v\leq v_{c}\), \[\psi(u_{1},v)\approx\psi(u_{b}(v),v_{c}),\] (7.2.2) where here \(u=u_{b}(v)\) is parametrising the surface \(r=r_{b}(t^{*})\) in terms of the null coordinates. See Proposition 7.4.1 for a precise statement of this.
3. Thirdly (Section 7.5) we will consider the high frequency transmission of the solution from near the surface of the matter cloud to \(\mathcal{I}^{-}\). This will occur in a region we will call \(R_{3}:=\{v\leq v_{c},u\leq u_{1}\}\). In a very similar manner to (7.2.1), we will obtain \[\hat{\psi}_{-}(\omega)\approx T_{\omega,l,m}\widehat{\chi\psi}(u_{1},\omega),\] (7.2.3) where \(T_{\omega,l,m}\) is the transmission coefficient from \(\mathcal{H}^{+}\) to \(\mathcal{I}^{-}\) (or equivalently from \(\mathcal{H}^{-}\) to \(\mathcal{I}^{+}\)), \(\chi\) is a cut off function to the future of \(v=v_{c}\), and \(\psi_{-}\) is the past radiation field of \(\phi\) on \(\mathcal{I}^{-}\) (this is given in more detail in Proposition 7.5.1). Here \(\widehat{\chi\psi}(u_{1},\omega)\) is the Fourier transform of \(\chi\psi(u_{1},v)\) with respect to the \(v\) variable, and where we have extended this function by \(0\) for \(v<v_{b}(u_{1})\). More detail will be given on this in the full proof. As \(\psi|_{\Sigma_{u_{1}}\cap\{v\leq v_{c}\}}\) is supported in a small region, \(T_{\omega,l,m}\approx 1\) on the support of \(\widehat{\chi\psi}\). Thus we will further obtain that \[\psi_{-}(v)\approx\psi(u_{1},v),\] (7.2.4) for \(v\in[v_{b}(u_{1}),v_{c}]\). See Proposition 7.5.1 for the precise statement of this.
4. Finally (Section 7.6) we will prove Theorem 7.1, completing the calculation of the \(\dot{H}^{1/2}\) norm of \(\psi\) on \(\mathcal{I}^{-}\). Using a useful Lemma by Bachelot (Lemma II.6 in [31], Lemma 7.6.1 here) and applying (7.2.1), (7.2.2), (7.2.4), we obtain for the sub-extremal case \[\int_{-\infty}^{\infty}|\omega||\hat{\psi}_{-}|^{2}d\omega \approx\int_{-\infty}^{\infty}|\omega|\left|\widehat{\psi_{+}\circ u_{b}}+\hat{\psi}_{RN}\right|^{2}d\omega\] (7.2.5) \[\approx\int_{-\infty}^{\infty}|\omega||\tilde{R}_{\omega,l,m}|^{2}|\hat{\psi}_{+}|^{2}+|\tilde{T}_{\omega,l,m}|^{2}|\omega|\coth\left(\frac{\pi}{\kappa}|\omega|\right)|\hat{\psi}_{+}|^{2}d\omega.\] We obtain the equivalent result in the extremal case, using (7.2.1), (7.2.2), (7.2.4) and Lemma 7.6.2.
Although many aspects of this proof are firmly rooted in Fourier space, where possible we will use physical space calculations. We hope this will lead to a more transparent proof.
### Evolution in Pure Reissner-Nordstrom
In this section we will be considering the following problem: In pure Reissner-Nordstrom spacetime, if we impose radiation field \(\psi_{+}(u)Y_{l,m}(\theta,\varphi)\) on \(\mathcal{I}^{+}\) and that our solution vanishes on \(\mathcal{H}^{+}\), what happens to the solution on a surface of constant \(v\) as we let \(v\to-\infty\)? The aim of this section will be to prove the following result:
**Proposition 7.3.1** (\(H^{1/2}\) Error from Reissner-Nordstrom Transmission): _Let \(f:\mathbb{R}\to\mathbb{R}\) be a smooth, compactly supported function with \(f(0)=1\). Let \(\psi_{+}:\mathbb{R}\to\mathbb{C}\) be a Schwartz function. Let \(\psi\) be the solution of (7.1.2), as given by Theorem 5.1.1, with radiation field on \(\mathcal{I}^{+}\) equal to \(Y_{l,m}\psi_{+}\), and which vanishes on \(\mathcal{H}^{+}\). Let \(u_{0}\) be fixed. Let \(v_{c},u_{1}\in\mathbb{R}\), with \(r(u_{1},v_{c})\leq 8M/3\) and \(u_{1}<u_{0}\). Define_
\[\psi_{0}(u):=\begin{cases}\psi(u,v_{c})-f(u-u_{1})\psi(u_{1},v_{c})&u\geq u_{1 }\\ 0&u<u_{1}\end{cases}. \tag{7.3.1}\]
_In the extremal case, we also restrict \(\hat{\psi}_{+}\) to be supported on positive frequencies, and \(u_{0}-u_{1}\leq u_{1}-v_{c}\). Then there exists a constant \(A(M,q,f)>0\) such that_
\[\left|\int_{\omega=-\infty}^{\infty}\left(\kappa+|\omega|\right)\left(|\hat{\psi}_{\mathcal{H}^{-}}|^{2}-|\hat{\psi}_{0}|^{2}\right)d\omega\right|\leq\begin{cases}A\left(e^{\kappa(v_{c}-u_{1})}I.T.[Y_{l,m}\psi_{+}]+I.E.[Y_{l,m}\psi_{+},v_{c},u_{1},u_{0}]^{1/2}I.T.[Y_{l,m}\psi_{+}]^{1/2}\right)&|q|<1\\ A\left(\frac{\left(1+\ln(u_{1}-v_{c})\right)I.T.[Y_{l,m}\psi_{+}]}{(u_{1}-v_{c})^{2}}+(u_{1}-v_{c})^{1/2}I.E.[Y_{l,m}\psi_{+},v_{c},u_{1},u_{0}]^{1/2}I.T.[Y_{l,m}\psi_{+}]^{1/2}\right)&|q|=1,\end{cases} \tag{7.3.2}\]

_where \(\kappa\) is given by (1.8), and \(I.E.\) and \(I.T.\) are as defined in Theorem 7.1._
**Corollary 7.3.3** (\(H^{1/2}\) Error from Reissner-Nordstrom Transmission): _Let \(f:\mathbb{R}\to\mathbb{R}\) be a smooth, compactly supported function with \(f(0)=1\). Let \(\psi_{+}:\mathbb{R}\to\mathbb{C}\) be a Schwartz function. Let \(\psi\) be the solution of (7.1.2), as given by Theorem 5.1.1, with radiation field on \(\mathcal{I}^{+}\) equal to \(Y_{l,m}(\theta,\varphi)\psi_{+}(u-u_{0})\), and which vanishes on \(\mathcal{H}^{+}\). Let \(v_{c}\) be fixed. Let \(u_{0},u_{1}\in\mathbb{R}\), with \(r(u_{1},v_{c})\leq 8M/3\) and \(u_{1}<u_{0}\). Define_
\[\psi_{0}(u):=\begin{cases}\psi(u,v_{c})-f(u-u_{1})\psi(u_{1},v_{c})&u\geq u_{1} \\ 0&u<u_{1}\end{cases}. \tag{7.3.3}\]
_In the extremal case, we also restrict \(\hat{\psi}_{+}\) to be supported on positive frequencies, and \(u_{0}-u_{1}\leq u_{1}-v_{c}\). Then there exists a constant \(A(M,q,f)>0\) such that_
\[\left|\int_{\omega=-\infty}^{\infty}(\kappa+|\omega|)\left(|\hat{\psi}_{\mathcal{H}^{-}}|^{2}-|\hat{\psi}_{0}|^{2}\right)d\omega\right|\leq\begin{cases}A\left(e^{-\kappa u_{1}}I.T.[Y_{l,m}\psi_{+}]+I.E.[Y_{l,m}\psi_{+},v_{c},u_{1},u_{0}]^{1/2}I.T.[Y_{l,m}\psi_{+}]^{1/2}\right)&|q|<1\\ A\left(\frac{I.T.[Y_{l,m}\psi_{+}]}{u_{1}^{2}}+u_{1}^{1/2}I.E.[Y_{l,m}\psi_{+},v_{c},u_{1},u_{0}]^{1/2}I.T.[Y_{l,m}\psi_{+}]^{1/2}\right)&|q|=1,\end{cases} \tag{7.3.4}\]
_where \(\kappa\) is given by (1.8). Here \(I.E.[Y_{l,m}\psi_{+},v_{c},u_{1},u_{0}]\) and \(I.T.[Y_{l,m}\psi_{+}]\) are both given in the statement of Theorem 7.1, and \(\psi_{RN},\psi_{\mathcal{H}^{-}}\) are as defined in Theorem 5.1.1._
**Remark 7.3.4**: _This Corollary gives the exact meaning of the "\(\approx\)" in (7.2.1) from Section 7.2, once one notes \(\hat{\psi}_{\mathcal{H}^{-}}(\omega)=\hat{\psi}_{+}(\omega)\tilde{T}_{\omega,l,m}\), and \(\psi_{0}(u)=\psi(u,v_{c})-f(u-u_{1})\psi(u_{1},v_{c})\)._
We will start by considering the following:
**Proposition 7.3.5** (Reissner-Nordstrom Transmission): _Let \(\psi_{+}:\mathbb{R}\to\mathbb{C}\) be a smooth, compactly supported function. Let \(\psi\) be the solution of (7.1.2), as given by Theorem 5.1.1, with radiation field on \(\mathcal{I}^{+}\) equal to \(\psi_{+}Y_{l,m}\), and which vanishes on \(\mathcal{H}^{+}\). Let \(v_{c},u_{1}\in\mathbb{R}\), with \(r(u_{1},v_{c})\leq 8M/3\). Define \(\psi_{\mathcal{H}^{-}}\) as in Theorem 5.1.1._
_Then there exists a constant \(A(M,q)\) such that_
\[\int_{u=u_{1}}^{\infty}|\partial_{u}\psi(u,v_{c})-\partial_{u}\psi_{\mathcal{H}^{-}}(u)|^{2}du\leq AI.T.[\psi_{+}Y_{l,m}](r(u_{1},v_{c})-r_{+})^{2}. \tag{7.3.5}\]
\[(l+1)^{4}\sup_{v\leq v_{c}}\left(\int_{u_{1}}^{\infty}|\psi(u,v)|^{2}du\right)\leq AI.T.[\psi_{+}Y_{l,m}]. \tag{7.3.6}\]
_Moreover, if \(|q|<1\), then we also have a constant \(B(M,q)\) such that_
\[\int_{u=u_{1}}^{\infty}|\psi(u,v_{c})-\psi_{\mathcal{H}^{-}}(u)|^{2}du\leq BI.T.[\psi_{+}Y_{l,m}](r(u_{1},v_{c})-r_{+})^{2}+B|\psi(u_{1},v_{c})|^{2}, \tag{7.3.7}\]
_again provided \(r(u_{1},v_{c})\leq 3M\)._
_In the case \(|q|=1\), we have_
\[\int_{u=u_{1}}^{\infty}|\psi(u,v_{c})-\psi_{\mathcal{H}^{-}}(u)|^{2}du\leq AI.T.[\psi_{+}Y_{l,m}](u_{0}-u_{1})^{2}(r(u_{1},v_{c})-r_{+})^{2}+A(M^{2}+(u_{0}-u_{1})^{2})|\psi(u_{1},v_{c})-\psi_{\mathcal{H}^{-}}(u_{1})|^{2}\]

\[+AI.E.[\psi_{+}Y_{l,m},v_{c},u_{1},u_{0}]. \tag{7.3.8}\]
_Here, \(I.T.\) and \(I.E.\) are as defined in Theorem 7.1._
_Proof._ We know from Theorem 5.1.1 that
\[\lim_{v\to-\infty}\psi(u,v)=\psi_{\mathcal{H}^{-}}(u). \tag{7.3.9}\]
Let \(\chi\) be a smooth cut off function such that
\[\chi(x)\begin{cases}=0&x\geq 1\\ \in[0,1]&x\in[0,1]\\ =1&x\leq 0\end{cases}. \tag{7.3.10}\]
Then for any smooth function \(\phi\) such that \(\lim_{u\to\infty}(\phi(u,v))=0\), we have
\[\int_{u=u_{-}}^{\infty}|\phi(u,v)|^{2}du \leq 2\int_{u=u_{-}}^{\infty}\left|\phi(u,v)-\phi(u_{-},v_{c}) \chi\left(\frac{u-u_{-}}{M}\right)\right|^{2}+\left|\phi(u_{-},v_{c})\chi \left(\frac{u-u_{-}}{M}\right)\right|^{2}du \tag{7.3.11}\] \[\leq 8\int_{u=u_{-}}^{\infty}(u-u_{-})^{2}\left|\partial_{u}\phi(u, v)-\phi(u_{-},v_{c})M^{-1}\chi^{\prime}\left(\frac{u-u_{-}}{M}\right)\right|^{2}du+2M| \phi(u_{-},v_{c})|^{2}\int_{s=0}^{\infty}|\chi(x)|^{2}\,dx\] \[\leq 16\int_{u=u_{-}}^{\infty}(u-u_{-})^{2}\left|\partial_{u}\phi(u, v)\right|^{2}+(u-u_{-})^{2}\left|\phi(u_{-},v_{c})M^{-1}\chi^{\prime}\left(\frac{u-u_{-}}{M} \right)\right|^{2}du+2M|\phi(u_{-},v_{c})|^{2}\int_{s=0}^{\infty}|\chi(x)|^{2}\,dx\] \[\leq 16\int_{u=u_{-}}^{\infty}(u-u_{-})^{2}\left|\partial_{u}\phi(u, v)\right|^{2}du+2M|\phi(u_{-},v_{c})|^{2}\int_{s=0}^{\infty}|\chi(x)|^{2}\,dx\] \[\leq 16\int_{u=u_{-}}^{\infty}(u-u_{-})^{2}\left|\partial_{u}\phi(u, v)\right|^{2}du+A(M)|\phi(u_{-},v_{c})|^{2}.\]
Here, we have used Hardy's inequality.
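For the reader's convenience, the version of Hardy's inequality used in (7.3.11) (and again below) is the following elementary one-dimensional estimate, valid for any smooth \(h\) with \((u-u_{-})|h(u)|^{2}\to 0\) as \(u\to\infty\):

\[\int_{u_{-}}^{\infty}|h|^{2}du=\Big{[}(u-u_{-})|h|^{2}\Big{]}_{u_{-}}^{\infty}-\int_{u_{-}}^{\infty}2(u-u_{-})\operatorname{Re}\left(h^{\prime}\bar{h}\right)du\leq 2\left(\int_{u_{-}}^{\infty}(u-u_{-})^{2}|h^{\prime}|^{2}du\right)^{1/2}\left(\int_{u_{-}}^{\infty}|h|^{2}du\right)^{1/2},\]

\[\text{so}\qquad\int_{u_{-}}^{\infty}|h|^{2}du\leq 4\int_{u_{-}}^{\infty}(u-u_{-})^{2}|h^{\prime}|^{2}du.\]

Applying this with \(h=\phi-\phi(u_{-},v_{c})\chi\left(\frac{u-u_{-}}{M}\right)\) accounts for the factor of \(4\) (and, after expanding squares, the \(8\) and \(16\)) in (7.3.11).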
Figure 6: The first region we will consider in the Hawking radiation calculation
In order to prove (7.3.6), we use (7.3.11) with \(u_{-}=u_{0}\) to show
\[\int_{u=-\infty}^{\infty}|\phi(u,v)|^{2}du\leq 16\int_{u=-\infty}^{\infty}(u-u_{0} )^{2}\,|\partial_{u}\phi(u,v)|^{2}\,du+A(M)|\phi(u_{0},v_{c})|^{2}, \tag{7.3.12}\]
And we then consider
\[\phi=\begin{cases}\psi&u\geq u_{1}\\ \frac{u_{0}-u_{1}}{u_{0}-u}\psi(u_{1},v_{c})&u<u_{1}\end{cases}, \tag{7.3.13}\]
to obtain
\[\int_{u_{1}}^{\infty}|\psi(u,v)|^{2}du \leq\int_{u=-\infty}^{\infty}|\phi(u,v)|^{2}du\leq 16\int_{u=- \infty}^{\infty}(u-u_{0})^{2}\,|\partial_{u}\phi(u,v)|^{2}\,du+A(M)|\phi(u_{0 },v_{c})|^{2} \tag{7.3.14}\] \[\leq 16\int_{u=u_{1}}^{\infty}(u-u_{0})^{2}\,|\partial_{u}\psi(u, v)|^{2}\,du+A(M)|\psi(u_{0},v_{c})|^{2}+(u_{0}-u_{1})|\psi(u_{1},v_{c})|^{2}.\]
In order to bound \(\psi(u_{0},v_{c})\), we consider
\[|\psi(u_{0},v_{c})|^{2} \leq\left|\int_{u=u_{0}}^{\infty}\partial_{u}\psi(u,v_{c})du\right|^{2} \tag{7.3.15}\] \[\leq\left(\int_{u_{0}}^{\infty}\frac{1}{M^{2}+(u_{0}-u)^{2}}du\right)\left(\int_{u_{0}}^{\infty}\big{(}M^{2}+(u_{0}-u)^{2}\big{)}\,|\partial_{u}\psi(u,v_{c})|^{2}du\right)\] \[\leq\frac{\pi}{2M}\int_{u_{0}}^{\infty}\big{(}M^{2}+(u_{0}-u)^{2}\big{)}\,|\partial_{u}\psi(u,v_{c})|^{2}du.\]
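The constant \(\pi/2M\) in the last line is just the elementary integral

\[\int_{u_{0}}^{\infty}\frac{du}{M^{2}+(u_{0}-u)^{2}}=\int_{0}^{\infty}\frac{dx}{M^{2}+x^{2}}=\frac{1}{M}\arctan\left(\frac{x}{M}\right)\Big{|}_{0}^{\infty}=\frac{\pi}{2M},\]

combined with the Cauchy-Schwarz inequality.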
We then consider the conserved \(T\)-energy (Proposition 5.2.2). In \(u,v\) coordinates, this is given by:
\[\text{T-energy}(\phi,\Sigma_{v})=\int_{-\infty}^{\infty}|\partial_{u}\psi(u, v)|^{2}+V(r)|\psi(u,v)|^{2}du. \tag{7.3.16}\]
We use (7.1.2) to derive a weighted version of the \(T\)-energy in the region \(u\geq u_{0}\):
\[\int_{u_{0}}^{\infty}\big{(}M^{2}+(u-u_{0})^{2}\big{)}\,\big{(}|\partial_{u}\psi(u,v)|^{2}+V|\psi(u,v)|^{2}\big{)}\,du =\int_{u_{0}}^{\infty}\big{(}M^{2}+(u-u_{0})^{2}\big{)}\,|\partial_{u}\psi_{+}|^{2}du \tag{7.3.17}\] \[\qquad-\int_{u\geq u_{0},v^{\prime}\geq v}2(u-u_{0})\,\big{(}|\partial_{u}\psi(u,v^{\prime})|^{2}+V|\psi(u,v^{\prime})|^{2}\big{)}\,dv^{\prime}du\] \[\leq\int_{u_{0}}^{\infty}\big{(}M^{2}+(u-u_{0})^{2}\big{)}\,|\partial_{u}\psi_{+}|^{2}du.\]
We bound the \(u\leq u_{0}\) in a similar way:
\[\int_{u_{1}}^{u_{0}}\big{(}M^{2}+(u-u_{0})^{2}\big{)}\,\big{(}|\partial_{u}\psi(u,v)|^{2}+V|\psi(u,v)|^{2}\big{)}\,du =\int_{u_{1}}^{u_{0}}\big{(}M^{2}+(u-u_{0})^{2}\big{)}\,|\partial_{u}\psi_{+}|^{2}du \tag{7.3.18}\] \[\qquad+\int_{-\infty}^{v_{c}}\big{(}M^{2}+(u_{0}-u_{1})^{2}\big{)}\,\big{(}|\partial_{v}\psi(u_{1},v)|^{2}+V|\psi(u_{1},v)|^{2}\big{)}\,dv\] \[\qquad-\int_{u\in[u_{1},u_{0}],v^{\prime}\geq v}2(u_{0}-u)\,\big{(}|\partial_{u}\psi(u,v^{\prime})|^{2}+V|\psi(u,v^{\prime})|^{2}\big{)}\,dv^{\prime}du\] \[\leq\int_{-\infty}^{u_{0}}\big{(}M^{2}+(u-u_{0})^{2}\big{)}\,|\partial_{u}\psi_{+}|^{2}du\] \[\qquad+\int_{-\infty}^{v_{c}}\big{(}M^{2}+(u_{0}-u_{1})^{2}\big{)}\,|\partial_{v}\psi_{RN}|^{2}dv.\]
Applying (7.3.15), (7.3.17) and (7.3.18) to (7.3.14) obtains (7.3.6).
The proof of (7.3.5) is fairly straightforward:
\[\int_{u=u_{1}}^{\infty}|\partial_{u}\psi(u,v_{c})-\partial_{u}\psi_{\mathcal{H}^{-}}(u)|^{2}du =\int_{u=u_{1}}^{\infty}\left|\int_{-\infty}^{v_{c}}\partial_{v}\partial_{u}\psi dv\right|^{2}du=\int_{u=u_{1}}^{\infty}\left|\int_{-\infty}^{v_{c}}V\psi dv\right|^{2}du \tag{7.3.19}\] \[\leq\left(\int_{-\infty}^{v_{c}}\left(\int_{u_{1}}^{\infty}V^{2}|\psi|^{2}du\right)^{1/2}dv\right)^{2}\leq\sup_{v\leq v_{c}}\left(\int_{u_{1}}^{\infty}|\psi(u,v)|^{2}du\right)\left(\int_{-\infty}^{v_{c}}V(u_{1},v)dv\right)^{2}\] \[\leq A(l+1)^{4}(r(u_{1},v_{c})-r_{+})^{2}\sup_{v\leq v_{c}}\left(\int_{u_{1}}^{\infty}|\psi(u,v)|^{2}du\right).\]
Here, we have used Minkowski's integral inequality to reach the second line, and we have used that \(V\) (as defined in (7.1.2)) is decreasing in \(u\) for \(r\leq 8M/3\). Then (7.3.6) and (7.3.19) imply (7.3.5).
In the case of \(|q|<1\), we first show that to prove (7.3.7), it is sufficient to bound \(\int_{u=u_{1}}^{\infty}(u-u_{1})^{2}|\partial_{u}\psi(u,v_{c})-\partial_{u}\psi_{\mathcal{H}^{-}}(u)|^{2}du\).
Therefore, in order to obtain (7.3.7), we apply equation (7.3.11) to \(\phi(u,v_{c})=\psi(u,v_{c})-\psi_{\mathcal{H}^{-}}(u)\). It thus suffices to bound
the following weighted derivative:
\[\int_{u=u_{1}}^{\infty}(u-u_{1})^{2}|\partial_{u}\psi(u,v_{c})-\partial_{u}\psi_{\mathcal{H}^{-}}(u)|^{2}du =\int_{u=u_{1}}^{\infty}(u-u_{1})^{2}\left|\int_{-\infty}^{v_{c}}\partial_{v}\partial_{u}\psi dv\right|^{2}du=\int_{u=u_{1}}^{\infty}(u-u_{1})^{2}\left|\int_{-\infty}^{v_{c}}V\psi dv\right|^{2}du \tag{7.3.20}\] \[\leq\left(\int_{-\infty}^{v_{c}}\left(\int_{u_{1}}^{\infty}(u-u_{1})^{2}V^{2}|\psi|^{2}du\right)^{1/2}dv\right)^{2}\] \[\leq\sup_{v\leq v_{c}}\left(\int_{u_{1}}^{\infty}|\psi(u,v)|^{2}du\right)\left(\int_{-\infty}^{v_{c}}\sup_{u\geq u_{1}}\left\{(u-u_{1})V(u,v)\right\}dv\right)^{2}\] \[\leq B(l+1)^{4}(r(u_{1},v_{c})-r_{+})^{2}\sup_{v\leq v_{c}}\left(\int_{u_{1}}^{\infty}|\psi(u,v)|^{2}du\right).\]
Here we have used that there exists a constant \(C(M,q)\) such that
\[C^{-1}(l+1)^{2}e^{\kappa(v-u)}\leq V(u,v)\leq C(l+1)^{2}e^{\kappa(v-u)}, \tag{7.3.21}\]
for \(r\leq 3M\). Then applying (7.3.6) proves (7.3.7).
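The exponential comparison (7.3.21) is just the standard near-horizon behaviour of the tortoise coordinate in the sub-extremal case; schematically (a routine computation, recorded here only for orientation):

\[1-\frac{2M}{r}+\frac{q^{2}M^{2}}{r^{2}}\approx 2\kappa(r-r_{+})\quad\text{near }r=r_{+},\qquad r^{*}=\int\frac{dr}{1-\frac{2M}{r}+\frac{q^{2}M^{2}}{r^{2}}}\approx\frac{1}{2\kappa}\log(r-r_{+})+\text{const},\]

\[\text{so}\qquad r-r_{+}\approx Ce^{2\kappa r^{*}}=Ce^{\kappa(v-u)}\quad\text{up to the additive constants in }u,v,r^{*}.\]

This is also why factors of \(r(u_{1},v_{c})-r_{+}\) and \(e^{\kappa(v_{c}-u_{1})}\) are interchangeable (up to constants) in the sub-extremal bounds.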
For the extremal case (7.3.8), we simply use Poincare's inequality to bound
\[\int_{u=u_{1}}^{2u_{0}-u_{1}}|\psi-\psi_{\mathcal{H}^{-}}|^{2}du \leq A(u_{0}-u_{1})^{2}\int_{u=u_{1}}^{2u_{0}-u_{1}}|\partial_{u}\psi-\partial_{u}\psi_{\mathcal{H}^{-}}|^{2}du+(M^{2}+(u_{0}-u_{1})^{2})|\psi(u_{1},v_{c})-\psi_{\mathcal{H}^{-}}(u_{1})|^{2} \tag{7.3.22}\] \[\leq AI.T.[\psi_{+}](u_{0}-u_{1})^{2}(r(u_{1},v_{c})-r_{+})^{2}+(M^{2}+(u_{0}-u_{1})^{2})|\psi(u_{1},v_{c})-\psi_{\mathcal{H}^{-}}(u_{1})|^{2}.\]
We are then left to bound
\[\int_{u=2u_{0}-u_{1}}^{\infty}|\psi-\psi_{\mathcal{H}^{-}}|^{2}du =\int_{x=0}^{\infty}\frac{1}{x^{2}}|\psi-\psi_{\mathcal{H}^{-}}|^{2}dx\leq 4\int_{x=0}^{\infty}|\partial_{x}\psi-\partial_{x}\psi_{\mathcal{H}^{-}}|^{2}dx=4\int_{u=2u_{0}-u_{1}}^{\infty}(u-2u_{0}+u_{1})^{2}|\partial_{u}\psi-\partial_{u}\psi_{\mathcal{H}^{-}}|^{2}du, \tag{7.3.23}\]
where \(x=(u-2u_{0}+u_{1})^{-1}\).
This can then be bounded in exactly the same way as (7.3.17) to obtain
\[\int_{u=2u_{0}-u_{1}}^{\infty}(u-2u_{0}+u_{1})^{2}|\partial_{u}\psi-\partial_{u}\psi_{\mathcal{H}^{-}}|^{2}du \leq 2\int_{u=2u_{0}-u_{1}}^{\infty}(u-2u_{0}+u_{1})^{2}\left(|\partial_{u}\psi|^{2}+|\partial_{u}\psi_{\mathcal{H}^{-}}|^{2}\right)du \tag{7.3.24}\] \[\leq 4\int_{u=2u_{0}-u_{1}}^{\infty}(u-2u_{0}+u_{1})^{2}|\partial_{u}\psi_{\mathcal{H}^{-}}|^{2}du.\]
By summing (7.3.22) and (7.3.23), and substituting in (7.3.24) and (7.3.15), we obtain (7.3.8).
We will also need to estimate the \(r\)-weighted energy of our solution on \(\Sigma_{v_{c}}\).
**Proposition 7.3.6** (Reissner-Nordstrom Weighted Bounds): _Let \(\psi_{+}:\mathbb{R}\to\mathbb{C}\) be a smooth, compactly supported function. Let \(\psi\) be the solution of (7.1.2), as given by Theorem 5.1.1 with radiation field on \(\mathcal{I}^{+}\) equal to \(\text{Y}_{l,m}\psi_{+}\), and which vanishes on \(\mathcal{H}^{+}\). Let \(\chi\) be a smooth function such that_
\[\chi(x)\begin{cases}=1&x\geq 1\\ \in[0,1]&x\in[0,1]\\ =0&x\leq 0\end{cases}, \tag{7.3.25}\]
_Let \(r_{0}>r_{+}\) and \(v_{c}\) be fixed. Define \(\psi_{\mathcal{H}^{-}}\) and \(\psi_{RN}\) as in Theorem 5.1.1._
_Then there exist constants \(A(M,q,r_{0},\chi)\) and \(B(M,q,r_{0},\chi)\) such that_
\[\int_{\Sigma_{v_{c}}}\chi\left(\frac{r-r_{0}}{M}\right)r^{2}|\partial_{u}\psi|^{2}du \leq A\Bigg{(}\int_{u=-\infty}^{v_{c}-2r_{0}^{*}}\left(1+(v_{c}-2r_{0}^{*}-u)^{2}\right)|\partial_{u}\psi_{\mathcal{H}^{-}}|^{2}du \tag{7.3.26}\] \[\qquad\qquad+\int_{v=-\infty}^{v_{c}}\left(1+(v_{c}-v)^{2}\right)|\partial_{v}\psi_{RN}|^{2}+l(l+1)|\psi_{RN}|^{2}dv\Bigg{)},\]
_for \(l\neq 0\), and_
\[\int_{\Sigma_{v_{c}}}\chi\left(\frac{r-r_{0}}{M}\right)r^{3}|\partial_{u}\psi|^{2}du \leq B\Bigg{(}\int_{u=-\infty}^{v_{c}-2r_{0}^{*}}\left(1+(v_{c}-2r_{0}^{*}-u)^{3}\right)|\partial_{u}\psi_{\mathcal{H}^{-}}|^{2}du \tag{7.3.27}\] \[\qquad\qquad+\int_{v=-\infty}^{v_{c}}\left(1+(v_{c}-v)^{3}\right)|\partial_{v}\psi_{RN}|^{2}+2M|\psi_{RN}|^{2}dv\Bigg{)},\]
_for \(l=0\). Here, \(r_{0}^{*}=\frac{1}{2}(v-u)\) when \(r=r_{0}\). Note that here, we do not yet know whether the right hand sides of (7.3.26) and (7.3.27) are finite._
**Remark 7.3.7**: _In the extremal (\(q=1\)) case, (7.3.26) follows easily from Proposition 5.2.4, but the proof we will offer below will not distinguish between the extremal and sub-extremal cases._
Proof.: We first write the conserved \(T\)-energy flux through the surface
\[\Sigma_{v_{c},0}:=(\Sigma_{u=v_{c}-2r_{0}^{*}}\cap\{r\leq r_{0}\})\cup(\Sigma_{v_{c}}\cap\{r>r_{0}\}), \tag{7.3.28}\]
in terms of \(\psi\).
An explicit calculation gives:
\[T\text{-energy}(\phi,\Sigma_{v_{c},0}) =\int_{\Sigma_{u=v_{c}-2r_{0}^{*}}\cap\{r\leq r_{0}\}}\left[2|\partial_{v}\phi|^{2}+\frac{1-\frac{2M}{r}+\frac{q^{2}M^{2}}{r^{2}}}{2r^{2}}l(l+1)|\phi|^{2}\right]dv \tag{7.3.29}\] \[\qquad+\int_{\Sigma_{v_{c}}\cap\{r>r_{0}\}}\left[2|\partial_{u}\phi|^{2}+\frac{1-\frac{2M}{r}+\frac{q^{2}M^{2}}{r^{2}}}{2r^{2}}l(l+1)|\phi|^{2}\right]du\] \[=\int_{\Sigma_{u=v_{c}-2r_{0}^{*}}\cap\{r\leq r_{0}\}}2\left[|\partial_{v}\psi|^{2}+V(r)|\psi|^{2}\right]dv+\int_{\Sigma_{v_{c}}\cap\{r>r_{0}\}}2\left[|\partial_{u}\psi|^{2}+V(r)|\psi|^{2}\right]du\] \[=\int_{\Sigma_{v_{c},0}}2\left[|\tilde{\partial}_{r^{*}}\psi|^{2}+V(r)|\psi|^{2}\right]dr^{*},\]
where here we define \(\tilde{\partial}_{r^{*}}\) to be the \(r^{*}\) derivative along \(\Sigma_{v,0}\), i.e.
\[\tilde{\partial}_{r^{*}}=\begin{cases}\partial_{v}&r\leq r_{0}\\ \partial_{u}&r>r_{0}\end{cases}. \tag{7.3.30}\]
We define \(\tilde{\partial}_{r}:=\left(1-\frac{2M}{r}+\frac{q^{2}M^{2}}{r^{2}}\right)^{-1}\tilde{\partial}_{r^{*}}\).
We note here that even if \(l=0\), the \(T\)-energy bounds \((\chi\left(\frac{r-r_{0}}{M}\right)\psi)^{2}/r^{2}\) for any \(r_{0}>2M\):
\[\int_{\Sigma_{v,0}}\frac{\chi\left(\frac{r-r_{0}}{M}\right)^{2}|\psi|^{2}}{r^{2}}dr^{*} =\int_{\Sigma_{v}}\frac{\chi\left(\frac{r-r_{0}}{M}\right)^{2}|\psi|^{2}}{r^{2}}dr^{*}\leq\left(1-\frac{2M}{r_{0}}+\frac{q^{2}M^{2}}{r_{0}^{2}}\right)^{-1}\int_{\Sigma_{v}}\frac{\chi\left(\frac{r-r_{0}}{M}\right)^{2}|\psi|^{2}}{r^{2}}dr\] \[\leq A(r_{0})\int_{\Sigma_{v}}|\tilde{\partial}_{r}(\chi\psi)|^{2}dr\leq A(r_{0})\int_{\Sigma_{v}}\left(|\chi\tilde{\partial}_{r}\psi|^{2}+|\chi^{\prime}|^{2}\frac{|\psi|^{2}}{M^{2}}\right)dr\] \[\leq B(M,r_{0},\chi)\int_{\Sigma_{v}}|\tilde{\partial}_{r^{*}}\psi|^{2}+V(r)|\psi|^{2}dr^{*} \tag{7.3.31}\] \[\leq B(M,r_{0},\chi)\int_{\Sigma_{v,0}}|\tilde{\partial}_{r^{*}}\psi|^{2}+V(r)|\psi|^{2}dr^{*}.\]
Here we have used Hardy's inequality.
We now bound the integral of \(\chi\left(\frac{r-r_{0}}{M}\right)r^{p}|\partial_{v}\psi|^{2}\) on \(\Sigma_{v_{c}}\):
\[\int_{\Sigma_{v_{c}}}\chi\left(\frac{r-r_{0}}{M}\right)r^{p}|\partial_{v}\psi|^{2}du =\int_{\{v\leq v_{c}\}}\left[\frac{1}{2}\left(1-\frac{2M}{r}+\frac{q^{2}M^{2}}{r^{2}}\right)\left(pr^{p-1}\chi+\frac{r^{p}}{M}\chi^{\prime}\right)|\partial_{v}\psi|^{2}+2\chi r^{p}V(r)\partial_{u}(|\psi|^{2})\right]dvdu\] \[=\int_{\{v\leq v_{c}\}}\left[\left(1-\frac{2M}{r}+\frac{q^{2}M^{2}}{r^{2}}\right)\left(\frac{1}{2}\left(pr^{p-1}\chi+\frac{r^{p}}{M}\chi^{\prime}\right)|\partial_{v}\psi|^{2}+\partial_{r}(\chi r^{p}V(r))|\psi|^{2}\right)\right]dvdu\] \[\quad+\int_{\mathcal{I}^{-}\cap\{v\leq v_{c}\}}2r^{p-2}l(l+1)|\psi|^{2}dv \tag{7.3.32}\] \[\leq\int_{\{v\leq v_{c}\}}\frac{pr^{p-1}}{2}\chi|\partial_{v}\psi|^{2}dvdu+A\int_{\{v\leq v_{c},r\geq r_{0}\}}|\partial_{r^{*}}\psi|^{2}+V(r)|\psi|^{2}dvdu\] \[\quad+\int_{\mathcal{I}^{-}\cap\{v\leq v_{c}\}}2r^{p-2}l(l+1)|\psi|^{2}dv.\]
For \(p=1\), we obtain
\[\int_{\Sigma_{v_{c}}}\chi\left(\frac{r-r_{0}}{M}\right)r|\partial_{v}\psi|^{2}du\leq A\int_{\{v\leq v_{c},r\geq r_{0}\}}|\partial_{r^{*}}\psi|^{2}+V(r)|\psi|^{2}dvdu. \tag{7.3.33}\]
Then for \(p=2\), we obtain
\[\int_{\Sigma_{v_{\epsilon}}}\chi\left(\frac{r-r_{0}}{M}\right)r^{ 2}|\partial_{v}\psi|^{2}du\leq A \int_{v=-\infty}^{v_{\epsilon}}\left(A\int_{v^{\prime}\leq v,v\geq r_{0}}| \partial_{r^{*}}\psi|^{2}+V(r)|\psi|^{2}dv^{\prime}du\right)dv+\int_{v\leq v _{\epsilon},v\geq r_{0}}|\partial_{r^{*}}\psi|^{2}+V(r)|\psi|^{2}dvdu\] \[\quad+\int_{\mathcal{I}-\cap\{v\leq v_{\epsilon}\}}2l(l+1)|\psi|^{ 2}dv. \tag{7.3.34}\]
By \(T\)-energy conservation (Proposition 5.2.2), we have that
\[\int_{v=-\infty}^{v_{\epsilon}}\left(\int_{v^{\prime}\leq v_{ \epsilon},r\geq r_{0}}|\partial_{r^{*}}\psi|^{2}+V(r)|\psi|^{2}dv^{\prime}du \right)dv\leq\int_{v=-\infty}^{v_{\epsilon}}\left(\int_{v^{\prime}=-\infty}^{v }T\text{-energy}(\phi,\Sigma_{v^{\prime}})dv^{\prime}\right)dv \tag{7.3.35}\] \[\qquad=\int_{\mathcal{I}-,u\leq v_{\epsilon}-2v_{\epsilon}^{*}} \int_{v^{\prime}\leq u}\int_{u^{\prime}\leq u^{\prime}}|\partial_{u}\psi|^{2}du^{ \prime}du^{\prime}du+\int_{\mathcal{I}-,v\leq v_{\epsilon}}\int_{v^{\prime} \leq v^{\prime}}\int_{v^{\prime}\leq v^{\prime}}|\partial_{v}\psi|^{2}dv^{ \prime\prime}dv^{\prime}dv\] \[\qquad=\int_{\mathcal{I}-,u\leq v_{\epsilon}-2v_{\epsilon}^{*}}(v _{\epsilon}-2v_{\epsilon}^{*}-u)^{2}|\partial_{u}\psi|^{2}du+\int_{\mathcal{I}-,v \leq v_{\epsilon}}(v_{\epsilon}-v)^{2}|\partial_{v}\psi|^{2}dv,\]
where we have integrated by parts to obtain the last line. (Note \(\psi\) on \(\mathcal{I}^{-}\) or \(\mathcal{H}^{-}\) is Schwartz, so we have arbitrarily large polynomial decay.)
Combining (7.3.34) and (7.3.35), we obtain:
\[\int_{\Sigma_{v_{c}}}\chi\left(\frac{r-r_{0}}{M}\right)r^{2}|\partial _{v}\psi|^{2}du\leq A\Bigg{(} \int_{\mathcal{H}^{-},u\leq v_{c}-2r_{0}^{*}}\left(1+(v_{c}-2r_{0}^{*}-u)^{ 2}\right)|\partial_{v}\psi|^{2}du \tag{7.3.36}\] \[+\int_{\mathcal{I}^{-},v\leq v_{c}}\left(1+(v_{c}-v)^{2}\right)| \partial_{v}\psi|^{2}+l(l+1)|\psi|^{2}dv\Bigg{)}.\]
for \(l\neq 0\), and
\[\int_{\Sigma_{v_{c}}}\chi\left(\frac{r-r_{0}}{M}\right)r^{3}| \partial_{v}\psi|^{2}du\leq A\Bigg{(} \int_{\mathcal{H}^{-},u\leq v_{c}-2r_{0}^{*}}\left(1+(v_{c}-2r_{0}^{*}-u)^{ 3}\right)|\partial_{v}\psi|^{2}du \tag{7.3.37}\] \[+\int_{\mathcal{I}^{-},v\leq v_{c}}\left(1+(v_{c}-v)^{3}\right)| \partial_{v}\psi|^{3}+2M|\psi|^{2}dv\Bigg{)},\]
for \(l=0\), as required.
**Proposition 7.3.8** (Pointwise Bounds): _Let \(\psi_{+}:\mathbb{R}\to\mathbb{C}\) be a smooth, compactly supported function. Let \(\psi\) be the solution of (7.1.2), as given by Theorem 5.1.1, with radiation field on \(\mathcal{I}^{+}\) equal to \(Y_{i,m}\psi_{+}\), and which vanishes on \(\mathcal{H}^{+}\). Let \(r_{0}>r_{+}\) and \(v_{c}\) be fixed. Then there exists a constant \(A(M,q,r_{0})\) such that_
\[|\psi(u_{1},v_{c})|^{2}\leq AI.E.[Y_{i,m}\psi_{+},v_{c},u_{1},u_{0}], \tag{7.3.38}\]
_for any \(u_{1}>v_{c}-r_{0}^{*}\). Here, \(r_{0}^{*}=\frac{1}{2}(v-u)\) on \(r=r_{0}\). Here we define \(\psi_{\mathcal{H}^{-}}\) and \(\psi_{RN}\) as in Theorem 6.1._
Proof.: This is a fairly straight-forward consequence of Proposition 7.3.6.
\[|\psi(u_{1},v_{c})|^{2} \leq 2\left|\int_{-\infty}^{v_{c}}\partial_{v}\psi_{RN}(v)dv \right|^{2}+2\left|\int_{-\infty}^{v_{c}-r_{0}^{*}}\partial_{v}\psi(u,v_{c}) du\right|^{2}+2\left|\int_{v_{c}-r_{0}^{*}}^{u_{1}}\partial_{v}\psi(u,v_{c})du \right|^{2} \tag{7.3.39}\] \[\leq 2\left(\int_{-\infty}^{v_{c}}\frac{1}{M^{2}+(v_{c}-v)^{2}}dv \right)\left(\int_{-\infty}^{v_{c}}\left(M^{2}+(v_{c}-v)^{2}\right)|\partial_ {v}\psi_{RN}(v)|^{2}dv\right)\] \[\quad+2\left(\int_{-\infty}^{v_{c}-r_{0}^{*}-M^{-2}}dv\right) \left(\int_{-\infty}^{v_{c}}r^{2}|\partial_{v}\psi(u,v_{c})|^{2}dv\right)+2 \left(\int_{v_{c}-r_{0}^{*}-M}^{u_{1}}dv\right)\left(\int_{v_{c}-r_{0}^{*}-M} ^{u_{1}}|\partial_{u}\psi(u,v_{c})|^{2}dv\right)\] \[\leq A\Bigg{(}\int_{u=-\infty}^{v_{c}-2r_{0}^{*}}\left(M^{2}+(v_{ c}-2r_{0}^{*}-u)^{2}\right)|\partial_{u}\psi_{\mathcal{H}^{-}}|^{2}du+\int_{u=- \infty}^{u_{1}}(u_{1}-v_{c}+r_{0}^{*})|\partial_{u}\psi_{\mathcal{H}^{-}}|^{2 }du\] \[\quad\quad+\int_{v=-\infty}^{v_{c}}\left(M^{2}+M(u_{1}-v_{c}+r_{0 }^{*})+(v_{c}-v)^{2}\right)|\partial_{v}\psi_{RN}|^{2}+l(l+1)|\psi_{RN}|^{2}dv \Bigg{)}\] \[\leq AI.E.[Y_{i,m}\psi_{+},v_{c},u_{1},u_{0}],\]
as required.
**Proposition 7.3.9** (Extremal Weighted Energy Bounds): _Let \(\psi_{+}:\mathbb{R}\to\mathbb{C}\) be a smooth, compactly supported function. Let \(\psi\) be the solution of (7.1.2), as given by Theorem 5.1.1, on an extremal Reisnner-Nordstrom \(\mathcal{M}_{RN}\) with \(|q|=1\) background, with radiation field on \(\mathcal{I}^{+}\) equal to \(Y_{i,m}\psi_{+}\), and which vanishes on \(\mathcal{H}^{+}\). Let \(u_{1},v_{c}\) be such that \(r(u_{1},v_{c})<\frac{2}{3}M\). Then there exists a constant \(C=C(M)>0\) such that_
\[\int_{\Sigma_{u_{1}}\cap\{v\leq v_{c}\}}\left(1-\frac{M}{r}\right)^{-2}| \partial_{v}\psi|^{2}\sin\theta d\theta d\varphi dv\leq C(u_{1}-v_{c})^{2}I.E. [Y_{i,m}\psi_{+},v_{c},u_{1},u_{0}]. \tag{7.3.40}\]
Proof.: We will be considering the solution to the wave equation (1.2), \(\tilde{\phi}\), with radiation field \(\tilde{\psi}\), given by
\[\tilde{\psi}|_{\mathcal{I}^{-}}=\psi_{\mathcal{H}^{-}} \tag{7.3.41}\] \[\tilde{\psi}|_{\mathcal{I}^{-}}=\begin{cases}\chi\left(\frac{v-v _{c}}{M}\right)\psi_{RN}(v_{c})&v>v_{c}\\ \psi_{RN}&v\leq v_{c}\end{cases}, \tag{7.3.42}\]
where \(\chi\) is as defined in Proposition 7.3.6.
By a standard domain of dependence argument, we can see that
\[\int_{\Sigma_{u_{1}}\cap\{v\leq v_{c}\}}\left(1-\frac{M}{r}\right)^{-2}| \partial_{v}\psi|^{2}dv=\int_{\Sigma_{u_{1}}\cap\{v\leq v_{c}\}}\left(1-\frac{ M}{r}\right)^{-2}|\partial_{v}\tilde{\psi}|^{2}dv. \tag{7.3.43}\]
Here we will be making use of Proposition 5.2.4. By choosing \(r_{0}=\frac{3}{2}M\), we can then bound
\[\int_{\Sigma_{u_{1}}\cap\{v\leq v_{c}\}}\left(1-\frac{M}{r}\right)^ {-2}|\partial_{v}\tilde{\psi}|^{2}dv \leq C\int_{\mathcal{H}^{-}\cap\{u\geq u_{1}\}}\left(M^{2}+(u-u_{1})^{ 2}\right)|\partial_{v}\tilde{\psi}|^{2}+l(l+1)|\tilde{\psi}|^{2}du \tag{7.3.44}\] \[\quad+C\int_{\mathcal{I}^{-}\cap\{v\leq u\}}(M^{2}+(v-u_{1})^{ 2})|\partial_{v}\tilde{\psi}|^{2}+l(l+1)|\psi|^{2}dv\] \[\leq C(u_{1}-v_{c})^{2}I.E.[Y_{i,m}\psi_{+},v_{c},u_{1},u_{0}].\]
This brings us on to the main proof of this section.
Proof of Proposition 7.3.1.: We will consider \(|q|<1\) first:
\[\int_{\omega=-\infty}^{\infty}\left(\kappa+|\omega|\right)\left| \dot{|\psi}_{\mathcal{H}^{-}}|^{2}-|\dot{\psi}_{0}|^{2}\right|d\omega \leq A\left(\int_{\omega=-\infty}^{\infty}\left|\dot{\psi}_{ \mathcal{H}^{-}}-\dot{\psi}_{0}\right|^{2}d\omega\right)^{1/2}\left(\int_{ \omega=-\infty}^{\infty}\left(\kappa^{2}+\omega^{2}\right)\left(|\dot{\psi}_ {\mathcal{H}^{-}}|^{2}+|\dot{\psi}_{0}|^{2}\right)d\omega\right)^{1/2} \tag{7.3.45}\] \[\leq A\left(\int_{u_{1}}^{\infty}|\psi_{\mathcal{H}^{-}}-\psi(u, v_{c})|^{2}\,du+\int_{-\infty}^{u_{1}}|\psi_{\mathcal{H}^{-}}|^{2}du+|\psi(u_{1}, v_{c})|^{2}\right)^{1/2}\] \[\qquad\left(\int_{u_{1}}^{\infty}\kappa^{2}\left(|\psi_{\mathcal{ H}^{-}}|^{2}+|\psi_{0}|^{2}\right)+|\partial_{u}\psi_{\mathcal{H}^{-}}|^{2}+| \partial_{u}\psi_{0}|^{2}du+\int_{-\infty}^{u_{1}}\kappa^{2}|\psi_{\mathcal{H} ^{-}}|^{2}+|\partial_{u}\psi_{\mathcal{H}^{-}}|^{2}du\right)^{1/2}\] \[\leq A\left(I.T.[\psi_{+}|r(u_{1},v_{c})-r_{+})^{2}+I.E.[\psi_{+},u_{0}]\right)^{1/2}\] \[\qquad\left(\int_{u_{1}}^{\infty}\kappa^{2}|\psi_{+}|^{2}+| \partial_{u}\psi_{+}|^{2}du+\int_{-\infty}^{u_{1}}\kappa^{2}|\psi_{\mathcal{H} ^{-}}|^{2}+|\partial_{u}\psi_{\mathcal{H}^{-}}|^{2}du+|\psi(u_{1},v_{c})|^{2} \right)^{1/2}\] \[\leq A\left(\left(r(u_{1},v_{c})-r_{+}\right)^{2}I.T.[\psi_{+}|r _{+}|+I.E.[\psi_{+},v_{c},u_{1},u_{0}]\right)^{1/2}\left(I.T.[\psi_{+}]\right) \right)^{1/2},\]
as required. We have used Proposition 7.3.5 to bound \(\int_{u}^{\infty}\left|\psi_{\mathcal{H}^{-}}-\psi_{0}\right|^{2}du\), and Proposition 7.3.8 to bound \(|\psi_{1}(u_{1},v)|^{2}\).
For the \(|q|=1\) case, we have \(\kappa=0\). We then proceed slightly differently to obtain our result, by first noting:
\[-i\omega\partial_{v}\dot{\psi}=\widetilde{V\psi}. \tag{7.3.46}\]
Here, \(\dot{\psi}\) is the Fourier transform of \(\psi\) with respect to \(u\). While this transform may not exist in an \(L^{2}\) sense, as \(V\psi\) is an \(L^{2}\) function on \(\Sigma_{u}\), this implies that \(\partial_{v}\dot{\psi}\) exists in a distributional sense.
We will write
\[\psi_{0}(u,v):=\begin{cases}\psi(u,v)-f(u-u_{1})\psi(u_{1},v)&u\geq u_{1}\\ 0&u<u_{1}\end{cases}. \tag{7.3.47}\]
Substituting (7.3.47) into (7.3.46), we obtain
\[-i\omega\partial_{v}\widehat{\psi_{0}}=-\widetilde{V\psi_{0}}-\widetilde{f^{ \prime}\partial_{v}}\widehat{\psi}-\psi(u_{1},v)\widetilde{Vf}_{\mathbb{I}_{ u_{2}u}}, \tag{7.3.48}\]
where
\[\mathbb{I}_{u_{2}u_{1}}=\begin{cases}1&u\geq u_{1}\\ 0&u\leq u_{1}\end{cases}. \tag{7.3.49}\]
We therefore obtain:
\[\left|\int_{\omega=-\infty}^{\infty}|\omega|\left(|\dot{|\psi}_{ \mathcal{H}^{-}}|^{2}-|\dot{\psi}_{0}|^{2}\right)d\omega\right| \leq 2\left|\int_{\omega=-\infty}^{\infty}\frac{\omega}{|\omega|} \int_{v=-\infty}^{v_{c}}\mathbb{R}\left(i\bar{\psi}_{0}\left(\widetilde{V\psi_ {0}}+\widetilde{f^{\prime}\partial_{v}}\widehat{\psi}+\psi(u_{1},v)\widetilde {Vf}_{\mathbb{I}_{u_{2}u_{1}}}\right)\right)dvd\omega\right| \tag{7.3.50}\] \[\quad+\left|\int_{\omega=-\infty}^{\infty}|\omega|\left(|\dot{| \psi}_{\mathcal{H}^{-}}|^{2}-|\dot{\psi}_{0}(u,-\infty)|^{2}\right)d\omega\right|\] \[\leq 2\left|\int_{\omega=-\infty}^{\infty}\frac{\omega}{|\omega|} \int_{v=-\infty}^{v_{c}}\mathbb{R}\left(i\bar{\psi}_{0}\widetilde{V\psi_{0} }\right)dvd\omega\right|\] \[\quad+2\left(\int_{u}\int_{c\leq v_{c}}\left(1-\frac{M}{r} \right)^{2}|\psi_{0}|^{2}dudv\right)^{1/2}\left(\int_{u}\int_{c\leq v_{c}}f^{ \prime}\frac{|\partial_{v}\psi|^{2}}{\left(1-\frac{M}{r}\right)^{2}}dudv\right)^ {1/2}\] \[\quad+2\sup_{v\leq v_{c}}|\psi(u_{1},v)|\left(\int_{u}\int_{v\leq v _{c}}\left(1-\frac{M}{r}\right)^{2}|\psi_{0}|^{2}dudv\right)^{1/2}\left(\int_{u \geq u_{1}}\int_{v\leq v_{c}}\frac{V^{2}}{\left(1-\frac{M}{r}\right)^{2}}f^{2} dudv\right)^{1/2}\] \[\quad+AI.E.[\psi_{+},v_{c},u_{1},u_{0}],\]
where \(\mathbb{R}(f)\) denotes the real part of \(f\).
We note that
\[\int_{u}\int_{v\leq v_{c}}\left(1-\frac{M}{r}\right)^{2}|\psi_{0}|^{2}dudv \leq\frac{A(r(u_{1},v_{c})-M)}{(l+1)^{4}}I.T.[\psi_{+}], \tag{7.3.51}\]
using Proposition 7.3.5, and given \(f\) is a compactly supported function, we can bound
\[\int_{u\geq u_{1}}\int_{v\leq v_{c}}\frac{V^{2}}{\left(1-\frac{M}{r}\right)^{2}}f ^{2}dudv\leq A(l+1)^{4}(r(u_{1},v_{c})-M). \tag{7.3.52}\]
We can also bound
\[\int_{u}\int_{v\leq v_{c}}f^{\prime\prime}\frac{|\partial_{v}\psi|^{2}}{\left(1- \frac{M}{r}\right)^{2}}dudv\leq A\sup_{f\neq 0}\int_{v\leq v_{c}}\frac{|\partial_{v}\psi|^{2}}{ \left(1-\frac{M}{r}\right)^{2}}dv\leq C(u_{1}-v_{c})^{2}I.E.[\psi_{+},v_{c},u_{1 },u_{0}], \tag{7.3.53}\]
using Proposition 7.3.9.
Thus we have
\[\left|\int_{u=-\infty}^{\infty}|\omega|\left(|\dot{|\psi}_{\mathcal{H}^{-}}|^{2}-| \dot{\psi}_{0}|^{2}\right)d\omega\right|\leq 2\left|\int_{u=-\infty}^{\infty}\frac{\omega}{|\omega|} \int_{v=-\infty}^{v_{c}}\mathbb{R}\left(i\bar{\psi}_{0}\widetilde{V\psi_{0}} \right)dvd\omega\right|+\left((u_{1}-v_{c})I.T.[\psi_{+}]I.E.[\psi_{+},v_{c},u_{1 },u_{0}]\right)^{1/2}. \tag{7.3.54}\]
Given that we know in some sense that \(\phi\rightarrow\phi_{\mathcal{H}^{-}}\), and \(V\sim\frac{\widetilde{U(l+1)}}{\nu^{\pi^{2}}}\) as \(v\rightarrow-\infty\), we will replace \(\phi_{0}=\phi_{0\mathcal{H}^{-}}+\delta\phi(u,v)\) and \(V=\frac{\widetilde{U(l+1)}}{\nu^{\pi^{2}}}+\delta V\).
Then we obtain
\[\left|\int_{\omega=-\infty}^{\infty}\frac{\omega}{|\omega|}\int_ {\omega=-\infty}^{v_{c}}\mathbb{R}\left(i\tilde{\psi}_{0}\widetilde{V} \widehat{\psi_{0}}\right)dvd\omega\right| \leq \tag{7.3.55}\] \[\qquad+\int_{v=-\infty}^{v_{c}}|\delta V(u_{1},v)||\psi_{0}|^{2}_ {\Sigma_{c}}dv\] \[\qquad+2\int_{v=-\infty}^{v_{c}}V(u_{1},v)||\psi_{0}|_{\Sigma_{c} }|\delta\psi_{0}|_{\Sigma_{c}}dv\] \[\leq\] \[\quad+AI.T.[\psi_{+}]\left(\frac{r(u_{1},v_{c})}{M}-M^{2}\ln\frac{ r(u_{1},v_{c})}{M}-1\right)+\frac{u_{0}-u_{1}}{u_{1}-v_{c}}\left(r(u_{1},v_{c})-M \right)\] \[\qquad+\frac{M+u_{0}-u_{1}}{u_{1}-v_{c}}I.T.[\psi_{+}]^{1/2}I.E.[ \psi_{+},v_{c},u_{1},u_{0}]^{1/2}.\]
We have bounded \(\|\delta\psi_{0}\|_{\Sigma_{c}}\) using Proposition 7.3.5, and \(|\delta V(u_{1},v)|\leq A(l+1)^{2}\left(1-\frac{M}{r}\right)^{3}\ln\left(\frac {r}{M}-1\right)\), by an explicit calculation. Also, as \(\psi_{0\mathcal{H}^{-}}\) is compactly supported, we can bring the integral over \(v\) inside the Fourier transform. Denoting \(\delta\psi_{\mathcal{H}^{-}}=\psi_{\mathcal{H}^{-}}-\psi_{0\mathcal{H}^{-}}\), we then obtain
\[\left|\int_{\omega=-\infty}^{\infty}\frac{\omega}{|\omega|} \mathbb{R}\left(i\tilde{\psi}_{0\mathcal{H}^{-}}\left(\frac{\widehat{\psi_{ \mathcal{H}^{-}}}}{u-v_{c}}\right)\right)d\omega\right| \leq\] (7.3.56) \[\qquad+\int_{\omega=-\infty}^{\infty}\frac{\omega}{|\omega|} \mathbb{R}\left(i\tilde{\psi}_{\mathcal{H}^{-}}\left(\frac{\widehat{\psi_{ \mathcal{H}^{-}}}(v_{c})}{u-v_{c}}\right)\right)d\omega\right|\] \[\qquad+\int_{\omega=-\infty}^{\infty}\frac{\omega}{|\omega|} \mathbb{R}\left(i\left(\tilde{\psi}_{\mathcal{H}^{-}}+\delta\widehat{\psi}_{ \mathcal{H}^{-}}\right)\left(\frac{\delta\psi_{\mathcal{H}^{-}}(v_{c})}{u-v_{ c}}\right)\right)d\omega\bigg{|}\] \[\leq\] \[\qquad+\sqrt{\frac{\pi}{2}}\left|\int_{\omega=-\infty}^{\infty} \mathbb{R}\left(\frac{\omega}{|\omega|}\mathbb{R}\left(i\tilde{\psi}_{ \mathcal{H}^{-}}\left(-\omega\right)\psi_{\mathcal{H}^{-}}(v_{c})e^{-iv_{c} \omega}\right)d\omega\right|\] \[\qquad+\sqrt{\frac{\pi}{2}}\left|\int_{\omega=-\infty}^{\infty} \mathbb{R}\left(\frac{\omega}{|\omega|}\mathbb{R}\left(i\tilde{\psi}_{ \mathcal{H}^{-}}(-\omega)\psi_{\mathcal{H}^{-}}(v_{c})e^{-iv_{c}\omega}\right)d \omega\right|\] \[\qquad+\sqrt{\frac{\pi}{2}}\left|\int_{\omega=-\infty}^{\infty} \mathbb{R}\left(\frac{\omega}{|\omega|}\mathbb{R}\left(i\tilde{\psi}_{ \mathcal{H}^{-}}(-\omega)\psi_{\mathcal{H}^{-}}(v_{c})e^{-iv_{c}\omega}\right)d \omega\right|\right.\] \[\qquad+\sqrt{\frac{\pi}{2}}\left|\int_{\omega=-\infty}^{\infty} \mathbb{R}\left(\frac{\omega}{|\omega|}\mathbb{R}\left(i\tilde{\psi}_{ \mathcal{H}^{-}}(-\omega)\psi_{\mathcal{H}^{-}}(v_{c})e^{-iv_{c}\omega}\right)d \omega\right|\right.\] \[\qquad+2|\delta\psi_{\mathcal{H}^{-}}[L^{2}]\|\partial_{\mathcal{H }^{-}}\psi_{\mathcal{H}^{-}}[L^{2}\] \[\qquad+\|\delta\psi_{\mathcal{H}^{-}}(v_{c})\psi_{\mathcal{H}^{-}} (v_{c})\psi_{\mathcal{H}^{-}}(-v_{c})\psi_{\mathcal{H}^{-}}[L^{2}\] \[\qquad+\left(|\delta\psi_{\mathcal{H}^{-}}(v_{c})|+\delta\tilde{\psi} _{\mathcal{H}^{-}}(-v_{c})\psi_{\mathcal{H}^{-}}(v_{c})\right)d\omega\] \[\qquad\leq\sqrt{\frac{\pi}{2}}\left|\int_{\omega=-\infty}^{\infty} \int_{\omega=\infty}^{\infty}\frac{\omega}{|\omega|^{\omega}|}\mathbb{R} \left(\frac{\omega}{|\omega|}\mathbb{R}\left(\frac{\omega}{|\omega|}\mathbb{R} \left(\frac{\omega}{|\omega|}\mathbb{R}\left(\frac{\omega}{|\omega|}\mathbb{R} \left(\frac{\omega}{|\omega|}\mathbb{R}\left(\frac{\omega}{|\omega|}\mathbb{R} \left(\frac{\omega}{|\omega|}\mathbb{R}\left(\frac{\omega}{|\omega|}\mathbb{R} \left(\frac{\omega}{|\omega|}\mathbb{R}\left(\frac{\omega}{|\omega|}\mathbb{R} \left(\frac{\omega}{|\omega|}\mathbb{R}\left(\frac{\omega}{|\omega|}\mathbb{R} \left(\frac{\omega}{|\omega|}\mathbb{R}\left(\frac{\omega}{|\omega|}\mathbb{R} \left(\frac{\omega}{|\omega|}\mathbb{R}\left(\frac{\omega}{|\omega|}\mathbb{R} \left(\frac{\omega}{|\omega|}\mathbb{R}\left(\frac{\omega}{|\omega|}\mathbb{R} \left(\frac{\omega}{|\omega|}\mathbb{R}\left(\frac{\omega}{|\omega|}\mathbb{R} \left(\frac{\omega}{|\omega|}\mathbb{R}\left(\frac{\omega}{|\omega|}\mathbb{R} \left(\frac{\omega}{|\omega|}\mathbb{R}\left(\frac{\omega}{|\omega|}\mathbb{R} \left(\frac{\omega}{|\omega|}\mathbb{R}\left(\frac{\omega}{|\omega|}\mathbb{R} \left(\frac{\omega}{|\omega|}\mathbb{R}\left(\frac{\omega}{|\omega|}\mathbb{R} \left(\frac{\omega}{|\omega|}\mathbb{R}\left(\frac{\omega}{|\omega|}\mathbb{R} \left(\frac{\omega}{|\omega|}\mathbb{R}\left(\frac{\omega}{|\omega|}\mathbb{R} \left(\frac{\omega}{|\omega|}\mathbb{R}\left(\frac{\omega}{|\omega|}\mathbb{R} 
\left(\frac{\omega}{|\omega|}\mathbb{R}\left(\frac{\omega}{|\omega|}\mathbb{R}\left( \frac{\omega}{|\omega|}\mathbb{R}\left(\frac{\omega}{|\omega|}\mathbb{R}\left(\frac{ \omega}{|\omega|}\mathbb{R}\left(\frac{\omega}{|\omega|}\mathbb{R}\left(\frac{\omega}{| \omega|}\mathbb{R}\left(\frac{\omega}{|\omega|}\mathbb{R}\left(\frac{\omega}{|\omega|} \mathbb{R}\left(\frac{\omega}{|\omega|}\mathbb{R}\left(\frac{\omega}{|\omega|}\mathbb{R} \left(\frac{\omega}{|\omega|}\mathbb{R}\left(\frac{\omega}{|\omega|}\mathbb{R} \left(\frac{\omega}{|\omega|}\mathbb{R}\left(\frac{\omega}{|\omega|}\mathbb{R}\left( \frac{\omega}{|\omega|}\mathbb{R}\left(\frac{\omega}{|\omega|}\mathbb{R}\left(\frac{ \omega}{|\omega|}\mathbb{R}\left(\frac{\omega}{|\omega|}\mathbb{R}\left(\frac{\omega}{| \omega|}\mathbb{R}\left(\frac{\omega}{|\omega|}\mathbb{R}\left(\frac{\omega}{|\omega|} \mathbb{R}\left(\frac{\omega}{|\omega|}\mathbb{R}\left(\frac{\omega}{|\omega|}\mathbb{R} \left(\frac{\omega}{|\omega|}\mathbb{R}\left(\frac{\omega}{|\omega|}\mathbb{R}\left( \frac{\omega}{|\omega|}\mathbb{R}\left(\frac{\omega}{|\omega|}\mathbb{R}\left( \frac{\omega}{|\omega|}\mathbb{R}\left(\frac{\omega}{|\omega|}\mathbb{R}\left(\frac{ \omega}{|\omega|}\mathbb{R}\left(\frac{\omega}{|\omega|}\mathbb{R}\left(\frac{\omega}{| \omega|}\mathbb{R}\left(\frac{\omega}{|\omega|}\mathbb{R}\left(\frac{\omega}{|\omega|} \mathbb{R}\left(\frac{\omega}{
If \(\hat{\psi}\) is only supported on positive frequencies, then we can simplify the following
\[\sqrt{\frac{\pi}{2}}\left|\int_{\omega=-\infty}^{\infty}\int_{\omega \in\mathbb{R}}\frac{\omega}{|\omega|}\frac{\omega^{\prime}}{|\omega^{\prime}|} \mathbb{R}\left(\hat{\psi}_{K^{-}}^{\ast}(-\omega)\hat{\psi}_{K^{-}}(\omega- \omega^{\prime})e^{-i\eta_{\omega}\omega^{\prime}}\right)\,d\omega^{\prime}d\omega\right| =\sqrt{\frac{\pi}{2}}\left|\int_{\omega=-\infty}^{\infty}\int_{ \omega\in\mathbb{R}}\frac{\omega^{\prime}}{|\omega^{\prime}|}\mathbb{R}\left( \hat{\psi}_{K^{-}}^{\ast}(-\omega)\hat{\psi}_{K^{-}}(\omega-\omega^{\prime})e^ {-i\eta_{\omega}\omega^{\prime}}\right)\,d\omega^{\prime}d\right. \tag{7.3.59}\] \[=\sqrt{\frac{\pi}{2}}\left|\int_{\omega\in\mathbb{R}}\frac{\omega ^{\prime}}{|\omega|^{\prime}}\mathbb{R}\left(\widehat{\psi_{K^{-}}}^{\ast}|^{ 2}(-\omega^{\prime})e^{-i\eta_{\omega}\omega^{\prime}}\right)\,d\omega^{ \prime}\right|\] \[=\sqrt{\frac{\pi}{2}}\left|\int_{\omega\in\mathbb{R}}\mathbb{R} \left(\widehat{\psi_{K^{-}}}^{\ast}|^{2}(\omega^{\prime})e^{i\eta_{\omega} \omega^{\prime}}\right)\,d\omega^{\prime}\right|=|\psi_{K^{-}}(v_{c})|^{2}\] \[\leq I.E.[\psi.\tau,v_{c},u_{1},u_{0}].\]
### Reflection off the Matter Cloud
In this section we will consider evolving our solution in the small compact (in \(r\), \(t^{\star}\) coordinates) region, given by \(R_{2}:=\{v\leq v_{c},u\leq u_{1}\}\cap\{r\geq r_{b}\}\).
We will consider the surface \(r=r_{b}(t^{\star})\) to be instead parametrised by \(v=v_{b}(u)\), or equivalently by \(u=u_{b}(v)=v_{b}^{-1}(v)\), as in (3.3.7). The final aim of this section will be to prove the following result:
**Proposition 7.4.1** (\(H^{1/2}\) Error from the Reflection): _Let \(\psi_{+}:\mathbb{R}\to\mathcal{C}\) be a smooth, compactly supported function. Let \(\psi\) be the solution of (7.1.2), as given by Theorem 6.1, with radiation field on \(\mathcal{I}^{+}\) equal to \(Y_{i,m}\psi_{+}\), and which vanishes on \(\mathcal{H}^{+}\). Let \(f:\mathbb{R}\to\mathbb{R}\) be a smooth compactly supported function such that \(f(0)=1\), and define_
\[\psi_{0}(u):=\begin{cases}\psi(u,v_{c})-f(u-u_{1})\psi(u_{1},v_{c})&u\geq u_{ 1}\\ 0&u<u_{1}\end{cases}. \tag{7.4.1}\]
\[\psi_{1}(v):=\begin{cases}\psi(u_{1},v)-\left(1-f(u_{b}(v)-u_{1})\right)\psi( u_{1},v_{c})&v\in[u_{1}(u_{1}),v_{c}]\\ 0&v\notin[u_{1}(u_{1}),v_{c}]\end{cases}. \tag{7.4.2}\]
_Then there exists a constant \(A(M,q,r_{b})\) such that_
\[\int_{\omega=-\infty}^{\infty}(\kappa+|\omega|)\left|\left|\hat{\psi}_{0} \right|^{2}-\widehat{\psi_{1}\circ v_{b}}\right|^{2}\left|d\omega\leq\begin{cases} A_{V}(\sqrt{u_{1}}e^{-\frac{3\pi}{2}u_{1}}I.T[\gamma_{m}\psi_{+}]&|q|<1\\ \frac{4\pi}{4!}I.T[\gamma_{m}\psi_{+}]&|q|=1\end{cases}, \tag{7.4.3}\]
_provided \(u_{1}<u_{0}\). Here \(\kappa\) is the surface gravity, as in (1.8) and \(I.T.[\psi_{+}]\) is as defined in the statement of Theorem 7.1_
**Remark 7.4.2**: _This Proposition gives the exact definition of the "\(\approx\)" in (7.2.2) in Section 7.2._
**Proposition 7.4.3** (Reflection Energy Bounds): _Let \(\psi\) be a smooth solution to (7.1.2) subject to (7.1.3). Define the function \(\psi_{refl}\) by_
\[\psi_{refl}:=\psi(u,v_{c})-\psi(u_{b}(v),v_{c}). \tag{7.4.4}\]
_Then there exists a constant \(A(\mathcal{M})\) such that_
\[\int_{v_{b}(u_{1})}^{v_{c}}\left|\frac{du_{b}}{dv}\right|^{-1}|\partial_{\psi }\psi(u_{1},v)-\partial_{v}\psi_{refl}(u_{1},v)|^{2}dv\leq\begin{cases}Ae^{-3 \pi u_{1}}I.T.[\psi_{+}]&|q|<1\\ \frac{4\pi^{2}_{1}I.T[\psi_{+}]}{u_{1}^{2}}&|q|=1\end{cases}, \tag{7.4.5}\]
_for any sufficiently large \(u_{1}\)._
_Furthermore, there exists a constant \(B(\mathcal{M})\) such that_
\[\int_{v_{b}(u_{1})}^{v_{c}}\left|\frac{du_{b}}{dv}\right|\left|\psi(u_{1},v)- \psi_{refl}(u_{1},v)\right|^{2}dv\leq\begin{cases}Bu_{1}e^{-3\pi u_{1}}I.T.[ \psi_{+}]&|q|<1\\ \frac{Bu_{1}I.T[\psi_{+}]}{u_{1}^{2}}&|q|=1\end{cases}. \tag{7.4.6}\]
_Finally, there exists a constant \(C(\mathcal{M})\) such that_
\[\int_{v_{b}(u_{1})}^{v_{c}}|\psi(u_{1},v)|^{2}dv\leq\begin{cases}C\frac{I.T[ \psi_{+}]}{(1+1)}e^{-\pi u_{1}}&|q|<1\\ C\frac{I.T[\psi_{+}]}{(1+1)}e^{-\pi u_{1}}&|q|=1.\end{cases} \tag{7.4.7}\]
_If \(\psi_{+}\) decays quickly enough as \(u\to\infty\), then_
\[\int_{v_{b}(u_{1})}^{v_{c}}|\psi(u_{1},v)|^{2}dv\leq e^{\pi(u_{0}-3u_{0})}\int _{u=-\infty}^{\infty}e^{\pi u}|\partial_{u}\psi_{+}|^{2}du. \tag{7.4.8}\]
**Remark 7.4.4**: _Note the form of \(\psi_{refl}\) here is the solution to the equation_
\[\partial_{u}\partial_{b}\psi_{refl}=0, \tag{7.4.9}\]
_with initial conditions_
\[\psi_{refl}(u,v_{c})=\psi(u,v_{c}). \tag{7.4.10}\]
_Therefore, \(\psi_{refl}\) is reflected as if it were in Minkowski spacetime. Thus, Proposition 7.4.3 gives a bound on how much the solution differs from a reflection in \(1+1\) dimensional Minkowski._
_This solution \(\psi_{refl}\) takes the form:_
\[\psi_{refl}(u,v)=\psi(u,v_{c})-\psi(u_{b}(v),v_{c}). \tag{7.4.11}\]
Proof.: We begin by considering how the derivatives of \(\psi\) and \(\psi_{r\ell/l}\) vary on the surface of the matter cloud, and applying (7.1.2):
\[\int_{S_{[u_{1},\infty)}}(u-u_{1})^{p}|\partial_{u}\psi-\partial_{u} \psi_{r\ell/l}|^{2}du \leq\int_{S_{[u_{1},\infty)}}(u-u_{1})^{p}\left|\int_{u_{1}(u_{1}) }^{e_{c}}\partial_{u}\partial_{v}\psi dv\right|^{2}du=\int_{S_{[u_{1},\infty)}} (u-u_{1})^{p}\left|\int_{r_{u}(u_{1})}^{e_{c}}V\psi dv\right|^{2}du \tag{7.4.12}\] \[\leq\int_{S_{[u_{1},\infty)}}(u-u_{1})^{p}\|1\|_{L^{2}(\Sigma_{u} )}^{2}\|V\psi\|_{L^{2}(\Sigma_{u})}^{2}du\] \[\leq\int_{u_{1}}^{\infty}\int_{r=u_{1}(u)}^{e_{c}}(u-u_{1})^{p}(v -v_{b}(u))V^{2}|\psi|^{2}dvdu.\]
We then proceed to do the same to compare the derivatives of \(\psi\) on the surface of the matter cloud to the derivatives of \(\psi\) on \(\Sigma_{u_{1}}\):
\[\int_{\Sigma_{u_{1}}}|u_{1}^{\prime}|^{-1}(u_{b}(v)-u_{1})^{p}| \partial_{u}\psi(u_{1},v)-\partial_{u}\psi(u_{b}(v),v)|^{2}du \leq\int_{\Sigma_{u_{1}}}|u_{b}^{\prime}|^{-1}(u_{b}(v)-u_{1})^{p }\left|\int_{u_{1}}^{e_{c}}\partial_{u}\partial_{v}\psi dv\right|^{2}dv \tag{7.4.13}\] \[=\int_{\Sigma_{u_{1}}}|u_{b}^{\prime}|^{-1}(u_{b}(v)-u_{1})^{p} \left|\int_{u_{1}(u_{b})}^{e_{c}}V\psi dv\right|^{2}dv\] \[\leq\int_{\Sigma_{u_{1}}}|u_{b}^{\prime}|^{-1}(u_{b}(v)-u_{1})^{p }\|V\sqrt{V}\|_{L^{2}(\Sigma_{u})}^{2}\left\|V\sqrt{V}\psi\right\|_{L^{2}( \Sigma_{u})}^{2}dv\] \[\leq A\int_{u=u_{1}}^{e_{c}}\int_{v=u_{1}(u)}^{e_{c}}|u_{b}^{ \prime}|^{-1}(u_{b}(v)-u_{1})^{p}\left(\int_{u_{1}}^{\infty}V(u^{\prime},v_{c} )du^{\prime}\right)V|\psi|^{2}dvdu.\]
Combining equations (7.4.12) and (7.4.13), we obtain
\[\int_{\Sigma_{u}}|u_{b}^{\prime}|^{-1}(u_{b}(v)-u_{1})^{p}| \partial_{v}\psi-\partial_{v}\psi_{r\ell/l}|^{2}dv \tag{7.4.14}\] \[\leq 2\int_{u=u_{1}}^{\infty}\int_{v=u_{1}(u)}^{e_{c}}\left((v-v_{ b}(u))(u-u_{1})^{p}V+|u_{b}^{\prime}|^{-1}(u_{b}(v)-u_{1})^{p}\left(\int_{u_{1}}^{ \infty}V(u^{\prime},v_{c})du^{\prime}\right)\right)V|\psi|^{2}dvdu\] \[\leq\begin{cases}A(l+1)^{2}\int_{u=u_{1}}^{\infty}\int_{v=u_{1}(u )}^{e_{c}}\left(u_{b}(v)^{p-2}+u_{b}(v)^{p-1}e^{-\kappa n_{1}}\right)V|\psi|^{2 }dvdu&|q|<1\\ A(l+1)^{2}\int_{u=u_{1}}^{\infty}\int_{v=u_{1}(u)}^{e_{c}}\left((v-v_{b}(u))u^{ p-2}+\frac{u_{1}(v)^{p-2}}{u_{1}-v_{c}}\right)V|\psi|^{2}dvdu&|q|=1\end{cases},\]
where we have used the behaviour of \(u_{b}\) and \(v_{b}\) for late times to obtain the final line.
We first consider the extremal case, and the non-extremal case for sufficiently good decay. We will be using energy boundedness results from [7] (Theorem 1).
This tells us that the non-degenerate energy of \(\phi\) on \(\Sigma_{u}\) bounds the non-degenerate energy of \(\phi\) on \(\Sigma_{u}\), _i.e._ there exists a constant \(A(\mathcal{M})\) such that for all \(u^{\prime}>u_{1}\)
\[\int_{u_{1}}^{\infty}\frac{|\partial_{u}\phi(u,v_{c})|^{2}}{1- \frac{2M}{r}+\frac{q^{2}M^{2}}{r^{2}}}+l(l+1)\left(1-\frac{2M}{r}+\frac{q^{2} M^{2}}{r^{2}}\right)|\phi(u,v_{c})|^{2}du \tag{7.4.15}\] \[\geq A\int_{u_{1}(u^{\prime})}^{e_{c}}\frac{|\partial_{v}\phi(u ^{\prime},v)|^{2}}{1-\frac{2M}{r}+\frac{q^{2}M^{2}}{r^{2}}}+l(l+1)\left(1- \frac{2M}{r}+\frac{q^{2}M^{2}}{r^{2}}\right)|\phi(u^{\prime},v)|^{2}dv\] \[\geq\frac{1}{\left(1-\frac{2M}{r}(u^{\prime},v_{c})+\frac{q^{2} M^{2}}{r^{(\prime},v_{c})^{p}}\right)\left(v_{c}-v_{b}(u^{\prime})\right)^{2}} \int_{u_{1}(u^{\prime})}^{v_{c}}|\phi(u^{\prime},v)|^{2}dv.\]
Here we have used that \(\phi\) vanishes on the surface of the matter cloud in order to apply Poincare's inequality. In the extremal case, we can bound this non-degenerate energy as follows:
\[\int_{u_{1}}^{\infty}\frac{|\partial_{u}\phi(u,v_{c})|^{2}}{1- \frac{2M}{r}+\frac{q^{2}M^{2}}{r^{2}}}+ l(l+1)\left(1-\frac{2M}{r}+\frac{q^{2}M^{2}}{r^{2}} \right)|\phi(u,v_{c})|^{2}du \tag{7.4.16}\] \[\leq A\int_{u_{1}}^{\infty}(M^{2}+u^{2})|\partial_{u}\phi(u,v_{c} )|^{2}+l(l+1)\left(1-\frac{2M}{r}+\frac{q^{2}M^{2}}{r^{2}}\right)|\phi(u,v_{c})| ^{2}du\] \[\leq A\int_{u_{1}}^{\infty}(M^{2}+(u-u_{0})^{2}+u_{0}^{2})|\partial _{u}\phi(u,v_{c})|^{2}+l(l+1)\left(1-\frac{2M}{r}+\frac{q^{2}M^{2}}{r^{2}} \right)|\phi(u,v_{c})|^{2}du.\]
We can use (7.3.17) and (7.3.18) to bound this by \(I.T.[\psi_{+}]\) plus \(u_{0}^{2}\) times \(T\) energy. Combining (7.4.15) and (7.4.16), we obtain (7.4.7) for the extremal case. We then proceed by combining this with (7.4.14):
\[A(l+1)^{2}\int_{u=u_{1}}^{\infty}\int_{v=u_{1}(u)}^{e_{c}} \left(\left(v-v_{b}(u))u^{p-2}+\frac{u_{b}(v)^{p-2}}{u_{1}-v_{c}}\right)V| \psi|^{2}dvdu \tag{7.4.17}\] \[\leq A(l+1)^{2}\sup_{u\geq u_{1},v<v_{c}}\left(\left((v-v_{b}(u))u^ {p-2}+\frac{u_{b}(v)^{p-2}}{u_{1}-v_{c}}\right)V\right)\] \[\leq A(l+1)^{-2}\sup_{u\geq u_{1},v<v_{c}}\left(\left((v-v_{b}(u))u^ {p-2}+\frac{u_{b}(v)^{p-2}}{u_{1}-v_{c}}\right)V\right)\] \[\qquad\qquad\int_{u_{1}}^{\infty}\left(\left(1-\frac{2M}{r(u^{ \prime},v_{c})}+\frac{q^{2}M^{2}}{r(u^{\prime},v_{c})^{2}}\right)(v_{c}-v_{b}(u^{ \prime}))^{2}\right)u_{0}^{2}I.T.[\psi_{+}]du\] \[\leq\frac{A(l+1)^{-2}u_{0}^{2}I.T.[\psi_{+}]}{u_{1}^{2}}\sup_{u \geq u_{1},v<v_{c}}\left(\left((v-v_{b}(u))u^{p-2}+\frac{u_{b}(v)^{p-2}}{u_{1}- v_{c}}\right)V\right).\]
The two cases we will be considering are the cases \(p=0\) and \(p=2\). These give the following results:
\[\int_{\Sigma_{u}}|u^{\prime}_{b}|^{-1}|\partial_{v}\psi-\partial_{v }\psi_{refl}|^{2}dv \leq\frac{A\Phi_{0}^{2}TZ.T.[\psi_{+}]}{u_{1}^{2}} \tag{7.4.18}\] \[\int_{\Sigma_{u}}|u^{\prime}_{b}|^{-1}(u_{b}(v)-u_{1})^{2}| \partial_{v}\psi-\partial_{v}\psi_{refl}|^{2}dv \leq\frac{A\Phi_{0}^{2}TZ.T.[\psi_{+}]}{u_{1}^{2}}. \tag{7.4.19}\]
In the less well behaved sub-extremal case, we instead consider energy currents in order to bound
\[\int_{u=u_{1}}^{\infty}\int_{v=v_{b}(u)}^{v_{c}}V|\psi|^{2}dvdu\leq A\int_{u=u_ {1}}^{\infty}\int_{v=v_{b}(u)}^{v_{c}}V|\phi|^{2}dvdu. \tag{7.4.20}\]
We apply divergence theorem to the following vector field
\[J:=M\partial_{u}+\alpha(v-v_{c})\partial_{v}-\left(1-\frac{2M}{r}+\frac{q^{2 }M^{2}}{r^{2}}\right)\left(M-\alpha(v-v_{c})\right)\frac{\phi\nabla\phi}{2r}+ \phi^{2}\nabla\left(\frac{M-\alpha(v-v_{c})}{4r}\left(1-\frac{2M}{r}+\frac{q^ {2}M^{2}}{r^{2}}\right)\right), \tag{7.4.21}\]
in the region \(u\geq u_{1},v\in[v_{b}(u),v_{c}]\). Here, \(\alpha=\alpha(\mathcal{M})>0\) is chosen such that \(\alpha^{-1}\geq 8(1-q^{2})^{-1}\) and \(\alpha^{-1}\geq(v-v_{c})u^{\prime}_{b}(v)\) for \(v-v_{c}\) sufficiently small.
A simple calculation reveals the following results:
\[\nabla.J =\left(\frac{\alpha}{2r^{2}}+\frac{M-\alpha(v_{c}-v)}{2r^{3}} \left(\frac{M}{r}\left(1-\frac{q^{2}M}{r}\right)-\left(1-\frac{2M}{r}+\frac{q ^{2}M^{2}}{r^{2}}\right)\right)\right)l(l+1)|\phi|^{2} \tag{7.4.22}\] \[\qquad+\left(\frac{M^{4}(1-q^{2})}{r^{6}}\left(2-\frac{Mq^{2}}{r }\right)-\left(\frac{q^{2}M^{4}(8-q^{2})}{r^{6}}+\frac{q^{2}M^{5}(4-3q^{2})} {r^{7}}\right)\alpha\right)|\phi|^{2}\] \[\qquad+\left(O\left(1-\frac{2M}{r}+\frac{q^{2}M^{2}}{r^{2}} \right)+O(v_{c}-v)\right)|\phi|^{2}\geq 0\] \[-du(J) =\frac{2\alpha(v_{c}-v)|\partial_{v}\phi|^{2}}{1-\frac{2M}{r}+ \frac{q^{2}M^{2}}{r^{2}}}+\frac{MI(l+1)|\phi|^{2}}{2r^{2}}-\partial_{v}\left( \frac{(M-\alpha(v_{c}-v))|\phi|^{2}}{2r}\right)\] \[\qquad+\left(\frac{M^{2}}{2r^{3}}\left(1-\frac{q^{2}M}{r}\right) +\frac{\alpha}{2r}+\frac{\alpha M^{2}}{2r^{3}}\left(4-q^{2}-\frac{q^{2}M}{r} \right)\right)|\phi|^{2}\] \[\qquad+\left(O\left(1-\frac{2M}{r}+\frac{q^{2}M^{2}}{r^{2}} \right)+O(v_{c}-v)\right)|\phi|^{2}\] \[-dv(J)|_{v=v_{b}} =\frac{2M|\partial_{v}\phi|^{2}}{1-\frac{2M}{r}+\frac{q^{2}M^{2} }{r^{2}}}-\frac{M\phi\partial_{u}\phi}{r}+\frac{M}{4r^{2}}\left(\frac{2M}{r} \left(1-\frac{q^{2}M}{r}\right)+\left(1-\frac{2M}{r}+\frac{q^{2}M^{2}}{r^{2} }\right)\right)|\phi|^{2}\] (7.4.23) \[\leq\frac{4M|\partial_{v}\phi|^{2}}{1-\frac{2M}{r}+\frac{q^{2}M^{ 2}}{r^{2}}}\] \[(du-u^{\prime}_{b}dv)(J)|_{S} =\frac{2u^{\prime}_{b}\left(1-\alpha(v-v_{c})u^{\prime}_{b} \right)|\partial_{v}\phi|^{2}}{1-\frac{2M}{r}+\frac{q^{2}M^{2}}{r^{2}}}\geq 0, \tag{7.4.24}\]
for \(v_{c}-v\) sufficiently small. Here'sufficiently small' only depends on the background RNOS model \(\mathcal{M}\).
We then apply divergence theorem:
\[\int\nabla.J+\int_{\Sigma_{u_{0}}\cap\{v\leq v_{c}\}}(-du(J))+\int((du-v^{ \prime}_{b}dv)(J))=\int_{\Sigma_{u_{c}}}(-dv(J)), \tag{7.4.25}\]
to obtain
\[\int_{\Sigma_{u_{0}}\cap\{v\leq v_{c}\}}(v_{c}-v)|\partial_{v}\phi|^{2}dv\leq \frac{2M}{\alpha}\int_{\Sigma_{v_{c}}}|\partial_{u}\phi|^{2}du. \tag{7.4.26}\]
An application of Hardy's inequality to the function \(f\left(\frac{v_{c}-v}{4M}\right)\phi\), for \(f\) a smooth function which vanishes at \(0\) yields
\[\int_{v_{b}}^{v_{c}}\frac{f^{2}|\phi|^{2}}{(v_{c}-v)^{2}}dv \leq 4\int_{v_{c}}^{v_{c}}f^{2}|\partial_{v}\phi|^{2}-\frac{ff^{ \prime}}{M}\partial_{v}\left(|\phi|^{2}\right)+\frac{f^{\prime 2}}{M^{2}}|\phi|^{2}dv \tag{7.4.27}\] \[\leq 4\int_{v_{b}}^{v_{b}}f^{2}|\partial_{v}\phi|^{2}-\frac{ff^{ \prime\prime}}{M^{2}}|\phi|^{2}dv. \tag{7.4.28}\]
Choosing \(f(x)=x\sqrt{-\log(x)}\) gives
\[\int_{v_{b}}^{v_{c}}-\log\left(\frac{v_{c}-v}{M}\right)|\phi|^{2}dv\leq 4\int_{v_{b}}^{v_{c}}-\log \left(\frac{v_{c}-v}{M}\right)(v_{c}-v)^{2}|\partial_{v}\phi|^{2}+\left(\frac{1}{ 2}+\frac{1}{4(-\log\left(\frac{v_{c}-v}{M}\right))}\right)\frac{|\phi|^{2}}{M^ {2}}dv. \tag{7.4.29}\]
Provided \(v_{c}-v\leq\frac{M}{r}\), then this can be rearranged for
\[\int_{v_{b}}^{v_{c}}-\log\left(\frac{v_{c}-v}{M}\right)|\phi|^{2}dv\leq 16\int_{v_{b}}^{v_ {c}}-\log\left(\frac{v_{c}-v}{M}\right)(v_{c}-v)^{2}|\partial_{v}\phi|^{2}, \tag{7.4.30}\]
and as we know the form \(u^{\prime}_{b}\) takes for \(v\) close to \(v_{b}\), we have that
\[\int_{\Sigma_{u}\cap\{v\leq v_{c}\}}u^{\prime}_{b}(v)|\phi|^{2}dv \leq A\int_{\Sigma_{u}\cap\{v\leq v_{c}\}}-\log\left(\frac{v_{c}-v}{M} \right)(v_{c}-v)^{2}|\partial_{v}\phi|^{2} \tag{7.4.31}\] \[\leq-A\log\left(\frac{v_{c}-v_{b}(u_{1})}{M}\right)(v_{c}-v_{b}(u _{1}))\int_{\Sigma_{u_{c}}}|\partial_{v}\phi|^{2}du.\]
This can be used to immediately obtain (7.4.7), and we can also apply it to (7.4.14) to obtain
\[\int_{\Sigma_{u}}|u_{\text{t}}^{\prime}|^{-1}(u_{\text{t}}(v)-u_{1})^ {p}|\partial_{v}\psi-\partial_{v}\psi_{ref}|^{2}dv \leq A(l+1)^{2}e^{-\kappa u_{1}}\left|\log\left(\frac{v_{c}-v_{b}(u )}{M}\right)\right|^{p-1}(v_{c}-v_{b}(u_{1}))\int_{\Sigma_{u_{c}}}|\partial_{v }\phi|^{2}du\int_{u_{1}}^{\infty}V(u,v_{c})du\] \[\leq\begin{cases}Ae^{-3\kappa u_{1}}I.T.[\psi_{+}]&p=0\\ Au_{1}e^{-3\kappa u_{1}}I.T.[\psi_{+}]&p=2\end{cases}. \tag{7.4.32}\]
Finally, we can use Hardy's inequality on the function \(\psi(u_{1},v_{b}(u))-\psi_{ref}(u_{1},v_{b}(u))\), as this vanishes on \(u=u_{1}\), to get
\[\int_{u_{1}(u_{1})}^{v_{c}}u_{b}^{\prime}|\psi(u_{1},v)-\psi_{ref} (u_{1},v)|^{2}dv =\int_{u_{1}}^{\infty}|\psi(u_{1},v_{b}(u))-\psi_{ref}(u_{1},v_{b }(u))|^{2}du \tag{7.4.33}\] \[\leq 4\int_{u_{1}}^{\infty}(u-u_{1})^{2}|\partial_{v}\psi(u_{1},v _{b}(u))-\partial_{v}\psi_{ref}(u_{1},v_{b}(u))|^{2}du\] \[=4\int_{u_{1}(u_{1})}^{v_{c}}(u_{b}^{\prime}(v))^{-1}(u_{b}(u)-u_ {1})^{2}|\partial_{v}\psi(u_{1},v)-\partial_{v}\psi_{ref}(u_{1},v)|^{2}du.\]
The proof of (7.4.8) requires using a weighted \(T\)-energy estimate to obtain
\[\int_{u=-\infty}e^{\kappa(u-u_{0})}|\partial_{u}\psi_{+}(u-u_{0}) |^{2}du \geq\int_{u=-\infty}e^{\kappa(u-u_{0})}|\partial_{u}\psi(u,v_{c}) |^{2}du \tag{7.4.34}\] \[\geq ae^{-\kappa u_{0}}\int_{u=u_{1}}^{\infty}\frac{|\partial_{u} \psi(u,v_{c})|^{2}}{1-\frac{2M}{p^{2}}+\frac{4M^{2}}{p^{2}}}du\] \[\geq ae^{-\kappa u_{0}}\int_{v=u_{1}(u_{1})}^{v}\frac{|\partial_{ v}\psi(u_{1},v)|^{2}}{1-\frac{2M}{p^{2}}+\frac{4M^{2}}{p^{2}}}dv\] \[\geq ae^{-\kappa(u_{0}-u_{1})}|\partial_{v}\psi(u_{1},v)|^{2}dv \geq ae^{-\kappa(u_{0}-3u_{1})}|\psi(u_{1},v)|^{2}dv.\]
As required. Here we have again used Theorem 1 from [7], followed by Poincare's inequality.
Note we still have other errors occurring across the rest of \(\Sigma_{u_{c}}\):
\[\int_{u=-\infty}^{u_{1}}|\partial_{u}\psi(u,v_{c})-\partial_{u}\psi_{R^{-}}(u )|^{2}du \leq\begin{cases}Ae^{\kappa u_{1}}I.E.[\psi_{+},v_{c},u_{1},u_{0}]&|q|<1\\ AI.E.[\psi_{+},v_{c},u_{1},u_{0}]&|q|=1\end{cases}. \tag{7.4.35}\]
This brings us to the proof of Proposition 7.4.1.
Proof of Proposition 7.4.1.: This proof is similar to that of Proposition 7.3.1. We consider \(|q|<1\) first
\[\int_{u=-\infty}^{\infty}\left(\kappa+|\omega|\right)\left|| \dot{\psi}_{0}|^{2}-\widehat{|\psi_{1}\circ v_{b}|^{2}}\right|d\omega \leq A\left(\int_{u=-\infty}^{\infty}\left|\dot{\psi}_{0}-\widehat {\psi_{1}\circ v_{b}}\right|^{2}d\omega\right)^{1/2}\left(\int_{u=-\infty}^{ \infty}\left(\kappa^{2}+\omega^{2}\right)\left(|\widehat{|\psi_{1}\circ v_{b }|^{2}}+|\dot{\psi}_{0}|^{2}\right)d\omega\right)^{1/2}\] \[\leq A\left(\int_{u_{1}}^{\infty}|\psi_{0}-\dot{\psi}_{1}\circ v_{ b}|^{2}du\right)^{1/2}\left(\int_{u_{1}}^{\infty}|\psi_{1}\circ v_{b}|^{2}+|\psi_{0}|^{2}+| \partial_{u}(\psi_{1}\circ v_{b})|^{2}+|\partial_{u}\psi_{0}|^{2}du\right)^{1/2}\] \[\leq A\left(\int_{u_{1}}^{\infty}|\psi(u_{1},v_{c})-\psi(u,v_{c})- \psi(u_{1},v_{b}(u))|^{2}du\right)^{1/2} \tag{7.4.36}\] \[\qquad\left(\int_{u_{1}}^{\infty}|\psi_{0}-\dot{\psi}_{1}\circ v_{ b}|^{2}+|\psi_{0}|^{2}+|\partial_{u}(\psi_{0}-\psi_{1}\circ v_{b})|^{2}+| \partial_{u}\psi_{0}|^{2}du\right)^{1/2}\] \[\leq A\left(I.T.[\psi_{+}|u_{1}e^{-3\kappa u_{1}})^{1/2}\left(I.T. [\psi_{+}]\right)^{1/2}\] \[\leq A\sqrt{u_{1}}e^{-\frac{4M}{p}u_{1}}I.T.[\psi_{+}].\]
As required. Here, we have used Proposition 7.4.3 to reach the penultimate line.
We next consider \(|q|=1\), where \(\kappa=0\)
\[\int_{u=-\infty}^{\infty}|\omega|\left||\dot{\psi}_{0}|^{2}- \widehat{|\psi_{1}\circ v_{b}|^{2}}\right|d\omega \leq A\left(\int_{u=-\infty}^{\infty}\omega^{2}\left|\dot{\psi}_{0}- \widehat{\psi_{1}\circ v_{b}}\right|^{2}d\omega\right)^{1/2}\left(\int_{u=- \infty}^{\infty}\left(|\widehat{|\psi_{1}\circ v_{b}|^{2}}+|\dot{\psi}_{0}|^{2} \right)d\omega\right)^{1/2} \tag{7.4.37}\] \[\leq A\left(\int_{u_{1}}^{\infty}|\partial_{u}\psi_{0}-\partial_{u}( \psi_{1}\circ v_{b})|^{2}\,du\right)^{1/2}\left(\int_{u_{1}}^{\infty}|\psi_{1} \circ v_{b}|^{2}+|\psi_{0}|^{2}du\right)^{1/2}\] \[\leq A\left(\frac{I.T.[\psi_{+}]}{u_{1}^{3}}\right)^{1/2}\left(I.T. [\psi_{+}]\right)^{1/2}\] \[\leq\frac{A}{u_{1}^{2}}I.T.[\psi_{+}].\]
### High Frequency Transmission
We now consider how our solution on \(\Sigma_{u_{1}}\) is transmitted to \(\mathcal{I}^{-}\), ultimately resulting in the following proposition:
**Proposition 7.5.1** (Hawking Radiation Error from High Frequency Transmission): _Let \(\psi_{+}:\mathbb{R}\to\mathcal{C}\) be a smooth, compactly supported function. Let \(\psi\) be the solution of (7.1.2), as given by Theorem 6.1, with radiation field on \(\mathcal{I}^{+}\) equal to \(Y_{1,m}\psi_{+}(u-u_{0})\), and which vanishes on \(\mathcal{H}^{+}\). Let \(f\) be a smooth compactly supported function such that \(f(0)=1\), and define_
\[\psi_{1}(v):=\begin{cases}\psi(u_{1},v)-\left(1-f\left((l+1)^{2}(u_{0}(v)-u_{1} )\right)\right)\psi(u_{1},v_{c})&u\geq u_{1}\\ 0&u<u_{1}\end{cases}. \tag{7.5.1}\]
_Let \(\psi_{-}\) be the past radiation field. Then there exists a constant \(A(\mathcal{M})\) such that_
\[\int_{\omega=-\infty}^{\infty}|\omega|\left|\left|\dot{\psi}_{-} \right|^{2}-|\dot{\psi}_{1}|^{2}-|\dot{\psi}_{RN}|^{2}\right|d\omega\leq\begin{cases} A\left(I.T.[Y_{1,m}\psi_{+}]e^{-\kappa u_{1}}+e^{2\kappa u_{1}}I.E.[Y_{1,m} \psi_{+},v_{c},u_{1},u_{0}]\right)&|q|<1\\ A\left(\frac{I.T.[Y_{1,m}\psi_{-}]e^{-\kappa u_{1}}}{u_{1}^{3/2}}+u_{1}^{7/2}I.E.[Y_{1,m}\psi_{+},v_{c},u_{1},u_{0}]\right)&|q|=1\end{cases}. \tag{7.5.2}\]
_Here \(\kappa\) is the surface gravity, as in (1.8) and \(I.T.[\psi_{+}]\), \(I.E.[\psi_{+},v_{c},u_{1},u_{0}]\) are as defined in the statement of Theorem 7.1. In the extremal case, we will also require \(u_{1}>u_{0}/2\)._
_Suppose further that \(|q|<1\), and that for fixed \(\delta>0\), \(\psi_{+}\), \(\psi_{M^{-}}\) and \(\psi_{RN}\) decay sufficiently fast that all \(I.E.[\psi_{+},v_{c},(1-\delta)u_{0},u_{0}]\) terms decay faster than \(e^{-\lambda_{0}(1-\delta)u_{0}}\). Then there exists a constant \(B(\mathcal{M},\delta,\psi_{+})\) such that_
\[\int_{\omega=-\infty}^{\infty}|\omega|\left|\left|\dot{\psi}_{-} \right|^{2}-|\dot{\psi}_{1}|^{2}-|\dot{\psi}_{RN}|^{2}\right|d\omega\leq Be^{- \kappa(1-\delta)u_{0}}. \tag{7.5.3}\]
**Remark 7.5.2**: _This Proposition gives the exact definition of the \(\mathbf{\approx}\)" in (7.2.4) in Section 7.2_
We start this section by bounding the energy through the surface \(\Sigma_{u_{1}(u_{1})}\), as all other energy is transmitted to \(\mathcal{I}^{-}\). However, the map taking solutions on space-like surfaces back to their past radiation fields is bounded with respect to the _non-degenerate_ energy (Theorem 7.1 in [7]). Thus we need to look at non-degenerate energy through \(\Sigma_{u_{1}(u_{1})}\). The non-degenerate energy on a surface \(\Sigma_{v}\) takes the form
\[\int_{\Sigma_{v}}\frac{|\partial_{u}\psi|^{2}}{1-\frac{2M}{r}+ \frac{2M^{2}\psi^{2}}{r^{2}}}du, \tag{7.5.4}\]
where we can absorb the \(\psi\) term using Hardy's inequality:
\[\int_{\Sigma_{v}}\left(1-\frac{2M}{r}+\frac{M^{2}q^{2}}{r^{2}} \right)\frac{|\psi^{2}|}{(r-r_{b})^{2}}du=2\int_{\Sigma_{v}}\frac{|\psi|^{2}} {(r-r_{b})^{2}}dr\leq 8\int_{\Sigma_{v}}|\partial_{v}\psi|^{2}dr=4\int_{ \Sigma_{v}}\frac{|\partial_{u}\psi|^{2}}{1-\frac{2M}{r}+\frac{M^{2}\psi^{2}} {r^{2}}}du. \tag{7.5.5}\]
We have the following proposition:
**Proposition 7.5.3** (High Frequency Reflection in Pure Reissner-Nordstrom): _Let \(\psi\) be a smooth solution to (7.1.2), and let \(v_{2}\in\mathbb{R}\). Then there exists a constant \(A(\mathcal{M})\) such that_
\[\int_{\Sigma_{u_{1}}(v)}\frac{1}{V}|\partial_{u}\psi|^{2}du\leq A \left(\int_{\Sigma_{v_{1}}\cap\{u_{1}\leq u_{1}\}}\frac{1}{V}|\partial_{u} \psi|^{2}du+\int_{v=u_{1}(u_{1})}^{v_{c}}\left|\left|\psi(u_{1},v)\right|^{2}- \left|\psi_{-}(v)\right|^{2}\right|dv\right). \tag{7.5.6}\]
_There also exists a constant \(B(\mathcal{M})\) such that_
\[\int_{\Sigma_{u_{1}}(v_{1})}\frac{|\partial_{u}\psi|^{2}}{1-\frac {2M}{r}+\frac{\sigma^{2}M^{2}}{r^{2}}}du\leq B\left(\int_{\Sigma_{v},\cap\{u _{1}\leq u_{1}\}}\frac{|\partial_{u}\psi|^{2}}{1-\frac{2M}{r}+\frac{\sigma^{2}M ^{2}}{r^{2}}}du+(l+1)^{2}\int_{\Sigma_{u_{1}}\cap\{v\leq v_{1}\}}|\psi|^{2}dv \right). \tag{7.5.7}\]
_Furthermore, there exists a constant \(C(\mathcal{M})\) such that_
\[\int_{\mathcal{I}^{-}\cap\{v\leq u_{1}(u_{1})\}}|\psi|^{2}dv\leq C \left(\int_{\Sigma_{v_{1}}\cap\{u_{1}\leq u_{1}\}}\frac{1}{V}|\partial_{u} \psi|^{2}du+\int_{v=u_{1}(u_{1})}^{v_{c}}\left|\left|\psi(u_{1},v)\right|^{2}- \left|\psi_{-}(v)\right|^{2}\right|dv\right). \tag{7.5.8}\]
Proof.: Given that \(r_{b}\to\infty\) as \(v\to-\infty\), there exists some \(v^{*}\) such that \(V^{\prime}\leq 0\) for all \(v\leq v^{*}\). We then proceed using (7.1.2):
\[E_{V}(v): =\int_{\Sigma_{v}\cap\{u\leq u_{1}\}}\frac{1}{V}|\partial_{u}\psi |^{2}du=E_{V}(v_{c})+\int_{v}^{v_{c}}\int_{\Sigma_{v^{\prime}}\cap\{u\leq u_{1} \}}\partial_{u}(|\psi|^{2})-\partial_{v}\left(\frac{1}{V}\right)|\partial_{u} \psi|^{2}dudv^{\prime}\] \[\leq E_{V}(v_{c})+\int_{v}^{v_{c}}\left(\frac{1-\frac{2M}{r}+ \frac{2^{2}M^{2}}{r^{2}}}{V^{2}}\right)V^{\prime}\] \[\leq E_{V}(v_{c})+A\int_{\max\{v,v^{\prime}\}}^{v_{c}}E_{V}(v^{ \prime})dv^{\prime}+\int_{\Sigma_{u_{1}}\cap\{v^{\prime}\in[v,u_{1}\}]}|\psi|^{ 2}dv^{\prime}-\int_{\mathcal{I}^{-}\cap\{v^{\prime}\in[v,u_{1}]\}}|\psi|^{2}dv^{ \prime} \tag{7.5.9}\] \[\leq E_{V}(v_{c})+A\int_{\max\{v,v^{\prime}\}}^{v_{c}}E_{V}(v^{ \prime})dv^{\prime}+\int_{v_{c}\setminus\{v\}}^{v_{c}}\left|\psi|^{2}dv^{ \prime}-\int_{\mathcal{I}^{-}\cap\{v^{\prime}\in[v,u_{1}]\}}|\psi|^{2}dv^{ \prime}\] \[\leq E_{V}(v_{c})+A\int_{\max\{v,v^{\prime}\}}^{v_{c}}E_{V}(v^{ \prime})dv^{\prime}+\int_{v=u_{1}(u_{1})}^{v_{c}}\left|\left|\psi(u_{1},v) \right|^{2}-\left|\psi_{-}(v)\right|^{2}\right|dv^{\prime}-\int_{\mathcal{I}^{-} \cap\{v\leq u_{1}(u_{1})\}}|\psi_{-}(v^{\prime})|^{2}dv\]
To reach the penultimate line, we have used (7.5.5).
We then apply Gronwall's Inequality to obtain
\[\int_{\Sigma_{v_{2}}}\frac{1}{V}|\partial_{u}\psi|^{2}du\leq\left( \int_{\Sigma_{v_{c}}}\frac{1}{V}|\partial_{u}\psi|^{2}du+\int_{v=u_{1}(u_{1} )}^{v_{c}}\left|\left|\psi(u_{1},v)\right|^{2}-\left|\psi_{-}(v)\right|^{2} \right|dv\right)e^{\int_{\mathcal{I}^{+}}^{v}dx}, \tag{7.5.10}\]
By keeping the \(\mathcal{I}^{-}\) term in (7.5.9), we can then bound
\[\int_{\mathcal{I}^{-}\cap\{v\leq n_{\mathrm{u}}(u_{\mathrm{t}})\}}|v|^{2}dv\leq B \left(\int_{\Sigma_{v,\mathrm{c}}\cap\{u\leq u_{\mathrm{t}}\}}\frac{1}{V}| \partial_{u}\psi|^{2}du+\int_{v=u_{\mathrm{t}}(u_{\mathrm{t}})}^{v_{\mathrm{c} }}\left|\left|\psi(u_{\mathrm{t}},v)\right|^{2}-|\psi_{-}(v)|^{2}\right|dv \right), \tag{7.5.11}\]
as required.
We perform a similar calculation for the remaining result, (7.5.7)
\[E(v): =\int_{\Sigma_{v}\cap\{u\leq u_{\mathrm{t}}\}}\frac{|\partial_{u }\psi|^{2}}{1-\frac{2M}{r}+\frac{q^{2}M^{2}}{r^{2}}}du=E(v_{\mathrm{c}})+\int_ {v}^{v_{\mathrm{c}}}\int_{\Sigma_{v}\cap\{u\leq u_{\mathrm{t}}\}}\frac{V \partial_{u}(|\psi|^{2})}{1-\frac{2M}{r}+\frac{q^{2}M^{2}}{r^{2}}}-\partial_{ v}\left(\frac{1}{1-\frac{2M}{r}+\frac{q^{2}M^{2}}{r^{2}}}\right)|\partial_{u} \psi|^{2}dudv^{\prime}\] \[\leq E(v_{\mathrm{c}})+A\int_{v^{\prime}=v}^{v_{\mathrm{c}}}E(v^{ \prime})dt^{\prime}+\int_{\Sigma_{u_{\mathrm{t}}}\cap\{v\leq v_{\mathrm{t}}\} }\frac{V|\psi|^{2}}{1-\frac{2M}{r}+\frac{q^{2}M^{2}}{r^{2}}}dv-\int_{v}^{v_{ \mathrm{c}}}\int_{\Sigma_{v}\cap\{u\leq u_{\mathrm{t}}\}}\partial_{u}\left( \frac{V}{1-\frac{2M}{r}+\frac{q^{2}M^{2}}{r^{2}}}\right)|\psi|^{2}dudv^{\prime} \tag{7.5.12}\] \[\leq E(v_{\mathrm{c}})+A\int_{v^{\prime}=v}^{v_{\mathrm{c}}}E(v^{ \prime})dt^{\prime}+A(l+1)^{2}\int_{\Sigma_{u_{\mathrm{t}}}\cap\{v\leq v_{ \mathrm{t}}\}}|\psi|^{2}dv,\]
to which another application of Gronwall's Inequality obtains the result.
The final Proposition in this section is:
**Proposition 7.5.4** (High Frequency Transmission): _Let \(\psi\) be a smooth solution to (7.1.2), (7.1.3) such that_
\[\int_{\Sigma_{v,\mathrm{c}}\cap\{u\leq u_{\mathrm{t}}\}}\frac{| \partial_{v}\psi|^{2}}{1-\frac{2M}{r}+\frac{q^{2}M^{2}}{r^{2}}}du=:E_{v_{ \mathrm{c}}}<\infty \tag{7.5.13}\] \[\int_{\Sigma_{u_{\mathrm{t}}}}|\psi|^{2}du=:L_{u_{\mathrm{t}}}<\infty. \tag{7.5.14}\]
_Then we have that there exists a constant \(A(\mathcal{M})\) such that_
\[\int_{v\leq v_{\mathrm{c}}}\left|\partial_{v}\psi|_{\mathcal{I}^{-}}-\partial _{v}\psi|_{\Sigma_{u_{\mathrm{t}}}}\right|^{2}dv\leq A\left((l+1)^{6}L_{u_{ \mathrm{t}}}+(l+1)^{4}E_{v_{\mathrm{t}}}\right). \tag{7.5.15}\]
Proof.: We will start this result by considering the interval \([v_{\mathrm{t}}(u_{\mathrm{t}}),v_{\mathrm{c}}]\). This proof is done in a similar manner to Proposition 7.4.3.
\[\int_{v_{\mathrm{t}}(u_{\mathrm{t}})}^{v_{\mathrm{c}}}|\partial_{ v}\psi_{\mathcal{I}^{+}}(v)-\partial_{v}\psi(u_{\mathrm{t}},v)|^{2}dv =\int_{v=u_{\mathrm{t}}(u_{\mathrm{t}})}^{v_{\mathrm{c}}}\left| \int_{u=-\infty}^{v_{\mathrm{t}}}\partial_{u}\phi_{v}\psi(u,v)du\right|^{2}dv =\int_{v=u_{\mathrm{t}}(u_{\mathrm{t}})}^{v_{\mathrm{c}}}\left| \int_{u=-\infty}^{u_{\mathrm{t}}}V\psi(u,v)du\right|^{2}dv \tag{7.5.16}\] \[\leq\left\|\sqrt{\nabla}\right\|^{2}_{L^{2}(\Sigma_{v})}\int_{v=u _{\mathrm{t}}(u_{\mathrm{t}})}^{v_{\mathrm{c}}}\int_{u=-\infty}^{u_{\mathrm{t} }}V|\psi(u,v)|^{2}dudv\] \[\leq A(l+1)^{4}\int_{v=u_{\mathrm{t}}(u_{\mathrm{t}})}^{v_{ \mathrm{c}}}\int_{u=-\infty}^{u_{\mathrm{t}}}\left(1-\frac{2M}{r}+\frac{q^{2}M ^{2}}{r^{2}}\right)\frac{|\psi(u,v)|^{2}}{r^{2}}dudv\] \[\leq A(l+1)^{4}\int_{v=u_{\mathrm{t}}(u_{\mathrm{t}})}^{v_{ \mathrm{c}}}|\psi(u,v)|^{2}+\left(\int_{u=\infty}^{u_{\mathrm{t}}}\frac{| \partial_{u}\psi|^{2}}{1-\frac{2M}{r}+\frac{q^{2}M^{2}M^{2}}{r^{2}}}du\right)dv\] \[\leq A(l+1)^{4}\left(L_{u_{\mathrm{t}}}+(v_{\mathrm{c}}-v_{ \mathrm{t}}(u_{\mathrm{t}}))\left(E_{v_{\mathrm{c}}}+(l+1)^{2}L_{u_{\mathrm{t} }}\right)\right.\] \[\leq A\left((l+1)^{6}L_{u_{\mathrm{t}}}+(l+1)^{4}E_{v_{\mathrm{c} }}\right).\]
For the region \(v\leq v_{\mathrm{c}}\), we will use Theorem 7.1 from [7], which gives us energy boundedness of the scattering map, that is
\[\int_{v=-\infty}^{v_{\mathrm{t}}(u_{\mathrm{t}})}|\partial_{v}\psi_{-}|^{2}dv \leq A\int_{u=-\infty}^{u=u_{\mathrm{t}}}\frac{|\partial_{u}\psi(u,v_{ \mathrm{t}}(u_{\mathrm{t}}))|^{2}}{1-\frac{2M}{r}+\frac{q^{2}M^{2}}{r^{2}}}+V| \psi|^{2}du\leq A(l+1)^{2}\left(E_{v_{\mathrm{c}}}+(l+1)^{2}L_{u_{\mathrm{t}}} \right), \tag{7.5.17}\]
which gives us our result.
Propositions 7.5.3 and 7.5.4 allow us to prove Proposition 7.5.1.
Proof of Proposition 7.5.1.: Define the following
\[\psi_{2}:=\psi_{-}-\psi_{1}-\psi_{RN}. \tag{7.5.18}\]
Note that \(\psi_{2}\) is only supported in \(v\leq v_{\mathrm{c}}\), as \(v>v_{\mathrm{c}}\) is out of the past light cone of the collapsing cloud. Thus, the solution in \(v>v_{\mathrm{c}}\) coincides with that of Reissner-Nordstrom.
We can expand (7.5.2) to get:
\[\int_{-\infty}^{\infty}|\omega|\left|\left|\dot{\psi}_{-}|^{2}-|\dot{\psi}_{1}|^{2} -|\dot{\psi}_{RN}|^{2}\right|d\omega=\int_{-\infty}^{\infty}|\omega|\left| \dot{\psi}_{2}|^{2}+2\Re\left(\left(\dot{\psi}_{1}+\dot{\psi}_{2}\right) \ddot{\bar{\psi}}_{RN}+\dot{\psi}_{1}\ddot{\bar{\psi}}_{2}\right)\right|d\omega. \tag{7.5.19}\]
We can then bound
\[\int_{\infty}^{\infty}|\omega|Re\left(\hat{\psi}_{2}\hat{\psi}_{RN} \right)d\omega\leq\|\psi_{2}\|_{L^{2}(\mathcal{I}^{-})}\|\psi_{RN}\|_{\bar{H}^{ 1}(\mathcal{I}^{-})} \tag{7.5.20}\] \[\int_{\infty}^{\infty}|\omega|\mathbb{R}\left(\hat{\psi}_{1}\hat{ \psi}_{RN}\right)d\omega\leq\left\|\frac{\hat{\psi}\hat{\psi}_{1}}{1+M^{2} \omega^{2}}\right\|_{L^{2}(\mathcal{I}^{-})}\left[\left(1+M^{2}\omega^{2} \right)\hat{\psi}_{RN}\right\|_{L^{2}(\mathcal{I}^{-})}\] (7.5.21) \[\int_{\infty}^{\infty}|\omega|\mathbb{R}\left(\hat{\psi}_{1}\hat{ \psi}_{2}^{\frac{1}{2}}\right)d\omega\leq\|\psi_{1}\|_{L^{2}(\mathcal{I}^{-})} \|\psi_{2}\|_{\bar{H}^{1}(\mathcal{I}^{-})}\] (7.5.22) \[\int_{-\infty}^{\infty}|\omega|\hat{\psi}_{2}|^{2}d\omega\leq\| \psi_{2}\|_{L^{2}(\mathcal{I}^{-})}\|\psi_{2}\|_{\bar{H}^{1}(\mathcal{I}^{-})}. \tag{7.5.23}\]
We already have a bounds on \(\|\psi_{1}\|_{L^{2}(\mathcal{I}^{-})}\), given by Proposition 7.4.3:
\[\|\psi_{1}\|_{L^{2}(\mathcal{I}^{-})}^{2} \leq\|\psi(u_{1},v)\|_{L^{2}(\{v[u_{1}(u_{1}),v]\})}^{2}+|\psi(u_{ 1},v_{c})|^{2}\|/\|_{L^{2}(\mathcal{I}^{-})}^{2}\] (7.5.24) \[\leq\begin{cases}A\left(\frac{I(\mathcal{I}(\mathcal{I}(\mathcal{ I}(\mathcal{I}(\mathcal{I}(\mathcal{I}(\mathcal{I}(\mathcal{I}(\mathcal{I}( \mathcal{I}(\mathcal{I}(\mathcal{I}(\mathcal{I}(\mathcal{I}(\mathcal{I}( \mathcal{I}(\mathcal{I}\mathcal{I}(\mathcal{I}(\mathcal{I}(\mathcal{I} \mathcal{I}(\mathcal{I}(\mathcal{I}(\mathcal{I}(\mathcal{I}(\mathcal{I }(\mathcal{I(\mathcal{I}(\mathcal{I(\mathcal{I((((((((((((((((((( (((((((( ))))))) }})}}) }) }(((((((((((((((((((((((((((((((((((((((((((((((((((((( (((((( ))))))))) (((((((((((((((((((((((((((( ((((((((( (((( ( ((( (( ))))) ((((((((((((((((( ((( ((((( (( ((((( (( ( ( ))) ((((((((( (((( (((( ( (( (( ( ( ( ( ( ( (
assuming such a sequence converges. To show that such a sequence converges, we write:
\[|g(x)| \leq\int_{x_{1}=0}^{x}|f(x_{1})|dx_{1}+\int_{x_{1}=0}^{x_{1}}\int_{x _{2}=0}^{x_{2}}\int_{x_{2}=0}^{x_{2}}|f(x_{3})|dx_{3}dx_{2}dx_{1}+... \tag{7.5.32}\] \[\leq\|1\|_{L^{2}([0,\epsilon])}\|f\|_{L^{2}([0,\epsilon])}+\int_{ x_{1}=0}^{x_{1}}\int_{x_{2}=0}^{x_{1}}\|_{L^{2}([0,\epsilon])}\|f\|_{L^{2}([0, \epsilon])}dx_{2}dx_{1}+...\] \[\leq\sqrt{\epsilon}\|f\|_{L^{2}([0,\epsilon])}\left(1+\frac{ \epsilon^{2}}{3!}+\frac{\epsilon^{3}}{5!}+...\right)\leq\cosh(\epsilon) \sqrt{\epsilon}\|f\|_{L^{2}([0,\epsilon])}.\]
We can similarly bound the derivative of \(g(x)\):
\[|g^{\prime}(x)| \leq|f(x)|dx_{2}+\int_{x_{2}=0}^{x_{2}}\int_{x_{3}=0}^{x_{2}}|f(x _{3})|dx_{3}dx_{2}+... \tag{7.5.33}\] \[\leq|f(x)|+\int_{x_{2}=0}^{x_{1}}\|_{1\|L^{2}([0,\epsilon])}\|f\|_ {L^{2}([0,\epsilon])}dx_{2}+...\] \[\leq|f(x)|+\sqrt{\epsilon}\|f\|_{L^{2}([0,\epsilon])}\left( \epsilon+\frac{\epsilon^{3}}{3!}+\frac{\epsilon^{5}}{5!}+...\right)\leq|f(x)| +\sinh(\epsilon)\sqrt{\epsilon}\|f\|_{L^{2}([0,\epsilon])}.\]
We then need to consider the values of \(A\) and \(B\) which allow this function to be twice weakly differentiable:
\[f_{-1}(\epsilon)=A+g(\epsilon)=B \tag{7.5.34}\] \[f^{\prime}_{-1}(\epsilon)=A+g^{\prime}(\epsilon)=-B. \tag{7.5.35}\]
Solving these gives
\[A=-\frac{g(\epsilon)+g^{\prime}(\epsilon)}{2} \tag{7.5.36}\] \[B=\frac{g(\epsilon)-g^{\prime}(\epsilon)}{2}. \tag{7.5.37}\]
Then the \(L^{2}\) norm of \(f_{-1}\) can be bounded by:
\[\|f_{-1}\|_{L^{2}([0,\epsilon])}^{2} \leq\int_{\epsilon}^{\infty}|B|^{2}e^{2(\epsilon-x)}dx+2\int_{- \infty}^{\epsilon}|A|^{2}e^{2(\epsilon-x)}dx+2\int_{0}^{\epsilon}|g(x)|^{2}dx \tag{7.5.38}\] \[\leq A\left([g(\epsilon)]^{2}+|g^{\prime}(\epsilon)|^{2}+\int_{0} ^{\epsilon}|g(x)|^{2}dx\right)\] \[\leq A\left(\cosh(\epsilon)^{2}+\sinh(\epsilon)^{2}+\epsilon \cosh(\epsilon)^{2}\right)\epsilon\|f\|_{L^{2}}^{2},\]
as required.
We can apply this Lemma to \(\sigma(1+M^{2}\omega^{2})^{-1}\hat{\psi}_{1}\) to get that
\[\left\|\frac{\omega\hat{\psi}_{1}}{1+M^{2}\omega^{2}}\right\|_{L^{2}(\mathcal{ I}^{-})}^{2}\leq\frac{v_{\mathrm{c}}-v_{\mathrm{u}}(u_{1})}{M}\|\psi_{1}\|_{L^{2}( \mathcal{I}^{-})}^{2}. \tag{7.5.39}\]
We then use Proposition 5.1.2 to obtain
\[\|(1+M^{2}\omega^{2})\hat{\psi}_{RN}\|_{L^{2}(\mathcal{I}^{-})}^{2} =\int_{-\infty}^{\infty}(1+M^{2}\omega^{2})^{2}|\hat{R}_{\omega,l,m}|^{2}| \hat{\psi}_{+}|^{2}d\omega\leq A\int_{-\infty}^{\infty}(1+M^{2}\omega^{2})^{2} \frac{(l+1)^{2}}{1+M^{2}\omega^{2}}|\hat{\psi}_{+}|^{2}d\omega \tag{7.5.40}\] \[\leq A\int_{-\infty}^{\infty}(l+1)^{2}(1+M^{2}\omega^{2})|\hat{ \psi}_{+}|^{2}d\omega\leq A(l+1)^{-2}I.T.[\psi_{+}]\]
Substituting into (7.5.19) gives the required results.
For the result with sufficiently fast decay of \(\psi_{+}\), we can use (7.4.8) to bound \(\int_{v_{\mathrm{u}}(u_{1})}^{v_{\mathrm{u}}}|\psi(u_{1},v)|^{2}dv\) more accurately. Setting \(u_{1}=(1-\delta)u_{0}\), all \(I.E.\) terms will decay sufficiently fast to obtain our result.
### Proof of Convergence of \(H^{1/2}\) Norm
We now have all the tools we need to calculate the final result of this section. We wish to calculate:
\[I[\psi_{+},l,u_{0}]:=\int_{-\infty}^{\infty}|\omega||\hat{\psi}_{-}|^{2}d\omega, \tag{7.6.1}\]
where \(\hat{\psi}_{-}\) is the radiation field on \(\mathcal{I}^{-}\).
Proof of Theorem 7.1.: We will define \(\psi_{0}\) and \(\psi_{1}\) as in Proposition 7.4.1, that is
\[\psi_{0}(u):=\begin{cases}\psi(u,v_{\mathrm{c}})-f(u-u_{1})\psi(u_{1},v_{ \mathrm{c}})&u\geq u_{1}\\ 0&u<u_{1}\end{cases}, \tag{7.6.2}\]
\[\psi_{1}(v):=\begin{cases}\psi(u_{1},v)-(1-f(u_{0}(v)-u_{1}))\,\psi(u_{1},v_{ \mathrm{c}})&v\in[v_{\mathrm{b}}(u_{1}),v_{\mathrm{c}}]\\ 0&v\notin[v_{\mathrm{b}}(u_{1}),v_{\mathrm{c}}]\end{cases}, \tag{7.6.3}\]
where \(f\) is a smooth compactly supported function with \(f(0)=1\).
Note this coincides with the definition of \(\psi_{0}\) in Corollary 7.3.3 and the definition of \(\psi_{1}\) in Proposition 7.5.1.
For this final calculation, we will be using Lemma \(II.6\) from [31]:
**Lemma 7.6.1**: _For \(\beta>0\), \(u\in C^{\infty}_{0}(\mathbb{R})\), we define_
\[F(\xi)=\int_{x=-\infty}^{\infty}e^{i\xi\phi^{\mu}}u^{\prime}(x)dx. \tag{7.6.4}\]
_Then we have_
\[\int_{\xi=-\infty}^{\infty}|\xi|^{-1}|F(\xi)|^{2}d\xi=\int_{\xi=-\infty}^{ \infty}|\xi|\coth\left(\frac{\pi}{\beta}|\xi|\right)|\hat{u}(\xi)|^{2}d\xi. \tag{7.6.5}\]
We also have a similar Lemma for the extremal case:
**Lemma 7.6.2**: _Let \(A\in\mathbb{R}_{>0}\), \(v_{c}\in\mathbb{R}\) be constants. Define_
\[p(v)=\frac{A}{v_{c}-v}. \tag{7.6.6}\]
_Then for all \(u\in C^{\infty}_{0}(\mathbb{R})\), we have_
\[\int_{\omega=-\infty}^{\infty}|\omega||\hat{u}|^{2}d\omega=\int_{\omega=- \infty}^{\infty}|\omega||\widehat{u\circ p}|^{2}d\omega. \tag{7.6.7}\]
_Proof._ This proof proceeds in an almost identical way to the proof of Lemma 7.6.1 (see [31]).
\[\int_{\omega=-\infty}^{\infty}|\omega||\widehat{u\circ p}|^{2}d\omega=\lim_{ \epsilon\to 0}\left(\int_{\omega=-\infty}^{\infty}|\omega|e^{-i\omega| \widehat{u\circ p}|^{2}}d\omega\right)=\lim_{\epsilon\to 0}\left(\iiint_{x,x^{ \prime},\omega=-\infty}^{\infty}|\omega|e^{-i\omega|\widehat{u\circ p}|^{2}} e^{i\omega(x^{\prime}-x)}u\circ p(x)\overline{u\circ p}(x^{\prime})dxdx^{\prime}d\omega\right) \tag{7.6.8}\]
\[=\lim_{\epsilon\to 0}\iint_{y,w\in\mathbb{R}}\left(\int_{\omega=- \infty}^{\infty}|\omega|e^{-i\omega|\epsilon|\epsilon|\epsilon|\epsilon|\epsilon |\epsilon|\epsilon|\epsilon|\epsilon|\epsilon|\epsilon|\epsilon|\epsilon|\epsilon |\epsilon|\epsilon|\epsilon|}^{2}d\omega\right.\] \[=\lim_{\epsilon\to 0}\iint_{y,y^{\prime}\in\mathbb{R}}\left(\frac{2 \left(\epsilon^{2}-\left(\frac{A}{y}-\frac{A}{y}\right)^{2}\right)}{\left( \epsilon^{2}+\left(\frac{A}{y}-\frac{A}{y}\right)^{2}\right)^{2}}\right)u(y) \hat{u}(y^{\prime})\frac{A^{2}}{y^{2}y^{\prime 2}}dydy^{\prime}\] \[=\lim_{\epsilon\to 0}\iint_{y,w\in\mathbb{R}}\left(\frac{2 \left(\frac{\epsilon^{2}y^{\prime}(y-w)^{2}}{\left(\frac{y^{2}y^{2}}{A^{2}}+w \right)^{2}}\right)}{\left(\frac{y^{2}y^{2}(y-w)^{2}}{A^{2}}+w^{2}\right)^{2}} \right)u(y)\hat{u}(y-w)dydw\] \[=\iint_{y,w\in\mathbb{R}}\lim_{\alpha\to 0}\left(\frac{2 \left(\alpha^{2}-w^{2}\right)}{\left(\alpha^{2}+w^{2}\right)^{2}}\right)u(y) \hat{u}(y-w)dydw=\int_{w\in\mathbb{R}}\lim_{\alpha\to 0}(|\widehat{u\circ e^{- \alpha|\omega|}})(\widehat{|\hat{u}|^{2}})dw\] \[=\int_{\omega=-\infty}^{\infty}|\omega||\hat{u}|^{2}d\omega,\]
as required.
In order to use Lemmas 7.6.1 and 7.6.2, we take a sequence of functions in \(C^{\infty}_{0}(\mathbb{R})\) which approximate \(\psi_{1}\) with respect to the \(L^{2}\) and \(\hat{H}^{1}\) norms.
By considering \(\psi_{1}\), we can see
\[-ie^{i\omega\nu_{\omega}}\psi_{1}:=-i\int_{\Sigma^{\infty}_{\alpha}(\gamma<v \otimes v)}\omega e^{i\omega(v_{c}-v)}\psi_{1}dv=-i\int_{w=-\infty}^{\infty} \omega e^{i\omega(v_{c}-v_{b}(u))}\psi_{1}(v_{b}(u))\frac{dv_{b}}{du}du=\int_{w =-\infty}^{\infty}e^{i\omega(v_{c}-v_{b}(u))}\left(\psi_{1}\circ v_{b}\right)^ {\prime}du. \tag{7.6.9}\]
In the extremal case, this is similar to the form of \(F\) in (7.6.4), once we note that \(v_{c}-v_{b}(u)=Ae^{-\kappa u}+O(e^{-2\kappa u})\) for large \(u\).
Thus, we define
\[\gamma_{SE}(u):=-\frac{1}{\kappa}\log\left(\frac{v_{c}-v_{b}(u)}{A}\right)=u- \frac{1}{\kappa}\log\left(1+O(e^{-\kappa u})\right)=u+O(e^{-\kappa u}), \tag{7.6.10}\]
as \(u\to\infty\). Then combining (7.6.9) and (7.6.10) gives
\[-ie^{i\omega v_{c}}\omega\hat{\psi}_{1}=\int_{u=-\infty}^{\infty}e^{i\omega(v_{c}-v_{b}(u))}\left(\psi_{1}\circ v_{b}\right)^{\prime}du=\int_{w=-\infty}^{\infty}e^{i\omega Ae^{-\kappa w}}\left(\psi_{1}\circ v_{b}\circ\gamma_{SE}^{-1}\right)^{\prime}dw. \tag{7.6.11}\]
Then we can apply Lemma 7.6.1 to obtain:
\[\int_{\omega=-\infty}^{\infty}|\omega||\hat{\psi}_{1}|^{2}d\omega=\int_{\mathbb{R}}|\omega|\coth\left(\frac{\pi}{\kappa}|\omega|\right)|\widehat{\psi_{1}\circ v_{b}\circ\gamma_{SE}^{-1}}|^{2}d\omega. \tag{7.6.12}\]
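(To see how the left-hand side arises, note that the expression in (7.6.11) plays the role of \(F\) in Lemma 7.6.1; taking absolute values in (7.6.11) gives \(|F(\omega)|=|\omega||\hat{\psi}_{1}|\), so that \(|\omega|^{-1}|F(\omega)|^{2}=|\omega||\hat{\psi}_{1}|^{2}\).)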
Note that \(|\omega|\coth\left(\frac{\pi}{\kappa}|\omega|\right)\leq\frac{\pi}{\kappa}+|\omega|\).
In the extremal case, \(v_{c}-v_{b}(u)=A_{0}u^{-1}+O(u^{-3})\), so we define
\[\gamma_{E}(u):=\frac{A_{0}}{v_{c}-v_{b}(u)}=\frac{A_{0}}{A_{0}u^{-1}+O(u^{-3} )}=u+O(u^{-2}), \tag{7.6.13}\]
as \(u\to\infty\). For \(p\) as in Lemma 7.6.2, we have
\[p(v)=\frac{A}{v_{c}-v}=\frac{A}{v_{c}-v_{b}(u_{b}(v))}=\gamma_{E}\circ u_{b}. \tag{7.6.14}\]
Thus we can apply Lemma 7.6.2 to \(\psi_{1}\circ v_{0}\circ\gamma_{E}^{-1}\) to obtain
\[\int_{\omega=-\infty}^{\infty}|\omega||\widehat{\psi_{1}\circ v_{0}\circ\gamma_{E}^{-1}}|^{2}d\omega=\int_{\omega=-\infty}^{\infty}|\omega||\widehat{\psi_{1}\circ v_{0}\circ\gamma_{E}^{-1}\circ\gamma_{E}\circ u_{b}}|^{2}d\omega=\int_{\omega=-\infty}^{\infty}|\omega||\hat{\psi}_{1}|^{2}d\omega, \tag{7.6.15}\]
as in the sub-extremal case.
We now note that in both the extremal and sub-extremal cases, we have:
\[\int_{\omega=-\infty}^{\infty}(\kappa+|\omega|)\left|\widehat{| \psi_{1}\circ\widehat{v_{0}}\circ\gamma^{-1}|^{2}-|\widehat{v_{1}\circ v_{0} |}^{2}}\right|d\omega \leq A\left(\int_{u_{1}}^{\infty}|\partial_{u}(\psi_{1}\circ v_{ 0})|^{2}+\kappa^{2}|\psi_{1}\circ v_{0}|^{2}du\right)^{1/2}\left(\int_{u_{1}} ^{\infty}|\psi_{1}\circ v_{0}\circ\gamma^{-1}-\psi_{1}\circ v_{0}|^{2}du \right)^{1/2}\] \[\leq A\left(I.T.[\psi_{+}]\right)^{1/2}\left(\int_{u_{1}}^{ \infty}\left|\int_{u}^{\gamma(u)}\partial_{u}(\psi_{1}\circ v_{0})du\right|^{ 2}du\right)^{1/2} \tag{7.6.16}\] \[\leq A\left(I.T.[\psi_{+}]\right)^{1/2}\left(\int_{u_{1}}^{ \infty}\left|\int_{u_{1}}^{\gamma(u_{1})}|\partial_{u}(\psi_{1}\circ v_{0})| du\right|^{2}du\right)^{1/2}\] \[\leq A\left(I.T.[\psi_{+}]\right)^{1/2}\int_{u_{1}}^{\gamma(u_{1} )}\left(\int_{u_{1}}^{\infty}|\partial_{u}(\psi_{1}\circ v_{0})|^{2}du\right) ^{1/2}du^{\prime}\] \[\leq A|\gamma(u_{1})-u_{1}|I.T.[\psi_{+}].\]
where we have dropped the subscript from \(\gamma_{SE}\), \(\gamma_{E}\). We have used Minkowski's integral inequality to reach the penultimate line.
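For reference, the form of Minkowski's integral inequality used here is the standard one (stated for completeness, with \(g\) a generic integrand):
\[\left(\int_{u_{1}}^{\infty}\left|\int g(u,u^{\prime})\,du^{\prime}\right|^{2}du\right)^{1/2}\leq\int\left(\int_{u_{1}}^{\infty}|g(u,u^{\prime})|^{2}\,du\right)^{1/2}du^{\prime}.\]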
By using (7.6.12) and (7.6.15), we obtain
\[\left|\int_{\omega=-\infty}^{\infty}|\omega|\left(|\hat{\psi}_{ \cdot}|^{2}-\coth\left(\frac{\pi}{\kappa}|\omega|\right)|\hat{\psi}_{\cdot}|^{ 2}-|\hat{\psi}_{RN}|^{2}\right)d\omega\right| \leq A\int_{\omega=-\infty}^{\infty}|\omega|\left|\left|\hat{\psi}_{ \cdot}|^{2}-|\hat{\psi}_{1}|^{2}-|\hat{\psi}_{RN}|^{2}\right|d\omega \tag{7.6.17}\] \[+A\int_{\omega=-\infty}^{\infty}|\omega|\coth\left(\frac{\pi}{ \kappa}|\omega|\right)\left|\left|\psi_{1}\circ\widehat{v_{0}}\circ\gamma^{-1} \right|^{2}-|\hat{\psi}_{1}\circ\widehat{u}_{0}|^{2}\right|d\omega\] \[+A\int_{\omega=-\infty}^{\infty}|\omega|\coth\left(\frac{\pi}{ \kappa}|\omega|\right)\left|\left|\widehat{\psi_{0}}\circ v_{0}\right|^{2}-| \hat{\psi}_{0}|^{2}\right|d\omega\] \[+A\int_{\omega=-\infty}^{\infty}|\omega|\coth\left(\frac{\pi}{ \kappa}|\omega|\right)\left|\left|\hat{\psi}_{0}\right|^{2}-|\hat{\psi}_{ \cdot}|^{2}\right|d\omega.\]
In the extremal case, we set \(u_{1}=u_{0}-\sqrt{Mu_{0}}\). Then we can apply (7.6.16), Corollary 7.3.3 and Propositions 7.4.1 and 7.5.1 to obtain the required result, again noting \(|\omega|\coth\left(\frac{\pi}{\kappa}|\omega|\right)\leq\frac{\pi}{\kappa}+|\omega|\).
In the sub-extremal case, where we have faster decay, we instead set \(u_{1}=(1-\delta)\,u_{0}\), and again use Corollary 7.3.3 and Propositions 7.4.1, 7.5.1.
## 8 Treatment of Error Terms
In this section we show the arbitrary polynomial decay of the \(I.E.\) terms, provided that \(\hat{\psi}_{+}\) vanishes and has all derivatives vanishing at \(\omega=0\). Most of the work to show this behaviour has been previously done in the extremal (\(|q|=1\)) case, in [8]. This gives our first Theorem:
**Theorem 8.1**: _[Decay of the I.E. terms in the \(|q|=1\) case] Let \(\psi_{+}\) be a Schwartz function on the cylinder, with \(\hat{\psi}_{+}\) compactly supported on \(\omega\geq 0\). Then for each \(n\), there exists an \(A_{n}(M,\psi_{+})\) such that_
\[I.E.[\psi_{+},v_{c},u_{1},u_{0}]\leq A_{n}\left((u_{0}-u_{1})^{-n}+(u_{0}-v_{c} )^{-n}\right). \tag{8.1}\]
_Here, \(I.E.\) is as defined in Theorem 7.1, in the case of an extremal (\(|q|=1\)) RNOS model._
Proof.: As \(\hat{\psi}_{+}\) and all of its \(\omega\) derivatives vanish at \(\omega=0\), \(\hat{\psi}_{-n}=\omega^{-n}\hat{\psi}_{+}\) is also a Schwartz function. Instead of imposing \(\psi_{+}\) as our radiation field on \(\mathcal{I}^{+}\), we can use \(\psi_{-n}\). The resulting solution has the property
\[\partial_{t^{*}}^{n}\psi_{-n}=\psi, \tag{8.2}\]
by uniqueness of solutions to the wave equation (Theorem 5.1.1).
We then apply Theorem 4.2 (with \(u_{0}\) as the origin) from [8] to \(\psi_{-n}\), to see
\[\int_{\mathcal{H}^{-}}(1+(u-u_{0})^{2})^{n}|\partial_{u}\psi_{\mathcal{H}^{-}}|^{2}+(1+(u-u_{0})^{2})^{n}|\hat{\nabla}\psi_{\mathcal{H}^{-}}|^{2}\sin\theta d\theta d\phi du \tag{8.3}\] \[+\int_{\mathcal{I}^{-}}(1+(v-u_{0}-R)^{2})^{n}|\partial_{v}\psi_{RN}|^{2}+(1+(v-u_{0}-R)^{2})^{n}|\hat{\nabla}\psi_{RN}|^{2}\sin\theta d\theta d\phi dv\leq A_{n}[\psi_{+}].\]
Restricting the integrals to \(u\leq u_{1}\) and \(v\leq v_{c}\), we can see that
\[I.E.[\psi_{+},v_{c},u_{1},u_{0}] \leq A\int_{u=-\infty}^{u_{1}}(1+(u-u_{0})^{2})^{3/2}|\partial_{u}\psi_{\mathcal{H}^{-}}|^{2}+(1+(u-u_{0})^{2})^{3/2}|\hat{\nabla}\psi_{\mathcal{H}^{-}}|^{2}\sin\theta d\theta d\phi du \tag{8.4}\] \[+\int_{v=-\infty}^{v_{c}}(1+(v-u_{0}-R)^{2})^{3/2}|\partial_{v}\psi_{RN}|^{2}+(1+(v-u_{0}-R)^{2})^{3/2}|\hat{\nabla}\psi_{RN}|^{2}\sin\theta d\theta d\phi dv\] \[\leq A_{n}[\psi_{+}]\left((1+(u_{1}-u_{0})^{2})^{-n+3/2}+(1+(u_{0}-v_{c})^{2})^{-n+3/2}\right),\]
giving our result.
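For clarity, the step producing the final decay factors uses only the monotonicity of the weights (assuming \(n\geq 2\), so that the exponent \(3/2-n\) is non-positive): for \(u\leq u_{1}<u_{0}\),
\[(1+(u-u_{0})^{2})^{3/2}=(1+(u-u_{0})^{2})^{n}(1+(u-u_{0})^{2})^{3/2-n}\leq(1+(u_{1}-u_{0})^{2})^{3/2-n}(1+(u-u_{0})^{2})^{n},\]
and similarly for the \(v\)-integral.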
We now look to extend this result to the sub-extremal case. The following section will closely follow the equivalent proof of the extremal case [8]. The next ingredient needed for the \(r^{p}\) method is integrated local energy decay, or \(ILED\). This will be done in a manner similar to [32].
**Proposition 8.2** (ILED for sub-extremal Reissner-Nordstrom): _Let \(\phi\) be a solution of (1.2) on a sub-extremal (\(|q|<1\)) Reissner-Nordstrom background \(\mathcal{M}_{RN}\). Let \(t_{0}\) be a fixed value of \(t\), and let \(R\) be a large fixed constant. Then there exists a constant \(A=A(M,q,R,n)\) such that_
\[\int_{-\infty}^{t_{0}}\left(\int_{\Sigma_{t}\cap[r^{\star}]\leq R }|\partial_{r}\phi|^{2}\right)dt \leq A\int_{\Sigma_{t_{0}},R}dn(J^{\partial_{t}})\leq A\int_{\Sigma_{t_{0} }}-dt(J^{\partial_{t}}) \tag{8.5}\] \[\int_{-\infty}^{t_{0}}\left(\int_{\Sigma_{t}\cap[r^{\star}]\leq R }-dt(J^{\partial_{t}})\right)dt \leq A\sum_{|\alpha|+j\leq 1}\int_{\Sigma_{t_{0}},R}dn(J^{\partial_{t}}[ \partial_{t}^{\alpha}\phi])\leq A\sum_{|\alpha|+j\leq 1}\int_{\Sigma_{t_{0}}}-dt(J^{ \partial_{t}}[\partial_{t}^{j}\Omega^{\alpha}\phi]). \tag{8.6}\]
Proof.: Consider Reissner-Nordstrom spacetime in \(r^{\star},t,\theta,\varphi\) coordinates. For ease of writing, we will denote
\[D(r^{\star})=1-\frac{2M}{r}+\frac{M^{2}q^{2}}{r^{2}}. \tag{8.7}\]
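For orientation, we also record the standard facts about this metric function that are used implicitly below (assuming the usual conventions for sub-extremal Reissner-Nordstrom, with \(\kappa\) denoting the surface gravity of the outer horizon \(r=r_{+}\)):
\[D(r)=\frac{(r-r_{+})(r-r_{-})}{r^{2}},\qquad r_{\pm}=M\left(1\pm\sqrt{1-q^{2}}\right),\qquad\kappa=\frac{1}{2}D^{\prime}(r_{+})=\frac{r_{+}-r_{-}}{2r_{+}^{2}}.\]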
As done so far in this paper, we will restrict to a spherical harmonic. We will first consider the case \(l\geq 1\). We choose \(\omega=h^{\prime}/4+hD/2r\), \(h^{\prime}(r^{\star})=(A^{2}+(r^{\star}-R)^{2})^{-1}\), and consider the divergence theorem applied to
\[J^{\mathbf{X}} :=J^{X,\omega}+\frac{h^{\prime}}{D}\beta\phi^{2}\partial_{r^{ \star}} \tag{8.8}\] \[\beta =\frac{D}{r}-\frac{r^{\star}-R}{A^{2}+(r^{\star}-R)^{2}},\]
for \(A\) and \(R\) yet to be chosen.
Note that the flux of this current through any \(t=const\) surface is bounded by the \(\partial_{r^{\star}}\phi\) and \(\partial_{t}\phi\) terms of the \(T\) energy. The bulk term of this is given by
\[K^{\mathbf{X}}=\nabla^{\star}J^{\mathbf{X}}_{r}=\frac{h^{\prime}}{D}\left( \partial_{r}\phi+\beta\phi\right)^{2}+\left(\frac{(r^{\star}-R)^{2}-A^{2}}{2D ((r^{\star}-R)^{2}+A^{2})^{3}}+\left(\frac{l(l+1)}{r^{2}}\left(\frac{D}{r}- \frac{D^{\prime}}{2D}\right)+\frac{D^{\prime}}{2r^{2}}-\frac{D^{\prime}}{2Dr} \right)h\right)\phi^{2}. \tag{8.9}\]
Calculating the coefficient of \(h\phi^{2}\) gives us
\[\frac{l(l+1)}{r^{2}}\left(\frac{D}{r}-\frac{D^{\prime}}{2D}\right) +\frac{D^{\prime}}{2r^{2}}-\frac{D^{\prime\prime}}{2Dr} \tag{8.10}\] \[=\frac{M^{4}}{r^{7}}\left(l(l+1)\left(\frac{r}{M}\right)^{4}-3(l (l+1)-1)\left(\frac{r}{M}\right)^{3}+(2q^{2}l(l+1)-4q^{2}-8)\left(\frac{r}{M} \right)^{2}+15q^{2}\left(\frac{r}{M}\right)-6q^{4}\right)\] \[=\frac{M^{4}}{r^{7}}\left((x-1)(x-2)(l(l+1)x^{2}+3(x-1))-(1-q^{2} )((2(l+1)-4)x^{2}+15x-6(1+q^{2}))\right)\] \[=l(l(l+1)(x-3)x^{3}+(3x-8)x^{2}+q^{2}((2l(l+1)-4)x^{2}+15x-6q^{2}),\]
where \(x=r/M\). Searching for roots of this, we can see there is a root at \(r=M\), but this is strictly less than \(0\) for \(r<2M\), and strictly greater than \(0\) for \(r>3M\). In this interval, we consider the function
\[f(x) =l(l+1)x-3(l(l+1)-1)+(2q^{2}l(l+1)-4q^{2}-8)x^{-1}+15q^{2}x^{-2}-6 q^{4}x^{-3} \tag{8.11}\] \[f^{\prime}(x) =l(l+1)(1-2q^{2}x^{-2})+(8+4q^{2})x^{-2}-30q^{2}x^{-3}+18q^{4}x^{ -4}>0, \tag{8.12}\]
for \(x>2\). Therefore the coefficient of \(h\phi^{2}\) in (8.9) has exactly one root, in a bounded region of \(r^{\star}\). We label this point \(r_{0}^{\star}\), and we let
\[h(r_{0}^{\star})=0. \tag{8.13}\]
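Integrating \(h^{\prime}\) with this normalisation gives the explicit form (recorded here for concreteness):
\[h(r^{\star})=\frac{1}{A}\left(\arctan\left(\frac{r^{\star}-R}{A}\right)-\arctan\left(\frac{r_{0}^{\star}-R}{A}\right)\right),\]
so in particular \(|h|\leq\pi/A\) everywhere.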
As \(h\) has a positive gradient, this means that \(f(x)h\geq 0\), with a single quadratic root at \(r_{0}^{\star}\). Provided \(R>r_{0}^{\star}\), we also know \(h>\pi/2A\) for sufficiently large values of \(r^{\star}\). Thus to ensure \(K^{\mathbf{X}}\) is positive definite, it is sufficient to show that \(R\) and \(A\) can be chosen such that
\[\frac{(r^{\star}-R)^{2}-A^{2}}{2D((r^{\star}-R)^{2}+A^{2})^{3}}+\frac{M}{r^{ \star}}f\left(\frac{r}{M}\right)h>0. \tag{8.14}\]
We only need to consider the region \(|r^{\star}-R|<A\). By choosing \(R-r_{0}^{\star}-A>>M\), we can ensure that in this region, \(D>1-\epsilon\), \(\frac{M}{r}f\left(\frac{r}{M}\right)\geq l(l+1)(1-\epsilon)\), and \(r\leq r^{\star}(1-\epsilon)\). Thus it is sufficient to choose \(R\) and \(A\), with \(R-r_{0}^{\star}-A>>M\), such that
\[l(l+1)\pi(1-\epsilon)-\frac{A\left(A^{2}-(r^{\star}-R)^{2}\right)r^{\star 3}}{ \left((r^{\star}-R)^{2}+A^{2}\right)^{3}}>0. \tag{8.15}\]
Let \(y=\frac{r^{\star}-R}{A}\), then we are looking for the maximum of
\[\frac{(1-y^{2})(y+\frac{R}{A})^{3}}{(1+y^{2})^{3}}. \tag{8.16}\]
If we choose \(R-r_{0}^{\star}=1.001A\), and choose \(0.001A>>M\), then
\[\sup_{-1\leq y\leq 1}\frac{(1-y^{2})(y+1.001)^{3}}{(1+y^{2})^{3}}<\frac{\pi}{2}, \tag{8.17}\]
and we conclude that \(K^{\mathbf{X}}\) is positive definite.
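As a quick numerical sanity check of (8.17) (not part of the proof), the supremum can be evaluated directly; a minimal script, assuming NumPy is available:

```python
import numpy as np

# Check sup_{-1 <= y <= 1} (1 - y^2) (y + 1.001)^3 / (1 + y^2)^3 < pi/2,
# i.e. the bound (8.17) used to make K^X positive definite.
y = np.linspace(-1.0, 1.0, 2_000_001)
f = (1.0 - y**2) * (y + 1.001)**3 / (1.0 + y**2)**3
print(f"max f  = {f.max():.6f}")    # approximately 1.547, attained near y = 0.30
print(f"pi / 2 = {np.pi / 2:.6f}")  # approximately 1.570796
assert f.max() < np.pi / 2
```

The maximum is roughly \(1.55<\pi/2\approx 1.5708\), so the margin in (8.17) is small but genuine.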
\[K^{\mathbf{X}} \geq\frac{\epsilon[\partial_{r}\phi+\beta\phi]^{2}}{D(M^{2}+r^{2})} +\epsilon\left(\frac{l(l+1)D\tanh\left(\frac{r^{*}-r_{0}^{*}}{M}\right)^{2}}{r ^{3}}+\frac{1}{D(M^{2}+r^{*2})^{2}}\right)|\phi|^{2} \tag{8.18}\] \[\geq\frac{\epsilon[\partial_{r}\phi]^{2}}{D(M^{2}+r^{*2})}+ \epsilon\left(\frac{l(l+1)D\tanh\left(\frac{r^{*}-r_{0}^{*}}{M}\right)^{2}}{r ^{3}}+\frac{1}{D(M^{2}+r^{*2})^{2}}\right)|\phi|^{2}.\]
To bound the \(T\)-energy locally, we can thus consider
\[A\sum_{|\alpha|+j\leq 1}K^{\mathbf{X}}[\partial_{t}^{j}\Omega^{ \alpha}\phi] \geq\frac{-dt(J^{\partial_{t}})}{M^{2}+r^{*2}} \tag{8.19}\] \[A\sum_{|\alpha|+j\leq 1}K^{\mathbf{X}}[\partial_{t}^{j}\Omega^{ \alpha}\phi] \geq A(-dt(J^{\partial_{t}}))\qquad\forall|r^{*}|\leq R, \tag{8.20}\]
where \(\Omega\) are the angular Killing Fields, as given by (3.4.2).
For the \(l=0\) case, we again follow the example of [32] and take \(X=\partial_{r^{*}}\). Given that all angular derivatives vanish, applying the divergence theorem to \(J^{X}\) in the interval \(r^{*}\in(-\infty,r_{0}^{*})\), we obtain
\[\int_{-\infty}^{t_{0}}(\partial_{t}\phi(r_{0}^{*}))^{2}+(\partial_{r^{*}}\phi(r_{0}^{*}))^{2}r^{2}\sin\theta d\theta d\varphi dt+\int_{r^{*}=-\infty}^{r_{0}^{*}}\frac{2D}{r}\int_{-\infty}^{t_{0}}\left(-(\partial_{t}\phi)^{2}+(\partial_{r^{*}}\phi)^{2}\right)r^{2}\sin\theta d\theta d\varphi dtdr^{*}\leq 4T\text{-energy}(\Sigma_{t_{0}}). \tag{8.21}\]
Let
\[F(r^{*}):=\int_{r^{*}-\infty}^{r_{0}^{*}}\frac{2D}{r}\int_{-\infty}^{t_{0}}( \partial_{t}\phi)^{2}r^{2}\sin\theta d\theta d\varphi dtdr^{*}. \tag{8.22}\]
Then (8.21) implies
\[F(r^{*})\leq\frac{2D}{r}F(r^{*})+\frac{8D}{r}T\text{-energy}(\Sigma_{t_{0}^{* }}). \tag{8.23}\]
Noting that \(\int_{r^{*\prime}=-\infty}^{r^{*}}\frac{2D}{r}dr^{*\prime}=2\log\left(\frac{r}{r_{+}}\right)\), an application of Gronwall's inequality yields
\[F(r^{*})\leq A\left(\frac{r^{2}}{r_{+}^{2}}\right)T\text{-energy}(\Sigma_{t_{0}}). \tag{8.24}\]
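For clarity, the factor \(r^{2}/r_{+}^{2}\) in (8.24) is the exponential of the integral noted above, which can be evaluated using the tortoise-coordinate relation \(dr^{*}=dr/D\) (assumed here, as is standard):
\[\int_{r^{*\prime}=-\infty}^{r^{*}}\frac{2D}{r}\,dr^{*\prime}=\int_{r_{+}}^{r}\frac{2}{r^{\prime}}\,dr^{\prime}=2\log\left(\frac{r}{r_{+}}\right),\qquad e^{2\log(r/r_{+})}=\frac{r^{2}}{r_{+}^{2}}.\]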
By applying this to (8.21), we can obtain
\[\left(\frac{r_{+}^{2}}{r_{0}^{2}}\right)\int_{t_{0}}^{\infty} \int_{r^{*}-\infty}^{r_{0}^{*}}\frac{2D}{r}\left(\partial_{t}\phi)^{2}+( \partial_{r^{*}}\phi)^{2}\right)\sin\theta d\theta d\varphi dr^{*}dt\leq AT \text{-energy} \tag{8.25}\] \[\int_{-\infty}^{t_{0}}\left(\int_{\Sigma_{t}\cap\{|r^{*}|\leq R \}}-dt(J^{\partial_{t}})\right)dt\leq AT\text{-energy} \tag{8.26}\]
We now have the result for all \(l\) using \(\Sigma_{t_{0}}\).
Once we note that the region \(\{t\leq t_{0},|r^{*}|\leq R\}\) is entirely in the domain of dependence of \(\bar{\Sigma}_{t_{0},R}\), we can consider the alternative solution, \(\tilde{\phi}\), given by the data of \(\phi\) on \(\bar{\Sigma}_{t_{0},R}\), but vanishing on \(\mathcal{H}^{-}\) and \(\mathcal{I}^{-}\) to the future of \(\bar{\Sigma}_{t_{0},R}\). Evolving this forward to \(\Sigma_{t_{0}}\), we can apply the above result. As \(\tilde{\phi}=\phi\) to the past of \(\bar{\Sigma}_{t_{0},R}\), we have the result.
**Remark 8.3** (Degeneracy at the Photon Sphere): _For the \(l\geq 1\) case, as \(l\to\infty\), the root of the \(h\) function chosen tends towards the root of_
\[1-\frac{3M}{r}+\frac{2M^{2}q^{2}}{r^{2}}=0, \tag{8.27}\]
_known as the photon sphere, \(r=r_{p}\). If we do not require control of the \(T\)-energy at this particular value, then we do not need to include angular derivatives_
\[\int_{t_{0}}^{\infty}\left(\int_{\Sigma_{t}\cap\{t\leq|r^{*}-r_{p}^{*}|\leq R \}}-dt(J^{\partial_{t}})\right)dt\leq A\sum_{j=0}^{1}\int_{\Sigma_{t_{0}}}-dt(J ^{\partial_{t}}[\partial_{t}^{j}\phi]). \tag{8.28}\]
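For concreteness, (8.27) can be solved explicitly (a standard computation; taking the root lying outside the event horizon):
\[r_{p}=\frac{M}{2}\left(3+\sqrt{9-8q^{2}}\right),\]
which reduces to \(3M\) for \(q=0\) and to \(2M\) in the extremal case \(|q|=1\).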
**Remark 8.4** (Forward and higher order ILED): _By sending \(t\to-t\), Proposition 8.2 immediately gives us the result in the forward direction:_
\[\int_{t_{0}}^{\infty}\left(\int_{\Sigma_{t}\cap\{|r^{*}|\leq R\}}-dt(J^{ \partial_{t}})\right)dt\leq A\sum_{j+|\alpha|\leq 1}\int_{\Sigma_{t_{0}}}-dt(J^{ \partial_{t}}[\partial_{t}^{j}\Omega^{\alpha}\phi]). \tag{8.29}\]
_We can also apply the Proposition 8.2 to \(\partial_{t}^{j}\Omega^{\alpha}\phi\) to obtain_
\[\int_{t_{0}}^{\infty}\left(\int_{\Sigma_{t}\cap\{|r^{*}|\leq R\}}|\nabla^{ \alpha}\phi|^{2}\right)dt\leq A\sum_{j+|\alpha|\leq n}\int_{\Sigma_{t_{0}}}-dt(J ^{\partial_{t}}[\partial_{t}^{j}\Omega^{\alpha}\phi]), \tag{8.30}\]
_where we have rewritten terms in \(\nabla^{\alpha}\phi\) involving more than one \(r^{*}\) derivative using (1.2)._
**Proposition 8.5** (Boundedness of \(r^{*}\) Weighted Energy): _Let \(\psi_{+}\) be a Schwartz function. Let \(\psi\) be the solution to (7.1.2) on a sub-extremal Reissner-Nordstrom background \(\mathcal{M}_{RN}\), as given by Theorem 5.1.1, with radiation field on \(\mathcal{I}^{+}\) equal to \(\psi_{+}\), and which vanishes on \(\mathcal{H}^{+}\). Let \(R\) be a constant, and let \(t_{0}\) be a fixed value of \(t\). Then for each \(n\in\mathbb{N}_{0}\), we have the following bounds:_
\[\sum_{j+|t_{0}|\leq n}\int_{\Sigma_{t_{0},R}}(M^{2+2j}+|r^{*}|^{2+2j})dn(J^{ 0}[\Omega^{\alpha}\partial_{t}^{j}\partial_{t}^{j}])\leq A_{n}\sum_{1\leq j+| m|\leq n+1}\int_{-\infty}^{\infty}\left(M^{2j}+u^{2j}\right)(l+1)^{2m}\left| \partial_{t}^{j}\psi_{+}\right|^{2}du, \tag{8.31}\]
_where \(A_{n}=A_{n}(M,q,t_{0},R,n)\)._
Proof.: We start by bounding an \(r^{p}\) weighted norm on \(\Sigma_{u_{0}}\cap r^{*}\leq-R\) for some \(u_{0}\in\mathbb{R}\) and \(R\) large.
\[\int_{\Sigma_{u_{0}}\cap\{r^{*}\leq-R\}}(-R-r^{*})^{p}|\partial_{ v}|^{2}dv =\int_{\omega\geq u_{0},r^{*}\leq-R}-p(-r^{*}-R)^{p-1}|\partial_{ v}\psi|^{2}+(-R-r^{*})^{p}V\partial_{v}(|\psi|^{2})dudv \tag{8.32}\] \[\leq-\int_{\omega\geq u_{0},r^{*}\leq-R}\partial_{v}((-R-r^{*})^{ p}V)|\psi|^{2}dudv\] \[\leq A\int_{\omega\geq u_{0}}^{\infty}\int_{\omega^{*}}V|\psi|^{2 }dudu\] \[\leq A\int_{\omega=u_{0}}^{\infty}\int_{\omega^{*}}^{\infty}| \partial_{v}\psi|^{2}dudvdu=A\int_{w=u_{0}}^{\infty}(u-u_{0})|\partial_{u} \psi_{+}|^{2}du.\]
Here we have used \(T\) energy boundedness to reach the last line, along with an explicit calculation to show that \(-\partial_{v}((-R-r^{*})^{p}V)\leq AV\). \(A\) is a constant which depends on \(M,q\), and the choice of \(R\). Note this calculation applies for all \(p\in\mathbb{N}\) for sub-extremal Reissner-Nordstrom, but in the extremal case this only applies up to \(p=2\). By applying this result to \(\partial_{t}^{j}\Omega^{\alpha}\phi\), we obtain the required bound for \(\Sigma_{t_{0},R}\cap\{r^{*}\leq-R\}\).
For \(r^{*}\in[-R,R]\), we note that \(T\)-energy boundedness of \(\partial_{t}^{j}\Omega^{\alpha}\phi\) is sufficient for our result, as the constant \(A_{n}\) may depend on our choice of \(R\).
For the equivalent result on \(\Sigma_{u_{0}}\cap r^{*}\geq R\), a similar approach does not work, as the \(T\) energy on \(\Sigma_{v}\) does not approach \(0\) as \(v\to\infty\). Instead, we will make use of the vector field multiplier \(u^{2}\partial_{u}+v^{2}\partial_{v}\). Let \(u_{0}\leq v_{0}-R\). This will closely follow the proof of Proposition 8.1 in [8].
\[\int_{\Sigma_{u_{0}}\cap u_{0}\leq v_{0}-R}u^{2}|\partial_{u}\psi |^{2}+v^{2}V|\psi|^{2}du+ \int_{\Sigma_{u_{0}}\cap\{v_{0}\}}v^{2}|\partial_{v}\psi|^{2}+u^{ 2}V|\psi|^{2}du \tag{8.33}\] \[=\int_{\mathcal{I}^{+}\cap\{u\leq v_{0}-R\}}u^{2}|\partial_{u} \psi|^{2}+v^{2}V|\psi|^{2}du+\int_{\Sigma_{u_{0}-R}\cap\{v\geq v_{0}\}}v^{2}| \partial_{v}\psi|^{2}+u^{2}V|\psi|^{2}du\] \[+\int_{u\in[u_{0},u_{0}-R],v\geq v_{0}}\left(\partial_{v}(v^{2}V)+ \partial_{u}(u^{2}V)\right)|\psi|^{2}dudv.\]
We then note
\[\partial_{v}(v^{2}V)+\partial_{u}(u^{2}V)=2tV+tr^{*}V^{\prime}=t \{2V+r^{*}V^{\prime}\}\leq\begin{cases}\frac{4|t|}{N}\leq Ar^{-2}&l=0\\ \frac{4|t|t|\log(\frac{r}{M})}{r}\leq AV\log\left(\frac{r}{M}\right)&l\neq 0 \end{cases}, \tag{8.34}\]
using that \(|t|\leq r^{*}+\max\{v_{0}-R,-v_{0}\}\) in the region we are considering. Here \(A\) depends on the choice of \(v_{0}\) and \(R\). We can then take a supremum of (8.33) over \(u_{0}\leq v_{0}-R\) and \(v\geq v_{0}\) to obtain
\[\sup_{v\geq v_{0}}\int_{\Sigma_{v}\cap u\leq v_{0}-R}u^{2}|\partial _{u}\psi|^{2}+v^{2}V|\psi|^{2}du+ \sup_{\omega\leq v_{0}-R}\int_{\Sigma_{u_{0}}\cap\{v\geq v_{0}\}}v^{2}| \partial_{v}\psi|^{2}+u^{2}V|\psi|^{2}dv \tag{8.35}\] \[\leq\int_{\mathcal{I}^{+}\cap\{u\leq v_{0}-R\}}u^{2}|\partial_{u} \psi|^{2}+v^{2}V|\psi|^{2}du+\int_{\Sigma_{u_{0}-R}\cap\{v\geq v_{0}\}}v^{2}| \partial_{v}\psi|^{2}+u^{2}V|\psi|^{2}dv\] \[+A\int_{v\leq v_{0}-R,v\geq v_{0}}^{\infty}\left(V\log\left(\frac {r}{M}\right)+r^{-2}\right)|\psi|^{2}dudv.\]
We can bound the final integral using the following:
\[\int_{u=-\infty}^{v_{0}-R}\int_{v=v_{0}}^{\infty}\left(V\log\left( \frac{r}{M}\right)+r^{-2}\right)|\psi|^{2}dvdu \leq\int_{u=-\infty}^{v_{0}-R}\int_{v=v_{0}}^{\infty}\left(V\log\left( \frac{-u}{M}\right)+V\log\left(\frac{v}{M}\right)+u^{-2}\right)|\psi|^{2}dvdu \tag{8.36}\] \[\leq A\int_{u=-\infty}^{v_{0}-R}u^{-2}\log\left(\frac{-u}{M} \right)\int_{v=u}^{\infty}u^{2}V|\psi|^{2}dvdu\] \[+A\int_{v=v_{0}}^{\infty}v^{-2}\log\left(\frac{v}{M}\right)\int_{u =-\infty}^{v_{0}-R}v^{2}V|\psi|^{2}dudv\] \[+A\int_{u=-\infty}^{v_{0}-R}u^{-2}\int_{v=v_{0}}^{\infty}|\psi|^{2} dudv\] \[\leq\epsilon\sup_{u\leq v_{0}-R}\int_{\Sigma_{v}\cap\{v\geq v_{0}\}} v^{2}|\partial_{v}\psi|^{2}+u^{2}V|\psi|^{2}du\] \[+\epsilon\sup_{v\leq v_{0}-R}\int_{\Sigma_{v}\cap\{v\geq v_{0}\}} u^{2}|\partial_{v}\psi|^{2}+u^{2}V|\psi|^{2}du\] \[+\epsilon\sup_{v\leq v_{0}-R}\int_{\Sigma_{v}\cap\{v\geq v_{0}\}} v^{2}|\psi|^{2}dv,\]
where \(v_{0}\) and \(R\) are sufficiently large.
We can then apply Hardy's inequality to \(\chi(1+R/M)\psi\) (\(\chi\) as in Proposition 7.3.6) to get
\[\sup_{v\geq v_{0}-R}\int_{\Sigma_{v}\cap(v\geq v_{0})}|\psi|^{2}dv\leq A\sup_{u \leq v_{0}-R}\int_{\Sigma_{v}\cap(v\geq v_{0})}V|\psi|^{2}dv+\sup_{u\leq v_{0}-R }\int_{\Sigma_{v}\cap(v\geq v_{0})}v^{2}|\partial_{v}\psi|^{2}dv. \tag{8.37}\]
We can then rearrange (8.35) to see
\[\sup_{v\geq v_{0}}\int_{\Sigma_{v}\cap v_{0}\leq v_{0}-R}u^{2}| \partial_{u}\psi|^{2}+v^{2}V|\psi|^{2}du +\sup_{u\leq v_{0}-R}\int_{\Sigma_{v}\cap(v\geq v_{0})}v^{2}|\partial_{v} \psi|^{2}+u^{2}V|\psi|^{2}dv \tag{8.38}\] \[\leq A\int_{\mathcal{I}^{+}\cap(u\leq v_{0}-R)}u^{2}|\partial_{u} \psi|^{2}+v^{2}V|\psi|^{2}du+A\int_{\Sigma_{v_{0}-R}\cap(v\geq v_{0})}v^{2}| \partial_{v}\psi|^{2}+u^{2}V|\psi|^{2}dv.\]
By taking an appropriate limit of this, we can see that
\[\int_{\Sigma_{v_{0}}\cap u\leq v_{0}-R}u^{2}|\partial_{u}\psi|^{2 }+v^{2}V|\psi|^{2}du+ \int_{\mathcal{I}^{-}\cap(v\geq v_{0})}v^{2}|\partial_{v}\psi|^{ 2}+u^{2}V|\psi|^{2}dv \tag{8.39}\] \[\leq A\int_{\mathcal{I}^{+}\cap(u\leq v_{0}-R)}u^{2}|\partial_{u} \psi|^{2}+v^{2}V|\psi|^{2}du+A\int_{\Sigma_{v_{0}-R}\cap(v\geq v_{0})}v^{2}| \partial_{v}\psi|^{2}+u^{2}V|\psi|^{2}dv.\]
We can also consider a time reversal of this statement to get
\[\int_{\mathcal{I}^{+}\cap(u\leq v_{0}-R)}u^{2}|\partial_{u}\psi|^ {2}+v^{2}V|\psi|^{2}du+ \int_{\Sigma_{v_{0}-R}\cap(v\geq v_{0})}v^{2}|\partial_{v}\psi|^{ 2}+u^{2}V|\psi|^{2}dv \tag{8.40}\] \[\leq A\int_{\Sigma_{v}\cap u\leq v_{0}-R}u^{2}|\partial_{u}\psi| ^{2}+v^{2}V|\psi|^{2}du+A\int_{\mathcal{I}^{-}\cap(v\geq v_{0})}v^{2}| \partial_{v}\psi|^{2}+u^{2}V|\psi|^{2}dv.\]
In order to add more \(u\) and \(v\) weighting to this, we commute with the vector field \(S=u\partial_{u}+v\partial_{v}\).
\[(\partial_{u}\partial_{v}+V)S(f)=S[(\partial_{u}\partial_{v}+V)f]+(2V-r^{*} \partial_{r^{*}}V)f+2(\partial_{u}\partial_{v}+V)f. \tag{8.41}\]
Thus an easy induction argument gives
\[|(\partial_{u}\partial_{v}+V)S^{n}\psi|\leq A|V-r^{*}\partial_{r^{*}}V|\sum_{k=0}^{n-1}|S^{k}\psi|, \tag{8.42}\]
noting that
\[|\partial_{r^{*}}^{\rho}(V-r^{*}\partial_{r^{*}}V)|\leq A|V-r^{*}\partial_{r^ {*}}V|\leq\frac{A(l+1)^{2}\log\left(\frac{r}{M}\right)}{r^{3}}. \tag{8.43}\]
Repeating (8.35), but applied to \(S^{n}\psi\), we obtain
\[F_{n}:=\sup_{v\geq v_{0}}\int_{\Sigma_{v}\cap u\leq v_{0}-R}u^{2} |\partial_{u}S^{n}\psi|^{2}+v^{2}V|S^{n}\psi|^{2}du+\sup_{u\leq v_{0}-R}\int_ {\Sigma_{v}\cap(v\geq v_{0})}v^{2}|\partial_{v}S^{n}\psi|^{2}+u^{2}V|S^{n} \psi|^{2}du \tag{8.44}\] \[\leq\int_{\mathcal{I}^{+}\cap(u\leq v_{0}-R)}u^{2}|\partial_{u}S ^{n}\psi|^{2}+v^{2}V|S^{n}\psi|^{2}du+\int_{\Sigma_{v_{0}-R}\cap(v\geq v_{0})} v^{2}|\partial_{v}S^{n}\psi|^{2}+u^{2}V|S^{n}\psi|^{2}du\] \[\qquad+A\int_{\Sigma_{v}\cap u\leq v_{0}-R,v\geq v_{0}}\left(V \log\left(\frac{r}{M}\right)+r^{-2}\right)|S^{n}\psi|^{2}dudv\] \[\qquad+A\sum_{k=0}^{n-1}\int_{u\leq v_{0}-R,v\geq v_{0}}\frac{(l +1)^{2}\log\left(\frac{r}{M}\right)}{r^{3}}|S^{k}\psi|^{2}|u^{2}\partial_{u}S^{ n}\psi+v^{2}\partial_{v}S^{n}\psi|\,dudv\] \[\leq\int_{\mathcal{I}^{+}\cap(u\leq v_{0}-R)}u^{2}|\partial_{u}S ^{n}\psi|^{2}+v^{2}V|S^{n}\psi|^{2}du+\int_{\Sigma_{v_{0}-R}\cap(v\geq v_{0})} v^{2}|\partial_{v}S^{n}\psi|^{2}+u^{2}V|S^{n}\psi|^{2}du\] \[\qquad+AeF_{n}+AF_{n}^{1/2}\sum_{k=0}^{n-1}\left(\int_{u=-\infty} ^{v_{0}-R}\left(l+1\right)^{4}\log\left(\frac{(v_{0},v_{0})}{M}\right)^{2} \right)^{1/2}\] \[\leq A\int_{\mathcal{I}^{+}\cap(u\leq v_{0}-R)}u^{2}|\partial_{u}S ^{n}\psi|^{2}+v^{2}V|S^{n}\psi|^{2}du+A\int_{\Sigma_{v_{0}-R}\cap(v\geq v_{0})} v^{2}|\partial_{v}S^{n}\psi|^{2}+u^{2}V|S^{n}\psi|^{2}du\] \[\qquad+AF_{n}^{1/2}\sum_{k=0}^{n-1}\left(\int_{u=-\infty}^{v_{0}-R }\left(l+1\right)^{4}\log\left(\frac{r(u,v_{0})}{M}\right)^{2}\right)^{1/2} \] \[\leq A\int_{\mathcal{I}^{+}\cap(u\leq v_{0}-R)}u^{2}|\partial_{u}S ^{n}\psi|^{2}+v^{2}V|S^{n}\psi|^{2}du+A\int_{\Sigma_{v_{0}-R}\cap(v\geq v_{0})} v^{2}|\partial_{v}S^{n}\psi|^{2}+u^{2}V|S^{n}\psi|^{2}du\] \[\qquad+Ae(l+1)F_{n}^{1/2}\sum_{k=0}^{n-1}F_{k}^{1/2}\] \[\leq A\int_{\mathcal{I}^{+}\cap(u\leq v_{0}-R)}u^{2}|\partial_{u}S ^{n}\psi|^{2}+v^{2}V|S^{n}\psi|^{2}du+A\int_{\Sigma_{v_{0}-R}\cap(v\geq v_{0})} v^{2}|\partial_{v}S^{n}\psi|^{2}+u^{2}V|S^{n}\psi|^{2}du\] \[\qquad+A(l+1)^{2}\sum_{k=0}^{n-1}F_{k}.\]
As \(F_{0}\) is bounded by (8.39), we can inductively obtain
\[\sum_{k+m\leq n}\Bigg{(}\int_{\Sigma_{m\cap n\leq v_{0}-R}}u^{2}(l+1)^ {2m}|\partial_{u}(S^{k}\psi)|^{2}+v^{2}V(l+1)^{m}|S^{k}\psi|^{2}du \tag{8.45}\] \[\qquad\qquad\qquad\qquad\qquad\qquad+\int_{\mathcal{I}^{-}(\{v >v_{0}\}}v^{2}(l+1)^{2m}|\partial_{v}((v\partial_{v})^{k}\psi)|^{2}+u^{2}V(l+ 1)^{2m}|((v\partial_{v})^{k}\psi)|^{2}dv\Bigg{)}\] \[\leq A\sum_{k+m\leq n}\Bigg{(}\int_{\mathcal{I}^{+}(\{v\leq v_{0} -R\}}u^{2}(l+1)^{2m}|\partial_{u}((u\partial_{u})^{k}\psi)|^{2}+v^{2}V(l+1)^{ 2m}|((u\partial_{u})^{k}\psi)|^{2}du\] \[\qquad\qquad\qquad\qquad+\int_{\Sigma_{m\cap n\leq v_{0}}}v^{2}(l +1)^{2m}|\partial_{v}(S^{k}\psi)|^{2}+u^{2}V(l+1)^{2m}|(S^{k}\psi)|^{2}dv \Bigg{)},\]
along with the time reversed result
\[\sum_{k+m\leq n}\Bigg{(}\int_{\mathcal{I}^{+}(\{v\leq v_{0}-R\}} u^{2}(l+1)^{2m}|\partial_{u}((u\partial_{u})^{k}\psi)|^{2}+v^{2}V(l+1)^{2m}|((u \partial_{u})^{k}\psi)|^{2}du\] \[\qquad\qquad\qquad\qquad+\int_{\Sigma_{m\cap n\leq v_{0}}}v^{2}(l +1)^{2m}|\partial_{v}(S^{k}\psi)|^{2}+u^{2}V(l+1)^{2m}|(S^{k}\psi)|^{2}dv\Bigg{)}\] \[\leq A\sum_{k+m\leq n}\Bigg{(}\int_{\Sigma_{m\cap n\leq v_{0}-R}} u^{2}(l+1)^{2m}|\partial_{u}(S^{k}\psi)|^{2}+v^{2}V(l+1)^{2m}|(S^{k}\psi)|^{2}du \tag{8.46}\] \[\qquad\qquad\qquad+\int_{\mathcal{I}^{-}\cap\{v\geq v_{0}\}}v^{2} (l+1)^{2m}|\partial_{v}((v\partial_{v})^{k}\psi)|^{2}+u^{2}V(l+1)^{2m}|((v \partial_{v})^{k}\psi)|^{2}dv\Bigg{)}.\]
All that is now left for the result is to bound
\[\sum_{k+m\leq n}\int_{\Sigma_{m\cap n\leq v_{0}-R}\cap\{v\geq v_{0}\}}v^{2}(l +1)^{2m}|\partial_{v}(S^{k}\psi)|^{2}+u^{2}V(l+1)^{2m}|(S^{k}\psi)|^{2}dv \leq\sum_{k+m+j\leq n}\int_{\Sigma_{m\cap n\geq v_{0}}}v^{2+2k}(l+1)^{2m}| \partial_{v}^{k+1}\partial_{v}^{j}\psi|^{2}\] \[\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad+v^{2k}V( l+1)^{2m}|(\partial_{v}^{k}\partial_{v}^{j}\psi)|^{2}dv, \tag{8.47}\]
for fixed and arbitrarily large \(R,v_{0}\). We have used (7.1.2) to remove any \(\partial_{u}\partial_{v}\) derivatives, and have replaced any \(\partial_{u}\) derivatives with \(\partial_{t}+\partial_{v}\) derivatives. As \(\partial_{t}\) and \(\Omega\) are Killing fields, it is sufficient to bound
\[\int_{\Sigma_{m\cap n\leq-R}\cap\{v\geq v_{0}\}}v^{2k+2}|\partial_{v}^{k+1} \psi|^{2}dv\leq A\int_{\Sigma_{m\cap n\leq-R}}\chi\left(\frac{r-R}{M}-1\right) r^{2(k+1)}|\partial_{v}^{k+1}\psi|^{2}dv, \tag{8.48}\]
for \(k\geq 0\).
We can immediately apply Proposition 7.3.6 (with time reversed) to obtain the \(k=0\) case
\[\int_{\Sigma_{m\cap n\geq-R}}\chi r^{2}|\partial_{v}\psi|^{2}du\leq A\int_{ \mathcal{I}^{+}}(M^{2}+u^{2})|\partial_{u}\psi_{t}|^{2}+l(l+1)|\psi_{+}|^{2}du. \tag{8.49}\]
Here the constant \(A\) depends on choice of \(v_{0}\) and \(R\). We would now like to generalise this to the following result (closely based on Proposition 7.7 in [8]).
\[\int_{\Sigma_{m\cap n\leq-R}}\chi\left(\frac{r-R}{M}-1\right)r^{2k}|\partial_{ v}^{k}\psi|^{2}dv\leq A\sum_{1\leq m+j\leq k}\int_{u=v_{0}-R}^{\infty}\left(M^{2m} +(u-u_{R})^{2m}\right)(l+1)^{2j}|\partial_{u}^{m}\psi_{+}|^{2}du, \tag{8.50}\]
where \(A\) depends on \(M,q,n,R\). From here, we will denote \(v_{0}-R=u_{R}\).
We will prove this inductively. First, we consider commuting (7.1.2) with \(\partial_{v}\) to obtain
\[\partial_{u}\partial_{v}(\partial_{v}^{n}\psi)+V\partial_{v}^{n}\psi=-\partial_{v}^{n}(V\psi)+V\partial_{v}^{n}\psi=-\sum_{j=0}^{n-1}\binom{n}{j}\partial_{v}^{n-j}V\partial_{v}^{j}\psi\leq A\sum_{j=0}^{n-1}\frac{(l+1)^{2}}{r^{2+n-j}}|\partial_{v}^{j}\psi|. \tag{8.51}\]
We then look at applying this to the following generalisation of the left hand side of (8.50)
\[\int_{\Sigma_{m\cap n\leq R}}\chi r^{p}|\partial_{v}^{k}\psi|^{2}dv =\int_{u\geq u_{R}}D(\chi r^{p}+p\chi r^{p-1})|\partial_{v}^{k}\psi|^{2}+2 \left(\chi r^{p}\partial_{v}^{k}\psi\sum_{j=0}^{k-1}\binom{k}{j}\partial_{v}^{ k-j}V\partial_{v}^{j}\psi\right)-D\partial_{r}-\left(\chi r^{p}V\right)| \partial_{v}^{k-1}\psi|dudv\] \[\qquad\qquad+\int_{\mathcal{I}^{+}}r^{p-2}l(l+1)|\partial_{v}^{k-1 }\psi_{+}|^{2}du \tag{8.52}\] \[\leq A\int_{u\geq u_{R},r^{\prime}\in[R,R+M]}\sum_{m+j\leq k-1}-dt(l +1)^{2j}J^{\alpha}|\partial_{v}^{m}\psi|)dudv+A\int_{u\geq u_{R}}\chi\sum_{j=0}^{k }\frac{(l+1)^{2j}}{r^{1+2j-p}}|\partial_{v}^{k-j}\psi|^{2}dudv\] \[\leq A\int_{u=u_{R}}\sum_{m+j\leq k-1}(l+1)^{2j}|\partial_{u}^{m+1 }\psi_{+}|^{2}du+A\int_{u\geq u_{R}}\chi\sum_{j=0}^{k}\frac{(l+1)^{2j}}{r^{1+2j -p}}|\partial_{v}^{k-j}\psi|^{2}dudv,\]
where we have used Proposition 8.2.
For our induction argument, we will assume we have proved (8.50) for \(k\leq n\), where \(n\geq 1\). We first consider (8.52), with \(k=n+1\) and \(p=1+2n\).
\[\int_{\Sigma_{n=k_{R}}}\chi^{n+2n}|\partial_{v}^{n+1}\psi|^{2}dv \leq\int_{u=u_{R}}\sum_{m+j\leq n}(l+1)^{2j}|\partial_{u}^{m+1} \psi_{+}|^{2}du+\int_{u\geq u_{R}}\chi\sum_{j=0}^{n+1}\frac{(l+1)^{2j}}{r^{2(j- n)}}|\partial_{v}^{1+n-j}\psi|^{2}dudv \tag{8.53}\] \[\leq A\int_{u=u_{R}}\sum_{m+j\leq n}(l+1)^{2j}|\partial_{u}^{m+1} \psi_{+}|^{2}du\] \[\qquad+A\int_{u\geq u_{R}}\chi\sum_{j=0}^{n}\frac{(l+1)^{2j}}{r^{2 (j-n)}}|\partial_{v}^{1+n-j}\psi|^{2}+(l+1)^{2n}\chi|\partial_{v}\psi|^{2}dudv\] \[\leq A\int_{u=u_{R}}\sum_{m+j\leq n}(l+1)^{2j}|\partial_{u}^{m+1} \psi_{+}|^{2}du+A\int_{u\geq u_{R}}\chi\sum_{j=0}^{n}(l+1)^{2(n-j)}r^{2j}| \partial_{v}^{j}(\partial_{t}+\partial_{u})\psi|^{2}dudv\] \[\qquad+A\int_{u=u_{R}}\sum_{m+j\leq n}(l+1)^{2j}|\partial_{u}^{m+ 1}\psi_{+}|^{2}du+A\int_{u\geq u_{R}}\chi\sum_{j=0}^{n}(l+1)^{2(n-j)}r^{2j}| \partial_{v}^{j-1}(V\psi)|^{2}dudv\] \[\qquad+A\int_{u=u_{R}}\sum_{1\leq m+j\leq n}\int_{u^{\prime}=u}^{ \infty}\left(M^{2m}+(u-u_{R})^{2m}\right)(l+1)^{2j}|\partial_{u}^{m}\partial_{ v}\psi_{+}|^{2}du^{\prime}du\] \[\leq A\sum_{0\leq m+j\leq n}\int_{u=u_{R}}^{\infty}\left(M^{2m}+(u -u_{R})^{2m+1}\right)(l+1)^{2j}|\partial_{u}^{m+1}\psi_{+}|^{2}du\] \[\qquad+A\int_{u\geq u_{R}}\chi\sum_{j=0}^{n}(l+1)^{2(n-j)+2}r^{2 j}|\partial_{v}^{j-1}(V\psi)|^{2}dudv\] \[\leq A\int_{u=u_{R}}\sum_{1\leq m+j\leq n+1}(l+1)^{2j}|\partial_{u }^{m}\psi_{+}|^{2}du,\]
where we have used that \(\partial_{t}\) is a Killing field along with our induction hypothesis in the final three lines.
We then proceed to prove (8.50):
\[\int_{\Sigma_{n=k_{R}}}\chi^{n+2n}|\partial_{v}^{n+1}\psi|^{2}dv \leq A\int_{u=u_{R}}\sum_{m+j\leq n}(l+1)^{2j}|\partial_{u}^{m+1} \psi_{+}|^{2}du \tag{8.54}\] \[\qquad+A\int_{u\geq u_{R}}\chi\sum_{j=0}^{n}\frac{(l+1)^{2j}}{r^{ 2(j-n)}-1}|\partial_{v}^{1+n-j}\psi|^{2}+(l+1)^{2n}r\chi|\partial_{v}\psi|^{2} dudv\] \[\leq A\int_{u=u_{R}}\sum_{m+j\leq n}(M+(u-u_{R}))(l+1)^{2j}| \partial_{u}^{m+1}\psi_{+}|^{2}du\] \[\qquad+A\int_{u\geq u_{R}}\chi\sum_{j=0}^{n}(l+1)^{2(n-j)}r^{2j+1 }|\partial_{v}^{j}(\partial_{t}+\partial_{u})\psi|^{2}dudv\] \[\leq A\sum_{1\leq m+j\leq n+1}\int_{u=u_{R}-R}^{\infty}\left(M^{2 m}+(u-u_{R})^{2m}\right)(l+1)^{2j}|\partial_{u}^{m}\psi_{+}|^{2}du,\]
applying (8.53), along with reasoning identical to that used in its derivation.
**Proposition 8.6** (Integrated Decay of Higher Order Energy): _Let \(\psi_{+}\) be a Schwartz function. Let \(\psi\) be the solution of (7.1.2), as given by Theorem 5.1.1, on a sub-extremal Reissner-Nordstrom background \(\mathcal{M}_{RN}\), with radiation field on \(\mathcal{I}^{+}\) equal to \(\psi_{+}\), and which vanishes on \(\mathcal{H}^{+}\). Let \(R\) be a constant, and let \(t_{0}\) be a fixed value of \(t\). Then for each \(n\in\mathbb{N}_{0}\), we have the following bounds:_
\[\int_{t_{0},n+1}^{t_{0}}\int_{t_{0},n+1}^{t_{0}}\int_{t_{2n},n- \infty}^{t_{0}}\cdots\int_{t_{1}=-\infty}^{t_{2}}\int_{t_{1}=-\infty}^{t_{1}} \left(\int_{t_{0},n}-dt(J^{3}[\partial_{t}^{n}\phi])\right)dtdt_{1}dt_{2}dt_{ 2n+1} \tag{8.55}\] \[\qquad+\sum_{j+|\alpha|=m\leq n}\int_{v=u_{R}+R,r^{\ast}\geq R} \int_{v\leq u_{R}+R,r^{\ast}\geq R}r^{1+2j}\left(|\partial_{v}^{1+j}\partial_{t }^{m}\Omega^{\alpha}\psi|^{2}+jV|\partial_{v}^{j}\partial_{t}^{m}\Omega^{ \alpha}\psi|^{2}\right)dudv\] \[\qquad+\sum_{j+|\alpha|=m\leq n}\int_{u=u_{R}+R,r^{\ast}\leq-R} \stackrel{{(-r^{\ast})^{1+2j}}}{{\left(|\partial_{v}^{1+j} \partial_{t}^{m}\Omega^{\alpha}\psi|^{2}+(-r^{\ast})V|\partial_{v}^{j}\partial_ {t}^{m}\Omega^{\alpha}\psi|^{2}\right)dudv}\] \[\leq A_{n}\sum_{j+|\alpha|+m\leq n}\int_{v=u_{R}+R,r^{\ast}\geq R }r^{2+2j}|\partial_{u}^{1+j}\partial_{t}^{m}\Omega^{\alpha}\psi|^{2}du\] \[\qquad+A_{n}\sum_{j+|\alpha|+m\leq n}\int_{u=u_{R}+R,r^{\ast}\leq-R }\stackrel{{(-r^{\ast})^{2+2j}}}{{\left(|\partial_{v}^{1+j}\partial_ {t}^{m}\Omega^{\alpha}\psi|^{2}\right)}}\] \[\qquad+A_{n}\sum_{j+|\alpha|+m\leq n}\int_{v=u_{R}+R,r^{\ast}\leq -R}\stackrel{{(-r^{\ast})^{1+2j}}}{{\left(|\partial_{v}^{1+j} \partial_{t}^{m}\Omega^{\alpha}\psi|^{2}\right)}}\] \[\leq A_{n}\sum_{j+|\alpha|+m\leq n}\int_{v=u_{R}+R,r^{\ast}\geq R} r^{2+2j}|\partial_{u}^{1+j}\partial_{t}^{m}\Omega^{\alpha}\psi|^{2}du\] \[\qquad+A_{n}\sum_{j+|\alpha|+m\leq n}\int_{v=u_{R}+R,r^{\ast}\leq -R}\stackrel{{(-r^{\ast})^{2+2j}}}{{\left(|\partial_{v}^{1+j} \partial_{t}^{m}\Omega^{\alpha}\psi|^{2}\right)}}\] \[\qquad+A_{n}\sum_{j+|\alpha|\leq n+2}\int_{\Sigma_{n}}-dt(J^{3 }[\partial_{t}^{j}\Omega^{\alpha}\phi]),\]
_where \(A_{n}=A_{n}(M,q,n,R)\)._
Proof.: This proof again closely follows that of [8]. We will consider the \(T\)-energy through a null foliation, \(\widetilde{\Sigma}_{t,R}\) (see (4.4)).
We first look at how the wave operator commutes with both \(\partial_{u}\) and \(\partial_{v}\):
\[\partial_{u}\partial_{v}(\partial_{u}^{n}\psi)+V\partial_{u}^{n}\psi=-\partial_{u}^{n}(V\psi)+V\partial_{u}^{n}\psi=\sum_{j=0}^{n-1}{n\choose j}(-1)^{n-j+1}\partial_{r^{*}}^{n-j}V\partial_{u}^{j}\psi\leq A\sum_{j=0}^{n-1}{V\over r^{n-j}}|\partial_{u}^{j}\psi| \tag{8.56}\] \[\partial_{u}\partial_{v}(\partial_{v}^{n}\psi)+V\partial_{v}^{n}\psi=-\partial_{v}^{n}(V\psi)+V\partial_{v}^{n}\psi=-\sum_{j=0}^{n-1}{n\choose j}\partial_{r^{*}}^{n-j}V\partial_{v}^{j}\psi\leq A\sum_{j=0}^{n-1}V\kappa^{n-j}|\partial_{v}^{j}\psi| \tag{8.57}\]
We apply the \(r^{p}\) and \((-r^{*})^{p}\) methods to the null segments of \(\widetilde{\Sigma}_{t_{0},R}\) to obtain:
\[\int_{r=t_{0}+R,r^{*}\geq R}r^{p}\chi\left({r^{*}-R\over M} \right)|\partial_{v}^{k}\psi|^{2}du =\int_{v\leq t_{0}+R,r^{*}\geq R}\left(pr^{p-1}D\chi+{r^{p}\over M }\chi^{\prime}\right)|\partial_{v}^{k}\psi|^{2}-r^{p}\chi V\partial_{u}\left(| \partial_{u}^{k-1}\psi|^{2}\right)dvdu \tag{8.58}\] \[\qquad+\int_{v\leq t_{0}+R,r^{*}\geq R}2^{p}\chi\Re\left(\partial _{v}^{k}\sum_{j=0}^{k-2}{k-1\choose j}(-1)^{k-j}\partial_{r^{*}-1-j}^{k-1}V \partial_{v}^{j}\psi\right)dudv\] \[\geq\int_{v\leq t_{0}+R,r^{*}\geq R}\left(pr^{p-1}D\chi+{r^{p} \over M}\chi^{\prime}\right)|\partial_{v}^{k}\psi|^{2}-\partial_{r^{*}}(r^{p} \chi V)\left(|\partial_{u}^{k-1}\psi|^{2}\right)dvdu\] \[\qquad-A\int_{v\leq t_{0}+R,r^{*}\geq R}r^{p}\chi|\partial_{v}^{ k}\psi|\sum_{j=0}^{k-2}{Vr^{1-k+j}}|\partial_{v}^{j}\psi|dudv+\int_{\mathcal{I} ^{-}}r^{p}V|\partial_{u}^{k-1}\psi|^{2}dv\] \[\geq a\int_{v\leq t_{0}+R,r^{*}\geq R}\chi r^{p-1}\left(p| \partial_{v}^{k}\psi|^{2}+(p-2)V|\partial_{u}^{k-1}\psi|^{2}\right)dudv\] \[\qquad-A\int_{R\leq r^{*}\leq M+R,t\leq t_{0}}|\partial_{u}^{k} \psi|^{2}d+V|\partial_{u}^{k-1}\psi|^{2}dr^{*}dt\] \[\qquad-A\sum_{j=0}^{k-2}\int_{v\leq t_{0}+R,r^{*}\geq R}\chi V^{ \prime}r^{3-2k+2j+1p}|\partial_{v}^{j}\psi|^{2}dudv.\]
\[\int_{u=t_{0}+R}(-r^{*})r\chi\left({-r^{*}-R\over M}\right)| \partial_{v}^{k}\psi|^{2}dv =\int_{u\leq t_{0}+R}\left(p(-r^{*})r^{-1}\chi+{(-r^{*})^{p}\over M }\chi^{\prime}\right)|\partial_{v}^{k}\psi|^{2}\] \[\qquad-(-r^{*})^{p}\chi V\partial_{v}\left(|\partial_{v}^{k-1} \psi|^{2}\right)-2\chi(-r^{*})^{p}\Re\left(\partial_{v}^{k}\psi\sum_{j=0}^{k-2 }{k-1\choose j}\partial_{r^{*}}^{k-1-j}V\partial_{v}^{j}\psi\right)dvdu\] \[\geq\int_{u\leq t_{0}+R}\left(p(-r^{*})r^{-1}\chi+{(-r^{*})^{p} \over M}\chi^{\prime}\right)|\partial_{v}^{k}\psi|^{2}+\partial_{r^{*}}((-r^{*}) r^{p}\chi V)\left(|\partial_{v}^{k-1}\psi|^{2}\right)\] \[\qquad-A\chi(-r^{*})^{p}|\partial_{v}^{k}\psi|\sum_{j=0}^{k-2}{V \kappa^{k-1-j}}|\partial_{v}^{j}\psi|dvdu \tag{8.59}\] \[\geq a\int_{u\leq t_{0}+R}\chi(-r^{*})^{p-1}\left(p|\partial_{v}^ {k}\psi|^{2}+(-r^{*}\kappa-p)V|\partial_{v}^{k-1}\psi|^{2}\right)dudv\] \[\qquad-A\int_{-r^{*}\geq M+R,t\leq t_{0}}|\partial_{v}^{k}\psi|^{ 2}+V|\partial_{v}^{k-1}\psi|^{2}dr^{*}dt\] \[\qquad-A\sum_{j=0}^{k-2}\int_{u\leq t_{0}+R}\chi(-r^{*})^{p+1} \kappa^{2k-2j-2}V^{2}|\partial_{v}^{j}\psi|^{2}dudv.\]
By summing (8.58) and (8.59) when \(p=1\), \(k=1\) (as then the two summations vanish), we obtain:
\[\int_{v=t_{0}+R,r^{*}\geq R}r\chi\left({r^{*}-R\over M}\right)| \partial_{u}\psi|^{2}du +\int_{u=t_{0}+R}(-r^{*})\chi\left({-r^{*}-R\over M}\right)| \partial_{v}\psi|^{2}dv+\int_{\Sigma_{0}}\sum_{j=0}^{1}-(l+1)^{2-2j}dt(J^{ \partial_{v}}[\partial_{t}^{j}\phi])\] \[\geq a\int_{t=-\infty}^{t_{0}}T\text{-energy}(\widetilde{\Sigma}_{t,R})dt \tag{8.60}\]
Here we have used Proposition 8.2.
We then consider the \(p=2\), \(k=1\) case to obtain
\[\int_{v=t_{0}+R,r^{*}\geq R}r^{2}\chi\left({r^{*}-R\over M} \right)|\partial_{u}\psi|^{2}du +\int_{u=t_{0}+R}(-r^{*})^{2}\chi\left({-r^{*}-R\over M}\right)| \partial_{v}\psi|^{2}dv+\int_{\Sigma_{0}}\sum_{j=0}^{2}-(l+1)^{4-2j}dt(J^{ \partial_{v}}[\partial_{t}^{j}\Omega^{\alpha}\phi])\] \[\geq a\int_{t=-\infty}^{t_{0}}\left(\int_{v=t+R}r\chi\left({r^{*}-R \over M}\right)|\partial_{u}\psi|^{2}du\right. \tag{8.61}\] \[\qquad+\int_{u=t_{0}+R}(-r^{*})\chi\left({-r^{*}-R\over M}\right) \left(|\partial_{v}\psi|^{2}+((-r^{*})\kappa-2)V|\psi|^{2}\right)dv\] \[\qquad+\int_{\Sigma_{1}}\sum_{j=0}^{1}-(l+1)^{2-2j}dt(J^{\partial_{ v}}[\partial_{t}^{j}\phi])\] \[\geq a\int_{t=-\infty}^{t_{0}}\int_{t^{\prime}=-\infty}^{t}T\text{- energy}(\Sigma_{t^{\prime},R})dt^{\prime}dt.\]
By using the mean value theorem and \(T\)-energy boundedness (see [33] for an example of this), one can thus obtain
\[T\text{-energy}(\widetilde{\Sigma}_{t,R})\leq A(-t)^{-2}\int_{\mathcal{I}^{+}}(M^{2}+u^{2})|\partial_{u}\psi_{+}|^{2}+l(l+1)|\psi_{+}|^{2}du. \tag{8.62}\]
By considering \(T\)-energy boundedness between \(\widetilde{\Sigma}_{t,R}\) and \(\mathcal{H}^{-}\cup\mathcal{I}^{-}\), we can also obtain:
\[\int_{t=-\infty}^{t_{0}}\int_{t^{\prime}=-\infty}^{t} \left(\int_{u=-\infty}^{t^{\prime}+R}|\partial_{u}\psi_{t^{-}}|^{2} du+\int_{v=-\infty}^{t^{\prime}+R}|\partial_{v}\psi_{-}|^{2}dv\right) \tag{8.63}\] \[=\int_{u=-\infty}^{t_{0}+R}(u-t_{0}-R)^{2}|\partial_{u}\psi_{t^{ -}}|^{2}du+\int_{v=-\infty}^{t_{0}+R}(v-t_{0}-R)^{2}|\partial_{v}\psi_{RN}|^{2}dv\] \[\leq A\int_{\mathcal{I}^{+}}(M^{2}+u^{2})|\partial_{u}\psi_{+}|^ {2}+l(l+1)|\psi_{+}|^{2}du.\]
We now proceed to prove the result inductively, given that the case \(n=0\) is (8.61) (provided \(R>3\kappa\)). We first look to bound the \(r\) and \(r^{*}\) weighted summations. We take \(p=2+2n,k=n+1\) in (8.58)
\[\int_{v\leq l_{0}+R}\chi^{r1+2n}\left(|\partial_{u}^{1+n}\psi|^{2 }+V|\partial_{u}^{n}\psi|^{2}\right)dudv \leq A\int_{v\neq l_{0}+R}\chi^{r2+2n}|\partial_{u}^{1+n}\psi|^{2}du \tag{8.64}\] \[\quad+A\sum_{R\leq r^{*}\leq M+R,t\geq 2n}|\partial_{u}^{0+1}| \partial_{u}^{n+1}\psi|^{2}+V|\partial_{u}^{n}\psi|^{2}d\mathbf{r}^{*}dt\] \[\quad+A\sum_{m=1}^{n-1}\int_{v\leq l_{0}+R}\chi(l+1)^{2}r^{1+2m}V |\partial_{u}^{m}\psi|^{2}dudv\] \[\quad+\int_{v\leq l_{0}+R}2\chi r^{2+2n}\mathbb{R}\left(\partial_ {u}^{n+1}\ddot{\psi}(-1)^{n}\partial_{v}^{n}V\psi\right)dudv\] \[\leq A\int_{v=l_{0}+R}\chi r^{2+2n}|\partial_{u}^{1+n}\psi|^{2}du +A\sum_{m+|\alpha|\leq n+1}(l+1)^{2}\int_{\Sigma_{l_{0}}}-dt(J^{\alpha_{0}}[ \partial_{u}^{n}\psi])\] \[\quad+A\sum_{j+k+m\leq 0}\int_{v=l_{0}+R,r^{*}\geq R}r^{2+2j}(l+1) ^{2k}|\partial_{u}^{1+j}\partial_{u}^{n}\psi|^{2}du\] \[\quad+A\sum_{j+k+m\leq 0}\int_{u=l_{0}+R,r^{*}\leq-R}(-r^{*})^{2+2j} (l+1)^{2k}|\partial_{v}^{1+j}\partial_{t}^{m}\psi|^{2}dv\] \[\quad+A\sum_{j+m\leq 2n+2}\int_{\Sigma_{l_{0}}}-dt(J^{\alpha_{0}}[ \partial_{t}^{j}\Omega^{\alpha}\phi])+\int_{v\leq l_{0}+R,r^{*}\geq R}\chi(l+ 1)^{2}rV|\psi|^{2}dudv\] \[\quad+\int_{v\leq l_{0}+R}2\chi r^{2+2n}\mathbb{R}\left(\partial_ {u}^{n+1}\ddot{\psi}(-1)^{n}\partial_{r}^{n}V\psi\right)dudv\] \[\leq A\sum_{j+k+m\leq 0}\int_{v=l_{0}+R,r^{*}\geq R}r^{2+2j}(l+1) ^{2k}|\partial_{v}^{1+j}\partial_{t}^{m}\psi|^{2}du\] \[\quad+A\sum_{j+k+m\leq 0}\int_{u=l_{0}+R,r^{*}\leq-R}(-r^{*})^{2+2j} (l+1)^{2k}|\partial_{v}^{1+j}\partial_{t}^{m}\psi|^{2}dv\] \[\quad+A\sum_{j+m\leq 2n+2}\int_{\Sigma_{l_{0}}}-dt(J^{\alpha_{0}}[ \partial_{t}^{j}\Omega^{\alpha}\phi])\] \[\quad+\int_{v\leq l_{0}+R}2\chi r^{2+2n}\mathbb{R}\left(\partial_ {u}^{n+1}\ddot{\psi}(-1)^{n}\partial_{r}^{n}V\psi\right)dudv\]
In order to bound the final term in (8.64), we first note that the usual method of separating does not work:
\[\int_{v=t+R}\chi r^{2+2n}\Re\left(\partial_{u}^{n+1}\bar{\psi}(-1)^{n}\partial_{r^{*}}^{n}V\psi\right)du\leq A\int_{v=t+R}\chi r^{2n+1}|\partial_{u}^{n+1}\psi|^{2}+r(l+1)^{2}V|\psi|^{2}du. \tag{8.65}\]
Unfortunately, we have no way to bound \(rV|\psi|^{2}\). If we consider lower order terms in \(r\), we can use Hardy's inequality.
\[\int_{v=t+R}V|\psi|^{2}du\leq A\int_{v=t+R}(l+1)^{2}|\partial_{u}\psi|^{2}du+A\int_{v=t+R,|r^{*}|\leq R}V|\psi|^{2}du. \tag{8.66}\]
Thus, the only term we need to be concerned about in (8.64) is the leading order in \(r\) behaviour of the final term. This behaves as follows:
\[\int_{v=t+R}2l(l+1)\chi r^{n}\mathbb{R}\left(\partial_{u}^{n+1} \ddot{\psi}\psi\right)du =\int_{v=t+R}2l(l+1)Re\left(\partial_{u}\ddot{\psi}\sum_{j=0}^{n} \binom{n}{j}\partial_{r^{*}}^{n-j}(-1)^{j}(\chi r^{n})\partial_{u}^{j}\psi \right)du \tag{8.67}\] \[\leq A\int_{v=t+R}l(l+1)r|\partial_{u}\psi|^{2}+l(l+1)\sum_{j=1}^{n -1}r^{2j-1}|\partial_{u}^{j}\psi|^{2}du\] \[\quad+\int_{v=t+R}l(l+1)\partial_{r^{*}}^{n}(\chi r^{n})\partial_{ \alpha}\left(|\psi|^{2}\right)du\] \[\leq A\int_{v=t+R}l(l+1)r|\partial_{u}\psi|^{2}+l(l+1)\sum_{j=1}^{n -1}r^{2j-1}|\partial_{u}^{j}\psi|^{2}du\] \[\quad+\int_{v=t+R}l(l+1)\partial_{r^{*}}^{n+1}(\chi r^{n})|\psi|^{2 }du-l(l+1)n!(-1)^{n}|\psi|^{2}_{L^{*}}.\]
As \(|\partial_{r^{*}}^{n+1}(\chi r^{n})|\leq Ar^{-2}\), we can use (8.66) to bound this. Combining (8.64), (8.66) and (8.67), we obtain
\[\int_{v\leq t_{0}+R}\chi r^{1+2n}\left(|\partial_{u}^{1+n}\psi|^{2}+V|\partial_{u}^{n}\psi|^{2}\right)dudv \leq A\sum_{j+k+m\leq n}\int_{v=t_{0}+R,r^{*}\geq R}r^{2+2j}(l+1)^{2k}|\partial_{u}^{1+j}\partial_{t}^{m}\psi|^{2}du \tag{8.68}\] \[+A\sum_{j+k+m\leq n}\int_{u=t_{0}+R,r^{*}\leq-R}(-r^{*})^{2+2j}(l+1)^{2k}|\partial_{v}^{1+j}\partial_{t}^{m}\psi|^{2}dv\] \[+A\sum_{j+|\alpha|\leq 2n+2}\int_{\Sigma_{t_{0}}}-dt(J^{\partial_{t}}[\partial_{t}^{j}\Omega^{\alpha}\phi]),\]
as required.
The \((-r^{*})^{p}\) section is made much easier by the exponential behaviour of the potential. We take \(p=2+2n,k=n+1\) in (8.59):
\[\int_{u\leq_{0}+R}\chi(-r^{*})^{1+2n}\left(|\partial_{v}^{n+1} \psi|^{2}+(-r^{*})V|\partial_{v}^{n}\psi|^{2}\right)dudv \leq A\int_{u=t_{0}+R}\chi(-r^{*})^{2+2n}|\partial_{v}^{n+1}\psi|^ {2}dv \tag{8.69}\] \[+A\int_{u\leq_{0}+R,-r^{*}\leq 2(2+2n)/\kappa}\chi(-r^{*})^{2+2 n}V|\partial_{v}^{n}\psi|^{2}dudv\] \[+A\int_{R\leq_{r^{*}}+r^{*}\leq R+M,d\leq 4\epsilon_{0}}| \partial_{v}^{n+1}\psi|^{2}+V|\partial_{v}^{n}\psi|^{2}d\kappa^{*}dt\] \[+A\sum_{j=0}^{n-1}\int_{u\leq_{0}+R}\chi(-r^{*})^{3+2n}\kappa^{2 n-2j}V|\partial_{v}^{j}\psi|^{2}dudv\] \[\leq A\int_{u=t_{0}+R}\chi(-r^{*})^{2+2n}|\partial_{v}^{n}\psi|^ {2}dv\] \[+A\int_{R\leq_{r^{*}}+r^{*}\leq\max\{R+M,(2+2n)/\kappa\},t\leq 0}| \partial_{v}^{n+1}\psi|^{2}+V|\partial_{v}^{n}\psi|^{2}d\kappa^{*}dt\] \[+A\frac{(l+1)^{2}}{M^{2}}\sum_{j=0}^{n-1}\int_{u\leq_{0}+R}\chi( -r^{*})^{1+2j}\kappa^{2n-2j}V|\partial_{v}^{j}\psi|^{2}dudv\] \[\leq A\sum_{j+|\alpha|+m\leq n}\int_{u=t_{0}+R,r^{*}\geq R}r^{2+2 j}|\partial_{u}^{1+j}\partial_{t}^{m}\Omega^{\alpha}\psi|^{2}du\] \[+A\sum_{j+|\alpha|+m\leq n}\int_{u=t_{0}+R,r^{*}\leq-R}(-r^{*})^{ 2+2j}|\partial_{v}^{1+j}\partial_{t}^{m}\Omega^{\alpha}\psi|^{2}dv\] \[+A\sum_{j+|\alpha|\leq 2n+2}\int_{\Sigma_{0}}-dt(J^{0}[\partial_{t}^{j }\Omega^{\alpha}\phi]),\]
as required. In the final inequality, we have used the induction hypothesis.
By repeating the above argument, we can also show that
\[\int_{u\leq t_{0}+R}\chi(-r^{*})^{2n}\left(|\partial_{v}^{n+1}\psi|^{2}+(-r^{*})V|\partial_{v}^{n}\psi|^{2}\right)dudv+\int_{v\leq t_{0}+R}\chi r^{2n}\left(|\partial_{u}^{1+n}\psi|^{2}+V|\partial_{u}^{n}\psi|^{2}\right)dudv \tag{8.70}\] \[\leq A\sum_{j+|\alpha|+m\leq n}\int_{v=t_{0}+R,r^{*}\geq R}r^{1+2j}|\partial_{u}^{1+j}\partial_{t}^{m}\Omega^{\alpha}\psi|^{2}du\] \[+A\sum_{j+|\alpha|+m\leq n}\int_{u=t_{0}+R,r^{*}\leq-R}(-r^{*})^{1+2j}|\partial_{v}^{1+j}\partial_{t}^{m}\Omega^{\alpha}\psi|^{2}dv\] \[+A\sum_{j+|\alpha|\leq 2n+1}\int_{\Sigma_{t_{0}}}-dt(J^{\partial_{t}}[\partial_{t}^{j}\Omega^{\alpha}\phi]).\]
We now look to prove the final part. Assuming the result is true for \(n\), we apply (8.55) to \(\partial_{t}\psi\). We then integrate twice with respect to \(t_{0}\) to obtain
\[\int_{t_{2n+3}=-\infty}^{t_{2n+3}}\int_{t_{2n+2}=-\infty}^{t_{2n+ 3}}...\int_{t_{1}=-\infty}^{t_{2}}\int_{t=-\infty}^{t_{1}}\left(\int_{\Sigma_{ 0}}-dt(J^{0}[\partial_{t}^{n+1}\phi])\right)dtdt_{1}dt_{2}..dt_{2n+3} \tag{8.71}\] \[\leq A\sum_{j+k+m\leq n}\int_{-\infty}^{t_{0}}\int_{-\infty}^{t_{2n+ 3}}\int_{u=t+R,r^{*}\geq R}r^{2+2k}(l+1)^{2k}|\partial_{u}^{1+k}\partial_{t}^{m+ 1}\psi|^{2}dudt_{2n+2}dt_{2n+3}\] \[+A\sum_{j+k+m\leq n}\int_{-\infty}^{t_{0}}\int_{-\infty}^{t_{2n+3} }\int_{u=t+R,r^{*}\leq-R}(-r^{*})^{2+2k}(l+1)^{2k}|\partial_{v}^{1+k}\partial_ {t}^{m+1}|^{2}dvdt_{2n+2}dt_{2n+3}\] \[+A\sum_{j+|\alpha|\leq 2n+2}\int_{-\infty}^{t_{0}}\int_{-\infty}^{t_{2n+ 3}}\left(\int_{\Sigma_{t}}-dt(J^{0}[\partial_{t}^{j+1}\Omega^{\alpha}\phi]) \right)dt_{2n+2}dt_{2n+3}.\]
The final term here can be immediately bounded using Proposition 8.2. To bound the earlier terms, we note that \(\partial_{u}+\partial_{v}=\partial_{t}\), and we can use (7.1.2) to remove any mixed \(u,v\) derivatives.
\[\sum_{j+k+m\leq n}\int_{-\infty}^{t_{0}}\int_{-\infty}^{t_{2n+3}} \int_{v=t+R,r^{\prime}\geq R}r^{2+2j}(l+1)^{2k}|\partial_{u}^{1+j} \partial_{t}^{m}\psi|^{2}dudt_{2n+3} \tag{8.72}\] \[\leq A\sum_{j+k+m\leq n}\int_{-\infty}^{t_{0}}\int_{-\infty}^{t_{ 2n+3}}\int_{v=t+R,r^{\prime}\geq R}r^{2+2j}(l+1)^{2k}|\partial_{u}^{2+j} \partial_{t}^{m}\psi|^{2}dudt_{2n+2}dt_{2n+3}\] \[\qquad+A\sum_{j+k+m\leq n}\int_{-\infty}^{t_{0}}\int_{-\infty}^{t_ {2n+3}}\int_{v=t+R,r^{\prime}\geq R}r^{2+2j}(l+1)^{2k}|\partial_{u}^{2} \partial_{t}^{m}(V\psi)|^{2}dudt_{2n+2}dt_{2n+3}\] \[\leq A\sum_{j+k+m\leq n}\int_{-\infty}^{t_{0}}\int_{-\infty}^{t_{ 2n+3}}\int_{v=t+R,r^{\prime}\geq R}r^{2+2j}(l+1)^{2k}|\partial_{u}^{2+j} \partial_{t}^{m}\psi|^{2}dudt_{2n+2}dt_{2n+3}\] \[\qquad+A\sum_{j+k+m\leq n}\int_{-\infty}^{t_{0}}\int_{-\infty}^{t_ {2n+3}}\int_{v=t+R,r^{\prime}\geq R}r^{2j}(l+1)^{2k+2}V|\partial_{u}^{2}\partial _{t}^{m}\psi|^{2}dudt_{2n+2}dt_{2n+3}\] \[\leq A\sum_{j+k+m\leq n}\int_{-\infty}^{t_{0}}\int_{v=t+R,r^{ \prime}\geq R}r^{1+2j}(l+1)^{2k}|\partial_{u}^{1+j}\partial_{t}^{m}\psi|^{2} dudt_{2n+3}\] \[\qquad+A\sum_{j+k+m\leq n+1}\int_{-\infty}^{t_{0}}\int_{u=t+R,r^{ \prime}\leq R}(-r^{*})^{1+2j}(l+1)^{2k}|\partial_{v}^{1+j}\partial_{t}^{m} \psi|^{2}dudt_{2n+3}\] \[\qquad+A\sum_{j+k+m\leq n+1}\int_{v=t+R,r^{\prime}\geq R}r^{2+2j} (l+1)^{2k}|\partial_{v}^{1+j}\partial_{t}^{m}\psi|^{2}ddv\] \[\qquad+A\sum_{j+k+m\leq n+1}\int_{v=t+R,r^{\prime}\leq R}(-r^{*} )^{2+2j}(l+1)^{2k}|\partial_{v}^{1+j}\partial_{t}^{m}\psi|^{2}dv\] \[\qquad+A\sum_{j+k+m\leq n+1}\int_{\Sigma_{0}}-dt(J^{\partial_{t} }[\partial_{t}^{j}\Omega^{\sigma}\phi]),\]
as required. An identical argument follows for the \(-r^{*}\geq R\) region.
**Theorem 8.7** (Boundedness of the \(u\) and \(v\) Weighted Energy): _Let \(\psi_{+}\) be a Schwartz function on the cylinder. Let \(\phi\) be a solution to (1.2) on a sub-extremal Reissner-Nordstrom background \(\mathcal{M}_{RN}\). Further, let \(\phi\) vanish on \(\mathcal{H}^{+}\) and have future radiation field equal to \(\psi_{+}\). Then there exists a constant \(A_{n}=A_{n}(M,q,n)\) (which also depends on the choice of origin of \(u,v\)) such that_
\[\sum_{k=0}^{2}\sum_{j+m+2|\leq 2n}\int_{\mathcal{H}^{-}\cap\{u \leq 0\}}(M^{2(j+1)-k}+u^{2(j+1)-k})|\partial_{u}^{i+m+k+1}\Omega^{\alpha}\psi_{ \mathcal{H}^{-}}|^{2}du \tag{8.73}\] \[+\sum_{k=0}^{2}\sum_{1\leq j+|\alpha|,2j+2|\alpha|+m\leq 2n+2}\int_{ \mathcal{I}^{-}\cap\{v\leq 0\}}(M^{2j-k}+v^{2j-k})|\partial_{v}^{i+k+m}\Omega^{ \alpha}\psi_{-}|^{2}dv\] \[\leq A\sum_{k=0}^{2}\sum_{1\leq j+|\alpha|,2j+2|\alpha|+m\leq 2n+2} \int_{\mathcal{I}^{-}}(M^{2j-k}+u^{2j-k})|\partial_{u}^{i+k+m}\Omega^{\alpha} \psi_{+}|^{2}du\]
Proof.: This result again closely follows that of [8]. It is an easy combination of Propositions 8.5 and 8.6, applied to \(T^{m}\Omega^{\alpha}\), for \(\alpha\leq n-j\) and \(m\leq 2n-2k-2\alpha\). All that remains is to note
\[\int_{t_{2n+1}=-\infty}^{t_{0}}\int_{t_{2n}=-\infty}^{t_{2n+1}}...\int_{t_{1}=-\infty}^{t_{2}}\int_{t=-\infty}^{t_{1}}\left(\int_{\Sigma_{t}}-dt(J^{\partial_{t}}[\partial_{t}^{\sigma}\phi])\right)dtdt_{1}dt_{2}...dt_{2n+1} \tag{8.74}\] \[=\int_{t_{2n+1}=-\infty}^{t_{0}}\int_{t_{2n}=-\infty}^{t_{2n+1}}...\int_{t_{1}=-\infty}^{t_{2}}\int_{t=-\infty}^{t_{1}}\left(\int_{-\infty}^{t+R}|\partial_{u}\psi_{\mathcal{H}^{-}}|^{2}\sin\theta d\theta d\varphi du+\int_{-\infty}^{t+R}|\partial_{v}\psi_{-}|^{2}\sin\theta d\theta d\varphi dv\right)dtdt_{1}dt_{2}...dt_{2n+1}\] \[=\frac{1}{(2n+2)!}\left(\int_{-\infty}^{t_{0}+R}(u-t_{0}-R)^{2n+2}|\partial_{u}\psi_{\mathcal{H}^{-}}|^{2}\sin\theta d\theta d\varphi du+\int_{-\infty}^{t_{0}+R}(v-t_{0}-R)^{2n+2}|\partial_{v}\psi_{-}|^{2}\sin\theta d\theta d\varphi dv\right),\]
by repeated integration by parts.
**Theorem 8.8** (Arbitrary polynomial decay of I.E. Terms): _Let \(\psi_{+}\) be a Schwartz function on the cylinder, with \(\hat{\psi}_{+}\) supported on \(\omega\geq 0\). Then for each \(n\), there exists an \(A_{n}(M,q,\psi_{+})\) such that_
\[I.E.[\psi_{+},v_{c},u_{1},u_{0}]\leq A_{n}\left((u_{0}-u_{1})^{-n}+(u_{0}-v_{c}) ^{-n}\right). \tag{8.75}\]
_Here \(I.E.\) is as defined in Theorem 7.1._
Proof.: This proof is identical to that of Theorem 8.1.
## 9 Proof of the Main Result
In this section we will combine Theorems 6.1, 7.1, and 8.8 to obtain Theorem 1 as stated in the introduction.
Proof of Theorem 1.: This result follows easily from Theorem 7.1 and Theorem 8.8. In the sub-extremal case, one can choose \(u_{1}\) such that \(e^{-\kappa u_{1}}=u_{0}^{-n}\) to obtain
\[\left|\int_{\omega=-\infty}^{\infty}\int_{S^{2}}|\omega||\hat{ \psi}_{+}|^{2}\sin\theta d\theta d\varphi d\omega-\int_{-\infty}^{\infty}| \omega|\coth\left(\frac{\pi}{\kappa}|\omega|\right)\sum_{l,m}|\tilde{T}_{\omega,l,m}|^{2}|\hat{\psi}_{+,l,m}|^{2}d\omega-\int_{-\infty}^{\infty}|\omega|\sum_{ l,m}|\tilde{R}_{\omega,l,m}|^{2}|\hat{\psi}_{+,l,m}|^{2}d\omega\right|\] \[\leq A_{n}u_{0}^{-n}+u_{0}^{2n}I.E.[\psi_{+},v_{c},\frac{n}{ \kappa}\ln(u_{0}),u_{0}]\] \[\leq A_{n}u_{0}^{-n}. \tag{9.1}\]
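To spell out this choice (an elementary balancing of the two error contributions, under the assumption, implicit above, that \(u_{0}-v_{c}\) grows comparably to \(u_{0}\)): with \(u_{1}=\frac{n}{\kappa}\log(u_{0})\),
\[e^{-\kappa u_{1}}=u_{0}^{-n},\qquad u_{0}-u_{1}=u_{0}-\tfrac{n}{\kappa}\log(u_{0})\geq\tfrac{1}{2}u_{0}\quad\text{for }u_{0}\text{ sufficiently large},\]
so Theorem 8.8 gives \(u_{0}^{2n}\,I.E.[\psi_{+},v_{c},u_{1},u_{0}]\leq A_{m}u_{0}^{2n}\left((u_{0}-u_{1})^{-m}+(u_{0}-v_{c})^{-m}\right)\leq Au_{0}^{2n-m}\), which is \(O(u_{0}^{-n})\) on taking \(m\geq 3n\).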
Note also that
\[\int_{\omega=-\infty}^{\infty}\omega|\hat{\psi}_{+}|^{2}d\omega=\frac{i}{2}\int_{\mathcal{I}^{+}}\bar{\psi}\nabla\psi-\psi\nabla\bar{\psi}du=\frac{i}{2}\int_{\mathcal{I}^{-}}\bar{\psi}\nabla\psi-\psi\nabla\bar{\psi}dv=\int_{\omega=-\infty}^{\infty}\omega|\hat{\psi}_{-}|^{2}d\omega, \tag{9.2}\]
by (6.5) in Theorem 6.1.
Thus, we have
\[\left|\int_{\omega=-\infty}^{\infty}\int_{S^{2}}|\omega||\hat{ \psi}|^{2}\sin\theta d\theta d\varphi d\omega-\int_{-\infty}^{\infty}|\omega| \coth\left(\frac{\pi}{\kappa}|\omega|\right)\sum_{l,m}|\tilde{T}_{\omega,l,m} |^{2}|\hat{\psi}_{+,l,m}|^{2}d\omega-\int_{-\infty}^{\infty}|\omega|\sum_{l,m} |\tilde{R}_{\omega,l,m}|^{2}|\hat{\psi}_{+,l,m}|^{2}d\omega\right| \tag{9.3}\] \[=\left|\sum_{l,m}\left(\int_{\omega=-\infty}^{\infty}(|\omega|- \omega)|\hat{\psi}_{+,l,m}|^{2}d\omega-\int_{-\infty}^{\infty}|\omega|\coth \left(\frac{\pi}{\kappa}|\omega|\right)|\tilde{T}_{\omega,l,m}|^{2}|\hat{\psi} _{+,l,m}|^{2}d\omega-\int_{-\infty}^{\infty}\left(|\omega||\tilde{R}_{\omega, l,m}|^{2}-\omega\right)|\hat{\psi}_{+,l,m}|^{2}d\omega\right)\right|\] \[=\left|\sum_{l,m}\left(\int_{-\infty}^{\infty}(\omega-|\omega|| \tilde{R}_{\omega,l,m}|^{2}-|\omega|\coth\left(\frac{\pi}{\kappa}|\omega| \right)|\tilde{T}_{\omega,l,m}|^{2}\right)|\hat{\psi}_{+,l,m}|^{2}d\omega+2 \int_{\omega=-\infty}^{0}|\omega||\hat{\psi}_{+,l,m}|^{2}d\omega\right)\right|.\]
Finally, we note that \(|\tilde{R}_{\omega,l,m}|^{2}=1-|\tilde{T}_{\omega,l,m}|^{2}\) (Proposition 5.1.2), which allows us to simplify:
\[\int_{-\infty}^{\infty}\left(\omega-|\omega|\tilde{R}_{\omega,l,m }|^{2}-|\omega|\coth\left(\frac{\pi}{\kappa}|\omega|\right)|\tilde{T}_{\omega,l,m}|^{2}\right)|\hat{\psi}_{+,l,m}|^{2}d\omega \tag{9.4}\] \[=\int_{-\infty}^{\infty}\left(\omega-|\omega|+|\omega|\left(1- \coth\left(\frac{\pi}{\kappa}|\omega|\right)\right)|\tilde{T}_{\omega,l,m}|^{2} \right)|\hat{\psi}_{+,l,m}|^{2}d\omega\] \[=2\int_{-\infty}^{\infty}\left(\frac{\omega-|\omega|}{2}+|\omega| \left(e^{\frac{\pi\omega}{\kappa}}-1\right)^{-1}|\tilde{T}_{\omega,l,m}|^{2} \right)|\hat{\psi}_{+,l,m}|^{2}d\omega\] \[=2\int_{-\infty}^{\infty}|\omega|\left(e^{\frac{\pi\omega}{\kappa} }-1\right)^{-1}|\tilde{T}_{\omega,l,m}|^{2}\left|\hat{\psi}_{+,l,m}\right|^{2}d\omega,\]
which is the form of \(\psi_{\mathcal{H}^{-}}\) in Theorem 5.1.1. We have used that \(\hat{\psi}_{+}\) is only supported on \(\omega\geq 0\). The calculation follows identically for the extremal case, setting \(u_{1}=u_{0}-\sqrt{Mu_{0}}\).
As discussed in the introduction, Theorem 1 is the calculation of the radiation of frequency \(\hat{\psi}_{+}\) given off by the RNOS model of a collapsing black hole; see [2] for a full discussion of this. We will, however, comment that the quantity of particles emitted by extremal RNOS models is integrable. This means that the total number of particles given off by the forming extremal black hole is finite, and thus the black hole itself may never evaporate.
|
2305.04498 | Leveraging Deep Learning and Digital Twins to Improve Energy Performance
of Buildings | Digital transformation in buildings accumulates massive operational data,
which calls for smart solutions to utilize these data to improve energy
performance. This study has proposed a solution, namely Deep Energy Twin, for
integrating deep learning and digital twins to better understand building
energy use and identify the potential for improving energy efficiency. Ontology
was adopted to create parametric digital twins to provide consistency of data
format across different systems in a building. Based on created digital twins
and collected data, deep learning methods were used for performing data
analytics to identify patterns and provide insights for energy optimization. As
a demonstration, a case study was conducted in a public historic building in
Norrk\"oping, Sweden, to compare the performance of state-of-the-art deep
learning architectures in building energy forecasting. | Zhongjun Ni, Chi Zhang, Magnus Karlsson, Shaofang Gong | 2023-05-08T06:48:33Z | http://arxiv.org/abs/2305.04498v3 | # Leveraging Deep Learning and Digital Twins to Improve Energy Performance of Buildings
###### Abstract
Digital transformation in buildings accumulates massive operational data, which calls for smart solutions to utilize these data to improve energy performance. This study has proposed a solution, namely Deep Energy Twin, for integrating deep learning and digital twins to better understand building energy use and identify the potential for improving energy efficiency. Ontology was adopted to create parametric digital twins to provide consistency of data format across different systems in a building. Based on created digital twins and collected data, deep learning methods were used for performing data analytics to identify patterns and provide insights for energy optimization. As a demonstration, a case study was conducted in a public historic building in Norrkoping, Sweden, to compare the performance of state-of-the-art deep learning architectures in building energy forecasting.
deep learning, digital twin, building energy forecasting
## I Introduction
Digital transformation in buildings has brought considerable opportunities to optimize their energy performance by integrating various advanced information and communication technologies [1]. Among them, digital twin technology is today a powerful tool for building management. With continuously collected data [2], a digital twin reflects the latest status of its physical counterpart in nearly real-time [3]. In addition, more advanced data analysis applications, such as energy forecasting and predictive controls, can be developed based on the virtual model and data from meters, sensors, actuators, and control systems [4]. Deep learning has shown great potential in data analytics [5, 6]. Based on large amounts of the collected data, deep learning methods can be used to develop models for identifying patterns in operational data, such as making predictions about energy use and revealing the potential for energy optimization [7].
Previous studies have demonstrated the benefits of digital twins for the built environment, such as indoor or ambient climate monitoring [8, 9] and anomaly detection for building assets [10]. However, these studies lack an emphasis on the consistency of data representation of virtual models. Data collected from buildings are usually produced by various systems and methods [11]. Even the most advanced building management system generates a bluster of data and information flows that differ among buildings, vendors, and locations [12]. Lacking a consistent data format makes it challenging to extend previous solutions to other buildings and limits the deployment of energy applications. Furthermore, the integration of deep learning and digital twins is still in its early stage. State-of-the-art deep learning architectures, e.g., temporal fusion transformer (TFT) [13], have not been exploited in the built environment for improving energy performance.
This study aims to integrate deep learning and digital twins to better understand building energy usage. The main contributions of this work are:
* A solution, namely Deep Energy Twin, was proposed for analyzing building energy use and identifying potential for energy optimization. Ontology was adopted to create parametric digital twins to provide consistency of data format across different systems in a building. Deep learning was used for data analytics.
* A comprehensive case study was conducted to illustrate the capacity of five deep learning methods, including long short-term memory (LSTM), temporal convolutional network (TCN), Transformer, N-HiTS, and TFT, to predict building energy consumption and measure uncertainties.
## II Related Work
This section first introduces the application of digital twins in the built environment. Then, deep learning methods for time series forecasting are reviewed.
### _Digital Twins in the Built Environment_
Several studies have reported developing digital twin applications for the built environment, such as indoor or ambient climate monitoring [8, 9], anomaly detection for building assets [10], and heritage preservation [14, 15]. However, most of these studies lack an emphasis on the consistency of data representation of virtual models. They typically employ some customized data format, which makes it difficult to extend their solutions to other buildings, limits interoperability between buildings, and limits the deployment of energy applications. Little work [15] has looked into using a consistent metadata structure to represent buildings and subsystems. Nevertheless, integrating energy optimization solutions in buildings requires expertise in multiple domains [12]. For depicting such a complex system, it is preferable to use an ontology to ensure accurate alignment across several domains, such as actuators, sensors, management workflows, and web resources [16].
An ontology is a formal statement of a conceptualization that includes the objects, concepts, and other entities presumed to exist in a given area, together with the relationships between them [17]. Several studies have attempted to tackle the challenge of creating a metadata schema across a broad range of buildings. Balaji et al. [12] proposed Brick, a standardized metadata schema for representing buildings. The schema defines a concrete ontology for sensors, subsystems, and their relationships, enabling the development of portable applications. RealEstateCore [18] is another ontology for the real estate business to speed up building modeling. Both Brick and RealEstateCore allow data output in the Digital Twin Definition Language [19] format, which facilitates deploying applications in Microsoft Azure Cloud. Using these ontologies to create virtual models is advantageous for gathering and documenting all necessary information for further knowledge management and data analytics.
### _Deep Learning for Time Series Forecasting_
Deep learning methods have emerged in recent years due to their enhanced abilities in addressing massive data, feature extraction, and modeling nonlinear processes [20]. Three fundamental deep learning methods for time series forecasting are recurrent neural networks (RNNs), convolutional neural networks (CNNs), and attention mechanism-based networks.
In previous studies, RNN and its variants [21] have been more frequently applied to building energy forecasting [22, 23]. A few studies [24, 25] have also applied TCN [26] and Transformer [27]. However, recent deep learning methods, such as TFT [13] and N-HiTS [28], were rarely used. Therefore, a practical comparison of which method is more effective in building energy forecasting is lacking. In addition, previous studies mostly made point forecasting, and little work was carried out on making probabilistic forecasting.
## III Methodology
First, the process of creating a parametric digital twin of building energy systems is presented. Then, the method for developing predictive models for a representative building energy application is described.
### _Creation of Parametric Digital Twins_
As depicted in Fig. 1, creating a parametric digital twin of a building involves two aspects. One is to model essential physical entities and their relationships. The other is to provide the necessary interfaces to continuously update the status of entities from various data sources and supply data access for subsequent tasks, e.g., data analytics. As a reference implementation, the Brick ontology [12] was adopted for creating the parametric digital twin model. The ontology provides a consistent data representation that unifies heterogeneous energy system data into a common format. In Fig. 1, square boxes represent classes, which abstract physical entities, such as _Location_, _Equipment_, _Resource_, and _Point_. Each round box represents a specific entity, which is an instance of a particular class. A class can have multiple instances of entities.
Locations refer to different spaces in buildings, such as rooms and floors. Resources are physical resources or stuff that are controlled or measured by points. Physical or virtual entities that generate time series data are called points. Typical physical points include sensors, setpoints, and equipment status. Virtual points, on the other hand, are generated by a mechanism that may operate on other time series data, such as an average floor humidity sensor. Each data point can have several relationships that connect it to other classes, such as its location or the equipment it belongs to.
Fig. 1: An illustration of the parametric digital twin model. A parametric digital twin model contains essential information about physical entities and their relationships. The model also provides necessary read or write application programming interfaces (APIs) for other modules, such as updating model status from data sources and providing data access for data analytics.
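As a minimal sketch of how such a model can be expressed in code, the snippet below builds a tiny Brick-based twin with the `rdflib` Python package. The class and relationship names (`brick:Room`, `brick:Electrical_Meter`, `brick:Electric_Power_Sensor`, `brick:isPointOf`, `brick:hasLocation`) come from the Brick schema, while the building namespace and the specific entities are hypothetical and only for illustration; this is not the exact model used in the case study.

```python
from rdflib import Graph, Namespace, RDF

BRICK = Namespace("https://brickschema.org/schema/Brick#")
BLDG = Namespace("http://example.org/building#")  # hypothetical namespace for one building

g = Graph()
g.bind("brick", BRICK)
g.bind("bldg", BLDG)

# Entities: a room, an electrical meter, and a power sensor (all illustrative).
g.add((BLDG.Room_101, RDF.type, BRICK.Room))
g.add((BLDG.Main_Meter, RDF.type, BRICK.Electrical_Meter))
g.add((BLDG.Power_Sensor_1, RDF.type, BRICK.Electric_Power_Sensor))

# Relationships: the sensor is a point of the meter; the meter is located in the room.
g.add((BLDG.Power_Sensor_1, BRICK.isPointOf, BLDG.Main_Meter))
g.add((BLDG.Main_Meter, BRICK.hasLocation, BLDG.Room_101))

print(g.serialize(format="turtle"))
```

Queries over such a graph can then locate all points attached to a given piece of equipment, which is one way to realize the read APIs sketched in Fig. 1.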
### _Building Energy Forecasting_
As a demonstration of data analytics, this subsection introduces one-step ahead building energy forecasting.
#### Iii-B1 Problem Formulation
A specific energy use, i.e., the target variable, is denoted as \(y\) and \(y\in\mathbb{R}_{+}\). Predictor variables that affect energy use are denoted as \(\mathbf{x}\) and \(\mathbf{x}\in\mathbb{R}^{k}\). All target and predictor variables are assumed to be observed at constant intervals over time and grouped chronologically. At time \(t\), the observed value of the target variable is denoted as \(y_{t}\). Similarly, observed values of predictor variables are denoted as \(\mathbf{x}_{t}=[x_{1,t},x_{2,t},...,x_{k,t}]\).
Then, a point forecasting model takes the form
\[\hat{y}_{t+1}=f_{\theta}(y_{t-w+1:t},\mathbf{x}_{t-w+1:t}), \tag{1}\]
where \(\hat{y}_{t+1}\) is the model forecast, \(y_{t-w+1:t}=[y_{t-w+1},y_{t-w+2},...,y_{t}]\) and \(\mathbf{x}_{t-w+1:t}=\{\mathbf{x}_{t-w+1},\mathbf{x}_{t-w+2},...,\mathbf{x}_{t}\}\) are observations of the target and predictor variables over a lookback window \(w\), and \(f_{\theta}(.)\) is the prediction function learned by the model.
Probabilistic forecasting models are developed to generate the quantiles of interest directly through quantile regression. Given a set of quantiles \(\mathcal{Q}\subset(0,1)\), a quantile forecasting model takes the form
\[\hat{y}_{t+1}^{(p)}=g_{\theta}(y_{t-w+1:t},\mathbf{x}_{t-w+1:t}), \tag{2}\]
where \(p\in\mathcal{Q}\), \(\hat{y}_{t+1}^{(p)}\) is the model forecast for the \(p\)th quantile of the target variable, \(y_{t-w+1:t}\) and \(\mathbf{x}_{t-w+1:t}\) have the same definition as in the point forecasting model, and \(g_{\theta}(.)\) is the prediction function learned by the model.
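To make the formulation concrete, the following sketch shows one way to slice a time series into supervised samples for one-step-ahead forecasting; the function name and the toy data are illustrative and not part of the original pipeline.

```python
import numpy as np

def make_windows(y, X, w):
    """Build one-step-ahead samples: each input stacks the target and the k predictors
    over a lookback window of length w; the label is the target value at the next step."""
    inputs, targets = [], []
    for t in range(w, len(y)):
        inputs.append(np.column_stack([y[t - w:t], X[t - w:t]]))  # shape (w, k + 1)
        targets.append(y[t])
    return np.array(inputs), np.array(targets)

# Toy usage with random data (illustrative only).
rng = np.random.default_rng(0)
y = rng.random(100)          # target variable, e.g. hourly electricity use
X = rng.random((100, 6))     # k = 6 predictor variables, e.g. weather features
inp, tgt = make_windows(y, X, w=24)
print(inp.shape, tgt.shape)  # (76, 24, 7) (76,)
```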
#### Iii-B2 Deep Learning Methods for Comparison
Five deep learning methods, namely LSTM, TCN, Transformer, N-HiTS, and TFT, were investigated to compare their performance in building energy forecasting.
#### Iii-B3 Loss Function and Evaluation Metrics
Point forecasting models were trained to minimize the total squared error. Probabilistic forecasting models were trained to minimize the total quantile loss. The \(p\)th quantile loss [13] is calculated as
\[\ell(\hat{y},y,p)=(1-p)(\hat{y}-y)_{+}+p(y-\hat{y})_{+}, \tag{3}\]
where \((.)_{+}=max(0,.)\). Then, the training quantile loss \(L_{q}(\theta)\) for a set \(\mathcal{S}=\{(y_{t-w+1:t},\mathbf{x}_{t-w+1:t},y_{t+1})\}_{t=w}^{n+w-1}\) is calculated as
\[L_{q}(\theta)=\sum\limits_{t=w}^{n+w-1}\sum\limits_{i=1}^{|\mathcal{Q}|}\ell \left(\hat{y}_{t+1}^{(p_{i})},y_{t+1},p_{i}\right), \tag{4}\]
where \(p_{i}\in\mathcal{Q}\).
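A direct PyTorch translation of Eqs. 3 and 4 might look as follows; it is a minimal sketch rather than the exact training code, and the tensor shapes are assumptions.

```python
import torch

def quantile_loss(y_hat, y, p):
    """Pinball loss of Eq. 3: (1 - p) * (y_hat - y)_+ + p * (y - y_hat)_+."""
    return (1 - p) * torch.clamp(y_hat - y, min=0) + p * torch.clamp(y - y_hat, min=0)

def total_quantile_loss(y_hat_q, y, quantiles=(0.1, 0.5, 0.9)):
    """Training loss of Eq. 4: sum of pinball losses over all samples and quantiles.

    y_hat_q: tensor of shape (batch, num_quantiles); y: tensor of shape (batch,).
    """
    losses = [quantile_loss(y_hat_q[:, i], y, p) for i, p in enumerate(quantiles)]
    return torch.stack(losses).sum()
```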
The prediction accuracy of point forecasting models was evaluated by coefficient of variation of the root mean square error (CV-RMSE) and normalized mean bias error (NMBE). They are calculated by Eq. 5 and 6 [29].
\[CV\text{-}RMSE=\frac{\sqrt{\frac{1}{n}\sum\limits_{t=1}^{n}(\hat{y}_{t}-y_{t})^{2}}}{\overline{y}}\times 100, \tag{5}\]
\[NMBE=\frac{\frac{\frac{1}{n}\sum\limits_{t=1}^{n}(\hat{y}_{t}-y_{t})}{\overline {y}}}{\times 100}, \tag{6}\]
where \(n\) denotes the size of the forecast horizon, \(y_{t}\) and \(\hat{y}_{t}\) have the same definitions as in the point forecasting model, and \(\overline{y}\) is the mean actual value of the target variable over the forecast horizon.
The \(\rho\)-risk, which normalizes quantile losses, was used for evaluating the performance of probabilistic forecasting models. \(\rho\)-risk at \(p\)th quantile is calculated by [13]
\[\rho\text{-}risk(p)=\frac{2\times\sum\limits_{t=1}^{n}\ell\left(\hat{y}_{t}^{ (p)},y_{t},p\right)}{\sum\limits_{t=1}^{n}y_{t}}, \tag{7}\]
where \(n\) denotes the size of forecast horizon, \(\hat{y}_{t}^{(p)}\) is the predicted \(p\)th quantile value of a target variable at time \(t\), and \(\ell\left(\hat{y}_{t}^{(p)},y_{t},p\right)\) is the \(p\)th quantile loss calculated by Eq. 3.
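For reference, the three evaluation metrics can be computed with a few lines of NumPy; this is a straightforward reading of Eqs. 5-7 rather than the authors' evaluation script.

```python
import numpy as np

def cv_rmse(y_hat, y):
    """Coefficient of variation of the RMSE in percent (Eq. 5)."""
    return np.sqrt(np.mean((y_hat - y) ** 2)) / np.mean(y) * 100

def nmbe(y_hat, y):
    """Normalized mean bias error in percent (Eq. 6)."""
    return np.mean(y_hat - y) / np.mean(y) * 100

def rho_risk(y_hat_p, y, p):
    """Normalized quantile loss at quantile p (Eq. 7)."""
    pinball = (1 - p) * np.clip(y_hat_p - y, 0, None) + p * np.clip(y - y_hat_p, 0, None)
    return 2 * pinball.sum() / y.sum()
```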
## IV Case Study
To verify the performance of different deep learning methods, a case study was conducted to develop predictive models for energy use of one public historic building. This section describes details of the used dataset and experimental setup.
### _Dataset_
The dataset includes two parts. One is the historical electricity consumption and heating load from the City Museum (see Fig. 2) in Norrkoping, Sweden. The other is the meteorological data from a weather station located \(\sim\)2 km away from the building. The meteorological data include dry-bulb temperature, relative humidity, dew point temperature, precipitation, air pressure, and wind speed. All data range from 01:00 on January 1, 2016 to 00:00 on January 1, 2020, with a time granularity of one hour. Hours appearing in this paper are expressed in 24-hour format and are in local time.
The normal operation of the City Museum is to maintain an appropriate indoor climate for preservation of collections
Fig. 2: The City Museum, Norrköping, Sweden.
and human comfort of staff and visitors. During regular periods, it is open six days a week, from Tuesday to Sunday. The opening time starts at 11:00. The closing time is 17:00 on Tuesdays, Wednesdays, and Fridays, 20:00 on Thursdays, and 16:00 on Saturdays and Sundays. According to the first three years of energy consumption data (see Fig. 3), both electricity and heating have a yearly seasonality. Moreover, there is no long-term trend in either type of energy use.
### _Data Preprocessing_
Data preprocessing seeks to turn raw data into a format that models can readily handle and understand.
#### Iv-B1 Data Cleaning and Dataset Splitting
First, missing values in meteorological data were linearly interpolated. Then, the dataset was partitioned into three subsets: a training set for learning model parameters, a validation set for tuning hyperparameters and avoiding overfitting, and a test set for evaluating model performance. The dataset is divided roughly according to the empirical ratio of 80:10:10: 38 months of data from January 1, 2016 to February 28, 2019 are utilized as the training set, five months of data from March 1, 2019 to July 31, 2019 are used as the validation set, and the remaining five months of data are used as the test set.
#### Iv-B2 Feature Preparation
Four temporal features are extracted from timestamps: two cyclical and two binary variables. The cyclical variables are _hour_ (integer value from 0 to 23) and _weekday_ (integer value from 0 to 6, each value represents a day in a week, starting from Monday). The binary variables include one called _is holiday_ to indicate if a day is a Sweden public holiday and another called _is weekend_ to indicate if a day is a weekend. In addition, one feature called _is open_ with a binary value is added to indicate if the City Museum is open for a given hour.
#### Iv-B3 Data Transformation
A min-max normalization was used to scale target variables and meteorological features to a range of \([0,1]\). The training set was utilized to fit all min-max scalers, which were then used to transform the validation and test sets. A sine-cosine transformation was used to convert cyclical features. Binary features were not transformed.
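The sketch below illustrates these two transformations on toy data; the column names and values are made up for the example and do not come from the case-study dataset.

```python
import numpy as np
import pandas as pd
from sklearn.preprocessing import MinMaxScaler

# Toy hourly data standing in for the real measurements (columns are illustrative).
df = pd.DataFrame({
    "electricity": np.random.rand(48) * 50,        # kWh
    "temperature": np.random.rand(48) * 30 - 10,   # degrees Celsius
    "hour": np.tile(np.arange(24), 2),
})
train, valid = df.iloc[:36], df.iloc[36:]

# Min-max scalers are fitted on the training split only and reused on validation/test.
scaler = MinMaxScaler(feature_range=(0, 1))
train_scaled = scaler.fit_transform(train[["electricity", "temperature"]])
valid_scaled = scaler.transform(valid[["electricity", "temperature"]])

# Sine-cosine encoding of a cyclical feature (hour of day, period 24).
hour_sin = np.sin(2 * np.pi * df["hour"] / 24)
hour_cos = np.cos(2 * np.pi * df["hour"] / 24)
```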
### _Experimental Setup_
Models were developed to forecast electricity consumption and heating load of the City Museum in the next hour. A lookback window size of 24 was used. Three baseline models were created for point forecasting. Two of them are based on the seasonal naive (SN) method [30], namely SN-1 and SN-24, because electricity consumption and heating load are highly seasonal. Each forecast of the SN-1 model is set to be the value observed one hour ago. The SN-24 model seeks to use daily seasonality, and each forecast of a target variable is set to the value observed 24 hours ago. The remaining one is a linear regression (LR) model.
Five deep learning models (LSTM, TCN, Transformer, N-HiTS, and TFT) were trained to make both point and probabilistic forecasts. The predefined set of quantiles is \(\{0.1,0.5,0.9\}\), and we are interested in evaluating \(\rho\)-risk on 0.5th and 0.9th quantiles. Models were built using the Python packages PyTorch (v1.12.0), darts (v0.23.1), and scikit-learn (v1.2.1). All experiments were carried out on a computer equipped with an NVIDIA GeForce GTX 1080 graphics card.
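A compressed sketch of such a setup with the darts package is shown below. It assumes a single scaled target series and omits covariates, hyperparameter tuning, and the other four architectures; class names follow darts v0.23, but exact arguments may differ across versions, so treat this as an outline rather than the study's training code.

```python
import numpy as np
from darts import TimeSeries
from darts.models import NaiveSeasonal, TCNModel
from darts.utils.likelihood_models import QuantileRegression

# Toy series standing in for the scaled hourly electricity load (values illustrative).
target = TimeSeries.from_values(np.random.rand(500).astype(np.float32))
train, test = target[:-24], target[-24:]

# SN-24 baseline: repeat the value observed 24 hours earlier.
sn24 = NaiveSeasonal(K=24)
sn24.fit(train)
baseline = sn24.predict(n=24)

# Probabilistic TCN with a lookback window of 24 and the quantile set {0.1, 0.5, 0.9}.
tcn = TCNModel(
    input_chunk_length=24,
    output_chunk_length=1,
    likelihood=QuantileRegression(quantiles=[0.1, 0.5, 0.9]),
    n_epochs=5,  # kept tiny for the sketch
)
tcn.fit(train)
probabilistic = tcn.predict(n=1, num_samples=200)  # samples approximate the quantiles
```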
## V Results and Discussion
### _Quantitative Analysis_
As shown in Table I, among all models, the TCN model performed best at predicting both types of energy use (CV-RMSE 13.90% on electricity and CV-RMSE 8.16% on heating). The performance of all models except for the SN-24 model on forecasting electricity has met the criterion suggested by the ASHRAE Guideline 14-2014 [29] (30% for CV-RMSE and \(\pm\)10% for NMBE). Furthermore, the performance of the LR model and the five deep learning models indicates higher predictability for heating than for electricity, since all of them achieved a lower CV-RMSE on heating predictions than on electricity predictions. The higher predictability of the heating load is because the building utilizes adaptive heating, which is driven by the difference between indoor and outdoor temperatures.
In contrast to the dominance of the TCN model in point forecast, none of the five deep learning models dominates
Fig. 3: Historical hourly (**a**) electricity consumption and (**b**) heating load of the City Museum from 01:00 on January 1, 2016 to 00:00 on January 1, 2019.
the probabilistic forecast. For predicting the electricity, the TCN model performed best to capture the central tendency as it achieved the lowest \(\rho\)-risk at the 0.5th quantile (\(\rho\)-risk\((0.5)=0.0741\) as in Table II). The LSTM model, on the other hand, performed best to capture the upper end of the distribution of the electricity (\(\rho\)-risk\((0.9)=0.0470\)) and might be useful for predicting extreme values or identifying outliers. For predicting the heating, the TFT model performed best for both predicting median value (\(\rho\)-risk\((0.5)=0.0454\)) and capturing the upper end of the distribution (\(\rho\)-risk\((0.9)=0.0231\)).
The probabilistic forecasts also show that heating is more predictable than electricity. When predicting heating rather than electricity, all models obtained a decreased \(\rho\)-risk at the 0.5th quantile. Meanwhile, the uncertainties in electricity consumption are greater than those in heating load, since these models achieved a higher \(\rho\)-risk at the 0.9th quantile when predicting electricity than heating. Nevertheless, the uncertainty in predicting electricity also indicates that, on the one hand, it is advantageous to improve confidence by optimizing electricity use while still assuring the regular functionality of a building. On the other hand, for more accurate forecasting, additional operating-mode-related features that impact electricity consumption should be included.
### _Qualitative Analysis_
Previous quantitative analysis indicates that heating load is more predictable than electricity consumption. The lower predictability was partly due to changes in the operating mode of the City Museum on some days in November and December 2019. Fig. 4a shows such a change. During the two days, from November 29 to November 30, the hourly energy consumption in the nighttime was even higher than in the daytime of the previous days.
The changes in operating mode degrade the prediction accuracy of models during these days. On November 29 (the first day when the operating mode started to change), the predicted value has a certain lag (see Fig. 4a). As shown in Fig. 4b, the 80% prediction interval (from 0.1th quantile to 0.9th quantile) during the daytime of the two days was relatively higher than during the daytime of the days before the operating mode changed.
The higher predictability of heating load is attributed to strong influencing factors like dry-bulb temperature being involved in making predictions. In addition, the heating load is less affected by the change in operating mode. As shown in Fig. 5a, even on November 29 and 30, the two days when the operating mode changed, the best three models still made good predictions. Similarly, the uncertainty in predictions was greater during the daytime than during the nighttime (see Fig. 5b and
Fig. 4: The actual and predicted hourly electricity consumption from November 27 to November 30, 2019. (**a**) Point forecasts of the best three models and (**b**) probabilistic forecast of the TCN model. The predicted median is P50, and the 80% prediction interval (PI) is from 0.1th to 0.9th quantile.
Fig. 5: The actual and predicted hourly heating load from November 27 to November 30, 2019. (**a**) Point forecasts of the best three models and (**b**) probabilistic forecast of the TFT model.
Fig. 4(b)). A possible explanation for the higher uncertainty during daytime is that there is more heat exchange between the indoor and outdoor environments when more people are entering and exiting the building.
## VI Conclusion
This study has presented a solution for integrating deep learning with digital twins to provide a more comprehensive understanding of building energy systems. Ontology was adopted for creating parametric digital twins of building energy systems to ensure a consistent data representation across various domains. Deep learning methods were applied to analyze the data collected by digital twins to identify patterns and seek the potential for saving energy. The results obtained from a case study in one public historic building in Norrkoping, Sweden, have shown that deep learning methods, such as TCN, LSTM, and TFT, exhibit strong capabilities in capturing tendency and uncertainty in building energy consumption.
The solution provides facility managers with a better insight into building energy use. Thus, facility managers can proactively optimize energy systems to avoid unnecessary energy use. In the long run, this could result in cost savings, increased human comfort, and a more sustainable built environment.
## Acknowledgment
The authors thank Johan Bjornh and his colleagues at Norrevo Fastigheter AB in Norrkoping for providing access to the City Museum and offering the historical energy consumption data. The Swedish Meteorological and Hydrological Institute is acknowledged for providing the weather data.
|
2310.18207 | INA: An Integrative Approach for Enhancing Negotiation Strategies with
Reward-Based Dialogue System | In this paper, we propose a novel negotiation dialogue agent designed for the
online marketplace. Our agent is integrative in nature i.e, it possesses the
capability to negotiate on price as well as other factors, such as the addition
or removal of items from a deal bundle, thereby offering a more flexible and
comprehensive negotiation experience. We create a new dataset called
Integrative Negotiation Dataset (IND) to enable this functionality. For this
dataset creation, we introduce a new semi-automated data creation method, which
combines defining negotiation intents, actions, and intent-action simulation
between users and the agent to generate potential dialogue flows. Finally, the
prompting of GPT-J, a state-of-the-art language model, is done to generate
dialogues for a given intent, with a human-in-the-loop process for post-editing
and refining minor errors to ensure high data quality. We employ a set of novel
rewards, specifically tailored for the negotiation task to train our
Negotiation Agent, termed as the Integrative Negotiation Agent (INA). These
rewards incentivize the chatbot to learn effective negotiation strategies that
can adapt to various contextual requirements and price proposals. By leveraging
the IND, we train our model and conduct experiments to evaluate the
effectiveness of our reward-based dialogue system for negotiation. Our results
demonstrate that the proposed approach and reward system significantly enhance
the agent's negotiation capabilities. The INA successfully engages in
integrative negotiations, displaying the ability to dynamically adjust prices
and negotiate the inclusion or exclusion of items in a bundle deal | Zishan Ahmad, Suman Saurabh, Vaishakh Sreekanth Menon, Asif Ekbal, Roshni Ramnani, Anutosh Maitra | 2023-10-27T15:31:16Z | http://arxiv.org/abs/2310.18207v1 | # INA: An Integrative Approach for Enhancing Negotiation Strategies with Reward-Based Dialogue System
###### Abstract
In this paper, we propose a novel negotiation dialogue agent designed for the online marketplace. Our agent is integrative in nature _i.e_, it possesses the capability to negotiate on price as well as other factors, such as the addition or removal of items from a deal bundle, thereby offering a more flexible and comprehensive negotiation experience. We create a new dataset called **Integrative Negotiation Dataset (IND)** to enable this functionality. For this dataset creation, we introduce a new semi-automated data creation method, which combines defining negotiation intents, actions, and intent-action simulation between users and the agent to generate potential dialogue flows. Finally, the prompting of GPT-J, a state-of-the-art language model, is done to generate dialogues for a given intent, with a human-in-the-loop process for post-editing and refining minor errors to ensure high data quality. We employ a set of novel rewards, specifically tailored for the negotiation task to train our Negotiation Agent, termed as the **Integrative Negotiation Agent (INA)**. These rewards incentivize the chatbot to learn effective negotiation strategies that can adapt to various contextual requirements and price proposals. By leveraging the IND, we train our model and conduct experiments to evaluate the effectiveness of our reward-based dialogue system for negotiation. Our results demonstrate that the proposed approach and reward system significantly enhance the agent's negotiation capabilities. The INA successfully engages in integrative negotiations, displaying the ability to dynamically adjust prices and negotiate the inclusion or exclusion of items in a bundle deal1.
Footnote 1: Codes and dataset available at [https://github.com/zishan-ai/neg](https://github.com/zishan-ai/neg) and [https://www.iitp.ac.in/~ai-nlp-ml/resources.html#INA](https://www.iitp.ac.in/~ai-nlp-ml/resources.html#INA)
## 1 Introduction
In an online marketplace, customers and sellers engage in discussions involving product inquiry and bargaining before reaching a common consensus (He et al., 2018). In such a setting, negotiation between the customer and the seller is a core facet of discourse that ultimately decides the profit of sale and customer satisfaction. Negotiation on the price of a product is very common, however, customers have an open-ended approach to negotiation often also involving negotiation on certain aspects related to the deal. For example, while buying a chair the customer may negotiate a deal without the cushions, or even negotiate between delivery and in-store pick-up. As a result, a dialogue system for negotiation in an online marketplace should be capable of engaging in negotiation on different aspects such as price, product, and delivery. Additionally, such a system should also be capable of responding to product inquiries with relevant and knowledge-grounded information.
A systematic survey conducted by Zhan et al. (2022) discussed various datasets, evaluation metrics, and methodologies in the common literature. From this, it can be implied that bargaining in the marketplace typically follows a "Distributive" strategy, where each party involved aims to maximize their own gain rather than reach mutually beneficial outcomes. This strategy follows a win-lose model, where one party can gain only if the other party loses. The CraigslistBargains dataset (He et al., 2018) is the most prominent dataset in the price bargain domain, with other datasets having less than 1,000 dialogues. This dataset contains dialogues between two human agents assigned the roles of customer and seller negotiating over a product on Craigslist; the strategy used in the dialogues is largely distributive in nature. In contrast to a distributive approach, an "Integrative" approach to negotiation aims to reach a win-win situation by understanding the other party's needs and reaching a mutually satisfying consensus. It has been shown that an integrative approach to negotiation in retail e-commerce is more effective and leads to better customer satisfaction than distributive approaches (Guttman
and Maes, 1998) that typically utilize agents that negotiate only on price. It is common in online marketplaces for products to have several items, such as _"a chair and its cushion"_; a negotiation agent that is capable of satisfying customers that only want select items from the product, such as customers that only want a chair or customers that only want a cushion, is beneficial since the agent better understands customer requirements, which may lead to win-win outcomes. Hence, treating a product as a "bundle" of items that customers can choose from is a more integrative approach than treating the product as a single entity.
To incorporate this integrative approach, in this paper, we propose a novel dialogue system for negotiation in the online marketplace domain, which can respond to customers' inquiries and engage in negotiation with the customer. Unlike existing systems (He et al., 2018) that primarily focus on negotiation over the price of a product, our system follows a more integrative approach wherein negotiation involves different aspects such as adding or removing products from the aforementioned "bundle" of products, the price of the bundle, and the delivery of the product. Datasets for negotiation such as the CraigslistBargains dataset do not explicitly model the product as a bundle of smaller items. Hence, we construct a dataset (**IND**) consisting of integrative negotiation dialogues where the deal is modeled as a bundle of products. To avoid complete manual data creation, we design prompts for the GPT-J model (Wang and Komatsuzaki, 2021) to generate integrative negotiation utterances. To ensure the dataset's quality, we use humans in the loop for minor edits and filtering of the generated dialogues.
Using the constructed dataset, we build an integrative negotiation-powered dialogue agent (**INA**) using a supervised learning (SL) + reinforcement learning (RL) approach. To train our system, we leverage a novel reward function and maximize it using PPO loss (Schulman et al., 2017) to ensure aspects of negotiation consistency, negotiation power, and intent consistency. As per our knowledge, this is the first attempt to build an integrative-negotiation based dialogue system. Therefore we present a pioneering effort in developing an integrative-negotiation-based dialogue system, making several key contributions. _First,_ we introduce a new task of integrative negotiation, expanding the scope of dialogue system research. _Second,_ we propose an efficient approach for automatically generating data with minimal manual intervention, addressing the challenge of data scarcity in certain domains. This contribution will drive the development of more robust dialogue systems. _Third,_ we create a unique dataset of integrative negotiation dialogues. _Finally,_ we leverage the strengths of both supervised and reinforcement learning to construct a powerful dialogue system empowered by integrative negotiation strategies.
## 2 Related Work
Thompson et al. (2010) studied the effects of various intra-personal processes, such as mood, and interpersonal processes, such as emotion, on negotiation outcomes. They defined integrative negotiation as _"the extent to which the negotiated outcome satisfies the interests of both parties in a way that the outcome cannot be improved upon without hurting one or more of the parties involved"_. They also reported that the studies on the effectiveness of computer-mediated negotiation with respect to face-to-face negotiation give mixed results. Laroche and Genevay (2016) highlighted the importance of user adaptation in negotiation dialogue systems by performing experiments using different policies on simulated users in a newly designed negotiation dialogue game. Zhao et al. (2018) proposes a semi-automatic negotiation wherein a dialogue manager decides the intent, after which a natural language generator presents conversational strategies to a human expert that writes the final utterance. Lewis et al. (2017) prepares a dataset and proposes end-to-end dialogue systems for "multi-issue bargaining". In this type of bargaining, two agents are presented with a set of items and asked to assign each item to one agent; each agent is also given a value function to decide the value of an item. He et al. (2018) prepares the CraigslistBargains dataset, where two human agents negotiate over the price of a product listed on Craigslist; further, they decouple negotiation strategy and dialogue generation by proposing a dialogue manager to decide the intent of the next utterance and a generator that uses the intent to generate the utterance. Following this work, Yang et al. (2020) proposes a framework to integrate _"Theory of mind"_ (Premack and Woodruff, 1978) for inferring personality types to enhance negotiation dialogues.
Unlike these previous works, our proposed negotiation agent (INA) is capable of doing integrative
negotiation. Our agent is not only capable of negotiation with respect to the price of an item but can also modify the deal to better suit the customer's preference. Similarly, our agent can also handle the customization of a deal proposed by the customer and decide on accepting or rejecting the deal. These capabilities are currently absent in any negotiation agent.
## 3 Dataset Creation
We construct the **IND** dataset for the task of integrative negotiation. To save on human effort and resources, we come up with a novel mechanism based on prompting a large language model for dataset creation. We keep human annotators in the loop only for making minor edits and filtering the automatically generated dialogues to ensure the quality of the conversations. The overall process consists of creating a skeleton of dialogues by dynamically deciding the correct intent for any arbitrary conversation. Our overall dataset creation process consists of 5 steps: (i). Background Data Creation, (ii). Intent Definition, (iii). Dialogue Flow Generation, (iv). Prompting for Dialogue Generation, and (v). Data Correction.
### Background Data Creation
Although our method can be adapted to any product negotiation, we mainly focus on a list of 10 different electronic items: (i). Air Conditioning, (ii). Television, (iii). Refrigerator, (iv). Oven, (v). Washing Machine, (vi). Printer, (vii). Smart Phone, (viii). Laptop, (ix). Tablet, and (x). Camera. Along with these products, the deal bundle consists of a set of accessories related to the product. Therefore, our background database consists of the following information: Product Name, Product Description, Product Features, Price, Accessory List, and Accessory Description.
### Intent Definition
In order to build a robust negotiation system it is vital to define intents that can cover a diverse range of scenarios during negotiation. For an integrative negotiation agent, the scenario in the scope of the agent is not just price negotiation, but also item-level negotiation in the given bundle. To cover these properties, we come up with the following intents2: Footnote 2: Example utterances for each intent provided in Table 6 of the appendix.
* **Greet:** The utterances with general greetings like welcome and thank you come under this category.
* **Ask:** This intent is triggered when a user explicitly asks for information about an item or the ongoing negotiation.
* **Inform:** The agent may use the 'inform' intent to share detailed information about the products or services involved in the negotiation.
* **Ask-Clarification:** This intent captures the user's intention to seek further explanation or clarification regarding certain aspects of the negotiation or the overall deal according to the current negotiation state.
* **Negotiate-Price-Increase:** This intent indicates that the agent is seeking to increase the pricing terms of a product or service during the negotiation process.
* **Negotiate-Price-Decrease:** This intent indicates that the agent is seeking to decrease the pricing terms of a product or service during the negotiation process.
* **Negotiate-Price-NoChange:** This is an intent by the agent in a negotiation system indicating the system's intention to propose or assert that the price of a product or service should remain unchanged during the negotiation process. This is ideally done by highlighting the value and fairness of the current deal.
* **Negotiate-Add-X:** This intent by the agent or user refers to the intention to propose or suggest the addition of a specific item or feature to enhance the value of a product or service during the negotiation process. This may or may not lead to an increase in the price of the deal.
* **Negotiate-Remove-X:** This intent by the agent or user refers to the intention to propose or suggest the removal of a specific item or feature from the deal in the negotiation process. This may or may not lead to a decrease in the price of the deal.
* **Accept:** This refers to the agent or user's intention to agree or accept a proposal, offer, or condition reached during the negotiation process.
* **Reject:** This refers to the agent or user's intention to decline or reject a proposal, offer, or condition reached during the negotiation process.
The above intents can occur either individually or in combination with other intents (e.g.: Greet-Ask).
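For reference, the intent taxonomy above can be captured in a small data structure; the sketch below is purely illustrative, and the hyphen-joined representation of combined intents is an assumption based on the "Greet-Ask" example.

```python
from enum import Enum

class Intent(str, Enum):
    GREET = "Greet"
    ASK = "Ask"
    INFORM = "Inform"
    ASK_CLARIFICATION = "Ask-Clarification"
    NEG_PRICE_INCREASE = "Negotiate-Price-Increase"
    NEG_PRICE_DECREASE = "Negotiate-Price-Decrease"
    NEG_PRICE_NOCHANGE = "Negotiate-Price-NoChange"
    NEG_ADD_X = "Negotiate-Add-X"
    NEG_REMOVE_X = "Negotiate-Remove-X"
    ACCEPT = "Accept"
    REJECT = "Reject"

def combine(*intents: Intent) -> str:
    """Represent a combined intent such as 'Greet-Ask' as a hyphen-joined label."""
    return "-".join(i.value for i in intents)

print(combine(Intent.GREET, Intent.ASK))  # Greet-Ask
```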
### Dialogue Flow Generation
Our dialogue flow generator module assumes that the dialogue flow (intent sequence) during negotiation can be random. However, we also put some obvious constraints on this dataset-generation process. One simple constraint is that the conversation would be initiated by the customer with a greet intent. This greet intent could be accompanied by a request for clarification or one of the 'negotiate' intents for the customer. The agent can respond by the inform intent or one of the agent 'negotiate' intents.
For all the deal bundles, we maintain negotiation details of the ongoing deal with the customer, which consist of: (i). Minimum Seller price, (ii). Current Seller price, (iii). Tolerance value (\(tol\)), and (iv). Current Customer price. To enforce the integrative nature of our agent, we limit purely price-based negotiation to \(d\) turns, after which the 'Negotiate-Add-X' or 'Negotiate-Remove-X' intents would take over. To propose a price for the next turn, we assume a decay in the price difference (increment for the customer and decrement for the seller) over dialogue turns. This is in line with Faratin et al. (1998), where a similar function is used to model the price negotiation between the customer and seller. Equations 1 and 2 are used for the computation of the proposed price by the customer (\(P_{b}\)) or seller (\(P_{s}\)) at dialogue turn \(t\). In the equations, \(k\) is a constant to control the rate of price change from one turn to the next. If \(k\) is larger, there will be a higher rate of concession; at a low value, the rate of concession provided by the seller is low. For our setting, we have assumed a higher \(k\) value for the seller and a lower \(k\) for the customer, considering the customer is strict with their budget.
\[Ps_{t} =Pb_{t-1}+(Ps_{t-1}-Pb_{t-1})e^{-kt} \tag{1}\] \[Pb_{t} =Ps_{t-1}-(Ps_{t-1}-Pb_{t-1})e^{-kt} \tag{2}\]
The seller will choose the 'Accept' intent when the customer's offered price is greater than or equal to \(Ps_{t}-tol*Ps_{t}\). The customer will choose the 'Reject' intent when the conversation has crossed the negotiation deadline and the seller is no longer willing to lower the bundle price. The dialogue flow terminates with the acknowledgment of the 'accept' intent or the 'reject' intent.
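A compact simulation of this concession schedule is sketched below; the \(k\) and tolerance values are illustrative, and in the generated dialogues the two sides alternate turns rather than updating simultaneously.

```python
import math

def next_prices(p_seller, p_buyer, t, k_seller=0.2, k_buyer=0.05):
    """One step of Eqs. 1-2: both sides concede towards each other with exponential decay."""
    new_seller = p_buyer + (p_seller - p_buyer) * math.exp(-k_seller * t)  # Eq. 1
    new_buyer = p_seller - (p_seller - p_buyer) * math.exp(-k_buyer * t)   # Eq. 2
    return new_seller, new_buyer

def seller_accepts(p_buyer, p_seller, tol=0.05):
    """The seller accepts once the customer's offer reaches p_seller - tol * p_seller."""
    return p_buyer >= p_seller - tol * p_seller

# Simulate a few turns of price-only negotiation (numbers are made up).
p_s, p_b = 1000.0, 700.0
for t in range(1, 6):
    p_s, p_b = next_prices(p_s, p_b, t)
    print(t, round(p_s, 2), round(p_b, 2), seller_accepts(p_b, p_s))
```

Because the seller's decay constant is larger than the customer's, the seller concedes faster, in line with the assumption stated above.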
### Prompting for Dialogue Generation
We design few-shot prompts (Brown et al., 2020)3 for each intent, with around four shots for each prompt (due to the token limit of 2,048 in GPT-J). Each shot contains three parts, a description of the task, a summary of the relevant information from the dialogue, and an utterance following the intent, all in a natural language format. The summary of
Figure 1: An example conversation between the negotiation agent and a customer
the relevant information is designed considering the intent flow of the previous utterances of the dialogue. The description of the task is the sentences in the prompt that explains the situation and the goal of the intent, for instance, the task description for the _"Acknowledge acceptance"_ intent is _"A customer has agreed to purchase a product from a seller, the seller wants to thank the customer and proceed with the transaction"_. The utterance following the intent is a manually designed utterance following the task description and the information summary of the shot.
The flow generation module creates an ordered list of intents along with relevant information for each intent, for instance, for the intent _"Negotiate-Add-X"_ the item to be added is mentioned, and for _"Negotiate-Price-Decrease"_ the price to be proposed is mentioned. Our algorithm uses the list created by the flow generation module to create a shot that is augmented to the prompt of the respective intent, this prompt is then passed to the GPT-J model to produce the utterance.
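The snippet below sketches how a single intent-conditioned prompt could be assembled and passed to GPT-J via the `transformers` library. The prompt text is a made-up one-shot example, not one of the actual prompts used for IND, and loading the 6B-parameter model requires a large amount of GPU or CPU memory.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical one-shot prompt for the "Negotiate-Price-Decrease" intent; the real prompts
# contain about four shots plus a summary built from the generated intent flow.
prompt = (
    "A customer wants a lower price for a laptop bundle. The seller's current price is $900 "
    "and the customer proposes $780.\n"
    "Customer: That is still above my budget. Could you do $780 for the laptop and the bag?\n"
    "###\n"
    "A customer wants a lower price for a camera bundle. The seller's current price is $650 "
    "and the customer proposes $560.\n"
    "Customer:"
)

tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B")
model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-j-6B")

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=60, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```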
### Data Correction
To ensure the quality of the automatically generated dataset, we implemented manual correction and filtration steps. We engaged three human experts who possess post-graduate qualifications and have two years of experience in the field. Their instructions were to make edits to the generated dialogues in order to ensure grounding in the provided background database, intent, action, and negotiation flow. Additionally, any utterances produced by the agent that referred to its own experiences or feelings, pretending to be human, were to be rephrased or removed (to maintain authenticity). The experts were also responsible for correcting minor grammatical errors. Furthermore, they were asked to rate the fluency of each utterance on a scale of 0-2, where 0 represented non-fluency and 2 indicated complete fluency. Dialogues containing utterances rated as 0 fluency were dropped from the dataset. These measures were implemented to uphold the quality standards of the dataset.
## 4 Dataset Statistic
The statistics of the dataset created are given in Table 1. The dataset has a total of 4,163 dialogues, and we follow an 80:12:8 split between train, test, and validation sets. The average number of turns per dialogue in the dataset is 13, and the number of unique words in the dataset, excluding numbers, is 12,219; both of these metrics are comparable to the metrics of the Craigslist Bargain dataset (avg. turns: 9; unique words: 11,799). Following (Wang et al., 2021), to automatically measure the **variability** of conversations in our dataset, we compute BLEU-1 and METEOR scores between the utterances. We obtain low BLEU-1 and METEOR scores of 0.08 and 0.05, respectively, indicating high variability between the utterances in IND. We ask three human experts to rate the **'engagingness'**
Figure 2: Overall data creation process
and **'fairness'** of dialogues on a scale of 1 to 3 (the higher the better). The dialogues obtained an average rating of 2.17 for 'engagingness' and 2.26 for 'fairness'4.
Footnote 4: The overall inter-annotator agreement using Krippendorff’s alpha (Krippendorff, 2011) was found to be 0.84
## 5 Methodology
To force a language model to negotiate with the user while following its own price goal as well as its negotiation approach, we fine-tune it using a newly designed reward function in a reinforcement learning setting. Here, first, a pre-trained language model (GPT-2-medium) is fine-tuned in a supervised setting using the traditional cross-entropy loss between the ground-truth and predicted utterance probability distributions. Consider a supervised dialogue dataset \(D=\{d_{0},d_{1},..,d_{N}\}\), where \(d=\{a_{0},u_{0},..,a_{i},u_{i},..,a_{T-1},u_{T-1}\}\) is a multi-turn dialogue with \(u_{i}+cxt_{i}\) (\(u_{i}\) being the user's utterance at the \(i^{th}\) turn and \(cxt_{i}=\{a_{0},u_{0},..,a_{i-1}\}\)) as input and \(a_{i}\) (the agent's utterance at the \(i^{th}\) turn) as output. The supervised learning dialogue model \(\rho_{\theta}(d)\) can be expressed as:
\[\rho_{\theta}(d)=\prod_{i=0}^{T-1}\rho_{u}(u_{i}|u_{<i},a_{<i})\,\rho_{a}(a_{i}|u_{<=i},a_{<i}) \tag{3}\]
where \(\rho_{u}\) and \(\rho_{a}\) are the user's and agent's utterance probability distributions, respectively. This trained SLDM is fine-tuned in an RL setting using the PPO loss formulated as below:
\[L^{CLIP}(\theta)=\hat{E}[\min(pr_{r}(\theta)\hat{A}_{r},\;\mathrm{clip}(pr_{r}(\theta),1-\varepsilon,1+\varepsilon)\hat{A}_{r})] \tag{4}\]
where \(pr_{r}(\theta)=\mathcal{P}_{\theta}^{new}/\mathcal{P}_{\theta}^{old}\). \(\varepsilon\) and \(\hat{A}_{r}\) denote the clipping range and normalized rewards, respectively. Finally, the parameters are updated as follows:
\[\theta_{k+1}=\underset{\theta}{argmax}\underset{s,a\sim\mathcal{P}_{\theta_{k }}}{E}[L^{CLIP}] \tag{5}\]
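A minimal PyTorch rendering of the clipped surrogate objective in Eq. 4 is given below; the clipping range of 0.2 is only a common default, not necessarily the value used for INA.

```python
import torch

def ppo_clip_objective(logp_new, logp_old, advantages, eps=0.2):
    """Clipped surrogate objective of Eq. 4; it is maximized, so its negation serves as a loss."""
    ratio = torch.exp(logp_new - logp_old)                      # pr_r(theta) = P_new / P_old
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1 - eps, 1 + eps) * advantages
    return torch.min(unclipped, clipped).mean()
```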
Here, normalized rewards are obtained from a newly designed reward function (\(R\)) incorporating an intent consistency reward (\(R_{1}\)), a price gap reward (\(R_{2}\)), a negotiation strategy reward (\(R_{3}\)), and interactiveness (\(R_{4}\)) in generated responses. \(R\) intuitively nudges the SLDM towards these aspects by providing appropriate penalization/reward for each aspect of the generated responses. For example, if the model generates an intent-inconsistent response, then \(R_{1}\) will penalize the model to discourage it from generating similar content. All four rewards can be written as:
**Intent consistency:** In a negotiation system with complex intents there can often be divergence between the predicted intent and the intent of the generated utterance. To enforce this consistency, we propose the Intent Consistency (IC) reward. This reward function is implemented by first training a BERT model (Devlin et al., 2018) on the training set of IND for the task of intent prediction. This task is modeled as a classification task where the input to the BERT model is an agent utterance at turn \(t\), \(Ua_{t}\), and the expected output is the intent of the utterance \(Ia_{t}\). The accuracy of the trained intent classifier is 71.2%. We use the \([CLS]\) token for computing the probability distribution of the intent classes. We sample the probability value \(P_{it}\) of the predicted intent \(i\) for the response generated by our end-to-end SLDM dialogue model and use it as \(R_{1}\) (Eq. 6).
\[R_{1}=P_{it}(u_{t}) \tag{6}\]
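A sketch of this reward using Hugging Face `transformers` is shown below; the checkpoint name and the number of intent classes are placeholders, and in practice the classifier would be the BERT model fine-tuned on IND.

```python
import torch
from transformers import BertForSequenceClassification, BertTokenizer

# Placeholder checkpoint: in practice, load the intent classifier fine-tuned on IND.
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
classifier = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=11)
classifier.eval()

def intent_consistency_reward(agent_utterance: str) -> float:
    """R1 (Eq. 6): probability assigned to the predicted intent of the generated utterance."""
    inputs = tokenizer(agent_utterance, return_tensors="pt", truncation=True)
    with torch.no_grad():
        probs = torch.softmax(classifier(**inputs).logits, dim=-1)
    return probs.max().item()

print(intent_consistency_reward("I can offer the laptop with the bag for $850."))
```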
**Price Gap Reward:** The purpose of negotiation is to find a win-win solution for both the customer and the seller. The winning scenario for a seller would be as little reduction in the initially proposed price as possible. In line with this logic, we propose a Price Gap (PG) reward. This reward is simply the fraction of the initial proposed price by the agent \(P_{ai}\) and the final selling price after negotiation \(P_{af}\) (Eq 7). The higher the final price the greater the reward.
\[R_{2}=\frac{P_{af}}{P_{ai}} \tag{7}\]
**Negotiation Strategy Reward:** A successful negotiation might not always entail deal acceptance. In cases where the customer wants to go below the minimum selling price of the agent \(P_{a-min}\) it would not be judicious for the seller to satisfy the customer. In such situations where the negotiation could result in a win-lose situation, the deal should be rejected. Hence, the success criterion of the negotiation lies in not just acceptance of the deal but also the fairness of the deal. To ensure that our
\begin{table}
\begin{tabular}{|l|c|c|c|} \hline & **Train** & **Test** & **Valid** \\ \hline _\#Dialogues_ & 3330 & 500 & 333 \\ _\#Utterances_ & 45,914 & 6887 & 4592 \\ \hline _Avg \# of words in Customer Utterance_ & 19.30 & 19.32 & 19.29 \\ \hline _Avg \# of words in Sales-Person Utterance_ & 33.13 & 33.32 & 33.27 \\ \hline \end{tabular}
\end{table}
Table 1: Statistics of the dataset created (IND)
negotiation succeeds only in win-win scenarios we design the Negotiation Strategy (NS) reward.
\[R_{3}=F(\frac{P_{b}-P_{a-min}}{P_{a-min}})G(Intent_{f}) \tag{8}\]
\[G(Intent_{f})=\begin{cases}1,&Intent_{f}=accept\\ -1,&Intent_{f}=reject\end{cases} \tag{9}\]
\[F(x)=\begin{cases}0,&x<0\\ e^{x},&x\geq 0\end{cases} \tag{10}\]
In the above equations, \(P_{b}\) is the customer's proposed price, and \(Intent_{f}\in\{Accept,Reject\}\) is the final intent in the conversation used to capture the negotiation result. The reward incentivizes acceptance of a deal when the negotiated price is within the limit of a minimum price for the seller, and rejection when the negotiated price is below this minimum price.
**Interactiveness:** To ensure interactiveness, repetitions, and conversation blackholes are penalized such that system can engage the user for a longer duration with interactive responses. To penalize the generation of similar utterances for a given intent in the dialogue we use Equation 11.
\[R_{4}=1-\frac{\sum_{i=1}^{m}\frac{v_{k}^{in}\cdot v_{i}^{in}}{\|v_{k}^{in}\|\,\|v_{i}^{in}\|}}{m} \tag{11}\]
where \(v_{k}^{in}\) is the vector (bag of words) representing the generated utterance with intent \(in\), and \(v_{1}^{in}\) to \(v_{m}^{in}\) are the vectors representing the previously generated utterances in the dialogue with the same intent. The final normalized reward function \(R\) is formulated as:
\[R=\gamma_{1}R_{1}+\gamma_{2}R_{2}+\gamma_{3}R_{3}+\gamma_{4}R_{4} \tag{12}\]
with \(\gamma_{1}+\gamma_{2}+\gamma_{3}+\gamma_{4}=1\).
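The reward terms other than \(R_{1}\) (which needs the trained intent classifier above) reduce to a few lines of Python. The sketch below is a direct reading of Eqs. 7-12; the \(\gamma\) weights shown are equal placeholders, not the tuned values.

```python
import math
import numpy as np

def price_gap_reward(p_final, p_initial):
    """R2 (Eq. 7): fraction of the initially proposed price retained after negotiation."""
    return p_final / p_initial

def negotiation_strategy_reward(p_buyer, p_min, final_intent):
    """R3 (Eqs. 8-10): positive when a fair deal (offer above the minimum price) is accepted,
    negative when it is rejected, and zero when the offer is below the minimum price."""
    x = (p_buyer - p_min) / p_min
    f = math.exp(x) if x >= 0 else 0.0              # Eq. 10
    g = 1.0 if final_intent == "accept" else -1.0   # Eq. 9
    return f * g                                    # Eq. 8

def interactiveness_reward(current_vec, previous_vecs):
    """R4 (Eq. 11): penalizes cosine similarity with earlier same-intent utterances."""
    if not previous_vecs:
        return 1.0
    sims = [
        float(np.dot(current_vec, v) / (np.linalg.norm(current_vec) * np.linalg.norm(v)))
        for v in previous_vecs
    ]
    return 1.0 - sum(sims) / len(sims)

def total_reward(r1, r2, r3, r4, gammas=(0.25, 0.25, 0.25, 0.25)):
    """R (Eq. 12) with equal placeholder weights summing to one."""
    return sum(g * r for g, r in zip(gammas, (r1, r2, r3, r4)))
```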
## 6 Experiments
### Evaluation Metrics
To properly assess INA's performance, we perform both automatic and manual evaluations. In automatic evaluation to measure the surface similarity with the gold responses, we compute **METEOR**(Banerjee and Lavie, 2005). For semantic similarity, we compute **BERT Score (BS-F1)**(Zhang et al., 2019) and **Word Mover distance (WM)**. We also report the **Perplexity (PPL)** and the **average response length (R-LEN)** of the generated responses.
Human evaluations were conducted by three postgraduate evaluators who possess proficiency in similar tasks. Each evaluator interacted with the proposed system 15 times and assessed the conversations based on: **(i). Negotiation Consistency (N-Con):** It is the measure of consistency (absence of arbitrariness) in the negotiation approach within a dialogue **(ii). Bargaining Efficacy (B-Eff):** It measures the ability of the negotiation system to present compelling arguments, reasoning, or incentives that influence the other party's decision-making process., **(iii). Outcome fairness (O-fair):** It assesses the fairness or equity of the final outcomes reached during the negotiation process., **(iv). Dialogue-fluency (D-F):** It measures the overall grammatical correctness of the generated responses, and **(v). Dialogue-Engagingness (D-E):** Measures the extent to which a conversation or dialogue is interesting, captivating, and able to hold the attention of the participants. The evaluators assigned scores on a scale of 1 to 3 for each metric (The higher the better).
points, respectively. Further, the obtained **R-LEN** score of 39.93 is also better than that of ARDM, ARDM+BK, ARDM+In, and Neg-TOD, with differences of 15.72, 2.28, 13.5, and 1.76, respectively. This indicates that the **INA** is able to generate longer responses, hence showing more engagingness with the user. This can be attributed to the incorporation of all four rewards, where \(R_{1}\), \(R_{2}\), and \(R_{3}\) play a crucial role in handling negotiation and price consistency, and \(R_{4}\) helps in maintaining non-repetitiveness, driving the agent to build rapport with the user as well as stay on goal by generating diverse and interactive negotiation responses.
## 9 Limitations
Our data creation steps and modeling have some limitations. First, to create the data, GPT-J is used, which requires a large GPU memory size (here, 40 GB). Another limitation of GPT-J is that it has a context window of 2,048 tokens, which constrains our prompting mechanism. Within this context window, we need to fit the background data as well as the dialogue history along with a few shot examples. This allows us to use only a maximum of 4 shots while prompting, leading to some hallucinations in the created data, which needed to be fixed manually.
## 10 Ethical Considerations
Since negotiation by nature entails bargaining with the customer, it should be done ethically. Our integrative approach to negotiation gives greater flexibility to the customer and hence leads to a win-win scenario in negotiation. Our negotiation is not framed as a zero-sum game where one party has to lose in order for the other to win. The customer can reject the deal at any point of the conversation and thus is not compelled to continue with the negotiation if it does not suit them.
The dataset created in this work will be made available only after filling and signing an agreement declaring that the data will be used only for research purposes. The annotation, filtering/editing of data, and manual evaluations were done by human experts, who are regular employees of our research group and are paid in accordance with the institute's policy. There are no other issues to declare.
## 11 Acknowledgement
The authors acknowledge the grant received from Accenture LLP for the project titled "Conversational Agents with Negotiation and Influencing ability".
|
2308.15569 | On lens space surgeries from the Poincaré homology sphere | Building on Greene's changemaker lattices, we develop a lattice embedding
obstruction to realizing an L-space bounding a definite 4-manifold as integer
surgery on a knot in the Poincar\'e homology sphere. As the motivating
application, we determine which lens spaces are realized by $p/q$-surgery on a
knot $K$ when $p/q > 2g(K) -1$. Specifically, we use the lattice embedding
obstruction to show that if $K(p)$ is a lens space and $p \geq 2g(K)$, then
there exists an equivalent surgery on a Tange knot with the same knot Floer
homology groups; additionally, using input from Baker, Hedden, and Ni, we
identify the only two knots in the Poincar\'e homology sphere that admit
half-integer lens space surgeries. Thus, together with the Finite/Cyclic
Surgery Theorem of Boyer and Zhang, we obtain the corollary that lens space
surgeries on hyperbolic knots in the Poincar\'e homology sphere are integral. | Jacob Caudell | 2023-08-29T18:49:20Z | http://arxiv.org/abs/2308.15569v1 | # On lens space surgeries from the Poincare homology sphere
###### Abstract.
Building on Greene's changemaker lattices, we develop a lattice embedding obstruction to realizing an L-space bounding a definite 4-manifold as integer surgery on a knot in the Poincare homology sphere. As the motivating application, we determine which lens spaces are realized by \(p/q\)-surgery on a knot \(K\) when \(p/q>2g(K)-1\). Specifically, we use the lattice embedding obstruction to show that if \(K(p)\) is a lens space and \(p\geq 2g(K)\), then there exists an equivalent surgery on a Tange knot with the same knot Floer homology groups; additionally, using input from Baker, Hedden, and Ni, we identify the only two knots in the Poincare homology sphere that admit half-integer lens space surgeries. Thus, together with the Finite/Cyclic Surgery Theorem of Boyer and Zhang, we obtain the corollary that lens space surgeries on hyperbolic knots in the Poincare homology sphere are integral.
## 1. Introduction
### Background
_Dehn surgery_--the cut and paste operation whereby one solid torus is removed from a 3-manifold and replaced by another--has been one of the most well-studied and befuddling constructions in low-dimensional topology since Dehn first performed surgery in 1910. A classical result ([16],[26]) states that for any two 3-manifolds \(Y\) and \(Y^{\prime}\), there is a link \(L\) in \(Y\) admitting a Dehn surgery to \(Y^{\prime}\), but determining whether \(Y^{\prime}\) can be obtained by surgery on a _knot_\(K\) in \(Y\)--and if so then for which knots in \(Y\)--is notoriously difficult in general. In certain cases--e.g. when \(Y\) is \(S^{3}\) and \(Y^{\prime}\) is reducible, contains an incompressible torus, or has a finite fundamental group--general constructions motivated by low-complexity examples have led to conjecturally complete accounts of all such surgeries. At present, we treat the case where \(Y\) is the Poincare homology sphere \(\mathcal{P}\), oriented as the boundary of the negative definite \(E_{8}\) plumbing, and \(Y^{\prime}\) is homeomorphic to a connected sum of exactly one or two lens spaces. Note that the scenario where \(Y^{\prime}\) is instead a connected sum of three or more lens spaces was dispatched in [5], and the main obstruction formulated there serves as the foundation for the present work.
To set up the statement of our main results, we recall some conventions of Dehn surgery. For \(K\subset\mathcal{P}\) with exterior \(M_{K}:=\mathcal{P}\setminus\overset{\circ}{\nu}(K)\), identify \(\mathbb{Q}\cup\{1/0\}\) with the set of _slopes_ on \(\partial M_{K}\) by \(p/q\mapsto p[\mu]+q[\lambda]\), where \(\mu\) is the meridian of \(K\) and \(\lambda\) is the _Seifert longitude_ of \(K\), i.e. the unique slope on \(\partial\nu(K)\) that bounds in \(M_{K}\), oriented such that \(\mu\cdot\lambda=1\). Denote by \(K(p/q)\) the manifold obtained by identifying the boundaries of an abstract solid torus \(D^{2}\times S^{1}\) and \(M_{K}\) such that \(\partial D^{2}\times\{\mathrm{pt}\}\) is identified with the slope \(p/q\), and call this manifold \(p/q\)-surgery on \(K\).
\(\mathcal{P}\) and connected sums of lens spaces are all _Heegaard Floer \(L\)-spaces_--that is, they satisfy \(\operatorname{rk}\,\widehat{HF}(Y)=|H_{1}(Y;\mathbb{Z})|\). Practitioners of surface intersection graph techniques will recognize that if \(p\geq 2g(K)\) and \(K(p)\) is reducible, then \(K\) is the \((r,s)\)-cable of a knot \(\kappa\), where \(p=rs\), \(s\geq 2\), and \(K(p)\cong\kappa(r/s)\#L(s,r)\)[17] (cf. [14]).
### Main results
We now state our main results, which resolve the question of which knots in \(\mathcal{P}\) admit half-integer lens space surgeries--thereby completely resolving the question of which knots admit non-integer lens space surgeries--and the question of which lens spaces are realized by \(\geq 2g(K)\)-surgery on a knot \(K\subset\mathcal{P}\).
#### 1.2.1. Half-integer surgeries
Figures 1 and 2 present two knots in \(\mathcal{P}\), \(E\) and \(C\), with half-integer lens space surgeries.
**Theorem 1.1**.: _The two knots \(E\) and \(C\) are the only knots in \(\mathcal{P}\) with half-integer lens space surgeries._
We point out that \(E(7/2)\cong L(7,6)\) and \(C(27/2)\cong L(27,16)\). It follows that 14-surgery on the \((7,2)\)-cable of \(E\) is homeomorphic to \(L(7,6)\#L(2,1)\), and that 54-surgery on the \((27,2)\)-cable of \(C\) is homeomorphic to \(L(27,16)\#L(2,1)\). In fact, the cabling construction ensures that the \(2p\)-surgery on the \((p,2)\)-cable of a knot \(K\) is homeomorphic to \(K(p/2)\#L(2,1)\). In order to obstruct half-integer lens space surgeries, we turn our attention to obstructing integer surgeries to 3-manifolds of the form \(L(p,q)\#L(2,1)\). The upshot of this is twofold: first, our lattice embedding obstruction does not deal directly with non-integer surgeries, though some modifications may be made to make it amenable to such surgeries, as Gibbons did with \(p/q\)-changemakers in [9], and second, any knot \(K\) where \(K(2p)\) is homeomorphic to \(L(p,q)\#L(2,1)\) must satisfy \(2p\geq 2g(K)\), so in fact \(K\) is a cable knot. Moreover, \(K\) must be the \((p,2)\)-cable of a knot whose \(p/2\)-surgery yields \(L(p,q)\), since \(2/p\)-surgery certainly never yields a lens space for \(p>2\): in that case the genus of the knot would have to be \(0\), but then the knot would be contained in a 3-ball, and thus the exterior of the knot would be reducible.
Figure 1. Performing a slam-dunk of \(E\) through the 3-framed unknot followed by a slam-dunk of the resulting 1-framed unknot through the central 0-framed unknot yields a linear surgery diagram for \(L(7,6)\). Thus, \(E\subset\mathcal{P}\), the exceptional fiber of order 3, admits a half-integer surgery to \(L(7,6)\).
En route to proving Theorem 1.1, we prove the following.
**Theorem 1.2**.: _If \(K(2p)\cong L(p,q)\#L(2,1)\), then \((p,q)\in\{(27,16),(7,6)\}\) and \(g(K)=p\)._
Recall that \(\mathcal{P}\) is the branched double cover of the pretzel knot \(P(-2,3,5)\), and that the lens space \(L(p,q)\) is the branched double cover of the 2-bridge knot \(K(p,q)\). Recall also that, by the Montesinos trick, if the knot \(K_{1}\subset S^{3}\) may be obtained from the knot \(K_{0}\) by changing a crossing in a planar diagram for \(K_{0}\), then there is a knot in the branched double cover of \(K_{0}\) that admits a half-integer surgery to the branched double cover of \(K_{1}\). We immediately obtain the following corollary of Theorem 1.1.
**Corollary 1.3**.: _The only 2-bridge knots in \(S^{3}\) that admit crossing changes to \(P(-2,3,5)\) are \(K(7,6)\) and \(K(27,16)\). _
When paired with the Finite/Cyclic Surgery Theorem of Boyer-Zhang [4], Theorem 1.1 gives a remarkable corollary characterizing knots in \(\mathcal{P}\) with non-integral lens space surgeries. The following is a specialization of the Finite/Cyclic Surgery Theorem relevant to the case at hand.
**Theorem 1.4** (Theorem 1.1 (2) of [4]).: _Let \(K\subset\mathcal{P}\) and suppose that \(K(p/q)\) is a lens space. If \(q\geq 2\), then exactly one of the following holds:_
1. \(K\) _is an exceptional fiber in a Seifert fibering of_ \(\mathcal{P}\)_;_
2. \(K\) _is a cable of an exceptional fiber in a Seifert fibering of_ \(\mathcal{P}\)_; or_
3. _the interior of_ \(M_{K}\) _admits a hyperbolic metric with respect to which_ \(\partial M_{K}\) _is totally geodesic and_ \(q=2\)_._ _
In the case of (3), the knot \(K\) is said to be _hyperbolic_. In light of Theorem 1.4, we obtain the following corollaries.
Figure 2. After modifying the surgery diagram by adding a \(-2\)-framed meridian to \(C^{*}\) and changing the framing of \(C^{*}\) to \(-2\), a sequence of blow-ups and blow-downs takes the surgery diagram to the negative definite \(E_{8}\) plumbing. Thus, there is a knot \(C\subset\mathcal{P}\) with a half-integer surgery to \(L(27,16)\) whose surgery dual is \(C^{*}\).
**Corollary 1.5**.: _Lens space surgeries on hyperbolic knots in \(\mathcal{P}\) are integral._
Proof.: By Theorem 1.4, a non-integer lens space surgery on a hyperbolic knot must be a half-integer surgery. By Theorem 1.1, the only knots in \(\mathcal{P}\) with half-integer lens space surgeries are \(E\), which is the exceptional fiber of order \(3\), and \(C\), which is a cable of an exceptional fiber. Neither of these knots is hyperbolic.
**Corollary 1.6**.: _Let \(K\subset\mathcal{P}\) and suppose that \(K(p/q)\) is a connected sum of exactly two lens spaces. If \(p/q>2g(K)-1\), then \(K\) is either an exceptional fiber, or a once- or twice-iterated cable thereof._
Proof.: By Lemma 14 of [5], if \(K(p/q)\) is reducible, \(q\geq 2\), and \(p/q>2g(K)-1\), then \(K\) is an exceptional fiber. Otherwise, we have \(q=1\) and \(p\geq 2g(K)\), and therefore \(K\) is a cable knot. Together, Theorems 1.1 and 1.4 imply that \(K\) is either a cable of an exceptional fiber or a cable of a cable of an exceptional fiber.
Interestingly enough, the strategy we use to prove Theorem 1.2 cannot on its own be used to characterize the knots \(K\subset\mathcal{P}\) where \(K(2g(K)-1)\) is a connected sum of exactly two lens spaces. We are at present unable to produce such a knot, nor are we able to prove that no such knot exists. However, we deduce an obstruction toward that end.
**Theorem 1.7**.: _Let \(K\subset\mathcal{P}\), \(p=2g(K)-1\), and suppose that \(K(p)\) is a connected sum of exactly two lens spaces. Then there is a torus knot, or a cable thereof, \(T\subset S^{3}\) with \(K(p)\cong T(p)\), and_
\[\Delta_{K}(t)=\Delta_{T}(t)-(t^{(p-1)/2}+t^{-(p-1)/2})+(t^{(p+1)/2}+t^{-(p+1)/ 2}).\]
#### 1.2.2. Integer surgeries
In [3], Berge produced an elegant construction of lens space surgeries on knots in \(S^{3}\). Let \(K\subset S^{3}\) lie on a genus two Heegaard surface \(\Sigma\), and suppose that \(K\) represents a _primitive_ element in the fundamental group of each of the two handlebodies bounded by \(\Sigma\), in which case \(K\) is said to be _doubly primitive_. The surface \(\Sigma\) induces an integral slope on \(\partial M_{K}\): \(\Sigma\cap\partial M_{K}\) consists of two anti-parallel curves that represent the same slope \(p\). Berge showed that the result of \(p\)-surgery on \(K\) is a lens space and tabulated a list of doubly primitive knots in \(S^{3}\), the specific collection of which are known as the _Berge knots_. The _Berge conjecture_ posits that every integer lens space surgery on a knot in \(S^{3}\) arises from this construction. This conjecture has received considerable attention since Berge's observation more than thirty years ago. At the time of this writing, one of the furthest strides toward proving the Berge conjecture is the resolution of the _lens space realization problem_ [11], where Greene proved the following.
**Theorem 1.8** (Theorem 1.2 of [11]).: _If \(K\subset S^{3}\), \(p\) is a positive integer, and \(K(p)\cong L(p,q)\), then there is a Berge knot \(B\subset S^{3}\) such that \(B(p)\cong L(p,q)\) and \(K\) has the same knot Floer homology as \(B\). _
As Berge noted, it is often convenient to take the perspective of the surgery dual to a doubly primitive knot \(K^{*}\subset K(p)\cong L(p,q)\), the core of the surgery solid torus. Following convention, we refer to the dual of a Berge knot also as a Berge knot, though the ambient manifold serves to prevent confusion. Berge showed in [3, Theorem 2] that \(K^{*}\) is a _simple_
knot in \(L(p,q)\)--that is, letting \(H_{0}\cup_{T^{2}}H_{1}\) denote a genus one Heegaard splitting of \(L(p,q)\), \(K^{*}\) admits an isotopy in \(L(p,q)\) such that \(K^{*}\cap H_{i}\) is contained in a compression disk for \(H_{i}\), \(i\in\{0,1\}\), and there is a unique such knot \(K^{*}\) in each homology class in \(L(p,q)\). Thus, a Berge knot in a lens space is determined by its homology class.
Of course, one readily observes that Berge's construction is not unique to \(S^{3}\), and it is readily adapted to produce lens space surgeries on knots in any \(3\)-manifold admitting a genus two Heegaard splitting--consider, for instance, the Poincare homology sphere. In [24], Tange produced a list of simple knots in lens spaces admitting integer surgeries to \(\mathcal{P}\) that we call _Tange knots_; with the same ambiguity as for Berge knots, we also use the name for their surgery duals in \(\mathcal{P}\), which are doubly primitive. In [25], Tange made partial progress towards proving that any lens space \(L(p,q)\) realized by integer surgery on a knot \(K\subset\mathcal{P}\) with \(2g(K)\leq p\) is realized by surgery on a Tange knot. Here, we take up the strategy suggested in the remark following [11, Conjecture 1.10], to prove the following.
**Theorem 1.9**.: _If \(K(p)\cong L(p,q)\) and \(p\geq 2g(K)\), then there is a Tange knot \(T\) such that \(T(p)\cong L(p,q)\) and \(K\) has the same knot Floer homology as \(T\)._
We believe that one may furthermore show, following the argument in [11, Section 2], that the surgery duals of \(K\) and \(T\) in the statement of the theorem are homologous in \(H_{1}(L(p,q))\), though we omit details in this work.
A simple knot \(K\subset L(p,q)\) is said to be _Floer simple_, in the sense that \(\operatorname{rk}\widehat{HFK}(L(p,q),K)=\operatorname{rk}\widehat{HF}(L(p,q))=p\). In [13], Hedden constructed a knot \(T_{L}\) (and its mirror image \(T_{R}\)) in each lens space \(L(p,q)\) with the property that \(\operatorname{rk}\widehat{HFK}(L(p,q),T_{L})=p+2\). In [22], Rasmussen showed that the rank of the knot Floer homology of a knot \(K\subset L(p,q)\) with an integer surgery to an L-space homology sphere is at most \(p+2\), and he furthermore certified that, up to orientation reversal, \(\mathcal{P}\) is the only L-space homology sphere realized by surgery on a Hedden knot in \(L(p,q)\) for \(p\leq 38\). In [2], Baker also constructed an infinite family of knots in \(\mathcal{P}\) with integer lens space surgeries. Baker's construction identifies band surgeries that take the pretzel knot diagram \(P(-2,3,5)\) to a two-bridge knot; by the Montesinos trick, these band surgeries lift to integer lens space surgeries on knots in \(\mathcal{P}\). Notably, all but finitely many of the knots arising from Baker's construction are tunnel number two, and therefore none of these are doubly primitive, and, upon further inspection, none of the lens spaces realized by surgery on Baker knots are realized by surgery on either Tange or Hedden knots. While the lens space surgery slopes of Hedden and Baker knots were known at the time of their construction, we remark that a corollary of Theorem 1.9 is that if \(K\subset\mathcal{P}\), \(K(p)\cong L(p,q)\), and \(L(p,q)\) is not surgery on a Tange knot (e.g. \(K\) is a Hedden or a Baker knot), then \(p=2g(K)-1\). Obstructing \(2g(K)-1\) lens space surgeries falls outside of the purview of \(E_{8}\)-changemakers, though we obtain an alternate proof of [2, Theorem 8].
**Theorem 1.10**.: _If \(K(p)\cong L(p,q)\) for \(p=2g(K)-1\), then there is a Berge knot \(B\subset S^{3}\) such that \(B(p)\cong L(p,q)\) and_
\[\Delta_{K}(t)=\Delta_{B}(t)-(t^{(p-1)/2}+t^{-(p-1)/2})+(t^{(p+1)/2}+t^{-(p+1)/ 2}).\]
### Changemakers
In an influential pair of papers, Greene observed that by combining Donaldson's Diagonalization Theorem and the data of Heegaard Floer _correction terms_ (or
_d-invariants_), one may address the topological problem of realizing, for example, a lens space as surgery on a knot in \(S^{3}\) by means of a combinatorial heuristic. This heuristic, called a _changemaker lattice embedding_, is derived from the relationship between the Alexander polynomial of a knot in \(S^{3}\):
\[\Delta_{K}(T)=a_{0}+\sum_{i=1}^{\infty}a_{i}(T^{i}+T^{-i}), \tag{1}\]
the _torsion coefficients_ of \(K\):
\[t_{i}(K)=\sum_{j=1}^{\infty}j\cdot a_{|i|+j}, \tag{2}\]
the correction terms of \(K(p)\):
\[\{d(K(p),\mathfrak{t})\colon\mathfrak{t}\in\operatorname{Spin}^{c}(K(p))\}, \tag{3}\]
and the lengths of _characteristic elements_ in a suitably chosen negative-definite unimodular lattice \(L\):
\[\operatorname{Char}(L)=\{\mathfrak{c}\in L\colon\langle\mathfrak{c},v\rangle \equiv\langle v,v\rangle\bmod 2\text{ for all }v\in L\}. \tag{4}\]
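As a concrete illustration of (1) and (2), the short Python sketch below (ours; the function name and the sample data are not taken from the works cited here) computes torsion coefficients from the symmetrized Alexander coefficients. The trefoil, with \(\Delta_{K}(T)=T-1+T^{-1}\), is used only as a sanity check.

```python
def torsion_coefficients(a, max_i=None):
    """t_i(K) = sum_{j >= 1} j * a_{|i|+j}, where a = [a_0, a_1, ..., a_g]
    lists the symmetrized Alexander coefficients (a_i = 0 for i > g)."""
    g = len(a) - 1
    if max_i is None:
        max_i = g
    return [sum(j * a[i + j] for j in range(1, g - i + 1)) for i in range(max_i + 1)]

# The right-handed trefoil: a_0 = -1, a_1 = 1, so t_0 = 1 and t_i = 0 for i >= 1.
print(torsion_coefficients([-1, 1], max_i=2))   # [1, 0, 0]
```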
In the case of studying lens space surgeries on knots in \(S^{3}\), Greene uses Donaldson's Diagonalization theorem to pin down the lattice \(L\) precisely as \(-\mathbb{Z}^{n+1}\), in which case:
\[\operatorname{Char}(-\mathbb{Z}^{n+1})=\Big{\{}\sum_{i=0}^{n}\mathfrak{c}_{i} d_{i}\colon\mathfrak{c}_{i}\equiv 1\bmod 2\Big{\}} \tag{5}\]
for \(\{d_{0},\dots,d_{n}\}\) an orthonormal basis of \(-\mathbb{Z}^{n+1}\). Greene showed that if \(L(p,q)\) is integer surgery on a knot in \(S^{3}\), then the canonical _linear lattice_\(\Lambda(p,q)\) is the orthogonal complement to some vector \(\sigma\in-\mathbb{Z}^{n+1}\) with the following curious property.
**Definition 1.11**.: _A vector \(\sigma=(\sigma_{0},\dots,\sigma_{n})\in-\mathbb{Z}^{n+1}\) is said to be a changemaker if \(0\leq\sigma_{0}\leq\dots\leq\sigma_{n}\), and for all \(0\leq i\leq n\) (where the empty sum is understood to be \(0\))_
\[\sigma_{i}\leq 1+\sum_{j=0}^{i-1}\sigma_{j}.\]
_This is equivalent to and derived from the condition that_
\[\{\langle\mathfrak{c},\sigma\rangle\colon\mathfrak{c}\in\{\pm 1\}^{n+1}\}=\{j \in[-|\sigma|_{1},|\sigma|_{1}]\colon j\equiv|\sigma|_{1}\bmod 2\},\]
_where \(|\sigma|_{1}\) denotes the \(L_{1}\)-norm of \(\sigma\)._
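For readers who wish to experiment, here is a minimal Python sketch (ours) of Definition 1.11, checking the recursive inequalities against the subset-sum characterization; the sample vectors are arbitrary and are not claimed to arise from any particular surgery.

```python
from itertools import product

def is_changemaker_recursive(sigma):
    """0 <= sigma_0 <= ... <= sigma_n and sigma_i <= 1 + (sigma_0 + ... + sigma_{i-1}),
    with the empty sum equal to 0 (so sigma_0 <= 1)."""
    if sigma[0] < 0 or any(sigma[i] > sigma[i + 1] for i in range(len(sigma) - 1)):
        return False
    return all(sigma[i] <= 1 + sum(sigma[:i]) for i in range(len(sigma)))

def is_changemaker_subset_sums(sigma):
    """{<c, sigma> : c in {+-1}^(n+1)} equals the parity interval PI(-|sigma|_1, |sigma|_1)."""
    one_norm = sum(sigma)
    values = {sum(e * s for e, s in zip(eps, sigma))
              for eps in product((1, -1), repeat=len(sigma))}
    return values == set(range(-one_norm, one_norm + 1, 2))

for sigma in [(1, 1, 2), (1, 2, 2), (1, 1, 4), (0, 2, 2)]:
    assert is_changemaker_recursive(sigma) == is_changemaker_subset_sums(sigma)
    print(sigma, is_changemaker_subset_sums(sigma))   # True, True, False, False
```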
**Remark 1.12**.: _The latter of the two conditions in Definition 1.11 is what generalizes to \(E_{8}\)._
More precisely, Greene showed the following.
**Theorem 1.13**.: _Let \(K\subset S^{3}\) and suppose that \(K(p)\cong L(p,q)\). Then \(\Lambda(p,q)\cong(\sigma)^{\perp}\) for some changemaker \(\sigma\in\mathbb{Z}^{n+1}\). Moreover,_
\[2g(K)=p-|\sigma|_{1},\]
_and if \(K^{\prime}(p)\cong L(p,q)\), then \(\widehat{HFK}(K^{\prime})\cong\widehat{HFK}(K)\)._
Sketch of proof.: Suppose that \(K(p)\cong L(p,q)\). Let \(W\) be the orientation reversal of the trace cobordism of \(p\)-surgery on \(K\), and let \(X:=X(p,q)\) be the _linear_ negative definite \(4\)-manifold bounded by \(L(p,q)\) presented by the Kirby diagram in Figure 4. Form the oriented, negative definite \(4\)-manifold \(Z\) by gluing \(W\) to \(X\) along \(L(p,q)\). Since \(\partial Z=S^{3}\), Donaldson's Theorem gives us an embedding \(Q_{W}\oplus Q_{X}\hookrightarrow-\mathbb{Z}^{n+1}\) for \(n=\operatorname{rk}Q_{X}\). Writing the image of the generator of \(Q_{W}\) under this embedding as \(\sigma=(\sigma_{0},\ldots,\sigma_{n})\), we have that
\[\mathfrak{c}^{2}+n+1\leq-8t_{i}(K), \tag{6}\]
for all \(|i|\leq p/2\) and any characteristic element \(\mathfrak{c}\) in \(-\mathbb{Z}^{n+1}\) with \(\langle\mathfrak{c},\sigma\rangle+p\equiv 2i\,\operatorname{mod}\,2p\). Furthermore, for all \(|i|\leq p/2\), there is some \(\mathfrak{c}\) attaining equality in (6).
That \(\sigma\) is a changemaker is derived from (6), following the preliminary observation that, for \(K\) an L-space knot, the sequence \((t_{0}(K),t_{1}(K),\ldots)\) is a non-increasing sequence of non-negative integers with \(t_{i}(K)=0\) if and only if \(i\geq g(K)\). Therefore,
\[2g(K)=p-\max\{\langle\mathfrak{c},\sigma\rangle\colon\mathfrak{c}^{2}=-(n+1) \}=p-|\sigma|_{1}.\]
Similarly, we may compute \(g^{(i)}(K)=\min\{j\geq 0\colon t_{j}(K)=i\}\) for \(i\in\{0,\ldots,t_{0}(K)\}\) by the formula
\[2g^{(i)}(K)=p-\max\{\langle\mathfrak{c},\sigma\rangle\colon\mathfrak{c}^{2}=- (n+1)-8i\},\]
and thereby completely recover \(\Delta_{K}(T)\) from \(\sigma\). The knot Floer homology of an L-space knot in an integer homology sphere L-space is completely determined by \(\Delta_{K}(T)\), and therefore if \(K(p)\cong K^{\prime}(p)\cong L(p,q)\), then \(\widehat{HFK}(K)\cong\widehat{HFK}(K^{\prime})\) as bigraded abelian groups.
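Combining (6) with the statement that equality is attained for every \(|i|\leq p/2\), the torsion coefficients, and hence the genus, can be read off from \(\sigma\) by a finite search. The following Python sketch (ours) does this in the positive definite convention, where (6) reads \(8t_{i}(K)\leq|\mathfrak{c}|^{2}-(n+1)\); the sample input \((1,1,1,2)\), with square \(7\) and \(L_{1}\)-norm \(5\), reproduces the torsion coefficients of the trefoil, as one expects for a genus one knot with a \(7\)-surgery to a lens space.

```python
from itertools import product

def torsion_from_changemaker(sigma, p=None):
    """Recover t_i(K) for 0 <= i <= p/2 from a changemaker sigma, using (6)
    together with sharpness: in the positive definite convention,
        t_i(K) = min{ (|c|^2 - (n+1))/8 : c characteristic,
                      <c, sigma> + p = 2i  (mod 2p) }.
    Characteristic vectors of Z^(n+1) have all entries odd.  Shifting an entry
    by 2p does not change its congruence class, so a minimizer may be assumed
    to lie in a box of odd entries; brute force suffices for small examples."""
    n1 = len(sigma)
    if p is None:
        p = sum(s * s for s in sigma)       # p = |sigma|^2 for surgeries from S^3
    bound = 2 * p + 1
    best = {}
    for c in product(range(-bound, bound + 1, 2), repeat=n1):
        i = ((sum(ci * si for ci, si in zip(c, sigma)) + p) % (2 * p)) // 2
        i = min(i, p - i)                    # t_{-i} = t_i
        t = (sum(ci * ci for ci in c) - n1) // 8
        if t < best.get(i, t + 1):
            best[i] = t
    # For the sample input every congruence class is met within this box.
    return [best[i] for i in range(p // 2 + 1)]

sigma = (1, 1, 1, 2)                         # |sigma|^2 = 7, |sigma|_1 = 5
ts = torsion_from_changemaker(sigma)
print(ts)                                    # [1, 0, 0, 0]
genus = next(i for i, t in enumerate(ts) if t == 0)
print(2 * genus, sum(s * s for s in sigma) - sum(sigma))   # 2 2, i.e. 2g(K) = p - |sigma|_1
```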
It follows that if for \(K\subset S^{3}\) we have \(K(p)\cong L(p_{1},q_{1})\#L(p_{2},q_{2})\), then \(p\geq 2g(K)\), and therefore \(K\) is a cable of a knot \(\kappa\subset S^{3}\) admitting a non-integral surgery to a lens space. Our proofs of Theorems 1.7 and 1.10 rest on the following Proposition (cf. [5, Theorem 3]).
**Proposition 1.14**.: _Let \(K\subset\mathcal{P}\) and \(p=2g(K)-1\). If \(K(p)\cong L(p_{1},q_{1})\#L(p_{2},q_{2})\), then the intersection form of the boundary connected sum of \(X(p_{1},q_{1})\) and \(X(p_{2},q_{2})\) is the orthogonal complement to a changemaker \(\sigma\in-\mathbb{Z}^{n+1}\). _
Proof of Theorems 1.7 and 1.10.: If \(K(p)\) is a lens space or the connected sum of exactly two lens spaces, then let \(\sigma\in-\mathbb{Z}^{n+1}\) be the changemaker associated to the surgery. As observed in [12, 11, 18, 5], \(\sigma\) determines \(\Delta_{K}(t)\). Now, if \(K(p)\) is the connected sum of a pair of lens spaces, the resolution of the cabling conjecture for connected sums of lens spaces in [12] gives us a knot \(K^{\prime}\subset S^{3}\) such that \(K^{\prime}(p)\cong K(p)\) and \(K^{\prime}\) is either a torus knot or a cable of a torus knot. On the other hand, if \(K(p)\cong L(p,q)\), then the resolution of the lens space realization problem in [11] gives us a knot \(K^{\prime}\subset S^{3}\) such that \(K^{\prime}(p)\cong L(p,q)\) and \(K^{\prime}\) is a Berge knot. In both cases, the changemaker \(\sigma\) determines \(\Delta_{K^{\prime}}(t)\), and by the assumption that \(p=2g(K)-1\), we see that
\[\Delta_{K}(t)=\Delta_{K^{\prime}}(t)-(t^{(p-1)/2}+t^{-(p-1)/2})+(t^{(p+1)/2}+t^ {-(p+1)/2}).\]
### \(E_{8}\)-changemakers
We take the changemaker lattice construction as inspiration in devising an obstruction to realizing an L-space bounding a _sharp_\(4\)-manifold as \(p\geq 2g(K)\) surgery on a knot \(K\subset\mathcal{P}\). In Section 4, we develop the following notion of an _\(E_{8}\)-changemaker_--the appropriate generalization of a changemaker in the lattice \(-E_{8}\oplus-\mathbb{Z}^{n+1}\)--and show that if \(p\)-surgery on a knot \(K\subset\mathcal{P}\) is an L-space that bounds a sharp (Definition 3.1) \(4\)-manifold \(X\), then \(Q_{X}\) is isomorphic to the orthogonal complement to some \(E_{8}\)-changemaker in \(-E_{8}\oplus-\mathbb{Z}^{n+1}\) for \(n=b_{2}(X)-8\).
For ease of notation, we now define the notion of a _parity interval_.
**Definition 1.15**.: _Let \(a\) and \(k\) be integers with \(k\geq 0\). The parity interval \(PI(a,a+2k)\) is the set of integers \(\{a+2j\colon 0\leq j\leq k\}\)._
Recall that if \(L\) is a negative-definite, unimodular lattice and \(\mathfrak{c}\in\operatorname{Char}(L)\), then \(\langle\mathfrak{c},\mathfrak{c}\rangle\equiv-\operatorname{rk}L\bmod 8\). Define \(m(L)=\max\{\langle\mathfrak{c},\mathfrak{c}\rangle\colon\mathfrak{c}\in\operatorname{Char}(L)\}\). We say that a characteristic vector \(\mathfrak{c}\in L\) is _short_ if \(\langle\mathfrak{c},\mathfrak{c}\rangle=m(L)\), and _nearly short_ if \(\langle\mathfrak{c},\mathfrak{c}\rangle=m(L)-8\). Denote the sets of short and nearly short characteristic vectors in \(L\) by short\((L)\) and Short\((L)\), respectively. For a vector \(v\in L\), let \(c(v):=\max\{\langle\mathfrak{c},v\rangle\colon\mathfrak{c}\in\text{short}(L)\}\) and let \(C(v):=\max\{\langle\mathfrak{c},v\rangle\colon\mathfrak{c}\in\text{Short}(L)\}\).
We are now ready to define the notion of an _\(E_{8}\)-changemaker_.
**Definition 1.16**.: _A vector \(\tau=(s,\sigma)\in-E_{8}\oplus-\mathbb{Z}^{n+1}\) is said to be an \(E_{8}\)-changemaker if_
1. \(PI(-c(\tau),c(\tau))=\{\langle\mathfrak{c},\tau\rangle\colon\mathfrak{c}\in \text{short}(-E_{8}\oplus-\mathbb{Z}^{n+1})\}\)_, and_
2. \(PI(c(\tau)+2,C(\tau))\subset\{\langle\mathfrak{c},\tau\rangle\colon\mathfrak{ c}\in\text{Short}(-E_{8}\oplus-\mathbb{Z}^{n+1})\}\)_._
_If \(L\cong(\tau)^{\perp}\) for some \(E_{8}\)-changemaker \(\tau\), then \(L\) is said to be an \(E_{8}\)-changemaker lattice._
The combinatorial constraints defining the \(E_{8}\)-changemaker condition naturally extend those defining the changemaker condition. In fact, writing \(\tau=(s,\sigma)\in-E_{8}\oplus-\mathbb{Z}^{n+1}\) and noting that short\((-E_{8}\oplus-\mathbb{Z}^{n+1})=\text{short}(-E_{8})\oplus\text{short}(-\mathbb{Z}^{n +1})=\{0\}\oplus\{\pm 1\}^{n+1}\), in order for \(\tau\) to be an \(E_{8}\)-changemaker the first thing one finds is that \(\sigma\) is a changemaker. Furthermore, the \(E_{8}\)-changemaker \(\tau\) associated to a surgery \(K(p)\) determines \(\widetilde{HFK}(K)\), and in particular \(g(K)\), in much the same way that a changemaker \(\sigma\) arising from surgery determines the associated knot's Floer homology groups.
Our lattice embedding obstruction is derived from the following theorem.
**Theorem 1.17**.: _Let \(K\) be an L-space knot in \(\mathcal{P}\), and suppose that \(p\geq 2g(K)\). If \(K(p)\) bounds a sharp \(4\)-manifold \(X\) with no torsion in \(H_{1}(X)\), then \(Q_{X}\) embeds as the orthogonal complement to an \(E_{8}\)-changemaker \(\tau=(s,\sigma)\) and \(\widehat{HFK}(\mathcal{P},K)\) is determined by \(\tau\). In particular, \(2g(K)=p-|\sigma|_{1}\)._
We then develop the theory of \(E_{8}\)-changemaker lattices in order to implement the obstruction implicit in Theorem 1.17, and arrive at the following two theorems.
**Theorem 1.18**.: _Suppose that \(\Lambda(2,1)\oplus\Lambda(p,q)\cong(\tau)^{\perp}\) for some \(E_{8}\)-changemaker \(\tau\). Then \((p,q)=(7,6)\) or \((27,16)\). Furthermore, if \(K(2p)\cong L(2,1)\#L(p,q)\), then \(g(K)=p\)._
**Theorem 1.19**.: _If \(\Lambda(p,q)\cong(\tau)^{\perp}\) for some \(E_{8}\)-changemaker \(\tau\), then there is a Tange knot \(T\) such that \(T(p)\cong L(p,q)\)._
Proof of Theorem 1.9.: Let \(K\subset\mathcal{P}\), and suppose that \(p\geq 2g(K)\) and \(K(p)\cong L(p,q)\). Then \(K(p)\) bounds a sharp \(4\)-manifold \(X:=X(p,q)\), and Theorem 1.17 implies that \(Q_{X}\) embeds as the orthogonal complement to some \(E_{8}\)-changemaker \(\tau\) that completely determines \(\widehat{HFK}(\mathcal{P},K)\). By Theorem 1.19, there is a Tange knot \(T\subset\mathcal{P}\) such that \(T(p)\cong L(p,q)\), so \(\widehat{HFK}(\mathcal{P},K)\cong\widehat{HFK}(\mathcal{P},T)\).
In the current work, we demonstrate Theorem 1.19 explicitly only for \(\Lambda(p,q)\) with rank \(\geq 10\), but contend that a computer aided search of the \(1003\) non-zero \(E_{8}\)-changemakers in \(-E_{8}\oplus-\mathbb{Z}^{n}\) with \(n\in\{0,1,2\}\) demonstrates the theorem to be true in the case that \(7\leq\operatorname{rk}\Lambda(p,q)\leq 9\). There are two reasons for this omission: first, the technical lemmas we use to characterize \(E_{8}\)-changemaker embeddings of linear lattices when \(n\geq 3\) break down when the diagonal summand is small, so this case requires a different approach than the general one we take; second, Rasmussen, according to the remark at the end of Section 6.3 of [22], has used a computer to verify that all lens spaces \(L(p,q)\) with \(p\leq 100,000\) realized by integer surgery on a knot \(K\subset\mathcal{P}\) with \(2g(K)\leq p\) are realized by integer surgery on a Tange knot; by explicit computation, the author has found that linear \(E_{8}\)-changemaker lattices \(\Lambda(p,q)\) with \(n\in\{0,1,2\}\) satisfy \(p\leq 100,000\).
### Beyond \(E_{8}\)-changemakers
The utility of changemakers and \(E_{8}\)-changemakers in both constructing and obstructing lens space surgeries from \(S^{3}\) and \(\mathcal{P}\), respectively, suggests that appropriate generalizations of their defining combinatorics may give way to tractable strategies for understanding the more general phenomenon of when Dehn surgery on a knot \(K\) in an arbitrary integer homology sphere \(Y\) yields the lens space \(L(p,q)\). In the case that \(Y\) is an L-space with \(d(Y)\geq 4\), then, by considering the \(4\)-manifold \(Z\) obtained by gluing the trace cobordism from \(L(p,q)\) to \(Y\) to \(X(p,q)\) along \(L(p,q)\) in light of data from Floer homology (cf. Lemma 3.2) we are led to consider vectors \(\tau\) in negative definite, unimodular lattices of the form \(\Lambda\cong-L\oplus-\mathbb{Z}^{n+1}\), where \(L\) has no vectors of length \(1\), with the property that
\[\mathfrak{c}^{2}+\operatorname{rk}(\Lambda)-4d(Y)\leq-8t_{i}(K)\]
for all \(|i|\leq p/2\) and \(\mathfrak{c}\in\operatorname{Char}(\Lambda)\) such that \(\langle\mathfrak{c},\tau\rangle+p\equiv 2i\bmod 2p\), and moreover for all \(|i|\leq p/2\) there is some \(\mathfrak{c}\) which attains equality. If \(p\geq 2g(K)\), then we are led to the following definition.
**Definition 1.20**.: _Let \(L\) be a positive definite unimodular lattice with no vectors of norm 1, and let \(\Lambda=-L\oplus-\mathbb{Z}^{n}\). Let_
\[m(\Lambda)=\max\{\langle\mathfrak{c},\mathfrak{c}\rangle\colon\mathfrak{c}\in \text{Char}(\Lambda)\}.\]
_Denote by \(\text{Char}_{i}(\Lambda)\) the characteristic elements in \(\Lambda\) of norm \(m(\Lambda)-8i\), and for a vector \(v\in\Lambda\), let \(C_{i}(v)\) denote \(\{\langle\mathfrak{c},v\rangle\colon c\in\text{Char}_{i}(\Lambda)\}\), and let \(c_{i}(v)=\max(C_{i}(v))\). A vector \(\tau\in\Lambda\) is said to be an \(L\)-changemaker if_
1. \(PI(-c_{0}(\tau),c_{0}(\tau))=C_{0}(\tau)\)_, and_
2. \(PI(c_{i-1}(\tau)+2,c_{i}(\tau))\subset C_{i}(\tau)\) _for_ \(1\leq i\leq\frac{\operatorname{rk}(\Lambda)+m(\Lambda)}{8}\)_._
_A lattice \(\mathcal{L}\) is said to be an \(L\)-changemaker lattice if \(\mathcal{L}\cong(\tau)^{\perp}\) for some \(L\)-changemaker \(\tau\)._
We point out that the definition above coincides with Greene's definition of a changemaker if \(L\) is taken to be the trivial lattice, i.e. the lattice of rank \(0\), and that it agrees with the above definition of an \(E_{8}\)-changemaker if \(L\) is taken to be \(E_{8}\).
At present, the only known irreducible integer homology sphere L-spaces with non-negative \(d\)-invariant are \(S^{3}\) and \(\mathcal{P}\). One conceivable strategy for producing a novel irreducible integer homology sphere L-space by way of \(L\)-changemakers would proceed as follows. First, fix a positive definite unimodular lattice \(L\). Next, one may work out the specific combinatorics of the \(L\)-changemaker condition, as we do here for \(L\cong E_{8}\). One may then attempt to identify a linear lattice \(\Lambda(p,q)\) which embeds in \(-L\oplus-\mathbb{Z}^{n}\) as the orthogonal complement to an \(L\)-changemaker \(\tau\). If there is such a linear lattice, it is conceivable that there is an integer homology sphere L-space \(Y\) which bounds a \(4\)-manifold \(Z\) with \(Q_{Z}\cong-L\oplus-\mathbb{Z}^{n}\) obtained as surgery on a knot in the lens space \(L(p,q)\). The data of the embedding \((\tau)\oplus\Lambda\hookrightarrow-L\oplus-\mathbb{Z}^{n}\) has the potential to produce \(Y\) as in the following example.
The author discovered the \(E_{8}\)-changemaker embedding \(\Lambda(2,1)\oplus\Lambda(27,16)\hookrightarrow-E_{8}\) prior to discovering the knot \(C\); in fact, the explicit embedding of \(\Lambda(2,1)\oplus\Lambda(27,16)\) in \(-E_{8}\) as the sublattice generated by \(\mathcal{S}=\{e_{3},e_{2},e_{1}-e_{4},e_{5},e_{6},e_{7},e_{8}\}\) was used together with the fact that \(E_{8}\) is generated by \(\mathcal{S}\cup\{e_{4}\}\) to produce a Kirby diagram of \(\mathcal{P}\) realizing \(\mathcal{P}\) as surgery on a knot in \(L(2,1)\#L(27,16)\) (see Figure 3).
It is conceivable that there are negative-definite unimodular lattices \(L\) such that no \(L\)-changemaker lattice is isomorphic to a sum of linear lattices. In fact, if one can show that no \(L\)-changemaker lattice is isomorphic to a sum of linear lattices for all lattices \(L\) with \(\frac{\operatorname{rk}(L)+m(L)}{4}=d\) then one will have shown that no integer homology sphere L-space \(Y\) with \(d(Y)=d\) may be obtained by surgery on a knot \(K\) in a lens space \(L(p,q)\) with \(p\geq 2g(K)\). We issue the following conjecture, in the spirit of [22, Conjecture 1].
**Conjecture 1.21**.: _If \(L\) is a definite unimodular lattice and \(\Lambda(p,q)\) is an \(L\)-changemaker lattice, then \(L(p,q)\) is realized by integer surgery on a knot enumerated by Berge or Tange. In particular, \(L\) is the trivial lattice or \(L\cong-E_{8}\)._
Figure 3. A sequence of blow-ups and blow-downs takes this Kirby diagram to the negative definite \(E_{8}\) plumbing. The black unknots correspond to the elements of \(\mathcal{S}\), while the blue unknot corresponds to the simple root \(e_{4}\).
On the other hand, one may wonder if some appropriate generalization of the changemaker condition may shed light on surgeries from lens spaces to a non-L-space integer homology sphere \(Y\). Every Heegaard genus two integer homology sphere admits a lens space surgery by the doubly primitive construction--might the bootstrapping together of Floer homology and lattice embeddings entirely capture these realization problems as it has for \(S^{3}\) and \(\mathcal{P}\)? Since \(Y\) is not an L-space, a reexamination of the input from the surgery exact triangle in Heegaard Floer homology that leads to the \(L\)-changemaker definition is in order, as the following example demonstrates.
Consider the lens space \(L(46,15)\), which admits a surgery description in terms of a linear chain of \(15\) unknots, all of which are \(-2\)-framed except for, say, the first one, which bears a \(-4\)-framing. Then, \(-2\)-surgery with respect to the blackboard framing on the meridian of the third unknot produces a Seifert-fibered integer homology sphere \(Y\) that is not an L-space and that bounds a \(4\)-manifold whose intersection form is the even unimodular lattice of rank \(16\) called \(\Gamma_{16}\). We assert, without supplying proof here, that the obvious embedding \(\Lambda(46,15)\hookrightarrow\Gamma_{16}\) implicit in the surgery diagram is not a \(\Gamma_{16}\)-changemaker embedding. It is important to note that this does not rule out the existence of a \(\Gamma_{16}\)-changemaker embedding of \(\Lambda(46,15)\), though the author doubts that one exists.
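The chain description of \(L(46,15)\) used above is easy to confirm by evaluating the corresponding negative continued fraction; a quick check (ours):

```python
from fractions import Fraction

def evaluate_neg_cf(coeffs):
    """[a_1, ..., a_n]^- = a_1 - 1/(a_2 - 1/( ... - 1/a_n))."""
    value = Fraction(coeffs[-1])
    for a in reversed(coeffs[:-1]):
        value = a - 1 / value
    return value

# A chain of 15 unknots, the first framed -4 and the remaining fourteen framed -2.
print(evaluate_neg_cf([4] + [2] * 14))   # 46/15
```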
### Organization
In Section 2, we recall preliminary observations about lattices recorded in Section 3 of [11]. In Section 3, we recall input from Floer homology begun in [5] to address \((2g-1)\)-surgeries, and bring it to bear on \(p\geq 2g\)-surgeries, culminating in the notion of an \(E_{8}\)-changemaker. In the first part of Section 4, we discuss some essentials required to take a hands-on approach to working with the \(E_{8}\) lattice that are more familiar to number theorists and lattice theorists, as communicated to the author by Daniel Allcock and Richard Borcherds, and that the author suspects are less familiar to low-dimensional topologists. In the second part of Section 4, we develop the input from Floer homology in Section 3 in order to derive combinatorial constraints on \(E_{8}\)-changemakers and implement our lattice embedding obstruction. At the end of Section 4, we record some basic observations about linear lattices that are \(E_{8}\)-changemaker lattices. In Section 5, we implement the \(E_{8}\)-changemaker obstruction to characterize when a linear lattice is realized by the orthogonal complement to an \(E_{8}\)-changemaker in \(-E_{8}\oplus-\mathbb{Z}^{n+1}\) with \(n\geq 2\). In Section 6, we explicitly identify each of the thirty-eight families of indecomposable linear \(E_{8}\)-changemaker lattices identified in Section 5 with a Tange family, identify each of the six families of decomposable linear \(E_{8}\)-changemaker lattices identified in section 5 with a family of surgeries on cables of an exceptional fiber, and prove the main theorems.
For readers with some familiarity with changemaker lattices, we offer a brief summary of the strategy we employ to obstruct linear lattices from admitting \(E_{8}\)-changemaker lattice embeddings. First, as in [11], we take the perspective of an \(E_{8}\)-changemaker \(\tau=(s,\sigma)\). In Section 4, we prove that the orthogonal complement to \(\tau\) admits a _standard basis_ of irreducible vectors that are either _tight_, _gappy_, or _just right_, in much the same way that the orthogonal complement to a changemaker does, and that contains the standard changemaker basis of \((\sigma)^{\perp}\subset\mathbb{Z}^{n+1}\). Greene's analysis of changemakers whose orthogonal complements are linear lattices progresses by analyzing first the case when every standard basis element is just right, then the case when there is a gappy vector but no tight vector, and finally the case when there is a tight vector. The result of his analysis showed that the collection of changemaker
lattices whose associated _intersection graphs_ contained no _claws_, _heavy triples_, or _incomplete cycles_ corresponds precisely to the set of orthogonal sums of linear lattices coming from connected sums of lens spaces realized either by lens space surgery on a Berge knot or a reducible surgery on a torus knot or a cable thereof. Our analysis of \(E_{8}\)-changemakers whose orthogonal complements are sums of linear lattices thus proceeds by first conditioning on whether the changemaker basis of \((\sigma)^{\perp}\) contains a gappy or tight vector, deducing characteristics of such bases whose intersection graphs contain no claw, heavy triples, or incomplete cycles, then elucidating how such a changemaker basis may be extended to an \(E_{8}\)-_changemaker basis_ without introducing any claws, heavy triples, or incomplete cycles. We comment further on our strategy at the beginning of Section 5.
### Acknowledgments
The author wishes to thank Daniel Allcock, Ken Baker, John Baldwin, Richard Borcherds, Steve Boyer, Josh Greene, Cameron Gordon, Yi Ni, and Motoo Tange for insightful conversations, valuable encouragement, and probing questions. The author would also like to thank the mathematics graduate student community at Boston College for fostering a warm and welcoming atmosphere, which provided an invaluable reservoir of morale.
## 2. Lattices.
Here we provide an executive summary of Section 3 of [11].
### Generalities
Let \(V\) be a Euclidean vector space of dimension \(n\), and let \(\mathcal{V}\) be an orthonormal basis of \(V\). A _lattice_\(L\) is the \(\mathbb{Z}\)-module of integer linear combinations of elements of a basis \(\mathcal{B}\) for \(V\), and we say \(\mathrm{rk}L=n\). The inner product \(\langle-,-\rangle:V\otimes V\to\mathbb{R}\) restricts to a positive definite, symmetric, bilinear form on \(L\); \(L\) is said to be an _integer lattice_ if the image of this restricted pairing is contained in \(\mathbb{Z}\). Equivalently, letting \(B\) be the \(n\times n\) matrix whose columns are the vectors in \(\mathcal{B}\) with respect to the coordinates \(\mathcal{V}\), \(L\) is an integer lattice if every entry in the matrix \(A=B\cdot B^{T}\) is an integer. In this case, we define the _dual_ of \(L\),
\[L^{*}:=\{x\in V|\ \langle x,y\rangle\in\mathbb{Z}\text{ for all }y\in L\},\]
and the discriminant of \(L\), \(\mathrm{disc}(L)\), is the index \([L^{*}:L]\). Note here that the restriction of the inner product of \(V\) to \(L^{*}\) is given, with respect to the implicit basis \(\mathcal{B}^{*}\) of \(\mathrm{Hom}(V,\mathbb{R})\cong V\) dual to \(\mathcal{B}\), by the matrix \(A^{-1}\), thus the vectors in \(V\) expressed by the columns of \(A^{-1}\) with respect to the basis \(\mathcal{V}\) form a \(\mathbb{Z}\)-basis for \(L^{*}\). If now \(|\det A|=1\), in which case we say that \(L\) is _unimodular_, then \(A\in\mathrm{SL}(n,\mathbb{Z})\), hence \(L^{*}\) is an integer lattice. It follows that \(L=L^{*}\) and that \(A^{-1}:L\to L^{*}\) encodes this isomorphism with respect to the bases \(\mathcal{B}\) of \(L\) and \(\mathcal{V}^{*}=\mathcal{V}\). Henceforth, we work only with integer lattices.
The _norm_ of a vector \(v\in L\) is \(|v|:=\langle v,v\rangle\). A vector \(v\) is said to be _reducible_ if it can be written as the sum of two non-zero vectors \(x\), \(y\in L\) with \(\langle x,y\rangle\geq 0\). A vector is _irreducible_ if it is not reducible. A vector \(v\) is _breakable_ if \(v=x+y\) for some \(x,y\in L\) with \(|x|,|y|\geq 3\) and \(\langle x,y\rangle=-1\). A vector is _unbreakable_ if it is not breakable. A lattice \(L\) is _decomposable_ if it can be written as an orthogonal direct sum \(L=L_{1}\oplus L_{2}\) with \(L_{1},L_{2}\neq(0)\). A lattice is _indecomposable_ if it is not decomposable.
Every integer lattice \(L\) admits a basis \(S=\{v_{1},\ldots,v_{n}\}\) of irreducible vectors. Given such a basis \(S\), we define its _pairing graph_
\[\hat{G}(S)=(S,E),\ \ E=\{(v_{i},v_{j})\ |\ i\neq j\text{ and }\langle v_{i},v_{j} \rangle\neq 0\}.\]
### Linear lattices
Let \(p>q>0\) be relatively prime integers. Note that the fraction \(p/q\) admits a unique Hirzebruch-Jung continued fraction expansion
\[p/q=[a_{1},a_{2},\ldots,a_{n}]^{-}=a_{1}-\frac{1}{a_{2}-\frac{1}{\ddots-\frac{1 }{a_{n}}}}\]
with each \(a_{i}\geq 2\) an integer. The _linear lattice_\(\Lambda(p,q)\) is the integer lattice freely generated by the _vertex basis_\(V=\{x_{1},\ldots,x_{n}\}\), where
\[\langle x_{i},x_{j}\rangle=\begin{cases}a_{i},&\text{ if }i=j;\\ -1,&\text{ if }|i-j|=1\\ 0,&\text{ if }|i-j|>1\end{cases}.\]
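For concreteness, the following Python sketch (ours) computes Hirzebruch-Jung expansions and the determinant of the resulting vertex basis Gram matrix, which equals \(p\); the inputs \(7/6\) and \(27/16\) are the parameters appearing in Theorem 1.2.

```python
def hj_expansion(p, q):
    """Hirzebruch-Jung (negative) continued fraction of p/q, for p > q > 0 coprime."""
    coeffs = []
    while q > 0:
        a = -(-p // q)                 # ceiling of p/q
        coeffs.append(a)
        p, q = q, a * q - p
    return coeffs

def linear_lattice_determinant(coeffs):
    """Determinant of the tridiagonal Gram matrix with diagonal a_i and
    off-diagonal entries -1 (the positive definite convention); it equals p."""
    d_prev, d = 1, coeffs[0]
    for a in coeffs[1:]:
        d_prev, d = d, a * d - d_prev
    return d

for p, q in [(7, 6), (27, 16)]:
    coeffs = hj_expansion(p, q)
    print(f"{p}/{q} = {coeffs},  det = {linear_lattice_determinant(coeffs)}")
```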
Notably, the linear lattices \(\Lambda(p,q)\) and \(\Lambda(p^{\prime},q^{\prime})\) are isomorphic if and only if there exists an orientation preserving homeomorphism \(L(p,q)\cong L(p^{\prime},q^{\prime})\).
**Proposition 2.1** ([8]).: _If \(\Lambda(p,q)\cong\Lambda(p^{\prime},q^{\prime})\), then \(p=p^{\prime}\) and \(q=q^{\prime}\) or \(qq^{\prime}\equiv 1(\operatorname{mod}\,p)\). _
An interval \(T\) in a linear lattice \(L\) is a (possibly singleton) set of consecutive vertices \(\{x_{i},x_{i+1},\ldots,x_{j}\}\) in the pairing graph of the vertex basis \(V\) for \(L\). Let \([T]\in L\) denote the vector \(x_{i}+\ldots+x_{j}\). Let \(T=\{x_{i},\ldots,x_{j}\}\) and \(T^{\prime}=\{x_{k},\ldots,x_{l}\}\) be distinct intervals. We say that \(T\) and \(T^{\prime}\)_share a common endpoint_ if \(i=k\) or \(j=l\) and write \(T\prec T^{\prime}\) if \(T\subset T^{\prime}\). We say that \(T\) and \(T^{\prime}\) are _consecutive_ if \(k=j+1\) or \(i=l+1\), and write \(T\mathord{\uparrow}T^{\prime}\). If \(T\) and \(T^{\prime}\) either share a common endpoint or are consecutive, then we write \(T\sim T^{\prime}\) and say that \(T\) and \(T^{\prime}\) abut. If \(T\) and \(T^{\prime}\) do not abut, then either \(T\cap T^{\prime}=\emptyset\), in which case we say that \(T\) and \(T^{\prime}\) are distant, or \(T\cap T^{\prime}\) is a proper non-empty subset of both \(T\) and \(T^{\prime}\), in which case we write \(T\pitchfork T^{\prime}\). Note that if \(T\pitchfork T^{\prime}\), then the symmetric difference of \(T\) and \(T^{\prime}\) is a union of two distant intervals.
**Proposition 2.2** (Corollary 3.5 of [11]).: _Let \(L=\bigoplus_{k}L_{k}\) denote a sum of linear lattices._
1. _The irreducible vectors in_ \(L\) _take the form_ \(\pm[T]\)_, where_ \(T\) _is an interval in some_ \(L_{k}\)_;_
2. _each_ \(L_{k}\) _is indecomposable;_
3. _if_ \(T\pitchfork T^{\prime}\)_, then_ \([T\setminus T^{\prime}]\pm[T^{\prime}\setminus T]\) _is reducible;_
4. \([T]\) _is unbreakable iff_ \(T\) _contains at most one vertex basis element of norm_ \(\geq 3\)_._ _
## 3. Input from Floer homology.
In this section, we will use input from Floer homology to begin to define the \(E_{8}\)-changemaker property.
### Generalizing the notion of a changemaker
While there are by now many flavors of Heegaard Floer homology, in this work we invoke only some of the original theory's most basic properties. Recall that to a rational homology \(3\)-sphere \(Y\) equipped with a \(\operatorname{spin^{c}}\) structure \(\mathfrak{t}\), Ozsvath-Szabo associated a non-trivial \(\mathbb{F}_{2}\)-vector space \(\widehat{HF}(Y,\mathfrak{t})\) and a numerical invariant \(d(Y,\mathfrak{t})\in\mathbb{Q}\), called a _correction term_. We denote by \(\widehat{HF}(Y)\) the direct sum
\[\bigoplus_{\mathfrak{t}\in\operatorname{Spin^{c}}(Y)}\widehat{HF}(Y, \mathfrak{t}),\]
and we observe that \(\dim\widehat{HF}(Y)\geq|\operatorname{Spin^{c}}(Y)|=|H^{2}(Y)|\). We say that \(Y\) is an _L-space_ if \(\widehat{HF}(Y)\) has minimal rank, i.e. \(\dim\widehat{HF}(Y)=|H^{2}(Y)|\), in which case \(\dim\widehat{HF}(Y,\mathfrak{t})=1\) for all \(\mathfrak{t}\in\operatorname{Spin^{c}}(Y)\).
To a \(4\)-dimensional cobordism \(W:Y_{0}\to Y_{1}\) equipped with a \(\operatorname{spin^{c}}\) structure \(\mathfrak{s}\) restricting to \(\mathfrak{t}_{0}\) on \(Y_{0}\) and \(\mathfrak{t}_{1}\) on \(Y_{1}\), the theory associates a homomorphism \(\widehat{F}_{W;\mathfrak{s}}:\widehat{HF}(Y_{0},\mathfrak{t}_{0})\to\widehat{HF}(Y_{1},\mathfrak{t}_{1})\). Ozsvath-Szabo showed that if \(W\) is negative-definite, then
\[4d(Y_{0},\mathfrak{t}_{0})+c_{1}(\mathfrak{s})^{2}+b_{2}(W)\leq 4d(Y_{1}, \mathfrak{t}_{1}). \tag{7}\]
Note that if \(Y\) bounds a negative definite \(4\)-manifold \(X\), we may take \(W:=X\setminus B^{4}\) to be a negative definite cobordism from \(S^{3}\) to \(Y\) and obtain, upon noting that \(d(S^{3},\mathfrak{t})=0\) for the unique \(\mathfrak{t}\in\operatorname{Spin^{c}}(S^{3})\), that
\[c_{1}(\mathfrak{s})^{2}+b_{2}(X)\leq 4d(Y,\mathfrak{t}), \tag{8}\]
for every \(\mathfrak{s}\in\operatorname{Spin^{c}}(X)\) extending \(\mathfrak{t}\in\operatorname{Spin^{c}}(Y)\).
**Definition 3.1**.: _A negative definite cobordism \(W:Y_{0}\to Y_{1}\) is sharp if, for every \(\mathfrak{t}_{0}\in\operatorname{\it Spin^{c}}(Y_{0})\) and \(\mathfrak{t}_{1}\in\operatorname{\it Spin^{c}}(Y_{1})\), there exists some \(\mathfrak{s}\in\operatorname{\it Spin^{c}}(W)\) extending \(\mathfrak{t}_{0}\) and \(\mathfrak{t}_{1}\) and attaining equality in the bound (7). Furthermore, a negative definite \(4\)-manifold \(X\) with connected boundary is said to be sharp if \(W:=X\setminus B^{4}\) is._
The atomic sharp \(4\)-manifold for us at present is the manifold \(X(p,q)\) with boundary \(L(p,q)\), which we now construct. Take the Hirzebruch-Jung continued fraction expansion of the reduced improper fraction \(p/q=[a_{1},\dots,a_{n}]^{-}\); \(X(p,q)\) is the \(4\)-manifold prescribed by the Kirby diagram in Figure 4. Observe that the intersection form \(Q_{X(p,q)}\), which is presented by the linking matrix of the Kirby diagram in Figure 4, is isomorphic to the linear lattice \(\Lambda(p,q)\). Ozsvath-Szabo showed that \(X(p,q)\) is sharp, hence the boundary connected sum \(\natural_{i=1}^{n}X(p_{i},q_{i})\), whose intersection form is the orthogonal sum \(\oplus_{i=1}^{n}\Lambda(p_{i},q_{i})\), is also sharp.
Figure 4. The canonical sharp \(4\)-manifold with boundary \(L(p,q)\), \(X(p,q)\).
Now, let \(X\) be a negative definite 4-manifold with \(H_{1}(X)\) torsion-free and \(b_{2}(X)=n\), and suppose that \(K(p)\cong\partial X\) for some knot \(K\) in an integer homology sphere L-space \(Y\), e.g. \(\mathcal{P}\), and some integer \(p\geq 1\). Form the negative definite 4-manifold \(Z\) with \(\partial Z\cong Y\) by identifying the orientation reversal of the trace cobordism of \(p\)-surgery on \(K\), which we call \(W\), and \(X\) along their common boundary component \(K(p)\cong\partial X\). Let \(\tau\in Q_{Z}\) denote the class of the generator of \(Q_{W}\), i.e. a Seifert surface of \(K\) in \(Y\) capped off with the core of the 2-handle in \(W\), under the image of inclusion into \(Q_{Z}\).
In [5], the author showed the following.
**Lemma 3.2**.: _Let \(K\) denote an L-space knot in an L-space integer homology sphere \(Y\), and suppose that \(K(p)\) bounds a smooth, negative definite 4-manifold \(X\) with \(H_{1}(X)\) torsion-free. Then_
\[\mathfrak{c}^{2}+(n+1)-4d(Y)\leq-8t_{i}(K) \tag{9}\]
_for all \(|i|\leq p/2\) and \(\mathfrak{c}\in\text{Char}(Q_{Z})\) such that \(\langle\mathfrak{c},\tau\rangle+p\equiv 2i\ \mathrm{mod}\ 2p\). Furthermore, if \(X\) is sharp, then for every \(|i|\leq p/2\) there exists \(\mathfrak{c}\) attaining equality in (9). _
Recall also the following theorem of Scaduto, anticipated by Froyshov ([7]).
**Theorem 3.3** (Corollary 1.4 of [23]).: _If \(Z\) is a negative definite 4-manifold with no 2-torsion in its homology and \(\partial Z=\mathcal{P}\), then \(Q_{Z}\cong-\mathbb{Z}^{n}\ (n\geq 1)\) or \(Q_{Z}\cong-E_{8}\oplus-\mathbb{Z}^{n}\ (n\geq 0)\). _
Supposing now that \(Y=\mathcal{P}\), that \(p\geq 2g(K)\), and that \(X\) is sharp, then the intersection form of the 4-manifold \(Z=W\cup X\) is either \(-\mathbb{Z}^{n+1}\) or \(-E_{8}\oplus-\mathbb{Z}^{n-7}\). Then, by (9) and the observation that \(t_{i}(K)=0\) if and only if \(i\geq g(K)\), it follows that \(Q_{Z}\not\cong-\mathbb{Z}^{n+1}\) since there is no vector in \(\text{Char}(-\mathbb{Z}^{n+1})\) attaining the equality \(\mathfrak{c}^{2}=-n+7\). We conclude, then, that \(Q_{Z}\cong-E_{8}\oplus-\mathbb{Z}^{n-7}\). We write \(\tau=(s,\sigma)\in-E_{8}\oplus-\mathbb{Z}^{n-7}\) for the image of the generator of \(Q_{W}\) under inclusion into \(H_{2}(Z)\) where \(s\in-E_{8}\) and \(\sigma\in-\mathbb{Z}^{n-7}\). Noting that \(\text{short}(-E_{8}\oplus-\mathbb{Z}^{n-7})=\{0\}\oplus\{\pm 1\}^{n-7}\), and that \(|\langle\mathfrak{c},\sigma\rangle|\leq|\sigma|_{1}\leq|\sigma|\leq p\), the sharpness of \(X\) implies the equality of the following two sets
\[\{\langle(0,\mathfrak{c}^{\prime}),(s,\sigma)\rangle\,:\ \mathfrak{c}^{\prime}\in\{\pm 1\}^{n-7}\}=PI(2g(K)-p,\,p-2g(K)). \tag{10}\]
It follows from (10) that \(\sigma\), after a suitable isometry of \(-E_{8}\oplus-\mathbb{Z}^{n-7}\) putting \(\sigma\) in the first orthant of \(-\mathbb{Z}^{n-7}\), is a changemaker, and that \(2g(K)=p-c(\tau)=p-|\sigma|_{1}\).
We may deduce restrictions on \(s\) by considering (9) for characteristic vectors in the set \(\text{Short}(-E_{8}\oplus-\mathbb{Z}^{n-7})\). We assert the following lemma now, but postpone its proof until after we have laid some groundwork for working in the \(E_{8}\) lattice in the next section.
**Lemma 3.4**.: _If \(\tau=(s,\sigma)\) arises as above, and \(\langle s,s\rangle\leq-4\), then \(|\langle\mathfrak{c},\tau\rangle|\leq p\) for all \(\mathfrak{c}\in\text{Short}(-E_{8}\oplus-\mathbb{Z}^{n-7})\). Moreover, for any \(\mathfrak{c}\in\text{Short}(-E_{8}\oplus-\mathbb{Z}^{n-7})\) and \(|i|\leq p/2\), if \(\langle\mathfrak{c},\tau\rangle+p\equiv 2i\ \mathrm{mod}\ 2p\), then \(\langle\mathfrak{c},\tau\rangle+p=2i\)._
**Remark 3.5**.: _If \(\langle s,s\rangle\geq-2\), then either \(s=0\) or \(\langle s,s\rangle=-2\). We will see upon explicit inspection in Section 4, after we have a handle on working with \(E_{8}\), that \(\tau\) is in fact an \(E_{8}\)-changemaker regardless of whether or not \(\langle s,s\rangle\leq-4\)._
Let \(g^{(1)}(K)=\min\{i\geq 0\,:\ t_{i}(K)=1\}\). Then, assuming \(X\) is sharp, Lemmas 3.2 and 3.4 together imply that if \(\langle s,s\rangle\leq-4\), then \(2g^{(1)}=p-C(\tau)\), and
\[PI(-C(\tau),-c(\tau)-2)\subset\{\langle\mathfrak{c},\tau\rangle\colon \mathfrak{c}\in\operatorname{Short}(-E_{8}\oplus-\mathbb{Z}^{n-7})\}, \tag{11}\]
i.e. for any \(j\) in the parity interval \(PI(c(\tau)+2,C(\tau))\), there is some \(\mathfrak{c}\in\operatorname{Char}(-E_{8}\oplus-\mathbb{Z}^{n-7})\) with norm \(-(n+1)\) such that \(\langle\mathfrak{c},\tau\rangle=j\).
In summary, we arrive at the definition of an \(E_{8}\)-changemaker as in Definition 1.16, and obtain the following proposition.
**Proposition 3.6**.: _For \(K\subset\mathcal{P}\) an L-space knot and \(p\geq 2g(K)\), if \(K(p)\) bounds a sharp \(4\)-manifold \(X\) with \(H_{1}(X)\) torsion-free, then \(Q_{X}\) embeds in the orthogonal complement to an \(E_{8}\)-changemaker \(\tau\)._
Proof.: Let \(\tau=(s,\sigma)\in-E_{8}\oplus-\mathbb{Z}^{n+1}\), and let \(\{d_{0},\ldots,d_{n}\}\) be an orthonormal basis for \(-\mathbb{Z}^{n+1}\).
The discussion following Lemma 3.4 proves the proposition in the case that \(\langle s,s\rangle\leq-4\).
Note that if \(s=0\), then
\[\{\langle\mathfrak{c},\tau\rangle\colon\mathfrak{c}\in\operatorname{Short}(-E_{8}\oplus-\mathbb{Z}^{n+1})\}=\{\langle\mathfrak{c},\sigma\rangle\colon\mathfrak{c}\in\operatorname{Short}(-\mathbb{Z}^{n+1})\}.\]
Observe that
\[\max\{\langle\mathfrak{c},\sigma\rangle\colon\mathfrak{c}\in\operatorname{Short}(-\mathbb{Z}^{n+1})\}=|\sigma|_{1}+2\sigma_{n}.\]
We will now show that \(PI(|\sigma|_{1}+2,|\sigma|_{1}+2\sigma_{n})\subset\{\langle\mathfrak{c},\sigma\rangle\colon\mathfrak{c}\in\operatorname{Short}(-\mathbb{Z}^{n+1})\}\). Let \(\sigma^{\prime}=(\sigma_{0},\ldots,\sigma_{n-1})\in-\mathbb{Z}^{n}\), and note that \(\sigma^{\prime}\) is a changemaker, thus, for every \(0\leq k\leq|\sigma^{\prime}|_{1}\), there is some \(A\subset\{0,\ldots,n-1\}\) such that \(\sum_{i\in A}\sigma_{i}=k\). Therefore, \(PI(3\sigma_{n}-|\sigma^{\prime}|_{1},3\sigma_{n}+|\sigma^{\prime}|_{1})\subset\{\langle\mathfrak{c},\sigma\rangle\colon\mathfrak{c}\in\operatorname{Short}(-\mathbb{Z}^{n+1})\}\).
\[3\sigma_{n}-|\sigma^{\prime}|_{1} \leq\sigma_{n}+|\sigma|_{1}+1-|\sigma^{\prime}|_{1}\] \[\leq\sigma_{n}-(\sigma_{0}+\ldots+\sigma_{n-1})+|\sigma|_{1}+1\] \[\leq 1+|\sigma|_{1}+1,\]
and the proof in the case that \(s=0\) is complete.
We complete the proof of the proposition in Proposition 4.2 once we have a handle on working in \(E_{8}\).
In the front matter, we have chosen to adhere to the convention that the lens space \(L(p,q)\) is \(-p/q\)-surgery on the unknot in \(S^{3}\). This convention precipitates a preference for working with negative-definite \(4\)-manifolds. However, as the arguments henceforth are primarily lattice-theoretic and combinatorial in nature, we ask the reader to keep in mind that the analysis on the negative-definite \(4\)-manifolds arising from our topological considerations may be carried out on the corresponding positive-definite lattices by means of reversing the orientation of every \(4\)-manifold we have constructed so far. In what follows, every lattice will be positive definite, and in particular we will abuse notation and write \(\Lambda(p,q)\) to refer to the intersection form of \(-X(p,q)\).
## 4. Working in the \(E_{8}\) lattice.
The \(E_{8}\) integer lattice is the unique even, unimodular lattice of rank \(8\). It is realized by restricting the dot product on \(\mathbb{R}^{8}\) to the subset
\[\Big\{x\in\mathbb{Z}^{8}\cup\Big(\mathbb{Z}+\frac{1}{2}\Big)^{8}\ \Big|\ \sum_{i=1}^{8}x_{i}\equiv 0\text{ mod }2\Big\}. \tag{12}\]
It admits the following Gram matrix with respect to the basis specified in Figure 5, which we fix as the preferred basis throughout this section:
\[A=\begin{bmatrix}2&-1&-1&0&-1&0&0&0\\ -1&2&0&0&0&0&0&0\\ -1&0&2&-1&0&0&0&0\\ 0&0&-1&2&0&0&0&0\\ -1&0&0&0&2&-1&0&0\\ 0&0&0&0&-1&2&-1&0\\ 0&0&0&0&0&-1&2&-1\\ 0&0&0&0&0&0&-1&2\end{bmatrix}.\]
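A quick computational check of the coordinate model (12) is given below (ours): it enumerates the \(112\) integer roots and \(128\) half-integer roots and verifies that each has norm \(2\) and even coordinate sum.

```python
from fractions import Fraction
from itertools import combinations, product

def e8_roots():
    """The vectors of norm 2 in the coordinate model (12)."""
    roots = []
    # Integer roots: two entries equal to +-1, all other entries 0.
    for i, j in combinations(range(8), 2):
        for si, sj in product((1, -1), repeat=2):
            v = [Fraction(0)] * 8
            v[i], v[j] = Fraction(si), Fraction(sj)
            roots.append(tuple(v))
    # Half-integer roots: every entry +-1/2, with an even number of -1/2's.
    for signs in product((1, -1), repeat=8):
        if signs.count(-1) % 2 == 0:
            roots.append(tuple(Fraction(s, 2) for s in signs))
    return roots

roots = e8_roots()
print(len(roots))                                        # 240 = 112 + 128
assert all(sum(x * x for x in r) == 2 for r in roots)    # every root has norm 2
assert all(sum(r) % 2 == 0 for r in roots)               # the parity condition in (12)
```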
Many low-dimensional topologists are at least familiar with the \(E_{8}\) lattice, though we suspect many are unfamiliar with performing explicit computations in this lattice. In what follows, we outline a perspective on \(E_{8}\) that significantly eases our computational load. We are grateful to Daniel Allcock and Richard Borcherds for sharing the following perspective.
### The fundamental Weyl chamber of \(E_{8}\)
A _root_ is any vector \(v\in E_{8}\) with \(\langle v,v\rangle=2\), of which \(E_{8}\) is known to have \(240\). We refer to the basis elements specified in Figure 5 as _simple roots_. Let \(R\) be one of the \(240\) roots in \(E_{8}\) and write \(R=\sum_{i=1}^{8}a_{i}e_{i}\). Then either \(a_{i}\geq 0\) for all \(i\in\{1,\dots,8\}\) or \(a_{i}\leq 0\) for all \(i\in\{1,\dots,8\}\), and \(R\) is said to be a _positive root_ or _negative root_ accordingly. Denote by \(\mathcal{R}_{+}\) the set of positive roots. The set of positive roots admits a partial order: \(\sum_{i=1}^{8}a_{i}e_{i}\leq\sum_{i=1}^{8}b_{i}e_{i}\) if \(a_{i}\leq b_{i}\) for all \(i\in\{1,\dots,8\}\).
For the standard embedding \(E_{8}\subset\mathbb{R}^{8}\) specified in (12), the _Weyl group_ of \(E_{8}\) is the group of linear transformations of \(\mathbb{R}^{8}\) generated by the reflections through each of the \(120\) hyperplanes in \(\mathbb{R}^{8}\) orthogonal to one of the \(120\) positive roots in \(E_{8}\). The _fundamental Weyl chamber_--a fundamental domain for the action of the Weyl group on \(E_{8}\)--is the subset
\[\mathcal{C}=\{v\in E_{8}|\langle v,e_{i}\rangle\geq 0\text{ for all }1\leq i\leq 8\}.\]
Figure 5. A basis of simple roots for the \(E_{8}\) lattice.
As \(E_{8}\) is unimodular, there is a natural isometry \(E_{8}\stackrel{\sim}{\to}E_{8}^{*}:=\{v^{*}\in\operatorname{Hom}(\mathbb{R}^{8},\mathbb{R})\colon v^{*}(x)\in\mathbb{Z}\text{ for all }x\in E_{8}\}\) given by \(v\mapsto\langle v,-\rangle\). Let \(\{e_{1}^{*},\ldots,e_{8}^{*}\}\) denote the basis of \(E_{8}^{*}\) such that \(e_{i}^{*}(e_{j})=\delta_{ij}\), and for a vector \(v\in E_{8}\) let \(v_{i}^{*}=\langle v,e_{i}\rangle\) for \(i\in\{1,\ldots,8\}\). Then, \(v^{*}=\sum_{i=1}^{8}v_{i}^{*}e_{i}^{*}\). A vector \(s\in E_{8}\) is in \(\mathcal{C}\) if and only if \(s_{i}^{*}\geq 0\) for all \(i\in\{1,\ldots,8\}\).
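In coordinates this membership test is completely explicit: if \(s\) is recorded by its coefficient vector in the simple root basis, then \(s_{i}^{*}=\langle s,e_{i}\rangle\) is the \(i\)-th entry of \(As\). A short sketch (ours; the vector \((6,3,4,2,5,4,3,2)\) is our labelling of the highest root for the ordering in Figure 5, and the code itself verifies that it is a root lying in \(\mathcal{C}\)):

```python
A = [[2, -1, -1, 0, -1, 0, 0, 0],
     [-1, 2, 0, 0, 0, 0, 0, 0],
     [-1, 0, 2, -1, 0, 0, 0, 0],
     [0, 0, -1, 2, 0, 0, 0, 0],
     [-1, 0, 0, 0, 2, -1, 0, 0],
     [0, 0, 0, 0, -1, 2, -1, 0],
     [0, 0, 0, 0, 0, -1, 2, -1],
     [0, 0, 0, 0, 0, 0, -1, 2]]

def dual_coordinates(s):
    """For s given by its coefficients in the simple root basis,
    s_i^* = <s, e_i> is the i-th entry of A s."""
    return [sum(A[i][j] * s[j] for j in range(8)) for i in range(8)]

def in_fundamental_chamber(s):
    return all(x >= 0 for x in dual_coordinates(s))

def norm(s):
    return sum(si * ti for si, ti in zip(s, dual_coordinates(s)))

print(in_fundamental_chamber([1, 0, 0, 0, 0, 0, 0, 0]))   # False: e_1 pairs to -1 with e_2
theta = [6, 3, 4, 2, 5, 4, 3, 2]                          # a root lying in C
print(norm(theta), in_fundamental_chamber(theta))         # 2 True
```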
Recall that if \(p\geq 2g(K)\) and \(K(p)\) is an L-space bounding a sharp \(4\)-manifold \(X\) with \(b_{2}(X)=n\geq 7\) and \(H_{1}(X)\) torsion-free, then there exists a full rank embedding \(Q_{X}\subset(\tau)^{\perp}\) for some \(\tau\in E_{8}\oplus\mathbb{Z}^{n-7}\), and \(\tau\) is an \(E_{8}\)-changemaker if \(\langle s,s\rangle\geq 4\). Let us now fix a possibly empty orthonormal basis \(\{d_{0},\ldots,d_{n-8}\}\) for \(\mathbb{Z}^{n-7}\). By applying an appropriate isometry of \(E_{8}\oplus\mathbb{Z}^{n-7}\), we may arrange that \(\tau=(s,\sigma)\in E_{8}\oplus\mathbb{Z}^{n-7}\) has \(s\in\mathcal{C}\) and \(0\leq\sigma_{0}\leq\ldots\leq\sigma_{n-8}\).
By restricting our attention to the fundamental Weyl chamber, we may readily deduce some combinatorial constraints on \(s\) imposed by (11). The set \(\operatorname{Short}(E_{8}\oplus\mathbb{Z}^{n-7})\) partitions naturally as \(\bigl{(}\operatorname{Short}(E_{8})\oplus\operatorname{short}(\mathbb{Z}^{n-7 })\bigr{)}\coprod\bigl{(}\operatorname{short}(E_{8})\oplus\operatorname{Short} (\mathbb{Z}^{n-7})\bigr{)}\). Explicitly, we have a decomposition of \(\operatorname{Short}(E_{8}\oplus\mathbb{Z}^{n-7})\) into:
\[\mathfrak{C}_{1}:=\operatorname{Short}(E_{8})\oplus\operatorname{short}( \mathbb{Z}^{n-7})=\{(2R,\chi)\,:\,\,|R|=2,\chi\in\{\pm 1\}^{n-7}\},\text{ and}\]
\[\mathfrak{C}_{2}:=\operatorname{short}(E_{8})\oplus\operatorname{Short}( \mathbb{Z}^{n-7})=\{(0,\chi+2\langle\chi,d_{i}\rangle d_{i}):\chi\in\{\pm 1 \}^{n-7}\},\]
or, prosaically, \(\mathfrak{C}_{2}\) is the set of vectors where the \(E_{8}\) coordinates are all \(0\), and all but one of the \(\mathbb{Z}^{n-7}\) coordinates are \(\pm 1\), and the remaining coordinate is \(\pm 3\). Notice that if \(|\sigma|_{1}=0\), then \(2\in\{\langle\mathfrak{c},\tau\rangle\colon\mathfrak{c}\in\operatorname{Short} (E_{8})\}\), thus there is some root \(R\in E_{8}\) with \(\langle R,\tau\rangle=1\).
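With \(\mathfrak{C}_{1}\) and \(\mathfrak{C}_{2}\) enumerated, Definition 1.16 becomes a finite check. The Python sketch below (ours) is meant only to illustrate the combinatorics and is not the search used later in the paper; it assumes \(s\) is given in the coordinate model (12) rather than in the simple root basis, and the two sample inputs are illustrative, not claimed to arise from surgeries.

```python
from fractions import Fraction
from itertools import combinations, product

def e8_roots():
    """The 240 roots of E8 in the coordinate model (12)."""
    ints = [tuple(Fraction(si if k == i else (sj if k == j else 0)) for k in range(8))
            for i, j in combinations(range(8), 2)
            for si in (1, -1) for sj in (1, -1)]
    halves = [tuple(Fraction(s, 2) for s in signs)
              for signs in product((1, -1), repeat=8) if signs.count(-1) % 2 == 0]
    return ints + halves

def parity_interval(a, b):
    return set(range(a, b + 1, 2))

def is_e8_changemaker(s, sigma):
    """Check conditions (i) and (ii) of Definition 1.16 for tau = (s, sigma):
    s is assumed to lie in E8 in the coordinate model (12), and sigma is a
    tuple of non-negative integers.  Illustrative only."""
    m = len(sigma)
    # short = {0} + {+-1}^m, so only sigma contributes to these pairings.
    short_vals = {sum(e * x for e, x in zip(eps, sigma))
                  for eps in product((1, -1), repeat=m)}
    c_tau = max(short_vals)
    if short_vals != parity_interval(-c_tau, c_tau):
        return False
    # Short = (2R, chi) for R a root, together with (0, chi') where chi' has a
    # single entry equal to +-3 and all other entries +-1.
    big_vals = {2 * sum(ri * si for ri, si in zip(R, s)) + v
                for R in e8_roots() for v in short_vals}
    for chi in product((1, -1), repeat=m):
        base = sum(e * x for e, x in zip(chi, sigma))
        for i in range(m):
            big_vals.add(base + 2 * chi[i] * sigma[i])
    C_tau = int(max(big_vals))
    return parity_interval(c_tau + 2, C_tau) <= big_vals

zero = (Fraction(0),) * 8
print(is_e8_changemaker(zero, (1, 1, 2)))    # True
root = (Fraction(1), Fraction(-1)) + (Fraction(0),) * 6
print(is_e8_changemaker(root, (1,)))         # True
```

With \(s=0\) the check reduces to the changemaker condition on \(\sigma\) together with condition (ii), which always holds in that case, as in the proof of Proposition 3.6.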
Proof of Theorem 1.17.: Suppose that \(p\geq 2g(K)\), and that \(K(p)\) bounds a sharp \(4\)-manifold \(X\) with no torsion in \(H_{1}(X)\). Then, by Proposition 3.6, \(Q_{X}\) embeds in the orthogonal complement to some \(E_{8}\)-changemaker \(\tau=(s,\sigma)\in E_{8}\oplus\mathbb{Z}^{n+1}\) (\(n\geq-1\)). If \(|\sigma|_{1}\geq 1\), then \(\sigma_{i}=1\) for some \(0\leq i\leq n\); let \(w=d_{i}\). If \(|\sigma|_{1}=0\), then there is some root \(R\in E_{8}\oplus\{(0)\}\) such that \(\langle R,\tau\rangle=1\); let \(w=R\). Following [11, Lemma 3.6], consider the map \(\varphi:E_{8}\oplus\mathbb{Z}^{n+1}\to\mathbb{Z}/|\tau|\mathbb{Z}\) given by \(\varphi(v)=\langle v,\tau\rangle\text{ mod }|\tau|\), which is onto since \(\langle w,\tau\rangle=1\). Therefore, the lattice \(\mathfrak{K}=\ker(\varphi)\) has discriminant \([E_{8}\oplus\mathbb{Z}^{n+1}\colon\mathfrak{K}]^{2}=|\tau|^{2}\). On the other hand, since \(\mathfrak{K}=(\tau)^{\perp}\oplus(\tau)\), we have that \(\operatorname{disc}(\tau)^{\perp}=\operatorname{disc}(\mathfrak{K})/|\tau|=|\tau|\). It follows \(Q_{X}\cong(\tau)^{\perp}\) since \(Q_{X}\) is a full rank sublattice of \((\tau)^{\perp}\) and \(\operatorname{disc}(Q_{X})=\operatorname{disc}(\tau)^{\perp}=p\).
In order to prove our main theorems, and in keeping with the conventions of the study of changemaker lattices established in [11], we seek to understand exactly when the orthogonal complement to an \(E_{8}\)-changemaker \(\tau=(s,\sigma)\in E_{8}\oplus\mathbb{Z}^{n+1}\) for \(n\geq 2\) is a linear lattice. In the next section, we will see how the partial order on \(\mathcal{R}_{+}\) readily leads to desirable constraints on \(s\) for \(\tau=(s,\sigma)\in E_{8}\oplus\mathbb{Z}^{n+1}\) an \(E_{8}\)-changemaker and informs an algorithm for producing the _standard basis_ of \((\tau)^{\perp}\).
### \(E_{8}\)-changemaker lattices
Ultimately, we are interested in constructing a basis \(\mathcal{S}\) for the orthogonal complement of an \(E_{8}\)-changemaker \(\tau=(s,\sigma)\in E_{8}\oplus\mathbb{Z}^{n+1}\). We have already established that \(\sigma\) is a changemaker with \(0\leq\sigma_{0}\leq\ldots\leq\sigma_{n}\), and in fact \(\sigma_{0}=1\) if \((\tau)^{\perp}\cong\oplus_{i=1}^{k}\Lambda(p_{i},q_{i})\) in light of Theorem 1.17 since no \(\Lambda(p_{i},q_{i})\) contains a \(\mathbb{Z}\) summand. The upshot of the fact that \(\sigma\) is a changemaker is that we may construct a basis for \((\tau)^{\perp}\) that includes the \(n\)_changemaker basis_ elements of \((\sigma)^{\perp}\subset\mathbb{Z}^{n+1}\), which we denote by \(\mathcal{V}\). The task at hand is now to extend this set of vectors to a basis for all of \((\tau)^{\perp}\), which will consist of the vectors in
\(\mathcal{V}\) together with \(8\) additional vectors, each one corresponding to a unique simple root of \(E_{8}\); we denote this set of \(8\) vectors by \(\mathcal{W}\). The following lemma establishes constraints on \(s\) imposed by (11) that allow us to construct \(\mathcal{W}\) favorably.
**Lemma 4.1**.: _Let \(\tau=(s,\sigma)\) be an \(E_{8}\)-changemaker. Then,_
1. \(s_{i}^{*}\leq|\sigma|_{1}+1\) _for_ \(i\in\{1,5,6,7,8\}\)_;_
2. \(s_{2}^{*}\leq|\sigma|_{1}+1\) _or_ \(s_{3}^{*}\leq|\sigma|_{1}+1\)_;_
3. _if_ \(s_{2}^{*}>|\sigma|_{1}+1\)_, then_ 1. \(s_{2}^{*}\leq s_{3}^{*}+s_{4}^{*}+|\sigma|_{1}+1\)_, and_ 2. \(s_{2}^{*}\leq s_{5}^{*}+2s_{6}^{*}+2s_{7}^{*}+s_{8}^{*}+|\sigma|_{1}+1\)_;_
4. _if_ \(s_{3}^{*}>|\sigma|_{1}+1\)_, then_ 1. \(s_{3}^{*}\leq s_{2}^{*}+|\sigma|_{1}+1\)_, and_ 2. \(s_{3}^{*}\leq s_{5}^{*}+s_{6}^{*}+s_{7}^{*}+s_{8}^{*}+|\sigma|_{1}+1\)_;_
5. _if_ \(s_{4}^{*}>|\sigma|_{1}+1\)_, then_ \(s_{4}^{*}\leq s_{2}^{*}+s_{1}^{*}+s_{5}^{*}+s_{6}^{*}+s_{7}^{*}+s_{8}^{*}+|\sigma|_{1}+1\)_;_
6. _and if_ \(s_{2}^{*}>|\sigma|_{1}+1\) _and_ \(s_{4}^{*}>|\sigma|_{1}+1\)_, then_ 1. \(s_{2}^{*}\leq s_{3}^{*}+|\sigma|_{1}+1\)_, and_ 2. \(s_{4}^{*}\leq s_{1}^{*}+s_{5}^{*}+s_{6}^{*}+s_{7}^{*}+s_{8}^{*}+|\sigma|_{1}+1\)_._
Figure 6. A Hasse diagram giving the partial order on the set of positive roots in \(E_{8}\). Nodes are labeled by positive roots according to the table in Figure 7.
Figure 7. The \(120\) positive roots \(R_{1},\ldots,R_{120}\) of \(E_{8}\), expressed in coordinates with respect to the basis of simple roots of Figure 5.
Proof.: For a list of the \(120\) positive roots in \(E_{8}\) expressed in coordinates with respect to the basis of simple roots in Figure 5, see Figure 7. For a visualization of the partial order on \(\mathcal{R}_{+}\), see Figure 6.
Let \(\tau=(s,\sigma)\in E_{8}\oplus\mathbb{Z}^{n+1}\) be an \(E_{8}\) changemaker. Note that for any root \(R=\sum_{i=1}^{8}a_{i}e_{i}\in E_{8}\), \(\langle R,s\rangle=\sum_{i=1}^{8}a_{i}s_{i}^{*}\). Then for any \(\mathfrak{c}=(2R,\chi)\in\mathfrak{C}_{1}\), we have \(\langle\mathfrak{c},\tau\rangle=\sum_{i=1}^{8}a_{i}s_{i}^{*}+\chi\cdot\sigma\), where here \(\cdot\) denotes the standard dot product. In particular, we have
\[\max\{\langle\mathfrak{c},\tau\rangle\,:\ \mathfrak{c}\in\mathfrak{C}_{1}\} =\langle 2R_{120},s\rangle+|\sigma|_{1}\] \[=2(6s_{1}^{*}+3s_{2}^{*}+4s_{3}^{*}+2s_{4}^{*}+5s_{5}^{*}+4s_{6} ^{*}+3s_{7}^{*}+2s_{8}^{*})+|\sigma|_{1}.\]
On the other hand,
\[\max\{\langle\mathfrak{c},\tau\rangle\,:\ \mathfrak{c}\in \mathfrak{C}_{2}\} =|\sigma|_{1}+2\sigma_{n}\] \[\leq 2|\sigma|_{1}+1.\]
Notice that if \(\max\{\langle\mathfrak{c},\tau\rangle\,:\ \mathfrak{c}\in\mathfrak{C}_{2}\} \geq\langle(2R_{44},(1,\ldots,1)),(s,\sigma)\rangle\), then \(2(s_{1}^{*}+\ldots+s_{8}^{*})\leq 2|\sigma|_{1}+1\), in which case \(s_{i}^{*}\leq|\sigma|_{1}+1\) for \(i=1,\ldots 8\) and the lemma follows. We now assume that \(2\sigma_{n}<s_{1}^{*}+\ldots+s_{8}^{*}\).
Since \(\sigma\) is a changemaker, it is clear that for any \(1\leq i\leq 120\), for all \(j\in[\langle 2R_{i},s\rangle-|\sigma|_{1},\langle 2R_{i},s\rangle+|\sigma|_{1}]\) with \(j\equiv\langle\tau,\tau\rangle\bmod 2\) there is some \(\mathfrak{c}\in\mathfrak{C}_{1}\) such that \(\langle\mathfrak{c},\tau\rangle=j\). Therefore, we have
\[\{j\in[|\sigma|_{1}+2,\langle 2R_{120},s\rangle+|\sigma|_{1}]\,:\ j\equiv \langle\tau,\tau\rangle\bmod 2\}\subset\bigcup_{i\geq 44}[\langle 2R_{i},s \rangle-|\sigma|_{1},\langle 2R_{i},s\rangle+|\sigma|_{1}]. \tag{13}\]
Upon consulting the top \(6\) vertices of Figure 6, we see that
\[\langle 2R_{i},s\rangle+|\sigma|_{1}+2\geq\langle 2R_{i+1},s\rangle-|\sigma|_{1} \text{ for }115\leq i\leq 119, \tag{14}\]
which gives us (1) of the lemma.
Part (2) of the lemma follows from observing vertices \(112\), \(114\), and \(115\) in Figure 6, i.e.
\[\langle 2R_{112},s\rangle+|\sigma|_{1}+2\geq\langle 2R_{115},s\rangle-|\sigma|_{1} \text{ or }\langle 2R_{114},s\rangle+|\sigma|_{1}+2\geq\langle 2R_{115},s \rangle-|\sigma|_{1} \tag{15}\]
Part (3)(a) comes from noting that if \(R<R_{113}\), then \(R<R_{112}\), so if \(s_{2}^{*}>|\sigma|_{1}+1\), then
\[\langle 2R_{112},s\rangle+|\sigma|_{1}+2\geq\langle 2R_{113},s\rangle-| \sigma|_{1}. \tag{16}\]
To establish (3)(b), note that \(R_{85}\) is the unique minimum of \(\{R\in\mathcal{R}_{+}\,:\,\,R\not\leq R_{84}\}\), so that if \(s_{2}^{*}>|\sigma|_{1}+1\), then
\[\langle 2R_{84},s\rangle+|\sigma|_{1}+2\geq\langle 2R_{85},s\rangle-|\sigma|_{1}. \tag{17}\]
To see (4)(a), note that if \(s_{3}^{*}>|\sigma|_{1}+1\), then
\[\langle 2R_{114},s\rangle+|\sigma|_{1}+2\geq\langle 2R_{112},s\rangle-|\sigma|_{1}. \tag{18}\]
To see (4)(b), note that \(R_{100}\) is the unique minimum of \(\{R\in\mathcal{R}_{+}\,:\,\,R\not\leq R_{99}\}\), so that if \(s_{3}^{*}>|\sigma|_{1}+1\), then
\[\langle 2R_{99},s\rangle+|\sigma|_{1}+2\geq\langle 2R_{100},s\rangle-|\sigma|_{1}. \tag{19}\]
To see (5), observe that \(R_{105}\) is the unique minimum of \(\{R\in\mathcal{R}_{+}\,:\,\,R\not\leq R_{113}\}\), so in the event that \(s_{4}^{*}>|\sigma|_{1}+1\) it must be that
\[\langle 2R_{113},s\rangle+|\sigma|_{1}+2\geq\langle 2R_{105},s\rangle-|\sigma|_{1}. \tag{20}\]
To see (6)(a), note that if both \(s_{2}^{*}>s_{3}^{*}+|\sigma|_{1}+1\) and \(s_{4}^{*}>|\sigma|_{1}+1\), then \(\langle 2R_{114},s\rangle-|\sigma|_{1}-2\not\in\{\langle\mathfrak{c},\tau\rangle \,:\,\,\mathfrak{c}\in\mathfrak{C}_{1}\}\). Therefore if \(s_{2}^{*}\) and \(s_{4}^{*}\) are both greater than \(|\sigma|_{1}+1\), it must be that \(s_{2}^{*}\leq s_{3}^{*}+|\sigma|_{1}+1\). To see (6)(b), note that \(R_{105}\) is the unique minimum of \(\{R\in\mathcal{R}_{+}\,:\,\,R\not\leq R_{110}\}\), so that if \(s_{2}^{*}>|\sigma|_{1}+1\) and \(s_{4}^{*}>|\sigma|_{1}+1\), then
\[\langle 2R_{110},s\rangle+|\sigma|_{1}+2\geq\langle 2R_{105},s\rangle-|\sigma|_{1}. \tag{21}\]
Proof of Lemma 3.4.: Let \(\tau=(s,\sigma)\in E_{8}\oplus\mathbb{Z}^{n+1}\), with \(\sigma\) a changemaker and \(|\tau|=p\geq|\sigma|+4\). We may arrange that \(s_{i}^{*}\geq 0\) for all \(i\in\{1,\ldots,8\}\) by applying an isometry of \(E_{8}\) that places \(s\) in the fundamental Weyl chamber of \(E_{8}\). If \(C(\tau)=|\sigma|_{1}+2\sigma_{n}\) and \(C(\tau)>p\), then \(|\sigma|_{1}+2\sigma_{n}>|\sigma|\), so either \(\sigma=(0,\ldots,0,1,\ldots,1)\) and \(s=0\), or \(\sigma=(0,\ldots,0,1,\ldots,1,2)\) and \(s^{*}=(0,0,0,0,0,0,0,1)\), in which case \(|s|=2\). We may therefore assume that
\[C(\tau)=12s_{1}^{*}+6s_{2}^{*}+8s_{3}^{*}+4s_{4}^{*}+10s_{5}^{*}+8s_{6}^{*}+6 s_{7}^{*}+4s_{8}^{*}+|\sigma|_{1}.\]
Upon consulting the diagonal entries of the matrix
\[A^{-1}=\begin{bmatrix}30&15&20&10&24&18&12&6\\ 15&8&10&5&12&9&6&3\\ 20&10&14&7&16&12&8&4\\ 10&5&7&4&8&6&4&2\\ 24&12&16&8&20&15&10&5\\ 18&9&12&6&15&12&8&4\\ 12&6&8&4&10&8&6&3\\ 6&3&4&2&5&4&3&2\end{bmatrix},\]
which records the pairings \((A^{-1})_{ij}=\langle e_{i}^{*},e_{j}^{*}\rangle\) of the basis of \(E_{8}\) dual to the choice of simple roots \(\{e_{1},\ldots,e_{8}\}\), we see that
\[p\geq 30(s_{1}^{*})^{2}+8(s_{2}^{*})^{2}+14(s_{3}^{*})^{2}+4(s_{4}^{*})^{2}+20(s_ {5}^{*})^{2}+12(s_{6}^{*})^{2}+6(s_{7}^{*})^{2}+2(s_{8}^{*})^{2}+|\sigma|.\]
Since \(|\sigma|\geq|\sigma|_{1}\), if \(C(\tau)>p\), we must have \(s_{i}^{*}=0\) for all \(1\leq i\leq 7\) and \(s_{8}^{*}\leq 1\), in which case \(|s|\leq 2\). Conclude that if \(|s|\geq 4\), then \(C(\tau)\leq p\).
**Proposition 4.2**.: _If \(|s|=2\), then \(\tau\) is an \(E_{8}\)-changemaker._
Proof.: If \(|s|=2\), then we may take \(s=R_{120}\) (cf. Figure 7), which is the unique root in the fundamental Weyl chamber of \(E_{8}\). It follows that either \(C(\tau)=|\sigma|_{1}+4\), in which case \(\sigma_{n}\leq 1\), or \(C(\tau)=|\sigma|_{1}+2\sigma_{n}\), and in either case we have \(PI(|\sigma|_{1}+2,C(\tau))\subset\{\langle\mathfrak{c},\tau\rangle\colon \mathfrak{c}\in\operatorname{Short}(-E_{8}\oplus-\mathbb{Z}^{n+1})\}\).
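As a sanity check (using the matrix \(A^{-1}\) displayed in the proof of Lemma 3.4 and the coordinates of \(R_{120}\) visible in the display in the proof of Lemma 4.1): with \(s=R_{120}=6e_{1}+3e_{2}+4e_{3}+2e_{4}+5e_{5}+4e_{6}+3e_{7}+2e_{8}\) one computes \(s^{*}=(0,0,0,0,0,0,0,1)\), so every \(s_{i}^{*}\) is non-negative and
\[\langle s,s\rangle=s^{*}\cdot A^{-1}\cdot s^{*}=(A^{-1})_{88}=2,\]
in agreement with the case \(|s|=2\) in the proof of Lemma 3.4.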
### The standard basis of an \(E_{8}\)-changemaker lattice
We are now ready to describe the standard basis \(\mathcal{S}\) of irreducible vectors of an \(E_{8}\)-changemaker lattice \((\tau)^{\perp}\). We first note that, by a computer search, there are 1003 non-zero \(E_{8}\)-changemakers in \(E_{8}\). The author developed an algorithm for producing a standard basis of irreducible vectors for an \(E_{8}\)-changemaker lattice in \(E_{8}\), but feels that it does not bear mentioning further in light of the following proposition.
**Proposition 4.3**.: _If \(\tau=(s,\sigma)\in E_{8}\oplus\mathbb{Z}^{n+1}\) is an \(E_{8}\)-changemaker and \(n\in\{-1,0,1\}\), then \(\langle\tau,\tau\rangle\leq 100,000\)._
Proof.: By Lemma 4.1, we have that \(s_{j}^{*}\leq|\sigma|_{1}+1\) for \(j\in\{1,5,6,7,8\}\), \(s_{2}^{*}\leq 3|\sigma|_{1}+3\), \(s_{3}^{*}\leq 2|\sigma|_{1}+2\), \(s_{4}^{*}\leq 7|\sigma|_{1}+7\), \(s_{2}^{*}\leq|\sigma|_{1}+1\) or \(s_{3}^{*}\leq|\sigma|_{1}+1\), and \(s_{2}^{*}\leq 2|\sigma|_{1}+2\) or \(s_{4}^{*}\leq 6|\sigma|_{1}+6\), where we understand \(|\sigma|_{1}\) to be \(0\) if \(n=-1\). Recall the Gram matrix for \(E_{8}\),
\[A=\begin{bmatrix}2&-1&-1&0&-1&0&0&0\\ -1&2&0&0&0&0&0&0\\ -1&0&2&-1&0&0&0&0\\ 0&0&-1&2&0&0&0&0\\ -1&0&0&0&2&-1&0&0\\ 0&0&0&0&-1&2&-1&0\\ 0&0&0&0&0&-1&2&-1\\ 0&0&0&0&0&0&-1&2\end{bmatrix},\]
and note that
\[A^{-1}=\begin{bmatrix}30&15&20&10&24&18&12&6\\ 15&8&10&5&12&9&6&3\\ 20&10&14&7&16&12&8&4\\ 10&5&7&4&8&6&4&2\\ 24&12&16&8&20&15&10&5\\ 18&9&12&6&15&12&8&4\\ 12&6&8&4&10&8&6&3\\ 6&3&4&2&5&4&3&2\end{bmatrix}\]
facilitates the computation, for \(s=\sum_{i=1}^{8}s_{i}e_{i}\),
\[\langle s,s\rangle=\begin{bmatrix}s_{1}&\cdots&s_{8}\end{bmatrix}\cdot A\cdot \begin{bmatrix}s_{1}\\ \vdots\\ s_{8}\end{bmatrix}=\begin{bmatrix}s_{1}&\cdots&s_{8}\end{bmatrix}\cdot A(A^{-1}A) \cdot\begin{bmatrix}s_{1}\\ \vdots\\ s_{8}\end{bmatrix}=\begin{bmatrix}s_{1}^{*}&\cdots&s_{8}^{*}\end{bmatrix} \cdot A^{-1}\cdot\begin{bmatrix}s_{1}^{*}\\ \vdots\\ s_{8}^{*}\end{bmatrix}.\]
It is then straightforward to certify that \(\sigma=(1,2)\) and \(s^{*}=(4,4,8,28,4,4,4,4)\) maximizes the quantity \(\langle\tau,\tau\rangle\) for \(\tau=(s,\sigma)\in E_{8}\oplus\mathbb{Z}^{n+1}\) with \(n\in\{-1,0,1\}\), and in particular
\[\begin{bmatrix}4&4&8&28&4&4&4&4\end{bmatrix}\cdot A^{-1}\cdot\begin{bmatrix}4 \\ 4\\ 8\\ 28\\ 4\\ 4\\ 4\\ 4\end{bmatrix}+1^{2}+2^{2}=25,541\leq 100,000.\]
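The arithmetic above is easy to machine-check. The following minimal sketch (ours, not part of the author's notebook) verifies both that the displayed matrix is the inverse of the Gram matrix \(A\) and that \(\sigma=(1,2)\), \(s^{*}=(4,4,8,28,4,4,4,4)\) give \(\langle\tau,\tau\rangle=25{,}541\).

```python
import numpy as np

A = np.array([
    [ 2, -1, -1,  0, -1,  0,  0,  0],
    [-1,  2,  0,  0,  0,  0,  0,  0],
    [-1,  0,  2, -1,  0,  0,  0,  0],
    [ 0,  0, -1,  2,  0,  0,  0,  0],
    [-1,  0,  0,  0,  2, -1,  0,  0],
    [ 0,  0,  0,  0, -1,  2, -1,  0],
    [ 0,  0,  0,  0,  0, -1,  2, -1],
    [ 0,  0,  0,  0,  0,  0, -1,  2],
])
Ainv = np.array([
    [30, 15, 20, 10, 24, 18, 12,  6],
    [15,  8, 10,  5, 12,  9,  6,  3],
    [20, 10, 14,  7, 16, 12,  8,  4],
    [10,  5,  7,  4,  8,  6,  4,  2],
    [24, 12, 16,  8, 20, 15, 10,  5],
    [18,  9, 12,  6, 15, 12,  8,  4],
    [12,  6,  8,  4, 10,  8,  6,  3],
    [ 6,  3,  4,  2,  5,  4,  3,  2],
])
assert np.array_equal(A @ Ainv, np.eye(8, dtype=int))   # the displayed matrix is A^{-1}

s_star = np.array([4, 4, 8, 28, 4, 4, 4, 4])
sigma = np.array([1, 2])
# <tau, tau> = s*^T A^{-1} s* + |sigma|^2
print(s_star @ Ainv @ s_star + sigma @ sigma)            # 25541
```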
As Rasmussen has computational proof [22, Section 6.3] that every lens space \(L(p,q)\) that is realized by surgery on a knot \(K\subset\mathcal{P}\) with \(2g(K)\leq p\leq 100,000\) is realized by surgery on a Tange knot, we will not in this work address when \(\Lambda(p,q)\cong(\tau)^{\perp}\) for some \(\tau\in E_{8}\oplus\mathbb{Z}^{n+1}\) for \(n\in\{-1,0,1\}\). However, we remain interested in when \((\tau)^{\perp}\cong\Lambda(p_{1},q_{1})\oplus\Lambda(p_{2},q_{2})\) for all \(n\geq-1\), but postpone the discussion of this case until Section 6.
Suppose now that \(n\geq 2\) and \(\tau\in E_{8}\oplus\mathbb{Z}^{n+1}\) is an \(E_{8}\)-changemaker. We construct a basis for the \(E_{8}\)-changemaker lattice \(L=(\tau)^{\perp}\subset E_{8}\oplus\mathbb{Z}^{n+1}\) as follows. Fix an index \(1\leq j\leq n\), and suppose that \(\sigma_{j}=1+\sum_{i=0}^{j-1}\sigma_{i}\). In this case, set \(v_{j}=-d_{j}+2d_{0}+\sum_{i=1}^{j-1}d_{i}\). Otherwise, \(\sigma_{j}\leq\sum_{i=0}^{j-1}\sigma_{i}.\) It follows that there exists a subset \(A\subset\{0,\ldots,j-1\}\) such that \(\sigma_{j}=\sum_{i\in A}\sigma_{i}\). Amongst all such subsets, choose the one maximal with respect to the total order \(<\) on subsets of \(\{0,1,\ldots,n\}\) defined by declaring \(A^{\prime}<A\) if the largest element in \((A\cup A^{\prime})\setminus(A\cap A^{\prime})\) lies in \(A\); equivalently, \(A^{\prime}<A\) if \(\sum_{i\in A^{\prime}}2^{i}<\sum_{i\in A}2^{i}.\) Then set \(v_{j}=-d_{j}+\sum_{i\in A}d_{i}\in L\). If \(v=-d_{j}+\sum_{i\in A^{\prime}}d_{i}\) for some \(A^{\prime}<A\), then write \(v\ll v_{j}\). We call the set \(\mathcal{V}:=\{v_{1},\ldots,v_{n}\}\) the _changemaker basis_ for \(\tau\).
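A minimal sketch of this construction follows (the greedy choice below, taking the largest usable index first, produces the \(<\)-maximal subset \(A\); the function names are ours).

```python
def greedy_subset(sigma, target, upto):
    """The <-maximal subset A of {0, ..., upto-1} with sum(sigma[i] for i in A) == target,
    found by taking the largest usable index first (this succeeds whenever sigma is a
    changemaker and 0 <= target <= sigma[0] + ... + sigma[upto-1])."""
    A = []
    for i in range(upto - 1, -1, -1):
        if sigma[i] <= target:
            A.append(i)
            target -= sigma[i]
    assert target == 0
    return A

def changemaker_basis(sigma):
    """The vectors v_1, ..., v_n of (sigma)^perp, as coordinate vectors in d_0, ..., d_n
    (here sigma_0 = 1, as in the discussion above)."""
    n = len(sigma) - 1
    basis = []
    for j in range(1, n + 1):
        v = [0] * (n + 1)
        v[j] = -1
        if sigma[j] == 1 + sum(sigma[:j]):      # the tight case
            v[0] = 2
            for i in range(1, j):
                v[i] = 1
        else:                                   # make change for sigma_j
            for i in greedy_subset(sigma, sigma[j], j):
                v[i] = 1
        basis.append(v)
    return basis

# Example: sigma = (1, 1, 2, 5) gives v_1 = -d_1 + d_0, v_2 = -d_2 + d_1 + d_0,
# and the tight vector v_3 = -d_3 + 2 d_0 + d_1 + d_2.
print(changemaker_basis([1, 1, 2, 5]))
```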
Now fix an index \(1\leq j\leq 8\). If \(s_{j}^{*}=|\sigma|_{1}+1\), then set \(w_{j}=-e_{j}+2d_{0}+\sum_{i=1}^{n}d_{i}\). If \(s_{j}^{*}\leq|\sigma|_{1}\), then set \(w_{j}=-e_{j}+\sum_{i\in A}d_{i}\) for \(A\) maximal such that \(\sum_{i\in A}\sigma_{i}=s_{j}^{*}\). If \(s_{j}^{*}>|\sigma|_{1}+1\), then \(j\in\{2,3,4\}\), and producing \(w_{j}\) requires more care. Lemma 4.1 ensures that if \(s_{j}^{*}>|\sigma|_{1}+1\), then there is an especially simple root \(r\) such that \(0\leq\langle e_{j}-r,s\rangle\leq|\sigma|_{1}+1\), for which we may then make change with \(\sigma\). In this case, we write \(w_{j}|_{E_{8}}:=-e_{j}+r\).
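In the same spirit, here is a minimal sketch of the unloaded case just described (names are ours; the greedy change-making is inlined).

```python
def unloaded_w(j, s_star, sigma):
    """w_j in the unloaded case s_j^* <= |sigma|_1 + 1, returned as a pair
    (simple-root coordinates of the E_8 part, coordinates in d_0, ..., d_n)."""
    n = len(sigma) - 1
    e_part = [0] * 8
    e_part[j - 1] = -1                      # the summand -e_j
    d_part = [0] * (n + 1)
    if s_star[j - 1] == sum(sigma) + 1:     # s_j^* = |sigma|_1 + 1
        d_part[0] = 2
        d_part[1:] = [1] * n
    else:                                   # s_j^* <= |sigma|_1: make change greedily
        target = s_star[j - 1]
        for i in range(n, -1, -1):
            if sigma[i] <= target:
                d_part[i] = 1
                target -= sigma[i]
        assert target == 0                  # possible since sigma is a changemaker
    return e_part, d_part
```

In the loaded cases \(j\in\{2,3,4\}\), one replaces \(-e_{j}\) by \(-e_{j}+r\) as described below and then makes change for the resulting pairing in exactly the same way.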
The simplest case to deal with is when \(s_{3}^{*}>|\sigma|_{1}+1\). By Lemma 4.1, we observe that \(s_{2}^{*}\leq|\sigma|_{1}+1\) and \(|\sigma|_{1}+1<s_{3}^{*}\leq s_{2}^{*}+|\sigma|_{1}+1\). In particular, \(0<s_{3}^{*}-s_{2}^{*}\leq|\sigma|_{1}+1\). If \(s_{3}^{*}-s_{2}^{*}=|\sigma|_{1}+1\), then set \(w_{3}=-e_{3}+e_{2}+2d_{0}+\sum_{i=1}^{n}d_{i}\). If \(s_{3}^{*}-s_{2}^{*}\leq|\sigma|_{1}\), then set \(w_{3}=-e_{3}+e_{2}+\sum_{i\in A}d_{i}\) with \(A\) maximal such that \(\sum_{i\in A}\sigma_{i}=s_{3}^{*}-s_{2}^{*}\).
Consider now the case when \(s_{2}^{*}>|\sigma|_{1}+1\). By Lemma 4.1, we observe that \(s_{3}^{*}\leq|\sigma|_{1}+1\) and \(|\sigma|_{1}+1<s_{2}^{*}\leq s_{3}^{*}+s_{4}^{*}+|\sigma|_{1}+1\). If \(s_{3}^{*}=0\), then set \(w_{2}=-e_{2}+e_{4}+2d_{0}+\sum_{i=1}^{n}d_{i}\)
or \(w_{2}=-e_{2}+e_{4}+\sum_{i\in A}d_{i}\) as appropriate. If \(s_{3}^{*}>0\), then in the event that \(s_{4}^{*}=0\) or \(s_{2}^{*}-s_{3}^{*}-s_{4}^{*}<0\), set \(w_{2}=-e_{2}+e_{3}+2d_{0}+\sum_{i=1}^{n}d_{i}\) or \(w_{2}=-e_{2}+e_{3}+\sum_{i\in A}d_{i}\) as appropriate, and in the event that \(s_{4}^{*}>0\) and \(0\leq s_{2}^{*}-s_{3}^{*}-s_{4}^{*}\), set \(w_{2}=-e_{2}+e_{3}+e_{4}+2d_{0}+\sum_{i=1}^{n}d_{i}\) or \(w_{2}=-e_{2}+e_{3}+e_{4}+\sum_{i\in A}d_{i}\) as appropriate. Note that it is possible that \(\langle w_{2},d_{i}\rangle=0\) for all \(0\leq i\leq n\), and in this case \(w_{2}=-e_{2}+e_{3}+e_{4}\).
Consider lastly the case when \(s_{4}^{*}>|\sigma|_{1}+1\). By Lemma 4.1, we observe that \(s_{4}^{*}\leq s_{2}^{*}+s_{1}^{*}+s_{5}^{*}+s_{6}^{*}+s_{7}^{*}+s_{8}^{*}+| \sigma|_{1}+1\). We will describe an iterative algorithm to produce \(w_{4}\) where we first produce \(w_{4}|_{E_{8}}\). First, set \(w_{4}|_{E_{8}}=-e_{4}\). Proceed in steps, as follows.
* Step 1: If \(0<s_{2}^{*}\leq|\sigma|_{1}+1\), set \(w_{4}|_{E_{8}}=w_{4}|_{E_{8}}+e_{2}\).
* Step 2: If \(\langle w_{4}|_{E_{8}},\tau\rangle<\langle w_{4}|_{E_{8}}+e_{1},\tau\rangle\leq 0\) or \(|w_{4}|_{E_{8}}|=4\) and \(\langle w_{4}|_{E_{8}},\tau\rangle<\langle w_{4}|_{E_{8}}+e_{j},\tau\rangle\leq 0\) for some \(j\in\{1,5,6,7,8\}\), then set \(w_{4}|_{E_{8}}=w_{4}|_{E_{8}}+e_{1}\), else proceed to Step 7.
* Step 3: If \(\langle w_{4}|_{E_{8}},\tau\rangle<\langle w_{4}|_{E_{8}}+e_{5},\tau\rangle\leq 0\) or \(|w_{4}|_{E_{8}}|=4\) and \(\langle w_{4}|_{E_{8}},\tau\rangle<\langle w_{4}|_{E_{8}}+e_{j},\tau\rangle\leq 0\) for some \(j\in\{5,6,7,8\}\), then set \(w_{4}|_{E_{8}}=w_{4}|_{E_{8}}+e_{5}\), else proceed to Step 7.
* Step 4: If \(\langle w_{4}|_{E_{8}},\tau\rangle<\langle w_{4}|_{E_{8}}+e_{6},\tau\rangle\leq 0\) or \(|w_{4}|_{E_{8}}|=4\) and \(\langle w_{4}|_{E_{8}},\tau\rangle<\langle w_{4}|_{E_{8}}+e_{j},\tau\rangle\leq 0\) for some \(j\in\{6,7,8\}\), then set \(w_{4}|_{E_{8}}=w_{4}|_{E_{8}}+e_{6}\), else proceed to Step 7.
* Step 5: If \(\langle w_{4}|_{E_{8}},\tau\rangle<\langle w_{4}|_{E_{8}}+e_{7},\tau\rangle\leq 0\) or \(|w_{4}|_{E_{8}}|=4\) and \(\langle w_{4}|_{E_{8}},\tau\rangle<\langle w_{4}|_{E_{8}}+e_{j},\tau\rangle\leq 0\) for some \(j\in\{7,8\}\), then set \(w_{4}|_{E_{8}}=w_{4}|_{E_{8}}+e_{7}\), else proceed to Step 7.
* Step 6: If \(\langle w_{4}|_{E_{8}},\tau\rangle<\langle w_{4}|_{E_{8}}+e_{8},\tau\rangle\leq 0\) or \(|w_{4}|_{E_{8}}|=4\) and \(\langle w_{4}|_{E_{8}},\tau\rangle<\langle w_{4}|_{E_{8}}+e_{8},\tau\rangle\leq 0\), then set \(w_{4}|_{E_{8}}=w_{4}|_{E_{8}}+e_{8}\).
* Step 7: If \(\langle w_{4}|_{E_{8}},\tau\rangle=-(|\sigma|_{1}+1)\), then set \(w_{4}=w_{4}|_{E_{8}}+2d_{0}+\sum_{i=1}^{n}d_{i}\), else set \(w_{4}=w_{4}|_{E_{8}}+\sum_{i\in A}d_{i}\) for \(A\) maximal such that \(\langle w_{4}|_{E_{8}},\tau\rangle=-\sum_{i\in A}\sigma_{i}\).
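Read literally, Steps 1-6 amount to the following greedy routine. This is a sketch only: we group each step's condition as "(primary alternative) or (\(|w_{4}|_{E_{8}}|=4\) and some listed alternative works)", which is our reading of the wording above, and Step 7 (making change for the resulting pairing, exactly as in the unloaded case) is omitted.

```python
def pair_with_s(w, s_star):
    """<w, s> for w in simple-root coordinates, using <e_i, s> = s_i^*."""
    return sum(a * b for a, b in zip(w, s_star))

def norm_E8(w, A):
    """<w, w> computed with the Gram matrix A of E_8."""
    return sum(w[i] * A[i][j] * w[j] for i in range(8) for j in range(8))

def build_w4_E8(s_star, sigma, A):
    """Steps 1-6: the E_8 part of w_4 in the loaded case s_4^* > |sigma|_1 + 1."""
    one_norm = sum(sigma)
    w = [0] * 8
    w[3] = -1                                                  # start from -e_4
    if 0 < s_star[1] <= one_norm + 1:                          # Step 1
        w[1] += 1
    steps = [(1, [1, 5, 6, 7, 8]), (5, [5, 6, 7, 8]),          # Steps 2-6
             (6, [6, 7, 8]), (7, [7, 8]), (8, [8])]
    for k, alternatives in steps:
        cur = pair_with_s(w, s_star)
        def gain(j):                                           # pairing after adding e_j
            return cur + s_star[j - 1]
        ok = (cur < gain(k) <= 0) or \
             (norm_E8(w, A) == 4 and any(cur < gain(j) <= 0 for j in alternatives))
        if not ok:
            break                                              # proceed to Step 7
        w[k - 1] += 1
    return w
```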
**Remark 4.4**.: _Note that it is possible that \(\langle w_{4},d_{i}\rangle=0\) for all \(0\leq i\leq n\)._
We call the set \(\mathcal{W}:=\{w_{1},\ldots,w_{8}\}\) an \(E_{8}\)-_extension set_ for \(\tau\).
The vectors \(w_{1},\ldots,w_{8},v_{1},\ldots,v_{n}\) are clearly linearly independent. The fact that they span \(L\) is straightforward to verify, too: conditioning on whether \(s_{2}^{*}>|\sigma|_{1}+1,s_{3}^{*}>|\sigma|_{1}+1\), and \(s_{4}^{*}>|\sigma|_{1}+1\), add suitable multiples of \(w_{2},w_{3},w_{4}\) in turn, followed by suitable multiples of \(w_{1},w_{5},\ldots w_{8},v_{n},\ldots,v_{1}\) in turn to a given \(x\in L\) to produce a sequence of vectors that converges to \(0\).
**Definition 4.5**.: _The set \(S=\mathcal{V}\cup\mathcal{W}\) constitutes the standard basis for \(L\). In keeping with [11, Definition 3.11], a vector \(v\in\mathcal{S}\) is_
* tight _if the projection of_ \(v\) _onto_ \(\mathbb{Z}^{n+1}\) _is_ \(2d_{0}+\sum_{i=1}^{n}d_{i}\)_;_
* gappy _if the projection of_ \(v\) _onto_ \(\mathbb{Z}^{n+1}\) _is_ \(\sum_{i\in A}d_{i}\)_,_ \(A\neq\emptyset\)_, and_ \(A\) _does not consist of consecutive integers;_
* just right _if the projection of_ \(v\) _onto_ \(\mathbb{Z}^{n+1}\) _is_ \(\sum_{i\in A}d_{i}\) _and_ \(A=\emptyset\) _or_ \(A\) _consists of consecutive integers._
_A_ gappy index _for a gappy vector \(v_{j}\) is an index \(k\in A\) such that \(k+1\not\in A\cup\{j\}\). A_ gappy index _for a gappy vector \(w_{j}\) is an index \(k\in A\), \(k<n\), such that \(k+1\not\in A\). A vector \(w_{j}\in\mathcal{W}\) is_ loaded _if \(s_{j}^{*}>|\sigma|_{1}+1\), and otherwise it is_ unloaded.
Every vector in \(\mathcal{S}\) is either tight, gappy, or just right (exclusively). At most one vector in each of the sets \(\{w_{2},w_{3}\}\), \(\{w_{4}\}\) is loaded, and no other basis elements are loaded.
**Lemma 4.6**.: _The standard basis elements of an \(E_{8}\)-changemaker lattice are irreducible._
In order to establish Lemma 4.6, we will make use of the following structural proposition regarding vectors of norm at most \(4\) in \(E_{8}\) and their pairings against loaded elements in the \(E_{8}\) extension set \(\mathcal{W}\).
**Proposition 4.7**.: _Let \(w_{j}\) be a loaded vector in the standard basis of the \(E_{8}\)-changemaker lattice \((\tau)^{\perp}\subset E_{8}\oplus\mathbb{Z}^{n+1}\) with \(w_{j}|_{E_{8}}=-e_{j}+r\) for some positive root \(r\), and let \(z\in E_{8}\oplus\{0\}^{n+1}\)._
1. _If_ \(z\) _is a positive root and_ \(\langle-e_{j}+r+z,-z\rangle=0\)_, then_ \(\langle z,\tau\rangle>|\sigma|_{1}+1\)_._
2. _If_ \(z\) _is a positive root and_ \(\langle-e_{j}+r+z,-z\rangle=-1\)_, then_ \(\langle z,\tau\rangle=0\)_,_ \(\langle z,\tau\rangle>|\sigma|_{1}+1\)_, or_ \(\langle-e_{j}+r+z,\tau\rangle>0\)_._
3. _If_ \(|z|=4\) _and_ \(\langle-e_{j}+r+z,-z\rangle=-1\)_, then either_ \(\langle z,\tau\rangle\leq 0\)_, or_ \(\langle z,\tau\rangle>|\sigma|_{1}+1\)_, or_ \(\langle-e_{j}+r+z,\tau\rangle=0\) _and_ \(|-e_{j}+r+z|=2\)_, or_ \(\langle-e_{j}+r+z,\tau\rangle>0\)_._
Sketch of proof.: The proposition is established by explicit computation in a Jupyter notebook, available at [https://github.com/caudellj/LoadedPairings](https://github.com/caudellj/LoadedPairings).
Proof of Lemma 4.6.: Choose a standard basis element \(v_{j}\in\mathcal{S}\) and suppose that \(v_{j}=x+y\) for \(x,y\in L\) with \(\langle x,y\rangle\geq 0\). In order to prove that \(v_{j}\) is irreducible, it suffices to show that one of \(x\) and \(y\) equals \(0\). Since \(v_{j}|_{E_{8}}=0\), write \(x=z+\sum_{i=0}^{n}x_{i}d_{i}\) and \(y=-z+\sum_{i=0}^{n}y_{i}d_{i}\) for some \(z\in E_{8}\). The proof of [11, Lemma 3.13] establishes the inequality \(\sum_{i=0}^{n}x_{i}y_{i}\leq 1\), so we may conclude that \(z=0\) since \(-\langle z,z\rangle\leq-2\) for all \(z\in E_{8}\setminus\{0\}\). That \(v_{j}\) is irreducible then follows from the same argument as in the proof of [11, Lemma 3.13].
Now choose a standard basis element \(w_{j}\in S\).
_Case 1._ Suppose that \(w_{j}\) is not loaded and not tight. Write \(x=-e_{j}+z+\sum_{i=1}^{n}x_{i}d_{i}\) and \(y=-z+\sum_{i=1}^{n}y_{i}d_{i}\) for some \(z\in E_{8}\). Since \(x_{i}+y_{i}\in\{0,1\}\) for all \(i\), we have \(x_{i}y_{i}\leq 0\) for all \(i\). Observe that \(\langle-e_{j}+z,-z\rangle=\langle e_{j},z\rangle-\langle z,z\rangle\leq 0\), with equality if and only if \(z\in\{0,e_{j}\}\). If \(w_{j}\) is reducible, it must be that \(z\in\{0,e_{j}\}\) and \(\langle x,y\rangle=0\), and so \(0\leq x_{i},y_{i}\leq 1\) for all \(i\). Therefore \(x=0\) or \(y=0\), as desired.
_Case 2._ Suppose that \(w_{j}\) is loaded and not tight. According to our algorithm for producing standard basis elements, the projection of \(w_{j}\) onto \(E_{8}\) is \(-e_{j}+r\), where \(r\) is some positive root and \(\langle-e_{j}+r,-e_{j}+r\rangle=4\). In this case, write \(x=-e_{j}+r+z+\sum_{i=1}^{n}x_{i}d_{i}\) and \(y=-z+\sum_{i=1}^{n}y_{i}d_{i}\) for some \(z\in E_{8}\oplus\{0\}^{n+1}\). Then
\[\langle x,y\rangle=\langle-e_{j}+r,-z\rangle-\langle z,z\rangle+\sum_{i=0}^{n} x_{i}y_{i}.\]
Since \(\langle w_{j},d_{i}\rangle\in\{0,1\}\) for all \(0\leq i\leq n\), it follows that \(\sum_{i=1}^{n}x_{i}y_{i}\leq 0\), as in Case 1. Observe that
\[\langle-e_{j}+r,-z\rangle-\langle z,z\rangle\leq\sqrt{\langle-e_{j}+r,-e_{j}+r \rangle\langle z,z\rangle}-\langle z,z\rangle=2\sqrt{|z|}-|z|, \tag{22}\]
from which it follows that \(|z|\leq 4\).
If \(|z|=4\), then we must have \(\langle-e_{j}+r,-z\rangle=4\), i.e. \(z=e_{j}-r\). Then \(x=0\) or \(y=0\).
If \(|z|=2\), then \(\langle-e_{j},-z\rangle+\langle r,-z\rangle-|z|\leq 0\) and \(z\in\mathcal{R}_{+}\). If we have equality, then it follows that \(\langle e_{j},z\rangle=1=-\langle r,z\rangle\), or else \(z\in\{e_{j},-r\}\), in which case \(x=0\) or \(y=0\). Since \(z\) is a positive root and \(\langle e_{j},z\rangle=1\), it follows that \(\langle z,\tau\rangle\geq s_{j}^{*}\geq|\sigma|_{1}+1\) so that \(y_{k}\geq 2\) for some \(k\) and that \(\langle-e_{j}+r+z,\tau\rangle\geq 1\) so that \(x_{l}\leq-1\) for some \(l\). However, then \(\langle x,y\rangle\leq x_{k}y_{k}+x_{l}y_{l}\leq-3\), a contradiction.
_Case 3_. Suppose that \(w_{j}\) is not loaded and tight. In this case, write \(x=-e_{j}+z+\sum_{i=0}^{n}x_{i}d_{i}\) and \(y=-z+\sum_{i=0}^{n}y_{i}d_{i}\). In contrast with the case where \(w_{j}\) is not loaded and not tight, \(\sum_{i=0}^{n}x_{i}y_{i}\leq 0\) unless \(x_{0}=y_{0}=1\) and \(x_{i}y_{i}=0\) for all \(i\neq 0\). If \(\sum_{i=0}^{n}x_{i}y_{i}\leq 0\), then the argument in the case \(w_{j}\) is not loaded and not tight shows that \(w_{j}\) is irreducible. If \(\sum_{i=0}^{n}x_{i}y_{i}=1\), then \(\langle-e_{j}+z,-z\rangle\geq-1\), and therefore \(|z|=2\). Now since \(y_{i}\geq 0\) for all \(i\), \(z\) must be a positive root. Since \(\langle e_{j},e_{i}\rangle\leq 0\) for all \(i\neq j\), it must be that \(z_{j}\geq 1\), in which case \(\langle z,\tau\rangle\geq|\sigma|_{1}+1\). Then \(y_{k}\geq 2\) for some \(k\geq 2\), in which case \(\langle x,y\rangle\leq-2\).
_Case 4_. Suppose that \(w_{j}\) is loaded and tight. Write \(x=-e_{j}+r+z+\sum_{i=0}^{n}x_{i}d_{i}\) and \(y=-z+\sum_{i=0}^{n}y_{i}d_{i}\). As in Case 3, we may assume that \(x_{0}=y_{0}=1\), and \(x_{i}y_{i}=0\) for all \(i\neq 0\), and so we seek to rule out the existence of a \(z\in E_{8}\) such that \(\langle-e_{j}+r+z,-z\rangle=-1\) and \(\langle x,y\rangle=0\). If \(|z|\geq 6\), then \(\langle-e_{j}+r+z,-z\rangle<-1\), so conclude that \(|z|\leq 4\).
Whether \(|z|=4\) or \(|z|=2\), we must have \(\langle-e_{j}+r+z,-z\rangle=-1\), and in either case Proposition 4.7 tells us that there is no such \(z\in E_{8}\oplus\{0\}^{n+1}\) yielding \(x,y\in(\tau)^{\perp}\) with \(w_{j}=x+y\) and \(\langle x,y\rangle\geq 0\).
**Lemma 4.8**.: _If a standard basis element is not tight, it is unbreakable._
Proof.: Suppose that \(v_{i}\) is not tight. If \(v_{i}\) is breakable, then we may write \(v_{i}=x+y\) with \(x=z+\sum_{i=1}^{n}x_{i}d_{i}\) and \(y=-z+\sum_{i=1}^{n}y_{i}d_{i}\) for some \(z\in E_{8}\) such that \(|x|\geq 3\), \(|y|\geq 3\), and \(\langle x,y\rangle=-1\). Then
\[\langle x,y\rangle=\langle z,-z\rangle+\sum_{i=1}^{n}x_{i}y_{i}\leq\langle z, -z\rangle\leq-2\text{ if }z\neq 0,\]
so we may conclude that \(z=0\). That \(v_{i}\) is unbreakable then follows from the proof of [11, Lemma 3.15].
Suppose that \(w_{j}\) is unloaded and not tight. Write \(w_{j}=x+y\) with \(x=-e_{j}+z+\sum_{i=0}^{n}x_{i}d_{i}\) and \(y=-z+\sum_{i=0}^{n}y_{i}d_{i}\) for some \(z\in E_{8}\) such that \(|x|\geq 3\), \(|y|\geq 3\), and \(\langle x,y\rangle=-1\). But since
\[\langle x,y\rangle=\langle-e_{j}+z,-z\rangle+\sum_{i=0}^{n}x_{i}y_{i},\]
it must be that either (1) \(\langle-e_{j}+z,-z\rangle=-1\) and \(x_{i}y_{i}\geq 0\) for \(i=0,\ldots,n\), or (2) \(\langle-e_{j}+z,-z\rangle=0\) and \(x_{k}y_{k}=-1\) for a unique index \(k\) and \(x_{i}y_{i}=0\) for all other indices. In either scenario, we must have that \(|z|\leq 2\) since \(w_{j}|_{E_{8}}=-e_{j}\). In (1), we must have \(x_{i},y_{i}\in\{0,1\}\) for \(i=1,\ldots,n\) and \(\langle z,\tau\rangle\geq 1\), and in particular \(|z|=2\). Then since \(|z|=2\) and \(\langle-e_{j},-z\rangle=1\), we must have \(z_{j}\geq 1\), in which case \(\langle-e_{j}+z,\tau\rangle\geq 0\) and therefore \(x_{i}=0\) for \(i=1,\ldots,n\). But then \(|x|=\langle-e_{j}+z,-e_{j}+z\rangle=2\). In (2), without loss of generality we may assume \(z=0\), so that \(y=\sum_{i=1}^{n}y_{i}d_{i}\), \(y_{k}=-1\) (and so \(x_{k}=1\)), and \(y_{i}=1\) if and only if \(i\in\operatorname{supp}(w_{j})\). Then either \(|y|=2\) or \(k=\max(\operatorname{supp}(y))\), in which case \(w_{j}\ll x\).
Suppose that \(w_{j}\) is loaded and not tight. Write \(w_{j}=x+y\) with \(x=-e_{j}+r+z+\sum_{i=0}^{n}x_{i}d_{i}\) and \(y=-z+\sum_{i=0}^{n}y_{i}d_{i}\) for some \(z\in E_{8}\). Then, as in the case \(w_{j}\) is unloaded, either (I) \(\langle-e_{j}+r+z,-z\rangle=-1\) and \(x_{i}y_{i}\geq 0\) for \(i=0,\ldots,n\), or (II) \(\langle-e_{j}+r+z,-z\rangle=0\) and \(x_{k}y_{k}=-1\) for a unique index \(k\) and \(x_{i}y_{i}=0\) for all other indices. In (I), we must then have \(x_{i},y_{i}\in\{0,1\}\) for \(i=0,1,\ldots,n\) and (I.A) \(|z|=2\) and \(\langle-e_{j}+r+z,-z\rangle=-1\) or (I.B) \(|z|=4\) and \(\langle-e_{j}+r+z,-z\rangle=-1\). Items (2) and (3) of Proposition 4.7 show that there are no such \(z\) satisfying \(w_{j}=x+y\), \(x,y\in(\tau)^{\perp}\), \(|x|,|y|\geq 3\), and \(\langle x,y\rangle=-1\) for any \(E_{8}\)-changemaker \(\tau\) and loaded, non-tight \(w_{j}\). In (II), we are in the hypotheses of item (1) of Proposition 4.7, which shows that there are again no such \(z\).
### Standard basis elements and intervals
Taken together, (1) of Proposition 2.2 and Lemma 4.6 then imply the following about the standard basis elements of \((\tau)^{\perp}\).
**Proposition 4.9**.: _For each standard basis element \(v\in\mathcal{S}\), there is some interval \(T(v)\) such that \(v=\epsilon(v)[T(v)]\) for some \(\epsilon(v)\in\{\pm 1\}\). _
The interval \(T(v)\) is either breakable, in which case \(v\) is tight, or it is unbreakable. If \(T\) is an unbreakable interval with \(|[T]|\geq 3\), then \(T\) contains a single vertex of norm \(\geq 3\). Let \(z(T)\) denote the unique vertex of norm \(\geq 3\) contained in the unbreakable interval \(T\). Breakable intervals play an important role in the analysis of standard bases, and we record how breakable intervals may pair against unbreakable intervals in a linear lattice in the following lemma.
**Lemma 4.10**.: _Suppose that \(T\) is a breakable interval, and that \(V\) is an unbreakable interval. If \(|[V]|\geq 3\), then \([T]\cdot[V]\) equals_
1. \(|[V]|-1\)_, iff_ \(V\prec T\)_;_
2. \(|[V]|-2\)_, iff_ \(z(V)\in T\) _and_ \(V\pitchfork T\)_, or_ \(|[V]|=3\)_, and_ \(V\dagger T\)_;_
3. \(1\)_, iff_ \(|[V]|=3\)_,_ \(z(V)\in T\)_,_ \(V\pitchfork T\)_, and_ \(\epsilon_{V}\epsilon_{T}=\epsilon\)_;_
4. \(-1\)_, iff_ \(V\dagger T\)_; or_
5. \(0\)_, iff_ \(z(V)\not\in T\) _and either_ \(V\) _and_ \(T\) _are distant or_ \(V\pitchfork T\)_._
_If \(|[V]|=2\), then \(|[V]\cdot[T]|\leq 1\), with equality iff \(V\) and \(T\) abut._
Proof sketch.: The result follows by using the fact that \(V\) is unbreakable and conditioning on how \(V\) meets \(T\) and whether or not \(|[V]|\geq 3\).
Greene uses this characterization of pairings together with the structure of changemaker basis elements to then prove the following lemmas.
**Lemma 4.11** (Corollary 4.3 of [11]).: _A changemaker basis \(\mathcal{V}\) contains at most one breakable vector, and it is tight. _
**Lemma 4.12** (Lemma 4.4 of [11]).: _Given a pair of unbreakable vectors \(v_{i},v_{j}\in\mathcal{V}\) with \(|v_{i}|\), \(|v_{j}|\geq 3\), we have \(|v_{i}\cdot v_{j}|\leq 1\), with equality if and only if \(T(v_{i})\dagger T(v_{j})\) and \(\epsilon(v_{i})\epsilon(v_{j})=-v_{i}\cdot v_{j}\). _
**Corollary 4.13** (Corollary 4.5 of [11]).: _If \(T(v_{i})\) and \(T(v_{j})\) are distinct unbreakable intervals with \(|v_{i}|\), \(|v_{j}|\geq 3\), then \(z(v_{i})\neq z(v_{j})\). _
We collect here one more observation about tight vectors in \(\mathcal{S}\).
**Lemma 4.14**.: _If \(v_{t}\in\mathcal{V}\) is tight, then no \(w_{j}\in\mathcal{W}\) is tight._
Sketch of proof.: If \(v_{t}\) and \(w_{j}\) are both tight, then \(3<|v_{t}|-2=v_{t}\cdot w_{j}<|w_{j}|-2\), so \(v_{t}\pitchfork w_{j}\) and \(w_{j}-v_{t}=\pm([T(w_{j})-T(v_{t})]-[T(v_{t})-T(w_{j})])\) is reducible. However, \(w_{j}-v_{t}=w_{j}|_{E_{8}}+2d_{t}+\sum_{i=t+1}^{n}d_{i}\) is irreducible, the proof of which is essentially the same as the proof of Lemma 4.6 (cf. Proposition 4.7).
### The intersection graph
We introduce here the notion of the _intersection graph_ of a collection of intervals, upon which the basis of our analysis in Section 5 rests. The intersection graph is a variant of the notion of the pairing graph adapted to the study of a collection of intervals in a linear lattice.
Recall that every element of the standard basis \(\mathcal{S}=\mathcal{V}\cup\mathcal{W}\) is represented by an interval in the pairing graph of the vertex basis of \(L\). We write \(\bar{\mathcal{S}}\), \(\bar{\mathcal{V}}\), \(\bar{\mathcal{W}}\) to denote the subset of unbreakable intervals in \(\mathcal{S}\), \(\mathcal{V}\), \(\mathcal{W}\), respectively.
**Definition 4.15**.: _Given a collection of intervals \(\mathcal{S}=\{T_{1},\dots,T_{k}\}\) whose classes are linearly independent, we define the intersection graph as_
\[G(\mathcal{S})=(\mathcal{S},\mathcal{E}),\ \ \mathcal{E}=\{(T_{i},T_{j})\ |\ T_{i}\ \text{abuts}\ T_{j}\}.\]
We now collect several key properties of the intersection graph.
**Definition 4.16**.: _The claw\((i;j,k,l)\) is the graph \(Y=(V,E)\) with_
\[V=\{i,j,k,l\}\quad\text{and}\quad E=\{(i,j),(i,k),(i,l)\}.\]
_A graph \(G\) is claw-free if it does not contain an induced subgraph isomorphic to \(Y\)._
Equivalently, if three vertices in \(G\) neighbor a fourth, then some two of them neighbor.
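Claw-freeness is a purely combinatorial condition that is easy to test mechanically. Here is a minimal sketch (ours) of the equivalent criterion just stated, applied to the adjacency of the \(E_{8}\) Dynkin diagram read off from the Gram matrix \(A\); it exhibits the claw \((e_{1};e_{2},e_{3},e_{5})\) used in Section 5.

```python
from itertools import combinations

def is_claw_free(adj):
    """adj maps each vertex to the set of its neighbours.  Checks that whenever
    three vertices neighbour a fourth, some two of those three are adjacent."""
    for i, nbrs in adj.items():
        for j, k, l in combinations(sorted(nbrs), 3):
            if k not in adj[j] and l not in adj[j] and l not in adj[k]:
                return False          # (i; j, k, l) is an induced claw
    return True

# Adjacency of the E_8 Dynkin diagram (vertex i <-> simple root e_i).
dynkin = {1: {2, 3, 5}, 2: {1}, 3: {1, 4}, 4: {3},
          5: {1, 6}, 6: {5, 7}, 7: {6, 8}, 8: {7}}
print(is_claw_free(dynkin))           # False, because of the claw (e_1; e_2, e_3, e_5)
```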
**Lemma 4.17**.: \(G(\mathcal{S})\) _is claw-free._
Proof.: If some interval \(T_{i}\) abuts three intervals \(T_{j},T_{k},T_{l}\), then it abuts some two at the same end, and then those two abut.
**Definition 4.18**.: _A heavy triple \((T_{i},T_{j},T_{k})\) consists of distinct intervals of norm \(\geq 3\) contained in the same component of \(G(\mathcal{S})\), none of which separates the other two in \(G(\mathcal{S})\). In particular, if \((T_{i},T_{j},T_{k})\) spans a triangle, then it spans a heavy triangle._
**Lemma 4.19** (Lemma 4.10 of [11]).: \(G(\bar{\mathcal{S}})\) _does not contain a heavy triple. _
**Lemma 4.20**.: _If \(\hat{G}(\mathcal{S})\) is connected, then so is \(G(\mathcal{S})\)._
Proof.: If \(\hat{G}(\mathcal{S})\) is connected, then \((\tau)^{\perp}\) is indecomposable, so \((\tau)^{\perp}\cong\Lambda(p,q)\). If now \(G(\mathcal{S})\) is disconnected, then there is a pair \(v,v^{\prime}\in\mathcal{S}\) with \(v\cdot v^{\prime}\neq 0\) but no path from \(v\) to \(v^{\prime}\) in \(G(\mathcal{S})\). It follows that \(v\pitchfork v^{\prime}\), \(T(v)=\{x_{i},\ldots,x_{j}\}\), \(T(v^{\prime})=\{x_{i^{\prime}},\ldots,x_{j^{\prime}}\}\), and, without loss of generality, \(i<i^{\prime}\) and either \(i^{\prime}<j<j^{\prime}\) or \(j^{\prime}<j\). If \(i^{\prime}<j<j^{\prime}\), then either \([\{x_{i},\ldots,x_{i^{\prime}-1}\}]\), \([\{x_{i^{\prime}},\ldots,x_{j}\}]\), or \([\{x_{j+1},\ldots,x_{j^{\prime}}\}]\) is in the span of \(\mathcal{S}\setminus\{v,v^{\prime}\}\), or else \(\mathcal{S}\) does not generate \((\tau)^{\perp}\), but then there is a path from \(v\) to \(v^{\prime}\). If \(j^{\prime}<j\), then either \([\{x_{i},\ldots,x_{i^{\prime}-1}\}]\) or \([\{x_{j^{\prime}+1},\ldots,x_{j}\}]\) is in the span of \(\mathcal{S}\setminus\{v,v^{\prime}\}\), or else \(\mathcal{S}\) does not generate \((\tau)^{\perp}\), but then there is a path from \(v\) to \(v^{\prime}\).
**Lemma 4.21**.: _For \(x\), \(y\), \(z\) irreducible in \(\Lambda(p,q)\), if \(|x|,|y|\geq 3\), \(|z|=2\), \(x\cdot y=0\), and \(|x\cdot z|=|y\cdot z|=1\), then \(\epsilon(x)\epsilon(y)=(x\cdot z)(y\cdot z)\)._
Proof.: In this scenario either \(x\mathord{\uparrow}z\) and \(y\mathord{\uparrow}z\), in which case \(\epsilon(x)\epsilon(z)=-x\cdot z\) and \(\epsilon(y)\epsilon(z)=-y\cdot z\), or \(z\prec x\) and \(z\prec y\), in which case \(\epsilon(x)\epsilon(z)=x\cdot z\) and \(\epsilon(y)\epsilon(z)=y\cdot z\), and in either case the lemma follows.
**Lemma 4.22**.: _For \(x\), \(y\), \(z\) irreducible in \(\Lambda(p,q)\), if \(|x|,|y|\geq 3\), \(x\pitchfork y\), \(|z|=2\), and \(|x\cdot z|=|y\cdot z|=1\), then \(\epsilon(x)\epsilon(y)=-(x\cdot z)(y\cdot z)\)_
Proof.: In this scenario, without loss of generality, \(z\prec x\) and \(z\mathord{\uparrow}y\), so \(\epsilon(x)\epsilon(z)=x\cdot z\) and \(\epsilon(y)\epsilon(z)=-y\cdot z\).
**Definition 4.23**.: _We say that there is a sign error between \(x\) and \(y\) mediated by \(z\) if there is a triple \(x\), \(y\), \(z\in\mathcal{S}\) standing in contradiction to Lemmas 4.21 and 4.22. We occasionally drop reference to the vectors \(x\), \(y\), or \(z\) if it is clear from context._
**Proposition 4.24**.: _If \((\tau)^{\perp}\cong\Lambda(p,q)\), then there are no sign errors. _
**Lemma 4.25**.: _If \(|v_{i}|,|w_{j}|\geq 3\), \(v_{i}\) and \(w_{j}\) are unbreakable, and \(v_{i}\cdot w_{j}\neq 0\), then \(v_{i}\not\pitchfork w_{j}\)._
Proof.: If \(v_{i}\pitchfork w_{j}\), then \(|v_{i}|=|w_{j}|\) and \(v_{i}\cdot w_{j}=|v_{i}|-2\). Then \(v_{i}-w_{j}\) is reducible, but \(|v_{i}-w_{j}|=4\), and \(v_{i}-w_{j}=e_{j}-d_{i}+d_{m}\) for some \(m\), and \(e_{j}\not\in(\tau)^{\perp}\), a contradiction.
**Lemma 4.26**.: _If \(|w_{i}|,|w_{j}|\geq 3\), \(w_{i}\) and \(w_{j}\) are unbreakable, and \(w_{i}\cdot w_{j}\neq 0\), then \(w_{i}\not\pitchfork w_{j}\)._
Proof.: If \(w_{i}\pitchfork w_{j}\), then \(|w_{i}|=|w_{j}|\) and \(w_{i}\cdot w_{j}=|w_{i}|-2\), and so \(|w_{i}-w_{j}|=4\) and \(w_{i}-w_{j}\) is reducible, but \(e_{j}-e_{i}\) is irreducible in \((\tau)^{\perp}\) since \(e_{j}\cdot\tau=e_{i}\cdot\tau\neq 0\).
**Lemma 4.27** (Lemma 3.8 of [11]).: _Given a cycle \(C\subset G(\mathcal{S})\), the intervals in \(V(C)\) abut pairwise at a common end. That is, there exists an index \(j\) such that each \(T_{i}\in V(C)\) has left endpoint \(x_{j+1}\) or right endpoint \(x_{j}\). In particular, \(V(C)\) induces a complete subgraph of \(G(\mathcal{S})\). _
In the sequel, we will abuse notation and use \(v_{i}\) and \(w_{j}\) to refer both to the elements of \(\mathcal{S}\) and to the intervals \(T(v_{i})\) and \(T(w_{j})\) that they represent in \(L\).
## 5. Identifying \(E_{8}\)-changemaker embeddings.
In this section, we identify when the orthogonal complement of an \(E_{8}\)-changemaker \(\tau=(s,\sigma)\in E_{8}\oplus\mathbb{Z}^{n+1}\) (\(n\geq 2\)) is a linear lattice, or a lattice that decomposes as the orthogonal sum of linear lattices, and record each such \(\tau\) in terms of \(s^{*}\) and \(\sigma\). One might assume, given the structural diversity of linear lattices admitting changemaker embeddings, that the full breadth of linear lattices admitting \(E_{8}\)-changemaker embeddings is difficult to capture. While there are forty-four distinct countably infinite families of linear lattices and orthogonal sums of pairs of linear lattices that admit \(E_{8}\)-changemaker embeddings, _a posteriori_ there are only \(4\) distinct families of changemaker tails \(\sigma\) represented in this census: \((1,\ldots,1)\), \((1,\ldots,1,n+1)\), \((1,2,\ldots,2)\), and \((1,1,2,\ldots,2)\in\mathbb{Z}^{n+1}\).
One notices that the claw \((e_{1};e_{2},e_{3},e_{5})\) in the \(E_{8}\) Dynkin diagram of Figure 5 persists in \(G(\mathcal{S})\) if \(s_{1}^{*}=s_{2}^{*}=s_{3}^{*}=s_{5}^{*}=0\). By Lemma 4.17, we therefore cannot have \(s_{1}^{*}=s_{2}^{*}=s_{3}^{*}=s_{5}^{*}=0\). Of most interest is \(w_{1}\), which corresponds to the trivalent vertex in the \(E_{8}\) Dynkin diagram. Clearly, there are only so many ways that the claw in the Dynkin diagram may be resolved given a fixed form for \(w_{1}\). Of next importance to us is the vector \(w_{5}\)--unlike \(w_{2}\) and \(w_{3}\), \(w_{5}\) is always unloaded, and thus \(w_{1}\cdot w_{5}\) is easier to control since \(w_{5}\cdot(-e_{1})=-1\) no matter what \(w_{5}\) is. Our general strategy for identifying when \(L\) is a linear lattice is: first we fix a changemaker tail \(\sigma\), then we fix \(w_{1}\), then, generically but not exclusively, we fix \(w_{5}\), and then check whether \(\{v_{1},\ldots,v_{n},w_{1},w_{5}\}\) can be completed to a full standard basis \(\mathcal{S}\) whose intersection graph contains none of the forbidden features detailed in Section 4.5.
This section, devoted entirely to the analysis necessary to characterize which linear lattices admit \(E_{8}\)-changemaker embeddings, consists of four general subsections, corresponding to the three general types of indecomposable changemaker bases (cf. Sections 6, 7, and 8 of [11]) and to the case when the sublattice of \(L\) generated by \(\mathcal{V}\) is decomposable (cf. Section 4 of [11]): in the first subsection we treat the case when every element of \(\mathcal{V}\) is just right; in the second subsection we treat the case when \(\mathcal{V}\) contains a gappy vector but no tight vector; in the third subsection we treat the case when there is a tight \(v_{t}\in\mathcal{V}\); and in the fourth we treat the case when the sublattice of \(L\) generated by \(\mathcal{V}\) is decomposable. Throughout, we rely extensively on the classification of changemaker bases whose intersection graphs contain no claws, heavy triples, or incomplete cycles carried out in [11] and make frequent reference to structural lemmas there without supplying proof here.
In what follows, we write \(A_{j}\) to mean \(\{0\leq i\leq n\colon w_{j}\cdot d_{i}\neq 0\}\). We write \(v\sim w\) if \(v\) and \(w\) are adjacent in \(G(\mathcal{S})\), i.e. \(v{\dagger}w\), \(v\prec w\), or \(w\prec v\).
### When all vectors are just right
First, we treat the case when \(\sigma=(1,\ldots,1)\in\mathbb{Z}^{n+1}\).
**Proposition 5.1**.: _Suppose \(\sigma=(1,\ldots,1)\in\mathbb{Z}^{n+1}\) and \(n\geq 2\). If \(|w_{1}|=2\) then one of the following holds:_
1. \(s^{*}=(0,1,n+1,0,0,0,0,0)\)_,_
2. \(s^{*}=(0,1,n+1,1,0,0,0,0)\)_,_
3. \(s^{*}=(0,1,0,0,n+1,0,0,0)\)_,_
4. \(s^{*}=(0,1,0,0,n+1,1,0,0)\)_,_
5. \(s^{*}=(0,n+2,n+1,0,0,0,0,0)\)_,_
6. \(s^{*}=(0,n+2,n+1,1,0,0,0,0)\)_,_
7. \(s^{*}=(0,n+2,0,0,n+1,0,0,0)\)_, or_
8. \(s^{*}=(0,n+2,0,0,n+1,1,0,0)\)_._
Proof.: Observe that if \(w_{j}\in\mathcal{W}\) is unloaded, then either \(|w_{j}|=2\), \(w_{j}=-e_{j}+d_{n}\), \(w_{j}=-e_{j}+d_{1}+\ldots+d_{n}\), \(w_{j}=-e_{j}+d_{0}+\ldots+d_{n}\), or \(w_{j}\) is tight. We begin to break our analysis into cases by conditioning on \(w_{5}\).
Case I: \(|w_{5}|=2\). First suppose that \(|w_{5}|=2\). Then both \(w_{2}\) and \(w_{3}\) are unloaded, for if \(w_{2}\) or \(w_{3}\) is loaded, then at least one of \(w_{2}\) and \(w_{3}\) is not tight, and so there is an incomplete cycle since \(|w_{j}|\geq 3\) for some \(j\in\{6,7,8\}\) if \(w_{2}\) or \(w_{3}\) is loaded and \(|w_{5}|=2\). It follows then that \(w_{2}\sim w_{3}\) and either \(w_{2}=-e_{2}+d_{0}+\ldots+d_{n}\) or \(w_{3}=-e_{3}+d_{0}+\ldots+d_{n}\) or else there is an incomplete cycle, and furthermore that \(|w_{6}|=|w_{7}|=|w_{8}|=2\).
Suppose, by way of contradiction, that \(w_{3}\) is tight. Then \(w_{2}=-e_{2}+d_{0}+\ldots+d_{n}\), and \(|w_{4}|\geq 3\) or else \((w_{3};v_{1},w_{1},w_{4})\) is a claw. If \(|w_{4}|\geq 3\) and \(w_{4}\) is unloaded, then \(w_{4}=-e_{4}+d_{n}\) or else \(2\leq w_{4}\cdot w_{2}\leq|w_{4}|-2\), but then \((v_{1},\ldots,v_{n},w_{4},w_{2},w_{3},v_{1})\) is an incomplete cycle. If \(w_{4}\) is loaded, then \(w_{4}|_{E_{8}}=-e_{4}+e_{2}\) and \(|A_{4}|\geq 2\). It follows that either \(w_{4}=-e_{4}+e_{2}+d_{n-1}+d_{n}\), in which case \((v_{1},\ldots,w_{4},w_{1},w_{3},v_{1})\) is an incomplete cycle, or \(w_{4}=-e_{4}+e_{2}+d_{n-2}+d_{n-1}+d_{n}\), in which case \((w_{3},w_{2},w_{4},w_{1},w_{3})\) is an incomplete cycle, or else \(2\leq w_{4}\cdot w_{2}\leq|w_{2}|-3\). Conclude that \(w_{3}\) is not tight.
Case I.1: \(w_{3}=-e_{3}+d_{0}+\ldots+d_{n}\).
Suppose that \(w_{3}=-e_{3}+d_{0}+\ldots+d_{n}\). Then either \(w_{2}=-e_{2}+d_{n}\), in which case \(\epsilon_{2}=-\epsilon_{3}\), or \(w_{2}\) is tight. If \(w_{2}=-e_{2}+d_{n}\), then \(w_{4}\) is unloaded, or else \(w_{4}=-e_{4}+e_{2}+2d_{0}+d_{1}+\ldots+d_{n}\), and so \((v_{1},\ldots v_{n},w_{2},w_{1},w_{4},v_{1})\) is an incomplete cycle, and so either \(|w_{4}|=2\) or \(w_{4}=-e_{4}+d_{n}\), or else \((w_{1},w_{2},w_{4},w_{3},w_{1})\) is an incomplete cycle or \(2\leq w_{4}\cdot w_{3}\leq|w_{3}|-3\) or \(w_{4}\) is tight, in which case either \((v_{1},\ldots,v_{n},w_{2},w_{4},v_{1})\) is an incomplete cycle or \(\epsilon_{2}=\epsilon_{4}=\epsilon_{3}\), a contradiction since \(w_{2}\cdot w_{3}=1\). So, if \(w_{2}=-e_{2}+d_{n}\), then either \(s^{*}=(0,1,n+1,0,0,0,0,0)\) or \(s^{*}=(0,1,n+1,1,0,0,0,0)\). Suppose now that \(w_{2}\) is tight. If \(w_{4}\) is loaded, then \(w_{4}\sim w_{1}\), and so \(w_{4}=-e_{4}+e_{2}+d_{0}+\ldots+d_{n}\) or else there is an incomplete cycle, but then \(w_{4}\cdot w_{3}=n=|w_{3}|-3\), which is absurd. Conclude that \(w_{4}\) is unloaded, and so either \(|w_{4}|=2\) or \(w_{4}=-e_{4}+d_{n}\), or else there is an incomplete cycle. So, if \(w_{2}\) is tight, then \(s^{*}=(0,n+2,n+1,0,0,0,0,0)\) or \(s^{*}=(0,n+2,n+1,1,0,0,0,0)\).
Case I.2: \(w_{2}=-e_{2}+d_{0}+\ldots+d_{n}\).
Suppose that \(w_{2}=-e_{2}+d_{0}+\ldots+d_{n}\). Then either \(w_{3}=-e_{3}+d_{n}\) or \(w_{3}\) is tight, and in either case there is a claw or an incomplete cycle if \(w_{4}\) is unloaded. If \(w_{4}\) is loaded, then \(w_{4}|_{E_{8}}=-e_{4}+e_{2}\), and so \(w_{4}\sim w_{1}\). But then \(w_{4}\sim w_{3}\) or else \((w_{1};w_{3},w_{4},w_{5})\) is a claw, and similarly \(w_{4}\sim w_{2}\) or else \((w_{1};w_{2},w_{4},w_{5})\) is a claw. But then \(A_{4}=\emptyset\) or else \((w_{1},w_{3},w_{4})\) is a negative triangle, in which case \(s_{4}^{*}=|\sigma|_{1}\), which is absurd since \(w_{4}\) is loaded.
Case II: \(w_{5}=-e_{5}+d_{n}\).
Now suppose that \(w_{5}=-e_{5}+d_{n}\). Then \(|w_{6}|\geq 3\) or else \((w_{5};v_{n},w_{1},w_{6})\) is a claw, and so either \(w_{6}=-e_{6}+d_{0}+\ldots+d_{n}\) or \(w_{6}\) is tight, or else \((v_{n};v_{n-1},w_{5},w_{6})\) is a claw. Suppose that \(w_{6}=-e_{6}+d_{0}+\ldots+d_{n}\). Let \(j\in\{2,3\}\). If \(A_{j}=\{n\}\), then \(|w_{2}|\), \(|w_{3}|\geq 3\) and \(w_{j}\) is loaded, or else \((v_{n},w_{5},w_{1},w_{j},v_{n})\) is an incomplete cycle, but then \((w_{2},w_{3},w_{5})\) is a heavy triple or either \(w_{2}\) or \(w_{3}\) is unloaded and tight, so either \((w_{2};v_{1},w_{1},w_{6})\) or \((w_{3};v_{1},w_{1},w_{6})\)
is a claw. If \(A_{j}=\{i,\ldots,n\}\) for \(0\leq i\leq n-1\), then \(2\leq w_{j}\cdot w_{6}\leq|w_{6}|-2\). If \(w_{j}\) is tight, then \(w_{j}\) is loaded and \(|w_{k}|\geq 3\) for \(k\in\{2,3\}\setminus\{j\}\), or else \((v_{1},\ldots,v_{n},w_{5},w_{1},w_{j},v_{1})\) is an incomplete cycle, but then \(w_{k}\) is unloaded, and \(A_{k}=\{i,\ldots,n\}\) for some \(0\leq i\leq n\), so \((w_{k},w_{5},w_{6})\) is a heavy triple. Suppose instead that \(w_{6}\) is tight. Again, let \(j\in\{2,3\}\). If \(A_{j}=\{n\}\), then \(|w_{2}|\), \(|w_{3}|\geq 3\) and \(w_{j}\) is loaded, or else \((v_{n},w_{5},w_{1},w_{j},v_{n})\) is an incomplete cycle, but then \((w_{5},w_{j},w_{6},v_{1},\ldots,v_{n},w_{j})\) is an incomplete cycle. If \(A_{j}=\{i,\ldots,n\}\) for some \(1\leq i\leq n-1\), then \(w_{j}\) is unloaded, and thus \((w_{1},w_{j},v_{i},\ldots,v_{n},w_{5},w_{1})\) is an incomplete cycle. If \(A_{j}=\{0,\ldots,n\}\), then \(w_{j}=-e_{j}+d_{0}+\ldots+d_{n}\), and so \((w_{1},w_{j},w_{6},v_{1},\ldots,v_{n},w_{5},w_{1})\) is an incomplete cycle.
Case III: \(w_{5}=-e_{5}+d_{1}+\ldots+d_{n}\).
Now suppose that \(w_{5}=-e_{5}+d_{1}+\ldots+d_{n}\). Then \(w_{6}=-e_{6}+d_{n}\), or \(n=2\) and \(w_{6}=-e_{6}+d_{1}+d_{2}\), or else there is a claw. Let \(j\in\{2,3\}\). If \(A_{j}=\{n\}\), then \((w_{j},w_{5},w_{6})\) is a heavy triple. If \(A_{j}=\{i,\ldots,n\}\) for \(0\leq i\leq n\), then \(2\leq w_{j}\cdot w_{5}\leq|w_{5}|-2\), which is absurd since \(w_{j}\) and \(w_{5}\) are unbreakable. If \(w_{j}\) is tight, then \(w_{j}\) is loaded or else \((w_{1},w_{j},v_{1},w_{5},w_{1})\) is an incomplete cycle, but then \(w_{k}\) is unloaded and \(w_{k}=-e_{k}+d_{i}+\ldots+d_{n}\) for \(k\in\{2,3\}\setminus\{j\}\). If \(n=2\) and \(w_{6}=-e_{6}+d_{1}+\ldots+d_{n}\), then \(|w_{j}|\geq 3\) for some \(j\in\{2,3\}\), and either \((w_{j},w_{5},w_{6})\) is a heavy triple, or \((v_{1};v_{2},w_{j},w_{5})\) is a claw.
Case IV: \(w_{5}=-e_{5}+d_{0}+\ldots+d_{n}\).
Now suppose that \(w_{5}=-e_{5}+d_{0}+\ldots+d_{n}\). Then \(|w_{j}|\geq 3\) for some unloaded \(w_{j}\) with \(j\in\{2,3\}\), and moreover either \(A_{j}=\{n\}\) or \(w_{j}\) is tight. It follows that \(|w_{k}|=2\) for \(k\in\{2,3\}\setminus\{j\}\), or else, without loss of generality, either \((w_{j},v_{n},w_{k},w_{5},w_{j})\) is an incomplete cycle or \((w_{j},v_{1},\ldots,v_{n},w_{k},w_{5},w_{j})\) is an incomplete cycle if \(w_{j}\) is tight. Suppose that \(w_{j}=-e_{j}+d_{n}\). Furthermore, \(|w_{7}|=|w_{8}|=2\) or else there is an incomplete cycle, and either \(|w_{6}|=2\), or \(w_{6}=-e_{6}+d_{n}\), or else \(w_{6}=-e_{6}+d_{n}+d_{n-1}\) and either \(A_{j}=\{n\}\) and \((w_{j},w_{5},w_{6})\) is a heavy triple, or \(w_{j}\) is tight and \((w_{5};w_{1},w_{j},w_{6})\) is a claw. In any event, if \(j=3\), then \(w_{4}\) is loaded or else either \(|w_{4}|=2\), and either \(w_{3}=-e_{3}+d_{n}\) and \((w_{3};v_{n},w_{4},w_{5})\) is a claw or \(w_{3}\) is tight and \((w_{3};v_{1},w_{4},w_{5})\) is a claw, \(w_{4}=-e_{4}+d_{n}\) and either \(w_{3}=-e_{3}+d_{n}\) and \((w_{3},w_{4},w_{5})\) is a heavy triple or \(w_{3}\) is tight and \((w_{3},v_{1},\ldots,v_{n},w_{4},w_{5},w_{3})\) is an incomplete cycle, or \(2\leq w_{4}\cdot w_{5}\leq|w_{5}|-2\), which is absurd. If \(j=3\) and \(w_{4}\) is loaded then, since \(|w_{1}|=|w_{2}|=2\), either \(|w_{6}|=2\) and \(w_{4}|_{E_{8}}=-e_{4}+e_{5}\), in which case \((w_{1},w_{4},w_{6},w_{5},w_{1})\) is an incomplete cycle, or \(w_{6}=-e_{6}+d_{n}\) and \(w_{4}|_{E_{8}}=-e_{4}+e_{5}+e_{6}\), in which case either \(w_{3}=-e_{3}+d_{n}\) and \((w_{1},w_{3},w_{6},w_{7},w_{4},w_{1})\) is an incomplete cycle or \(w_{3}\) is tight and \((w_{3},v_{1},\ldots,v_{n},w_{6},w_{7},w_{4},w_{1},w_{3})\) is an incomplete cycle. Conclude that \(j\neq 3\). Suppose instead that \(j=2\). Since then \(|w_{3}|=2\), we must have that \(w_{2}\) is unloaded, for if not, then \((w_{2},w_{3},w_{1},w_{5},w_{2})\) is an incomplete cycle. It follows that if \(|w_{4}|\geq 3\), then \(w_{4}\) is loaded, for if not, then either \(w_{4}=-e_{4}+d_{n}\) and either \(w_{2}=-e_{2}+d_{n}\) and \((w_{2},w_{4},w_{5})\) is a heavy triple or \(w_{2}\) is tight and \((w_{2},v_{1},\ldots,v_{n},w_{4},w_{5},w_{2})\) is an incomplete cycle, or \(w_{4}\) is tight, in which case \(w_{2}=-e_{2}+d_{n}\), so \((w_{4},v_{1},\ldots,v_{n},w_{2},w_{5},w_{4})\) is an incomplete cycle. Suppose now that \(w_{4}\) is loaded. If \(w_{4}|_{E_{8}}=-e_{4}+e_{2}\), then \(w_{2}\) is tight and \(w_{4}=-e_{4}+e_{2}+d_{n}\), or else \(2\leq w_{4}\cdot w_{5}\leq|w_{5}|-2\neq|w_{4}|-2\). But then \((w_{2},v_{1},\ldots,v_{n},w_{4},w_{5},w_{2})\) is an incomplete cycle. If \(w_{4}|_{E_{8}}=-e_{4}+e_{2}+e_{1}+e_{5}\), then either \(w_{6}=-e_{6}+d_{n}\) and \(w_{4}=-e_{4}+e_{2}+e_{1}+e_{5}\), in which case \(w_{2}\) is tight and so \((w_{2},v_{1},\ldots,v_{n},w_{6},w_{4},w_{2})\) is an incomplete cycle, or \(|w_{6}|=2\) and \(w_{4}\sim w_{6}\), and so \(w_{4}=-e_{4}+e_{2}+e_{1}+e_{5}\) or else \((w_{6};w_{4},w_{5},w_{7})\) is a claw or \((w_{4},w_{5},w_{6})\) is a negative triangle, but then \(w_{4}\cdot w_{2}=-1\), so \((w_{2},w_{4},w_{6},w_{5},w_{2})\)
is an incomplete cycle. If \(w_{4}|_{E_{8}}=-e_{4}+e_{2}+e_{1}+e_{5}+e_{6}\), then \(w_{4}=-e_{4}+e_{2}+e_{1}+e_{5}+e_{6}\) or else \((w_{7};w_{4},w_{6},w_{8})\) is a claw, but then either \(w_{2}=-e_{2}+d_{n}\) and \((w_{2},w_{6},w_{7},w_{4},w_{2})\) is an incomplete cycle, or \(w_{2}\) is tight and \((w_{2},v_{1},\ldots,v_{n},w_{6},w_{7},w_{4},w_{2})\) is an incomplete cycle. Conclude that \(w_{4}\) is not loaded, hence \(|w_{4}|=2\) or else \((w_{4},w_{3},w_{1},w_{5},w_{4})\) is an incomplete cycle, hence \(s^{*}=(0,1,0,0,n+1,0,0,0)\), \(s^{*}=(0,1,0,0,n+1,1,0,0)\), \(s^{*}=(0,n+2,0,0,n+1,0,0,0)\), or \(s^{*}=(0,n+2,0,0,n+1,1,0,0)\).
Case V: \(w_{5}\) is tight.
Suppose lastly, by way of contradiction, that \(w_{5}\) is tight. Then, if \(j\in\{2,3\}\), \(|w_{j}|\geq 3\), and \(w_{j}\cdot w_{1}=-1\), then \(w_{j}=-e_{j}+d_{0}+\ldots+d_{n}\) or else there is an incomplete cycle or \(w_{j}\) is loaded and \(2\leq w_{5}\cdot w_{j}\leq|w_{j}|-3\). It follows that \(|w_{6}|\geq 3\) or else \((w_{5};v_{1},w_{1},w_{6})\) is a claw, but then \(w_{6}=-e_{6}+d_{n}\) or else \(2\leq w_{6}\cdot w_{j}=|w_{6}|-2\) or else \(w_{6}\) is tight and \((v_{1};v_{2},w_{5},w_{6})\) is a claw, but then \((v_{1},\ldots,v_{n},w_{6},w_{j},w_{1},w_{5},v_{1})\) is an incomplete cycle.
**Proposition 5.2**.: _If \(\sigma=(1,\ldots,1)\in\mathbb{Z}^{n+1}\), \(n\geq 2\), and \(w_{1}=-e_{1}+d_{n}\), then one of the following holds:_
1. \(s^{*}=(1,n+2,n+1,0,0,0,0,0)\)_,_
2. \(s^{*}=(1,n+2,n+1,1,0,0,0,0)\)_,_
3. \(s^{*}=(1,n+2,0,0,n+1,0,0,0)\)_, or_
4. \(s^{*}=(1,n+2,0,0,n+1,1,0,0)\)_._
Proof.: As in the previous lemma, if \(w_{j}\in\mathcal{W}\) is unloaded, then either \(|w_{j}|=2\), \(w_{j}=-e_{j}+d_{n}\), \(w_{j}=-e_{j}+d_{1}+\ldots+d_{n}\), \(w_{j}=-e_{j}+d_{0}+\ldots+d_{n}\), or \(w_{j}\) is tight. We break our analysis into cases by conditioning on \(w_{5}\).
Case I: \(|w_{5}|=2\).
Suppose first that \(|w_{5}|=2\). Let \(j\in\{2,3\}\). Then both \(w_{2}\) and \(w_{3}\) are unloaded and \(|w_{2}|\), \(|w_{3}|\geq 3\), or else there is a claw at \(w_{1}\). It follows that \(w_{j}=-e_{j}+d_{0}+\ldots+d_{n}\) and \(w_{k}\) is tight for \(\{j,k\}=\{2,3\}\), and so \(|w_{6}|=|w_{7}|=|w_{8}|=2\) or else there is an incomplete cycle. If \(w_{4}\) is loaded then \(w_{4}|_{E_{8}}=-e_{4}+e_{2}+e_{1}\), so \((w_{5};w_{1},w_{4},w_{6})\) is a claw. It follows that \(|w_{4}|=2\), in which case \(w_{3}=-e_{3}+d_{0}+\ldots+d_{n}\) or \((w_{3};v_{1},w_{2},w_{4})\) is a claw, or \(w_{4}=-e_{4}+d_{n}\), in which case \(w_{3}=-e_{3}+d_{0}+\ldots+d_{n}\), or else \(w_{3}\) is tight and \((w_{3},v_{1},\ldots,v_{n},w_{4},w_{2},w_{3})\) is an incomplete cycle. Conclude that \(s^{*}=(1,n+2,n+1,0,0,0,0,0)\) or \(s^{*}=(1,n+2,n+1,1,0,0,0,0)\).
Case II: \(w_{5}=-e_{5}+d_{n}\).
Suppose now that \(w_{5}=-e_{5}+d_{n}\). Then \((v_{n};v_{n-1},w_{1},w_{5})\) is a claw.
Case III: \(w_{5}=-e_{5}+d_{1}+\ldots+d_{n}\).
Suppose now that \(w_{5}=-e_{5}+d_{1}+\ldots+d_{n}\). Then, for \(j\in\{2,3\}\), \(w_{j}\cdot v_{i}=0\) for all \(1\leq i\leq n\) or else either there is a claw or \(w_{j}\) is unbreakable and \(2\leq w_{j}\cdot w_{5}\leq|w_{5}|-2\), and \(A_{j}\neq\{0,1,\ldots,n\}\) or else \(w_{j}\cdot w_{5}=|w_{5}|-2<|w_{j}|-2\). It follows that \(|w_{2}|=|w_{3}|=2\), but then \((w_{1};v_{n},w_{2},w_{3})\) is a claw.
Case IV: \(w_{5}=-e_{5}+d_{0}+\ldots+d_{n}\).
Suppose now that \(w_{5}=-e_{5}+d_{0}+\ldots+d_{n}\). Then either \(|w_{6}|=2\) or \(w_{6}=-e_{6}+d_{n}\), and so either \(|w_{7}|=2\) or \(w_{7}\) is tight, in which case either \((w_{7};v_{1},w_{5},w_{8})\) is a claw or \((w_{1},w_{6},w_{8})\) is a
heavy triple, and therefore \(|w_{8}|=2\) or else either \(w_{8}\) is tight, in which case \((w_{5},w_{6},w_{7},w_{8},w_{5})\) is an incomplete cycle if \(|w_{6}|=2\) and \(w_{8}\) mediates a sign error between \(w_{1}\) and \(w_{6}\) if \(w_{6}=-e_{6}+d_{n}\), or \(w_{8}=-e_{8}+d_{n}\) and \((w_{8};v_{n},w_{5},w_{7})\) is a claw. Furthermore, \(w_{j}\) is tight and \(|w_{k}|=2\) for \(\{j,k\}=\{2,3\}\), since if \(|w_{2}|=|w_{3}|=2\), then \((w_{1};v_{n},w_{2},w_{3})\) is a claw, if \(A_{j}=\{n\}\) and \(w_{j}\cdot e_{1}=-1\) then \((v_{n};v_{n-1},w_{1},w_{j})\) is a claw, and if \(A_{j}=\{n\}\) and \(w_{j}\cdot e_{1}=0\) then \(w_{k}\sim w_{5}\) and either \(w_{k}\) is unbreakable, in which case \((w_{2},w_{3},w_{5})\) is a heavy triple, or \(w_{k}\) is tight, in which case \((w_{k},v_{1},\ldots,v_{n},w_{j},w_{5},w_{k})\) is an incomplete cycle. If \(|w_{2}|=2\), then \((w_{3};v_{1},w_{4},w_{5})\) is a claw if \(|w_{4}|=2\), and if \(|w_{4}|\geq 3\) and \(w_{4}\) is unloaded then \((v_{1},\ldots,v_{n},w_{1},w_{4},w_{5},w_{3},v_{1})\) is an incomplete cycle. If \(w_{4}\) is loaded then either \(w_{4}|_{E_{8}}=-e_{4}+e_{1}+e_{5}\), in which case either \(|w_{6}|=2\) and \(w_{4}=-e_{4}+e_{1}+e_{5}+d_{n}\), and therefore \((w_{2},w_{1},v_{n},w_{4},w_{2})\) is an incomplete cycle, or \(w_{6}=-e_{6}+d_{n}\) and \(w_{4}=-e_{4}+e_{1}+e_{5}\), so \(s_{4}^{*}=|\sigma|_{1}+1\), or \(w_{6}=-e_{6}+d_{n}\) and \(w_{4}|_{E_{8}}=-e_{4}+e_{1}+e_{5}+e_{6}\), in which case \((w_{2},w_{1},w_{6},w_{7},w_{4},w_{2})\) is an incomplete cycle. Conclude that \(w_{2}\) is tight and \(|w_{3}|=2\). If \(w_{4}|_{E_{8}}=-e_{4}+e_{2}+e_{1}\), then \(2\leq w_{4}\cdot w_{5}<|w_{5}|-2\), if \(w_{4}|_{E_{8}}=-e_{4}+e_{2}+e_{1}+e_{5}\) then either \(|w_{6}|=2\), in which case \(w_{4}=-e_{4}+e_{2}+e_{1}+e_{5}\) or else either \((w_{6};w_{4},w_{5},w_{7})\) is a claw or \((w_{4},w_{5},w_{6})\) is a negative triangle, but then there is a sign error between \(w_{4}\) and \(w_{5}\) mediated by \(w_{2}\), or \(w_{6}=-e_{6}+d_{n}\), in which case \(w_{4}=-e_{4}+e_{2}+e_{1}+e_{5}\), in which case \((w_{2},v_{1},\ldots,v_{n},w_{6},w_{4},w_{5},w_{2})\) is an incomplete cycle, and if \(w_{4}|_{E_{8}}=-e_{4}+e_{2}+e_{1}+e_{5}+e_{6}\), then \(w_{4}=-e_{4}+e_{2}+e_{1}+e_{5}+e_{6}\) or else \((w_{7};w_{4},w_{6},w_{8})\) is a claw, but then either \((w_{2},v_{1},\ldots,v_{n},w_{6},w_{2})\) is an incomplete cycle, or there is a sign error between \(w_{4}\) and \(w_{6}\) mediated by \(w_{2}\). If \(w_{4}\) is unloaded, then \(|w_{4}|=2\) or else \((w_{4},v_{n},w_{1},w_{3},w_{4})\) is an incomplete cycle. Conclude that \(s^{*}=(1,n+2,0,0,n+1,0,0,0)\) or \(s^{*}=(1,n+2,0,0,n+1,1,0,0)\).
Case V: \(w_{5}\) is tight.
Suppose lastly that \(w_{5}\) is tight. Then for \(j\in\{2,3\}\) with \(w_{j}\) unloaded, \(w_{j}=-e_{j}+d_{0}+\ldots+d_{n}\) or else \((v_{n};v_{n-1},w_{1},w_{j})\) is a claw, and so \(|w_{6}|\geq 3\) or else \((w_{5};v_{1},w_{j},w_{6})\) is a claw. It follows that \(w_{6}=-e_{6}+d_{n}\) since \(w_{6}\) is not tight and \(w_{6}\cdot w_{2}\leq 1\), but then \((v_{1},\ldots,v_{n},w_{6},w_{j},w_{5},v_{1})\) is an incomplete cycle.
**Lemma 5.3**.: _If \(\sigma=(1,\ldots,1)\in\mathbb{Z}^{n+1}\), \(n\geq 2\), and \(w_{1}=-e_{1}+d_{1}+\ldots+d_{n}\), then \((\tau)^{\perp}\) is not a linear lattice._
Proof.: If \(w_{1}=-e_{1}+d_{1}+\ldots+d_{n}\), then for \(j\in\{2,3,5\}\) with \(w_{j}\) unloaded, either \(w_{j}=-e_{j}+d_{n}\) or \(n=2\) and \(w_{j}=-e_{j}+d_{1}+d_{2}\), or else \(w_{j}\) is tight and \((v_{1};v_{2},w_{1},w_{j})\) is a claw. It follows that \(|w_{j}|\geq 3\) for at most one \(j\in\{2,3,5\}\) with \(w_{j}\) unloaded or else there is a heavy triple or a claw at either \(v_{1}\) or \(v_{n}\), so there is a claw at \(w_{1}\) if \(w_{2}\) and \(w_{3}\) are unloaded. If \(w_{3}\) is loaded, then \(w_{3}\) is tight or else \(w_{3}=-e_{3}+e_{2}+d_{n}\) and \((w_{1},v_{1},\ldots,v_{n},w_{3},w_{1})\) is an incomplete cycle or \(2\leq w_{3}\cdot w_{1}=|w_{3}|-4\), but then \((v_{1};v_{2},w_{1},w_{3})\) is an incomplete cycle. If \(w_{2}\) is loaded and \(|w_{3}|=2\), then \(|w_{5}|\geq 3\) or else \((w_{1};v_{1},w_{3},w_{5})\) is an incomplete cycle, so unless \(w_{2}\) is tight, then \(w_{2}\sim w_{5}\) and either \(w_{2}\sim w_{1}\) or \(w_{2}\sim v_{n}\), and in either case \((w_{1},w_{2},w_{5})\) is a heavy triple. If \(w_{2}\) is loaded and \(|w_{3}|\geq 3\), then either \(w_{2}\) is tight, in which case \((v_{1};v_{2},w_{1},w_{2})\) is a claw, or \(w_{2}\cdot w_{1}=0\), in which case \(w_{2}=-e_{2}+e_{3}+e_{4}\), or else \(w_{2}\cdot w_{1}=1\) and \((v_{1},\ldots,v_{n},w_{2},w_{1},v_{1})\) is an incomplete cycle. If \(w_{2}=-e_{2}+e_{3}+e_{4}\), then \(|w_{3}|\geq 3\) and \(|w_{4}|\geq 3\) and \(w_{4}\) is not tight, or else \((v_{1};v_{2},w_{1},w_{4})\) is a claw, so \(w_{4}=-e_{4}+d_{n}\) or else \(2\leq w_{4}\cdot w_{1}\leq|w_{1}|-2\), and therefore \((v_{1},\ldots,v_{n},w_{4},w_{1},v_{1})\) is an incomplete cycle.
**Proposition 5.4**.: _If \(\sigma=(1,\ldots,1)\in\mathbb{Z}^{n+1}\), \(n\geq 2\), and \(w_{1}=-e_{1}+d_{0}+\ldots+d_{n}\), then one of the following holds:_
1. \(s^{*}=(n+1,1,0,0,0,0,0,0)\)_,_
2. \(s^{*}=(n+1,0,1,0,0,0,0,0)\)_,_
3. \(s^{*}=(n+1,0,0,0,1,0,0,0)\)_,_
4. \(s^{*}=(n+1,1,1,0,0,0,0,0)\)_,_
5. \(s^{*}=(n+1,1,0,0,1,0,0,0)\)_,_
6. \(s^{*}=(n+1,0,1,0,1,0,0,0)\)_,_
7. \(s^{*}=(n+1,1,0,1,0,0,0,0)\)_,_
8. \(s^{*}=(n+1,0,0,1,1,0,0,0)\)_,_
9. \(s^{*}=(n+1,0,0,n+2,1,0,0,0)\)_,_
10. \(s^{*}=(n+1,1,1,n+2,0,0,0,0)\)_,_
11. \(s^{*}=(n+1,1,0,n+2,0,0,0,0)\)_, or_
12. \(s^{*}=(n+1,0,1,n+2,1,0,0,0)\)_._
Proof.: Since \(w_{1}\cdot v_{i}=0\) for all \(1\leq i\leq n\), \(|w_{j}|=2\) for at most two \(j\in\{2,3,5\}\). We will break the analysis at hand into three cases based on the size of \(\{j\in\{2,3,5\}\colon|w_{j}|\geq 3\}\).
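As a quick check of the opening observation (again assuming orthonormal \(d_{i}\), \(e_{j}\) and \(v_{i}=-d_{i}+d_{i-1}\) for \(\sigma=(1,\ldots,1)\)): with \(w_{1}=-e_{1}+d_{0}+\ldots+d_{n}\) one computes \(w_{1}\cdot v_{i}=(d_{0}+\ldots+d_{n})\cdot(-d_{i}+d_{i-1})=-1+1=0\) for every \(1\leq i\leq n\), and \(w_{1}\cdot w_{j}=|A_{j}|\) for any \(w_{j}=-e_{j}+\sum_{i\in A_{j}}d_{i}\) with \(j\neq 1\).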
Case I: \(|\{j\in\{2,3,5\}\colon|w_{j}|\geq 3\}|=1\).
Suppose now that \(|w_{j}|\geq 3\) for a single \(j\in\{2,3,5\}\). It follows that \(|w_{7}|=|w_{8}|=2\) or else there is a claw at \(w_{1}\). Now, if \(|w_{6}|\geq 3\), then \(w_{6}=-e_{6}+d_{n}\) and \((w_{6};v_{n},w_{1},w_{7})\) is a claw or \(w_{6}\) is tight and \((w_{6};v_{1},w_{1},w_{7})\) is a claw. It follows that neither \(w_{2}\) nor \(w_{3}\) is loaded in this case, since if \(|w_{2}|\) or \(|w_{3}|\geq 3\) then \(|w_{5}|=|w_{6}|=|w_{7}|=|w_{8}|=2\). Furthermore, we must have \(w_{j}=-e_{j}+d_{n}\) or \(w_{j}\) is tight, or else \((w_{1};w_{2},w_{3},w_{5})\) is a claw.
Case I.1: \(|w_{2}|\geq 3\).
Suppose that \(|w_{2}|\geq 3\), \(|w_{3}|=2\), and \(|w_{5}|=2\). If \(w_{4}|_{E_{8}}=-e_{4}+e_{2}\), then \(w_{2}\) is tight and \(w_{4}=-e_{4}+e_{2}+d_{i}+\ldots+d_{n}\) for some \(1\leq i\leq n\), but then \(2\leq w_{4}\cdot w_{1}\leq|w_{1}|-2\), which is absurd since \(w_{4}\) and \(w_{1}\) are unbreakable. If \(w_{4}|_{E_{8}}=-e_{4}+e_{2}+e_{1}\), then \(w_{4}=-e_{4}+e_{2}+e_{1}\), or else either \((w_{5};w_{1},w_{4},w_{6})\) is a claw or \((w_{1},w_{4},w_{5})\) is a negative triangle, but then \(w_{4}{\dagger}w_{2}\), \(w_{4}{\dagger}w_{1}\), and \(w_{1}\pitchfork w_{2}\), which is absurd. Conclude that \(w_{4}\) is not loaded. If \(w_{2}\) is tight, then either \(|w_{4}|=2\) and \(\hat{G}(\mathcal{S})\) is connected but \(G(\mathcal{S})\) is not, or \(w_{4}=-e_{4}+d_{n}\), in which case there is either a sign error between \(w_{4}\) and \(w_{1}\) mediated by \(w_{2}\) or \((v_{1},\ldots,v_{n},w_{4},w_{2},v_{1})\) is an incomplete cycle. Conclude that \(w_{2}=-e_{2}+d_{n}\) and either \(|w_{4}|=2\), \(w_{4}=-e_{4}+d_{n}\), or \(w_{4}\) is unloaded and tight, and so either \(s^{*}=(n+1,1,0,0,0,0,0,0)\), \(s^{*}=(n+1,1,0,1,0,0,0,0)\), or \(s^{*}=(n+1,1,0,n+2,0,0,0,0)\).
Case I.2: \(|w_{3}|\geq 3\).
Suppose now that \(|w_{3}|\geq 3\), \(|w_{2}|=2\), and \(|w_{5}|=2\). As before, we must have \(|w_{6}|=|w_{7}|=|w_{8}|=2\). It follows that \(w_{4}\) is unloaded, or else \(w_{4}|_{E_{8}}=-e_{4}+e_{1}\), in which case \((w_{2},w_{1},w_{5},w_{4},w_{2})\) is an incomplete cycle. If \(w_{3}\) is tight, then \(\hat{G}(\mathcal{S})\) is connected but \(G(\mathcal{S})\) is not if \(|w_{4}|=2\), and \((w_{1};w_{2},w_{4},w_{5})\) is a claw if \(w_{4}=-e_{4}+d_{n}\). It follows that \(w_{3}=-e_{3}+d_{n}\), and thus that \(|w_{4}|=2\) since \((v_{n};v_{n-1},w_{3},w_{4})\) is a claw if \(w_{4}=-e_{4}+d_{n}\) and \((w_{1};w_{2},w_{4},w_{5})\) is a claw if \(w_{4}\) is tight, so \(s^{*}=(n+1,0,1,0,0,0,0,0)\).
Case I.3: \(|w_{5}|\geq 3\).
Suppose now that \(|w_{5}|\geq 3\), \(|w_{2}|=2\), and \(|w_{3}|=2\). It follows that \(|w_{6}|=|w_{7}|=|w_{8}|=2\) or else there is a claw at \(w_{1}\). Suppose that \(w_{4}|_{E_{8}}=-e_{4}+e_{1}\). Then \(w_{5}\) is tight and \(w_{4}=-e_{4}+e_{1}+d_{i}+\ldots+d_{n}\) for some \(0\leq i\leq n-1\), but then \(2\leq w_{4}\cdot w_{5}=|w_{4}|-3\), which is absurd since \(w_{4}\) is unbreakable. If \(w_{4}|_{E_{8}}=-e_{4}+e_{1}+e_{5}\), then \(w_{4}=-e_{4}+e_{1}+e_{5}\) or else either \((w_{6};w_{4},w_{5},w_{7})\) is a claw or \((w_{4},w_{5},w_{6})\) is a negative triangle, so \(w_{5}\) is tight and \((w_{5};v_{1},w_{4},w_{6})\) is a claw. Conclude that \(w_{4}\) is unloaded. If \(w_{5}\) is tight, then \(\hat{G}(\mathcal{S})\) is connected but \(G(\mathcal{S})\) is not if \(|w_{4}|=2\), and either there is a sign error between \(w_{1}\) and \(w_{4}\) mediated by \(w_{5}\) or \((v_{1},\ldots,v_{n},w_{4},w_{5},v_{1})\) is an incomplete cycle if \(w_{4}=-e_{4}+d_{n}\). Conclude that \(w_{5}=-e_{5}+d_{n}\) and either \(|w_{4}|=2\), \(w_{4}=-e_{4}+d_{n}\), or \(w_{4}\) is unloaded and tight; hence \(s^{*}=(n+1,0,0,0,1,0,0,0)\), \(s^{*}=(n+1,0,0,1,1,0,0,0)\), or \(s^{*}=(n+1,0,0,n+2,1,0,0,0)\).
Case II: \(|\{j\in\{2,3,5\}\colon|w_{j}|\geq 3\}|=2\).
Case II.1: \(|w_{5}|=2\).
Suppose now that \(|w_{5}|=2\) and \(|w_{2}|\), \(|w_{3}|\geq 3\). If \(w_{j}\) is loaded for \(j\in\{2,3\}\) then \(A_{j}=\{n\}\) and \(w_{j}{\dagger}w_{1}\) or \(w_{j}\) is tight and \(w_{1}\prec w_{j}\), and \(|w_{k}|\geq 3\) for some \(k\in\{6,7,8\}\) minimal, and so the subgraph of \(G(\mathcal{S})\) induced by \(\{w_{j},w_{1},w_{5},\ldots,w_{6},v_{1},\ldots,v_{n}\}\) contains an incomplete cycle. Conclude that at least one \(w_{j}=-e_{j}+d_{n}\) and at most one \(w_{j}\) is tight and unloaded for \(j\in\{2,3\}\). Note that \(|w_{7}|=|w_{8}|=2\), otherwise \(w_{k}\sim w_{1}\) and either \(w_{k}\sim v_{1}\) or \(w_{k}\sim v_{n}\) for \(k\in\{7,8\}\) minimal, and either \(|w_{6}|=2\), in which case \((w_{1},w_{5},w_{6},\ldots,w_{k},w_{1})\) is an incomplete cycle or \(|w_{6}|\geq 3\), in which case \(w_{6}\sim w_{1}\) and either \(w_{6}\sim v_{1}\) or \(w_{6}\sim v_{n}\), in which case the subgraph of \(G(\mathcal{S})\) induced by \((w_{1},w_{5},w_{6},w_{7},w_{8},v_{1},\ldots,v_{n})\) contains an incomplete cycle. If \(|w_{6}|\geq 3\), then either \(w_{6}=-e_{6}+d_{1}\) or \(w_{6}\) is tight. Suppose \(w_{6}=-e_{6}+d_{1}\). Then \(w_{j}\) is tight and \(w_{k}=-e_{k}+d_{n}\) for \(\{j,k\}=\{2,3\}\), or else \((w_{j},w_{k},w_{6})\) is a heavy triple. But then \(w_{k}{\dagger}w_{6}\) and \(\epsilon_{k}=-\epsilon_{6}\), but \(w_{j}\pitchfork w_{k}\) and \(w_{j}\pitchfork w_{6}\) or else there is an incomplete cycle, so \(\epsilon_{k}=\epsilon_{j}=\epsilon_{6}\), a contradiction. Suppose that \(w_{6}\) is tight. Then \(w_{j}=-e_{j}+d_{n}\) and \(w_{k}=-e_{k}+d_{n}\) for \(j,k\in\{2,3\}\), but then \(\epsilon_{j}=-\epsilon_{k}\) since \(w_{j}{\dagger}w_{k}\) and \(\epsilon_{j}=\epsilon_{6}=\epsilon_{k}\) since \(w_{j}\pitchfork w_{6}\) and \(w_{k}\pitchfork w_{6}\). Conclude that \(|w_{6}|=2\). If \(w_{4}|_{E_{8}}=-e_{4}+e_{2}\), then \(w_{2}\) is tight, and \(2\leq w_{4}\cdot w_{1}\leq|w_{1}|-2\), which is absurd since \(w_{4}\) and \(w_{1}\) are unbreakable. If \(w_{4}|_{E_{8}}=-e_{4}+e_{2}+e_{1}\), then \(w_{4}=-e_{4}+e_{2}+e_{1}\) or else either \((w_{5};w_{1},w_{4},w_{6})\) is a claw or \((w_{4},w_{5},w_{6})\) is a negative triangle, but then \(w_{2}\) is tight and \(w_{4}{\dagger}w_{2}\), which is absurd since \(w_{4}{\dagger}w_{2}\) and \(w_{4}{\dagger}w_{1}\) but \(w_{1}\pitchfork w_{2}\). Conclude that \(w_{4}\) is unloaded. If \(w_{2}\) or \(w_{3}\) is tight, then \(w_{4}=-e_{4}+d_{n}\) or else \(\hat{G}(\mathcal{S})\) is connected but \(G(\mathcal{S})\) is not, but then there is a sign error between \(w_{4}\) and \(w_{1}\) mediated by \(w_{2}\) or \((v_{1},\ldots,v_{n},w_{4},w_{2},v_{1})\) is an incomplete cycle if \(w_{2}\) is tight, and \(w_{4}\) separates \(w_{3}\) from \(w_{1}\) in \(G(\mathcal{S})\) but \(w_{1}\pitchfork w_{3}\) and \(w_{4}\cdot w_{3}=0\) if \(w_{3}\) is tight. Conclude that \(w_{2}=-e_{2}+d_{n}\), \(w_{3}=-e_{3}+d_{n}\), and either \(|w_{4}|=2\) or \(w_{4}\) is unloaded and tight; hence \(s^{*}=(n+1,1,1,0,0,0,0,0,0)\) or \(s^{*}=(n+1,1,1,n+2,0,0,0,0)\).
Case II.2: \(|w_{2}|=2\).
Suppose now that \(|w_{2}|=2\) and \(|w_{3}|\), \(|w_{5}|\geq 3\). It follows that \(w_{3}\) is unloaded, hence at most one \(w_{j}\) is tight and at least one \(w_{j}=-e_{j}+d_{n}\) for \(j\in\{3,5\}\). We must furthermore have that \(|w_{7}|=|w_{8}|=2\) or else for some \(j\in\{7,8\}\) either \((w_{3},w_{5},w_{j})\) is a heavy triple, or one of \(w_{3},w_{5},w_{j}\) is tight and so either there is a sign error between the other two or the subgraph induced by \(\{v_{1},\ldots,v_{n},w_{3},w_{5},w_{j}\}\) contains an incomplete cycle. It follows that \(|w_{6}|=2\), or else \(w_{6}=-e_{6}+d_{n}\) and \((w_{6};v_{n},w_{1},w_{7})\) is a claw, or \(w_{6}\) is tight and \((w_{6};v_{1},w_{1},w_{7})\) is a
claw. If \(w_{4}|_{E_{8}}=-e_{4}+e_{1}\), then \(w_{5}\) is tight and \(|A_{4}|\geq 1\), but then \(2\leq w_{4}\cdot w_{5}=|w_{4}|-3\), which is absurd since \(w_{4}\) is unbreakable. If \(w_{4}|_{E_{8}}=-e_{4}+e_{1}+e_{5}\), then \(w_{4}=-e_{4}+e_{1}+e_{5}\) or else either \((w_{6};w_{4},w_{5},w_{7})\) is a claw or \((w_{4},w_{5},w_{6})\) is a negative triangle, but then \(w_{5}\) is tight and \(w_{1}\pitchfork w_{5}\), which is absurd since \(w_{4}\dagger w_{1}\) and \(w_{4}\dagger w_{5}\). Conclude that \(w_{4}\) is unloaded. If \(w_{3}\) or \(w_{5}\) is tight, then \(w_{4}=-e_{4}+d_{n}\) or else \(\hat{G}(\mathcal{S})\) is connected but \(G(\mathcal{S})\) is not, but then \((v_{n};v_{n-1},w_{3},w_{4})\) is a claw if \(w_{5}\) is tight and \(w_{4}\) separates \(w_{1}\) from \(w_{3}\), \(w_{1}\pitchfork w_{3}\), and \(w_{4}\cdot w_{3}=0\) if \(w_{3}\) is tight. Conclude that \(w_{3}=-e_{3}+d_{n}\), \(w_{5}=-e_{5}+d_{n}\), and either \(|w_{4}|=2\) or \(w_{4}\) is unloaded and tight, hence \(s^{*}=(n+1,0,1,0,1,0,0,0)\) or \(s^{*}=(n+1,0,1,n+2,1,0,0,0)\).
Case II.3: \(|w_{3}|=2\).
Suppose now that \(|w_{3}|=2\) and \(|w_{2}|\), \(|w_{5}|\geq 3\). If \(w_{2}\) is loaded, then \(w_{2}|_{E_{8}}=-e_{2}+e_{4}\), so \(|w_{4}|\geq 3\) and \(w_{4}\) is unloaded, and therefore \(w_{4}\sim w_{1}\), and furthermore \(w_{2}\sim w_{3}\), \(w_{4}\sim w_{3}\), and \(w_{3}\sim w_{1}\). It follows that either \(w_{2}\) or \(w_{4}\) is tight, or else \((w_{1},w_{2},w_{4})\) is a heavy triple. If \(w_{2}\) is tight, then \(w_{4}=-e_{4}+d_{n}\), in which case \((v_{1},\ldots,v_{n},w_{4},w_{1},w_{3},w_{2},v_{1})\) is an incomplete cycle. If \(w_{4}\) is tight, then \(w_{2}\cdot v_{i}=0\) for all \(1\leq i\leq n\) or else the subgraph induced by \((v_{1},\ldots,v_{n},w_{1},w_{2},w_{3},w_{4})\) contains an incomplete cycle, but then \(2\leq w_{2}\cdot w_{1}=|w_{1}|-3\), which is absurd since both \(w_{1}\) and \(w_{2}\) are unbreakable. Conclude that \(w_{2}\) is not loaded, and therefore at most one \(w_{j}\) is tight and at least one \(w_{j}=-e_{j}+d_{n}\) for \(j\in\{2,5\}\).
We must furthermore have \(|w_{7}|=|w_{8}|=2\), or else for some \(j\in\{7,8\}\) either \((w_{2},w_{5},w_{j})\) is a heavy triple, or one of \(w_{2}\), \(w_{5}\), \(w_{j}\) is tight and there is either a sign error between the other two or the subgraph induced by \(\{v_{1},\ldots,v_{n},w_{2},w_{5},w_{j}\}\) contains an incomplete cycle. It follows that \(|w_{6}|=2\), or else \(w_{6}=-e_{6}+d_{n}\) and \((w_{6};v_{n},w_{1},w_{7})\) is a claw, or \(w_{6}\) is tight and \((w_{6};v_{1},w_{1},w_{7})\) is a claw. If \(w_{4}|_{E_{8}}=-e_{4}+e_{2}\), then \(w_{2}\) is tight, so \(w_{4}\) is unbreakable and \(w_{4}\dagger w_{5}\), in which case \(w_{4}=-e_{4}+e_{2}+d_{n}\) or else \((w_{5};v_{n},w_{4},w_{6})\) is a claw, but then \(2=w_{4}\cdot w_{1}=|w_{1}|-n-1\), which is absurd. If \(w_{4}|_{E_{8}}=-e_{4}+e_{2}+e_{1}\), then either \(w_{2}\) is tight and \(w_{5}=-e_{5}+d_{n}\) and \(w_{4}=-e_{4}+e_{2}+e_{1}\), in which case \((v_{1},\ldots,v_{n},w_{5},w_{4},w_{2},v_{1})\) is an incomplete cycle, or \(w_{2}=-e_{2}+d_{n}\) and \(w_{5}\) is tight and either \(2\leq w_{4}\cdot w_{5}\leq|w_{4}|-3\) or \(w_{4}=-e_{4}+e_{2}+e_{1}+d_{0}+\ldots+d_{n}\) and \((w_{5};v_{1},w_{4},w_{6})\) is a claw. If \(w_{4}|_{E_{8}}=-e_{4}+e_{2}+e_{1}+e_{5}\), then \(w_{4}=-e_{4}+e_{2}+e_{1}+e_{5}\) or else either \((w_{6};w_{4},w_{5},w_{7})\) is a claw or \((w_{4},w_{5},w_{6})\) is a negative triangle, but then the subgraph induced by \(\{v_{1},\ldots,v_{n},w_{2},w_{4},w_{5}\}\) contains an incomplete cycle. Conclude that \(w_{4}\) is unloaded. If \(w_{j}\) is tight for \(\{j,k\}=\{2,5\}\), then either \((v_{1},\ldots,v_{n},w_{k},w_{j},v_{1})\) is an incomplete cycle, or \(w_{4}=-e_{4}+d_{n}\) and either \((v_{1},\ldots,v_{n},w_{4},w_{j},v_{1})\) is an incomplete cycle or there is a sign error between \(w_{4}\) and \(w_{k}\) mediated by \(w_{j}\), or \(|w_{4}|=2\) and \(\hat{G}(\mathcal{S})\) is connected but \(G(\mathcal{S})\) is not. Conclude that \(w_{2}=-e_{2}+d_{n}\) and \(w_{5}=-e_{5}+d_{n}\), and therefore that \(|w_{4}|=2\); hence \(s^{*}=(n+1,1,0,0,1,0,0,0)\).
Case III: \(|\{j\in\{2,3,5\}\colon|w_{j}|\geq 3\}|=3\).
Suppose lastly, by way of contradiction, that \(|w_{2}|\), \(|w_{3}|\), and \(|w_{5}|\geq 3\).
Case III.1: \(w_{2}\) or \(w_{3}\) is loaded.
If \(w_{j}\) is loaded for some \(j\in\{2,3\}\), then \(w_{j}\sim w_{1}\) and either \(A_{j}=\{n\}\) and \(w_{j}\sim v_{n}\) or \(w_{j}\) is tight and \(w_{j}\sim v_{1}\). It follows that \(w_{5}=-e_{5}+d_{n}\), since if \(w_{5}\) is tight, then \(A_{j}=\{n\}\) and \(w_{j}\cdot w_{1}=1\), so either \(w_{j}\dagger w_{5}\) and \((v_{1},\ldots,v_{n},w_{j},w_{5},v_{1})\) is an incomplete cycle, or \(w_{j}\pitchfork w_{5}\) and there is a sign error between \(w_{1}\) and \(w_{j}\) mediated by \(w_{5}\). It follows that for \(\{j,k\}=\{2,3\}\), either \(w_{j}\) is tight or \(w_{k}\) is tight, otherwise \((w_{2},w_{3},w_{5})\) is a heavy triple.
Case III.1.a: \(w_{j}\) is tight.
If \(w_{j}\) is tight, then \(|w_{6}|=|w_{7}|=|w_{8}|=2\) or else \((v_{1},\ldots,v_{n},w_{l},w_{1},w_{j},v_{1})\) is an incomplete cycle for some \(l\in\{6,7,8\}\), so \(w_{k}=-e_{k}+d_{n}\) since \(s_{j}^{*}\leq|\sigma|_{1}+1+s_{5}^{*}=|\sigma|_{1}+2\), and therefore \(w_{j}=-e_{j}+e_{k}+2d_{0}+\ldots+d_{n}\). It follows that \(w_{4}\cdot v_{i}=0\) for all \(1\leq i\leq n\) or else \((w_{k},w_{4},w_{5})\) is a heavy triple. Furthermore, since \(w_{j}\) is tight and hence \(w_{4}\) is unbreakable, we must have that \(w_{4}\cdot w_{1}=0\), and therefore \(w_{4}|_{E_{8}}=w_{4}\) or else \(A_{4}=\{0,\ldots,n\}\) and \(2\leq w_{4}\cdot w_{1}\). If \(w_{4}=-e_{4}+e_{2}+e_{1}+e_{5}\), then the subgraph induced by \(\{v_{1},\ldots,v_{n},w_{2},w_{4},w_{5}\}\) contains an incomplete cycle. If \(w_{4}=-e_{4}+e_{2}\), \(-e_{4}+e_{2}+e_{1}\), \(-e_{4}+e_{1}\), or \(-e_{4}+e_{1}+e_{5}\), then \(|w_{4}\cdot w_{1}|=1\). Conclude that \(w_{4}\) is unloaded, hence \(|w_{4}|=2\), in which case either \(j=2\) and \((v_{1},\ldots,v_{n},w_{3},w_{4},w_{2},v_{1})\) is an incomplete cycle, or \(j=3\) and \((w_{3};v_{1},w_{1},w_{4})\) is a claw.
Case III.1.b: \(w_{k}\) is tight.
If \(w_{k}\) is tight, then \(|w_{6}|=|w_{7}|=|w_{8}|=2\) or else for some \(l\in\{6,7,8\}\), \(w_{l}=-e_{l}+d_{n}\) and either \(w_{k}\dagger w_{l}\), in which case \((v_{1},\ldots,v_{n},w_{l},w_{k},v_{1})\) is an incomplete cycle, or \(w_{k}\pitchfork w_{l}\) and there is a sign error between \(w_{1}\) and \(w_{l}\) mediated by \(w_{k}\). It follows that \(w_{j}=-e_{j}+e_{k}+d_{n}\) since \(s_{j}^{*}\leq|\sigma|_{1}+1+s_{5}^{*}=|\sigma|_{1}+2\), hence \(w_{j}\cdot w_{k}=-1\neq\pm(|w_{j}|-2)\), so \(w_{j}\dagger w_{k}\) and \((v_{1},\ldots,v_{n},w_{j},w_{k},v_{1})\) is an incomplete cycle.
Case III.2: \(w_{2}\), \(w_{3}\), and \(w_{5}\) are unloaded.
If \(w_{2}\), \(w_{3}\), and \(w_{5}\) are unloaded, then, without loss of generality, for \(\{j,k,l\}=\{2,3,5\}\), \(w_{j}=-e_{j}+d_{n}\), \(w_{k}=-e_{k}+d_{n}\), and \(w_{l}\) is tight, or else the subgraph induced by \(\{v_{1},\ldots,v_{n},w_{2},w_{3},w_{5}\}\) contains a claw or a heavy triple. Then, either \(w_{l}\dagger w_{j}\) and \((v_{1},\ldots,v_{n},w_{j},w_{l},v_{1})\) is an incomplete cycle or \(w_{l}\dagger w_{k}\) and \((v_{1},\ldots,v_{n},w_{k},w_{l},v_{1})\) is an incomplete cycle, or \(w_{j}\pitchfork w_{l}\) and \(w_{k}\pitchfork w_{l}\) and there is a sign error between \(w_{j}\) and \(w_{k}\) mediated by \(w_{l}\).
**Lemma 5.5**.: _Suppose \(w_{1}\) is tight. Then \((\tau)^{\perp}\) is not a linear lattice._
Proof.: Observe that if \(w_{1}\) is tight, then, since \(w_{5}\) is unloaded, either \(|w_{5}|=2\), \(w_{5}=-e_{5}+d_{n}\), or \(w_{5}=-e_{5}+d_{0}+\ldots+d_{n}\). We break our analysis into three respective cases.
Case I: \(|w_{5}|=2\).
Suppose that \(|w_{5}|=2\). Then either \(|w_{6}|=2\), in which case \(|w_{7}|=|w_{8}|=2\) or else there is an incomplete cycle, or \(w_{6}=-e_{6}+d_{0}+\ldots+d_{n}\) and \(w_{6}\prec w_{1}\), or else \((v_{1},\ldots v_{i},w_{6},w_{5},w_{1},v_{1})\) is an incomplete cycle for some \(1\leq i\leq n\). If \(|w_{6}|=2\), then \(w_{2}\) and \(w_{3}\) are both unloaded and \(w_{j}=-e_{j}+d_{n}\) or \(w_{j}=-e_{j}+d_{0}+\ldots+d_{n}\) for both \(j\in\{2,3\}\), or else there is a claw at \(w_{1}\). If \(A_{2}=A_{3}=\{n\}\), then \((w_{1},w_{2},w_{3})\) induces a heavy triple since \(w_{1}\cdot w_{2}=w_{1}\cdot w_{3}=0\), and if \(A_{j}=\{n\}\) and \(A_{k}=\{0,\ldots,n\}\) for \(\{j,k\}=\{2,3\}\), then \(w_{j}\) separates \(w_{k}\) from \(w_{1}\) in \(G(\mathcal{S})\), but \(w_{1}\cdot w_{j}=0\) while \(w_{1}\cdot w_{k}\neq 0\). If \(w_{6}=-e_{6}+d_{0}+\ldots+d_{n}\), then \(|w_{2}|=|w_{3}|=2\) or else \(A_{j}=\{n\}\) for some \(j\in\{2,3\}\) and \((v_{1},\ldots,v_{n},w_{j},w_{6},w_{1},v_{1})\) is an incomplete cycle, and so \((w_{1};v_{1},w_{2},w_{3})\) is a claw.
Case II: \(w_{5}=-e_{5}+d_{n}\).
Suppose now that \(w_{5}=-e_{5}+d_{n}\). If \(A_{k}=\{0,\ldots,n\}\) for some \(k\in\{2,3\}\), then \(w_{k}\) is unloaded, or else \(2\leq w_{1}\cdot w_{k}=|w_{k}|-3\), and \(w_{5}\) then separates \(w_{1}\) from \(w_{k}\) in \(G(\mathcal{S})\), but \(w_{5}\cdot w_{1}=0\) and \(w_{k}\cdot w_{1}\neq 0\), which is absurd. If \(A_{j}=\{n\}\) for some \(j\in\{2,3\}\), then \(w_{j}\) is loaded and \(|w_{k}|\geq 3\) for \(\{j,k\}=\{2,3\}\), or else \((w_{1},w_{j},w_{5})\) induces a heavy triple since
\(w_{1}\cdot w_{j}=w_{1}\cdot w_{5}=0\), and furthermore then \(w_{k}=-e_{k}+d_{0}+\ldots+d_{n}\), which by the argument above is absurd, or else \((w_{j},w_{k},w_{5})\) is a heavy triple. It follows that \(A_{2}=A_{3}=\emptyset\), but then \((w_{1};v_{1},w_{2},w_{3})\) is a claw.
Case III: \(w_{5}=-e_{5}+d_{0}+\ldots+d_{n}\).
Suppose lastly that \(w_{5}=-e_{5}+d_{0}+\ldots+d_{n}\). Then for both \(j\in\{2,3\}\), either \(A_{j}=\emptyset\) or \(A_{j}=\{n\}\), or else \(2\leq w_{j}\cdot w_{5}\leq|w_{5}|-2\). If \(A_{j}=\{n\}\), then \(w_{j}\) is loaded and \(|w_{k}|\geq 3\) for \(\{j,k\}=\{2,3\}\), or else \(w_{j}\) separates \(w_{5}\) from \(w_{1}\) in \(G(\mathcal{S})\) but \(w_{j}\cdot w_{1}=0\) and \(w_{5}\cdot w_{1}\neq 0\), but then \(w_{k}=-e_{k}+d_{n}\). If \(A_{2}=A_{3}=\emptyset\), then \(w_{2}\sim w_{1}\) and \(w_{3}\sim w_{1}\), and \((w_{1};v_{1},w_{2},w_{3})\) is a claw.
Having identified every linear lattice orthogonal to an \(E_{8}\)-changemaker with \(\sigma=(1,\ldots,1)\), we now consider the case when every vector in \(\mathcal{V}\) is just right, \(|v_{i}|\geq 3\) for some \(1\leq i\leq n\), and \(G(\mathcal{V})\) is connected.
We start with a basic observation.
**Proposition 5.6**.: _If every element of \(\mathcal{V}\) is just right and \(\sigma\neq(1,\ldots,1)\), then for \(r=\min\{i\in\{2,\ldots,n\}\colon|v_{i}|\geq 3\}\), either \(v_{r}=-d_{r}+d_{r-1}+\ldots+d_{0}\) or \(v_{r}=-d_{r}+d_{r-1}+\ldots+d_{1}\). _
**Lemma 5.7**.: _If every element of \(\mathcal{V}\) is just right, \(G(\mathcal{V})\) is connected, and either \(|v_{r}|=r+1\) for some \(2\leq r\leq n\) or \(|v_{r}|=r\) for some \(3\leq r\leq n\), then \((\tau)^{\perp}\) is not a linear lattice._
Proof.: Observe that no \(w_{j}\in\mathcal{W}\) is tight: if \(w_{j}\) is tight then \(w_{j}\sim v_{1}\) and either \(|v_{r}|=r+1\) for some \(2\leq r\leq n\), in which case \(v_{r}\prec w_{j}\) and there is an incomplete cycle since \(G(\mathcal{V})\) is connected, or \(|v_{r}|=r\) for some \(3\leq r\leq n\), in which case \((v_{1};v_{2},v_{r},w_{j})\) is a claw.
Suppose, by way of contradiction, that \(G(\mathcal{S})\) contains a triangle of the form \((v_{i},w_{j},v_{k})\) with \(i<k\) and \(w_{j}\) unbreakable. Then \(|v_{l}|=2\) for all \(1\leq l\leq i\) and \(|v_{r}|=r+1\), or else there is a heavy triangle. It furthermore follows that \(i=r-1\), or else \(2\leq i<r-1\) and \((v_{i};v_{i-1},v_{i+1},w_{j})\) is a claw, or \(i=1\) and \(3\leq r\), in which case \(w_{j}\cdot v_{r}\geq 1\), so \(w_{j}\cdot v_{r}=1\) and \(w_{j}\sim v_{r}\), and therefore \((v_{1},w_{j},v_{r},\ldots,v_{1})\) is an incomplete cycle. It follows that \(r\in A_{j}\) or else there is an incomplete cycle, since \(G(\mathcal{V})\) is connected, and so \(k=r+1\), and \(v_{k}=-d_{k}+d_{k-1}+d_{k-2}\), or else \(w_{j}\cdot v_{k}\geq 2\). If \(|v_{l}|=2\) for all \(l\geq r+2\), then \(G(\mathcal{V})\) is disconnected, in contradiction to one of our assumptions. Let \(l\geq r+2\) be minimal such that \(|v_{l}|\geq 3\). Then \(w_{j}\cdot v_{l}\geq 1\), so in fact \(l=r+2\), \(v_{r+2}=-d_{r+2}+d_{r+1}+d_{r}\), and \(w_{j}\dagger v_{r+2}\). Since \(G(\mathcal{V})\) is connected by assumption, we then have a path in \(G(\mathcal{V})\) from \(v_{r-1}\) to \(v_{r+2}\), so there is an incomplete cycle \((v_{r-1},w_{j},v_{r+2},\ldots,v_{r-1})\). Conclude that \(G(\mathcal{S})\) does not contain a triangle of the form \((v_{i},w_{j},v_{k})\), and therefore \(w_{j}\) has at most one neighbor in \(G(\mathcal{V})\) for all \(w_{j}\) in \(\mathcal{W}\).
According to the classification of changemaker lattices whose intersection graphs do not contain claws, heavy triples, or incomplete cycles, and whose standard basis elements are all just right, \(G(\mathcal{V})\) is either a path or a union of a triangle \((v_{i},v_{i+2},v_{k})\) and three vertex disjoint paths \(P_{1}\), \(P_{2}\), and \(P_{3}\) emanating from \(v_{i}\), \(v_{i+2}\), and \(v_{k}\), respectively [11, Section 6]. Furthermore, \(|v_{l}|=2\) for all \(v_{l}\in P_{1}\), or else there is a heavy triple. It follows that \(|w_{j}|\geq 3\) for at most two values \(j\in\{1,2,3,5\}\) and that the neighbor of \(w_{j}\) in \(G(\mathcal{V})\) is one of two vertices with degree \(1\) in \(G(\mathcal{V})\) or else there is a claw, an incomplete cycle, or a heavy triple, but then either there is a claw at \(w_{1}\) or there is an incomplete cycle including \(w_{1}\).
### When there is a gappy vector but no tight vector
**Lemma 5.8**.: _If \(\mathcal{V}\) contains a gappy element but no tight element, then \((\tau)^{\perp}\) is not a linear lattice._
Proof.: If \(\mathcal{V}\) contains a gappy element \(v_{g}=-d_{g}+d_{g-1}+\ldots+d_{j}+d_{k}\) with \(k\leq j-2\) and no tight element, then \(v_{g}\) is the unique gappy element of \(\mathcal{V}\), and \(v_{k}\) and \(v_{k+1}\) belong to different components of \(G(\mathcal{V}_{g-1})\)[11, Section 7]. It follows that \(|v_{r}|=r+1\) for \(r=\min\{i\in\{2,\ldots,n\}\colon|v_{i}|\geq 3\}\), and that either there are paths from \(v_{1}\) to \(v_{k}\) and from \(v_{r}\) to \(v_{k+1}\) or from \(v_{1}\) to \(v_{k+1}\) and from \(v_{r}\) to \(v_{k}\) in \(G(\mathcal{V}_{g-1})\). If \(w_{j}\) is tight, then the union of the pair of paths mentioned above with \(v_{g}\) and \(w_{j}\) produces an incomplete cycle. Conclude that \(w_{j}\) is not tight for any \(w_{j}\in\mathcal{W}\).
Suppose, by way of contradiction, that \(G(\mathcal{S})\) contains a triangle of the form \((v_{i},w_{j},v_{k})\), where \(i<k\), without loss of generality, and \(w_{j}\) is unbreakable. Then \(|v_{l}|=2\) for all \(1\leq l\leq i\), so \(i=r-1\). We must then have that \(r\in A_{j}\), or else \(w_{j}\dagger v_{r}\) and thus there is an incomplete cycle since \(G(\mathcal{V})\) is connected.
Let \(l\geq r+1\) be minimal such that \(|v_{l}|\geq 3\).
If \(l=g\), then \(g\geq r+2\) and \(v_{g}=-d_{g}+d_{g-1}+d_{r-1}\) or else \(2\leq w_{j}\cdot v_{g}\), so then \(g=r+2\) or else \((v_{r},v_{r+1},\ldots,v_{g},v_{r})\) is an incomplete cycle. We must then have that \(\{r-1,r,r+1\}\subset A_{j}\), and furthermore that \(g=r+2\in A_{j}\) or else \(w_{j}\cdot v_{g}=2\). Then \(n=g\), for if \(|v_{g+1}|=2\), then \((v_{g};v_{r},w_{j},v_{g+1})\) is a claw, and if \(|v_{g+1}|\geq 3\), then either \(v_{g+1}=-d_{g+1}+d_{g}+d_{g-1}\), in which case \((v_{r},v_{g},v_{g+1})\) is a heavy triple, or \(|v_{g+1}|\geq 4\), in which case \(2\leq w_{j}\cdot v_{g+1}\). It follows then that if \(|w_{j^{\prime}}|\geq 3\) for \(j^{\prime}\neq j\), then \(|w_{j^{\prime}}\cdot v_{r}|=1\) and \(w_{j^{\prime}}\cdot v_{l}=0\) for all \(l\neq r\) or else there is a heavy triple, a claw, or an incomplete cycle. But then \(|A_{j^{\prime}}\cap A_{j}|\geq 3\), so \(w_{j^{\prime}}\) or \(w_{j}\) is loaded, \(w_{j}\cdot w_{j^{\prime}}=1\), and \(w_{j}\dagger w^{\prime}_{j}\), but then \((w_{j},w_{j^{\prime}},v_{r},v_{g},w_{j})\) is an incomplete cycle.
Conclude that \(l\neq g\), so \(v_{l}\) is just right and \(w_{j}\cdot v_{l}\geq 1\), so \(w_{j}\dagger v_{l}\). We must then have that \(v_{l}\dagger v_{r-1}\), otherwise there is a length \(\geq 2\) path \(v_{r-1},\ldots,v_{l}\) in \(G(\mathcal{V})\) since \(G(\mathcal{V})\) is connected by assumption, and therefore \((w_{j},v_{r-1},\ldots,v_{l},w_{j})\) is an incomplete cycle. We must then have that \(l=r+1\), and \(v_{r+1}=-d_{r+1}+d_{r}+d_{r-1}\). If \(|v_{r+2}|=2\), then \(|v_{m}|=2\) for all \(r+2\leq m\leq g\) or else \(2\leq w_{j}\cdot v_{m}\) for \(m\) minimal such that \(m\geq r+3\), \(|v_{m}|\geq 3\), and \(v_{l}\) is just right. Then, \(g=r+3\) and \(v_{g}=-d_{g}+d_{g-1}+d_{r}\) or else there is a claw or \((v_{g},v_{r+1},w_{j})\) is a heavy triple. Since \(|v_{r+2}|=2\), we have \(\{r-1,r,r+1,r+2\}\subset A_{j}\), so either \(w_{j}\cdot v_{g}=2\) or \(g\in A_{j}\), in which case \(w_{j}\cdot v_{g}=1\) and \((w_{j},v_{r+1},v_{r+2},v_{g},w_{j})\) is an incomplete cycle. Conclude that \(|v_{r+2}|\geq 3\), hence \(v_{r+2}\) is just right, or else \((v_{r+2},v_{r+1},w_{j})\) is a heavy triple. We must then have that \(v_{r+2}=-d_{r+2}+d_{r+1}+d_{r}\) and \(r+2\in A_{j}\), or else \(w_{j}\cdot v_{r+2}\geq 2\). Then, since \(G(\mathcal{V})\) is connected there is a length \(\geq 2\) path from \(v_{r+1}\) to \(v_{r+2}\) in \(G(\mathcal{V})\), hence there is an incomplete cycle \((w_{j},v_{r+1},\ldots,v_{r+2},w_{j})\). Conclude that there is no triangle of the form \((v_{i},w_{j},v_{k})\) in \(G(\mathcal{S})\), so each \(w_{j}\in\mathcal{W}\) has at most one neighbor in \(G(\mathcal{V})\).
As in the case where \(G(\mathcal{V})\) is connected and every element of \(\mathcal{V}\) is just right, the classification of changemaker lattices with no claws, heavy triples, or incomplete cycles that contain a gappy vector but no tight vector shows that there are exactly two \(v_{i}\)'s with a single neighbor in \(\mathcal{V}\) such that no heavy triple is formed if \(w_{j}\sim v_{i}\) for some \(w_{j}\) [11, Section 7]. But
then \(|w_{j}|\geq 3\) for at most two distinct values \(j\in\{1,2,3,5\}\) or else there is a heavy triple, but then either there is a claw at \(w_{1}\) or else there is an incomplete cycle containing \(w_{1}\).
### When \(t\geq 2\) and \(v_{t}\) is tight
**Lemma 5.9**.: _Let \(t\geq 2\) and suppose that \(v_{t}\) is tight. Then \(\mathcal{V}=\mathcal{V}_{t}\), \(|v_{i}|=2\) for \(1\leq i\leq t-1\), and \(w_{1}=-e_{1}+d_{0}+\ldots+d_{t-1}\)._
Proof.: Let \(m=\min(A_{1})\). Since by Lemma 4.14 \(w_{1}\) is not tight, we have six cases to consider: _Case 0_: \(m=n\), _Case I_: \(t<m<n\), _Case II_: \(t=m\), _Case III_: \(m=t-1\), _Case IV_: \(0<m<t-1\), and _Case V_: \(m=0\).
Case 0: \(t<m=n\).
If \(m=n\), then \(w_{1}=-e_{1}+d_{n}\) and \(w_{1}\sim v_{n}\). We break this case into three subcases: Case 0.1: \(|w_{5}|=2\), Case 0.2: \(|w_{5}|\geq 3\) and \(n\not\in A_{5}\), and Case 0.3: \(n\in A_{5}\).
Case 0.1: \(|w_{5}|=2\).
If \(|w_{5}|=2\), then \(|w_{6}|=|w_{7}|=|w_{8}|=2\), or else the subgraph induced by \(\{v_{1},\ldots,v_{n},w_{1},w_{5},\ldots,w_{l}\}\) for \(l=\min\{i\in\{6,7,8\}\colon|w_{i}|\geq 3\}\) contains an incomplete cycle. It follows that both \(w_{2}\) and \(w_{3}\) are unloaded, and \(|w_{2}|,|w_{3}|\geq 3\) or else there is a claw at \(w_{1}\). For \(j\in\{2,3\}\), if \(n\not\in A_{j}\), then either \((w_{1};v_{n},w_{j},w_{5})\) is a claw or \(w_{j}\sim v_{n}\), but then \(|v_{n}|\geq 3\), and so \((v_{n},w_{1},w_{j})\) is a heavy triple. It follows that \(n\in A_{2}\cap A_{3}\), hence \(\{n\}=A_{2}\cap A_{3}\), or else \(2\leq w_{2}\cdot w_{3}\leq|w_{2}|-2\). Since \(w_{2}\dagger w_{3}\) and both \(w_{2}\) and \(w_{3}\) have neighbors in \(\mathcal{V}\), there is a unique \(r\in\{1,\ldots,n\}\) such that \(w_{2}\sim v_{r}\) and \(w_{3}\sim v_{r}\) or else the subgraph induced by \(\{v_{1},\ldots,v_{n},w_{2},w_{3}\}\) contains an incomplete cycle. Furthermore, \(|v_{r}|=2\), or else \((v_{r},w_{2},w_{3})\) is a heavy triple; thus \(r\in A_{2}\cap A_{3}\) and \(r=n\), but then \((w_{1},w_{2},w_{3})\) is a heavy triple.
Case 0.2: \(|w_{5}|\geq 3\) and \(n\not\in A_{5}\).
If \(|w_{5}|\geq 3\) and \(n\not\in A_{5}\), then \(w_{5}\sim w_{1}\), so \(w_{5}\sim v_{n}\) and \(w_{5}\cdot v_{i}=0\) for all \(i\in\{1,\ldots,t-1,t+1,\ldots,n\}\) or else the subgraph induced by \(\{v_{1}\ldots,v_{n},w_{1},w_{5}\}\) contains an incomplete cycle. Then \(w_{5}\cdot v_{n}=1\) and \(|v_{n}|\geq 3\), but then \((v_{n},w_{1},w_{5})\) is an incomplete cycle.
Case 0.3: \(n\in A_{5}\).
If \(n\in A_{5}\), then \(w_{5}\cdot w_{1}=0\). If \(w_{5}\sim v_{n}\), then \(w_{5}\sim v_{r}\) for some \(1<r<n\) with \(v_{n}\sim v_{r}\) or else \((v_{n};v_{r},w_{1},w_{5})\) is a claw, and, moreover, \(w_{5}\cdot v_{i}=0\) for all \(i\in\{1,\ldots,n-1\}\setminus\{r\}\), or else the subgraph induced by \(\{v_{1},\ldots,v_{n},w_{5}\}\) contains an incomplete cycle. It follows that \(|v_{n}|\geq 3\) and thus \(|v_{r}|=2\), or else \((v_{r},v_{n},w_{5})\) is a heavy triple, so \(w_{5}\cdot v_{n}=1\), \(w_{5}\cdot v_{r}=-1\), and \(v_{n}\cdot v_{r}=-1\). It then follows that \(v_{n}\) is just right, so \(v_{n}=-d_{n}+d_{n-1}+d_{n-2}\) and \(w_{5}=-e_{5}+d_{n}+d_{n-1}+d_{n-2}\), and so \((v_{l},v_{n},w_{5})\) induces a heavy triple for \(l=\max\{i<n-2\colon|v_{i}|\geq 3\}\).
Case I: \(t<m<n\).
If \(t<m\) then \(v_{m}\) and \(v_{m+1}\) are unbreakable, \(v_{m}\sim v_{r}\) for some \(r<m\), and \(v_{m+1}\sim v_{s}\) for some \(s\leq m\). It follows that \(v_{m+1}\cdot v_{m}=0\), or else \(v_{m+1}\dagger v_{m}\), in which case either \(w_{1}\cdot v_{m+1}=0\) and \((v_{m};v_{r},v_{m+1},w_{1})\) is a claw, or \(w_{1}\cdot v_{m+1}=1\), in which case \(v_{m+1}\cdot v_{m}=-1\) or else \((v_{m},v_{m+1},w_{1})\) is a negative triangle, but then \((v_{m},w_{1},v_{m+1},v_{s},\ldots,v_{r},v_{m})\) is an incomplete cycle. Since \(G(\mathcal{V})\) is connected, it follows that \(m+1\in A_{1}\) or else \(G(\mathcal{V}\cup\{w_{1}\})\) contains an incomplete cycle. Hence, \(A_{1}=\{m,\ldots,n\}\) or else there is some \(m+2\leq k\leq n\) such that
\(v_{k}\) is gappy and \(w_{1}\cdot v_{k}=1\), but then the subgraph induced by \(\{v_{1},\ldots,v_{k},w_{1}\}\) contains an incomplete cycle. Furthermore, we must have \(|v_{i}|=2\) for all \(m+2\leq i\leq n\). We now break the analysis down into three subcases based on the assumption that \(|w_{5}|=2\), \(|w_{5}|\geq 3\) and \(A_{5}\cap A_{1}=\emptyset\), or \(A_{5}\cap A_{1}\neq\emptyset\).
Case I.1: \(|w_{5}|=2\).
If \(|w_{5}|=2\), then, as in Case 0, \(|w_{6}|=|w_{7}|=|w_{8}|=2\) or else there is an incomplete cycle, and therefore \(w_{2}\) and \(w_{3}\) are unloaded and \(|w_{2}|,|w_{3}|\geq 3\). For \(j\in\{2,3\}\), if \(A_{j}\cap A_{1}=\emptyset\), then \(w_{j}\sim w_{1}\), so \(w_{j}\sim v_{m}\) or else there is an incomplete cycle, but then \(|v_{m}|\geq 3\), so \((v_{m},w_{1},w_{j})\) is a heavy triple. If \(m\in A_{j}\), then \(m+1\in A_{j}\) or else \(A_{j}\cap\operatorname{supp}(v_{m+1})=\{m\}\), in which case the subgraph induced by \(\{v_{1},\ldots,v_{m+1},w_{j}\}\) contains an incomplete cycle. But then \(\{m,m+1\}=A_{j}\cap\{m,\ldots,n\}\) or else \(2\leq w_{j}\cdot w_{1}<|w_{1}|-2\), so we must have \(m+1=n\) since \(|v_{i}|=2\) for all \(m+2\leq i\leq n\). If \(m\not\in A_{j}\), then \(n\in A_{j}\), again since \(|v_{i}|=2\) for all \(m+2\leq i\leq n\). It follows that \(n\in A_{2}\cap A_{3}\), so \(w_{2}\cdot w_{3}=1\), and therefore there is a unique \(i\in\{1,\ldots,n\}\) with \(w_{2}\sim v_{i}\) and \(w_{3}\sim v_{i}\), so \(w_{2}=-e_{2}+d_{n}\) and \(w_{3}=-e_{3}+d_{n}\), but then \((v_{m+1},w_{2},w_{3})\) is a heavy triple.
Case I.2: \(|w_{5}|\geq 3\) and \(A_{5}\cap A_{1}=\emptyset\).
If \(|w_{5}|\geq 3\) and \(A_{5}\cap A_{1}=\emptyset\), then \(w_{5}\sim w_{1}\), so \(w_{5}\sim v_{m}\) and \(w_{5}\cdot v_{i}=0\) for all \(i\in\{1,\ldots,t-1,t+1,\ldots,n\}\) or else the subgraph induced by \(\{v_{1}\ldots,v_{n},w_{1},w_{5}\}\) contains an incomplete cycle. Then \(w_{5}\cdot v_{m}=1\) and \(|v_{m}|\geq 3\), but then \((v_{m},w_{1},w_{5})\) is an incomplete cycle.
Case I.3: \(A_{5}\cap A_{1}\neq\emptyset\).
If \(A_{5}\cap A_{1}\neq\emptyset\), then the argument from Case I.1 shows that \(n\in A_{5}\). Moreover, for \(j\in\{2,3\}\) if \(w_{j}\) is unloaded and \(|w_{j}|\geq 3\), or if \(j=2\), \(w_{2}\) is loaded and \(|w_{3}|=2\), then \(n\in A_{j}\), so \(w_{j}\cdot w_{5}=1\), and therefore there is a unique \(i\in\{1,\ldots,n\}\) with \(w_{j}\sim v_{i}\) and \(w_{5}\sim v_{i}\), so \(A_{j}=\{n\}\) and \(w_{5}=-e_{5}+d_{n}\), but then \((v_{m+1},w_{j},w_{5})\) is a heavy triple.
Case II: \(m=t\).
If \(m=t\), then \(w_{1}\cdot v_{t}=-1\), so either \(w_{1}\dagger v_{t}\) or \(w_{1}\pitchfork v_{t}\). We will first show that \(w_{1}\dagger v_{t}\). Suppose, by way of contradiction, that \(w_{1}=-e_{1}+d_{t}\) and \(w_{1}\pitchfork v_{t}\). We must have \(\mathcal{V}=\mathcal{V}_{t}\); if \(|v_{t+1}|=2\), then \(t+1\in A_{1}\), so \(|w_{1}\cdot v_{t}|\leq|w_{1}|-3\) and therefore \(w_{1}\dagger v_{1}\), and if \(v_{t+1}=-d_{t+1}+d_{t}+d_{t-1}\), then \(v_{t+1}\) separates \(v_{t}\) from \(w_{1}\) in \(G(\mathcal{S})\), but \(v_{t+1}\cdot v_{t}=0\). Furthermore, \(w_{1}+v_{t}\) is reducible. Then we may write \(w_{1}+v_{t}=-e_{1}+2d_{0}+d_{1}+\ldots+d_{t-1}\) as \(x+y\) with \(x,y\in(\tau)^{\perp}\) and \(x\cdot y=0\), which we will show is absurd. If \(x\cdot d_{i}\), \(y\cdot d_{i}\geq 0\) for all \(1\leq i\leq n\), then either \(y|_{E_{8}}\neq 0\), or \(x=-e_{1}+2d_{0}+d_{1}+\ldots+d_{t-1}\) and \(y=0\), or \(x\cdot\tau\neq 0\). If \(y|_{E_{8}}\neq 0\), then \(x|_{E_{8}}\cdot y|_{E_{8}}=-1\), \(x\cdot d_{0}=y\cdot d_{0}=1\), and \(\{x\cdot d_{i},y\cdot d_{i}\}=\{0,1\}\) for all \(1\leq 0\leq t-1\). However, if \(x|_{E_{8}}\cdot y|_{E_{8}}=-1\), then \(y|_{E_{8}}\) is a negative root, so \(y\) is irreducible in \((\tau)^{\perp}\). Furthermore, \(y\cdot w_{1}=-1\) and \(y+w_{1}\) is irreducible in \((\tau)^{\perp}\) since \(|(y+w_{1})|_{E_{8}}|=2\) and \((y+w_{1})\cdot d_{i}\in\{0,1\}\) for all \(0\leq i\leq n\), so \(y\dagger w_{1}\) and \(\epsilon(y)=\varepsilon(v_{t})\); hence there is a sign error between \(y\) and \(w_{1}\) mediated by \(v_{t}\). If now \(x\cdot d_{n}=-1\), then \(y\cdot d_{n}=1\), \(x\cdot d_{0}=1\), \(y\cdot d_{0}=1\), and \(y|_{E_{8}}=0\), thus \(y\cdot\tau\neq 0\). Conclude that \(w_{1}\dagger v_{t}\).
Since \(w_{1}\dagger v_{t}\), either \(\mathcal{V}=\mathcal{V}_{t}\), or \(|v_{t+1}|=2\), in which case \((v_{t};v_{1},v_{t+1},w_{1})\) is a claw, or \(v_{t+1}=-d_{t+1}+d_{t}+d_{t-1}\), in which case \(t+1\in A_{1}\) or else \((v_{1},\ldots,v_{t-1},v_{t+1},w_{1},v_{t},v_{1})\) is an incomplete cycle. It follows then that \(|v_{i}|=2\) for all \(t+2\leq i\leq n\) and \(A_{1}=\{t,\ldots,n\}\)
We will break the remaining analysis into three subcases, as usual, depending on whether \(|w_{5}|=2\), \(|w_{5}|\geq 3\) and \(A_{5}\cap A_{1}=\emptyset\), or \(A_{5}\cap A_{1}\neq\emptyset\).
Case II.1: \(|w_{5}|=2\).
If \(|w_{5}|=2\), then, as before, \(|w_{6}|=|w_{7}|=|w_{8}|=2\) or else there is an incomplete cycle, since the argument that shows that \(w_{1}\dagger v_{t}\) shows that \(w_{j}\dagger v_{t}\) if \(w_{j}\cdot v_{t}=-1\) for all \(j\in\{1,\ldots,8\}\). It follows that \(w_{2}\) and \(w_{3}\) are unloaded, and that \(|w_{2}|,|w_{3}|\geq 3\), or else there is a claw at \(w_{1}\). Furthermore, \(A_{2}\cap A_{1}\neq\emptyset\) and \(A_{3}\cap A_{1}\neq\emptyset\), or else there is an incomplete cycle. Then, for \(j\in\{2,3\}\), it follows that either \(n\in A_{j}\), or else \(n>t\) and \(A_{j}\cap A_{1}=\{t\}\), in which case \(w_{j}\sim v_{t+1}\) and either \(w_{j}\cdot v_{t}\neq 0\), in which case \(w_{j}\dagger v_{t}\) and thus \((v_{1},\ldots,v_{t-1},v_{t+1},w_{j},v_{t},v_{1})\) is an incomplete cycle, or \(w_{j}\cdot v_{t}=0\), in which case \(w_{j}=-e_{j}+d_{t}+d_{r}\) for some \(r<t\), and \(w_{j}\sim v_{r}\), and therefore either \(r<t-1\) and the subgraph induced by \(\{v_{1},\ldots,v_{t+1},w_{j}\}\) contains and incomplete cycle, or \(r=t-1\), and \((v_{t},v_{t+1},w_{j})\) induces a heavy triple since \(v_{t+1}\cdot v_{t}=w_{j}\cdot v_{t}=0\). Conclude that \(\{n\}=A_{2}\cap A_{3}\), so \(w_{2}\dagger w_{3}\) and either there is an incomplete cycle or \(A_{2}=A_{3}=\{n\}\), in which case \((v_{l},w_{2},w_{3})\) induces a heavy triple for \(l=\max\{i\leq n\colon|v_{i}|\geq 3\}\).
Case II.2: \(|w_{5}|\geq 3\) and \(A_{5}\cap A_{1}=\emptyset\).
If \(|w_{5}|\geq 3\) and \(A_{5}\cap A_{1}=\emptyset\), then \(w_{5}\dagger w_{1}\) and \(A_{5}\subset\{0,\ldots,t-1\}\), and so either \(|v_{i}|=2\) for all \(1\leq i\leq t-1\) and \(w_{5}=-e_{5}+d_{0}+\ldots+d_{t-1}\), or there is an incomplete cycle. It follows that \(t=n\), or else \((v_{1},\ldots,v_{t-1},v_{t+1},w_{5},v_{t},v_{1})\) is an incomplete cycle. It follows that for \(j\in\{2,3\}\) if \(|w_{j}|\geq 3\), then \(A_{j}=\{t\}\), or else \(w_{j}\dagger w_{5}\) and \(w_{j}\sim v_{t-1}\), and therefore \((v_{1},\ldots,v_{t-1},w_{j},w_{5},v_{t},v_{1})\) is an incomplete cycle. It follows then that if \(|w_{j}|\geq 3\) for \(j\in\{2,3\}\), then \(w_{j}\dagger v_{t}\), using the same argument as for \(w_{1}\) if \(w_{j}\) is unloaded, and therefore \((v_{t};v_{1},w_{5},w_{j})\) is a claw.
Case II.3: \(A_{5}\cap A_{1}\neq\emptyset\).
If \(A_{5}\cap A_{1}\neq\emptyset\), then either \(n\in A_{5}\), or \(A_{5}\cap A_{1}=\{t\}\) and \(w_{5}\cdot v_{t+1}=1\), in which case either \(t=n\), or \(t<n\) and either \(w_{5}\cdot v_{t}\neq 0\), in which case \(w_{5}\dagger v_{t}\) and thus \((v_{1},\ldots,v_{t-1},v_{t+1},w_{5},v_{t},v_{1})\) is an incomplete cycle, or \(w_{5}\cdot v_{t}=0\), in which case \(w_{5}=-e_{5}+d_{t}+d_{r}\) for some \(r<t\), and \(w_{5}\sim v_{r}\), and therefore either \(r<t-1\) and the subgraph induced by \(\{v_{1},\ldots,v_{t+1},w_{5}\}\) contains an incomplete cycle, or \(r=t-1\), and \((v_{t},v_{t+1},w_{5})\) induces a heavy triple since \(v_{t+1}\cdot v_{t}=w_{5}\cdot v_{t}=0\). Conclude that \(n\in A_{5}\). If \(|w_{2}|=|w_{3}|=2\), then \((w_{1};v_{t},w_{2},w_{3})\) is a claw, so let \(j\in\{2,3\}\) with \(|w_{j}|\geq 3\) and \(w_{j}\cdot-e_{1}=-1\). As with \(w_{5}\) in Case II.2, either \(n\in A_{j}\), in which case \(w_{j}\dagger w_{5}\) and there is a unique \(v_{i}\) such that \(w_{j}\sim v_{i}\) and \(w_{5}\sim v_{i}\), so \(i=n\) and \((v_{l},w_{j},w_{5})\) induces a heavy triple for \(l=\max\{i\in\{1,\ldots,n\}\colon|v_{i}|\geq 3\}\), or \(|v_{i}|=2\) for all \(1\leq i\leq t-1\) and \(w_{j}=-e_{j}+d_{0}+\ldots+d_{n}\), in which case \((v_{t};v_{1},w_{j},w_{5})\) is a claw.
Case III: \(m=t-1\).
If \(m=t-1\), then \(w_{1}\dagger v_{t-1}\), so either \(w_{1}=-e_{1}+d_{t-1}\) and \(w_{1}\dagger v_{t}\), or \(t\in A_{1}\), in which case \(|v_{i}|=2\) for all \(t+1\leq i\leq n\) or else \(v_{t+1}=-d_{t+1}+d_{t}+d_{t-1}\) and \((v_{t},v_{t+1},w_{1})\) induces a heavy triple since \(v_{t+1}\cdot v_{t}=w_{1}\cdot v_{t}=0\), and so furthermore \(w_{1}=-e_{1}+d_{t-1}+\ldots+d_{n}\).
Case III.1 \(|w_{5}|=2\).
If \(|w_{5}|=2\), then \(|w_{6}|=|w_{7}|=|w_{8}|=2\) or else there is an incomplete cycle, so then \(|w_{2}|,|w_{3}|\geq 3\) and \(w_{2}\) and \(w_{3}\) are both unloaded. Let \(j\in\{2,3\}\). If \(A_{j}\cap A_{1}=\emptyset\), then
\(w_{1}=-e_{1}+d_{t-1}\), and \(n\in A_{j}\) since \(|v_{i}|=2\) for all \(t+1\leq i\leq n\), but then the subgraph induced by \(\{v_{1},\ldots,v_{n},w_{1},w_{j}\}\) contains an incomplete cycle. Suppose now that \(A_{j}\cap A_{1}\neq\emptyset\). If \(w_{1}=-e_{1}+d_{t-1}\), then either \(2<t\) and \((v_{t-1};v_{t-2},w_{1},w_{j})\) is a claw or \((v_{1},\ldots,v_{t-1},w_{j},v_{t},v_{1})\) is an incomplete cycle, or \(t=2\) and \((v_{1};v_{2},w_{1},w_{j})\) is a claw or \((v_{1},v_{2},w_{j})\) is a negative triangle. If \(w_{1}=-e_{1}+d_{t-1}+\ldots+d_{n}\), then either \(n\in A_{j}\) or \(w_{j}=-e_{j}+d_{0}+\ldots+d_{t-1}\) and \(w_{j}\prec v_{t}\), or else \(w_{j}=-e_{j}+d_{t-1}\) and there is either a claw, an incomplete cycle, or a negative triangle as before. If \(n\in A_{2}\cap A_{3}\), then \(w_{2}\natural w_{3}\), so the subgraph induced by \(\{v_{1},\ldots,v_{n}\}\) contains an incomplete cycle, unless \(w_{2}=-e_{2}+d_{n}\) and \(w_{3}=-e_{3}+d_{n}\), in which case \((v_{t},w_{2},w_{3})\) induces a heavy triple. If \(w_{j}=-e_{j}+d_{0}+\ldots+d_{t-1}\), then \(t=n\) or else \((v_{t};v_{1},v_{t+1},w_{j})\) is a claw, and if then \(w_{k}=-e_{k}+d_{t}\) for \(\{j,k\}=\{2,3\}\), then \((v_{t};v_{1},w_{j},w_{k})\) is a claw, and if \(w_{k}=-e_{k}+d_{t-1}+d_{t}\), then \((v_{1},\ldots,v_{t-1},w_{1},w_{k},w_{j},v_{n},v_{1})\) is an incomplete cycle.
Case III.2\(|w_{5}|\geq 3\) and \(A_{5}\cap A_{1}=\emptyset\).
If \(|w_{5}|\geq 3\) and \(A_{5}\cap A_{1}=\emptyset\), then \(w_{1}=-e_{1}+d_{t-1}\) and \(A_{5}\subset\{t,\ldots,n\}\), in which case the subgraph induced by \(\{v_{1},\ldots,v_{n},w_{1},w_{5}\}\) contains an incomplete cycle.
Case III.3\(A_{5}\cap A_{1}\neq\emptyset\).
If \(A_{5}\cap A_{1}\neq\emptyset\), then it follows from the argument in Case III.1 that \(w_{1}=-e_{1}+d_{t-1}+\ldots+d_{n}\) and either \(w_{5}=-e_{5}+d_{n}\) or \(w_{5}=-e_{5}+d_{0}+\ldots+d_{t-1}\). Let \(j\in\{2,3\}\) and suppose that \(w_{j}\cdot-e_{1}=-1\). Then, if \(|w_{j}|\geq 3\), precisely one element \(w_{k}\) of \(\{w_{j},w_{5}\}\) has \(n\in A_{k}\) while the other element \(w_{l}\) has \(A_{l}=\{0,\ldots,t-1\}\), in which case either \((v_{t};v_{1},w_{k},w_{l})\) is a claw or \((v_{1},\ldots,v_{t-1},w_{1},w_{k},w_{j},v_{n},v_{1})\) is an incomplete cycle, as in Case III.1.
Case IV: \(0<m<t-1\).
If \(0<m<t-1\), then \(|v_{m+1}|\geq 3\) or else there is a claw at \(v_{m}\), and \(m+1\in A_{1}\) or else the subgraph induced by \(\{v_{1},\ldots,v_{m+1},v_{t},w_{1}\}\) contains an incomplete cycle. It follows, since every element of \(\{v_{1},\ldots,v_{t-1}\}\) is just right [11, Lemma 8.1] and \(w_{1}\not\sim v_{t}\), that \(w_{1}=-e_{1}+d_{m}+\ldots+d_{t-1}\), and furthermore that \(|v_{i}|=2\) for all \(m+2\leq i\leq t-1\). Since \(|v_{m+1}|\geq 3\), we have \(n=t\) by [11, Lemma 8.2].
Case IV.1: \(|w_{5}|=2\).
If \(|w_{5}|=2\), then \(|w_{6}|=|w_{7}|=|w_{8}|=2\) or else there is an incomplete cycle, as before. It follows that \(w_{2}\) and \(w_{3}\) are unloaded, and \(|w_{2}|\), \(|w_{3}|\geq 3\). Let \(j\in\{2,3\}\). If \(A_{j}\cap A_{1}=\emptyset\), then \(A_{j}\subset\{1,\ldots,m-1,t\}\). If \(A_{j}=\{t\}\), then \((v_{t};v_{1},v_{r},w_{j})\) is a claw for the unique \(2\leq r\leq m+1\) with \(|v_{r}|=r+1\). If \(A_{j}\cap\{1,\ldots,m-1\}\neq\emptyset\), then there is some \(k\leq m\) with \(|v_{k}|\geq 3\) such that \(w_{j}\cdot v_{k}=1\), and some \(l<k\) such that \(w_{j}\cdot v_{l}=-1\), thus the subgraph induced by \(\{v_{1},\ldots,v_{k},v_{t},w_{j}\}\) contains an incomplete cycle. It follows that \(A_{j}\cap A_{1}\neq\emptyset\), so either \(t-1\in A_{j}\) or \(A_{j}=\{m\}\), in which case the subgraph induced by \(\{v_{1},\ldots,v_{m+1},v_{t},w_{j}\}\) contains an incomplete cycle. If \(t-1\in A_{j}\), then either \(w_{j}=-e_{j}+d_{t-1}\) or \(w_{j}=-e_{j}+d_{t-1}+d_{t}\); \(|A_{j}\cap A_{1}|=1\) and \(A_{j}\cap\{1,\ldots,m-1\}\neq 0\), in which case there is a pair \(1\leq l<k\) with \(v_{l}\) and \(v_{k}\) in separate components of \(G(\bar{\mathcal{V}})\), \(w_{j}\cdot v_{k}=1\), and \(w_{j}\cdot v_{l}=-1\), and so there is an incomplete cycle in the subgraph induced by \(\{v_{1},\ldots,v_{t},w_{j}\}\); or \(w_{j}=-e_{j}+d_{t-2}+d_{t-1}\) and \(m=t-2\), but then \(w_{1}\natural w_{j}\) and there is a sign error between \(w_{1}\) and \(w_{j}\) mediated by \(v_{t}\). It follows that for \(\{j,k\}=\{2,3\}\), \(w_{j}=-e_{j}+d_{t-1}\) and \(w_{k}=-e_{k}+d_{t-1}+d_{t}\), or or else either \(w_{j}\cdot w_{k}=2\) or \(A_{j}=A_{k}=\{n\}\) and so there is a sign error between \(w_{j}\) and \(w_{k}\) mediated by \(v_{t}\), but then
\((v_{m+1},w_{j},w_{k})\) is a heavy triple.
Case IV.2: \(|w_{5}|\geq 3\) and \(A_{5}\cap A_{1}=\emptyset\).
If \(|w_{5}|\geq 3\) and \(A_{5}\cap A_{1}=\emptyset\), then, as in the argument for \(w_{j}\) in Case IV.1, either \(A_{5}=\{t\}\) and \((v_{t};v_{1},v_{r},w_{5})\) is a claw, or \(w_{5}\cdot v_{k}=1\) for some \(r\leq k<t\) and \(w_{5}\cdot v_{l}=-1\) for some \(1\leq l<k\), so the subgraph induced by \(\{v_{1},\ldots,v_{k},v_{t},w_{5}\}\) contains an incomplete cycle.
Case IV.3: \(A_{5}\cap A_{1}\neq\emptyset\).
If \(A_{5}\cap A_{1}\neq\emptyset\), then, as in the argument for \(w_{j}\) in Case IV.1, either \(w_{5}=-e_{5}+d_{t-1}\) or \(w_{5}=-e_{5}+d_{t-1}+d_{t}\). If \(|w_{2}|=|w_{3}|=2\), then \((w_{1};v_{m},w_{2},w_{3})\) is a claw, and, for \(j\in\{2,3\}\), if \(|w_{j}|\geq 3\) and \(w_{j}\cdot-e_{1}=-1\), then, letting \(\{k,l\}=\{j,5\}\), \(A_{k}=\{t-1\}\) and \(A_{l}=\{t-1,t\}\), but then \((v_{m+1},w_{j},w_{5})\) is a heavy triple.
Case V: \(m=0\).
If \(m=0\), then \(\{0,\ldots,t-1\}\subset A_{1}\), so \(|v_{i}|=2\) for all \(1\leq i\leq t-1\). If \(t\in A_{1}\), then \(2\leq w_{1}\cdot v_{t}\leq|w_{1}|-3\), which is absurd. Suppose that \(w_{1}=-e_{1}+d_{0}+\ldots+d_{t-1}+d_{r}\) for some \(t<r\). Then \(2\leq w_{1}\cdot v_{t}=|w_{1}|-2\), so \(w_{1}-v_{t}\) is reducible. Suppose \(w=-e_{1}+d_{r}+d_{t}-d_{0}=x+y\) for some \(x,y\in(\tau)^{\perp}\) and \(x\cdot y\geq 0\). Then, without loss of generality, either \(x=w\) and \(y=0\), or \(|x|=3\), \(|y|=2\), and \(x\cdot y=0\). We must have \(y|_{E_{8}}=0\) if \(y\in(\tau)^{\perp}\), so we must have \(x=-e_{1}+d_{k}\) for some \(k\in\{0,t,r\}\), which is absurd since \(e_{1}\cdot\tau=t+d_{r}\cdot\sigma>\max\{d_{0}\cdot\sigma,d_{t}\cdot\sigma,d_ {r}\cdot\sigma\}\). Conclude that \(w_{1}=-e_{1}+d_{0}+\ldots+d_{t-1}\) and \(w_{1}\prec v_{t}\). It follows that \(t=n\); if not, then either \(|v_{t+1}|=2\), in which case \((v_{t};v_{1},v_{t+1},w_{1})\) is a claw, or \(v_{t+1}=-d_{t+1}+d_{t}+d_{t-1}\), in which case \((v_{1},\ldots,v_{t+1},w_{1},v_{t},v_{1})\) is an incomplete cycle.
**Proposition 5.10**.: _If \(t\geq 2\) and \(v_{t}\) is tight, then \(\sigma=(1,\ldots,1,n+1)\in\mathbb{Z}^{n+1}\), and one of the following holds:_
1. \(s^{*}=(n,n+1,1,0,0,0,0,0)\)_,_
2. \(s^{*}=(n,1,n+1,0,0,0,0,0)\)_,_
3. \(s^{*}=(n,n+2,1,0,0,0,0,0)\)_,_
4. \(s^{*}=(n,1,n+2,0,0,0,0,0)\)_,_
5. \(s^{*}=(n,n+1,0,0,1,0,0,0)\)_,_
6. \(s^{*}=(n,1,0,0,n+1,0,0,0)\)_,_
7. \(s^{*}=(n,n+2,0,0,1,0,0,0)\)_,_
8. \(s^{*}=(n,1,0,0,n+2,0,0,0)\)_,_
9. \(s^{*}=(n,0,n+1,0,1,0,0,0)\)_,_
10. \(s^{*}=(n,0,1,0,n+1,0,0,0)\)_,_
11. \(s^{*}=(n,0,n+2,0,1,0,0,0)\)_, or_
12. \(s^{*}=(n,0,1,0,n+2,0,0,0)\)_._
Proof.: First note that for \(w_{j}\in\mathcal{W}\) with \(j\neq 1\) and \(w_{j}\) unloaded, if \(A_{j}\neq\emptyset\), then \(A_{j}\in\{\{t\},\{t,t-1\},\{t-1\}\}\). Recall that if \(A_{j}=\{t\}\), then \(w_{j}\dagger v_{t}\) for all \(j\in\{1,\ldots,8\}\). Note that \(|w_{6}|=|w_{7}|=|w_{8}|=2\) or else either \((v_{t};v_{1},w_{1},w_{j})\) is a claw or \((v_{1},\ldots,v_{t-1},w_{j},w_{1},v_{t},v_{1})\) is an incomplete cycle for some \(j\in\{6,7,8\}\).
Case I: \(|w_{5}|=2\).
Suppose that \(|w_{5}|=2\). It follows that \(w_{2}\) and \(w_{3}\) are both unloaded, and furthermore that \(w_{j}=-e_{j}+d_{t-1}\) and \(w_{k}=-e_{k}+d_{t-1}+d_{t}\) or \(w_{k}=-e_{k}+d_{t}\) for \(\{j,k\}=\{2,3\}\). If \(w_{4}\) is loaded, then \(w_{4}|_{E_{8}}=-e_{4}+e_{2}+e_{1}\) or else \(s_{4}^{*}\leq|\sigma|_{1}\), and therefore \(w_{4}=-e_{4}+e_{2}+e_{1}+d_{t}\) or else \(w_{4}\cdot w_{1}\geq 0\) and either \((w_{5};w_{1},w_{4},w_{6})\) is a claw or \((w_{1},w_{4},w_{5})\) is a negative triangle, but then \((v_{t},w_{1},w_{5},w_{4},v_{t})\) is an incomplete cycle. Conclude that \(w_{4}\) is unloaded. If \(t\in A_{4}\), then \((v_{1},\ldots,v_{t-1},w_{4},w_{1},v_{t},v_{1})\) is an incomplete cycle. If \(w_{4}=-e_{4}+d_{t}\), then \((v_{t};v_{1},w_{1},w_{4})\) is a claw. Conclude that \(|w_{4}|=2\), thus \(s^{*}=(n,n+1,1,0,0,0,0,0)\), \(s^{*}=(n,1,n+1,0,0,0,0,0)\), \(s^{*}=(n,n+2,1,0,0,0,0,0)\), or \(s^{*}=(n,1,n+2,0,0,0,0,0)\).
Case II: \(w_{5}=-e_{5}+d_{t-1}\).
Suppose now that \(w_{5}=-e_{5}+d_{t-1}\). If \(w_{j}\) is loaded and \(w_{j}\cdot e_{1}=0\) for \(j\in\{2,3\}\), then \(A_{j}=\{t\}\) or else \((v_{1},\ldots,v_{t-1},w_{j},w_{1},v_{t},v_{1})\) is an incomplete cycle. But then \((v_{t};v_{1},w_{1},w_{j})\) is a claw. If \(w_{2}\) is loaded but \(|w_{3}|=2\), then \(w_{2}\sim w_{3}\) and \(w_{3}\sim w_{1}\), so either \(w_{2}=-e_{2}+e_{4}+d_{t-1}+d_{t}\) and \((v_{1},\ldots,v_{t-1},w_{5},w_{2},w_{3},w_{1},v_{t},v_{1})\) is an incomplete cycle, or \(w_{2}=-e_{2}+e_{4}+d_{t}\), and \((v_{t},w_{1},w_{3},w_{2},v_{t})\) is an incomplete cycle. It follows that \(w_{j}=-e_{j}+d_{t}\) or \(w_{j}=-e_{j}+d_{t-1}+d_{t}\) and \(|w_{k}|=2\) for \(\{j,k\}=\{2,3\}\). Since \(|\sigma|_{1}+1=2n+2\), \(s_{1}^{*}=n\), \(s_{2}^{*}\leq n+2\), and \(s_{5}^{*}=1\), we must have \(w_{4}|_{E_{8}}=-e_{4}+e_{2}+e_{1}+e_{5}\), in which case \(w_{4}=-e_{4}+e_{2}+e_{1}+e_{5}+d_{t}\), or else \((w_{6};w_{4},w_{5},w_{7})\) is a claw. But then \((v_{1},\ldots,v_{t-1},w_{5},w_{4},v_{t},v_{1})\) is an incomplete cycle. Conclude that \(w_{4}\) is unloaded. If \(t-1\in A_{4}\), then \((v_{1},\ldots,v_{t-1},w_{5},w_{4},w_{1},v_{t},v_{1})\) is an incomplete cycle, and if \(w_{4}=-e_{4}+d_{t}\), then \((v_{t};v_{1},w_{1},w_{4})\) is a claw. Conclude that \(|w_{4}|=2\) so \(s^{*}=(n,0,n+1,0,1,0,0,0)\), \(s^{*}=(n,n+1,0,0,1,0,0,0)\), \(s^{*}=(n,0,n+2,0,1,0,0,0)\), or \(s^{*}=(n,n+2,0,0,1,0,0,0)\).
Case III: \(w_{5}=-e_{5}+d_{t}\).
Suppose now that \(w_{5}=-e_{5}+d_{t}\). Citing the same argument as from the beginning of Case II, we see that \(w_{2}\) and \(w_{3}\) are both unloaded. It follows that \(w_{j}=-e_{j}+d_{t-1}\) and \(|w_{k}|=2\) for \(\{j,k\}=\{2,3\}\). If \(w_{4}\) is loaded, then either \(w_{2}=-e_{2}+d_{t-1}\) and \(w_{4}|_{E_{8}}=-e_{4}+e_{2}+e_{1}+e_{5}\) or \(|w_{2}|=2\) and \(w_{4}|_{E_{8}}=-e_{4}+e_{1}+e_{5}\), and in either case we see that \(w_{4}\sim w_{6}\), so \(w_{4}\dagger w_{5}\) and \(w_{4}\cdot w_{5}=-1\) or else \((w_{6};w_{4},w_{5},w_{7})\) is a claw. But then \(A_{4}=\{t-1\}\), so \((v_{1},\ldots,v_{t-1},w_{4},v_{t},v_{1})\) is an incomplete cycle or \((v_{1},v_{t},w_{4})\) is a negative triangle. Conclude that \(w_{4}\) is unloaded. If \(t-1\in A_{4}\), then \((v_{1},\ldots,v_{t-1},w_{4},w_{1},v_{t},v_{1})\) is an incomplete cycle, and if \(w_{4}=-e_{4}+d_{t}\), then \((v_{t},w_{4},w_{5})\) induces a heavy triple. Conclude that \(|w_{4}|=2\), so \(s^{*}=(n,0,1,0,n+1,0,0,0)\) or \(s^{*}=(n,1,0,0,n+1,0,0,0)\).
Case IV: \(w_{5}=-e_{5}+d_{t-1}+d_{t}\).
Suppose lastly that \(w_{5}=-e_{5}+d_{t-1}+d_{t}\). Citing the same argument as from the beginning of Case III, it follows that \(w_{j}=-e_{j}+d_{t-1}\) and \(|w_{k}|=2\) for \(\{j,k\}=\{2,3\}\). If \(w_{4}\) is loaded then either \(w_{4}|_{E_{8}}=-e_{4}+e_{2}+e_{1}+e_{5}\) or \(w_{4}|_{E_{8}}=-e_{4}+e_{1}+e_{5}\), and in both cases we see that \(w_{4}\sim w_{6}\), so \(w_{4}\dagger w_{5}\) and \(w_{4}\cdot w_{5}=-1\), but then \(A_{4}=\emptyset\), which is absurd since then \(s_{4}^{*}\leq n+2=|\sigma|_{1}+1\). Conclude that \(w_{4}\) is unloaded. If \(w_{4}\) is unloaded and \(t-1\in A_{4}\), then \((v_{1},\ldots,v_{t-1},w_{4},w_{1},v_{t},v_{1})\) is an incomplete cycle. If \(w_{4}=-e_{4}+d_{t}\), then \((v_{t};v_{1},w_{1},w_{4})\) is a claw. Conclude that \(|w_{4}|=2\), so \(s^{*}=(n,0,1,0,n+2,0,0,0)\) or \(s^{*}=(n,1,0,0,n+2,0,0,0)\).
### When \(v_{1}\) is tight
**Lemma 5.11**.: _If \(v_{1}\) is tight, then \(v_{2}\sim v_{1}\)._
Proof.: Either \(|v_{2}|=2\), so \(v_{2}\mathord{\dagger}v_{1}\) or \(v_{2}\prec v_{1}\) since \(v_{2}\cdot v_{1}=-1\), or \(v_{2}=-d_{2}+d_{1}+d_{0}\), in which case \(|v_{2}|=3\), \(v_{2}\cdot v_{1}=1\), and so either \(v_{2}\mathord{\dagger}v_{1}\) or \(v_{2}\pitchfork v_{1}\). Suppose, by way of contradiction, that \(v_{2}\pitchfork v_{1}\). Then \((T(v_{1})\cup T(v_{2}))\setminus(T(v_{1})\cap T(v_{2}))\) consists of two intervals \(T_{0}\) and \(T_{1}\), and without loss of generality, either \(|T_{0}|=4\) and \(|T_{1}|=2\) or \(|T_{0}|=|T_{1}|=3\). Suppose first that \(|T_{0}|=4\) and \(|T_{1}|=2\). Since \(\varepsilon_{1}=\varepsilon_{2}\), it follows that there is some \(x\in(\tau)^{\perp}\cap\mathbb{Z}^{n+1}\) with \(|x|=2\) such that \(x\cdot v_{1}=1\) and \(x\cdot v_{2}=-1\). Write \(x=\sum_{i=0}^{n}x_{i}d_{i}\), and we must have \(2x_{0}-x_{1}=1\) and \(x_{0}+x_{1}-x_{2}=-1\), and therefore \(x_{0}=x_{2}=0\) and \(x_{1}=-1\), or else \(|x|>2\). But then \(x_{i}=0\) for all but one \(2\leq i\leq n\), so \(x\cdot\sigma\neq 0\), which is absurd. Suppose instead that \(|T_{0}|=3\) and \(|T_{1}|=3\). Then there is some \(x\in(\tau)^{\perp}\) with \(|x|=3\) such that \(x\cdot v_{1}=2\) and \(x\cdot v_{2}=-1\). Then either \(x=r+d_{i}\) for some \(r\in E_{8}\oplus(0)\), in which case \(x=r+d_{0}\) or else \(x\cdot v_{1}\neq 2\), and so \(x\cdot v_{2}=1\), or \(x=\sum_{i=0}^{n}x_{i}d_{i}\), in which case \(2x_{0}-x_{1}=2\), so \(x_{0}=1\) and \(x_{1}=0\), and \(1-x_{2}=-1\), so \(|x|>3\). Conclude that \(v_{2}\mathord{\dagger}v_{1}\) if \(v_{2}=-d_{2}+d_{1}+d_{0}\).
**Lemma 5.12**.: _If \(v_{1}\) is tight and \(v_{3}=-d_{3}+d_{2}+d_{1}\), then \(v_{3}\mathord{\dagger}v_{1}\)._
Proof.: Suppose to the contrary. Then \((T(v_{1})\cup T(v_{3}))\setminus(T(v_{1})\cap T(v_{3}))\) consists of two intervals \(T_{0}\) and \(T_{1}\), and without loss of generality, either \(|T_{0}|=4\) and \(|T_{1}|=2\) or \(|T_{0}|=|T_{1}|=3\). Suppose there is some \(x\in(\tau)^{\perp}\) with \(|x|=2\), \(x\cdot v_{1}=1\), and \(x\cdot v_{3}=1\). Then, writing \(x=\sum_{i=0}^{n}x_{i}d_{i}\), \(2x_{0}-x_{1}=1\) and \(x_{1}+x_{2}-x_{3}=1\), so \(x_{0}=x_{1}=1\), but then \(x\cdot\tau\neq 0\). Suppose there is some \(x\in(\tau)^{\perp}\) with \(|x|=3\), \(x\cdot v_{1}=2\), and \(x\cdot v_{3}=1\). Then, \(2x_{0}-x_{1}=2\), so \(x_{0}=1\) and \(x_{1}=0\), and \(x_{2}-x_{3}=1\), so either \(x_{2}=0\) and \(x_{3}=-1\), in which case \(x\cdot\tau=\sigma_{0}-\sigma_{3}\pm\sigma_{j}\neq 0\) since \(|\sigma_{j}|>|\sigma_{3}-\sigma_{1}|\) for \(j\geq 4\), or \(x_{2}=1\) and \(x_{3}=0\), in which case \(x\cdot\tau=\sigma_{0}+\sigma_{2}+\sigma_{j}\neq 0\) since \(|\sigma_{j}|\geq\sigma_{0}+\sigma_{2}\) for \(j\geq 4\). Conclude that \(v_{3}\mathord{\dagger}v_{1}\).
**Corollary 5.13**.: _If \(v_{1}\) is tight, then for all \(2\leq j\leq n\) there is a path from \(v_{j}\) to \(v_{1}\) in \(G(\mathcal{V})\)._
**Lemma 5.14**.: _If \(v_{1}\) is tight and \(n\geq 2\), then \(w_{1}=-e_{1}+d_{1}+\ldots+d_{n}\) and \(|v_{i}|=2\) for all \(2\leq i\leq n\)._
Proof.: Let \(m=\min(A_{1})\). We break our analysis into cases based on \(m\).
Case I: \(m=0\).
If \(m=0\), then either \(w_{1}=-e_{1}+d_{0}\) and \(w_{1}\prec v_{1}\), \(w_{1}=-e_{1}+d_{0}+d_{g}\) for some \(2\leq g\) and \(w_{1}\pitchfork v_{1}\), or \(1\in A_{1}\) and \(w_{1}\mathord{\dagger}v_{1}\).
Case I.1: \(w_{1}=-e_{1}+d_{0}\).
Suppose \(w_{1}=-e_{1}+d_{0}\). It follows that \(v_{i}\) is just right for all \(2\leq i\leq n\), or else \(v_{g}=-d_{g}+d_{g-1}+d_{0}\) for some \(2<g\), in which case there is a sign error between \(v_{g}\) and \(w_{1}\) mediated by \(v_{1}\). If \(j\in\{2,3,5\}\), \(|w_{j}|\geq 3\), and \(w_{j}\cdot-e_{1}=-1\), then either \(0\in A_{j}\) or \(1\in A_{j}\), or else \((v_{1},\ldots,v_{m^{\prime}},w_{j},w_{1},v_{1})\) is an incomplete cycle for \(m^{\prime}=\min(A_{j})\).
Suppose now that \(0\in A_{j}\). It follows that \(w_{j}\pitchfork v_{1}\) or else either \(|v_{2}|=2\) and \((v_{1};v_{2},w_{1},w_{j})\) is a claw, or \(v_{2}=-d_{2}+d_{1}+d_{0}\) and \((v_{1},w_{1},v_{2},w_{j},v_{1})\) is a claw. Conclude that \(w_{j}=-e_{j}+d_{0}+d_{g}\) for some \(2\leq g\).
Suppose instead that \(0\notin A_{j}\) but \(1\in A_{j}\). It follows that \(w_{j}\mathord{\dagger}v_{1}\) or else there is a sign error between \(w_{1}\) and \(w_{j}\) mediated by \(v_{1}\). If \(v_{2}=-d_{2}+d_{1}+d_{0}\), then \(2\in A_{j}\) or else \((v_{1},v_{2},w_{j})\) is
a negative triangle, but then \((v_{1},v_{2},w_{1},w_{j},v_{1})\) is an incomplete cycle. Conclude that \(|v_{i}|=2\) for all \(2\leq i\leq n\) and \(A_{j}=\{1,2,\ldots,n\}\).
We further break this case down depending on whether \(|w_{5}|=2\), \(w_{5}=-e_{5}+d_{0}+d_{g}\) for some \(2\leq g\), or \(|v_{i}|=2\) for all \(2\leq i\leq n\) and \(w_{5}=-e_{5}+d_{1}+\ldots+d_{n}\).
Case I.1.a: \(|w_{5}|=2\).
If \(|w_{5}|=2\), then \(|w_{6}|=|w_{7}|=|w_{8}|=2\), or else the subgraph induced by \(\{v_{1},\ldots,v_{n},w_{1},w_{5},\)\(w_{6},w_{7},w_{8}\}\) contains an incomplete cycle. It follows that \(w_{2}\) and \(w_{3}\) are both unloaded and \(|w_{2}|\), \(|w_{3}|\geq 3\), or else there is a claw at \(w_{1}\). Let \(\{j,k\}=\{2,3\}\). Then, without loss of generality, \(w_{j}=-e_{j}+d_{0}+d_{g}\) for some \(2\leq g\) and \(w_{k}=-e_{k}+d_{1}+\ldots+d_{n}\) and \(|v_{i}|=2\) for all \(2\leq i\leq n\), or else either \(2\leq w_{j}\cdot w_{k}\leq|w_{j}|-2\) or \(w_{j}\cdot w_{k}=1\) and \(w_{j},w_{k}\pitchfork v_{1}\), in which case there is a sign error between \(w_{j}\) and \(w_{k}\) mediated by \(v_{1}\). However, then \(w_{j}=-e_{j}+d_{0}+d_{n}\) and there is a sign error between \(w_{j}\) and \(v_{2}+\ldots+v_{n}\) mediated by \(v_{1}\). Conclude that \(|w_{5}|\geq 3\).
Case I.1.b: \(w_{5}=-e_{5}+d_{0}+d_{g}\).
Suppose that \(w_{5}=-e_{5}+d_{0}+d_{g}\). We must then have \(|w_{j}|\geq 3\) for some \(j\in\{2,3\}\) with \(w_{j}\cdot-e_{1}=-1\), and so by the same argument as in Case I.1.a, we must have \(|v_{i}|=2\) for all \(2\leq i\leq n\) and \(A_{j}=\{1,\ldots,n\}\), but then \(w_{5}=-e_{5}+d_{0}+d_{n}\) and there is a sign error between \(w_{5}\) and \(v_{2}+\ldots+v_{n}\) mediated by \(v_{1}\).
Case I.1.c: \(|v_{i}|=2\) for all \(2\leq i\leq n\) and \(w_{5}=-e_{5}+d_{1}+\ldots+d_{n}\).
If \(|v_{i}|=2\) for all \(2\leq i\leq n\) and \(w_{5}=-e_{5}+d_{1}+\ldots+d_{n}\), then there is some \(j\in\{2,3\}\) with \(|w_{j}|\geq 3\) and \(w_{j}\cdot-e_{1}=-1\), so \(w_{j}=-e_{j}+d_{0}+d_{n}\) by the arguments outlined in Cases I.1.a and I.1.b, but then \((v_{1},\ldots,v_{n},w_{j},w_{5},v_{1})\) is an incomplete cycle.
Case I.2: \(w_{1}=-e_{1}+d_{0}+d_{g}\) for some \(2\leq g\).
Suppose that \(w_{1}=-e_{1}+d_{0}+d_{g}\) for some \(2\leq g\). Then either \(g=n\), or \(|v_{g+1}|\geq 3\), which is absurd since then \(w_{1}\cdot v_{g+1}=1\), and so \(v_{g+1}\) separates \(w_{1}\) from \(v_{1}\) in \(G(\mathcal{V})\) but \(v_{g+1}\not\prec v_{1}\). Conclude that \(g=n\), and furthermore that \(0\in\operatorname{supp}(v_{n})\), and moreover that \(|v_{n}|\geq 3\), or else \(w_{1}\dagger v_{n}\) and so \(v_{n}\) separates \(w_{1}\) from \(v_{1}\) in \(G(\mathcal{V})\) but \(v_{n}\not\prec v_{1}\). Then, since \(|w_{1}|=4\), \(|v_{1}|=5\), and \(w_{1}\cdot v_{1}=2\), there exists some \(x\in(\tau)^{\perp}\) with \(|x|=2\) and \(x\cdot w_{1}=x\cdot v_{1}=-1\). Since \(-d_{0}+d_{1}\not\in(\tau)^{\perp}\), we must have \(x=d_{1}-d_{n}\), in which case \(|v_{i}|=2\) for all \(2\leq i\leq n\), a contradiction.
Case I.3: \(1\in A_{1}\) and \(w_{1}\dagger v_{1}\).
If \(1\in A_{1}\), then \(w_{1}=-e_{1}+d_{0}+\ldots+d_{n}\), and therefore \(|v_{i}|=2\) for all \(2\leq i\leq n\) or else either \(v_{2}=-d_{2}+d_{1}+d_{0}\), which is absurd since then \(v_{2}\dagger v_{1}\) and \(v_{2}\dagger w_{1}\), or \(v_{3}=-d_{3}+d_{2}+d_{1}\), which is absurd since then \((v_{1},v_{3},w_{1})\) is a negative triangle. It follows that if \(|w_{j}|\geq 3\) and \(w_{j}\cdot-e_{1}=-1\), then \(A_{j}=\{n\}\), or else either \(w_{j}=-e_{j}+d_{0}\), in which case \((v_{1};v_{2},w_{1},w_{j})\) is a claw, or \(w_{j}=-e_{j}+d_{0}+d_{n}\), in which case there is a sign error between \(w_{j}\) and \(v_{1}+\ldots+v_{n}\) mediated by \(v_{1}\), or \(A_{j}=\{n-1,n\}\), in which case either \(n=2\) and \((v_{1},w_{1},w_{j})\) is a negative triangle, or \(n>2\) and \((v_{n-1};v_{n-2},v_{n},w_{j})\) is a claw.
Case I.3.a: \(|w_{5}|=2\).
If \(|w_{5}|=2\), then \(|w_{6}|=|w_{7}|=|w_{8}|=2\) or else there is an incomplete cycle, so \(w_{2}\) and \(w_{3}\) are unloaded and \(|w_{2}|\), \(|w_{3}|\geq 3\) or else there is a claw at \(w_{1}\). We must then have that
\(w_{2}=-e_{2}+d_{n}\) and \(w_{3}=-e_{3}+d_{n}\), but then \((v_{1},w_{2},w_{3})\) induces a heavy triple.
Case I.3.b: \(w_{5}=-e_{5}+d_{n}\).
If \(w_{5}=-e_{5}+d_{n}\), then we must have \(|w_{j}|\geq 3\) and \(w_{j}\cdot-e_{1}=-1\) for some \(j\in\{2,3\}\) or else \(|w_{2}|=|w_{3}|=2\) and \((w_{1};v_{1},w_{2},w_{3})\) is a claw. But then \(A_{j}=\{n\}\) and \((v_{1},w_{j},w_{5})\) induces a heavy triple.
Case II: \(m=1\).
If \(m=1\), then by work above we see that \(w_{1}{\dagger}v_{1}\). If \(|v_{2}|=2\), then \(2\in A_{1}\), in which case \(w_{1}=-e_{1}+d_{1}+\ldots+d_{n}\), and then either \(|v_{i}|=2\) for all \(2\leq i\leq n\), or \(v_{3}=-d_{3}+d_{2}+d_{1}\) or \(v_{3}=-d_{3}+d_{2}+d_{1}+d_{0}\), but in either case \(v_{3}{\dagger}v_{1}\) and \(v_{3}{\dagger}w_{1}\), which is absurd. Suppose now that \(v_{2}=-d_{2}+d_{1}+d_{0}\). We must have \(2\in A_{1}\) or else \((v_{1},v_{2},w_{1})\) is a negative triangle, and so either \(|v_{i}|=2\) for all \(3\leq i\leq n\) and \(A_{1}=\{1,\ldots,n\}\), or \(v_{3}=-d_{3}+d_{2}+d_{0}\), in which case either \(3\in A_{1}\) and \((v_{1};v_{2},v_{3},w_{1})\) is a claw or \(3\not\in A_{1}\) and \((v_{1},v_{3},w_{1})\) is a negative triangle. Conclude that \(A_{1}=\{1,\ldots,n\}\), \(|v_{i}|=2\) for all \(3\leq i\leq n\), and \(|v_{2}|\in\{2,3\}\).
Suppose that \(v_{2}=-d_{2}+d_{1}+d_{0}\) and let \(|w_{j}|\geq 3\) with \(w_{j}\cdot-e_{1}=-1\). If \(A_{j}\cap A_{1}=\emptyset\), then \(w_{j}=-e_{j}+d_{0}\) and \(w_{j}\prec v_{1}\), but then \((v_{1},v_{2},w_{j},w_{1},v_{1})\) is an incomplete cycle. If \(A_{j}\cap A_{1}=\{1\}\), then \((v_{1},v_{2},w_{j})\) is a negative triangle. If \(n\geq 3\) and \(A_{j}\cap A_{1}=\{1,n\}\), then \((v_{1},w_{j},v_{n},\ldots,v_{2},v_{1})\) is an incomplete cycle. If \(A_{j}\cap A_{1}=\{n,n-1\}\), then either \(n=2\) and \((v_{1},w_{1},w_{j})\) induces a heavy triple, or \(n>2\) and there is a claw at \(v_{n-1}\). Conclude that \(A_{j}=\{n\}\). We must have \(|w_{j}|\), \(|w_{k}|\geq 3\) with \(w_{j}\cdot-e_{1}=w_{k}\cdot-e_{1}=-1\) for \(j\neq k\in\{2,3,5\}\) or else there is a claw at \(w_{1}\) or \(|w_{5}|=2\), \(w_{2}\) or \(w_{3}\) is loaded, and the subgraph induced by \(\{v_{1},\ldots,v_{n},w_{1},w_{5},w_{6},w_{7},w_{8}\}\) contains an incomplete cycle, but then \((v_{2},w_{j},w_{k})\) is a heavy triple. Conclude that \(v_{2}=-d_{2}+d_{1}\).
Case III: \(2\leq m\).
If \(2\leq m\), then either \(m=n\) or \(|v_{m+1}|\geq 3\) and \(m+1\in A_{1}\), or else there is an incomplete cycle or \(|v_{m+1}|=2\) and there is a claw at \(v_{m}\). Suppose that \(|v_{m+2}|\geq 3\). Then, either \(v_{m+2}=-d_{m+2}+d_{m+1}+d_{m}\) and \((v_{l},v_{m+2},w_{1})\) induces a heavy triple for \(l=\max\{i\leq m\colon|v_{i}|\geq 3\}\), or \(v_{m+2}=-d_{m+2}+d_{m+1}+d_{0}\), in which case \(v_{m+1}=-d_{m+1}+d_{m}+\ldots+d_{1}\) or else either \(v_{m+1}=-d_{m+1}+d_{m}+\ldots+d_{0}\) and \((v_{1};v_{2},v_{m+1},v_{m+2})\) is a claw if \(|v_{2}|=2\) and \((v_{1},v_{m+1},v_{2},v_{m+2},v_{1})\) is an incomplete cycle if \(v_{2}=-d_{2}+d_{1}+d_{0}\), or \(|v_{m+1}|\leq m\) and \((v_{1},\ldots,v_{m+1},v_{m+2},v_{1})\) is an incomplete cycle, and therefore \(|v_{i}|=2\) for all \(2\leq i\leq m\). It follows that \(m+2\in A_{1}\) or else \((v_{1},\ldots,v_{m},w_{1},v_{m+2},v_{1})\) is an incomplete cycle; thus, \(w_{1}=-e_{1}+d_{m}+\ldots+d_{n}\) and \(|v_{i}|=2\) for all \(m+3\leq i\leq n\). Note that \(\epsilon(w_{1})=\varepsilon(v_{1})\) since \(w_{1}\cdot v_{1}=0\) and \(w_{1}\cdot(v_{2}+\ldots+v_{m})=v_{1}\cdot(v_{2}+\ldots+v_{m})=-1\). Suppose instead that \(|v_{m+2}|=2\). Then \(|v_{i}|=2\) for all \(m+2\leq i\leq n\) and \(w_{1}=-e_{1}+d_{m}+\ldots+d_{n}\).
We now break this case into subcases based on whether \(m=n\), \(v_{m+2}=-d_{m+2}+d_{m+1}+d_{0}\), or \(|v_{i}|=2\) for all \(m+2\leq i\leq n\).
Case III.1: \(m=n\).
Suppose that \(m=n\), and suppose that \(|w_{j}|\geq 3\) for some \(w_{j}\cdot-e_{1}=-1\). Then, \(n\in A_{j}\) or else the subgraph induced by \(\{v_{1},\ldots,v_{n},w_{1},w_{j}\}\) contains an incomplete cycle. We must then either have \(w_{j}\cdot v_{n}=0\), or else \(w_{j}\sim v_{k}\) for some \(k\leq n-1\) with \(v_{n}\sim v_{k}\), or else \((v_{n};v_{k},w_{1},w_{j})\) is a claw.
Case III.1.a: \(|w_{5}|=2\).
If \(|w_{5}|=2\), then either \(|w_{6}|=|w_{7}|=|w_{8}|=2\) or \(v_{n}=-d_{n}+d_{n-1}+\ldots+d_{0}\) and \(w_{6}=-e_{6}+d_{n}+d_{0}\), or else the subgraph induced by \(\{v_{1},\ldots,v_{n},w_{1},w_{5},w_{6},w_{7},w_{8}\}\) contains an incomplete cycle. Furthermore, we must have \(|w_{2}|\), \(|w_{3}|\geq 3\) or else there is a claw at \(w_{1}\). It follows that there is some \(j\in\{2,3\}\) with \(|w_{j}|\geq 3\) and \(w_{j}\cdot-e_{1}=-1\), so \(n\in A_{j}\), and therefore \(|w_{6}|=|w_{7}|=|w_{8}|=2\) or else either \(0\in A_{j}\) and \(w_{j}\cdot w_{6}=2\), which is absurd, or \(w_{j}\dagger w_{6}\), in which case the subgraph induced by \(\{v_{1},\ldots,v_{n},w_{1},w_{j},w_{5},w_{6}\}\) contains an incomplete cycle. It follows that \(w_{2}\) and \(w_{3}\) are both unloaded and \(w_{2}\dagger w_{3}\). We must therefore have \(|v_{n}|\geq 3\) or else \(v_{n}=-d_{n}+d_{n-1}\) and then either \(w_{2}\cdot w_{3}=2\), which is absurd, or \((v_{n};v_{n-1},w_{1},w_{j})\) is a claw for some \(j\in\{2,3\}\). Let \(\{j,k\}=\{2,3\}\). If \(v_{n}=-d_{n}+d_{n-1}+\ldots+d_{l}\) for some \(l\leq n-2\), then either \(w_{2}\cdot w_{3}=2\), or, without loss of generality, \(w_{j}=-e_{j}+d_{l}+d_{n}\) and \(w_{k}=-e_{k}+d_{n-1}+d_{n}\), but then either \(n\geq 3\) and \((v_{1},\ldots,v_{l},w_{j},w_{k},v_{n-1},\ldots v_{1})\) is an incomplete cycle or \(n=2\), \(w_{j}\cdot v_{1}=2\) and \(w_{k}\cdot v_{1}=-1\), so there is a sign error between \(w_{j}\) and \(w_{k}\) mediated by \(v_{1}\).
Case III.1.b: \(n\in A_{5}\).
If \(n\in A_{5}\), then \(|w_{j}|\geq 3\) for some \(j\in\{2,3\}\) with \(w_{j}\cdot-e_{1}=-1\) or else \(|w_{2}|=|w_{3}|=2\) and \((w_{1};v_{n},w_{2},w_{3})\) is a claw. Then \(w_{j}\dagger w_{5}\), and the argument from Case III.1.a produces an incomplete cycle or a sign error between \(w_{j}\) and \(w_{5}\).
Case III.2: \(m+1=n\).
Suppose now that \(m+1=n\), and that \(|w_{j}|\geq 3\) and \(w_{j}\cdot-e_{1}=-1\). If \(A_{j}\cap A_{1}=\emptyset\), then \(w_{j}\dagger v_{1}\) and \(w_{j}\sim v_{i}\) for some \(i\leq m\), so the subgraph induced by \(\{v_{1},\ldots,v_{n},w_{1},w_{j}\}\) contains an incomplete cycle. If \(m\in A_{j}\), then \(A_{j}\cap A_{1}=\{m\}\) or else \(A_{j}\cap A_{1}=\{m,m+1\}\), and then \(w_{j}\dagger w_{1}\) and \(w_{j}\dagger v_{l}\) for some \(l\leq m\), in which case there is either a heavy triple or an incomplete cycle. It follows that if \(m\in A_{j}\), then either \(w_{j}\cdot v_{m+1}=1\), hence \(w_{j}\dagger v_{m+1}\), so \(w_{j}\cdot v_{m}=0\) or else there is a claw at \(v_{m}\) or the subgraph induced by \(\{v_{1},\ldots,v_{n},w_{j}\}\) contains an incomplete cycle or a heavy triple. But then, either \(w_{j}\cdot v_{1}=v_{m+1}\cdot v_{1}=0\), in which case \((v_{1},v_{m+1},w_{j})\) induces a heavy triple; or \(|\{x\in\{v_{m+1},w_{j}\}\colon x\cdot v_{1}\neq 0\}|=1\), in which case either \(w_{j}\pitchfork v_{1}\) and \(v_{m+1}\cdot v_{1}=0\), so \(w_{j}=-e_{j}+d_{m}+d_{0}\), \(v_{m}=-d_{m}+d_{m-1}+\ldots+d_{0}\), \(|v_{i}|=2\) for all \(2\leq i\leq m-1\), and \(v_{m+1}=-d_{m+1}+d_{m}+d_{m-1}\), so there is a sign error between \(w_{j}\) and \(v_{2}+\cdots+v_{m-1}+v_{m+1}\) mediated by \(v_{1}\); or \(w_{j}\dagger v_{1}\) and \(v_{m+1}\cdot v_{1}=0\), in which case \((v_{1},\ldots,v_{m+1},w_{j},v_{1})\) is an incomplete cycle; or \(w_{j}\cdot v_{1}=0\) and \(v_{m+1}\sim v_{1}\), in which case \(w_{j}\sim v_{i}\) for some \(2\leq i\leq m-1\) and \((v_{1},\ldots,v_{i},w_{j},v_{m+1},v_{1})\) is an incomplete cycle; or \(w_{j}\pitchfork v_{1}\) and \(v_{m+1}\dagger v_{1}\), so there is a sign error between \(w_{j}\) and \(v_{m+1}\); or \(w_{j}\dagger v_{1}\) and \(v_{m+1}\prec v_{1}\), in which case \(m=2\), \(v_{2}=-d_{2}+d_{1}\), \(v_{3}=-d_{3}+d_{2}+d_{0}\), and \(w_{j}=-e_{j}+d_{2}+d_{1}\), but then \((v_{1},v_{2},v_{3},w_{j},v_{1})\) is an incomplete cycle. Conclude that \(A_{j}\cap A_{1}=\{n\}\).
Case III.2.a: \(|w_{5}|=2\).
If \(|w_{5}|=2\), then \(|w_{6}|=|w_{7}|=|w_{8}|=2\), or else for \(l=\min\{j\in\{6,7,8\}\colon|w_{j}|\geq 3\}\) either \(w_{l}\dagger v_{i}\) for some \(i\neq m\), in which case there is an incomplete cycle, or \(\{m,m+1\}\subset A_{l}\), so \(w_{l}\cdot v_{1}=2\), which is absurd. It follows that \(w_{2}\) and \(w_{3}\) are unloaded and \(|w_{2}|\), \(|w_{3}|\geq 3\) or else there is a claw at \(w_{1}\). Then, \(n\in A_{2}\cap A_{3}\), so \(w_{2}\cdot w_{3}=1\) and \(w_{2}\dagger w_{3}\), and therefore \(v_{m+1}\) is just right or else \(v_{m+1}=-d_{m+1}+d_{m}+d_{0}\) and either \(0\in A_{2}\cap A_{3}\), in which case \(2\leq w_{2}\cdot w_{3}\leq|w_{2}|-2\), or \(w_{j}\dagger v_{m+1}\) but either \(w_{j}\cdot v_{1}=0\) or \(w_{j}\dagger v_{1}\) and \(w_{k}\pitchfork v_{1}\) for \(\{j,k\}=\{2,3\}\), which is absurd, or \(w_{2}\dagger v_{m+1}\) and \(w_{3}\dagger v_{m+1}\) and \((w_{2},w_{3},v_{m+1})\) is a heavy triple. However, if
\(v_{m+1}=-d_{m+1}+d_{m}+\ldots+d_{l}\) for some \(l\leq m-1\), then either \((v_{m+1},w_{2},w_{3})\) is a heavy triple, or either \(w_{j}\dagger v_{m+1}\) and \(w_{k}\cdot v_{m+1}=0\), so \(w_{k}\cdot v_{o}=1\), or \(w_{j}\cdot v_{l}=w_{k}\cdot v_{o}=1\) for \(\{j,k\}=\{2,3\}\) and some \(l+1\leq l\leq o\leq m\) with \(|v_{l}|\),\(|v_{o}|\geq 3\), so the subgraph induced by \(\{v_{1},\ldots,v_{n},w_{2},w_{3}\}\) contains either a heavy triple or an incomplete cycle.
Case III.2.b: \(n\in A_{5}\).
If \(n\in A_{5}\), then \(|w_{j}|\geq 3\) and \(w_{j}\cdot-e_{1}=-1\) for some \(j\in\{2,3\}\) or else \((w_{1};v_{m},w_{2},w_{3})\) is a claw, but then the argument in Case III.2.a produces a contradiction if \(n\in A_{j}\cap A_{5}\).
Case III.3: \(m+2\leq n\).
If \(m+2\leq n\), then either \(v_{m+2}=-d_{m+2}+d_{m+1}+d_{0}\), \(v_{m+2}=-d_{m+2}+d_{m+1}\), or \(v_{m+2}=-d_{m+2}+d_{m+1}+d_{m}\), in which case \((v_{l},v_{m+2},w_{1})\) induces a heavy triple for \(l=\max\{i\leq m\colon|v_{i}|\geq 3\}\).
Case III.3.a: \(v_{m+2}=-d_{m+2}+d_{m+1}+d_{0}\).
Suppose, by way of contradiction, that \(v_{m+2}=-d_{m+2}+d_{m+1}+d_{0}\). Then, either \(v_{m+1}=-d_{m+1}+d_{m}+\ldots+d_{1}\) and \(|v_{i}|=2\) for all \(2\leq i\leq m\), or \(m+2=4\), \(v_{3}=-d_{3}+d_{2}+d_{0}\), and either \(v_{2}=-d_{2}+d_{1}\) or \(v_{2}=-d_{2}+d_{1}+d_{0}\).
Case III.3.a.i: \(v_{m+1}=-d_{m+1}+d_{m}+\ldots+d_{1}\).
Suppose first that \(v_{m+1}=-d_{m+1}+d_{m}+\ldots+d_{1}\) and \(|v_{i}|=2\) for all \(2\leq i\leq m\). Let \(|w_{j}|\geq 3\) with \(w_{j}\cdot-e_{1}=-1\). If \(A_{j}\cap A_{1}=\emptyset\), then \(w_{j}=-e_{j}+d_{0}\), but then \((v_{1},w_{j},v_{m+2},v_{m+1},v_{1})\) is an incomplete cycle. If \(A_{j}\cap A_{1}=\{m\}\), then \(A_{j}\cap\{1,\ldots,m\}=\{m\}\) or else \(2\leq w_{j}\cdot v_{m+1}\), but then either \(A_{j}=\{m\}\) and \((v_{1},\ldots,v_{m},w_{j},v_{m+1},v_{1})\) is an incomplete cycle or \(w_{j}=-e_{j}+d_{0}+d_{m}\), but then \(w_{j}\pitchfork v_{1}\), \(w_{j}\dagger v_{m+1}\), and \(v_{m+1}\dagger v_{1}\), which is absurd. If \(m+1\in A_{j}\cap A_{1}\), then \(m+1=\max(A_{j})\) or else \(1\leq w_{j}\cdot w_{1}\) with equality if and only if \(A_{j}\cap A_{1}=\{m+1,n\}\), but then \((v_{1},\ldots,v_{m},w_{1},w_{j},v_{m+1},v_{1})\) is an incomplete cycle since \(m\not\in A_{j}\cap A_{1}\). If \(A_{j}\cap A_{1}=\{m+1\}\) then \(A_{j}=\{m+1\}\) and \((v_{1},v_{m+1},w_{j},v_{m+2},v_{1})\) is an incomplete cycle, and if \(A_{j}\cap A_{1}=\{m,m+1\}\), then \((v_{1},\ldots,v_{m},w_{1},w_{j},v_{m+2},v_{1})\) is an incomplete cycle. If \(\max(A_{j})\geq m+2\), then, in fact, either \(A_{j}=\{n\}\), which is absurd since \(v_{m+2}\prec v_{1}\), \(v_{m+2}\sim(v_{m+3}+\cdots+v_{n})\), \((v_{m+3}+\ldots+v_{n})\cdot v_{1}=0\), and \(w_{j}\sim(v_{m+3}+\cdots+v_{n})\), but \(w_{j}\cdot v_{1}=0\), or \(w_{j}=-e_{j}+d_{0}+d_{n}\), in which case either \(n\geq m+3\) and there is a sign error between \(w_{j}\) and \(v_{m+2}\) mediated by \(v_{1}\), or \(n=m+2\) and \(w_{j}\cdot v_{m+2}=0\). Since we must have \(|w_{j}|\geq 3\) and \(w_{j}\cdot-e_{1}=-1\) for at least one \(j\in\{2,3,5\}\), we may assume that \(n=m+2\).
Case III.3.a.i.\(\alpha\): \(|w_{5}|=2\).
If \(|w_{5}|=2\), then either \(|w_{6}|=|w_{7}|=|w_{8}|=2\), or else there is an incomplete cycle or \(w_{6}=-e_{6}+d_{0}+d_{m+2}\), in which case there is a sign error between \(w_{1}\) and \(w_{6}\) mediated by \(v_{1}+\ldots+v_{m}\). It follows that \(w_{2}\) and \(w_{3}\) are unloaded and \(|w_{2}|\), \(|w_{3}|\geq 3\), or else there is a claw at \(w_{1}\), but then \(w_{2}\cdot w_{3}=2\), which is absurd.
Case III.3.a.i.\(\beta\): \(w_{5}=-e_{5}+d_{0}+d_{m+2}\).
If \(w_{5}=-e_{5}+d_{0}+d_{m+2}\), then \(|w_{2}|=|w_{3}|=2\) or else \(A_{j}=\{0,m+2\}\) for some \(j\in\{2,3\}\) and \(w_{j}\cdot w_{5}=2\), but then \((w_{1};v_{m},w_{2},w_{3})\) is a claw.
Case III.3.a.ii: \(m+2=4\), \(v_{3}=-d_{3}+d_{2}+d_{0}\), and \(v_{2}=-d_{2}+d_{1}\).
Suppose that \(m+2=4\), \(v_{3}=-d_{3}+d_{2}+d_{0}\), and \(v_{2}=-d_{2}+d_{1}\), and let \(|w_{j}|\geq 3\)
with \(w_{j}\cdot-e_{1}=-1\). If \(0\in A_{j}\), then either \(3\in A_{j}\) or \(4\in A_{j}\), or else \((v_{1},v_{3},w_{j},w_{4},v_{1})\) is an incomplete cycle. If \(\{0,3\}\subset A_{j}\), then \(4\in A_{j}\) or else \(w_{j}\cdot v_{4}=2\), but then \(w_{j}\mathord{\dagger}w_{1}\) and \(w_{j}\mathord{\dagger}v_{4}\), so \((v_{1},v_{2},w_{1},w_{j},v_{4},v_{1})\) is an incomplete cycle. If \(\{0,4\}\subset A_{j}\), then \(3\not\in A_{j}\) or else \((v_{1},v_{2},w_{1},w_{j},v_{4})\) is an incomplete cycle, but then there is a sign error between \(w_{j}\) and \(v_{3}\) mediated by \(v_{1}\). If \(1\in A_{j}\), then \(2\in A_{j}\), and so \(3\in A_{j}\) since \(\sigma_{1}+\sigma_{2}>\sigma_{3}\), and so \(4\in A_{j}\) since \(\sigma_{1}+\sigma_{2}+\sigma_{3}>\sigma_{4}\), but then \((v_{1};v_{3},v_{4},w_{j})\) is a claw. If \(A_{j}\cap A_{1}=\{m\}\), then \(A_{j}=\{m\}\) and \((v_{1},v_{2},w_{j},v_{3},v_{1})\) is an incomplete cycle. If \(m+1\in A_{j}\), then, as in Case III.3.a.i, \(m+1=\max(A_{j})\), but then \(A_{j}=\{m+1\}\), or else \(m+2\in A_{j}\), but then \((v_{1},v_{3},w_{j},v_{4},v_{1})\) is an incomplete cycle. It follows then that either \(A_{j}=\{n\}\), in which case either \(n=4\) and \(w_{j}\mathord{\dagger}v_{4}\) but \(w_{j}\cdot v_{1}=0\), which is absurd, or \(n\geq 5\) and \((v_{5}+\ldots+v_{n}+w_{j})\mathord{\dagger}v_{4}\) but \((v_{5}+\ldots+v_{n}+w_{j})\cdot v_{1}=0\). Conclude that \(|w_{2}|=|w_{3}|=|w_{5}|=2\), so \((w_{1};w_{2},w_{3},w_{5})\) is a claw.
Case III.3.a.iii: \(m+2=4\), \(v_{3}=-d_{3}+d_{2}+d_{0}\), and \(v_{2}=-d_{2}+d_{1}+d_{0}\).
Suppose that \(m+2=4\), \(v_{3}=-d_{3}+d_{2}+d_{0}\), and \(v_{2}=-d_{2}+d_{1}+d_{0}\), and let \(|w_{j}|\geq 3\) with \(w_{j}\cdot-e_{1}=-1\). If \(0\in A_{j}\), then either \((v_{2},v_{4},w_{j})\) is a heavy triple; \(2\in A_{j}\), in which case \(3\in A_{j}\) or \(w_{j}\cdot v_{3}=2\), in which case \(1\in A_{j}\) or else \(2=w_{j}\cdot v_{1}<|w_{j}|-2\), but then there is a sign error between \(w_{j}\) and \(v_{3}\) mediated by \(v_{1}\); or \(4\in A_{j}\), in which case \(1\in A_{j}\) or else there is a sign error between \(w_{j}\) and \(v_{4}\) mediated by \(v_{1}\), but then \(2\in A_{j}\) or else \(w_{j}\cdot v_{2}=2\), and so \(3\in A_{j}\) or else \(w_{j}\cdot v_{3}=2\), but then there is a sign error between \(w_{j}\) and \(v_{3}\) mediated by \(v_{1}\). If \(1\in A_{j}\), then \(2\in A_{j}\) or else \(w_{j}\mathord{\dagger}v_{1}\) or \(w_{j}\pitchfork v_{1}\), which is absurd since \(w_{j}\mathord{\dagger}v_{2}\) and \(v_{2}\mathord{\dagger}v_{1}\), then \(3\in A_{j}\) since \(\sigma_{1}+\sigma_{2}>\sigma_{3}\), then \(4\in A_{j}\) since \(\sigma_{1}+\sigma_{2}+\sigma_{3}>\sigma_{4}\), but then \(2\leq w_{j}\cdot w_{1}<|w_{j}|-2\). If \(2\in A_{j}\), then \(w_{j}\mathord{\dagger}v_{2}\), so \(3\in A_{j}\) or else \((v_{1},v_{2},w_{j},v_{3},v_{1})\) is an incomplete cycle, so \(4\in A_{j}\) since \(\sigma_{2}+\sigma_{3}>\sigma_{4}\), but then \(2=w_{j}\cdot w_{1}\). If \(3\in A_{j}\), then \(w_{j}\mathord{\dagger}v_{3}\) and \(w_{j}\cdot v_{1}=0\), which is absurd. If \(4\in A_{j}\), then \(w_{j}\mathord{\dagger}v_{4}\) and \(w_{j}\cdot v_{1}=0\), which is absurd. If \(n>4\) and \(n\in A_{j}\), then \(A_{j}=\{n\}\), and \((v_{5}+\ldots+v_{n}+w_{j})\mathord{\dagger}v_{4}\) but \((v_{5}+\ldots+v_{n}+w_{j})\cdot v_{1}=0\), which is absurd. Conclude that \(|w_{2}|=|w_{3}|=|w_{5}|=2\), so \((w_{1};w_{2},w_{3},w_{5})\) is a claw.
Case III.3.b: \(|v_{i}|=2\) for all \(m+2\leq i\leq n\).
Suppose that \(|v_{i}|=2\) for all \(m+2\leq i\leq n\) and let \(|w_{j}|\geq 3\) with \(w_{j}\cdot-e_{1}=-1\). If \(A_{j}\cap A_{1}=\emptyset\) then either \(w_{j}=-e_{j}+d_{0}\), in which case \((v_{1},\ldots,v_{m},w_{1},w_{j},v_{1})\) is an incomplete cycle, or there exists some \(2\leq i\leq m\) such that \(w_{j}\cdot v_{i}=1\), in which case either some subpath of the path \((v_{1},\ldots,v_{m},w_{1},w_{j},v_{i},\ldots,v_{1})\) is an incomplete cycle or \(|v_{m}|\geq 3\) and \(w_{j}\cdot v_{m}=1\), in which case \((v_{m},w_{1},w_{j})\) is a heavy triple. Conclude that \(A_{j}\cap A_{1}=\{m\}\) or else either \(A_{j}\cap A_{1}=\{m,n\}\), and since \(n>m+1\), \((v_{1},\ldots,v_{m},w_{j},v_{n},v_{n-1},\ldots,v_{m+1},\ldots,v_{1})\) is an incomplete cycle, or \(2\leq w_{j}\cdot w_{1}\), which is absurd. It follows then that \(w_{j}\cdot v_{m+1}=1\). If \(v_{m+1}\sim v_{1}\), then \(v_{m+1}\prec v_{1}\) or else either \(w_{j}\pitchfork v_{1}\) and \((v_{m+1};v_{1},v_{m+2},w_{j})\) is a claw, or \(w_{j}\mathord{\dagger}v_{1}\), \(w_{j}\mathord{\dagger}v_{m+1}\), and \(v_{m+1}\mathord{\dagger}v_{1}\), which is absurd. If \(v_{m+1}\prec v_{1}\), then \(w_{j}\mathord{\dagger}v_{1}\), so \(\{0,1\}\subset A_{j}\) or else either \((v_{m+1};v_{1},v_{m+2},w_{j})\) is a claw or \((v_{1},v_{m+1},w_{j})\) is a negative triangle, but then \(m+1\in A_{j}\) since \(\sigma_{0}+\sigma_{1}+\sigma_{m}>\sigma_{m+1}\), which is absurd. Conclude that \(v_{m+1}\cdot v_{1}=0\), hence \(v_{m+1}=-d_{m+1}+d_{m}+\ldots+d_{l}\) for some \(l<m\) and \(v_{m+1}\mathord{\dagger}v_{l}\). Then either \(l\in A_{j}\), which is absurd since then \(w_{j}\cdot v_{m+1}=2\), or \((w_{j};v_{l},v_{m+1},w_{j})\) is a claw. Conclude that \(n\in A_{j}\), and, in fact, that \(A_{j}\cap A_{1}=\{n\}\) or else either \(w_{j}\cdot w_{1}=1\) and \((v_{1},\ldots,v_{m},w_{1},w_{j},v_{n-1},\ldots,v_{m+1},\ldots,v_{1})\) is an incomplete cycle, or \(2\leq w_{j}\cdot w_{1}\).
Case III.3.b.i: \(|w_{5}|=2\).
If \(|w_{5}|=2\), then \(|w_{6}|=|w_{7}|=|w_{8}|=2\) or else the subgraph induced by \(\{v_{1},\ldots,v_{n},w_{1},w_{5},\)\(w_{6},w_{7},w_{8}\}\) contains an incomplete cycle. It follows that \(w_{2}\) and \(w_{3}\) are unloaded, and that \(|w_{2}|\), \(|w_{3}|\geq 3\) or else there is a claw at \(w_{1}\), so \(\{n\}=A_{1}\cap A_{2}\cap A_{3}\), hence \((v_{m+1},w_{2},w_{3})\) is a heavy triple.
Case III.3.b.ii: \(|w_{5}|\geq 3\).
If \(|w_{5}|\geq 3\), then \(\{n\}=A_{1}\cap A_{5}\), and \(|w_{j}|\geq 3\) for some \(j\in\{2,3\}\) with \(w_{j}\cdot-e_{1}=-1\), so \(\{n\}=A_{1}\cap A_{j}\cap A_{5}\), and \((v_{m+1},w_{j},w_{5})\) is a heavy triple.
**Proposition 5.15**.: _If \(v_{1}\) is tight, then \(\sigma=(1,2,\ldots,2)\in\mathbb{Z}^{n+1}\) and one of the following holds:_
1. \(s^{*}=(2n,0,1,0,2,0,0,0)\)_,_
2. \(s^{*}=(2n,0,2,0,1,0,0,0)\)_,_
3. \(s^{*}=(2n,1,0,0,2,0,0,0)\)_,_
4. \(s^{*}=(2n,1,2,0,0,0,0,0)\)_,_
5. \(s^{*}=(2n,2,0,0,1,0,0,0)\)_, or_
6. \(s^{*}=(2n,2,1,0,0,0,0,0)\)_._
Proof.: By Lemma 5.14, if \(v_{1}\) is tight, then \(|v_{i}|=2\) for all \(2\leq i\leq n\) and \(w_{1}=-e_{1}+d_{1}+\ldots+d_{n}\). If \(|w_{j}|\geq 3\), then either \(w_{j}=-e_{j}+d_{0}\), \(A_{j}=\{n\}\), or \(w_{j}=-e_{j}+d_{0}+d_{n}\), in which case there is a sign error between \(w_{j}\) and \(v_{2}+\ldots+v_{n}\) mediated by \(v_{1}\).
Case I: \(|w_{5}|=2\).
If \(|w_{5}|=2\), then \(|w_{6}|=|w_{7}|=|w_{8}|=2\) or else the subgraph induced by \(\{v_{1},\ldots,v_{n},w_{1},w_{5},\)\(w_{6},w_{7},w_{8}\}\) contains an incomplete cycle. It follows that \(w_{2}\) and \(w_{3}\) are unloaded and \(|w_{2}|\), \(|w_{3}|\geq 3\) or else there is a claw at \(w_{1}\), so for \(\{j,k\}=\{2,3\}\), \(w_{j}=-e_{j}+d_{0}\) and \(w_{k}=-e_{k}+d_{n}\). If \(w_{4}\) is loaded, then \(w_{4}|_{E_{8}}=-e_{4}+e_{2}+e_{1}\) and either \(w_{2}=-e_{2}+d_{0}\), in which case \(n\in A_{4}\) or else \(s_{4}^{*}=|\sigma|_{1}+1\), but then \((w_{5};w_{1},w_{4},w_{6})\) is a claw or \((w_{1},w_{4},w_{5})\) is a negative triangle, or \(w_{2}=-e_{2}+d_{n}\), in which case either \(n\in A_{4}\) and either \((w_{5};w_{1},w_{4},w_{6})\) is a claw or \((w_{1},w_{4},w_{5})\) is a negative triangle, or \((v_{1},\ldots,v_{n},w_{2},w_{4},w_{5},w_{1},v_{1})\) is an incomplete cycle. Conclude that \(w_{4}\) is unloaded, hence either \(|w_{4}|=2\), \(w_{4}=-e_{4}+d_{n}\), in which case \((v_{1},\ldots,v_{n},w_{4},w_{1},v_{1})\) is an incomplete cycle, or \(w_{4}=-e_{4}+d_{0}\), in which case either \(j=2\) and there is a sign error between \(w_{4}\) and \(w_{2}\) induced by \(v_{1}\) or \(j=3\) and \((v_{1};w_{1},w_{3},w_{4})\) is a claw. Conclude that \(|w_{4}|=2\), hence \(s^{*}=(2n,1,2,0,0,0,0,0)\) or \(s^{*}=(2n,2,1,0,0,0,0,0)\).
Case II: \(w_{5}=-e_{5}+d_{0}\).
If \(w_{5}=-e_{5}+d_{0}\), then \(|w_{j}|\geq 3\) for some \(j\in\{2,3\}\) with \(w_{j}\cdot-e_{1}=-1\) or else \(|w_{2}|=|w_{3}|=2\) and \((w_{1};v_{1},w_{2},w_{3})\) is a claw. If \(w_{j}=-e_{j}+d_{0}\), then there is a sign error between \(w_{j}\) and \(w_{5}\) mediated by \(v_{1}\), so \(A_{j}=\{n\}\). If \(j=2\) and \(w_{2}\) is loaded, then \(|w_{3}|=2\) and \(w_{2}=-e_{2}+e_{4}+d_{n}\), so \((v_{1},\ldots,v_{n},w_{2},w_{3},w_{1},v_{1})\) is an incomplete cycle. Conclude that \(w_{j}=-e_{j}+d_{n}\). It follows that \(|w_{6}|=|w_{7}|=|w_{8}|=2\) or else, for \(l=\min\{k\in\{6,7,8\}\colon|w_{k}|\geq 3\}\), either \(w_{l}=-e_{l}+d_{n}\) and \((v_{1},w_{j},w_{l})\) induces a heavy triple, or \(w_{l}=-e_{l}+d_{0}\) and there is a sign error between \(w_{5}\) and \(w_{l}\) mediated by \(v_{1}\) if \(l\geq 7\) or \(l=6\) and \((v_{1};v_{2},w_{5},w_{6})\) is a claw. Then, letting \(\{j,k\}=\{2,3\}\), it follows that \(w_{k}\) is unloaded, hence \(|w_{k}|=2\), since if \(w_{k}=-e_{k}+d_{0}\) then there is a sign error between \(w_{k}\) and \(w_{5}\) mediated by \(v_{1}\), and if \(w_{k}=-e_{k}+d_{n}\), then
\((v_{1},w_{j},w_{k})\) induces a heavy triple. Suppose that \(w_{4}\) is loaded and \(j=2\); thus, \(w_{4}|_{E_{8}}=-e_{4}+e_{2}+e_{1}+e_{5}\), or else \(s_{4}^{*}\leq|\sigma|_{1}+1\), and \(0\not\in A_{4}\) or else \((w_{6};w_{4},w_{5},w_{7})\) is a claw, hence either \(w_{4}=-e_{4}+e_{2}+e_{1}+e_{5}\) and \((v_{1},\ldots,v_{n},w_{2},w_{4},w_{5},v_{1})\) is an incomplete cycle or \(w_{4}=-e_{4}+e_{2}+e_{1}+e_{5}+d_{n}\) and \((v_{1},\ldots,v_{n},w_{4},w_{1},v_{1})\) is an incomplete cycle. Suppose instead that \(w_{4}\) is loaded and \(j=3\); then \(w_{4}|_{E_{8}}=-e_{4}+e_{1}+e_{5}\), so either \(0\in A_{4}\) and \((w_{6};w_{4},w_{5},w_{7})\) is a claw or \(w_{4}=-e_{4}+e_{1}+e_{5}+d_{n}\) and \((v_{1},\ldots,v_{n},w_{4},w_{2},w_{1},v_{1})\) is an incomplete cycle. Conclude that \(w_{4}\) is not loaded. If \(w_{4}=-e_{4}+d_{0}\) then there is a sign error between \(w_{4}\) and \(w_{5}\) mediated by \(v_{1}\), and if \(w_{4}=-e_{4}+d_{n}\), then \((v_{1},\ldots,v_{n},w_{4},w_{1},v_{1})\) is an incomplete cycle. Conclude that \(|w_{4}|=2\), hence \(s^{*}=(2n,2,0,0,1,0,0,0)\) or \(s^{*}=(2n,0,2,0,1,0,0,0)\).
Case III: \(w_{5}=-e_{5}+d_{n}\).
If \(w_{5}=-e_{5}+d_{n}\), then \(|w_{j}|\geq 3\) for some \(j\in\{2,3\}\) with \(w_{j}\cdot-e_{1}=-1\) or else \(|w_{2}|=|w_{3}|=2\) and \((w_{1};v_{1},w_{2},w_{3})\) is a claw. If \(A_{j}=\{n\}\), then \((v_{1},w_{j},w_{5})\) induces a heavy triple, so \(w_{j}=-e_{j}+d_{0}\). It follows that \(|w_{6}|=|w_{7}|=|w_{8}|=2\) or else, for \(l=\min\{k\in\{6,7,8\}\colon|w_{k}|\geq 3\}\), either \(w_{l}=-e_{l}+d_{0}\) and there is a sign error between \(w_{j}\) and \(w_{l}\), or \(w_{l}=-e_{l}+d_{n}\) and either \(l=6\) and \((v_{n};v_{n-1},w_{5},w_{l})\) is a claw or \(l\geq 7\) and \((v_{1},w_{5},w_{l})\) induces a heavy triple. Letting \(\{j,k\}=\{2,3\}\), it follows that \(w_{k}\) is unloaded, so \(|w_{k}|=2\) since if \(w_{k}=-e_{k}+d_{0}\), then there is a sign error between \(w_{2}\) and \(w_{3}\) mediated by \(v_{1}\) and if \(w_{k}=-e_{k}+d_{n}\) then \((v_{1},w_{k},w_{5})\) induces a heavy triple. Suppose that \(w_{4}\) is loaded and that \(j=2\). Then \(w_{4}|_{E_{8}}=-e_{4}+e_{2}+e_{1}+e_{5}\) or else \(s_{4}^{*}\leq|\sigma|_{1}+1\), and so \(w_{4}=-e_{4}+e_{2}+e_{1}+e_{5}\) or else \(n\in A_{4}\) and \((w_{6};w_{4},w_{5},w_{8})\) is a claw, but then \((v_{1},\ldots,v_{n},w_{5},w_{4},w_{2},w_{1},v_{1})\) is an incomplete cycle. Suppose instead that \(w_{4}\) is loaded and \(j=3\); then \(w_{4}|_{E_{8}}=-e_{4}+e_{1}+e_{5}\) and \(n\in A_{4}\) or else \(s_{4}^{*}\leq|\sigma|_{1}+1\), but then \((w_{6};w_{4},w_{5},w_{7})\) is a claw. Conclude that \(w_{4}\) is unloaded, hence \(|w_{4}|=2\) or else \(w_{4}=-e_{4}+d_{n}\) and \((v_{1},\ldots,v_{n},w_{4},v_{1})\) is an incomplete cycle, or \(w_{4}=-e_{4}+d_{0}\) and either \(j=2\) and \(v_{1}\) mediates a sign error between \(w_{4}\) and \(w_{2}\) or \(j=3\) and \((v_{1};v_{2},w_{3},w_{4})\) is a claw. Hence, \(s^{*}=(2n,1,0,0,2,0,0,0)\) or \(s^{*}=(2n,0,1,0,2,0,0,0)\).
### When \(G(\mathcal{V})\) is disconnected
First we recall some basic observations about changemaker bases whose intersection graphs are disconnected.
**Lemma 5.16** (Lemma 5.1 of [11]).: _A changemaker lattice has at most two indecomposable summands. If it has two indecomposable summands, then there exists an index \(r>1\) for which \(v_{r}=-d_{r}+\sum_{i=0}^{r-1}d_{i}\), \(|v_{i}|=2\) for all \(1\leq i<r\), and \(v_{r}\) and \(v_{1}\) belong to separate summands. _
**Lemma 5.17** (Lemma 5.2 of [11]).: _All intervals in \(\mathcal{V}\) are just right. In particular, they are unbreakable. _
These two lemmas are all we will use from [11] in this section, but the reader should note that Section 5 of [11] contains a complete classification of changemaker bases whose intersection graphs are disconnected.
**Lemma 5.18**.: _If \(G(\mathcal{V})\) is disconnected, then \(|w_{1}|\geq 3\)._
Proof.: Suppose, by way of contradiction, that \(|w_{1}|=2\). Lest \((w_{1};w_{2},w_{3},w_{5})\) be a claw, we are in one of three scenarios: either \(w_{j}\sim w_{k}\) for some \(j,k\in\{2,3,5\}\) with \(w_{j}\cdot w_{1}=w_{k}\cdot w_{1}=-1\), \(w_{2}\) is loaded and \(w_{2}\cdot w_{1}=0\), or \(w_{3}\) is loaded and \(w_{3}\cdot w_{1}=0\).
Case I: \(w_{j}\sim w_{k}\) for some \(j,k\in\{2,3,5\}\) with \(w_{j}\cdot w_{1}=w_{k}\cdot w_{1}=-1\).
If \(w_{j}\) (or \(w_{k}\)) has more than one neighbor in \(\mathcal{V}\), then \(w_{j}\) has two neighbors \(v_{i_{1}}\) and \(v_{i_{2}}\), and \((v_{i_{1}},v_{i_{2}},w_{j})\) is a triangle, or else \((w_{j};v_{i_{1}},v_{i_{2}},w_{1})\) is a claw. Since every \(v_{i}\in\mathcal{V}\) is just right, if \((v_{i_{1}},v_{i_{2}},w_{j})\) is a triangle with \(i_{1}<i_{2}\), then \(|v_{i}|=2\) for all \(1\leq i\leq i_{1}\), \(r=i_{1}+1\), \(|v_{i}|=2\) for all \(r<i<i_{2}\), \(v_{i_{2}}=-d_{i_{2}}+d_{i_{2}-1}+\ldots+d_{i_{1}}\), \(w_{j}\cdot v_{i_{1}}=-1\) and \(w_{j}\cdot v_{i_{2}}=1\), or else the subgraph induced by \(\{v_{1},\ldots,v_{n},w_{j}\}\) contains a heavy triple. It follows that \(i_{2}=i_{1}+2\) and \(\{i_{1},r,i_{2}\}\subset A_{j}\), or else the subgraph induced by \(\{v_{1},\ldots,v_{n},w_{j}\}\) is connected, hence the subgraph induced by \(\{v_{1},\ldots,v_{n},w_{j},w_{k}\}\) contains an incomplete cycle. It follows that \(A_{j}=\{i_{1},\ldots,n\}\) and \(|v_{i}|=2\) for all \(i_{2}+1\leq i\leq n\). But then \(w_{k}\dagger v_{r}\) and \(w_{k}\cdot v_{i}=0\) for all \(i\in\{1,\ldots,r-1,r+1=i_{2},\ldots,n\}\) or else the subgraph induced by \(\{v_{1},\ldots,v_{n},w_{j},w_{k}\}\) contains an incomplete cycle. Conclude that \(w_{j}\) and \(w_{k}\) each have a single neighbor in \(\mathcal{V}\), respectively \(v_{i_{j}}\) and \(v_{i_{k}}\), and, moreover, \(v_{i_{j}}\) and \(v_{i_{k}}\) belong to distinct components of \(G(\mathcal{V})\). Without loss of generality, we may assume that \(A_{j}=\{m,\ldots,n\}\), \(A_{k}=\{n\}\), \(|v_{m+1}|\geq 3\) and \(v_{m+1}\cdot v_{m}=0\), and \(|v_{i}|=2\) for all \(m+1\leq i\leq n\).
Case I.1: \(j=5\).
If \(j=5\), then \(w_{6}\not\sim w_{5}\), so \(w_{6}\cdot w_{5}=0\) since \(-1\leq w_{6}\cdot w_{5}\leq|w_{5}|-3\), so \(|w_{6}|\geq 3\), or else \((w_{5};v_{m},w_{j},w_{6})\) is a claw or \((w_{j},w_{5},w_{6})\) is a heavy triple. Then, either \(A_{6}\cap A_{5}=\{m\}\), in which case \(w_{6}=-e_{6}+d_{m}\), in which case \((v_{m},w_{6},v_{m+1},\ldots,v_{n},w_{k},w_{1},w_{5},v_{m})\) is an incomplete cycle, or \(w_{6}=-e_{6}+d_{n}\), in which case \((v_{m+1},w_{k},w_{6})\) is a heavy triple.
Case I.2: \(j=3\).
If \(j=3\), then \(|w_{4}|\geq 3\) or else \((w_{3};v_{m},w_{1},w_{4})\) is a claw. Then, either \(w_{4}\) is loaded, or \(w_{4}\) is unloaded and \(w_{4}\cdot w_{3}=0\), since in this case \(-1\leq w_{4}\cdot w_{3}\leq|w_{3}|-3\).
Case I.2.a: \(w_{4}\) is loaded.
First observe that if \(w_{4}\) is loaded, then \(w_{4}\) is not tight, for then \(w_{4}\sim v_{1}\) and \(v_{r}\prec w_{4}\), thus the subgraph induced by \(\{v_{1},\ldots,v_{n},w_{1},w_{3},w_{4},w_{k}\}\) contains an incomplete cycle. It follows that \(w_{4}\) is unbreakable, so \(w_{4}\cdot w_{3}\in\{-1,0,1\}\) since \(-1\leq w_{4}\cdot w_{3}\leq|w_{3}|-2\). Suppose first that \(w_{4}\cdot w_{3}=-1\). Then \(w_{4}|_{E_{8}}=-e_{4}+e_{2}\) or else \(w_{4}\cdot w_{3}\geq 0\). Then, either \(k=2\), in which case \(n\in A_{4}\) or else \(w_{4}\cdot w_{2}=-2\), which is absurd since \(|w_{4}|\geq 4\) and \(w_{4}\) is unbreakable, but then \(w_{4}\cdot w_{3}\geq 0\), or \(k=5\), in which case \(A_{4}\subset\{1,\ldots,m\}\) or else \(n\in A_{4}\) and \(s_{4}^{*}-(s_{1}^{*}+s_{5}^{*})\geq 0\). But then, since every element of \(\mathcal{V}\) is just right, there is some \(l\leq m\) such that \(|v_{l}|\geq 3\), \(w_{4}\cdot v_{l}=-1\), \(w_{4}\cdot v_{l+1}=1\), and \(v_{l}\) and \(v_{l+1}\) belong to distinct components of \(G(\mathcal{V})\), so the subgraph induced by \(\{v_{1},\ldots,v_{n},w_{1},w_{3},w_{4},w_{5}\}\) contains an incomplete cycle. Conclude that \(w_{4}\cdot w_{3}\neq-1\). Suppose instead that \(w_{4}\cdot w_{3}=0\). We must then have \(w_{4}|_{E_{8}}=-e_{4}+e_{2}\) or else \(w_{4}\cdot-e_{3}=0\), in which case \(A_{4}\subset\{1,\ldots,m\}\) and there is an incomplete cycle as before. It follows that either \(w_{4}=-e_{4}+e_{2}+d_{m}\), in which case \((v_{m},w_{4},w_{m+1},\ldots,v_{n},w_{k},w_{1},w_{3},v_{m})\) is an incomplete cycle, or \(w_{4}=-e_{4}+e_{2}+d_{n}\), in which case \((v_{m+1},w_{4},w_{k})\) is a heavy triple. Suppose lastly that \(w_{4}\cdot w_{3}=1\). Then either \(n=m+1\) and \(w_{4}=-e_{4}+e_{2}+d_{m}+d_{m+1}\) and \((w_{3},w_{k},w_{4})\) is a heavy triple, or \(w_{4}|_{E_{8}}=-e_{4}+e_{2}+e_{1}+\ldots\), in which case either \(A_{4}=\{m\}\) and there is an incomplete cycle as before, or \(A_{4}=\{n\}\), and either \((v_{m+1},w_{k},w_{4})\) is a heavy triple or there is a claw at \(v_{n}\). Conclude that \(w_{4}\) is not loaded.
Case I.2.b: \(w_{4}\) is not loaded.
If \(w_{4}\) is not loaded, then we arrive at a contradiction in the same way as in Case I.1, _mutatis mutandis_.
Case I.3: \(j=2\).
If \(j=2\), then \(k=3\) or \(k=5\).
Case I.3.a: \(k=5\).
If \(k=5\), then \(n\in A_{6}\) or else \((w_{5};v_{n},w_{1},w_{6})\) is a claw, but then \(w_{6}\cdot w_{2}\geq 1\), so \(w_{6}\dagger w_{2}\), and therefore \(w_{6}\cdot v_{m}=-1\) or else \((w_{2};v_{m},w_{1},w_{6})\) is a claw, but then \(m\in A_{6}\), so \(2\leq w_{6}\cdot w_{2}\), which is absurd.
Case I.3.b: \(k=3\).
If \(k=3\), then \(|w_{4}|\geq 3\) or else \((w_{3};v_{n},w_{1},w_{4})\) is a claw. Then, either \(w_{4}\) is loaded, or \(w_{4}\) is unloaded and \(w_{4}\cdot w_{3}=0\), since in this case \(-1\leq w_{4}\cdot w_{3}\leq|w_{3}|-3\).
Case I.3.b.i: \(w_{4}\) is loaded.
First observe that if \(w_{4}\) is loaded, then \(w_{4}\) is not tight, for then \(w_{4}\sim v_{1}\) and \(v_{r}\prec w_{4}\), thus the subgraph induced by \(\{v_{1},\ldots,v_{n},w_{1},w_{2},w_{4},w_{3}\}\) contains an incomplete cycle. It follows, as before, that \(w_{4}\cdot w_{3}\in\{-1,0,1\}\). If \(w_{4}\cdot w_{3}=-1\), then, since \(n\not\in A_{4}\), either \(w_{4}\cdot w_{2}=-2\), which is absurd, or \(w_{4}\cdot w_{2}=-1\), in which case \(w_{4}=-e_{4}+e_{2}+d_{m}\) and \((w_{2},w_{3},w_{4})\) is a negative triangle. If \(w_{4}\cdot w_{3}=0\), then either \(w_{4}=-e_{4}+e_{2}+d_{n}\), in which case there is a claw at \(v_{n}\), or \(w_{4}|_{E_{8}}=-e_{4}+e_{2}+e_{1}+e_{5}+\ldots\), so \(A_{4}\subset\{1,\ldots,m\}\) and we run into the same problem as in Case II.1. If \(w_{4}\cdot w_{3}=1\), then \(w_{4}|_{E_{8}}=-e_{4}+e_{2}+e_{1}+e_{5}+\ldots\) and \(A_{4}=\{n\}\), but then \((v_{m+1},w_{3},w_{4})\) is a heavy triple. Conclude that \(w_{4}\) is not loaded.
Case I.3.b.ii: \(w_{4}\) is unloaded.
If \(w_{4}\) is not loaded, then we arrive at a contradiction in the same way as in Case I.3.a, _mutatis mutandis_.
Note that in the remaining cases, i.e. where either \(w_{2}\) is loaded and \(w_{2}\cdot w_{1}=0\) or \(w_{3}\) is loaded and \(w_{3}\cdot w_{1}=0\), the resolution of Case I above implies that \(w_{j}\not\sim w_{k}\) for \(\{j,k\}\subset\{2,3,5\}\) with \(w_{j}\cdot w_{1}=w_{k}\cdot w_{1}=-1\). In particular this means that either \(|w_{j}|=2\) or \(|w_{k}|=2\) for \(w_{j}\), \(w_{k}\) unloaded, or else, without loss of generality, we may assume that \(G(\{v_{1},\ldots,v_{n},w_{j}\})\) is connected, hence the subgraph induced by \(\{v_{1},\ldots,v_{n},w_{1},w_{j},w_{k}\}\) contains an incomplete cycle. This further implies that \(|w_{5}|=2\) since \(|w_{2}|\), \(|w_{3}|\geq 3\) if \(w_{2}\) or \(w_{3}\) is loaded, and we arrive at a contradiction: since \(w_{2}\) or \(w_{3}\) is loaded and \(|w_{5}|=2\), \(|w_{l}|\geq 3\) for some \(l\in\{6,7,8\}\), so, taking \(l\) to be minimal, the subgraph induced by \(\{v_{1},\ldots,v_{n},w_{1},w_{2},w_{3},w_{5},\ldots,w_{l}\}\) contains an incomplete cycle.
**Lemma 5.19**.: _If \(G(\mathcal{V})\) is disconnected, then \(w_{1}\) is not tight._
Proof.: If \(G(\mathcal{V})\) is disconnected and \(w_{1}\) is tight, then \(|w_{5}|\geq 3\) or else \((w_{1};v_{1},v_{r},w_{5})\) is a claw. We must furthermore have \(|w_{5}|=3\); if \(|w_{5}|=4\), then \(1\leq w_{5}\cdot w_{1}<|w_{5}|-2\), and therefore \(|w_{5}|=4\) and \(w_{5}\dagger w_{1}\), but then \(w_{5}\cdot v_{r}=1\) or else the subgraph induced by \(\{v_{1},\ldots,v_{n},w_{1},w_{5}\}\) contains an incomplete cycle, or \((v_{1},w_{1},w_{5})\) is a negative triangle, or \((v_{r},w_{1},w_{5})\) is a negative triangle, so we must have \(w_{5}\cdot v_{r-1}=-1\) or else \(|w_{5}|\geq 5\), a contradiction. But then \(w_{5}=-e_{5}+d_{k}\) and
\(w_{5}\cdot w_{1}=0\), so \(|v_{i}|=2\) for all \(1\leq i\leq k\), but then \(r=k+1\) and \((v_{1},\ldots,v_{r-1},w_{5},v_{r},w_{1},v_{1})\) is an incomplete cycle.
**Lemma 5.20**.: _If \(G(\mathcal{V})\) is disconnected, then \(w_{j}\) is not tight for \(j\in\{2,3,5\}\)._
Proof.: If \(G(\mathcal{V})\) is disconnected and \(w_{j}\) is tight for some \(j\in\{2,3,5\}\) with \(w_{j}\cdot-e_{1}=-1\), then we arrive at a contradiction as in the Lemma 5.19. Suppose now that \(j\in\{2,3\}\), \(w_{j}\cdot-e_{1}=0\), and \(w_{j}\) is tight. Since by Lemma 5.18 we must have \(|w_{1}|\geq 3\) it follows that \(w_{1}\pitchfork w_{2}\). We must furthermore have either \(w_{1}=-e_{1}+d_{n}\) or \(w_{1}=-e_{1}+d_{m}+\ldots+d_{n}\) for some \(m\) with \(|v_{m+1}|\geq 3\) and \(|v_{i}|=2\) for all \(m+2\leq i\leq n\) or else the subgraph induced by \(\{v_{1},\ldots,v_{n},w_{1},w_{j}\}\) contains an incomplete cycle. We must furthermore have \(|w_{5}|\geq 3\) or else \(|w_{5}|=2\) and, since \(w_{j}\) is loaded, letting \(l\) be minimal in \(\{6,7,8\}\) with \(|w_{l}|\geq 3\), the subgraph induced by \(\{v_{1},\ldots,v_{n},w_{1},w_{j},w_{5},\ldots,w_{l}\}\) contains an incomplete cycle, so \(w_{5}\pitchfork w_{2}\). Now, fix \(m\in\{1,\ldots,n\}\) such that \(|v_{m+1}|\geq 3\) and \(|v_{i}|=2\) for all \(m+2\leq i\leq n\). We must have either \(w_{1}=-e_{1}+d_{m}+\ldots+d_{n}\) and \(w_{5}=-e_{5}+d_{n}\) or \(w_{1}=-e_{1}+d_{n}\) and \(w_{5}=-e_{5}+d_{m}+\ldots+d_{n}\). Letting \(\{j,k\}=\{2,3\}\) with \(w_{j}\) loaded, we must have that either \(A_{k}=A_{5}=\{n\}\), in which case \((v_{m+1},w_{k},w_{5})\) is a heavy triple, or \(A_{k}=A_{1}=\{n\}\), in which case there is a claw at \(v_{n}\), or \(A_{k}\cap A_{1}=\emptyset\) and either \(n=m+1\), \(w_{3}=-e_{3}+d_{m}\), and \((v_{m+1},w_{1},w_{3})\) is a negative triangle, or the subgraph induced by \(\{v_{1},\ldots,v_{n},w_{1},w_{j},w_{k},w_{5}\}\) contains an incomplete cycle.
**Lemma 5.21**.: _If \(G(\mathcal{V})\) is disconnected, then there are two \(j,k\in\{2,3,5\}\) with \(w_{j}\cdot-e_{1}=w_{k}\cdot-e_{1}=-1\) and \(|w_{j}|\), \(|w_{k}|\geq 3\)._
Proof.: Since \(|w_{1}|\geq 3\), we know that \(w_{1}\) has at least one neighbor in \(\mathcal{V}\). If there are not two distinct \(j,k\in\{2,3,5\}\) with \(w_{j}\cdot e_{1}=w_{k}\cdot e_{1}=-1\) and \(|w_{j}|\), \(|w_{k}|\geq 3\), then \(|w_{5}|=2\), \(w_{j}\) is loaded, and \(|w_{k}|\geq 3\) for \(\{j,k\}=\{2,3\}\), or else there is a claw at \(w_{1}\). Furthermore, \(w_{1}\) has a unique neighbor in \(\mathcal{V}\), or else there is a claw at \(w_{1}\). It follows that either \(w_{1}=-e_{1}+d_{n}\) or \(w_{1}=-e_{1}+d_{m}+\ldots+d_{n}\), \(|v_{m+1}|\geq 3\), and \(|v_{i}|=2\) for all \(m+2\leq i\leq n\). Since \(w_{j}\) is loaded, we must have \(|w_{l}|\geq 3\) for some \(l\in\{6,7,8\}\), hence \(|w_{6}|\geq 3\) or else, letting \(l\) be minimal, either \(w_{l}\dagger w_{1}\) and \((w_{1},w_{5},w_{6},\ldots,w_{l},w_{1})\) is an incomplete cycle, or \(A_{l}\cap A_{1}=\emptyset\), so \(G(\{v_{1},\ldots,v_{n},w_{l}\})\) is connected, hence the subgraph induced by \(\{v_{1},\ldots,v_{n},w_{1},w_{5},w_{6},\ldots,w_{l}\}\) contains an incomplete cycle. It follows furthermore that \(w_{6}\dagger w_{1}\) and the unique neighbor of \(w_{6}\) in \(\mathcal{V}\) lies in a different component than the neighbor of \(w_{1}\) in \(\mathcal{V}\), or else the subgraph induced by \(\{v_{1},\ldots,v_{n},w_{1},w_{5},w_{l}\}\) contains an incomplete cycle, hence either \(w_{1}=-e_{1}+d_{m}+\ldots+d_{n}\) and \(w_{6}=-e_{6}+d_{n}\) or \(w_{1}=-e_{1}+d_{n}\) and \(w_{6}=-e_{6}+d_{m}+\ldots+d_{n}\). Now, either \(A_{k}\cap A_{1}=A_{3}\cap A_{6}=\emptyset\), in which case the subgraph induced by \(\{v_{1},\ldots,v_{n},w_{1},w_{k},w_{6}\}\) contains an incomplete cycle, or \(A_{k}\cap A_{1}=\{m\}\), in which case \((v_{m},w_{k},v_{m+1},\ldots,v_{n},w_{6},w_{5},w_{1},v_{m})\) is an incomplete cycle, or \(A_{k}\cap A_{1}=\{n\}\), and either \(w_{1}=-e_{1}+d_{n}\) and there is a claw at \(v_{n}\), or \(w_{6}=-e_{6}+d_{n}\) and \((v_{m+1},w_{k},w_{6})\) is a heavy triple, or \(|A_{k}\cap A_{1}|=2\), in which case \(n=m+1\), \(w_{k}=-e_{k}+d_{m}+d_{m+1}\), \(w_{1}=-e_{1}+d_{m}+d_{m+1}\), and \(w_{6}=-e_{6}+d_{m+1}\), in which case \((w_{1},w_{k},w_{6})\) is a heavy triple.
**Lemma 5.22**.: _Let \(j,k\in\{2,3,5\}\). If \(w_{j}\cdot-e_{1}=w_{k}\cdot-e_{1}=-1\), and neither \(w_{j}\) nor \(w_{k}\) is tight, then \(A_{j}\cap A_{k}=\emptyset\)._
Proof.: If \(A_{j}\cap A_{k}\neq\emptyset\), then \(|A_{j}\cap A_{k}|=1\) and \(w_{j}\dagger w_{k}\). Without loss of generality, and lest there be a heavy triple, either \(w_{j}=-e_{j}+d_{r-1}+\ldots+d_{n}\), \(w_{k}=-e_{k}+d_{r-1}\), and \(|v_{i}|=2\) for
all \(r+1\leq i\leq n\), or \(w_{j}=-e_{j}+d_{m}+\ldots+d_{n}\), \(w_{k}=-e_{k}+d_{n}\), \(|v_{m+1}|\geq 3\), \(|v_{i}|=2\) for all \(m+2\leq i\leq n\).
Case I: \(w_{j}=-e_{j}+d_{r-1}+\ldots+d_{n}\), \(w_{k}=-e_{k}+d_{r-1}\), and \(|v_{i}|=2\) for all \(r+1\leq i\leq n\).
If we are in Case I, then either \(w_{1}=-e_{1}+d_{r-1}\), in which case \((v_{r-1},w_{1},v_{r},w_{k},v_{r-1})\) is an incomplete cycle, or \(n=r\) and \(w_{1}=-e_{1}+d_{r-1}+d_{r}\), in which case \((v_{r-1},w_{1},w_{j},w_{k},v_{r-1})\) is an incomplete cycle, or \(n=r=2\) and \(w_{1}=-e_{1}+d_{0}+d_{1}+d_{2}\), in which case \((v_{r},w_{1},w_{j},w_{k},v_{r})\) is an incomplete cycle.
Case II: \(w_{j}=-e_{j}+d_{m}+\ldots+d_{n}\), \(w_{k}=-e_{k}+d_{n}\), \(|v_{m+1}|\geq 3\), \(|v_{i}|=2\) for all \(m+2\leq i\leq n\).
If we are in Case II, then either \(w_{1}=-e_{1}+d_{m}\), in which case \((v_{m+1},w_{1},w_{k})\) is a heavy triple, or \(w_{1}=-e_{1}+d_{n}\), in which case \(n=r\) or else there is a claw at \(v_{n}\), or \(n=r\) and \(w_{1}=-e_{1}+d_{r-1}+d_{r}\), or else \(A_{1}\cap\{m,\ldots,n\}=\emptyset\), in which case the subgraph induced by \(\{v_{1},\ldots,v_{n},w_{1},w_{j},w_{k}\}\) contains an incomplete cycle.
Case II.1: \(n=r\) and \(w_{1}=-e_{1}+d_{r}\).
Suppose that \(n=r\) and \(w_{1}=-e_{1}+d_{r}\). Let \(l\in\{1,\ldots,8\}\setminus\{1,j,k\}\). If \(|A_{l}|\geq 1\), then \(A_{l}=\{r-1,r\}\) or else \(w_{l}{\dagger}v_{r}\) and either \((v_{r};w_{1},w_{k},w_{l})\) is a claw or \((v_{r},w_{1},w_{k})\) is a heavy triple, or \(w_{l}\) is tight and \((v_{1},\ldots,v_{r-1},w_{j},w_{k},v_{r},w_{l},v_{1})\) is an incomplete cycle. It follows that either \(|w_{5}|=2\), or \(j=5\), or \(k=5\), or else \(5\not\in\{j,k\}\) and \(w_{5}\cdot w_{j}=2\), which is absurd.
Case II.1.a: \(|w_{5}|=2\).
If \(|w_{5}|=2\), then \(|w_{6}|=|w_{7}|=|w_{8}|=2\) or else the subgraph induced by \(\{v_{1},\ldots,v_{n},w_{1},w_{2}\), \(w_{3},w_{5},w_{6},w_{7},w_{8}\}\) contains an incomplete cycle, thus \(w_{2}\) and \(w_{3}\) are unloaded and \(\{2,3\}=\{j,k\}\). It follows that \(|w_{4}|\geq 3\) or else there is a claw at \(w_{3}\). Suppose that \(w_{4}\) is unloaded. Then \(w_{4}=-e_{4}+d_{r-1}+d_{r}\), so \(j=3\) or else \(w_{4}\cdot w_{2}=2\), but then \((v_{r},w_{1},w_{4},w_{2},v_{r})\) is an incomplete cycle. If \(w_{4}\) is loaded, then \(w_{4}|_{E_{8}}=-e_{4}+e_{2}+e_{1}+d_{r-1}+d_{r}\), but then \((w_{5};w_{1},w_{4},w_{6})\) is a claw.
Case II.1.b: \(j=5\).
If \(j=5\), then \(w_{5}=-e_{5}+d_{r-1}+d_{r}\), so \(|w_{6}|\geq 3\) or else \((w_{5};v_{r-1},w_{k},w_{6})\) is a claw, but then \(w_{6}=-e_{6}+d_{r-1}\) or else either \(n\in A_{6}\) and \((w_{1},w_{k},w_{6})\) is a heavy triple or \(n\not\in A_{6}\) and \(2\leq w_{6}\cdot v_{r}\), which is absurd, but then \((v_{r-1},w_{6},v_{r},w_{k},w_{5},v_{r-1})\) is an incomplete cycle.
Case II.1.c: \(k=5\).
If \(k=5\), then \(w_{5}=-e_{5}+d_{r}\), so \(|w_{6}|\geq 3\) or else \((w_{5};v_{r-1},w_{j},w_{6})\) is a claw, but then \(n\in A_{6}\) or else the subgraph induced by \(\{v_{1},\ldots,v_{n},w_{j},w_{5},w_{6}\}\) contains an incomplete cycle, so \(w_{6}=-e_{6}+d_{n}\) or else \(2\leq w_{6}\cdot w_{j}\), but then \((v_{r},w_{1},w_{6})\) is a heavy triple.
Case II.2: \(n=r\) and \(w_{1}=-e_{1}+d_{r-1}+d_{r}\).
Suppose that \(n=r\) and \(w_{1}=-e_{1}+d_{r-1}+d_{r}\). Let \(l\in\{1,\ldots,8\}\setminus\{1,j,k\}\). If \(A_{l}=\{r\}\), then \(w_{l}{\dagger}v_{r}\) and either \(w_{l}{\dagger}w_{k}\), in which case \((v_{r},w_{k},w_{l})\) is a heavy triple, or \(w_{l}{\dagger}w_{j}\), in which case \((v_{r},w_{l},w_{j},w_{k},v_{r})\) is an incomplete cycle, or \(w_{l}{\dagger}w_{1}\), in which case \((v_{r},w_{l},w_{1},w_{j},w_{k},v_{r})\) is an incomplete cycle. If \(A_{l}=\{r-1\}\), then \((v_{r-1},w_{l},v_{r},w_{k},w_{j},w_{1},v_{r-1})\) is an incomplete cycle. If \(A_{l}=\{r-1,r\}\), then \(w_{l}{\dagger}v_{r-1}\) and \((w_{1},w_{j},w_{l})\) is a heavy triple. If \(r=2\) and \(A_{l}=\{0,1,2\}\), then \(w_{l}{\dagger}v_{2}\) and either \(w_{l}{\dagger}w_{1}\) or \(w_{l}{\dagger}w_{j}\), and in either case the subgraph induced
by \(\{v_{1},\ldots,v_{n},w_{1},w_{j},w_{k},w_{l}\}\) contains an incomplete cycle. Conclude that \(|A_{l}|=0\) for \(l\not\in\{1,j,k\}\).
Case II.2.a: \(|w_{5}|=2\).
If \(|w_{5}|=2\), then \(\{2,3\}=\{j,k\}\), and \(w_{2}\) and \(w_{3}\) are unloaded. It follows that there is a claw at \(w_{3}\) unless \(|w_{4}|\geq 3\), in which case \(j=2\) and \(w_{4}=-e_{4}+e_{2}+e_{1}\), or else either \(|A_{4}|\geq 1\) or \(w_{4}\) is loaded and \(s_{4}^{*}\leq|\sigma|_{1}+1\), which is absurd, but then \((w_{1},w_{2},w_{4})\) is a heavy triple.
Case II.2.b: \(j=5\).
If \(j=5\), then \((w_{5};v_{r-1},w_{k},w_{6})\) is a claw.
Case II.2.c: \(k=5\).
If \(k=5\), then \((w_{5};v_{r},w_{j},w_{6})\) is a claw.
**Proposition 5.23**.: _If \(G(\mathcal{V})\) has two components then \(\sigma=(1,1,2,\ldots,2)\in\mathbb{Z}^{n+1}\) and either \(s^{*}=(2n-1,1,2,0,0,0,0,0)\) or \(s^{*}=(2n-1,1,0,0,2,0,0,0)\)._
Proof.: Note first that Lemma 5.22 ensures that \(n\not\in A_{j}\cap A_{k}\), so, without loss of generality, let us assume that \(n\not\in A_{j}\), hence there is some \(m\in A_{j}\) such that \(m+1\not\in A_{j}\). Notice that if \(m\in A_{j}\) but \(m+1\not\in A_{j}\), then \(w_{j}\cdot v_{m+1}\geq 1\) and \(|v_{m+1}|\geq 3\), so \(w_{j}\cdot v_{m+1}=1\). Then either \(w_{j}\cdot v_{m}=-1\), or \(v_{m}=-d_{m}+d_{m-1}+\ldots+d_{l}\) for some \(l<m-1\) with \(|v_{l+1}|\geq 3\) and \(|v_{i}|=2\) for all \(l+2\leq i\leq m-1\), \(v_{m+1}=-d_{m+1}+d_{m}+d_{m-1}\), and \(A_{j}\cap\{l,\ldots,m+1\}=\{l,m\}\), in which case \(w_{j}\cdot v_{l+1}=1\), so \((v_{l+1},v_{m+1},w_{j})\) is a heavy triple. It follows that either \(A_{j}=\{m\}\) or \(n\in A_{j}\). Supposing that \(A_{j}=\{m\}\), note that since \(G(\{v_{1},\ldots,v_{n},w_{j}\})\) is connected, \(w_{k}\) has only a single neighbor in \(\mathcal{V}\) or else there is a heavy triple or an incomplete cycle, hence \(n\in A_{k}\). We must therefore have that \(A_{j}=\{m\}\) and either \(A_{k}=\{n\}\) or \(A_{k}=\{m^{\prime},\ldots,n\}\) for some \(m<m^{\prime}\) with \(|v_{m^{\prime}+1}|\geq 3\) and \(|v_{i}|=2\) for all \(m^{\prime}+2\leq i\leq n\). Since \(G(\{v_{1},\ldots,v_{n},w_{j},w_{k}\})\) is connected, we must have \(m\in A_{1}\), \(|A_{1}\cap A_{k}|=1\), and \(w_{1}\) is just right, or else \(G(\{v_{1},\ldots,v_{n},w_{1},w_{j},w_{k}\})\) contains a heavy triple or an incomplete cycle. It follows that \(A_{k}=\{n\}\) and therefore \(w_{1}=-e_{1}+d_{m}+\ldots+d_{n}\), but then \(r=2\) or else \((v_{r-1};v_{r-2},w_{1},w_{j})\) is a claw. Furthermore, we must have \(|v_{i}|=2\) for all \(3\leq i\leq n\). Hence, \(\sigma=(1,1,2,\ldots,2)\in\mathbb{Z}^{n+1}\).
It follows that \(|A_{l}|=0\) for all \(l\in\{1,\ldots,8\}\setminus\{1,j,k\}\) or else either \(w_{l}\) is tight and \((v_{1},w_{j},v_{2},w_{l},v_{1})\) is an incomplete cycle or \(w_{l}\sim v_{1}\), in which case either \((w_{1},w_{j},w_{l})\) is a heavy triple or \((v_{1};w_{1},w_{j},w_{l})\) is a claw, or \(w_{l}\dagger v_{2}\), in which case either \((w_{j},w_{k},w_{l})\) is a heavy triple or \(n=2\) and \((v_{2};w_{j},w_{k},w_{l})\) is a claw or \(n>2\) and \((v_{2};v_{3},w_{j},w_{l})\) is a claw, or \(w_{l}\sim v_{n}\) and either \((w_{j},w_{k},w_{l})\) is a heavy triple or \(n=2\) and \((v_{2};w_{j},w_{k},w_{l})\) is a claw or \(n>2\) and \((v_{2};v_{3},w_{k},w_{l})\) is a claw. It follows that \(j\neq 5\) or else \((w_{5};v_{1},v_{2},w_{6})\) is a claw. If \(j=3\), then \(w_{4}\) is loaded and \(w_{4}=-e_{4}+e_{1}+e_{k}\), in which case \((w_{1},w_{k},w_{4})\) is a heavy triple, or else \((w_{3};v_{1},v_{2},w_{4})\) is a claw. Conclude that \(j=2\) and that \(w_{4}\) is unloaded, hence \(|w_{4}|=2\), in the same breath. Therefore, \(s^{*}=(2n-1,1,2,0,0,0,0,0)\) or \(s^{*}=(2n-1,1,0,0,1,0,0,0)\).
## 6. Main Results
### How to read the tables in this section
In [24] and [25], Tange provided a tabulation of simple knots in lens spaces admitting integer surgeries to \(\mathcal{P}\). There is one lens space on Tange's list of surgeries, \(L(191,157)\), that our tables do not account for. The linear lattice
\(\Lambda(191,157)\) has rank 8, and embeds in \(E_{8}\oplus\mathbb{Z}\) as the orthogonal complement to the \(E_{8}\)-changemaker \(\tau=(s,(1))\) with \(s^{*}=(1,1,1,1,0,0,0,0)\), which falls out of the purview of our analysis of \(E_{8}\)-changemakers in \(E_{8}\oplus\mathbb{Z}^{n+1}\) with \(n\geq 2\). With the exception of the lone simple knot in the lens space \(L(191,157)\), each of Tange's 19 families is a family of simple knots \(K_{j}\subset L(p_{j},q_{j})\) representing the class \(k_{j}\in\mathbb{Z}/p_{j}\mathbb{Z}\cong H_{1}(L(p_{j},q_{j}))\), parametrized by
\(j\in\mathbb{Z}\setminus\{0\}\), where \(p_{j}\) is a quadratic polynomial in \(j\), \(k_{j}\) is a linear polynomial in \(j\), and \(q_{j}\) is the residue of \(-k_{j}^{2}\bmod p_{j}\). After characterizing linear \(E_{8}\)-changemaker lattices, it seems even more surprising to the author that the Tange knot admitting a surgery to \(L(191,157)\) does not fit into an infinite family of knots like all the other Tange knots.
From the perspective of the linear lattices bounded by the lens spaces accounted for on Tange's list, each of Tange's families naturally splits into two subfamilies: a \(-\)-family and a \(+\)-family, depending on whether \(j\leq-1\) or \(j\geq 1\), respectively. We have recorded the naming convention according to [25], \(p_{j}\), \(k_{j}\), the \(E_{8}\)-changemaker whose orthogonal complement is isomorphic to \(\Lambda(p_{j},q_{j})\), and the proposition in which the \(E_{8}\)-changemaker makes an appearance in Tables 1-4, which are separated by changemaker tails.
The attentive reader will observe that these 38 families of \(E_{8}\)-changemakers do not account for all forty-four families of lattices identified in Section 5. In addition to these families, there are 6 families of orthogonal sums of pairs of linear lattices which embed as the orthogonal complements to \(E_{8}\)-changemakers \(\tau=(s,(1,\ldots,1))\in E_{8}\oplus\mathbb{Z}^{n+1}\) given by \(s^{*}=(n+1,1,0,0,0,0,0,0)\) and \(s^{*}=(n+1,0,1,0,1,0,0,0)\), which correspond to surgeries on cables of the exceptional fiber of order \(-2\), \(s^{*}=(n+1,0,1,0,0,0,0,0)\) and \(s^{*}=(n+1,1,0,0,1,0,0,0)\), which correspond to surgeries on cables of the exceptional fiber of order \(3\), and \(s^{*}=(n+1,0,0,0,1,0,0,0)\) and \(s^{*}=(n+1,1,1,0,0,0,0,0)\), which correspond to surgeries on cables of the exceptional fiber of order \(5\). Thus, we have accounted for all linear lattices and orthogonal pairs of linear lattices which are also \(E_{8}\)-changemaker lattices.
### Proofs of the main theorems
Proof of Theorem 1.18.: Note that none of the decomposable lattices identified in Section 5 have \(\Lambda(2,1)\) summands. It follows that if \(\Lambda(p,q)\oplus\Lambda(2,1)\cong(\tau)^{\perp}\) for some \(E_{8}\)-changemaker \(\tau\in E_{8}\oplus\mathbb{Z}^{n+1}\), then \(n\in\{-1,0,1\}\). In fact, we cannot have \(n=1\), or else the isomorphism \(\Lambda(p,q)\oplus\Lambda(2,1)\) must take \(\Lambda(2,1)\) to \((d_{1}-d_{0})\), but then \(A_{j}=\{0,1\}\) and \(w_{j}\) is just right for all
\(j\in\{1,\ldots,8\}\) with \(|w_{j}|\geq 3\), so \(|w_{j}|\geq 3\) for at most one \(j\in\{2,3,5\}\), so \((w_{1};w_{2},w_{3},w_{5})\) is a claw. Furthermore, we cannot have \(n=0\), since then \(\sigma=(1)\) and for all \(j\in\{1,\ldots,8\}\) either \(|w_{j}|\geq 3\) or \(w_{j}\sim w_{j^{\prime}}\) for some \(j^{\prime}\) such that \(e_{j}\) and \(e_{j^{\prime}}\) are adjacent in the \(E_{8}\) Dynkin diagram. A computer search of the 1003 non-zero \(E_{8}\)-changemakers in \(E_{8}\) reveals only two whose orthogonal complements are isomorphic to \(\Lambda(p,q)\oplus\Lambda(2,1)\): either \(s^{*}=(1,0,0,1,0,0,0,0)\), in which case \(\Lambda(p,q)\) is given by the Gram matrix
\[\begin{bmatrix}2&-1&0&0&0&0\\ -1&4&-1&0&0&0\\ 0&-1&2&-1&0&0\\ 0&0&-1&2&-1&0\\ 0&0&0&-1&2&-1\\ 0&0&0&0&-1&2\end{bmatrix},\]
and one readily certifies that \((p,q)=(27,16)\), or \(s^{*}=(0,0,1,0,0,0,0,0)\), in which case \(\Lambda(p,q)\) is given by the Gram matrix
\[\begin{bmatrix}2&-1&0&0&0&0\\ -1&2&-1&0&0&0\\ 0&-1&2&-1&0&0\\ 0&0&-1&2&-1&0\\ 0&0&0&-1&2&-1\\ 0&0&0&0&-1&2\end{bmatrix},\]
and one readily certifies that \((p,q)=(7,6)\). Note that
\[0=\max\{\langle\mathfrak{c},\tau\rangle\colon\langle\mathfrak{c},\mathfrak{c} \rangle=\operatorname{rk}(E_{8})-4d(\mathcal{P})=0\},\]
so \(2g(K)=2p\) for any knot \(K\subset\mathcal{P}\) with \(K(2p)\cong L(p,q)\#L(2,1)\).
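(For the certifications above: the diagonals of the two Gram matrices are the coefficients of negative continued fraction expansions, and one computes \(2-1/(4-1/(2-1/(2-1/(2-1/2))))=27/16\) and \(2-1/(2-1/(2-1/(2-1/(2-1/2))))=7/6\), so the displayed lattices are \(\Lambda(27,16)\) and \(\Lambda(7,6)\), of discriminant \(27\) and \(7\) respectively.)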
Proof of Theorem 1.19.: Every family of linear lattices admitting an \(E_{8}\)-changemaker embedding, i.e. those families captured in Propositions 5.1 (cf. Table 1), 5.2 (cf. Table 1), 5.4 (cf. Table 1), 5.10 (cf. Table 2), 5.15 (cf. Table 3), 5.23 (cf. Table 4), is bounded by a lens space realized by surgery on a Tange knot in \(\mathcal{P}\).
### Characterizing \(C\) and \(E\)
Theorem 1.18 alone does not give us Theorem 1.1; rather, it puts us in a position to leverage the following theorems, due, in order, to Baker, Rasmussen (building on work of Ni), and Hedden, to great effect. We are grateful to John Baldwin for pointing out as much.
**Theorem 6.1** (Theorem 1.1 of [1]).: _Let \(K\subset L(p,q)\) be a knot whose exterior \(M_{K}\) fibers over \(S^{1}\) with fiber surface \(F\) where \(\partial F\) is connected, and let \(g(K)=g(F)\). If \(g(K)\leq(p+1)/4\), then \(K\) is one-bridge with respect to a Heegaard torus of \(L(p,q)\). _
The following theorem follows from the combination of Theorem 4.3 and Proposition 4.5 of [22] (cf. [19]).
**Theorem 6.2**.: _Let \(K\subset L(p,q)\) be a knot that admits a surgery to an integer homology sphere \(L\)-space. If \(g(K)<(p+1)/2\), then \(\text{rk }\widehat{HFK}(L(p,q),K)=p\). _
For each homology class \(\alpha\in H_{1}(L(p,q);\mathbb{Z})\), the simple knot \(K_{\alpha}\) representing \(\alpha\) satisfies \(\operatorname{rk}\widehat{HFK}(L(p,q),K)=p\); those familiar with knot Floer homology can readily see this by observing that the knot \(K_{\alpha}\) admits a genus one doubly pointed Heegaard diagram obtained by simply placing an extra basepoint in the standard genus one singly-pointed Heegaard diagram \(\mathcal{H}\) of \(L(p,q)\) such that \(\widehat{CF}(\mathcal{H})\cong\widehat{HF}(L(p,q))\). Hedden observed that simple knots are the only one-bridge knots in lens spaces that are _Floer simple_--that is, they have minimal rank knot Floer homology as they satisfy \(\operatorname{rk}\widehat{HFK}(L(p,q),K)=p\).
**Theorem 6.3** (Theorem 1.9 of [13]).: _If \(K\subset L(p,q)\) is primitive, \(g(K)\leq\frac{p+1}{4}\), and rk \(\widehat{HFK}(L(p,q),K)=p\), then \(K\) is simple. _
We will, in addition, make use of one more lemma, a variation of [22, Lemma 2.2]. For any knot \(K\) in a rational homology \(3\)-sphere \(Y\), the meridian of \(K\), \(\mu\), is still well-defined, though since \(K\) may not be null-homologous in \(Y\) there is no canonical choice of a Seifert longitude. However, we fix a longitude of \(K\) and denote it by \(\lambda\), oriented so that \([\lambda]\cdot[\mu]=1\) with respect to the orientation on \(\partial M_{K}\) induced by \(Y\). Recall that, by the half-lives-half-dies principle, there is a unique slope \(\gamma\) on \(\partial M_{K}\) such that some integer multiple of it bounds in \(M_{K}\). Let \(\alpha=a\cdot[\mu]+p\cdot[\lambda]\in H_{1}(\partial M_{K};\mathbb{Z})\) be a primitive homology class represented by such a curve. The number \(p\) is well-defined--it is the _order_ of \([K]\) in \(H_{1}(Y;\mathbb{Z})\). Replacing \([\lambda]\) by \([\lambda]+[\mu]\) has the effect of replacing \(a\) by \(a-p\), so the value \(a\bmod p\) is an invariant of \(K\)--and so is the quantity \(a/p\bmod 1\), referred to as the _self-linking number_ of \(K\) and denoted \(K\cdot K\). The self-linking number of \(K\) depends only on the class \([K]\in H_{1}(Y;\mathbb{Z})\), and \([nK]\cdot[nK]\equiv n^{2}[K]\cdot[K]\bmod 1\).
A _distance_\(n\) surgery on \(K\) is a manifold \(Y^{\prime}\) obtained by Dehn filling \(M_{K}\) along a curve representing \(k\cdot[\mu]+n\cdot[\lambda]\) for some \(k\in\mathbb{Z}\). More geometrically, \(Y^{\prime}\) is obtained by distance \(n\) surgery on \(K\subset Y\) if the curve \(\beta\) along which \(M_{K}\) is Dehn filled to obtain \(Y^{\prime}\) has minimum geometric intersection number \(n\) with \(\mu\). From this perspective, the relation of being obtained by distance \(n\) surgery is clearly symmetric, as \(\beta\) becomes the meridian of the surgery dual knot \(K^{\prime}\subset Y^{\prime}\) when \(M_{K}\) is Dehn filled along \(\beta\).
**Lemma 6.4** ((cf. Lemma 2.2 of [22])).: _Let \(K\subset Y\) be a knot in a rational homology sphere with \(H_{1}(Y;\mathbb{Z})\cong\mathbb{Z}/p\mathbb{Z}\). Then K has a distance \(n\) surgery \(Y^{\prime}\) which is an integer homology sphere if and only if \([K]\) generates \(H_{1}(Y)\) and its self-linking number \(a/p\) is congruent to \(\pm n^{\prime}/p\bmod 1\), where \(nn^{\prime}\equiv 1\bmod p\)._
Sketch of proof.: The proof of this lemma follows the proof of [22, Lemma 2.2] up until the final paragraph which discusses the map \(A:H_{1}(T^{2};\mathbb{Z})\to H_{1}(S^{1}\times D^{2};\mathbb{Z})\oplus H_{1}(M _{K};\mathbb{Z})\) in the Mayer-Vietoris sequence associated to the decomposition \(Y^{\prime}=(S^{1}\times D^{2})\cup_{T^{2}}M_{K}\). Since \(H_{2}(Y^{\prime};\mathbb{Z})\) and \(H_{1}(Y^{\prime};\mathbb{Z})\) are both assumed to be \((0)\), \(A\) is an isomorphism. The map \(H_{1}(T^{2};\mathbb{Z})\to H_{1}(S^{1}\times D^{2};\mathbb{Z})\cong\mathbb{Z}\) is given by \(x\mapsto x\cdot[\beta]\), where \([\beta]=k\cdot[\mu]+n\cdot[\lambda]\). On the other hand, the map \(H_{1}(T^{2};\mathbb{Z})\to H_{1}(M_{K};\mathbb{Z})\cong\mathbb{Z}\) is given by \(x\mapsto x\cdot[\alpha]\), where \([\alpha]=a\cdot[\mu]+p\cdot[\lambda]\). Therefore, with respect to the basis \(([\mu],[\lambda])\) on \(H_{1}(T^{2};\mathbb{Z})\), the matrix for the map \(A\) is given by
\[M_{A}=\begin{bmatrix}-n&k\\ -p&a\end{bmatrix}. \tag{23}\]
In order for \(A\) to be an isomorphism we must be able to choose a \(k\) so that \(\det(M_{A})=\pm 1\), which is possible if and only if \(na\equiv\pm 1\ \mathrm{mod}\ p\).
Proof of Theorem 1.1 and Theorem 1.2.: For any knot \(\kappa\subset\mathcal{P}\) with \(\kappa(p/2)\cong L(p,q)\), the cabling construction yields a knot \(K\)--which is isotopic to a simple closed curve representing \(p\cdot[\mu]+2\cdot[\lambda]\in H_{1}(\partial M_{K},\mathbb{Z})\), and which we call the \((p,2)\)_-cable_ of \(\kappa\)--such that \(K(2p)=\kappa(p/2)\#L(2,p)\cong L(p,q)\#L(2,1)\). Crucially, since we must have \(2p\geq 2g(K)\), it follows that every knot \(K\) with \(K(2p)\cong L(p,q)\#L(2,1)\) arises as the cable of some knot \(\kappa\) with a non-integer surgery either to \(L(p,q)\) or \(L(2,1)\) by Matignon-Sayari [17]. We may furthermore deduce that \(\kappa(p/2)\cong L(p,q)\), since in order for \(\kappa(2/p)\) (\(p\geq 3\)) to be an L-space we must have \(g(\kappa)=0\), and so \(\kappa\) bounds a disk in \(\mathcal{P}\) and is therefore unknotted in a 3-ball; in this case \(\kappa(2/p)\cong\mathcal{P}\#L(2,1)\) and is thus never a lens space.
In the case that \(K\) is the \((p,2)\)-cable of \(\kappa\), an elementary calculation shows that
\[2g(K)-1=2p+2(2g(\kappa)-1)-p. \tag{24}\]
On the other hand, if \(K(2p)=L(2,1)\#L(p,q)\), then \((p,q)=(7,6)\) or \((27,16)\) and \(g(K)=p\) by Theorem 1.18. Then, (24) reads
\[g(\kappa)=(p+1)/4, \tag{25}\]
and since \(\kappa(p/2)\cong L(p,q)\), by Theorems 6.1, 6.2, and 6.3 and the preceding paragraph, it follows that the surgery dual to \(\kappa\), \(\kappa^{*}\subset L(p,q)\), is simple.
By Lemma 6.4, we must furthermore have that the self-linking number of \(\kappa^{*}\) is \(\pm 2^{\prime}/p\ \mathrm{mod}\ 1\). Explicit computation shows that \(\pm 2\ \mathrm{mod}\ 7\) are the only solutions to \(x^{2}\equiv\pm 4\ \mathrm{mod}\ 7\) and that \(\pm 11\ \mathrm{mod}\ 27\) are the only solutions to \(x^{2}\equiv\pm 14\ \mathrm{mod}\ 27\), so, up to orientation reversal, \(E^{*}\) is the unique simple knot in \(L(7,6)\) with a distance \(2\) integer homology sphere surgery, and \(C^{*}\) is the unique simple knot in \(L(27,16)\) with a distance \(2\) integer homology sphere surgery. Therefore, if \(K(2p)\cong L(2,1)\#L(p,q)\), \(K\) is either the \((7,2)\)-cable of \(E\) or the \((27,2)\)-cable of \(C\).
|
2310.08014 | On Automorphisms of Complex $b^k$-Manifolds | The $b$-calculus of Melrose is a tool for studying structures on a smooth
manifold with a first order degeneracy at a given hypersurface. In this
framework, Mendoza defined complex $b$-manifolds. In the spirit of work of
Scott, we extend Mendoza's definition to the case of higher-order degeneracies,
introducing the notion of a complex $b^k$-manifold for $k$ a positive integer.
We then investigate the local and global automorphisms of complex
$b^k$-manifolds. We also propose $b^k$-analogues for some classical spaces of
holomorphic functions. | Tatyana Barron, Michael Francis | 2023-10-12T03:29:31Z | http://arxiv.org/abs/2310.08014v1 | # On Automorphisms of Complex \(b^{k}\)-Manifolds
###### Abstract
The \(b\)-calculus of Melrose is a tool for studying structures on a smooth manifold with a first order degeneracy at a given hypersurface. In this framework, Mendoza defined complex \(b\)-manifolds. In the spirit of work of Scott, we extend Mendoza's definition to the case of higher-order degeneracies, introducing the notion of a complex \(b^{k}\)-manifold for \(k\) a positive integer. We then investigate the local and global automorphisms of complex \(b^{k}\)-manifolds. We also propose \(b^{k}\)-analogues for some classical spaces of holomorphic functions.
**Mathematics Subject Classification (2010)**: Primary 32Q99; Secondary 32M18
**Keywords:**\(b\)-geometry, complex \(b\)-manifold, automorphism group
## 1 Introduction
Throughout, \(M\) is a smooth manifold and \(Z\subseteq M\) is a closed hypersurface. We write \(C^{\infty}(M)\) for the ring of smooth, \(\mathbb{R}\)-valued functions on \(M\) and \(\mathfrak{X}(M)\) for the \(C^{\infty}(M)\)-module of (smooth) vector fields on \(M\).
Melrose [11] introduced \(b\)-calculus as an organizational framework for the study of differential operators on \(M\) with a first order degeneracy along \(Z\). His formalism has significant applications to index theory on open manifolds.
It is interesting to study "\(b\) versions" of classical geometries in which the defining data of a given geometry suffers a degeneracy along \(Z\). Various authors have considered \(b\)-symplectic manifolds, also called log-symplectic manifolds ([5], [7], [6], [8], [10]). Mendoza [12] defined complex \(b\)-manifolds and studied the cohomology of the \(b\)-Dolbeault complex.
Scott [14] generalized \(b\)-calculus by introducing \(b^{k}\)-manifolds, where \(k\) encodes the order of degeneracy along the hypersurface \(Z\subseteq M\). From the point of view of Scott's theory, ordinary \(b\)-geometry is the case \(k=1\).
In this article, we extend Mendoza's definition in the spirit of Scott's work by defining what is a complex \(b^{k}\)-manifold for \(k>1\). We then restrict attention to the (real) two-dimensional case and investigate the local and global automorphisms of complex \(b^{k}\)-manifolds. We briefly discuss candidates for function spaces one can attach to a complex \(b^{k}\)-manifold.
## 2 Almost-injective Lie algebroids
Let \(M\) be a smooth manifold. The categories of smooth vector bundles on \(M\) and finitely-generated, projective \(C^{\infty}(M)\)-modules are equivalent by Serre-Swan duality [15]. Under this equivalence, injective maps of \(C^{\infty}(M)\)-modules correspond exactly to vector bundle maps that are injective over a dense set. One may therefore regard the following data as equivalent:
* a projective \(C^{\infty}(M)\)-module \(\mathcal{F}\) of vector fields on \(M\),
* a vector bundle \(A\) over \(M\) together with a bundle map \(\rho:A\to TM\) that is injective over a dense (open) subset of \(M\).
One may also deal with the corresponding sheaf of local vector fields:
\[\mathcal{F}_{U}\coloneqq C^{\infty}(U)\{X|_{U}:X\in\mathcal{F}\}=\rho(C^{ \infty}(U;A)),\text{ for }U\subseteq M\text{ open}. \tag{2.1}\]
If \(\mathcal{F}\) is closed under Lie bracket, then \(A\) is naturally a Lie algebroid with bracket coming from the identification \(C^{\infty}(M;A)\cong\mathcal{F}\). Lie algebroids whose anchor map is injective over a dense set are called **almost-injective** and appear in the study of Stefan-Sussmann singular foliations. See, for example, [3] and 5.2.5 in [2]. In this language, the following data are equivalent:
* an involutive, projective \(C^{\infty}(M)\)-module \(\mathcal{F}\) of vector fields on \(M\),
* an almost-injective Lie algebroid \(A=(A,\rho,[\cdot,\cdot])\) over \(M\).
## 3 The \(b\)-tangent bundle
Two superficially different approaches to \(b\)-calculus are in use. The original approach of [11] uses a manifold with boundary. It is also common to work on a manifold without boundary that is equipped with a preferred hypersurface [1]. We shall adopt the second approach here.
**Definition 3.1**.: A **b-manifold** is a smooth manifold \(M\) together with a closed hypersurface \(Z\subseteq M\) that we refer to as the **singular locus**. A **b-vector field** is a vector field on \(M\) tangent to \(Z\). The \(C^{\infty}(M)\)-module of \(b\)-vector fields is denoted \({}^{b}\mathfrak{X}(M)\).
**Example 3.2**.: If \(M=\mathbb{R}^{2}\) and \(Z=\{0\}\times\mathbb{R}\), then \({}^{b}\mathfrak{X}(\mathbb{R}^{2})=\langle x\partial_{x},\partial_{y}\rangle\), the \(C^{\infty}(\mathbb{R}^{2})\)-module generated by \(x\partial_{x}\) and \(\partial_{y}\).
Above, \({}^{b}\mathfrak{X}(\mathbb{R}^{2})\) is a free \(C^{\infty}(\mathbb{R}^{2})\)-module. In general, for any \(b\)-manifold \(M\), \({}^{b}\mathfrak{X}(M)\) is a projective \(C^{\infty}(M)\)-module closed under Lie bracket. Thus, referring to Section 2, the following definition makes sense.
**Definition 3.3**.: The **b-tangent bundle** of a \(b\)-manifold \(M\) is the almost injective Lie algebroid \({}^{b}TM\) associated to the involutive, projective module \({}^{b}\mathfrak{X}(M)\).
## 4 \(b^{k}\)-Tangent bundles
Fix an integer \(k\geq 2\). By analogy with \(b\)-vector fields (\(k=1\)), a \(b^{k}\)-vector field on a manifold \(M\) with given closed hypersurface \(Z\) ought to be a vector field that is "tangent to \(Z\) to \(k\)th order". It is clear how to best proceed in the model Euclidean example where \(M=\mathbb{R}^{n}\), \(Z=\{0\}\times\mathbb{R}^{n-1}\).
**Definition 4.1**.: Fix a positive integer \(k\). The **standard module of \(\mathbf{b^{k}}\)-vector fields** on \(\mathbb{R}^{n}\) is \(\langle x_{1}^{k}\partial_{x_{1}},\partial_{x_{2}},\dots,\partial_{x_{n}}\rangle\). That is, the \(C^{\infty}(\mathbb{R}^{n})\)-module of vector fields on \(\mathbb{R}^{n}\) whose first component vanishes to \(k\)th order on \(\{0\}\times\mathbb{R}^{n-1}\).
Unfortunately, the naive notion of "\(k\)th order tangency" used above is not compatible with smooth coordinate changes.
**Example 4.2**.: The diffeomorphism \(\theta:\mathbb{R}^{2}\to\mathbb{R}^{2}\), \(\theta(x,y)=(e^{y}x,y)\) satisfies \(\theta_{*}(\partial_{y})=x\partial_{x}+\partial_{y}\). Thus, although \(\theta\) preserves the \(y\)-axis, pushforward by \(\theta\) does not preserve the standard module of \(b^{2}\)-vector fields.
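The chain-rule computation behind this pushforward is easily verified by machine as well; for instance, the following sympy check (purely illustrative) confirms that the \(x^{\prime}\)-component of \(\theta_{*}(\partial_{y})\) equals the target coordinate \(x^{\prime}=e^{y}x\):

```python
import sympy as sp

x, y = sp.symbols("x y", real=True)

# theta(x, y) = (e^y * x, y).  The components of theta_*(d/dy) in the target
# coordinates (x', y') are obtained by applying d/dy to the components of theta.
Xp, Yp = sp.exp(y) * x, y
comp_xp = sp.diff(Xp, y)          # x'-component of theta_*(d/dy)
comp_yp = sp.diff(Yp, y)          # y'-component of theta_*(d/dy)

# comp_xp equals e^y * x, i.e. the target coordinate x', so
# theta_*(d/dy) = x' d/dx' + d/dy'.
print(sp.simplify(comp_xp - Xp), comp_yp)   # prints: 0 1
```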
The above example shows it does not make sense to talk about _the_ module of \(b^{k}\)-vector fields. Instead, we prescribe as external data a projective module of vector fields on \(M\) that is tangent to \(Z\) and locally \(C^{\infty}\)-conjugate to the model example of Definition 4.1.
**Definition 4.3**.: Fix a positive integer \(k\). A **module of \(\mathbf{b^{k}}\)-vector fields** for an \(n\)-dimensional, smooth manifold \(M\) with a given closed hypersurface \(Z\subseteq M\) is a projective \(C^{\infty}(M)\)-module \(\mathcal{F}\subseteq\mathfrak{X}(M)\) such that every \(p\in M\) admits an open neighbourhood \(U\) and a diffeomorphism \(\theta:U\to\mathbb{R}^{n}\) satisfying:
\[\theta_{*}(\mathcal{F}_{U})=\begin{cases}\langle x_{1}^{k}\partial_{x_{1}}, \partial_{x_{2}},\dots,\partial_{x_{n}}\rangle&\text{ if }p\in Z\\ \mathfrak{X}(\mathbb{R}^{n})&\text{ if }p\notin Z\end{cases}\]
(see (2.1) for notation). Such an \(\mathcal{F}\) is involutive; its corresponding almost-injective Lie algebroid is called the \(\mathbf{b^{k}}\)**-tangent bundle** and denoted \({}^{b^{k}}TM\).
**Remark 4.4**.: It is tempting to call a pair \((M,\mathcal{F})\) consisting of a manifold with a given module of \(b^{k}\)-vector fields a \(b^{k}\)_-manifold_. However, we shall not do this because Scott [14] has already defined a \(b^{k}\)-manifold to be a triple \((M,Z,j_{Z})\), where \(j_{Z}\) is the \((k-1)\)-jet along \(Z\) of a defining function for \(Z\) (plus an orientation on \(M\) and \(Z\)). Such a jet induces a module of \(b^{k}\)-vector fields in our sense, namely (Definition 2.5 in [14]) the collection \(\mathcal{F}\) of vector fields \(X\) such that \(Xf\) vanishes to \(k\)th order on \(Z\), for any defining function \(f\) representing \(j_{Z}\). On the other hand, the assignment \(j_{Z}\mapsto\mathcal{F}\) is neither injective nor surjective. Regarding injectivity, taking \(k=2\), \(M=\mathbb{R}^{2}\), \(Z=\{0\}\times\mathbb{R}\), both \(j_{Z}=x\) and \(j_{Z}=2x\) induce \(\mathcal{F}=\langle x^{2}\partial_{x},\partial_{y}\rangle\). Regarding surjectivity, the module of \(b^{2}\)-vector fields \(\langle x^{2}\partial_{x},\partial_{y}+x\partial_{x}\rangle\) for \(M=\mathbb{R}\times S^{1}\), \(Z=\{0\}\times S^{1}\), for which local trivializations can be obtained from \(\theta(x,y)=(e^{y}x,y)\), cannot arise from the jet of a defining function. This can be seen from Chapter 5 of [4] where a holonomy invariant is attached to a module of \(b^{k}\)-vector fields (there called a _transversely order \(k\) singular foliation_).
## 5 \(b\)-Complex and \(b^{k}\)-complex manifolds
Recall that a complex structure on \(M\) is given by a subbundle \(T^{0,1}M\) of the complexified tangent bundle satisfying: (i) \(\mathbb{C}TM=\overline{T^{0,1}M}\oplus T^{0,1}M\), and (ii) \(T^{0,1}M\) is involutive. This bundle-theoretic definition is equivalent to the definition via holomorphic charts by the Newlander-Nirenberg theorem: every point in \(M\) admits local coordinates \((x_{1},y_{1},\ldots,x_{n},y_{n})\) in which the complex derivatives \(\partial_{\overline{z}_{j}}=\frac{1}{2}(\partial_{x_{j}}+i\partial_{y_{j}})\), \(j=1,\ldots,n\) form a local frame for \(T^{0,1}M\). To define complex \(b^{k}\)-manifolds, we simply replace \(TM\) by \({}^{b^{k}}TM\). The case \(k=1\) was given in [12].
**Definition 5.1**.: Fix a positive integer \(k\). A **complex \(\mathbf{b^{k}}\)-structure** for a smooth manifold \(M\) with a given closed hypersurface \(Z\subseteq M\) and module of \(b^{k}\)-vector fields \(\mathcal{F}\) is a (complex) subbundle \({}^{b^{k}}T^{0,1}M\) of the complexified \(b^{k}\)-tangent bundle satisfying: (i) \(\mathbb{C}(^{b^{k}}TM)=\overline{{}^{b^{k}}T^{0,1}M}\oplus{}^{b^{k}}T^{0,1}M\), and (ii) \({}^{b^{k}}T^{0,1}M\) is involutive. It is equivalent to work with the corresponding bracket-invariant \(C^{\infty}(M,\mathbb{C})\)-module of complex vector fields \(\mathcal{F}^{0,1}\subseteq\mathbb{C}\mathcal{F}\) and, actually, it is the pair \((M,\mathcal{F}^{0,1})\) that we call a **complex \(\mathbf{b^{k}}\)-manifold**.
**Remark 5.2**.: The module \(\mathcal{F}^{0,1}\) recovers all the other data at hand. For example, taking real parts recovers \(\mathcal{F}\), and hence \({}^{b^{k}}TM\). The singular set for the anchor map is the hypersurface \(Z\). We define complex \(b^{k}\)-manifolds as pairs \((M,\mathcal{F}^{0,1})\) in order to make the definition below simple.
**Definition 5.3**.: An **isomorphism**\(\theta:(M,\mathcal{E}^{0,1})\to(N,\mathcal{F}^{0,1})\) of two complex \(b^{k}\)-manifolds is a diffeomorphism \(\theta:M\to N\) satisfying \(\theta_{*}(\mathcal{E}^{0,1})=\mathcal{F}^{0,1}\).
**Remark 5.4**.: The \(b^{k}\)-tangent bundle of a complex \(b^{k}\)-manifold \(M\) is isomorphic to the usual tangent bundle away from the singular locus \(Z\), so \(M\setminus Z\) is an ordinary complex manifold. Moreover, complex \(b^{k}\)-manifold isomorphisms restrict to complex manifold isomorphisms away from the singular loci.
**Remark 5.5**.: Nothing prevents one from speaking about involutive almost complex structures on arbitrary Lie algebroids. This is done in [9] and [13], for example. On the other hand, at this level of generality, it is not clear what the formal integrability condition of involutivity might imply about the existence of a special local normal forms in the spirit of Newlander-Nirenberg. The case of complex \(b\)-manifolds is discussed in the next section.
## 6 Remark on the \(b\)-Newlander-Nirenberg theorem
Ideally, complex \(b^{k}\)-manifolds should have a single local model depending only on \(k\) and the dimension, thus allowing for an equivalent definition in terms of appropriate "\(b^{k}\)-holomorphic charts". This holds for ordinary complex manifolds (if one likes, the case \(k=0\)) by the Newlander-Nirenberg theorem. The \(k=1\) case was partially resolved in Section 5 of [12]; complex \(b\)-manifolds do not have "formal local invariants" at the boundary. Recently (results to appear elsewhere), the authors have established the full "no local invariants" result in the
\(k=1\) case and expect the same to hold for \(k\geq 2\). For this reason, we focus on a single model case in our study of automorphisms below.
**Definition 6.1**.: The **standard \(\mathbf{b^{k}}\)-complex plane** is the complex \(b^{k}\)-manifold \({}^{b^{k}}\mathbb{R}^{2}\coloneqq(\mathbb{R}^{2},\langle L_{k}\rangle)\), where \(\langle L_{k}\rangle\) is the \(C^{\infty}(\mathbb{R}^{2},\mathbb{C})\)-module singly-generated by
\[L_{k}\coloneqq x^{k}\partial_{x}+i\partial_{y}. \tag{6.1}\]
As stated above, our justification for focusing on these model cases is the following result whose proof will appear elsewhere.
**Theorem 6.2**.: _Let \(M\) be a two-dimensional complex \(b\)-manifold with singular locus \(Z\). Then, for every \(p\in Z\), there is a neighbourhood \(U\subseteq M\) of \(p\) and an open set \(V\subseteq{}^{b}\mathbb{R}^{2}\) such that \(U\cong V\) as complex \(b\)-manifolds._
## 7 Automorphisms of complex \(b^{k}\)-manifolds
We describe the global automorphisms of the standard \(b^{k}\)-complex planes by relating complex \(b^{k}\)-manifold automorphisms to standard complex automorphisms (see Remark 5.4). We first note the following simple fact.
**Proposition 7.1**.: _Fix a positive integer \(k\). Then,_
\[\operatorname{Aut}({}^{b^{k}}\mathbb{R}^{2})=\{\theta\in\operatorname{Diff}( \mathbb{R}^{2}):\theta_{*}(L_{k})\sim L_{k}\}.\]
_Here, \(L_{k}=x^{k}\partial_{x}+i\partial_{y}\) and \(\sim\) denotes equality modulo multiplication by a nowhere-vanishing smooth function. Every automorphism \(\theta\) of \({}^{b^{k}}\mathbb{R}^{2}\) satisfies \(\theta_{*}(Z)=Z\) where \(Z=\{0\}\times\mathbb{R}\)._
Proof.: From Definition 5.3, an isomorphism \({}^{b^{k}}\mathbb{R}^{2}\rightarrow{}^{b^{k}}\mathbb{R}^{2}\) is a diffeomorphism preserving the \(C^{\infty}(\mathbb{R}^{2},\mathbb{C})\)-module singly-generated by \(L_{k}\), hence the first statement. For the second statement, see Remark 5.2.
**Example 7.2**.: For every positive integer \(k\), \((x,y)\mapsto(-x,(-1)^{k+1}y)\) is an order-two automorphism of \({}^{b^{k}}\mathbb{R}^{2}\) that maps the (open) left half-plane \(\mathbb{R}^{2}_{-}\) onto the right half-plane \(\mathbb{R}^{2}_{+}\).
**Definition 7.3**.: We write \(\operatorname{Aut}_{+}({}^{b^{k}}\mathbb{R}^{2})\) for the normal subgroup of \(\operatorname{Aut}({}^{b^{k}}\mathbb{R}^{2})\) consisting of automorphisms which preserve the right half-plane \(\mathbb{R}^{2}_{+}\).
As the full automorphism group \(\operatorname{Aut}({}^{b^{k}}\mathbb{R}^{2})\) is the semidirect product of \(\operatorname{Aut}_{+}({}^{b^{k}}\mathbb{R}^{2})\) by the order-two automorphism of Example 7.2, we are content to obtain a description of \(\operatorname{Aut}_{+}({}^{b^{k}}\mathbb{R}^{2})\).
**Theorem 7.4**.: _The group \(\operatorname{Aut}_{+}({}^{b}\mathbb{R}^{2})\) is isomorphic to \(\mathbb{R}\times\mathbb{R}\). More precisely, it is generated by the following 1-parameter subgroups:_
\[\begin{array}{llll}\text{vertical translations:}&(x,y)\mapsto(x,y+t)&t\in \mathbb{R}\\ \text{horizontal scalings:}&(x,y)\mapsto(e^{t}x,y)&t\in\mathbb{R}.\end{array}\]
Proof.: Let \(\mathbb{C}\) have its usual complex structure. The diffeomorphism
\[\theta:\mathbb{R}^{2}_{+} \to\mathbb{R}^{2} \theta(x,y)=\log x+iy\]
satisfies \(\theta_{*}(x\partial_{x}+i\partial_{y})=\partial_{x}+i\partial_{y}\); it is an isomorphism from \((\mathbb{R}^{2}_{+},\langle L_{1}\rangle)\) to \(\mathbb{C}=(\mathbb{R}^{2},\langle L_{0}\rangle)\). From Remark 5.4, the restriction of any \(\alpha\in\operatorname{Aut}_{+}(^{b}\mathbb{R}^{2})\) to \(\mathbb{R}^{2}_{+}\) pushes forward through \(\theta\) to an automorphism of the ordinary complex structure of \(\mathbb{R}^{2}=\mathbb{C}\). The automorphisms of \(\mathbb{C}\) are the group \(\mathbb{C}^{*}\ltimes\mathbb{C}\) of affine transformations. Pulled back through \(\theta\), the translation automorphisms of \(\mathbb{C}\) become the automorphisms \((x,y)\mapsto(e^{s}x,y+t)\) of \((\mathbb{R}^{2}_{+},\langle L_{1}\rangle)\), for \(s,t\in\mathbb{R}\). These extend (by the same formula) to automorphisms of \({}^{b}\mathbb{R}^{2}\). Moreover, these extensions can be seen to be unique by considering the analogous identification of the left half-plane \((\mathbb{R}^{2}_{-},\langle L_{1}\rangle)\) with \(\mathbb{C}\).
On the other hand, the scaling symmetries of \(\mathbb{C}\) pull back through \(\theta\) to the automorphisms \((x,y)\mapsto(e^{s\log x-ty},t\log x+sy)\) of \((\mathbb{R}^{2}_{+},\langle L_{1}\rangle)\), for \(s+it\in\mathbb{C}^{*}\). Such maps do not extend continuously to \(\mathbb{R}^{2}\) unless \(t=0\) and \(s>0\). In the latter case, one obtains the automorphisms \((x,y)\mapsto(x^{s},sy)\), \(s>0\) of \((\mathbb{R}^{2}_{+},\langle L_{1}\rangle)\) which can be extended continuously to \(\mathbb{R}^{2}\), but not to diffeomorphisms (excepting the trivial case \(s=1\)).
**Theorem 7.5**.: _If \(k\geq 2\) is an integer, the group \(\operatorname{Aut}_{+}(^{b^{k}}\mathbb{R}^{2})\) is isomorphic to the "\(ax+b\) group" \(\mathbb{R}_{+}\ltimes\mathbb{R}\). More precisely, it is generated by the following 1-parameter subgroups:_
\[\text{vertical translations:} (x,y) \mapsto(x,y+t) t\in\mathbb{R}\] \[\text{hyperbolic transformations:} (x,y) \mapsto(e^{-\frac{t}{k-1}}x,e^{t}y) t\in\mathbb{R}.\]
Proof.: Let \(\mathbb{H}\subseteq\mathbb{C}\) denote the upper half complex plane \(\operatorname{Im}(z)>0\) with its standard complex structure. The diffeomorphism
\[\theta:\mathbb{R}^{2}_{+} \to\mathbb{H} \theta(x,y)=y+\tfrac{1}{(k-1)x^{k-1}}i\]
satisfies \(\theta_{*}(x^{k}\partial_{x}+i\partial_{y})=-\partial_{y}+i\partial_{x}\sim \partial_{x}+i\partial_{y}\); it is an isomorphism from \((\mathbb{R}^{2}_{+},\langle L_{k}\rangle)\) to \(\mathbb{H}=(\mathbb{H},\langle L_{0}\rangle)\). Referring again to Remark 5.4, every element of \(\operatorname{Aut}_{+}(\mathbb{R}^{2},\langle L_{k}\rangle)\) determines, by conjugation through \(\theta\), an automorphism of \(\mathbb{H}\). The usual group of complex automorphisms of \(\mathbb{H}\) is \(\operatorname{PSL}(2,\mathbb{R})\), acting as fractional linear transformations. Using the KAN decomposition for \(\operatorname{SL}(2,\mathbb{R})\), \(\operatorname{Aut}(\mathbb{H})\) is the product of the following three one-parameter subgroups:
\[K =\{z\mapsto\tfrac{z\cos t-\sin t}{z\sin t+\cos t}:t\in\mathbb{R}\} \text{(elliptic)}\] \[A =\{z\mapsto e^{t}z:t\in\mathbb{R}\} \text{(hyperbolic)}\] \[N =\{z\mapsto z+t:t\in\mathbb{R}\} \text{(parabolic)}.\]
Pulled back through \(\theta\), the latter two groups of automorphisms of \(\mathbb{H}\) become the following groups of automorphisms of \((\mathbb{R}^{2}_{+},\langle L_{k}\rangle)\):
\[A^{\prime}=\{(x,y)\mapsto(e^{-\frac{t}{k-1}}x,e^{t}y):t\in\mathbb{R}\},\qquad \quad N^{\prime}=\{(x,y)\mapsto(x,y+t):t\in\mathbb{R}\}.\]
These groups of automorphisms extend (by the same formulas) to automorphisms of \({}^{b^{k}}\mathbb{R}^{2}\). Their extensions are also unique (by considering their action on the left half-plane as well). On the other hand, the automorphisms of \((\mathbb{R}^{2}_{+},\langle L_{k}\rangle)\) arising as the pullback through \(\theta\) of \(z\mapsto\frac{z\cos t-\sin t}{z\sin t+\cos t}\) for \(t\) not an integer multiple of \(\pi\) do not extend continuously to \(\mathbb{R}^{2}\). Indeed, an elementary computation shows that, if the imaginary part of \(z\) goes to \(+\infty\) and the real part of \(z\) is held fixed, then \(\frac{z\cos t-\sin t}{z\sin t+\cos t}\to\cot(t)\). It follows, if \(\alpha_{t}\) is the corresponding automorphism of \((\mathbb{R}^{2},\langle L_{k}\rangle)\), that \(\alpha_{t}(x,y)\) approaches \((+\infty,\cot(t))\) as \(x\to 0^{+}\) and \(y\) is held fixed. In particular, \(\alpha_{t}\) does not admit a continuous extension to \(\mathbb{R}^{2}\).
**Remark 7.6**.: We note that the action of \(\operatorname{Aut}_{+}({}^{b^{k}}\mathbb{R}^{2})\) on the singular locus \(Z=\{0\}\times\mathbb{R}\) is faithful for \(k\geq 2\), but not for \(k=1\).
We conclude this section with an example of a family of local automorphisms of \({}^{b}\mathbb{R}^{2}\) that do not extend to global ones.
**Example 7.7**.: Let \(\Omega\subseteq{}^{b}\mathbb{R}^{2}\) be the strip \(0<y<\pi\). The function \(u:\mathbb{R}^{2}\to\mathbb{C}\) given by \(u(x,y)=xe^{iy}\) maps \(\Omega\backslash(\{0\}\times\mathbb{R})\) diffeomorphically onto \(\mathbb{C}\backslash\mathbb{R}\) and satisfies \(u_{*}(x\partial_{x}+i\partial_{y})=\overline{z}\partial_{\overline{z}}\). Thus, \(u_{*}(xe^{-iy}(x\partial_{x}+i\partial_{y}))=\overline{z}^{2}\partial_{\overline{z}}\) and, after taking real parts,
\[u_{*}(x^{2}\cos y\,\partial_{x}+x\sin y\,\partial_{y})=(x^{2}-y^{2})\partial_{x}+2xy\partial_{y}.\]
The vector field on the right is the generator of the \(1\)-parameter group of Möbius transformations \(z\mapsto\frac{z}{1-tz}\), \(t\in\mathbb{R}\), which is complete when considered as a flow on \(\mathbb{C}\setminus\mathbb{R}\) (or, of course, the Riemann sphere). Pulling back through \(u\), one checks that the flow of \(x^{2}\cos y\,\partial_{x}+x\sin y\,\partial_{y}\) defines a \(1\)-parameter group of complex \(b\)-manifold automorphisms of \((\Omega,\langle L_{1}\rangle)\).
## 8 Function spaces
A \(\mathbf{b}\)**-holomorphic function** on \({}^{b}\mathbb{R}^{2}\) is a smooth function \(f:\mathbb{R}^{2}\to\mathbb{C}\) (perhaps only locally defined) satisfying \(L_{1}f=0\) where \(L_{1}=x\partial_{x}+i\partial_{y}\).
**Example 8.1**.: The function \(u:{}^{b}\mathbb{R}^{2}\to\mathbb{C}\), \(u(x,y)=xe^{iy}\) is \(b\)-holomorphic. More generally, \(h\circ u\) is \(b\)-holomorphic near \((0,0)\) for any holomorphic function \(h\) defined near \(0\). There exist nonanalytic \(b\)-holomorphic functions defined near \((0,0)\) as well; \(f(x,y)=\exp(\frac{-1}{u(x,y)})\) for \(x>0\), \(f(x,y)=0\) for \(x\leq 0\) defines a \(b\)-holomorphic function on \(\{(x,y)\in{}^{b}\mathbb{R}^{2}:-\frac{\pi}{4}<y<\frac{\pi}{4}\}\).
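That \(L_{1}u=0\), and that \(L_{1}\exp(-1/u)=0\) on the half-plane \(x>0\), can also be checked symbolically; for instance, in sympy:

```python
import sympy as sp

x, y = sp.symbols("x y", positive=True)
L1 = lambda F: x * sp.diff(F, x) + sp.I * sp.diff(F, y)   # L1 = x d/dx + i d/dy

u = x * sp.exp(sp.I * y)
print(sp.simplify(L1(u)))                 # 0: u is b-holomorphic
print(sp.simplify(L1(sp.exp(-1 / u))))    # 0: so is exp(-1/u) on {x > 0}
```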
It is interesting to contemplate potential "\(b\) analogues" for classical spaces of holomorphic functions such as Segal-Bargmann space, (weighted) Bergman space, Hardy space, etc. Let us define ad hoc \(\mathbf{b}\)**-Segal-Bargmann space** to be the space of \(b\)-holomorphic functions on \({}^{b}\mathbb{R}^{2}\) whose restriction to the right half-plane \(\mathbb{R}^{2}_{+}\) pushes forward through the isomorphism \((x,y)\mapsto(\log x,y):(\mathbb{R}^{2}_{+},\langle L_{1}\rangle)\to\mathbb{C}\) used earlier to an element of the classical Segal-Bargmann space (entire functions \(f\) on \(\mathbb{C}\) satisfying \(\int_{\mathbb{C}}|f(z)|^{2}e^{-|z|^{2}}\ dA<\infty\)).
**Example 8.2**.: The function \(u(x,y)=xe^{iy}\) belongs to the \(b\)-Segal-Bargmann space (it pushes forward to the exponential function). Considering powers of \(u\), we conclude the \(b\)-Segal-Bargmann space is infinite-dimensional. |
2303.04716 | An FPGA-Based Semi-Automated Traffic Control System Using Verilog HDL | Traffic Congestion is one of the severe problems in heavily populated
countries like Bangladesh where Automated Traffic Control System needs to be
implemented. An FPGA-based Semi-automated system is introduced in this paper
including a completely new feature "Safe State" to avoid sudden unwanted
collision. Here we used sequential encoding which has made the program much
simpler and so that easy to control and modify. The experimental result showed
the automated change in traffic lights according to the specified timing
sequences which would be able to conduct maximum possible transitions of
vehicles occur at different directions simultaneously without facing any
accident. | Anik Mallik, Sanjoy Kundu, Md. Ashikur Rahman | 2023-03-08T17:01:08Z | http://arxiv.org/abs/2303.04716v1 | **Proceeding of the**
**International Conference on Electrical, Computer and Telecommunication Engineering**
**01- 02 December 2012 (ICECTE2012), RUET, Rajshahi-6204, Bangladesh**
**ICECTE2012: PI-0144**
**An FPGA-Based Semi-Automated Traffic Control System Using Verilog HDL**
_Anik Mallik, Sanjoy Kundu, Md. Ashikur Rahman_
Department of Electronics & Communication Engineering, Khulna University of Engineering & Technology
E-mail: [email protected], [email protected], [email protected]
ABSTRACT
_Traffic Congestion is one of the severe problems in heavily populated countries like Bangladesh where Automated Traffic Control System needs to be implemented. An FPGA-based Semi-automated system is introduced in this paper including a completely new feature "Safe State" to avoid sudden unwanted collision. Here we used sequential encoding which has made the program much simpler and so that easy to control and modify. The experimental result showed the automated change in traffic lights according to the specified timing sequences which would be able to conduct maximum possible transitions of vehicles occur at different directions simultaneously without facing any accident._
KEY WORDS: FPGA, Verilog HDL, Mealy Machine, Sequential Encoding
## 1 Introduction
Traffic Congestion problem is a common misery to the busy lives of the heavily populated countries like Bangladesh. Weak urban planning and Traffic Control System are mainly responsible for this problem. To form methods which will control the Congestion of traffic in the roads in urban and suburban areas remained the most challenging issue for the engineers [1]. So, research on this issue continues and various methods have been proposed by the researchers. The classical methods had mainly two drawbacks- one is, due to lack of perfect adjustments in the signal-timings, vehicles had to wait for a long time without any reason [2], and the other is, there was no such way in which ambulances, fire brigades, police vehicles and other emergency vehicles could pass with high level of priority [3]. Some of the recent methods used Fuzzy Logic to implement an intelligent system which would monitor and control the Traffic system [4]. These methods could set time adjustment and lead the cars with the help of arrival parameters through camera type sensors and image processing or electromagnetic sensors [5]. Fuzzy Logic is of a great use for systems whose appropriate mathematical modeling is not possible or very expensive to implement. However, the system may suffer from noticeable error if the input-output relation & the set of rules are weakly defined. So, there is a need to design a system, simple to operate, using FPGA based, cost effective technology which can eradicate these problems and can provide much greater speed.
Recently a Traffic Control System has been developed using Mealy State Machines [6]. But the system developed in that research has limited Emergency States for passing vehicles on the roads, with different switches for different roads and no "Safe State" (i.e. all roads having an alert signal to avoid sudden unwanted collisions) between an emergency and a normal or ordinary state. The system also suffered from inadequate Green signals for various directions and an inadequate signal for pedestrian passing on Zebra-crossings. In our proposed design, an input of 3 binary bits has been provided which is used for defining the states, including a new idea of a "Safe State". Emergency states are changed manually and are very easy to handle. The major or minor roads can be defined manually and the system can run that way. Timings are specified here based on our assumptions; they can also be changed by a statistical approach depending upon the necessity of the roads in practice. We have also included signals for Zebra-crossing into the system, and this design allows the maximum possible transitions of vehicles in different directions without facing any accident, along with the basic features found in any control system, which can be fabricated into one single chip.
The system has been written in Verilog HDL [7] & designed, tested and evaluated using the ISE 6.0 tool of Xilinx and VeriloggerPro 6.5 [8]. Then the design has been implemented on Xilinx Spartan-2 FPGA (XC2S150). In the following sections, the methodology, the implementation technique of this system, designed circuit schematic and the result of the experiment will be discussed in brief.
## 2 Methodology
### 2.1 Mealy State Machine & State Machine Style
State machines can be classified into two categories: Mealy machines and Moore machines. Basically, the process of generation of the output differentiates between Mealy and Moore machines. In a Mealy
machine, the outputs are generally a function of both the state and the inputs [9].
The State Machine Style can be of various forms. One of these forms combines the next-state logic and the state register. This style is good to use since the next-state logic and state register here are related quite inseparably. If the state machine has many inputs which change frequently, this style is efficient enough to use [9].
### 2.2 Sequential Encoding
State encoding has a great effect on the size and performance of a circuit, and also influences the amount of electronic glitches produced by a circuit. This can also be of several kinds: Sequential Encoding, Gray Encoding, One-Hot or One-Cold Encoding etc.
In Sequential Encoding, state comes one after another according to the sequence of bit pattern and makes the system simpler and efficient. But glitches may occur when we deal with a circuit with huge dimensions. In case of Gray Encoding, the simplicity in program code along with the circuitry is missed as it does not change the bit pattern sequentially and provides speed little bit slower. One-Hot or One-Cold Encoding is very efficient when we need to consider faster operation but in this case the area of circuitry becomes huge [9].
In this design, we have used Sequential Encoding as our concern was to simplify the operation. The proposed circuitry is not huge, so this scheme gave us the optimized performance.
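The difference between the schemes is easiest to see on a small example. For a 3-bit, 8-state machine the three encodings compare as follows (an illustrative Python listing, separate from the Verilog design):

```python
# Illustrative comparison of the three encoding schemes for a 3-bit, 8-state machine.
for i in range(8):
    sequential = format(i, "03b")             # states follow the natural binary order
    gray       = format(i ^ (i >> 1), "03b")  # adjacent states differ in a single bit
    one_hot    = format(1 << i, "08b")        # one flip-flop per state
    print(i, sequential, gray, one_hot)
```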
### 2.3 Field Programmable Gate Array (FPGA)
A Field Programmable Gate Array (FPGA) is an integrated circuit and its configuration is generally defined using a Hardware Description Language (HDL), and in this case, it is same as an Application Specific Integrated Circuit (ASIC). This name 'FPGA' is given as it is structured like a gate array configuration of an ASIC [10]. This regular structure is interconnected which is under the designers' extreme control, which means that the user can design, modify or make any change to his circuit easily [11]. FPGAs can be used to implement any logical function that can be performed by an ASIC, and it should also be noted that FPGA design is more cost-effective than that of ASIC [12]. They have lots of advantages over microcontrollers, such as greater speed, number of I/O ports and performance [13].
## 3 System Implementation
### 3.1 States
The simplest form of traffic light is introduced with a single or a pair of colored aspects that warns any vehicle of the shared right of way of a possible conflict or danger [14]. But, in case of a complex form, the scenario is a bit different.
#### 3.1.1 Traditional States
Now, we have to imagine a junction of four roads. In this complex case, each road will have six lights (Red, Yellow, Green for going straight, Green for turning Left, Green for turning Right and Green for Zebra-crossing), which are illustrated in **Fig. 1**. The timing sequence is shown in **Table 1**.
#### 3.1.2 Emergency States
In some special cases, like emergency states, the control would be different from the traditional one. For example, in case of an emergency we can easily open Road 1 in this way and block the others, as shown in **Table 2**.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|} \hline
**Timing Sequence** & **R1** & **Y1** & **G1 (straight)** & **G1 (left)** & **G1 (right)** & **M1** & **HEX (for all the four roads)** \\ \hline
60 sec & 0 & 0 & 1 & 1 & 0 & 0 & 3218A6 \\ \hline
15 sec & 0 & 1 & 0 & 0 & 0 & 0 & 410820 \\ \hline
60 sec & 1 & 0 & 0 & 1 & 1 & 0 & 98C862 \\ \hline
15 sec & 1 & 0 & 0 & 0 & 0 & 0 & 810420 \\ \hline
60 sec & 1 & 0 & 0 & 0 & 1 & 0 & 8A6321 \\ \hline
15 sec & 1 & 0 & 0 & 0 & 0 & 0 & 820410 \\ \hline
60 sec & 1 & 0 & 0 & 0 & 0 & 1 & 86298C \\ \hline
15 sec & 0 & 1 & 0 & 0 & 0 & 0 & 420810 \\ \hline \end{tabular}
\end{table}
Table 1: Timing Sequence of Traditional States (Only Road 1 is shown here)
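To make the HEX column concrete: each 24-bit value packs the six lights of all four roads, and its leading six bits reproduce the Road-1 columns listed above. The short Python check below illustrates the decoding (the bit layout is inferred from the table itself, not taken from the Verilog source):

```python
# Decoding sketch for Table 1: the 24-bit HEX word lists the six signals of
# Road 1 first, in the order of the table columns (R, Y, G, G, G, M).
rows = [
    ("3218A6", (0, 0, 1, 1, 0, 0)),
    ("410820", (0, 1, 0, 0, 0, 0)),
    ("98C862", (1, 0, 0, 1, 1, 0)),
    ("810420", (1, 0, 0, 0, 0, 0)),
    ("8A6321", (1, 0, 0, 0, 1, 0)),
    ("820410", (1, 0, 0, 0, 0, 0)),
    ("86298C", (1, 0, 0, 0, 0, 1)),
    ("420810", (0, 1, 0, 0, 0, 0)),
]
for hex_word, road1 in rows:
    bits = format(int(hex_word, 16), "024b")          # 24 bits = 4 roads x 6 lights
    assert tuple(int(b) for b in bits[:6]) == road1   # leading 6 bits = Road-1 columns
print("Road-1 columns of Table 1 match the leading 6 bits of each HEX word.")
```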
Figure 1: A Four way road each having six lights
## 5 Conclusion
We have presented an FPGA-Based Semi-Automated Traffic Control System, where we used Verilog as the HDL. We have named this design as "Semi-Automated" because we could not make the emergency-state input system fully automated. We used the Mealy State Machine and the Sequential Encoding made the program simplest. This design allows the maximum possible transportation of vehicles in different directions simultaneously. One of the salient features of this design is any addition of different lights for different purposes can take place easily just by adding bits to the output ports. Again, we have left two more states in this design which can be used for other necessary purposes. The whole design needs only two inputs- one is Clock and the other is for different states, which can be controlled fluently. The most effective feature of this design is the use of 15sec "Safe State" which is able to avoid the sudden unwanted collision between vehicles in the transitions of states easily. Any kind of step for the improvement of this system can easily be taken for this design because the logic is made simpler than ever.
## 6 Future Work
We have a plan to include a feature to this system which will be able to count the number of the vehicles in a road. This can be accomplished using a number of Heat-sensors or IR-sensors placed in different positions in a road. Each road will be timed automatically according to the number of vehicles.
## 7 Acknowledgements
We want to acknowledge gratefully Mr. Robin Sarker (B.Sc. Engr.), Mr. Shumit Saha (Lecturer) and Dr. A. B. M. Aowlad Hossain (Asst. Professor) from Dept. of Electronics & Communication Engineering, Khulna University of Engineering & Technology, for their valuable suggestions & also for providing the facilities to implement the design on FPGA.
|
2302.13143 | Ensemble learning for Physics Informed Neural Networks: a Gradient
Boosting approach | While the popularity of physics-informed neural networks (PINNs) is steadily
rising, to this date, PINNs have not been successful in simulating multi-scale
and singular perturbation problems. In this work, we present a new training
paradigm referred to as "gradient boosting" (GB), which significantly enhances
the performance of physics informed neural networks (PINNs). Rather than
learning the solution of a given PDE using a single neural network directly,
our algorithm employs a sequence of neural networks to achieve a superior
outcome. This approach allows us to solve problems presenting great challenges
for traditional PINNs. Our numerical experiments demonstrate the effectiveness
of our algorithm through various benchmarks, including comparisons with finite
element methods and PINNs. Furthermore, this work also unlocks the door to
employing ensemble learning techniques in PINNs, providing opportunities for
further improvement in solving PDEs. | Zhiwei Fang, Sifan Wang, Paris Perdikaris | 2023-02-25T19:11:44Z | http://arxiv.org/abs/2302.13143v2 | # Ensemble learning for Physics Informed Neural Networks: a Gradient Boosting approach
###### Abstract
While the popularity of physics-informed neural networks (PINNs) is steadily rising, to this date, PINNs have not been successful in simulating multi-scale and singular perturbation problems. In this work, we present a new training paradigm referred to as "gradient boosting" (GB), which significantly enhances the performance of physics informed neural networks (PINNs). Rather than learning the solution of a given PDE using a single neural network directly, our algorithm employs a sequence of neural networks to achieve a superior outcome. This approach allows us to solve problems presenting great challenges for traditional PINNs. Our numerical experiments demonstrate the effectiveness of our algorithm through various benchmarks, including comparisons with finite element methods and PINNs. Furthermore, this work also unlocks the door to employing ensemble learning techniques in PINNs, providing opportunities for further improvement in solving PDEs.
PDE Physics-informed neural networks Gradient boosting Ensemble learning
## 1 Introduction
Physics informed neural networks have recently emerged as an alternative to traditional numerical solvers for simulations in fluid mechanics [1, 2], bio-engineering [3, 4], meta-material design [5, 6], and other areas in science and engineering [7, 8]. However, PINNs using fully connected, or some variant architectures such as Fourier feature networks [9], fail to accomplish stable training and produce accurate predictions at times, especially when the underlying PDE solutions contain high-frequencies or multi-scale features [10, 11]. To mitigate this pathology, Krishnapriyan _et. al._[12] proposed a sequence-to-sequence learning method for time-dependent problems, which divides the time domain into sub-intervals and solves the problem progressively on each of them. This method avoids the pollution of the underlying solution due to the temporal error accumulation. Wang _et. al._[13] elaborated the reason that the PINNs fail to train from a neural tangent kernel perspective, and proposed an adaptive training strategy to improve the PINNs' performance. An empirical learning-rate annealing scheme has been proposed in Wang _et. al._[14], which utilizes the back-propagated gradient statistics during training to adaptively assign importance weights to different terms in a PINNs loss function, with the goal of balancing the magnitudes of the gradients in backward propagation. Although all of these works were demonstrated to produce significant and consistent improvements in the stability and accuracy of PINNs, the fundamental reasons behind the practical difficulties of training fully-connected PINNs still remain unclear [10].
Besides PINNs, many other machine learning tasks suffer from the same issues, and some of these issues have been resolved by the gradient boosting method. The idea of gradient boosting is blending several weak learners into a fortified one that gives better predictive performance than could be obtained from any of the constituent learners alone [15]. For example, Zhang and Haghani [16] propose a gradient-boosting tree-based travel time prediction method, driven by the successful application of random forest in traffic parameter prediction, to uncover hidden patterns in travel time data and to enhance the accuracy and interpretability of the model. Callens _et. al._[17] used gradient boosting trees to improve wave forecasts at a specific location, with RMSE values on average \(8\%\) to \(11\%\) lower for the correction of significant wave height and peak wave period. Recently, many researchers have contributed to the gradient boosting
method and further improved its performance. Friedman _et. al._[18] shows that both the approximation accuracy and execution speed of gradient boosting can be substantially improved by incorporating randomization into the procedure, and this randomized approach also increases robustness against overcapacity of the base learner. Ke [19] found that the efficiency and scalability of Gradient Boosting Decision Tree (GBDT) are unsatisfactory when the feature dimension is high and data size is large and a greedy algorithm has been used to effectively reduce the number of features without hurting the accuracy of split point determination by much and thus solve the issue.
Inspired by the above-mentioned literature review, we arrive at our proposed method in this paper. In this work, we present a gradient boosting physics informed neural networks (GB PINNs), which adopts a gradient boosting idea to approximate the underlying solution by a sequence of neural networks and train the PINNs progressively. Specifically, our main contributions can be summarized into the following points:
1. A simple implementation of the gradient boosting method, which can easily be integrated into existing PINNs code with minimal modifications.
2. The assembly of several weak PDE predictors to form a strong predictor, resulting in increased flexibility for solving intractable problems.
3. Low sensitivity to the choice of neural networks and their arrangement, resulting in fewer efforts required for fine-tuning hyperparameters.
4. The flexibility to combine with other techniques, such as Fourier features, making it a versatile approach for PINNs.
We introduce some preliminaries for key ingredients of our algorithm in section 2. Then we present our algorithm with motives in section 3. Numerical experiments are shown in section 4 to verify our algorithm. We discuss our algorithm and conclude the paper in section 5.
## 2 Preliminaries
In this section, we will provide a brief overview of the related topics that are relevant to the proposed algorithm in this paper. For a more in-depth understanding of these topics, we encourage readers to refer to the original papers cited below.
### Physics informed neural networks
PINNs are a method for inferring a continuous latent function \(u(\mathbf{x})\) that serves as the solution to a nonlinear PDE of the form:
\[\mathcal{N}[u](\mathbf{x}) =0,\quad\text{ in }\Omega, \tag{1}\] \[\mathcal{B}[u](\mathbf{x}) =0,\quad\text{ on }\partial\Omega, \tag{2}\]
where \(\Omega\) is an open, bounded set in \(\mathbb{R}^{d}\) with a piecewise smooth boundary \(\partial\Omega\), \(\mathbf{x}\in\mathbb{R}^{d}\), and \(\mathcal{N}\) and \(\mathcal{B}\) are nonlinear differential and boundary condition operators, respectively.
The solution to the PDE is approximated by a deep neural network, \(u_{\theta}\), which is parameterized by \(\theta\). The loss function for the network is defined as:
\[L(u;\theta)=\frac{\omega_{e}}{N_{p}}\sum_{i=1}^{N_{p}}|\mathcal{N}[u_{\theta} ](\mathbf{x}_{i}^{p})|^{2}+\frac{\omega_{b}}{N_{b}}\sum_{i=1}^{N_{b}}|\mathcal{B} [u_{\theta}](\mathbf{x}_{i}^{b})|^{2}, \tag{3}\]
where \(\{\mathbf{x}_{i}^{p}\}_{i=1}^{N_{p}}\) and \(\{\mathbf{x}_{i}^{b}\}_{i=1}^{N_{b}}\) are the sets of points for the PDE residual and boundary residual, respectively, and \(\omega_{e}\) and \(\omega_{b}\) are the weights for the PDE residual loss and boundary loss, respectively. The neural network \(u_{\theta}\) takes the coordinate \(\mathbf{x}\) as input and outputs the corresponding solution value at that location. The partial derivatives of the \(u_{\theta}\) with respect to the coordinates at \(\mathcal{N}\) in (3) can be readily computed to machine precision using reverse mode differentiation [20].
The loss function \(L(u;\theta)\) is typically minimized using a stochastic gradient descent algorithm, such as Adam, with a batch of interior and boundary points generated to feed the loss function. The goal of this process is to find a set of neural network parameters \(\theta\) that minimize the loss function as much as possible.
It is worth noting that the abstract PDE problem in (1)-(2) can easily be extended to time-dependent cases by considering one component of \(\mathbf{x}\) as a temporal variable. In this case, one or more initial conditions should be included in the PDE system and additional initial condition constraints should be added to the loss function (3).
### Gradient boosting machines
Gradient Boosting (GB) is a powerful machine learning technique that is commonly used in regression and classification tasks. It is an additive ensemble of weak prediction models, similar to AdaBoost, but with a key difference - unlike other ensemble algorithms, GB does not have trainable weights, and the sub-models are trained sequentially instead of in parallel. For the sake of simplicity, in the rest of the paper, we will use \(f(x;\theta)\) to denote a general neural network \(f\) with input \(x\) and parameterized by \(\theta\).
Given a neural network \(f(x;\theta)\) and a training dataset, the loss function \(L(f;\theta)\) is defined as the sum of the individual losses for each sample, as follows:
\[L(f;\theta)=\sum_{i=1}^{N}L(y_{i},f(x_{i};\theta)),\]
where \(N\) is the total number of samples in the dataset, \(y_{i}\) is the true label for sample \(x_{i}\), and \(f(x_{i};\theta)\) is the predicted label for sample \(x_{i}\).
To minimize this loss function, a common approach is to use the stochastic gradient descent algorithm. This algorithm updates the network's parameters, \(\theta\), iteratively using the following update rule:
\[\theta\leftarrow\theta-\gamma\frac{\partial}{\partial\theta}L(f;\theta), \tag{4}\]
where \(\gamma\) is a user-specified learning rate that controls the step size of the updates.
The goal of the gradient boosting (GB) method is to minimize the loss function \(L(f;\theta)\) with respect to the neural network function \(f\). GB method assumes that the surrogate model can be represented in the following iterative form:
\[f_{m}(\mathbf{x};\Theta_{m})=f_{m-1}(\mathbf{x};\Theta_{m-1})+\rho_{m}h_{m}(\mathbf{x}; \theta_{m}),\quad\text{ for }m=1,2,3,\cdots, \tag{5}\]
where \(f_{0}(\mathbf{x};\theta_{0})\) is a pre-selected baseline neural network, \(\rho_{m}\) is the learning rate, \(f_{m}(\mathbf{x};\Theta_{m})\) is parameterized by \(\Theta_{m}=\bigcup_{i=0}^{m}\theta_{i}\), and \(h_{m}(\mathbf{x};\theta_{m})\) is a neural network designed to enhance the accuracy of the predictor \(f_{m-1}(\mathbf{x};\Theta_{m-1})\). The gradient descent algorithm is used to choose \(h_{m}(\mathbf{x};\theta_{m})\), which is defined as:
\[h_{m}(\mathbf{x};\theta_{m})=-\frac{\partial}{\partial f_{m-1}(\mathbf{x};\Theta_{m- 1})}L(f_{m-1};\Theta_{m-1}). \tag{6}\]
Therefore, the model update rule is defined as:
\[f_{m}(\mathbf{x};\Theta_{m})=f_{m-1}(\mathbf{x};\Theta_{m-1})-\rho_{m}\frac{\partial }{\partial f_{m-1}(\mathbf{x};\Theta_{m-1})}L(f_{m-1};\Theta_{m-1}). \tag{7}\]
In this fashion, the corresponding loss at the \(m\)-th step reads
\[L(f_{m};\Theta_{m})=L(f_{m-1}+\rho_{m}h_{m};\Theta_{m}). \tag{8}\]
The technique outlined in this construction is commonly referred to as a GB method. The update function \(h_{m}(\mathbf{x};\theta_{m})\) in equation (6) is similar in nature to the gradient vector in equation (4); however, GB operates by taking the gradient with respect to the function, rather than the parameter vector as traditional gradient descent does. This distinction is the reason why we refer to GB as a method that descends the gradient in function space. For further information on gradient boosting methods, please refer to the reference [21]. Furthermore, it is worth noting that in the context of PINNs, this method has been adapted to a simpler form that is easily implementable.
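As a self-contained (non-PINN) illustration of equations (5)-(8), the sketch below boosts simple regression stumps on a 1D dataset with squared loss; in that case the negative functional gradient in (6) is just the current residual \(y-f_{m-1}(x)\). The stump learner, step size, and data are assumptions made purely for illustration.

```python
import numpy as np

def fit_stump(x, r):
    """Weak learner: a one-split regression stump fit to the residuals r."""
    best = (np.inf, None)
    for s in np.quantile(x, np.linspace(0.05, 0.95, 19)):
        left, right = r[x <= s].mean(), r[x > s].mean()
        err = ((r - np.where(x <= s, left, right)) ** 2).sum()
        if err < best[0]:
            best = (err, (s, left, right))
    s, left, right = best[1]
    return lambda z: np.where(z <= s, left, right)

rng = np.random.default_rng(0)
x = rng.uniform(0, 1, 200)
y = np.sin(2 * np.pi * x) + 0.1 * rng.normal(size=200)

f = lambda z: np.zeros_like(z)          # f_0: baseline predictor
rho = 0.3                               # functional learning rate
for m in range(50):
    residual = y - f(x)                 # -dL/df at f_{m-1} for squared loss
    h = fit_stump(x, residual)
    f = (lambda fm, hm: (lambda z: fm(z) + rho * hm(z)))(f, h)   # eq. (5)

print("train MSE:", ((y - f(x)) ** 2).mean())
```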
## 3 Gradient boosting physics informed neural networks
Despite a series of promising results in the literature [22; 4; 11; 1; 2], the original formulation of PINNs proposed by Raissi _et al._[23] has been found to struggle in constructing an accurate approximation of the exact latent solution. The reasons for this remain poorly understood. However, some observations in the literature can be used to infer potential solutions to this issue. One such observation is that the prediction error in PINNs is often of high frequency, with small and intricate structures, as seen in figures 4(b) and 6(a) and (b) of [13]. As demonstrated in [9], high-frequency functions can be learned relatively easily using Fourier features. Based on these findings, it is natural to consider using a multi-layer perceptron (MLP) as a baseline structure in PINNs, followed by a Fourier feature network, to further minimize the error. This idea led to the development of GB PINNs.
The proposed method, referred to as GB PINNs, utilizes a sequence of neural networks in an iterative update procedure to gradually minimize the loss function. As shown in equation (6), the update model \(h_{m}(\mathbf{x};\theta_{m})\) is defined by the
gradient of the loss with respect to the previous output \(f_{m-1}\). However, in the context of PINNs, the PDE residual loss in (3) typically includes gradients of the outputs with respect to the inputs. This necessitates the computation of twisted gradients, which is a unique characteristic of this approach. For example
\[\frac{\partial}{\partial f(\mathbf{x};\theta)}\left[\left(\frac{\partial f(\mathbf{x}; \theta)}{\partial\mathbf{x}}\right)^{2}\right],\]
which is definitely not elementary and should be avoided. Despite the mathematical validity of the gradient
\[\frac{\partial}{\partial f_{m-1}(\mathbf{x};\Theta_{m-1})}L(f_{m-1};\Theta_{m-1}),\]
it can be challenging to compute using automatic differentiation (AD), due to the fact that \(L(f_{m-1};\Theta_{m-1})\) is typically a leaf node in the computational graph.
Fortunately, we can still utilize the formulation in equation (8) to establish an appropriate GB algorithm for PINNs. After the training of the \((m-1)\)-th step is completed, we add an additional pre-selected neural network \(\rho_{m}h_{m}(\mathbf{x};\theta_{m})\) to the previous predictor \(f_{m-1}(\mathbf{x};\Theta_{m-1})\) and subsequently apply gradient descent with respect to the parameters \(\theta_{m}\). This iterative procedure allows us to gradually minimize the loss and improve the accuracy of the predicted solution.
It is important to note that the neural networks utilized in the proposed GB PINNs algorithm do not need to possess a consistent structure. In fact, they can be composed of a variety of surrogate models, as long as they have the same input and output dimensions. Examples of such models include MLPs, Fourier feature networks, radial basis neural networks, and even finite element representations. This flexibility allows for a more versatile approach to minimizing the loss and improving the accuracy of the approximation to the exact latent solution.
As highlighted in [13], the neural tangent kernel of wide and deep neural networks remains relatively stable during the training process, which can impede the ability to learn solutions with sharp gradients. However, it has been proposed that by decreasing the size of the network, such issues can be addressed. This gives rise to the idea of utilizing a small network for the initial approximation of the solution, and then progressively refining it using larger networks. 1
Footnote 1: It is worth noting that the proposed organization of neural networks is merely a rough idea and may not always yield the best performance. The choice and arrangement of neural networks may vary depending on the specific problem at hand. The numerical experiments will provide further insights into the optimal configuration of neural networks for a given problem.
In GB training, the term \(\rho_{m}\) in equation (5) serves as a learning rate in the function space gradient descent, and it also adjusts the magnitude of \(h_{m}(\mathbf{x};\theta_{m})\). In PINNs scenarios, once the training of \(f_{m-1}(\mathbf{x};\Theta_{m-1})\) is complete, we already have a decent predictor, which implies that the relative error between the current predictor and the ground truth (for example, relative \(l^{2}\) error defined below) is assumed to be small. The additive model \(h_{m}(\mathbf{x};\theta_{m})\) is then used to handle this error, thus it is reasonable to assume that the magnitude of \(h_{m}(\mathbf{x};\theta_{m})\) decreases over iteration step \(m\). As a result, it is also reasonable to assume that the \(\rho_{m}\) gradually decreases over \(m\). In the experiments below, we will assume that the \(\rho_{m}\) exponentially decays, which is similar to traditional gradient descent methods.
The proposed algorithm can be summarized as follows:
```
Require: A baseline neural network \(f_{0}(\mathbf{x};\theta_{0})\) and an ordered neural network set \(\{h_{i}(\mathbf{x};\theta_{i})\}_{i=1}^{m}\) containing the models to be trained in sequence; a set of learning rates \(\{\rho_{i}\}_{i=0}^{m}\) corresponding to \(\{f_{0}(\mathbf{x};\theta_{0})\}\cup\{h_{i}(\mathbf{x};\theta_{i})\}_{i=1}^{m}\) (usually \(\rho_{0}=1\) and \(\rho_{i}\) is decreasing in \(i\)); set \(f_{i}(\mathbf{x};\Theta_{i})=f_{i-1}(\mathbf{x};\Theta_{i-1})+\rho_{i}h_{i}(\mathbf{x};\theta_{i})\) for \(i=1,2,3,\cdots,m\). Given the PDE problem (1)-(2), establish the corresponding loss (3).
1: Train \(f_{0}(\mathbf{x};\theta_{0})=\rho_{0}f_{0}(\mathbf{x};\theta_{0})\) to minimize loss (3).
2: for \(i=1\) to \(m\) do
3:   In \(f_{i}(\mathbf{x};\Theta_{i})=f_{i-1}(\mathbf{x};\Theta_{i-1})+\rho_{i}h_{i}(\mathbf{x};\theta_{i})\), set the trainable parameters to \(\theta_{i}\). Train \(f_{i}(\mathbf{x};\Theta_{i})\) to minimize loss (3).
4: end for
5: return \(f_{m}(\mathbf{x};\Theta_{m})\) as a predictor of the solution of (1)-(2) for any point in \(\overline{\Omega}\).
```
**Algorithm 1** Gradient boosting physics informed neural network.
The proposed algorithm, described in Algorithm 1, utilizes a sequence of neural networks and an iterative update procedure to minimize the loss gradually. At each iteration step \(i\), the forward prediction relies on the union parameter
set \(\Theta_{i}\), while the backward gradient propagation is only performed on \(\theta_{i}\). This results in a mild increase in computational cost during the training of GB iteration. The simplicity of this algorithm allows practitioners to easily transfer their PINNs codes to GB PINNs'. In the following section, we will demonstrate that this simple technique can enable PINNs to solve many problems that were previously intractable using the original formulation of Raissi _et. al._[23].
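A minimal PyTorch sketch of this training loop (Algorithm 1) is given below: previously trained networks are frozen, the new network \(h_{i}\) enters scaled by \(\rho_{i}\), and only \(\theta_{i}\) is handed to the optimizer. The toy PDE loss, architectures, step sizes, and iteration counts are illustrative assumptions rather than the paper's exact configuration.

```python
import torch

def make_mlp(sizes):
    layers = []
    for a, b in zip(sizes[:-1], sizes[1:]):
        layers += [torch.nn.Linear(a, b), torch.nn.Tanh()]
    return torch.nn.Sequential(*layers[:-1])        # drop the final activation

def pinn_loss(predict, x_pde, x_bc, w_e=1.0, w_b=10.0):
    # Illustrative loss for -u'' = pi^2 sin(pi x), u(0) = u(1) = 0.
    x = x_pde.clone().requires_grad_(True)
    u = predict(x)
    du = torch.autograd.grad(u, x, torch.ones_like(u), create_graph=True)[0]
    d2u = torch.autograd.grad(du, x, torch.ones_like(du), create_graph=True)[0]
    res = -d2u - torch.pi**2 * torch.sin(torch.pi * x)
    return w_e * res.pow(2).mean() + w_b * predict(x_bc).pow(2).mean()

nets = [make_mlp([1, 50, 1]), make_mlp([1, 100, 1]), make_mlp([1, 100, 100, 1])]
rhos = [1.0, 0.5, 0.25]                              # rho_i decays with i
x_pde = torch.rand(1000, 1)
x_bc = torch.tensor([[0.0], [1.0]])

trained = []                                         # frozen (network, rho) pairs
for net, rho in zip(nets, rhos):
    def predict(x, net=net, rho=rho):
        # f_i(x) = frozen contributions + rho_i * h_i(x; theta_i)
        prev = sum(r * n(x) for n, r in trained) if trained else 0.0
        return prev + rho * net(x)
    opt = torch.optim.Adam(net.parameters(), lr=1e-3)   # only theta_i is trained
    for step in range(5000):
        opt.zero_grad()
        pinn_loss(predict, x_pde, x_bc).backward()
        opt.step()
    for p in net.parameters():
        p.requires_grad_(False)                      # freeze after this GB iteration
    trained.append((net, rho))
```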
Additionally, the proposed GB PINNs algorithm also introduces another dimension of flexibility in terms of network architecture design, namely the combination of different neural networks. This opens up new opportunities for fine-tuning the architecture to minimize PDE residual losses and improve overall predictive accuracy. As shown in the following section, the performance of GB PINNs is relatively insensitive to the specific choice and arrangement of networks, as long as their capacity is sufficient.
## 4 Numerical Experiments
In this section, we will demonstrate the effectiveness of the proposed GB PINNs algorithm through a comprehensive set of numerical experiments. To simplify the notation, we use a tuple of numbers to denote the neural network architecture, where the tuple represents the depth and width of the layers. For example, a neural network with a two-dimensional input and a one-dimensional output, as well as two hidden layers with width \(100\), is represented as \((2,100,100,1)\). Our default experimental setup is summarized in Table 1, and will be used in all experiments unless otherwise specified.
To quantify the model's accuracy, we use the relative \(l^{2}\) error over a set of points \(\{x_{i}\}_{i=1}^{N}\):
\[\text{Error}=\frac{\sum_{i=1}^{N}|u_{pred}(x_{i})-u_{true}(x_{i})|^{2}}{\sum _{i=1}^{N}|u_{true}(x_{i})|^{2}}.\]
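For reference, this metric can be computed as follows; note that, as written above, the definition omits the square root that is sometimes applied to relative \(l^{2}\) errors.

```python
import numpy as np

def relative_l2_error(u_pred, u_true):
    """Relative l2 error over a set of evaluation points, as defined above."""
    u_pred, u_true = np.asarray(u_pred), np.asarray(u_true)
    return np.sum(np.abs(u_pred - u_true) ** 2) / np.sum(np.abs(u_true) ** 2)

# Example: relative_l2_error(model_outputs, reference_values)
```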
### 1D singular perturbation
In this first example, we utilize GB PINNs to solve the following 1D singular perturbation problem.
\[-\varepsilon^{2}u^{\prime\prime}(x)+u(x) =1,\qquad\text{ for }x\in(0,1),\] \[u(0) =u(1) =0.\]
The perturbation parameter, \(0<\varepsilon\ll 1\), is set to \(10^{-4}\) in this case. The exact solution to this problem is given by
\[u(x)=1-\frac{e^{-x/\varepsilon}+e^{(x-1)/\varepsilon}}{1+e^{-1/\varepsilon}}.\]
Despite the boundedness of the solution, it develops boundary layers at \(x=0\) and \(x=1\) for small values of \(\varepsilon\), a scenario in which traditional PINNs have been known to perform poorly.
To evaluate the performance of GB PINNs, we used a series of fully connected network structures sequentially \((1,50,1)\), \((1,100,1)\), \((1,100,100,1)\), \((1,100,100,1)\) for the baseline and update neural networks, followed by a Fourier feature neural network \((1,50,50,1)\) with frequencies ranging from \(1\) to \(10\). The details of the Fourier feature method used in this study can be found in the appendix 6.1. The step size \(\rho_{m}\) in (5) was set to \(0.5^{i}\), where \(i=0,1,\cdots,5\) is the model index.
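The Fourier feature networks referred to here (and detailed in appendix 6.1) prepend a sinusoidal embedding to a standard MLP. The sketch below shows one common form of such an embedding with integer frequencies \(1\) to \(10\); the exact construction in the appendix may differ, so this is an assumption made for illustration.

```python
import torch

class FourierFeatureNet(torch.nn.Module):
    """MLP applied to [sin(k*x), cos(k*x)] features, k = 1..K (illustrative form)."""
    def __init__(self, in_dim=1, width=50, out_dim=1, K=10):
        super().__init__()
        self.register_buffer("freqs", torch.arange(1, K + 1, dtype=torch.float32))
        self.mlp = torch.nn.Sequential(
            torch.nn.Linear(2 * K * in_dim, width), torch.nn.Tanh(),
            torch.nn.Linear(width, width), torch.nn.Tanh(),
            torch.nn.Linear(width, out_dim),
        )

    def forward(self, x):
        # x: (N, in_dim) -> phases: (N, in_dim, K) by broadcasting against freqs.
        phases = x.unsqueeze(-1) * self.freqs
        feats = torch.cat([torch.sin(phases), torch.cos(phases)], dim=-1)
        return self.mlp(feats.flatten(start_dim=1))

net = FourierFeatureNet()   # plays the role of the (1,50,50,1) Fourier feature network
y = net(torch.rand(8, 1))
```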
For each GB iteration, we train \(10,000\) steps using a dataset of \(10,000\) uniform random points in \((0,1)\). The weights in the loss function (3) are set to \(\omega_{e}=1\) and \(\omega_{b}=10\), respectively, and the batch size for PDE residual computation is \(10,000\).
The output of GB PINNs is shown in Figure 1, where the relative \(l^{2}\) error is found to be \(0.43\%\). The boundary layers at \(x=0\) and \(x=1\) are clearly visible in the solution, reflecting the thinness of the layers and the nearly right-angle curvature of the solution at these points. Despite the singularity present in the solution, GB PINNs were able to provide an accurate solution for this problem. To further highlight the contribution of GB PINNs in this example, an ablation study was conducted. A vanilla PINNs approach, using a network structure of \((1,100,100,100,1)\) and \(20,000\) training steps, was used to solve the same problem. The resulting relative \(l^{2}\) error was found to be \(11\%\), as shown in Figure 2. Additional experiments, including ablation studies and comparisons, can be found in appendix 6.2.

\begin{table}
\begin{tabular}{l l}
\hline \hline
Name & Value \\
\hline
Activation function & GeLU \\
Method to initialize the neural network & Xavier \\
Optimizer & Adam \\
Learning rate & \(10^{-3}\) \\
Learning rate decay period & \(10,000\) \\
Learning rate decay rate & \(0.95\) \\
\hline \hline
\end{tabular}
\end{table}
Table 1: Default experiment setup
Furthermore, we demonstrate the robustness of our algorithm against the choice of network structure and arrangement through this example. We solve the problem using a variety of networks, including \((1,50,1)\), \((1,100,1)\), \((1,100,100,1)\), followed by a Fourier feature network \((1,100,100,100,1)\) with frequencies ranging from \(1\) to \(10\). The resulting relative \(l^{2}\) error is around \(1.1\%\), which is comparable to the previously mentioned result. It is possible that there exists a specific set of hyperparameters and configurations that would allow a single neural network to perfectly solve this problem. After all, by the universal approximation theorem [24], even a neural network with a simple structure possesses the ability to approximate a complicated function. However, the fine-tuning of hyperparameters is a common challenge in machine learning tasks and can consume significant computational resources. GB PINNs, on the other hand, do not require such fine-tuning as the multiple networks adjust the results automatically, ultimately saving effort and resources.
### 2D singular perturbation with boundary layers
In this example, we aim to solve the Eriksson-Johnson problem, which is a 2D convection-dominated diffusion equation. As previously noted in the literature, such as in [25], this problem necessitates the use of specialized finite element techniques in order to obtain accurate solutions, such as the Discontinuous Petrov Galerkin (DPG) finite element method.
Let \(\Omega=(0,1)^{2}\). The model problem is
\[-\varepsilon\Delta u+\frac{\partial u}{\partial x} =0 \text{in }\Omega,\] \[u =u_{0} \text{on }\partial\Omega.\]
The manufactured solution is
\[u(x,y)=\frac{e^{r_{1}(x-1)}-e^{r_{2}(x-1)}}{e^{-r_{1}}-e^{-r_{2}}}\sin(\pi y )\quad\text{ with }\quad r_{1,2}=\frac{-1\pm\sqrt{1+4\varepsilon^{2}\pi^{2}}}{-2\varepsilon}.\]
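For completeness, the manufactured solution above can be evaluated directly as below; this is a straightforward transcription of the formula and can be used to compute the reported relative error on a grid.

```python
import numpy as np

def eriksson_johnson_exact(x, y, eps=1e-3):
    """Manufactured solution u(x, y) of the Eriksson-Johnson problem."""
    s = np.sqrt(1.0 + 4.0 * eps**2 * np.pi**2)
    r1 = (-1.0 + s) / (-2.0 * eps)
    r2 = (-1.0 - s) / (-2.0 * eps)
    num = np.exp(r1 * (x - 1.0)) - np.exp(r2 * (x - 1.0))
    den = np.exp(-r1) - np.exp(-r2)
    return num / den * np.sin(np.pi * y)

# Evaluate on a uniform grid over the unit square.
xx, yy = np.meshgrid(np.linspace(0, 1, 201), np.linspace(0, 1, 201))
u_exact = eriksson_johnson_exact(xx, yy)
```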
In this example, we set \(\varepsilon=10^{-3}\). To resolve this problem, we employ various neural network architectures, including \((2,50,1)\), \((2,100,1)\), \((2,100,100,1)\), \((2,100,100,1)\), \((2,100,100,1)\), and a Fourier feature network \((1,100,100,1)\) with frequencies ranging from \(1\) to \(50\). For each iteration of our GB algorithm, we train for \(20,000\) steps. We set the weights in (3) as \(\omega_{e}=1\) and \(\omega_{b}=10,000\), respectively. The batch sizes for PDE residuals and boundaries are set at \(10,000\) and \(5,000\), respectively. The predicted solution is visualized in Figure 3. We can see that
Figure 1: Prediction of singular perturbation problem by GB PINNs, \(\varepsilon=10^{-4}\). Left: predicted solution (black) v.s. ground truth (red). Right: pointwise error.
our model prediction is in good agreement with the ground truth, with a relative \(l^{2}\) error of \(1.03\%\). Notably, there is a boundary layer present on the right side of the boundary (\(x=1\)), which is not easily recognizable to the naked eye due to its thinness. However, GB PINNs are able to provide a reasonable degree of predictive accuracy even in this challenging scenario.
To further demonstrate the efficacy of our proposed method, we also attempted to solve this problem using a single fully connected neural network of architecture \((2,100,100,10)\), under the same hyperparameter settings as before. However, the resulting relative \(l^{2}\) error was \(57.7\%\). As can be seen in Figure 4, there is a significant discrepancy between the predicted solution and the reference solution. Additional experimental results, including an ablation study and comparisons, can be found in Appendix 6.2.
### 2D singular perturbation with an interior boundary layer
In this example, we address a 2D convection-dominated diffusion problem featuring curved streamlines and an interior boundary layer. The model problem is
\[-\varepsilon\Delta u+\beta\cdot\nabla u =f\qquad\text{ in }\Omega,\] \[u =u_{0}\qquad\text{ on }\partial\Omega,\]
Figure 3: Prediction of 2D singular perturbation with boundary problem by GB PINNs, \(\varepsilon=10^{-3}\). Left: predicted solution. Middle: ground truth. Right: pointwise error.
Figure 2: Prediction of singular perturbation problem by PINNs for ablation study, \(\varepsilon=10^{-4}\). Left: predicted solution (black) v.s. ground truth (red). Right: pointwise error.
with \(\beta=e^{x}(\sin(y),\cos(y))\) and \(f\), \(u_{0}\) are defined such that
\[u(x,y)=\arctan\left(\frac{1-\sqrt{x^{2}+y^{2}}}{\varepsilon}\right).\]
This example has been solved by the DPG finite element method in [25]. A specific value of \(\varepsilon=10^{-4}\) was chosen for the purpose of this study. The neural network architectures employed include \((2,200,200,200,1)\), \((2,100,100,1)\), \((2,100,100,1)\), and a Fourier feature network \((2,50,50,1)\) with frequencies ranging from \(1\) to \(5\). The weights for the loss function in (3) were set as \(\omega_{e}=1\) and \(\omega_{b}=10,000\), respectively. The batch sizes for the PDE residual and boundary were set to \(10,000\) and \(5,000\), respectively. The results of this study are shown in Figure 5 and exhibit a relative \(l^{2}\) error of \(3.37\%\).
As a part of an ablation study, we resolve this problem using a fully connected neural network architecture of \((2,200,200,200,1)\), while keeping the other configurations the same as in the previous experiment. The relative \(l^{2}\) error obtained in this case is \(16\%\). The predictions and the corresponding errors are depicted in Figure 6. Additional experiments pertaining to the ablation study and comparisons can be found in the appendix, section 6.2.
### 2D nonlinear reaction-diffusion equation
In this example, we investigate the solution of a time-dependent nonlinear reaction-diffusion equation. As demonstrated in [12], conventional PINNs have been shown to be inadequate in accurately learning the solution of such equations.
Let \(\Omega=(0,2\pi)\). The model problem is
\[\frac{\partial u}{\partial t}-10\frac{\partial^{2}u}{\partial x^{2 }}-6u(1-u) =0,\quad x\in\Omega,t\in(0,1],\] \[u(x,0) =h(x)\quad x\in\Omega,\]
Figure 4: Prediction of 2D singular perturbation with boundary problem by PINNs, \(\varepsilon=10^{-3}\). Left: predicted solution. Middle: ground truth. Right: pointwise error.
Figure 5: Prediction of 2D singular perturbation with interior boundary problem by GB PINNs, \(\varepsilon=10^{-4}\). Left: predicted solution. Middle: ground truth. Right: pointwise error.
with periodic boundary conditions, where
\[h(x)=e^{-\frac{(x-\pi)^{2}}{2(\pi/4)^{2}}}.\]
In order to impose an exact periodic boundary condition, we use \((\sin(x),\cos(x))\) as the spatial input instead of \(x\), while keeping the temporal input unchanged. This eliminates the need for a boundary loss. Additionally, we add a loss term for the initial condition to the loss function (3). The neural network architectures utilized for this problem are \((2,200,200,200,1)\), \((2,100,100,100,1)\), and \((2,100,100,1)\). The weights for the PDE residual and initial condition loss are set to \(1\) and \(1,000\), respectively. The batch sizes for the PDE residual and initial condition loss are \(20,000\) and \(1,000\), respectively. We present our results in Figure 7. The relative \(l^{2}\) error is \(0.58\%\). As shown in [12], the relative error for traditional PINNs with \(\rho=\nu=5\) is \(50\%\). A comparison between the exact solution and the PINNs' prediction can also be found in the figure.
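The sketch below illustrates how the exact periodic boundary condition and the initial-condition penalty described above can be set up in PyTorch: the spatial coordinate enters the network only through \((\sin x,\cos x)\), so the output is automatically \(2\pi\)-periodic in \(x\). The single-network architecture, batch sizes, and optimizer settings are simplified assumptions; the paper trains a sequence of networks as in Algorithm 1.

```python
import torch

net = torch.nn.Sequential(                  # input: (sin x, cos x, t)
    torch.nn.Linear(3, 100), torch.nn.Tanh(),
    torch.nn.Linear(100, 100), torch.nn.Tanh(),
    torch.nn.Linear(100, 1),
)

def u(x, t):
    # Periodicity in x is exact because x appears only through sin/cos.
    return net(torch.cat([torch.sin(x), torch.cos(x), t], dim=1))

def loss(nu=10.0, rho=6.0, w_ic=1000.0):
    x = (2 * torch.pi * torch.rand(2000, 1)).requires_grad_(True)
    t = torch.rand(2000, 1).requires_grad_(True)
    v = u(x, t)
    v_t = torch.autograd.grad(v, t, torch.ones_like(v), create_graph=True)[0]
    v_x = torch.autograd.grad(v, x, torch.ones_like(v), create_graph=True)[0]
    v_xx = torch.autograd.grad(v_x, x, torch.ones_like(v_x), create_graph=True)[0]
    pde = v_t - nu * v_xx - rho * v * (1 - v)
    # Initial condition u(x, 0) = h(x), the Gaussian bump centred at pi.
    x0 = 2 * torch.pi * torch.rand(1000, 1)
    h = torch.exp(-(x0 - torch.pi) ** 2 / (2 * (torch.pi / 4) ** 2))
    ic = u(x0, torch.zeros_like(x0)) - h
    return pde.pow(2).mean() + w_ic * ic.pow(2).mean()

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(5000):
    opt.zero_grad()
    loss().backward()
    opt.step()
```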
In the aforementioned study, Krishnapriyan _et al._[12] proposed a sequence-to-sequence learning approach to address this problem, achieving a relative \(l^{2}\) error of \(2.36\%\) for \(\rho=\nu=5\). The method required training the neural network for \(20\) steps. In contrast, our approach, which was implemented with \(\nu=10\) and \(\rho=6\), a more challenging scenario, required training only three networks. Despite this, our method achieved an error rate that was approximately four times lower than that of the previous study. This represents a significant improvement.
## 5 Conclusion
In this paper, we propose a GB PINNs algorithm, which utilizes multiple neural networks in sequence to predict solutions of PDEs. The algorithm is straightforward to implement and does not require extensive fine-tuning of hyperparameters. Additionally, the method is flexible and can be easily integrated with other PINNs techniques. Our experimental results demonstrate its effectiveness in solving a wide range of intractable PDE problems.
Figure 6: Prediction of 2D singular perturbation with interior boundary problem by PINNs, \(\varepsilon=10^{-4}\). Left: predicted solution. Middle: ground truth. Right: pointwise error.
Figure 7: Prediction of nonlinear reaction-diffusion equation by GB PINNs. Left: predicted solution. Middle: ground truth. Right: pointwise error.
However, it should be noted that the algorithm has some limitations. Firstly, it is not suitable for solving conservation laws with derivative blow-ups, such as the inviscid Burgers' equation and the Sod shock tube problem. This is due to the lack of sensitivity of these equations' solutions to PDE loss. The addition of more neural networks alone cannot overcome this issue. Secondly, the optimal combination of neural networks is not always clear, and the current experimental selection is mostly based on experience and prior estimation of the PDE problem. Further research into the theoretical and quantitative analysis of this method is an interesting direction for future work.
## Acknowledgments
This work was supported in part by the US Department of Energy (grant DE-SC0019116), the Air Force Office of Scientific Research (grant FA9550-20-1-0060), and DOE-ARPA grant DE-AR0001201.
|
2308.06273 | A Novel Model for Capturing the Multiple Representations during Team
Problem Solving based on Verbal Discussions | Improving the effectiveness of problem solving in teams is an important
research topic due to the complexity and cross-disciplinary nature of modern
problems. It is unlikely that an individual can successfully tackle alone such
problems. Increasing team effectiveness is challenging due to the many
entangled cognitive, motivational, social, and emotional aspects specific to
teamwork. It is often difficult to reliably identify the characteristics that
make a team efficient or those that are main hurdles in teamwork. Moreover,
experiments often produced conflicting results, which suggests possibly
incorrect modeling of team activities and/or hypothesis formulation errors.
Automated data acquisition followed by analytics based on models for teamwork
is a intriguing option to alleviate some of the limitations. This paper
proposes a model describing an individual's activities during team problem
solving. Verbal discussions between team members are used to build models. The
model captures the multiple images (representations) created and used by an
individual during solving as well as the solving activities utilizing these
images. Then, a team model includes the interacting models of the members. Case
studies showed that the model can highlight differences between teams depending
on the nature of the individual work before teamwork starts. Inefficiencies in
teamwork can be also pointed out using the model. | Alex Doboli, Ryan Duke | 2023-07-28T14:22:27Z | http://arxiv.org/abs/2308.06273v1 | A Novel Model for Capturing the Multiple Representations during Team Problem Solving based on Verbal Discussions
###### Abstract
Improving the effectiveness of problem solving in teams is an important research topic due to the complexity and cross-disciplinary nature of modern problems. It is unlikely that an individual can successfully tackle such problems alone. Increasing team effectiveness is challenging due to the many entangled cognitive, motivational, social, and emotional aspects specific to teamwork. It is often difficult to reliably identify the characteristics that make a team efficient or those that are the main hurdles in teamwork. Moreover, experiments have often produced conflicting results, which suggests possibly incorrect modeling of team activities and/or hypothesis formulation errors. Automated data acquisition followed by analytics based on models for teamwork is an intriguing option to alleviate some of these limitations. This paper proposes a model describing an individual's activities during team problem solving. Verbal discussions between team members are used to build the models. The model captures the multiple images (representations) created and used by an individual during solving as well as the solving activities utilizing these images. A team model then includes the interacting models of the members. Case studies showed that the model can highlight differences between teams depending on the nature of the individual work before teamwork starts. Inefficiencies in teamwork can also be pointed out using the model.
## 1 Introduction
Problem solving in teams, also called Collaborative Problem Solving (CPS) [1, 2, 3], considers that a team shares a common goal in devising a solution for a problem. Teamwork starts from an initial state of the team, for example, a state in which members have already attempted to solve the problem individually [4, 5, 6, 7]. Studying CPS has been a major research problem, as modern problems are of a knowledge diversity, complexity, and difficulty that cannot be tackled by a single individual [3]. Effective problem solving in teams is not straightforward. Numerous examples show that individuals can often outperform entire teams.
In a purely mechanistic approach, the problem solving outputs, e.g., ideas, discussions, designs, and social and emotional interactions, would be completely explained by the physiological process in the brains of the team members. Even though this approach might seem extreme, work in neuroaesthetics attempts to explain art along this approach [8, 9]. More recently, some work in engineering design creativity [10, 11, 12] seems to follow the same concept, at least to some degree. While having the potential to offer a causal explanation of high-level cognitive and social activities in terms of physical, physiological signals, it is hard to see how such models scale for complex activities, the well-known curse of dimensionality. The alternative approach, traditionally used in psychology, sociology and organization research, only explores the dependencies among pairs of parameters intuitively selected based on previous experimental observations [13]. These parameters are used to define models and theories, which then spawn new experiments and models. However, while important findings were uncovered using this approach, many experiments report contradicting results and/or suggest conclusions that are subsequently invalidated by new experiments [3]. In our opinion, this limitation relates to insufficient modeling, e.g., model identification, including selecting parameter granularities, model synthesis, like finding parameterized, mathematical expressions of the models, computing the model parameters, and model validation.
This work argues that a team's "sauce", the ingredients that amplify a team's capabilities beyond the sum of its members, is the result of the cognitive, emotional, and social interactions between members, including knowledge access, understanding and extension, and social-emotional changes that these interactions create in the members' behavior during problem solving. In the related literature, joint action represents the interactions among team members that aim to achieve common goals, intentions, or ground [1, 14, 15, 16]. However, as conflicting experimental results suggest, it is difficult to find what combinations of cognitive, emotional, and social parameters consistently produce effective team problem solving. From a modeling point of view, it can be argued that the reported inconsistencies, like having a positive [17, 18, 19, 20] or a negative impact [21, 22] of positive affect on problem solving, point to modeling errors due to various causes, like incorrect parameter identification or model synthesis. While interactions between members can be directly observed in real-time using smart electronic devices, including those devised by the authors [23, 24, 25, 26, 27], the nature and the evolution of a team model (i.e. its "sauce") are hidden. Uncovering the hidden model elements based only on the traditional trial-and-error approach of devising new experiments is cumbersome and unreliable. Instead, there should be more effective methods to uncover the observable and hidden contributions of each member's interactions and behavior to a team, and ultimately to successful team problem solving. The extracted knowledge would enable creating novel ways to form better teams, analyzing a team's behavior and performance, and offering insight for improving problem solving and interactions in a team. Understanding the secret "sauce" of teams can also support better team learning in large and diverse settings.
There are important challenges that distinguish team problem solving in real-world settings from other activities, including experiments in laboratory settings. Laboratory experiments usually control one parameter at a time: the outcomes of a reference group (control group) are compared with the outcomes of the experimental group, for which there was an intervention on the studied parameters. Statistical analysis, such as ANOVA tests, is then used to rule out differences due to random variation across groups, so that any observed differences can be attributed to the intervention. The decomposition of complex entanglements between parameters into a sequence of single, independent parameters misses any dependencies between parameters and can be a reason for the observed inconsistencies in experiments. For example, the influence of any previous activities on a current problem decision is hard to capture, even though fixation (when only outcomes similar to previous ones are produced) and other similar situations are common in real life [28, 29, 30]. Also, problem solving can start from different initial states, like from scratch, reusing previous designs [31, 32, 33], etc., occur in different settings, like in the laboratory, at home, or over Zoom, and span time intervals much longer than the time intervals reported for laboratory experiments. The dependency of cognitive aspects on time, e.g., memory [34], knowledge representations [35, 36, 37, 38, 39, 40, 41, 42, 43], and social and emotional parameters is important [44, 45, 18, 44]. These elements can arguably make traditional experimental methods inefficient and unreliable. Instead, new experimental approaches are needed, so that they can be seamlessly embedded over a long time into real-world problem-solving situations performed in a variety of conditions.
This paper proposes a new model describing an individual's activities in a team during problem solving. Models are created based on the verbal discussions between team members during problem solving. The model captures the multiple mental images (representations) created and used by an individual during solving as well as the solving activities that utilize the images. The team model includes the interacting models of the members. The presented model assumes that every team member operates with multiple, individual images of a problem description and related solutions. Images have the following attributes: (1) They might not include all details at all time instances; however, any degree of detailing of a problem or solution fragment can be explored during problem solving. (2) While focusing on specific details, a certain abstract description is created and used for problem solving. (3) Images might include representations of multiple fragments and ideas, which are not necessarily tied together in a unitary description. (4) Images are connected with an individual's knowledge and previous similar experiences, like previously-solved similar exercises. Under these assumptions, problem solving is the process of filling in the gaps between the images of problem descriptions and solutions using images of solution features (e.g., fragments, characteristics, constraints), input and output features (i.e. examples), and solution goals and sub-goals to be achieved. Features can pertain to different abstraction levels depending on their positioning on the continuum between the images of the problem descriptions and solutions. Hence, in this model, a team's "sauce" (defined as the behavior (dynamics) of the cognitive, emotional, and social interactions between members over time) can be seen as the way in which interactions unfold based on the individual images formed and updated over time during teamwork. For example, there is a matching of the individuals' images if there is a consensus in a team about a final solution, and there are important differences between the images if there is disagreement in a team.
This paper focuses mainly on how cognitive interactions in a team will likely change the individuals' images and produce a certain team behavior. Emotional and social interactions are discussed in other work [25, 26, 27]. The proposed approach considers that creating a correct solution for which there is an informed consensus of every team member is possible only if
each individual's solution image includes a correct understanding of the complete causality (e.g., effects) of each solution feature. Any causal effects missing from an image indicate a possible error in the solution, as that feature was not fully analyzed and agreed on by the team. Then, the considered cognitive interactions include the design (e.g., code), high-level solution descriptions, solution details, cues in explanations, solution places to be changed, identified missing fragments in a solution, suggested changes, errors for certain inputs, inputs and outputs for a certain execution scenario, and data features. Specific cognitive interactions are the result of activities of the general problem-solving flow, such as creating an explanation of a problem or solution, making analogies, analyzing a solution or solution description, combining new ideas with an existing solution description, generalizing from concrete situations, identifying changes to a solution, localizing where changes are needed, modifying a solution, identifying situations unhandled by a solution, finding inputs and outputs for specific errors or missing solution features, and identifying required data processing features. Analyzing the sequence of observed cognitive interactions supports making predictions on the performed solving activities, including finding design decisions executed with or without fully understanding their consequences at the team level.
This paper presents a novel model and the related methodology to characterize the degree to which cognitive, emotional, and social interactions in a team lead towards correctly solving a problem, while there is an educated consensus among team members about a solution. It is important not only to produce a solution, but also to have the team agree with the solution by understanding and analyzing its features. As different members are likely to have different perspectives, reaching a consensus suggests that the solution withstood the analysis from different points of view. Also, all members reaching a consensus is likely to increase individual learning during problem solving. As an individual's images for problem descriptions and solutions are unknown, predictions are made for their more likely features based on the produced solutions and communicated ideas, explanations, statements, questions, analysis results, agreements, and so on. Each communicated idea serves a certain purpose in solving. It is the result of a response synthesis activity, i.e. detailing, generalization, trial and error, etc., and its communication triggers an understanding process within the specific context and experience of each individual. The methodology uses the type of the observed outputs (code and communications) to estimate the nature and characteristics of the performed activities within a general-purpose problem-solving flow. These activities support then making predictions about the images of the individuals. Predictions about the performed problem-solving activities and individual activities can be used to estimate the quality of the solutions, i.e. by describing the analyzed and missed design opportunities, likely errors as well as design issues that are not supported by consensus or which were not fully understood by all members.
The paper has the following structure. Section 2 offers an experimental motivation of the work. Section 3 discusses related work. Section 4 presents the methodology used to study the factors that describe team interactions, i.e. those defining a team's "sauce". Section 5 describes three types of team problem-solving scenarios. Conclusions end the paper.
## 2 Motivation
The _problem-solving space of a problem description_ is the set of solutions and solution fragments that pertain to the semantic space bordered by the problem description and the implementations, e.g., C programs, that correctly address the problem description. Then, problem solving identifies ways of bridging the semantic gap between a problem description and implementations, so that (i) there is an equivalency between the description and a solution under the assumption that (ii) any ambiguous, undefined, or other open-ended aspects of the description are consistently identified and addressed.
**Example**: A problem description requires reading data from a file about date, temperature, humidity, and air pressure. Multiple readings can exist for the same date. Each correct value falls within a predefined range, but values might be corrupt too. The goal is to find the dates with the most erroneous data readings. In addition, the description might include a set of test data and the associated correct outputs. They add details that clarify uncertainties about the description. Still, the description might remain uncertain with respect to some issues, like whether all readings for the same date occur successively in the file, or whether they are mixed up with the readings for other dates.
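As a concrete anchor for the discussion, one possible implementation that a team might converge on for this example is sketched below; the file format (one comma-separated reading per line), the valid ranges, and the tie-breaking rule are assumptions made purely for illustration, since the problem description deliberately leaves them open.

```python
from collections import Counter

# Assumed valid ranges for each reading; the problem statement leaves them open.
RANGES = {"temperature": (-60.0, 60.0), "humidity": (0.0, 100.0), "pressure": (870.0, 1085.0)}

def dates_with_most_errors(path):
    errors = Counter()
    with open(path) as f:
        for line in f:                        # assumed format: date,temp,humidity,pressure
            date, *values = line.strip().split(",")
            for (lo, hi), raw in zip(RANGES.values(), values):
                try:
                    ok = lo <= float(raw) <= hi
                except ValueError:            # non-numeric entries count as corrupt
                    ok = False
                errors[date] += 0 if ok else 1
    top = max(errors.values(), default=0)
    return sorted(d for d, n in errors.items() if n == top)

# Example usage: dates_with_most_errors("readings.csv")
```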
The team activity during problem solving can be observed over time to characterize the emerging team dynamic during problem solving [25, 26, 27, 46]. As summarized in Figure 1 for an example, team discussions reflect the following problem-solving activities and characteristics:
1. Problem solving can be a mixture of activities, which includes not only finding (construction) the actual solution for
a description, but also understanding the description, such as reaching some degree of consensus about the problem requirements and constraints, identifying data that helps problem understanding, and solution construction and validation, and reasoning about solutions and their parts. These activities are not separated in distinct, subsequent phases (like it is often assumed in academic work) but are rather mixed-up.
2. Teamwork can have different starting points (initial states) encompassed by the following two extremes: Team members do not understand the problem description to a significant degree, therefore no solutions or solution parts are available and team activity starts by focusing on understanding the problem. The opposite extreme occurs when solutions are already devised to some degree by the individuals, hence teamwork focuses on addressing any of the unsolved issues and solution validation by comparing and harmonizing the solutions by different individuals. As explained later in the paper, the team dynamic is very different in the two cases. Moreover, other starting points can exist, positioned between the two extremes.
3. From a semantic (meaning) point of view, the outputs of an individual during team problem solving pertains to three categories: (i) Detailing an existing solution fragment, (ii) abstracting a solution or solution fragment, and (iii) restating an existing fragment. Detailing involves different degrees of added information related to three aspects: (a) input data used to devise and explain the fragment, (b) conditions set for the solution results, and (c) processing steps that form a solution. Abstracting refers to creating descriptions at higher-level of abstractions about the following elements: (a) conditions of the inputs, (b) conditions of the output data, and (c) presentation of the solution procedures and principles. Restating includes producing similar input data, output requirements, and solution steps. Detailing pertains to top-down problem-solving process, and abstraction to a bottom-up process. Detailing, abstraction, and restating belong to the semantic continuum between problem descriptions and solutions.
4. Detailing includes steps on problem description understanding and solution construction with the following features: decomposition of an existing description into new fragments, connection of a new fragment to the rest of the solution, composition of new fragments with each other, adding new variables, capturing data and causal dependencies between variables and fragments, adding new conditions for data and fragments, identification of special and general situations for the solutions and fragments, and identification of new inputs for relevant execution cases.
5. Abstracting refers to problem description understanding and solution development through the following actions: combining local fragments into more overall descriptions, generalizing the outputs obtained for specific data into conditions for solution outputs, and creating higher-level descriptions of a solution or fragment.
**Example**: Figure 1 summarizes the traversal of the problem-solving space as a team attempts to solve the problem. A team of three members was considered. The figure shows the bottom-up activities in red, the top-down steps in blue, and the restating in green. The figure also indicates how the individual activities form broader problem-solving phases, like solution explanation to validate its correctness, changing the solution idea through restating, trial-and-error to correct errors in the code, changing (extending) the solution through reasoning, and restating the solution idea to find an error in the code. This example is also discussed in Subsection 4.2.
In this example, all members attempted to individually solve the problem before meeting in the team setting. However, none of the solutions was complete and correct. In Figure 1, the ordering of the performed steps follows the vertical axis from the top to the bottom of the figure. As each of the members already had some ideas on solving the problem, one member started by describing the role (purpose) of each part (i.e. block) in the solution, such as loop statements, conditionals, and so on. The discussion of the detailed processing refers to specific input data and the produced outputs. This step further leads to a description of the high-level reasoning (idea) behind the code. To further clarify the idea, the description is restated by referring to a counting process as being the main concept of the solution. The next step details the broad concept by adding the precise idea of using a flag variable to achieve a certain sub-goal of the broad computation. Solution detailing attempts to identify the specific place where the flag variable should be added. Repeated attempts are made to find the correct insertion place. As the team is unable to correctly add the flag variable to their code, the discussion shifts to making an analogy with a previously solved problem. The analogy helps creating a new overall description of the solution concept, followed by another detailing step that suggests the conditions that the correct outputs must meet. Next, based on the stated output conditions, the high-level solution is modified to include more details. This description is then related to the actual code, such as which modules are expected to complete specific parts of the overall description. This leads to a new attempt to find the C code location that should be modified to extend the existing code.
As illustrated in Figure 1, problem solving is a complex process, which often does not follow a single predefined sequence of steps. It is unclear at this point what specific situations and cues trigger a specific action by a participant, even though
there are a few empirical observations, like relying on trial-and-error to find errors or going back to broad, high-level statements about the solution principle after unsuccessful debugging attempts. It can be argued that problem solving relies on a mixture of structured and ad-hoc sequences of decisions in an attempt to find, in reasonable time, a correct path through the solution space. The semantic gap has an exponential size, and exhaustively exploring all possible code structures, e.g., through automated program generation, could identify a solution, but the required time would be prohibitively large.
## 3 Related Work
The state-of-the-art in problem solving in teams (or CPS) has been discussed by a number of classical and recent work in psychology, sociology, and organization research [1, 2, 3, 4, 5, 6, 7]. There is a set of distinguishing characteristics of problem solving in teams as compared to other collaborative activities, like learning, decision-making, and judgement [3]. They include having a shared task, i.e. the problem to be solved, the possibility of individual assessment of the degree to which the solution satisfies the problem (solution quality), role differentiation of team members (including role emergency during problem solving), and the interdependency between members depending on their contribution to successful solving [3]. Problem solving in teams shares a number of advantages with the other collaborative activities, like division of work, multiple sources of information, perspectives and experiences, improved evaluation, and individual simulation by other's ideas, as well as some limitations, such as inefficient communication, social loafing, false information propagation, conflicts, and diffusion of responsibility [3].
It has been suggested that problem solving in teams comprises of two entangled layers, (i) the individual layer that includes the emotional and cognitive aspects of each team member, and (ii) the team layer, which refers to the social elements. The two layers are discussed next.
_Individual layer:_ This layer refers to an individual's cognition modulated by emotions during problem solving. There are several well-known models about the cognition of problem solving [47, 48]. Among these, our work follows the idea that problem solving finds a path in the problem space from an initial state to the goals of a problem using operators, like evaluation, transformation, and heuristic steps [4, 41]. This model supports the characterization of entire sequences of cognitive activities (e.g., the problem-solving paths), and making prediction on how individual activities are influenced
by cognitive, social, and emotional factors. Problem spaces are described as semantic networks, in which nodes represent concepts and links describe actions or relations among concepts [49]. Then, problem solving is the activation spreading in small steps through the network [41]. The linkage of subsequent design decisions has been a major topic, like identifying the rationale or creating concurrent representations [41, 50]. Network expansion, like using domain space extension [51], supports modeling problem solving of ill-defined problems, which require solutions beyond the space of the currently known solutions. Expanding graph-based knowledge representations offers the benefit of expressing features and relations of concepts, which are analogous to properties and connections of building blocks in engineering design [43, 52, 53, 54].

Figure 1: Example of problem solving dynamic in a team
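As a rough illustration of the activation-spreading view of problem spaces mentioned above, the toy sketch below propagates activation from a source concept through a small hand-made semantic network; the graph, decay factor, and number of steps are invented for illustration and are not part of the cited models.

```python
# Toy semantic network: concept -> related concepts (invented example).
graph = {
    "read file": ["parse line", "loop over lines"],
    "parse line": ["split fields", "convert to number"],
    "loop over lines": ["count per date"],
    "convert to number": ["detect corrupt value"],
    "count per date": ["find maximum"],
    "split fields": [],
    "detect corrupt value": ["count per date"],
    "find maximum": [],
}

def spread_activation(graph, source, steps=3, decay=0.6):
    """Propagate activation from `source` in small steps through the network."""
    act = {node: 0.0 for node in graph}
    act[source] = 1.0
    for _ in range(steps):
        new = dict(act)
        for node, neighbours in graph.items():
            for nb in neighbours:
                new[nb] = max(new[nb], decay * act[node])   # pass damped activation
        act = new
    return sorted(act.items(), key=lambda kv: -kv[1])

print(spread_activation(graph, "read file"))
```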
Identifying the problem space representation that reflects the mental (internal) representations is not trivial. Cognitive Task Analysis has been devised in an attempt to characterize the cognitive activities during problem solving based on observations and interviews, process tracing, conceptual techniques, and formal models [55, 56]. The problem solving activities are often studied through verbalization [41, 57, 58, 59]. For example in [41], the sequence of design steps and their argument are presented for the design of a bicycle rack. The steps used in solving math equations is described in [59]. Even though external solution representations, e.g., sketches, drawings, code, formulas, proofs, verbalization, etc. and other symbolic representations reflect to some degree the internal solution images [41], there are also important mismatches between the two [40, 60]. Related work mainly focuses on mismatches, e.g., physical attributes or negation [60], but there is a much richer set of features that distinguish a concept from its physical realization that operates in a physical setting. This mismatch can have a significant impact on the effectiveness of problem solving, still, cognitive theories have rarely addressed it [40]. Experiments also show that multiple solution representation are created during problem solving [41, 61]. For example, figural representation express implementation details while conceptual representations refer to the broader goals and requirements. [41] argues that multiple representations convey different kind of information, help memorizing and clarify ambiguities, support interpretation, describe solution transformation, modification and reformulation, and express conflicts and other challenges.
Studies show that working memory (WM) is a central component in explaining problem solving effectiveness [62, 63, 64]. As explained in [59], "WM is a short-term memory system involved in the control, regulation, and active maintenance of a limited amount of information with immediate relevance to the task at hand." A similar definition is proposed in [62]. WM capacity (WMC) has been shown to be a reliable parameter in discussing other cognitive behaviors, like comprehension and reasoning [64]. WMC is around 3-5 chunks [63, 65]. Individual's WMC are assessed through span tasks, like reading, counting, and operation span [66, 57, 62]. Span tasks require recalling shown letters or words during the problem-solving process, like deciding if a sentence is semantically correct or solving math exercises. Research also focused on the impact of WMC on problem solving under a range affect conditions [62, 63, 66], including reward [67] and work pressure [59]. Experiments show that higher WMC correlates to better problem solving under positive or neutral affect conditions, however, WMC is less important in the case of negative affect situations, as any additional WM capacity is likely used to process negative affects [62, 67]. Dopamine and dopamine paths seem to cause the correlation to positive affect [67, 68], moods, and emotions [69]. Similarly, a high WMC can maintain and manipulate more features related to the problem in low pressure situations [59]. However, in high pressure situations, individuals with both high or low WMC relied on shortcuts to solve a problem. WMC is important for story telling the elements (e.g., ideas, concepts) essential to conducting a cognitive activity. For example, as Conway explains for paragraph understanding, creating an integrated image (e.g., stored in a WM chunk) can be realized only if one concurrently holds in mind images about the major premise, the main meaning of the previous sentence, and a fact or opinion discussed in the current sentence [63]. A similar observation was suggested for math problem solving too [65]. It has been argued that problem-solving effectiveness depends on the nature of the information stored in the limited WM space, such as shallow vs. deep features, distractions, or sensory data [63]. Guiding attention to certain features, which are then stored in WMC, was suggested to be significant in problem solving [63].
Work also shows that affect has an important role in the cognitive activities in problem solving, like goal setting, decision making, and memory recall [67, 22, 70]. For example, positive affect influences the balance between flexibility, e.g., updating WM and switching goals and overcoming fixation [28, 29, 70], vs. perseverance, i.e. maintaining goals in the presence of distraction, irrelevant answers, and other cues [22]. This balance is critical in problem solving, as successful solving involves both considering alternatives and pursuing a certain choice. Affect also changes verbal fluency [71], helps activating remote associations in memory and producing unusual associations [72], influences the balance between heuristic vs. analytic decision-making [73], categorization [74], and aids implicit judgments of semantic coherence [18]. It also influences social categorization [75] and job satisfaction [19]. Overviews of computational models for emotions are offered in [76, 77].
_Team layer_: As discussed in Joint Action Theory [14, 15, 16, 1], interactions coordinate members through synchronization, entrainment, alignment, and convergence [78], and depend on the conversational context and the team members' intentions and features [79]. For example, affiliative conversations have different interaction characteristics than argumentative interactions. Abney et al. explain that coordination is a complex process that occurs at different time-scales and hierarchical levels [78]. A complex system behavior emerges because of explicit and implicit matching at the phonetic, phonologic, lexical, syntactic, semantic, and situational levels. However, member coordination goes beyond speech attributes. Common ground knowledge, shared visual information, and beliefs about the other team members influence postural and gaze coordination [80]. Affective valuation is also important during interaction [81, 14], such as different interpretations of the external stimuli and distinct expectations for the outputs of problem solving. Members must be committed to participating in the team effort. A detailed overview of related models and work is offered in [1].
Related models in psychology and sociology propose different scenarios for interactions in a team [1]. "Visual worlds", arguably the simplest model, assumes that team members follow an active - passive participant approach, with no changing initiative and little coordination between members [82]. The message model states that communication is a probabilistic information flow at a certain rate, during which the sender and receiver must employ the same encoding and decoding of the "packets of meaning" represented by words [83]. Social interactions are less important, as there is no coordination, intention recognition, role taking, or matching between the communicated details and the shared common perspective [1]. In contrast, social aspects are addressed in the two-stage model that focuses on lexical entrainment, shared perspective, and reuse of syntactic forms. Fowler et al. indicate that parsing and speaking durations, speaking rates, turn durations, response latencies, vocal intensities, and accents depend on coordination [84]. The interactive alignment model considers members' perspective adjustment and creation of mental models about other team members [85, 86]. Lowe et al. propose an extension to Associative Two-Process (ATP) theory to include social cues and affective states [14]. The model uses temporal difference learning to express expectations and to include social cues. Learning considers the magnitude and omission of rewards as well as the temporal difference between stimulus and learned outcome. The work argues not only for the importance of a member's intentional behavior towards a goal but also for the need that an agent learns the other's behavior and then adjusts accordingly. Finally, grounding models emphasize the social, collaborative view, i.e. coordination of meaning, observance of each other, and creation of mental models about others [1, 80].
## 4 Proposed Modeling Methodology
This section discusses the proposed modeling methodology to describe problem understanding and solving in teams. The methodology must capture any team activity flow originating from different starting conditions, like having individually completed solutions, individually devised solution fragments, or no available solutions. Besides the overall methodology, the section presents the ten related activities and the overall modeling algorithm.
Figure 2: Conceptual model for problem understanding and solving
### Modeling Individual Problem Solving
Problem descriptions include functional (processing) and data requirements, including output values that are expected to result for specific inputs. Unknowns and ambiguities can be part of a problem description. High-level solution descriptions are mixtures of four kinds of description features: (a) a sequence of actions (e.g., processing), (b) a set of goals that must be achieved, (c) a set of requirements for the outputs (i.e. logic conditions that they should meet), and (d) a set of inputs and corresponding outputs expected for the solutions. It is unclear what parameters influence the mixture of the four feature types. The purpose of the process is to devise a solution that minimizes the difference between the requirements of the problem description and the features of a devised solution.
The model for individual problem solving includes the multiple images used in solving (e.g., the working memory), the nature of knowledge representation (i.e. long-term memory), the individual's goals and expectations about being able to solve the problem and the resulting outcomes, and the sequence of activities over time as part of problem solving. Figure 2 illustrates the proposed conceptual model for an individual's problem understanding and solving activities. The flow considers two mental images, an image of the problem description and one of the solution. Both images can be mixtures of the above features (a)-(d). The problem image results from the problem understanding activity. The solution image is created by solution analysis. Changes of the problem image are described by the pair \(\Delta^{\prime},\mu^{\prime}\), where the first term indicates the change and the second represents the meaning of the change. The changes of the solution image are shown as \(\Delta,\mu\). A solution is successfully created when \(\Delta,\mu\approx\Delta^{\prime},\mu^{\prime}\approx 0\). The difference between the two images is established by the RMDCP (Recall - Match - Difference - Combine - Predict) process, which includes five steps: recalling the two images from memory, matching the images to relate their similar parts, finding the difference between the two, combining the difference with the matched parts, and predicting the meaning of the difference in the context of the common matched parts. A bottom-up RMDCP process starting from the image of an existing solution identifies the difference \(\Delta^{\prime},\mu^{\prime}\) of the problem image, and a top-down RMDCP process starting from the current image of the problem finds the difference \(\Delta,\mu\) of the solution image. The RMDCP process is executed in the context of the related experience (i.e. existing knowledge) of the member.
A main feature of the model in Figure 2 is the use of two images connected by the two RMDCPs. Having two images offers the following benefits: (i) The image of the problem can serve as a reference to set goals for problem solving. It has invariant requirements (even though ambiguities and unspecified elements can be included). Moreover, the image should be abstract enough to support exploration during problem solving. (ii) The matching and difference computation between the two images serves to guide decision making during problem solving, such as by driving \(\Delta,\mu\approx\Delta^{\prime},\mu^{\prime}\approx 0\), or by maximizing the distance between \(\Delta,\mu\) and \(\Delta^{\prime},\mu^{\prime}\) [27]. (iii) Matching, difference, and prediction between the two images support decomposing goals into sub-goals to decide what should be solved next, and also predicting causality, such as how different solution parts contribute to achieving the problem requirements. Goal decomposition and causality understanding support setting priorities among sub-goals and causal relations, hence guiding decision making during the solving process. They also allow defining the difference between the expected and the observed effect of a solution fragment. Finally, the two RMDCPs can express the following two situations: (a) If they continuously relate the two images, then they mimic the subconscious activity that finally creates the feeling that something is right or not, or possibly express sudden insight. (b) They can represent the conscious activity when a member directs his/her attention to finding the differences between the current solution and the problem requirements.
The description of the resulting differences \(\Delta,\mu\) of the top-down RMDCP (which relate the solution to the problem requirements) becomes the current sub-goal of solution design. The solution is then executed. The solution and the execution behavior (like the outputs generated for specific inputs) are inputs to the solution analysis activity. Solution analysis determines differences \(\Delta,\mu\) between the expected and observed execution behavior. The differences represent missing or incorrect parts of the solution. The differences modify the mental images of the solution. The analysis can involve reasoning based on the code, such as mentally executing the code. Alternatively, analysis can be based on the data outputs obtained by executing the code for specific data inputs. Reasoning attempts to identify the required change \(\Delta,\mu\) to the solution image by following backwards the causal sequence that created the unwanted result. Data-based analysis obtains changes \(\Delta,\mu\) by generalizing the input - output behavior as a rule that creates this behavior, and then combining it with the solution image. Hence, the mental images of a solution, solution design and execution, and analysis form a bottom-up flow that can involve backwards reasoning about the processing steps along the solution's causal relations, and using the differences between the expected and observed inputs and outputs to create the processing rules of the solution.
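The interplay of the two images and the RMDCP comparisons can also be sketched programmatically. The following C fragment is a minimal, purely illustrative sketch under strong simplifying assumptions: mental images are reduced to flat sets of feature strings, the "meaning" \(\mu\) of a difference is just the list of unmatched features, and the names `Image`, `rmdcp_diff`, and `combine` are hypothetical labels introduced here, not part of any existing implementation.

```c
#include <stdio.h>
#include <string.h>

#define MAX_FEATURES 16

/* A mental image reduced to a flat set of feature strings. */
typedef struct {
    const char *features[MAX_FEATURES];
    int count;
} Image;

/* Difference step of RMDCP: features of 'a' that find no match in 'b'. */
static int rmdcp_diff(const Image *a, const Image *b, const char *delta[]) {
    int n = 0;
    for (int i = 0; i < a->count; i++) {
        int matched = 0;
        for (int j = 0; j < b->count; j++)
            if (strcmp(a->features[i], b->features[j]) == 0) matched = 1;
        if (!matched) delta[n++] = a->features[i];   /* unmatched feature */
    }
    return n;                                        /* n == 0 means the images agree */
}

/* Combine step: copy the missing features into the solution image. */
static void combine(Image *solution, const char *delta[], int n) {
    for (int i = 0; i < n && solution->count < MAX_FEATURES; i++)
        solution->features[solution->count++] = delta[i];
}

int main(void) {
    Image problem  = {{"read array", "find contiguous sub-array", "maximize sum"}, 3};
    Image solution = {{"read array"}, 1};
    const char *delta[MAX_FEATURES];

    /* Top-down RMDCP loop: repeat until no requirement is left unmatched. */
    int n;
    while ((n = rmdcp_diff(&problem, &solution, delta)) > 0) {
        printf("next sub-goal: %s\n", delta[0]);
        combine(&solution, delta, n);                /* detail the solution image */
    }
    printf("images match: solution covers all %d requirements\n", problem.count);
    return 0;
}
```

The loop terminates when no requirement of the problem image is left unmatched, i.e. the analogue of \(\Delta,\mu\approx 0\) in this simplified setting.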
Figure 3 depicts the proposed model to describe problem understanding and solving in a team. A high-level solution
description of the main idea on how to solve the problem is initially created based on a set of cues selected from the problem description. Alternatively, the process can iterate to refine the solution idea by comparing it with the problem description to expose additional cues that are incorporated into the idea. Another path to create the solution idea is based on analogy with previously solved problems. After relating the new problem description with previously solved, similar problems, an idea for solving the problem is found. This idea is changed to incorporate cues about the differences between the current and previous problems. The relating to previous problems can iterate to identify new cues.
A new, more detailed description of the solution is devised starting from the high-level solution idea. The iterative detailing process includes three activities, as shown in Figure 3: (i) localizing the place where detailing is performed, (ii) identifying the change that is performed on the higher-level description, and (iii) combining the change with the current solution. This process continues until the detailing changes can be transposed into the code of the solution.
After creating or modifying the code, code analysis identifies errors in the code, such as the problem requirements that are not covered or incorrectly realized. It includes identifying the problem requirements that are not covered by the present code and the inputs - outputs for these code fragments. This analysis can continue with the sequence that starts with localizing the corresponding solution detailing (in Figure 3) or identifying characteristics for the inputs and outputs that could address the errors, like the expected conditions of the outputs. The identified data features are then combined with the higher-level description. The data features could also lead to changing the analogies used in problem solving.
### Modeling Team Behavior
There can be a large variety of problem-solving flows, e.g., activity sequences, but flows can be grouped into three main situations depending on the initial status of a team: (a) when solutions were individually developed before the team activity started, (b) when there was no available solution before the beginning of teamwork, and (c) when there were individually-developed solution fragments before the team started to work together, but these fragments did not form a complete solution. The three cases were the only main situations observed in our experiments, even though other situations showed variations of the three cases. The three cases are discussed next followed by details about the team problem-solving model.
_(a) Solutions were individually developed before team activity starts_: The main goal of the team activity in this case is to integrate the individual solutions into an improved, joint solution for the conjointly accepted problem description, including addressing any misunderstandings, inefficiencies, and errors that the initial solutions might have. Teamwork includes activities to explain the solutions to the other members, understand someone else's solution, analyze, and compare different solutions to understand their advantages and limitations, combine ideas and fragments of ideas, understand the
Figure 3: Problem understanding and solving dynamics across the semantic space
improvements of the joint solutions compared to the individual solutions, and relate the individual and joint solutions to the problem description. These activities might have to be performed in a context with little incentive to do them, as solutions already exist for the problem. In summary, as shown in Figure 3, the flow emphasizes bottom-up activities, like explaining the solutions to others, understanding the solutions of others, comparing solutions to understand their pros and cons, and combining useful solution features.
_(b) There was no available solution before the beginning of teamwork_: The main goal of the team activity is to jointly clarify any errors, unknowns, and ambiguities that individuals have about the problem description and solution ideas, to figure out what these are for each individual, and to stitch together the individual contributions into a correct high-level solution idea and then to correctly detail it until creating the implementation code. Hence, problem solving involves problem decomposition into sub-goals and sub-problems, so that they can be tackled by individuals. It is important to understand the contributions and limits of one's problem understanding and solution, e.g., the uncovered parts of a problem description and the missing elements in a solution idea as compared to the problem description, and how someone else's idea can be combined to advance towards building a complete and correct solution. Coming up with an overall solution idea based on the individual contributions is also part of the process. Individuals learn new information from others and must integrate it with their own in an attempt to create a complete, higher-level solution idea. Individuals' incentives to solve the problems are high as there is no solution idea available. The problem-solving flow in Figure 3 emphasizes top-down activities, like decomposing into sub-goals and sub-problems, adjusting them to the knowledge of the participants, and identifying the missing solution fragments that can be solved by individuals.
_(c) There were individually developed solution fragments before the team starts to work together, but these fragments did not form a complete solution_: The main goal of the team activity is to identify the missing solution parts, to clarify ambiguities and unknowns in the problem description, and to correct errors of the individual, partial solutions. Teamwork is based on explaining solution fragments and understanding the explanations and code. Identifying the missing parts requires localizing problem solving by detailing the existing solutions, identifying the required changes, and combining the changes with the existing solutions. Correcting errors requires finding their causes through analysis. As high-level descriptions of solutions are likely to exist, problem decomposition into sub-goals and sub-problems is less important than in Case (b). Incentives to solve the problems are moderately high. The flow in Figure 3 is a mixture of bottom-up and top-down activities guided by identified missing parts, which become sub-goals for solving.
Problem solving in a team involves a sequence of cognitive activities performed under individual incentives that are influenced by the team setting. The cognitive activities include an individual's (i) images used in problem solving (e.g., WM), (ii) the knowledge representations (i.e. long-term memory), (iii) the meaning associated with concepts and other outcomes, and (iv) the decision making that determines the problem-solving steps over time. (v) Individual expectations modulate the cognitive activities. The team environment includes (vi) the level of agreement between team members' images, knowledge representations, and their meanings. It also refers to (vii) a member's flexibility to address the needs of other members as well as (viii) the expectation about his/her participation in the team's activity.
A team's effectiveness in the three problem-solving scenarios depends on the following factors specific to each activity:
1. _Problem understanding_: The factors describe the degree to which the **requirements** of the problem description and their associated **cues** are correctly and completely considered by a team, and the consistency of the formed mental images of each member (Figure 2). The factors include the cues from the problem description that are considered during the development of the high-level solution descriptions (e.g., the completeness of the cue set, cue ordering based on their importance in solving the problem, forgotten cues), the way in which the cues in higher-level descriptions are further tackled during detailing, and the nature of used analogies, including correct and incorrect associations between previous and current problems. The factors also address the degree to which team members agree on the way in which the cues are selected, interpreted, and used to devise the solution ideas across levels of abstraction, including the levels of agreement and disagreement and the importance assigned to cues (e.g., attention). Besides, it presents how considering some cues further influences the inclusion of new cues and cues not in the problem description. Other factors refer to participation, like adaptively raising questions about problem requirements and giving responses, and flexibility in considering alternative interpretations of the requirements.
2. _Solution understanding_: Similarly to problem understanding, it refers to the degree to which a solution description is correctly and completely understood by the team members, and the consistency of the related mental images formed by the team members, like the mental images of the **desired** and **existing solutions** (Figure 2). Note that the mental images of the desired and existing solutions might only partially **match**, as inconsistencies can occur
during encoding in a programming language. Multiple solution descriptions can exist at various abstraction levels. The factors include the degree of understanding of the various solution facets, like the processing flows and their causal influences, the conditions controlling the flow, the handling of special situations, the connections between the higher-level and more detailed descriptions, such as the way in which higher-level parts (including analogies with similar exercises) are realized by lower-level constructs, and the opposite relation between more detailed and more abstract descriptions during abstraction. It also includes the degree to which the individual understandings of the different solution parts are similar or differ. Participation represents a member's engagement in creating a solution, even when the existing solution is not fully understood, as it might have been developed by others. Flexibility describes an individual's willingness to pursue other solution ideas than his/her own.
3. _Problem explanation_: The factors relate to a team member's availability and capability **to restate** the requirements of a problem in response to and in order to satisfy other members' **needs**, like their questions, comments, and doubts. Explanations include alternative ways of stating the requirements, logic conditions of the results, and expected outputs for given inputs. They can refer to unspecified or ambiguous requirements too. Participation is important, including the availability to adjust explanations depending on expressed or implied needs. Flexibility describes the adjustment of the responses about the problem requirements depending on the specific needs.
4. _Solution explanation_: The factors refer to a team member's availability and capability **to describe** the solution ideas at different levels of detailing, including any level between the high-level and code level, depending on the degree to which the explanations **match** the other members' solution understanding **needs**. They include the variety of description kinds used at various levels of abstraction, like sequences of processing steps, required conditions of the outputs, and relevant inputs and expected outputs. Participation refers to the availability to offer and adjust solution explanations depending on expressed or implied needs. Flexibility describes explanation adjustment based on the needs.
5. _Comparison of problem and solution descriptions_: As shown in Figure 2, it includes the **RMDCPs** between the mental images of the problem and the desired and expected solutions. The factors include the nature and degree of recall from memory of the features of an image due to the features of the other image, the degree of correct and complete matching of the features of the two mental images, the specifics of the identification of the differences between the two images, the way in which the found differences are integrated with images of previous, similar problems, and the characteristics of the predictions made after integration. The object of participation can be extremely broad, covering all features of a problem description, the high-level and the detailed solutions. Flexibility refers to the possible interpretations of the comparison results, including the importance of similarities and differences.
6. _Solution analysis to understand its pros and cons_: Similarly to the previous item, the factors refer to the **RMDCPs** between the mental images of the expected behavior of a solution, the image of the observed behavior, and the image of the causality for the difference between the two (Figure 2). The factors include the correctness and completeness of the expected execution traces, output properties, and input - output values, as well as the completeness of the traces, properties, and values of the observed behavior. The correctness, precision, and completeness to which the differences are related to the desired and existing solution images are additional factors too. Participation can be broad, especially to compare the expected and observed behavior. Flexibility is reduced, as the required input - output behavior is fixed by the problem description. The causality of the differences is also fixed by the nature of the solution, even though completely understanding the causality might be hard for complex solutions. Hence, flexibility can include a participant's willingness to accept a presented causality explanation, even if the explanation is not fully understood.
7. _Identification of the missing solution fragments_: The features refer to the process of **decomposing** the problem requirements by the **RMDCPs** between the mental images of the problem description, the desired solutions, and existing solutions (Figure 2). The features describe missing processing, sub-goals and requirements for the missing parts, and inputs - outputs that should be produced by adding the fragments. They also refer to the completeness, correctness, and degree of detail of the missing parts and their interactions with the parts of an existing solution. Participation includes not only sub-goal and missing fragment identification but also verifying that they reflect the current solution needs well.
8. _Identification of the required changes_: The corresponding features relate to the **reasoning** used to find how to address the differences between the images of the desired solutions and existing solutions (Figure 2). The RMDCPs predict the outcomes of the differences, and this insight is used to formulate **hypotheses** about the changes that would address the differences, including the necessary detailing of a specific solution part and the abstraction of a part
into a higher-level description. The step identifies the places (parts) of an existing solution that must be changed to address the new sub-goals, missing parts, and required modifications (Figure 3). Besides, the features describe the degree to which the changes are based on analysis results, like the causality of the differences between the behaviors of the desired and existing solutions, including reasoning, unaddressed situations, and unwanted input - output values. Participation refers to formulating the changes, but also to understanding the correctness of the changes. Flexibility refers to the willingness to accept the change suggestions of others.
9. _Combining the identified changes with the current solution_: The features refer to **combining** the found changes with the images of desired and existing solutions, and possibly with the image of the problem requirements too, e.g., when ambiguities and unknowns of the requirements are addressed (Figure 2). The meaning of the combination is inferred. The features consider the degree to which the behavior of the changes (e.g., changes in processing, and data values and properties) is combined with the behavior of the rest of the solution (Figure 3) and the degree to which the combination is consistent across the different mental images (Figure 2). Participation in identifying all aspects of a combination is important, like the impact on all related variables and processing steps. Flexibility refers to the degree of acceptance of combination alternatives different from the one proposed by a member.
10. _Flow of the problem solving activities_: Features describe the **trace** towards devising a solution, e.g., a working program. The flow of activities in Figures 2 and 3 describes how parameters about WM, LTM, participation, and engagement end up relating the current and previous activities, such as having a sequence of solution detailing activities or switching from solution detailing to analysis. Features present how cues in previous answers, and social and emotional interactions, influence the advancement towards the final solution, the breadth and depth of the explored solution space, and the overlapping of the solution ideas of the individuals.
**Example**: Figure 4 summarizes a sample of team discussions for a situation in which there were individually developed solution fragments before teamwork commenced, but the fragments did not form a complete solution (Case (c) in Section 4.2). The tackled problem required finding the contiguous sub-array with the largest sum in a given array with a maximum length of thirty values. The figure shows the inputs produced during problem solving (labeled with letter _I_), their role in problem solving, and the related activity performed by a team member.
Figure 4: Team discussions during problem solving
As each team member individually devised fragments of the overall solution, teamwork started with solution explanation by the members (inputs \(I_{1}\) - \(I_{3}\)). All were high-level descriptions of an envisioned solution (mental image of the desired solution in Figure 2). Note that input \(I_{1}\) was part of the dialog while the team was still discussing the problem description to clarify some doubts (e.g., problem understanding), hence the solving activities were intermingled and not well separated into successive phases. Idea \(I_{4}\) is an analysis of the previous description (idea \(I_{3}\)) to communicate a feature of the idea, such as having many loops to implement the idea (i.e. mental image of expected behavior and solution analysis to find pros and cons). Idea \(I_{5}\) is another high-level description (e.g., mental image of desired solution), which, as explained in the next example, has a semantic relatedness to the previous inputs \(I_{1}\) - \(I_{3}\). Input \(I_{6}\) brings additional details by identifying required changes to the description. However, there is a degree of uncertainty about which of the ideas the detailing applies to best, e.g., further clarifying the fragment _all combinations_ of idea \(I_{3}\) or referring to _length of entire_ in idea \(I_{5}\). Also, the specific set of words to which it refers is unclear. The uncertainty of the connection is shown with dashed arrows in the figure. Idea \(I_{7}\) restates the meaning of the previous idea \(I_{6}\) by describing one way of implementing the requirement. However, it does not clarify any of the unknowns of idea \(I_{6}\). Input \(I_{8}\) introduces additional ambiguities. Besides the uncertainty about the previous ideas to which it connects, like ideas \(I_{3}\) or \(I_{5}\), its role can be ambiguous too, as it can indicate a concern to be addressed through analysis or identify a missing fragment to be addressed by subsequent solving steps. Similar unknowns exist for idea \(I_{9}\) too. Idea \(I_{10}\) indicates a required change to the description to add more details (mental image of needed changes in Figure 2). This step clarifies that the previous idea \(I_{9}\) represents a missing fragment of the solution and not an analysis feature, as it could result from interpreting the idea in isolation. Idea \(I_{11}\) presents another required change of the description, but it is unclear if it relates to idea \(I_{5}\) or \(I_{9}\). Idea \(I_{12}\) adds a new change related to idea \(I_{9}\) (mental images of expected behavior and needed change), thus reinforcing that the latter is a required change too. Idea \(I_{13}\) is an analysis of the discussed solution, but it is unknown to which part of the solution it connects. Idea \(I_{14}\) introduces another required change in relation to idea \(I_{5}\). Idea \(I_{15}\) is an analysis of the high-level idea \(I_{5}\) as suggested by the words _brute force_, which designate the meaning of idea \(I_{5}\). The problem solving in Figure 4 continues with idea \(I_{16}\), which proposes a different solving approach than ideas \(I_{1}\) - \(I_{3}\) and \(I_{5}\). It relies on using an analogy (e.g., bubble sort) with a previous exercise. Input \(I_{17}\) is an analysis of the previous idea. Input \(I_{18}\) introduces needed changes to the solution in \(I_{16}\). Finally, input \(I_{19}\) is an analysis of the detailing introduced by input \(I_{18}\).
Discussions during problem solving mainly focused on aspects related to the mental images of the desired solutions, expected behavior, and needed changes, and less on the images of the existing solution, observed behavior, causality of differences between expected and observed behaviors, problem description, and needed changes in the problem description. Participation mainly consisted of adding details to the desired solution and stating some of its cons, but these details were not integrated with each other or with the rest of the solution. There was little focus on discussing the missing parts, as there seemed to be little attempt to connect the desired solution to the problem description. Without reference to the existing solution and its observed behavior, some ideas (like using bubble sort, \(I_{16}\), or the additions through \(I_{11}\), \(I_{12}\) and \(I_{18}\)) are hard to assess in terms of their usefulness. Participation could have been increased by focusing more on the observed behavior and understanding the causality of the differences between the expected and observed behaviors. Regarding flexibility, the team considered two high-level solution alternatives, \(I_{5}\) and \(I_{16}\), but the degree to which they shaped the individual solution contributions is unclear. Encouraging the creation of complete solutions along different high-level ideas would likely give more insight into individual flexibility.
The inputs to the problem solving process pertain to two orthogonal dimensions: the solution space spanned by the high-level descriptions, i.e. the breadth of ideas described by inputs \(I_{1}\), \(I_{2}\), \(I_{3}\), \(I_{5}\), and \(I_{16}\), and the solution space described by the details added to the high-level descriptions, such as the specification in depth through identification of missing fragments and required changes. The next example discusses the idea breadth for this team problem-solving example.
Figure 5: Meanings of the descriptions of the alternative high-level solutions
**Example**: Figure 5 summarizes the meanings of the five descriptions of the high-level solution ideas produced by the team. Each description was expressed by indicating the verb (e.g., actions), the object of the action (i.e. who), the associated goals and outputs, and any related properties of the nouns. In addition, idea \(I_{3}\) includes a noun (e.g., brute-force) describing an identity (i.e. what). The relatedness of the descriptions can be characterized by the maximum degree of matching between verbs and nouns of similar constructs, like what, where, who, goals, outputs, etc. Some of the components were missing in the inputs, and were marked with letter \(X\), like the output component of input \(I_{2}\). Colored arrows show the matched words in different inputs. For example, the blue arrows show the matching of word _sum_ in input \(I_{1}\) and word _add-up_ in input \(I_{2}\) and that of word _larger_ in the two inputs.
The figure illustrates the matchings between consecutive descriptions, even though similar matching can be attempted between any two inputs \(I_{i}\) and \(I_{j}\). The matching of descriptions \(I_{1}\) and \(I_{2}\) shows that the parts connected through blue arrows express the same meaning in two different structural forms: description \(I_{1}\) combines two actions (verb _find_ followed by verb _sum_) to create an output, and description \(I_{2}\) uses a sequence of two actions, in which the second action (_see_) is the goal of the first action (_add-up_). In addition, verb _compare_ in idea \(I_{1}\) and sub-structure _all numbers_ in idea \(I_{2}\) are unmatched. The matching of ideas \(I_{2}\) and \(I_{3}\) shows the similarity due to action _add-up_ in both ideas, as well as the semantic relatedness between fragment _all numbers_ in idea \(I_{2}\) and sub-structure _all combine (combinations)_ in idea \(I_{3}\). Note that the two are not semantically equivalent, as idea \(I_{3}\) introduces an additional action through verb _combine_, thus it is unmatched in idea \(I_{2}\). Idea \(I_{3}\) refers to _brute force_ to introduce an analogy with similar problems and a way to create the context in which the idea is presented. The matching between ideas \(I_{3}\) and \(I_{5}\) is similar to the matching of ideas \(I_{1}\) and \(I_{2}\), as ideas \(I_{1}\) and \(I_{5}\) have similar structures as do ideas \(I_{2}\) and \(I_{3}\). Ideas \(I_{1}\), \(I_{2}\), and \(I_{5}\) have similar structures that express the goal of an action though a verb. Moreover, the semantics of the sub-structure _all numbers_ in idea \(I_{2}\) is similar to the sub-structure _entire length_ in idea \(I_{5}\) and _all_ in idea \(I_{3}\). Finally, the matching of ideas \(I_{5}\) and \(I_{16}\) includes the words connected through red arrows, while similar to idea \(I_{3}\), _bubble sort_ introduces an analogy with previous solutions. Still, the strict meaning of ideas \(I_{5}\) and \(I_{16}\) is different, as word _largest_ is unmatched in idea \(I_{5}\). Figure 5 depicts labeled with _All_ the high-level description that includes all sub-fragments of the five ideas.
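In principle, the manual matching of Figure 5 could be approximated computationally, in line with the computational methods anticipated in the conclusions. The C fragment below is a rough sketch under obvious simplifications: each idea is reduced to a bag of whitespace-separated tokens, relatedness is just the count of shared tokens, and the two strings are paraphrased stand-ins rather than verbatim transcripts; synonym pairs such as _sum_ / _add-up_, which the manual analysis does capture, are ignored.

```c
#include <stdio.h>
#include <string.h>

#define MAX_TOK 32

/* Split a writable string into whitespace-separated tokens. */
static int tokenize(char *s, char *tok[], int max) {
    int n = 0;
    for (char *t = strtok(s, " "); t && n < max; t = strtok(NULL, " "))
        tok[n++] = t;
    return n;
}

/* Count tokens of description a that also occur in description b. */
static int shared_tokens(const char *a, const char *b) {
    char bufa[256], bufb[256];
    char *ta[MAX_TOK], *tb[MAX_TOK];
    snprintf(bufa, sizeof bufa, "%s", a);
    snprintf(bufb, sizeof bufb, "%s", b);
    int na = tokenize(bufa, ta, MAX_TOK);
    int nb = tokenize(bufb, tb, MAX_TOK);
    int shared = 0;
    for (int i = 0; i < na; i++)
        for (int j = 0; j < nb; j++)
            if (strcmp(ta[i], tb[j]) == 0) { shared++; break; }
    return shared;
}

int main(void) {
    /* Paraphrased stand-ins for ideas I1 and I5, not verbatim transcripts. */
    const char *i1 = "find the sub-array with the largest sum";
    const char *i5 = "check the sum of every sub-array over the entire length";
    printf("shared tokens: %d\n", shared_tokens(i1, i5));
    return 0;
}
```

Such a token count is only a lower bound on the relatedness shown by the colored arrows in Figure 5, since structural and semantic matches across different surface forms are missed.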
As shown in Figure 2, detailing high-level ideas used RMDCP steps between mental images to identify differences between desired and existing solution images, identify the details to be added, combine the changes with the current solution, and predict expected outcomes, like pros / cons of the change and comparison to related solutions, including analogies with previously solved problems. The added details are required properties of the variables, requirements of a certain part of the solution, and computations to implement these requirements. They are localized, as they did not change the image of the overall solution idea. It is possible that the meaning of the original and detailed descriptions were not the same, like when new data properties or requirements were added. The added properties and requirements acted as a filter to eliminate options that were not part of a correct solution, an observation that can be used to support the correctness of detailing. From this perspective, a high-level description is like a template (thread) along which detailing is performed, but without changing the template.
**Example**: Figure 6(b) shows the detailing of the high-level idea \(I_{5}\) in Figure 5. Figure 6(a) presents a correct solution of the problem. Idea \(I_{6}\) adds details to the fragment _entire length_ of idea \(I_{5}\). Ideas \(I_{5}\) and \(I_{6}\) are semantically not equivalent,
Figure 6: (a) Complete solution and (b) team solution after detailing steps
as input \(I_{6}\) describes a change of variable _length_. Even though idea \(I_{6}\) is not a correct detailing of how variable _length_ should be handled (as shown in Figure 6(a)), it is a step forward in the solving process, as it correctly identifies a missing solution part. Input \(I_{7}\) is a semantically equivalent restating of idea \(I_{6}\), but establishing the equivalence is based on observing the equivalence of executing the two operations, _minus_ one and _shift_ left (assuming that shifting is over one position of the array). Idea \(I_{8}\) also refers to fragment _entire length_ of input \(I_{5}\) or to fragment _length minus 1_ of idea \(I_{6}\). Moreover, it can represent an analysis of the solution, or the identification of a missing solution fragment by stating a requirement, like counting the possible combinations of sub-arrays of the overall array. The addition to inspect all sub-arrays is a correct step towards solving the problem (see Figure 6(a)); however, the action (i.e. verb _count_) is incorrect. Similarly, input \(I_{9}\) can relate to ideas \(I_{5}\) or \(I_{8}\), and describe an analysis or a missing fragment. Idea \(I_{10}\) restates a fragment of idea \(I_{6}\), and defines idea \(I_{9}\) as having the role of identifying a missing solution fragment. Input \(I_{11}\) is ambiguous: its action (e.g., _smack_) is undefined with respect to its object (who) and the previous idea it relates to. Input \(I_{12}\) adds a required change related to noun _length_ of input \(I_{9}\). Input \(I_{14}\) introduces a required change that relates to input \(I_{5}\), either its sub-structure _entire length_ or its action _check_. Figure 6(a) indicates the solution fragment that is missing from the solution devised by the team.
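For reference, one correct way to solve the exercise is a brute-force enumeration of all contiguous sub-arrays, in the spirit of the team's idea \(I_{5}\). The C sketch below is an illustration only and does not reproduce the code of Figure 6(a); the example input values are hypothetical.

```c
#include <stdio.h>

/* Brute force: examine every contiguous sub-array and keep the largest sum. */
int main(void) {
    int a[30] = {-2, 1, -3, 4, -1, 2, 1, -5, 4};   /* example input, n <= 30 */
    int n = 9;

    int best = a[0], best_start = 0, best_end = 0;
    for (int start = 0; start < n; start++) {       /* shift the starting point */
        int sum = 0;
        for (int end = start; end < n; end++) {     /* extend to every possible length */
            sum += a[end];
            if (sum > best) { best = sum; best_start = start; best_end = end; }
        }
    }
    printf("largest sum %d for sub-array [%d..%d]\n", best, best_start, best_end);
    return 0;
}
```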
In conclusion, these examples show that the team activity mainly focused on creating and elaborating the mental images of the desired solutions and their expected behavior. Changes were mainly based on matching localized solution fragments, and less on global solution analysis, comparison between the expected and observed behaviors, and matching the observed behavior to the problem requirements. These are possible reasons why the team could not correctly solve the exercise.
## 5 Case Studies
This section presents case studies for each of the three problem-solving situations depending on a team's status before starting teamwork.
### Individually devised solutions existed before starting teamwork
All team members devised a working solution before meeting in the team setting. As mentioned in Subsection 4.2, the purpose of teamwork should have been to verify the correctness of the solutions with respect to the problem requirements, and to compare the different solution ideas with respect to their pros and cons. An overall observation is that all team discussions were significantly shorter than for the next two problem-solving situations. While all members participated in discussions, the depth of the discussions (e.g., the precision of the expressed ideas) was lower, with fewer and less connected follow-up comments. It is likely that there was less motivation to participate in teamwork as all members already had their own solution, which they assumed to be likely correct. Members usually offered high-level descriptions of their solutions. Descriptions were structured along the processing sequence of a program. Examples based on certain inputs were sometimes used to clarify some steps. However, while the processing steps were individually presented, there was no discussion of how the steps relate to each other; hence, often there was no description of the overall solution idea containing the main processing flow of the input data. Subsequent comments by other members were mostly broad and simple, like observations of the similarity with previous exercises, discussions about code length and simplicity (which does not require understanding the solution), and remarks on specific, punctual elements of the solutions. There were few instances in which a question was asked about the code, and the subsequent answers improved the solution. More often, the previous ideas were repeated, which shows little flexibility in understanding the questions. When more detailed descriptions were offered, like comprehensive presentations of the code and variables with reference to certain input examples and correct results, it is likely that keeping track of the offered information was hard for the others.
**Discussion**: The analysis of team problem solving showed that team members had few incentives to spend time on teamwork, as they had already solved the problem individually. With respect to Figure 2, there was little interest in performing RMDCPs to form mental images about needed changes, causality of differences, needed changes in the problem description, expected behavior, and observed behavior. Teams spent little time verifying the correctness of their solutions with respect to the problem descriptions and comparing the pros and cons of their programs. With respect to Figure 3, interaction focused less on identifying situations that are not addressed by the current code, identifying inputs - outputs that describe missing processing, comparing problem descriptions and solution ideas, identifying data characteristics, generalizing, and combining with the description. There was little flexibility to adjust to the other members' questions.
### Using individually devised fragments to complete a problem solution
Figure 7 presents discussions during team problem solving for the case in which members attempted to individually solve the problem before meeting in a team setting. Each of the members produced a partial, incomplete solution. In addition to missing fragments, the solutions also included errors. The exercise required the team to "find and display the batch with the most lines with acceptable parameters, and the batch that has the most lines with unacceptable parameters. Acceptable ranges of parameters for an acceptable seal were as follows: temperature: 150 - 170 C, pressure: 60 - 70 psi, and dwell time: 2 - 2.5 s. A data file named suture.txt contains information about batches. Each line contains the batch number, temperature, pressure and dwell time. Multiple lines can have the same batch number but all lines for the same batch are grouped together" [87]. The following discussion refers to the problem-solving flow in Figure 3.
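For reference, a minimal C sketch of one possible solution is given below. It assumes only what the quoted problem statement says: each line of suture.txt holds a batch number followed by temperature, pressure, and dwell time, and the lines of a batch are grouped together. It is an illustration, not the code produced by any of the team members.

```c
#include <stdio.h>

/* One pass over suture.txt: lines of a batch are grouped, so per-batch
   counts can be closed out whenever the batch number changes.
   Assumes batch numbers are non-negative (cur == -1 is used as a sentinel). */
int main(void) {
    FILE *f = fopen("suture.txt", "r");
    if (!f) { perror("suture.txt"); return 1; }

    int batch, cur = -1, ok = 0, bad = 0;
    int best_ok = -1, best_ok_batch = 0, best_bad = -1, best_bad_batch = 0;
    double temp, pres, dwell;

    while (fscanf(f, "%d %lf %lf %lf", &batch, &temp, &pres, &dwell) == 4) {
        if (batch != cur) {                      /* new batch: close out the previous one */
            if (cur != -1) {
                if (ok > best_ok)   { best_ok = ok;   best_ok_batch = cur; }
                if (bad > best_bad) { best_bad = bad; best_bad_batch = cur; }
            }
            cur = batch; ok = 0; bad = 0;
        }
        if (temp >= 150 && temp <= 170 && pres >= 60 && pres <= 70 &&
            dwell >= 2.0 && dwell <= 2.5)
            ok++;                                /* all three parameters acceptable */
        else
            bad++;
    }
    if (cur != -1) {                             /* close out the last batch */
        if (ok > best_ok)   { best_ok = ok;   best_ok_batch = cur; }
        if (bad > best_bad) { best_bad = bad; best_bad_batch = cur; }
    }
    fclose(f);

    printf("batch %d has the most acceptable lines (%d)\n", best_ok_batch, best_ok);
    printf("batch %d has the most unacceptable lines (%d)\n", best_bad_batch, best_bad);
    return 0;
}
```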
The team problem-solving part expressed by the dialog in Figure 7 starts with input \(I_{1}\), which states a missing (local) fragment of one of the individual solutions. This fragment was found after analyzing the solution and is expressed through a high-level description. The stated missing fragment becomes a sub-goal in problem solving. This sub-goal then had to be addressed in the context of the individual solution. Answer \(I_{2}\) by another team member does not address the posed sub-goal, but instead explains the entire solution. Details are added to the suggested sequence of steps, like using the C function fscanf together with some more concrete steps, such as separately handling each batch and features of the input data. The degree to which acknowledgment \(I_{3}\) is genuine is unclear, considering that response \(I_{2}\) did not address question \(I_{1}\). Input \(I_{4}\) offers a specific feature of the solution, e.g., its execution characteristics found after analyzing the solution. Input \(I_{5}\) attempts to compare the features of the discussed solution to those of another member's solution, explaining that it produces a different output. Input \(I_{6}\), offered by another team member (not the solution author), solves the error mentioned in input \(I_{5}\). Based on the analysis and understanding of a local part of the code of the solution, the member identified the missing fragment. In addition, the expected purpose of the missing fragment is contrasted to the actual role of the existing code. Hence, input \(I_{6}\) (highlighted in yellow) is the conclusion of addressing one of the sub-goals, as it fixes one of the errors in the individual solutions.
Inputs \(I_{7}-I_{13}\) refocus team problem solving on the initial question \(I_{1}\) about the missing fragment in one of the solutions.
Figure 7: Team problem solving dialog starting from individual attempts
Note that explanation \(I_{7}\) is incorrect, as the stated details ("like its own Google data") do not address the problem requirements. Still, the other members correctly understood the purpose of the missing fragment. Question \(I_{8}\) and the subsequent answer \(I_{9}\) (by the same member that offered input \(I_{2}\) too) again do not answer the actual question but offer a more detailed explanation of the solution (than explanation \(I_{2}\)). The repeated semantic gap between the question and answers suggests that the discussion is slightly unfocused between the two members. Even though more detailed than description \(I_{2}\), explanation \(I_{9}\) is a mixture of description styles, e.g., a sequence of broad processing steps and specific details, like using the C operator ++ and the reference to batches one and two. Explanation \(I_{9}\) includes uncertain parts, like "compare each data point to it", as the nature of "it" is ambiguous. Still, question \(I_{10}\) suggests that answer \(I_{9}\) is insufficient, as the question reiterates questions \(I_{7}\) and \(I_{2}\) but in a more precise way. Inputs \(I_{11}\) (reinforcement) and then \(I_{12}\) (highlighted in yellow) suggest that the member finally understood the asked question, as an explanation was offered, even though the explanation was not understood by the other member (\(I_{13}\)). Answer \(I_{12}\) is unclear as the purpose of using a set of variables or an array with "the most acceptable parameters" is unknown; however, the statement "just replace what is in the array" indicates that the solution might actually refer to separate variables to store the batch numbers with most acceptable and unacceptable values (correct solution). Hence, even though the explanation is incorrect, its underlying idea is arguably correct.
The next part of team problem solving shifted to issues other than the discussed missing fragment. Inputs \(I_{14}\) and \(I_{15}\) indicate possible changes to the solution. They include details, and act as hypotheses about possible causes that created the observed execution output. Response \(I_{16}\) (highlighted in yellow) addressed the error in the code. It can be argued that the team member narrowed down the search for possible causes until the precise error was found and solved. Therefore, this step describes reasoning about causality, which is important in solution understanding. Inputs \(I_{18}-I_{23}\) attempted to address the observed code execution behavior described by input \(I_{17}\). Input \(I_{18}\) offered more details about the behavior, such as the input characteristics for which the execution feature was observed. Inputs \(I_{19}-I_{23}\) are a sequence of separate hypotheses on how to correct the observed error. They describe causal reasoning by team members to figure out the cause. The randomness of the attempts suggests trial-and-error, likely without being guided by an overall image of the solution.
The analysis of a second team problem-solving example offered similar observations. Team members explained their solutions to each other by using mixed descriptions, like sequences of processing steps presented at a high level, conditions of the steps, some solution details and special cases, and rarely referring to the code. The descriptions had ambiguous statements. The degree to which the other members correctly understood explanations is unknown. Providing explanations that were understood by others was not trivial. Members often focused only on their own solution while paying little attention to understanding other methods. Addressing missing fragments was attempted by starting from specific examples, which, while correctly handled, were hard to generalize to other data. As they could not identify the reason for an error, the team relied on trial and error to find places where changes were needed. Another incorrect attempt was to modify the code to produce correct results for a particular input, as the modified code did not also produce correct outputs for other inputs. Hence, while backward reasoning was possibly performed for one specific input, the reasoning was not generalized to all data.
**Discussion**: With respect to the problem-solving flow in Figure 3, the main role of teamwork was to identify the missing code fragments and correct the errors of individual solutions. Identifying the missing code fragments involved localizing the solving by detailing an existing solution, identifying the required changes, and combining the changes with the existing solutions. The missing fragments were sub-goals, which were addressed starting from high-level descriptions that had to be connected to the rest of a solution. The process of correcting errors attempted to find their causes, like through backward reasoning. Sequences of hypotheses were formulated to narrow down to the code part that generated an error. If the process was unsuccessful, the team switched to trial and error.
Teamwork mainly involved explaining solutions and understanding explanations and code. There was less emphasis on problem description explanation and understanding. Explanations were often mixed descriptions that included sequences of processing activities, features (e.g., conditions) of the input and output data, and examples. Responses contained ambiguous elements and incorrect parts, but it is likely that team members achieved a correct, shared understanding of the intended meaning (despite the actual communication) by correcting it within the context of problem solving to obtain the most useful understanding within that context. There were no questions or comments on the incorrect parts of the explanations. The context set by the individually developed solutions was important for explanation and understanding.
### Jointly creating a new solution
The example presented in Section 4.2 and shown in Figure 4 illustrates this case. In addition to the discussions on creating a new solution, the team first collaborated on understanding the problem description, as one member had doubts about the meaning of contiguous sub-arrays. The doubt originated in one of the provided test cases, which did not match the member's understanding. The discussion included definitions of terms, and examples to illustrate the use of the definitions and their differences from the provided tests. Discrepancies occurred between the offered examples and the meaning of the problem description. As mentioned in Section 4.2, discussions to clarify the doubts were mixed with high-level ideas of the solutions. Explanations included ambiguous elements. Another requirement was to connect the different explanations offered for the raised doubts.
A similar problem-solving behavior was observed for another team too. Solving started with making a broad analogy with a previous problem and discussing an input with precise features before offering a high-level description of the processing steps. Solving used examples as a method to identify algorithmic steps to be added to the description. However, there were doubts about the correctness of the additions. Some doubts remained unclarified. Moreover, errors likely occurred when attempting to generalize from the specifics of the examples to code that should process any input. New code was produced based on similarity with previous exercises (analogy), such as when a certain code fragment was cut and pasted from an existing solution. It was observed that the team did not understand some of the reused code beyond its expected outputs; hence, mistakes were made due to the differences between the old and new problems. Difficulties also occurred when trying to localize and identify a required change, and then combine it with the existing code. There was little effort for backwards reasoning to understand the cause of an error. Instead, changes were made mainly based on predictions made only by using the inputs and observed outputs (i.e. reactive behavior). Therefore, there is no guarantee (e.g., explanation) that these changes solved the errors besides the fact that the specific expected outputs were obtained. New summaries of the solution were periodically produced, as new code was added. Summaries were high-level sequences of processing steps but were rarely related to the problem description.
**Discussion**. Team members might have had fragments of the problem solutions, but there were difficulties in forming a complete high-level description. Moreover, there was sometimes a need to discuss the problem description to clarify ambiguities in the descriptions or test data. Devising a complete high-level solution description started with multiple partial ideas of various degrees of semantic similarity. Broad analogies to problems, possibly from other domains, were utilized to create a high-level overall concept. Solution construction required detailing the high-level ideas at various abstraction levels, e.g., the high-level description parts were sometimes mapped directly to code from similar exercises, or successive, more detailed descriptions were created. New detailing was not always semantically equivalent to the previous descriptions, included errors, or was ambiguous with respect to the referred concepts and connected ideas. However, even if erroneous in terms of the expressed processing, such detailing could still have a role in problem solving, like identifying missing fragments. Analysis of alternatives, including their cons, was a team activity. Linking the observed outputs to their causes through backward reasoning was difficult; instead, correction was attempted based on the input - output behavior of the code. Finally, problem solving encompassed a part of the solution space, both in breadth and depth.
After producing a high-level description of the solution, team discussions were localized on adding details to it. Hence, explanation understanding was important to have a broader participation from the entire team. Summarizing code fragments was necessary during team discussions but there was no mechanism beyond using examples to verify the correctness of the summaries or any higher-level descriptions. There was more flexibility between immediate, medium, and long-term contexts. Still, the overall solution image that was formed probably included errors and ambiguities due to uncertain connections between individual ideas.
## 6 Conclusions
This paper presents a novel model to characterize team activities while devising computer programs that address typical problem requirements. Team members jointly work on solutions and interact through discussions with each other during problem solving. The analysis of problem-solving cases showed that teamwork situations can be grouped into three categories depending on the work executed before meeting in the team set-up: (i) situations in which team members individually solved a problem before team meetings, (ii) situations in which team members individually produced incomplete solutions
or solutions with significant errors before teamwork started, and (iii) situations in which no team member could devise a program before meeting in a team. The analysis showed that different kinds of discussions existed in the three situations, suggesting different types of activities being conducted during problem solving. A requirement for the model was to capture the specific characteristics of different problem-solving scenarios observed in real life.
The proposed team problem-solving model describes that every member operates with multiple mental images of the problem description and solution-related features, like desired and existing solutions, expected and observed behaviors of the solutions, causalities of the differences between expected and observed behaviors, and changes needed to address the differences. The images can be at different levels of abstraction, from high-level ideas to concrete code. They are a mixture of different styles, like sequences of processing steps, expected input-output pairs, or requirements expressed as logic conditions about the results. Descriptions can include unspecified details, ambiguities, unknowns, and errors. The fragments of images are not always tied together in unitary representations. The model connects the various mental images through cognitive activities, like creating problem descriptions, analysis of descriptions, identification of unaddressed issues (e.g., missing functionality and incorrect inputs-outputs), identification of the needed changes, and updating descriptions by combining them with cues and other results of analysis. Analyzing the sequence of team discussions using the proposed model supports making predictions about the performed solving activities, including making design changes without the team fully understanding their consequences. The model supports making predictions about changes of the team members' images because of teamwork and tracking the discussions among them. For example, specific cognitive interactions are the result of certain activities, like explaining a solution to others, analyzing solution correctness, combining new ideas, localizing errors, and so on.
The study of team problem-solving cases using the proposed model showed that discussion characteristics differed across the three situations depending on the nature of the individual solutions before teamwork commenced. Discussions were mostly broad summaries of the solutions if members had already devised a program before working in a team; likely, there was little motivation for more in-depth descriptions or analysis. If incomplete solutions were devised before teamwork started, then discussions mostly presented alternative solution ideas and detailed certain parts of the ideas. The main effort focused on identifying the missing solution fragments as compared to the problem requirements. The study showed that discussions were not always well structured; hence, the missing parts were not always correctly identified. Also, it was unclear to what degree alternatives were compared to understand their pros and cons, or whether reasoning was conducted to determine the missing parts of the solutions. If team members could not devise a solution idea before working in a team, then discussions focused on understanding the problem requirements and associating the requirements with previously solved exercises. Explaining ideas was a main part of the discussions, but it was unclear to what degree the explanations were understood by others. Even though members participated well in discussions, they rarely followed a systematic procedure to combine individual ideas and progress towards a problem solution. It was unclear how the cues produced by one member sparked new ideas in others to create joint progress.
Future work will focus on using the model to devise computational methods to estimate the characteristics of the problem-solving activities and of the mechanisms that connect them in a sequence during teamwork. Another opportunity is to design algorithmic ways of detecting inefficiencies and errors of teamwork, such as ignoring solution analysis and comparison. Finally, the model can be used as a starting point to create new techniques to find correlations between the team members' social and emotional interactions and the properties of the problem-solving flow.
|
2302.03350 | To Be Forgotten or To Be Fair: Unveiling Fairness Implications of
Machine Unlearning Methods | The right to be forgotten (RTBF) is motivated by the desire of people not to
be perpetually disadvantaged by their past deeds. For this, data deletion needs
to be deep and permanent, and should be removed from machine learning models.
Researchers have proposed machine unlearning algorithms which aim to erase
specific data from trained models more efficiently. However, these methods
modify how data is fed into the model and how training is done, which may
subsequently compromise AI ethics from the fairness perspective. To help
software engineers make responsible decisions when adopting these unlearning
methods, we present the first study on machine unlearning methods to reveal
their fairness implications. We designed and conducted experiments on two
typical machine unlearning methods (SISA and AmnesiacML) along with a
retraining method (ORTR) as baseline using three fairness datasets under three
different deletion strategies. Experimental results show that under non-uniform
data deletion, SISA leads to better fairness compared with ORTR and AmnesiacML,
while initial training and uniform data deletion do not necessarily affect the
fairness of all three methods. These findings have exposed an important
research problem in software engineering, and can help practitioners better
understand the potential trade-offs on fairness when considering solutions for
RTBF. | Dawen Zhang, Shidong Pan, Thong Hoang, Zhenchang Xing, Mark Staples, Xiwei Xu, Lina Yao, Qinghua Lu, Liming Zhu | 2023-02-07T09:48:29Z | http://arxiv.org/abs/2302.03350v2 | # To Be Forgotten or To Be Fair: Unveiling Fairness Implications of Machine Unlearning Methods
###### Abstract
The right to be forgotten (RTBF) is motivated by the desire of people not to be perpetually disadvantaged by their past deeds. For this, data deletion needs to be deep and permanent, and should be removed from machine learning models. Researchers have proposed machine unlearning algorithms which aim to erase specific data from trained models more efficiently. However, these methods modify how data is fed into the model and how training is done, which may subsequently compromise AI ethics from the fairness perspective. To help software engineers make responsible decisions when adopting these unlearning methods, we present the first study on machine unlearning methods to reveal their fairness implications. We designed and conducted experiments on two typical machine unlearning methods (SISA and AmnesiacML) along with a retraining method (ORTR) as baseline using three fairness datasets under three different deletion strategies. Experimental results show that under non-uniform data deletion, SISA leads to better fairness compared with ORTR and AmnesiacML, while initial training and uniform data deletion do not necessarily affect the fairness of all three methods. These findings have exposed an important research problem in software engineering, and can help practitioners better understand the potential trade-offs on fairness when considering solutions for RTBF.
## 1 Introduction
Machine learning (ML) systems play an important role in high-stake domains. For example, ML is used to identify human faces in images and videos [1], recommend products to customers [2], and recognize criminals accurately [3]. ML has been called software 2.0 because its behaviours are not written explicitly by programmers, but instead are learned from large datasets [4].
When ML software learns about individuals, it uses datasets collected about them. This data contains a broad range of information that may be used to identify individuals, such as personal emails, credit card numbers, and employee records. Governments or data subjects may sometimes ask ML service providers to remove sensitive information from their datasets for security or privacy purposes or for regulatory requirements. For example, Clearview AI1, a facial recognition company owning more than 20 billion images, was requested by France's Commission Nationale Informatique et Libertes to delete data due to a data protection law. In 2014, the Court of Justice of the European Union ordered Google, a multinational technology company, to remove links to sensitive personal data from its internet search results2. Later on, Europol3, the European Union Agency for Law Enforcement Cooperation, was asked to delete individuals' data having no criminal activity. Such types of demands are expected to grow in the future as regulation and privacy awareness increases.
Footnote 1: [https://www.buzzfeednews.com/article/richardnieva/clearview-ordered-to-delete-in-france](https://www.buzzfeednews.com/article/richardnieva/clearview-ordered-to-delete-in-france)
Footnote 2: [https://www.reuters.com/article/us-eu-alphabet-privacy-idUSKBN1W90R5](https://www.reuters.com/article/us-eu-alphabet-privacy-idUSKBN1W90R5)
Footnote 3: [https://www.bleepingcomputer.com/news/security/europol-ordered-to-erase-data-on-those-not-linked-to-crime/](https://www.bleepingcomputer.com/news/security/europol-ordered-to-erase-data-on-those-not-linked-to-crime/)
The _"right to be forgotten"_ (RTBF) is covered in legislation in different regions, such as the General Data Protection Regulation (GDPR) in the European Union [5], the California Consumer Privacy Act (CCPA) in the United States [6], and the Personal Information Protection and Electronic Documents Act (PIPEDA) in Canada [7]. These have given the data subject, i.e., service users, the right to request the deletion of their personal data and somehow get rid of their past [8]. When ML service providers receive such requests, they have to remove the personal data from the training set as well as update ML models to satisfy legislative purposes. Moreover, the data deletion is supposed to be deep and permanent due to the prime purpose of this right, exposing a key research challenge in various ML applications [9].
Researchers have proposed _machine unlearning_ approaches to enable the RTBF to be efficiently implemented when constructing ML models. Specifically, machine unlearning is the problem of making a trained ML model forget the impact of one or multiple data points in the training data. As ML models capture the knowledge learned from data, it is necessary to erase what they have learned from the deleted data to fulfill the RTBF requirements. A naive strategy is to retrain ML models from scratch by excluding the deleted data from the training data. However, this process may incur significant computational costs and may be practically infeasible [10]. Machine unlearning aims to avoid the large computational cost of fully retraining ML models from scratch and attempts to update ML models to enable the RTBF.
In recent years, machine unlearning has been extensively investigated to address these problems [11, 12, 13, 14, 15, 16]. There are two main types of machine unlearning approaches: _exact machine unlearning_ and _approximate machine unlearning_. The exact machine unlearning approach ensures that the deleted data has no impact on the updated ML model by totally excluding it from the training set, while the approximate machine unlearning approach attempts to update the trained ML model weights to remove the deleted data's contribution from the trained model.
Current machine unlearning research focuses on efficiency and the RTBF satisfaction, but overlooks many other critical AI properties, such as AI fairness. _AI fairness_ is a non-functional property of ML software. It concerns algorithmic bias in ML models and whether they are biased toward any protected attribute classes, such as race, gender, or familial status. There is a rich literature about AI fairness [17, 18, 19, 20, 21, 22, 23, 24]. For example, Biswas and Rajan [22] conducted an empirical study, employing 40 models collected from Kaggle, to evaluate the fairness of ML models. The results help AI practitioners to accelerate fairness in building ML software applications. Zhang and Harman [21] later presented another empirical study on the influence of feature size and training data size on the fairness of ML models. It suggests that when the feature size is insufficient, the ML models trained on a large training dataset have more unfairness than those trained on a small training dataset. This work also assists us to ensure ML models' fairness in practice.
To the best of our knowledge, there is no prior work studying the fairness implications of machine unlearning methods. However, ignoring fairness in the construction process of machine unlearning systems will adversely affect the interests of people in protected attribute groups such as race, gender, or familial status. For this reason, ML systems built on these machine unlearning methods may violate anti-discrimination legislation, such as the Civil Rights Act [25]. In this paper, we conduct an empirical study to evaluate the fairness of machine unlearning models to help AI practitioners understand how to build fair ML systems that satisfy the RTBF requirements. We aim to answer the following research questions.
**RQ1: (Initial training)** What are the impacts of machine unlearning methods on fairness before the _"right to be forgotten"_ requests arrive?
**RQ2: (Uniform distribution)** What are the impacts of machine unlearning methods on fairness when the deleted data has uniform distribution?
**RQ3: (Non-uniform distribution)** What are the impacts of machine unlearning methods on fairness when the deleted data has non-uniform distribution?
To conduct the empirical study, we employ two popular machine unlearning methods, i.e., SISA and AmnesiacML, on three AI fairness datasets. **SISA (Sharded, Isolated, Sliced, and Aggregated)** [13] and **AmnesiacML** [16] are an exact machine unlearning method and an approximate machine unlearning method, respectively. The three datasets, i.e., Adult, Bank, and COMPAS, have been widely used to evaluate the fairness of machine learning systems on different tasks, i.e., income prediction, customer churn prediction, and criminal detection. We use four different evaluation metrics, i.e., disparate impact, statistical parity difference, average odds difference, and equal opportunity difference, to measure the fairness of machine unlearning methods. We then analyze the results to answer the research questions.
The main contributions of our paper are as follows:
* We designed and conducted an empirical study to evaluate the impacts of machine unlearning on fairness. Specifically, we employed two well-recognized machine unlearning methods on three AI fairness datasets and adopted four evaluation metrics to measure the fairness on machine unlearning systems.
* Our results show that adopting machine unlearning methods does not necessarily affect the fairness during initial training. When the data deletion is uniform, the fairness of the resulting model is hardly affected. When the data deletion is non-uniform, SISA leads to better fairness than other methods. Through these findings, we shed light on fairness implications of machine unlearning, and provide knowledge for software engineers about the potential trade-offs when selecting solutions for RTBF.
## 2 Background
This section provides the background knowledge, including machine unlearning methods and AI fairness metrics.
### Machine Unlearning Methods
The classification problem is a type of task that many machine learning systems aim to solve and in which machine unlearning can be leveraged. Given a dataset of input-output pairs \(\mathcal{D}=(x,y)\in\mathcal{X}\times\mathcal{Y}\), we aim to construct a prediction function \(\mathcal{F}_{\mathcal{D}}:\mathcal{X}\rightarrow\mathcal{Y}\) that maps these inputs to outputs. The prediction function \(\mathcal{F}_{\mathcal{D}}\) is often learned by minimizing the following objective function:
\[\underset{\mathcal{F}_{\mathcal{D}}}{min}\sum_{i}\mathcal{L}(\mathcal{F}_{ \mathcal{D}}(x_{i}),y_{i})+\lambda\Omega(\mathcal{F}_{\mathcal{D}}) \tag{1}\]
where \(\mathcal{L}(.)\), \(\Omega(\mathcal{F}_{\mathcal{D}})\), and \(\lambda\) are the empirical loss function, the regularization function, and the trade-off value, respectively. Let \(\mathcal{D}_{r}\) and \(\mathcal{D}_{u}\) represent the retained dataset and the deleted dataset, respectively. \(\mathcal{D}_{r}\) and \(\mathcal{D}_{u}\) are mutually exclusive, i.e., \(\mathcal{D}_{r}\cap\mathcal{D}_{u}=\emptyset\) and \(\mathcal{D}_{r}\cup\mathcal{D}_{u}=\mathcal{D}\). When the _"right to be forgotten"_ (RTBF) requests arrive, a machine unlearning system needs to remove \(\mathcal{D}_{u}\) from \(\mathcal{D}\) and update the prediction function \(\mathcal{F}_{\mathcal{D}}\). Machine unlearning attempts to achieve a model \(\mathcal{F}_{\mathcal{D}_{r}}\), only trained from the retained dataset \(\mathcal{D}_{r}\), without incurring a significant computational cost. Hence, the model \(\mathcal{F}_{\mathcal{D}_{r}}\) is often used to evaluate the performance of machine unlearning methods.
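For concreteness, Eq. (1) is a standard regularized empirical-risk objective. The sketch below is a minimal PyTorch rendering of it, assuming cross-entropy for \(\mathcal{L}(.)\) and an L2 penalty for \(\Omega(.)\); neither choice is fixed by the text, so this is illustrative only.
```python
import torch
import torch.nn.functional as F

def regularized_loss(model, x, y, lam=1e-4):
    """Empirical loss plus a weighted regularizer, mirroring Eq. (1).

    Assumes cross-entropy for L(.) and an L2 penalty for Omega(.); the paper
    leaves both unspecified, so these are placeholder choices.
    """
    logits = model(x)                                        # F_D(x_i)
    empirical = F.cross_entropy(logits, y, reduction="sum")  # sum_i L(F_D(x_i), y_i)
    omega = sum(p.pow(2).sum() for p in model.parameters())  # Omega(F_D)
    return empirical + lam * omega
```
Minimizing this objective over the retained set \(\mathcal{D}_{r}\) alone yields the reference model \(\mathcal{F}_{\mathcal{D}_{r}}\) that unlearning methods try to reach without full retraining.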
There are mainly two types of machine unlearning approaches, such as exact machine unlearning and approximate machine unlearning. We present a typical method for each machine unlearning approach. Specifically, SISA and AmnesiacML are selected to represent the exact machine unlearning approach and the approximate machine unlearning approach, respectively. These methods, adopted for deep learning models, are efficient and effective in dealing with RTBF requests. We will briefly describe them in the following subsections.
#### 2.1.1 SISA [13]
This is an exact machine unlearning method aiming to reduce the computational cost of the retraining process by employing a data partitioning technique. Figure 1 briefly describes an overview framework of SISA. In the beginning, the original data \(\mathcal{D}\) is split into \(\mathcal{S}\) shards, i.e., \(\cap_{i\in|\mathcal{S}|}D_{i}=\emptyset\) and \(\cup_{i\in|\mathcal{S}|}D_{i}=\mathcal{D}\). Each shard \(D_{i}\in\mathcal{D}\) is then further split into \(K\) slices, i.e., \(\cap_{k\in|K|}D_{ik}=\emptyset\) and \(\cup_{k\in|K|}D_{ik}=D_{i}\). A deep learning (DL) model is constructed on each shard. The DL model is updated
by gradually increasing the number of slices. Note that all the parameters of the DL model are kept in storage. After finishing the training process, SISA contains multiple DL models. Finally, the output results are collected by employing a voting mechanism on a list of outputs of DL models. When RTBF requests arrive, SISA automatically locates the shards and the slices containing the deleted data \(\mathcal{D}_{u}\). SISA then retrains the DL models of these shards from the particular cached stage, i.e., before the slices of the deleted data were put into the DL models.
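As a rough illustration of this mechanism, the sketch below implements exact unlearning by sharding only; slicing and per-slice checkpointing are omitted for brevity, a logistic regression stands in for the per-shard DL models, and each retained shard is assumed to keep instances of both classes. It is not the authors' implementation, whose source code is linked at the end of this subsection.
```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def sisa_fit(X, y, n_shards=5, seed=0):
    """Train one constituent model per disjoint shard (X, y are NumPy arrays)."""
    rng = np.random.default_rng(seed)
    shard_idx = np.array_split(rng.permutation(len(X)), n_shards)
    models = [LogisticRegression(max_iter=1000).fit(X[i], y[i]) for i in shard_idx]
    return models, shard_idx

def sisa_predict(models, X):
    """Aggregate the constituent models by majority vote."""
    votes = np.stack([m.predict(X) for m in models])
    return (votes.mean(axis=0) >= 0.5).astype(int)

def sisa_unlearn(models, shard_idx, X, y, forget):
    """Retrain only the shards that contain instances requested for deletion."""
    forget = set(forget)
    for s, idx in enumerate(shard_idx):
        if forget & set(idx.tolist()):
            keep = np.array([i for i in idx if i not in forget])
            shard_idx[s] = keep
            models[s] = LogisticRegression(max_iter=1000).fit(X[keep], y[keep])
    return models, shard_idx
```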
The prior probability is the probability of an event happening when we have a limited number of possible outcomes that equally occur [26]. Machine unlearning methods can easily improve their performance when we know the prior probability of data deletion from different groups. For example, wealthy families prefer to keep their privacy for safety purposes, so they are more likely to send RTBF requests than other people [27]. Another example is that people with a higher educational background are more likely to remove their personal information from the public domain [28].
There are two strategies for SISA to leverage the prior probability to speed up the training process, hence reducing the computational cost. The first strategy is to allocate the instances with a higher deletion probability into the same shards. This means the retraining process would happen on fewer shards compared with randomly allocating the instances. The second strategy is to allocate the instances with a higher deletion probability to the last slices. In this case, the retraining process would happen on fewer slices compared with randomly allocating the instances. Figure 2(a) and Figure 2(b) briefly describe the first and second strategies, respectively.
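A simplified reading of these two placement strategies, assuming the per-instance deletion probabilities are given, is sketched below; the allocation logic of the released SISA code may differ.
```python
import numpy as np

def allocate_by_deletion_probability(p_delete, n_shards=5, n_slices=5):
    """Return a (shard, slice) assignment for every instance.

    p_delete: assumed per-instance prior probability of an RTBF request.
    """
    p_delete = np.asarray(p_delete)
    order = np.argsort(p_delete)                  # least ... most likely to be deleted
    shard_of = np.empty(len(p_delete), dtype=int)
    slice_of = np.empty(len(p_delete), dtype=int)
    for shard, chunk in enumerate(np.array_split(order, n_shards)):
        shard_of[chunk] = shard                   # strategy 1: similar probabilities share a shard
        for sl, sub in enumerate(np.array_split(chunk, n_slices)):
            slice_of[sub] = sl                    # strategy 2: deletion-prone points go to late slices
    return shard_of, slice_of
```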
SISA is efficient and effective in dealing with machine unlearning problems. The method has inspired many later works [29, 30, 31]. Its source code is placed at [https://github.com/cleverhans-lab/machine-unlearning](https://github.com/cleverhans-lab/machine-unlearning).
Figure 1: An overview framework of SISA. The dataset is first sharded into multiple shards. Each shard is further sliced into multiple slices. Each shard is put into a deep learning model trained by gradually increasing the number of slices. The output of the DL models is combined using a voting-based aggregation.
Figure 2: SISA’s strategies aim to reduce the computational cost of the retraining process.
#### 2.1.2 AmnesiacML [16]
This is an approximate machine unlearning method. AmnesiacML makes use of the characteristics of batch training in neural networks. During the training process, the updated parameters of the DL model for each batch are recorded and kept in storage. The training process is expressed as follows:
\[\theta_{M}=\theta_{\text{initial}}+\sum_{e=1}^{E}\sum_{b=1}^{B}\Delta_{\theta_{e,b}} \tag{2}\]
where \(\theta_{\text{initial}}\) is the initial parameters of the DL model, \(E\) and \(B\) represent the total number of epochs and the total number of batches in each epoch, respectively. The updated parameters are stored as \(\{\gamma_{b}\mid\gamma_{b}=\sum_{e=1}^{E}\Delta_{\theta_{e,b}},1\leq b\leq B\}\).
When we receive the RTBF requests, AmnesiacML automatically locates the batches containing the instances that need to be deleted. After that, the DL model's parameters are rolled back to remove the impact of the deleted data on the trained DL model as follows:
\[\theta_{M^{\prime}}=\theta_{M}-\sum_{\hat{b}=1}^{\hat{B}}\gamma_{\hat{b}} \tag{3}\]
A strategy for AmnesiacML can easily be adopted when we know the prior probability of deleted data from different groups. For example, instances with a higher prior probability of being removed can be placed into the same batches. Hence, the process of updating parameters in the DL model will require less computational cost.
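The record-and-undo idea of Eqs. (2) and (3) can be sketched in a few lines of PyTorch; SGD, cross-entropy, and the in-memory storage of the per-batch updates are assumptions here, and the authors' released code handles this bookkeeping differently.
```python
import torch
import torch.nn.functional as F

def train_amnesiac(model, loader, epochs=1, lr=0.1):
    """Train while recording the summed parameter update of every batch (Eq. 2)."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    deltas = {}                                    # batch index -> accumulated update
    for _ in range(epochs):
        for b, (x, y) in enumerate(loader):
            before = [p.detach().clone() for p in model.parameters()]
            opt.zero_grad()
            F.cross_entropy(model(x), y).backward()
            opt.step()
            step = [p.detach() - q for p, q in zip(model.parameters(), before)]
            deltas[b] = step if b not in deltas else [d + s for d, s in zip(deltas[b], step)]
    return deltas

def amnesiac_unlearn(model, deltas, forget_batches):
    """Roll back the updates of the batches that held the deleted data (Eq. 3)."""
    with torch.no_grad():
        for b in forget_batches:
            for p, d in zip(model.parameters(), deltas[b]):
                p -= d
    return model
```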
Similar to SISA, AmnesiacML shows its efficiency and effectiveness in machine unlearning problems. However, it does not ensure the impact of deleted data being completely forgotten in the updated DL model. The open-source repository of AmnesiacML can be found at [https://github.com/lmgraves/AmnesiacML](https://github.com/lmgraves/AmnesiacML)
### AI Fairness Metrics
The goal of AI fairness is to correct machine learning (ML) models with the assumption that models should not be biased between any protected classes, i.e., race, sex, familial status, etc. Each protected class partitions a population into different groups, such as the privileged group and the unprivileged group. In this section, we employ four different fairness metrics, such as disparate impact, statistical parity difference, average odds difference, and equal opportunity difference, to evaluate the impact of machine unlearning methods on fairness. These metrics are widely adopted in measuring the fairness of ML systems [17, 18, 19, 20, 21, 22, 23, 24].
Let \(x_{s}\in\{0,1\}\) indicate the binary label of a protected class (\(x_{s}=1\) for the privileged group). Let \(\hat{y}\in\{0,1\}\) be the predicted outcome of a ML classification model (\(\hat{y}=1\) for the favourable decision). Let \(y\in\{0,1\}\) be the binary classification label (\(y=1\) is favourable). We present the four fairness evaluation metrics as follows.
**Disparate impact (DI)**[32] measures the ratio of the favourable outcome of the unprivileged group (\(x_{s}=0\)) against the privileged group (\(x_{s}=1\)).
\[\frac{P[\hat{y}=1\mid x_{s}=0]}{P[\hat{y}=1\mid x_{s}=1]} \tag{4}\]
**Statistical parity difference (SPD)**[33] is the difference of the favourable outcome of the unprivileged group (\(x_{s}=0\)) against the privileged group (\(x_{s}=1\)).
\[P[\hat{y}=1\mid x_{s}=0]-P[\hat{y}=1\mid x_{s}=1] \tag{5}\]
**Average odds difference (AOD)**[34] calculates the average of difference in true positive rate and false positive rate between unprivileged and privileged groups.
\[\frac{1}{2}\left(|P[\hat{y}=1|x_{s}=0,y=1]-P[\hat{y}=1|x_{s}=1,y=1]|+|P[\hat{y}=1|x_{s}=0,y=0]-P[\hat{y}=1|x_{s}=1,y=0]|\right) \tag{6}\]
**Equal opportunity difference (EOD)**[34] evaluates the difference in true positive rate between unprivileged group and privileged groups.
\[P[\hat{y}=1|x_{s}=0,y=1]-P[\hat{y}=1|x_{s}=1,y=1] \tag{7}\]
All fairness metrics range from -1 to 1. Among them, DI achieves the greatest fairness of the classification model when it equals 1. The remaining fairness metrics, i.e., SPD, AOD, and EOD, attain the greatest fairness when their values are 0.
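For concreteness, Eqs. (4)-(7) can be computed directly from the predictions and the protected attribute; the NumPy sketch below only mirrors the definitions and is not the AIF360 toolkit referenced in Section 3.
```python
import numpy as np

def fairness_metrics(y_true, y_pred, x_s):
    """DI, SPD, AOD and EOD for binary labels; x_s = 1 marks the privileged group."""
    y_true, y_pred, x_s = map(np.asarray, (y_true, y_pred, x_s))

    def rate(group, label=None):
        # P[y_hat = 1 | x_s = group] or P[y_hat = 1 | x_s = group, y = label]
        mask = (x_s == group) if label is None else (x_s == group) & (y_true == label)
        return y_pred[mask].mean()

    di = rate(0) / rate(1)                                                     # Eq. (4)
    spd = rate(0) - rate(1)                                                    # Eq. (5)
    aod = 0.5 * (abs(rate(0, 1) - rate(1, 1)) + abs(rate(0, 0) - rate(1, 0)))  # Eq. (6)
    eod = rate(0, 1) - rate(1, 1)                                              # Eq. (7)
    return {"DI": di, "SPD": spd, "AOD": aod, "EOD": eod}

# Toy example, including the |1 - DI| transform used later in the paper.
m = fairness_metrics([1, 0, 1, 0, 1, 0], [1, 0, 1, 1, 0, 0], [1, 1, 1, 0, 0, 0])
print(abs(1 - m["DI"]), abs(m["SPD"]), m["AOD"], abs(m["EOD"]))
```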
## 3 Methodology
This section first describes our experimental design and setup. Then we briefly present the datasets, the data deletion strategies, and our evaluation metrics.
### Experiment Design
Our empirical study starts by first collecting the benchmark fairness datasets. For each dataset, we preprocess and split it into training and testing datasets. The training dataset is then employed to train machine unlearning models. We use six evaluation metrics to measure the performance and fairness of these models. Figure 3 briefly presents an overview framework of our experimental design.
To identify the fairness datasets, we first refer to the work on fairness testing for machine learning models that employed six datasets, such as German Credit, Adult, Bank, US Executions, Fraud Detection, and Raw Car Rentals [35]. Among these datasets, only German Credit, Adult, and Bank are available. We also collect the Heart Disease dataset [36], referring to the presence of heart disease in patients, and the COMPAS dataset [37], aiming to predict the probability of criminals reoffending. In total, we acquire five datasets, i.e., German Credit, Adult, Bank, Heart Disease, and COMPAS, across various domains. As machine unlearning methods are efficient and effective on large datasets [13, 16], we remove datasets that have fewer than 1,000 instances including German Credit and Heart Disease. Hence, there are three datasets, i.e., Adult, Bank, and COMPAS, that are employed to evaluate the impacts of machine unlearning methods in our experiments.
We apply the same data preprocessing approach for all three datasets. Specifically, we employ the AI Fairness 360 toolkit [38], which is an open-source library for fairness metrics, to clean up invalid or missing values, transform categorical values into a one-hot encoding, and convert non-numerical binary values to a binary label (e.g., _male_: 1, _female_: 0). We further preprocess the datasets to employ them for fairness evaluation. Specifically, we specify favourable labels or the predicted outcome of our model. We also identify sensitive features (or protected classes) for the privileged and unprivileged groups. For example, in the Adult dataset, the prediction label is a favourable label, indicating whether a person has a high annual salary. We define _sex_ as a sensitive feature. We assume that a _male_ often has a higher annual salary than a _female_; hence, the _male_ should be put in the privileged group while the _female_ should be in the unprivileged group regarding the sensitive feature _sex_. For each dataset, we shuffle and split it into the training dataset (80%) and the testing dataset (20%). We then feed the training dataset into our models.
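A minimal stand-in for this preprocessing step is sketched below with pandas and scikit-learn rather than the AIF360 pipeline actually used; the column names in the commented call are only indicative.
```python
import pandas as pd
from sklearn.model_selection import train_test_split

def preprocess(df, label, favourable, protected, privileged_value):
    """Clean, encode and split a fairness dataset (illustrative only)."""
    df = df.dropna()                                      # drop invalid/missing rows
    y = (df[label] == favourable).astype(int)             # favourable label -> 1
    s = (df[protected] == privileged_value).astype(int)   # privileged group -> 1
    X = pd.get_dummies(df.drop(columns=[label]))          # one-hot categorical features
    return train_test_split(X, y, s, test_size=0.2, shuffle=True, random_state=0)

# Hypothetical call for the Adult data (column names are assumptions):
# X_tr, X_te, y_tr, y_te, s_tr, s_te = preprocess(adult_df, "income", ">50K", "sex", "Male")
```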
To conduct our experiments, we employ a multi-layer perceptron (MLP), a simple feedforward network [39]. The MLP model includes an input layer, a hidden layer, and an output layer. We train the MLP model by optimizing a cross-entropy loss function [40]. Two machine unlearning methods, such as SISA and AmnesiacML, are built based on the MLP model. A naive approach of using original training and retraining (denoted as **ORTR**) is also built based on the MLP model as the baseline. We consider two experimental scenarios.
* **Scenario 1:** Before any "_right to be forgotten_" (RTBF) requests, what are the impacts of machine unlearning methods on fairness? In this setting, the training dataset is put into three different models, such as ORTR, SISA, and AmnesiacML (see Figure 3), to train these models. We then employ the testing dataset to evaluate the performance and fairness of these trained models.
* **Scenario 2:** When the RTBF requests arrive, what are the impacts of machine unlearning methods on fairness? In this setting, we employ data deletion strategies (see Figure 3) to remove instances from the training dataset. For each data deletion strategy, we compare the performance and fairness of ORTR with two machine unlearning methods, such as SISA and AmnesiacML.
For each dataset, we apply 5-fold cross-validation and take the mean of the results. We have conducted our experiments using an Nvidia T4 GPU and an Intel Xeon Silver 4114 CPU with 16 GB RAM and 12 GB RAM, respectively. The OS is Debian 10.10 LTS 64 bit. The machine learning framework is PyTorch
v.1.12 with CUDA 11.3, and the Python language version is 3.7.
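A minimal sketch of the MLP classifier described above is given next; the hidden width, optimizer, and learning rate are assumptions, since the paper does not report these hyper-parameters.
```python
import torch
import torch.nn as nn

class MLP(nn.Module):
    """Feedforward network with one hidden layer, trained with cross-entropy."""
    def __init__(self, n_features, n_hidden=64, n_classes=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, n_hidden),
            nn.ReLU(),
            nn.Linear(n_hidden, n_classes),
        )

    def forward(self, x):
        return self.net(x)

def train(model, loader, epochs=10, lr=1e-3):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
    return model
```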
### Datasets
We conduct our experiments by employing three widely-used fairness datasets to evaluate the impacts of machine unlearning methods on fairness. These datasets are briefly described as follows.
* **Adult**[41]. This dataset is extracted from the 1994 Census Bureau database4. Its task is to predict whether a person can earn over $50,000 USD per year. The dataset includes 48,842 instances and 14 features. The sensitive features for this dataset are _sex_ and _race_. Footnote 4: [https://www.census.gov/programs-surveys/ahs/data/1994.html](https://www.census.gov/programs-surveys/ahs/data/1994.html)
* **Bank**[42]. The dataset is collected from marketing campaigns of a Portuguese banking institution. Its task is to predict whether a client will subscribe to a bank term deposit. The dataset contains 45,211 instances and 17 features. We use _age_ as the sensitive feature for this dataset.
* **COMPAS**[37]. The dataset contains recidivism records, which are used to build a prediction system to forecast the possibility of a criminal defendant reoffending. The dataset has 7,215 instances and seven features. The sensitive features are defined as _sex_ and _race_.
All the sensitive features are selected by following the previous work [21, 22, 24].
### Data Deletion Strategies
To send the "_right to be forgotten_" (RTBF) requests, we adopt two data deletion strategies. Each strategy has various settings presented as follows.
**Uniform distribution.** For this strategy, we assume that the deleted data has a uniform distribution, i.e., each instance has an equal probability of being removed from the training dataset. To select a range of proportions of the total amount of deleted data, we leverage the work of Bertram et al. [43]. Specifically, we randomly remove 1%, 5%, 10%, and 20% of the training data.
Figure 3: Experimentation to evaluate the performance and fairness of machine unlearning methods under different scenarios.
**Non-uniform distribution.** For this strategy, we assume that the deleted data has a non-uniform distribution, i.e., each instance has a different probability of being removed from the training dataset. Some people have a higher probability of sending RTBF requests than others. For example, people who are from wealthy families or have a high educational background prefer to keep their sensitive information private for security and privacy purposes [27, 28]. As these personal details are unavailable in our datasets, to better understand the fairness implications under different cases when the deleted data has a non-uniform distribution, we first assume that the people who request the RTBF are predominantly from privileged groups, and we then assume another scenario in which people exercising the RTBF are predominantly from unprivileged groups.
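Both deletion strategies can be simulated in a few lines of NumPy; this is only a schematic sketch of the sampling described above.
```python
import numpy as np

rng = np.random.default_rng(0)

def uniform_deletion(n, fraction):
    """Every instance is equally likely to request deletion (1%, 5%, 10% or 20%)."""
    return rng.choice(n, size=int(fraction * n), replace=False)

def nonuniform_deletion(x_s, group, fraction=0.5):
    """Delete `fraction` of one group only: group=1 simulates requests coming
    predominantly from the privileged group, group=0 from the unprivileged group."""
    pool = np.flatnonzero(np.asarray(x_s) == group)
    return rng.choice(pool, size=int(fraction * len(pool)), replace=False)
```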
### Evaluation Metrics
We consider two types of evaluation metrics in our experiments, which are performance and fairness.
**Performance measure.** Before evaluating the fairness of models, we calculate their performance in terms of accuracy and F1 score.
* _Accuracy:_ The ratio of true predictions among the total number of predictions [44].
* _F1 score:_ The harmonic mean between precision and recall [45].
**Fairness measure.** To measure the fairness of models, we adopt the four fairness metrics, i.e., disparate impact (DI), statistical parity difference, average odds difference, and equal opportunity difference, briefly mentioned in Section 2.2. For simplicity in presenting and observing, we convert all the fairness metric values into their absolute values. As disparate impact (DI) value differs from other fairness metrics, we use \(|\)1 - DI\(|\) to evaluate the fairness of our models. In this case, all four fairness metrics achieve the greatest fairness when their values equal 0.
## 4 Experiments
In this section, we provide results and insights from the experimentation, to answer our research questions.
**RQ1: (Initial training) What are the impacts of machine unlearning methods on fairness before the "_right to be forgotten_" requests arrive?**
The exact machine unlearning methods, such as SISA, modify how data is fed into machine learning models, affecting the fairness of these models before the RTBF is requested, i.e., at initial training. This research question aims to understand the impact of machine unlearning methods on fairness at initial training. Specifically, we compare SISA with ORTR, a naive approach built on an MLP model. Note that approximate machine unlearning methods, such as AmnesiacML, only update the ML model's parameters without modifying its architecture. We therefore exclude AmnesiacML from this research question.
Figure 4: Fairness (the smaller, the better) and performance (the higher, the better) evaluation results of SISA with different shards (5/10/15/20h) and slices (1/5/10c)
We evaluate the impact of SISA and ORTR on fairness across different numbers of shards (5, 10, 15, 20) and numbers of slices (1, 5). We execute the experiments on three different datasets, such as Adult, Bank, and COMPAS. For ease of observation, we denote Adult, Bank, and COMPAS as \(A\), \(B\), and \(C\), respectively. _Sex_, _Race_, and _Age_, which are the sensitive features, are represented as \(S\), \(R\), and \(Y\), respectively. The number of shards and the number of slices are represented as \(h\) and \(c\), respectively. For example, given 1,000 instances, _5h5c_ means these instances are split into five shards. Each shard is then further split into five slices. In the end, each shard contains 200 instances and each slice includes 40 instances.
Figure 4(a) shows the fairness evaluation results of SISA initial training with 5/10/15/20 shards and one slice. The baseline is ORTR. We can see that for some datasets and features, the \(|1-\text{DI}|\) value gets better when the number of shards increases, including _B_-_Y_ and _A-R_, while for _C-S_ and _A-S_ the value gets worse when the number of shards increases. Similarly, for other metrics, the trends are not always one way along the increasing number of shards. Although there are some tendencies within each dataset, overall, across all datasets, we cannot identify any consistent fairness impact from the SISA method or its number of shards.
Figure 5: Fairness (the smaller, the better) evaluation results of different training methods after uniform data deletion under various deletion proportions.
Figure 6: Difference of fairness between before and after the deletion. Value 0 indicates no fairness change, while positive values and negative values indicate worsened fairness and improved fairness, respectively.
In terms of performance, there is degradation of less than 10% in accuracy for Adult and Bank datasets, while there is no apparent degradation for the COMPAS dataset. This could be because the COMPAS dataset is much smaller than the Adult and Bank datasets and has fewer useful features, making it easier to converge and less likely to experience performance degradation from data partitioning.
The fairness evaluation results of SISA at initial training with five slices are shown in Figure 4(c). Comparing the fairness between one slice and five slices, we find no noticeable difference across all fairness metrics. We have a similar observation for the performance metrics shown in Figure 4(b) and Figure 4(d), which is consistent with what was reported in the SISA paper [13].
During initial training, no significant fairness impacts are observed from using machine unlearning methods, such as SISA. In addition, compared with ORTR, SISA has performance degradation on larger datasets.
**RQ2: (Uniform distribution) What are the impacts of machine unlearning methods on fairness when the deleted data has uniform distribution?**
A uniform data deletion strategy assumes that every instance has an equal possibility of being removed from trained models. In this research question, we want to explore how much these machine unlearning methods impact fairness when the deleted data is in uniform distribution.
For this research question, we employ a range of deletion rates from small to large (1%, 5%, 10%, 20%) chosen from the statistics [43]. For SISA, we apply its default setting (i.e., _5h1c_). For AmnesiacML, we train its model according to the requirements in the paper [16].
Figure 5 presents the results of fairness in various deletion proportions. We see that there is no clear trend indicating which methods achieve better results across all datasets (i.e., Adult, Bank, and COMPAS) and sensitive features (_Sex_, _Race_, and _Age_). Figure 6 shows the difference in fairness before and after applying the deletion strategy. It indicates that AmnesiacML is likely to be prone to fairness loss caused by this deletion strategy, while SISA is the most robust. However, the difference in fairness between before and after data deletion is unclear in Adult and COMPAS datasets. The main reason is that the deleted data is in uniform distribution, i.e., each instance has an equal probability of being removed from trained models, leading to similar fairness results in this setting. We also see that all methods have a relatively large variation in fairness on the Bank dataset. The reason is that this dataset is highly imbalanced compared to other datasets, such as Adult and COMPAS. Specifically, among 45,211 instances, only 963 instances (2.13%) are labeled as negative instances in the Bank dataset.
Figure 7 illustrates the performance of this data deletion strategy. It shows that the deleted data has minimal impact in terms of performance on trained models. As the deletion proportion is 1% - 20%, we believe the deleted data might be insufficient to cause non-trivial performance degradation.
Under the data deletion of uniform distribution, the fairness is not clearly affected by machine unlearning methods, while ORTR outperforms SISA and AmnesiacML on performance metrics.
**RQ3: (Non-uniform distribution) What are the impacts of machine unlearning methods on fairness when the deleted data has non-uniform distribution?**
People from different groups have the equal right to send RTBF requests to remove their sensitive information, but they may have varied probabilities [28]. In this research question, we aim to understand the impacts of machine unlearning methods on fairness when the deleted data has non-uniform distribution.
The simplest way to conduct the experiments is to remove the deleted data so that it has a similar distribution to the percentage of each group (privileged or unprivileged groups) for each sensitive feature on the whole dataset. As our datasets are imbalanced on some features, this RTBF simulation strategy is highly likely to lead to empty groups. To overcome this problem, we simplify our scenario by removing the data only from either the privileged group or the unprivileged group. Specifically, we remove \(50\%\) of the data for each group, making the potential impact on fairness more apparent. Note that we assume the prior probability of a certain group (the privileged group or the unprivileged group) is known.
Figure 8(a) and Figure 8(b) present the results on data deletion from the privileged group and the unprivileged group, respectively. From the charts we can see that SISA with a sharding strategy (see Figure 2(a)) achieves the best \(|1-\mathrm{DI}|\) values for nine out of ten combinations. Figure 9 shows that SISA with a sharding strategy may also have fairness improvements after data deletion. The extent of improvements varies across different datasets, such as Adult, Bank, and COMPAS, and different sensitive features, i.e., _Sex_, _Race_, and _Age_. Furthermore, we plot the differences between SISA with and without the sharding strategy in Figure 10. Overall, the fairness is likely to be improved across all metrics when the sharding strategy is applied. Moreover, such improvements are likely to happen on those datasets and sensitive features with more imbalanced distributions.
Figure 7: Performance (the higher, the better) results of different training methods after uniform data deletion under various deletion proportions.
Figure 8: Fairness (the smaller, the better) evaluation results of non-uniform deletion. The results are shown as the distances from the ORTR results (baseline).
SISA with a slicing strategy (see Figure 2(b)) is also likely to outperform ORTR on \(|1-\mathrm{DI}|\). However, it does not perform as well as SISA with a sharding strategy. For ORTR, we observe that the fairness changes between before and after retraining are weak. Similarly, AmnesiacML tends to be close to ORTR across all indicators.
Performance-wise, we observed no significant performance difference between before and after the data deletion, or between methods with and without strategies applied. The changes in performance indicators are always less than 5%. The performance differences between different methods are likely to be inherited from the methods instead of escalated from unlearning strategies or distribution settings.
Under the data deletion of non-uniform distribution, SISA with a sharding strategy achieves better fairness. The performance has no significant degradation from deletion using machine unlearning methods.
## 5 Discussion
Our research explores the fairness implications of machine unlearning methods and has gained empirical observations regarding initial training, data deletion with uniform distribution, and data deletion with non-uniform distribution. We discuss these observations as follows.
Before the _"right to be forgotten"_ requests arrive, we see that there are no significant impacts of machine unlearning methods, such as SISA, on fairness. The observations also indicate that SISA achieves lower performance on large datasets, such as Adult and Bank, in this setting.
When the deleted data is in uniform distribution, there is no clear impact of machine unlearning methods on fairness. The observations also show that ORTR, a naive approach that retrains a model from scratch, outperforms SISA and AmnesiacML in terms of accuracy and F1 score on large datasets, i.e., Adult and Bank.
When the deleted data is in a non-uniform distribution, SISA with a sharding strategy (see Figure 2(a)) is more likely to achieve better fairness compared to other models. Moreover, we also see that there is no significant performance difference between before and after the data deletion in machine unlearning methods.
Figure 9: Fairness change after deletion using SISA with a sharding strategy.
Figure 10: Fairness difference between SISA with and without applying the sharding strategy.
## 6 Threats to Validity
### Internal validity
To perform our empirical study, we employed two machine unlearning methods on three AI fairness datasets. For machine unlearning algorithms, we reused existing implementations by following their open-source repositories. All three datasets are well-known and widely used by AI fairness researchers. We employed the AIF360 library to preprocess the datasets for fairness evaluation. We have carefully checked the code and data, but there might be some remaining errors. Although there was some randomness involved in the experiments, we have tried to minimize this threat by conducting experiments multiple times (5-fold cross-validation).
### External validity
Threats to external validity refer to the generalizability of the study. In our experiments, we only used three AI fairness datasets, collected from three tasks, i.e., income prediction, customer churn prediction, and criminal detection, with a total of five protected classes and two machine unlearning methods to perform our experiments. We also performed two data deletion strategies. This may be a threat to external validity as these datasets, tasks, methods, and data deletion strategies may not be generalized beyond our studies. As the datasets and methods are widely adopted in AI fairness and machine unlearning research fields respectively, we believe that there is minimal threat to external validity. In the future, we plan to investigate more machine unlearning methods and AI fairness datasets.
### Construct validity
Threats to construct validity arise when the evaluation metrics do not reflect what they are intended to measure. We employed different evaluation metrics, widely used to measure fairness in machine learning models, to minimize threats to construct validity.
## 7 Related Work
This section introduces the work related to machine unlearning and AI fairness.
### Machine Unlearning
Machine unlearning was first presented by Cao and Yang [11]. Its objective is to build a system that can remove the impact of a data point in the training data. Early works on _machine unlearning_ focused on traditional machine learning (ML) models, i.e., support vector machines, linear classification, logistic regression, etc., by facilitating incremental and decremental learning techniques to efficiently retrain ML models after adding or removing multiple data points from the training set [46, 47, 48, 49]. Since then, machine unlearning has been extensively studied to reduce the computational cost of retraining deep learning (DL) models [11, 12, 13, 14, 15, 16]. Specifically, there are two main research approaches for employing machine unlearning in deep neural networks, i.e., _exact machine unlearning_ and _approximate machine unlearning_.
The exact machine unlearning approach requires a new model to be trained from scratch by removing the deleted data from the training set. This approach ensures that the deleted data has no impact on the new model as we exclude it from the training set. To make the retraining process more efficient, previous works [12, 13] divided the training data into multiple disjoint shards and DL models were trained on each of these shards. Hence, when a request to remove data points from the training set comes, we only need to retrain the models containing the removed data points. The exact machine unlearning approach necessitates changes in the DL architecture, making testing and maintaining the DL system challenging.
The approximate machine unlearning approach starts with the trained DL model and attempts to update its weights so that the model will no longer be affected by the removed data points from the training data. Izzo et al. [50] showed that we may achieve a linear time complexity in machine unlearning by updating a projective residual of the trained DL models. Guo et al. [51] and Golatkar et al. [52] employed a Newton step on the model weights to eliminate the influence of removed data points. Graves et al. [16] later proposed an amnesiac unlearning method by storing a list of batches and their updated weights; hence, DL models only need to undo the updated weights from the batches containing the removed data points. The approximate machine unlearning approach is more computationally efficient than the exact machine learning approach. However, we are unsure whether the removed data points have been
completely forgotten in the trained model.
Although machine unlearning methods have been comprehensively studied, their fairness has not been investigated in the process of building machine unlearning systems. To fill in this gap, we perform an extensive study on the two machine unlearning approaches, i.e., exact and approximate, to reveal their fairness implications.
### AI Fairness
_AI fairness_ or machine learning (ML) fairness has been deeply investigated during the last decade [17, 18, 19, 20, 21, 22, 23]. Its basic idea is that the prediction model should not be biased between different individuals or groups from the protected attribute class (e.g., race, sex, familial status, etc.). There are mainly two major types of AI fairness, i.e., _group fairness_ and _individual fairness_[23, 19].
Group fairness requires the prediction model to produce similar predictive results across different groups in the protected attribute class. Several studies proposed a specific kind of utility maximization decision function to satisfy a fairness constraint and derive optimal fairness decisions [53, 54, 55, 56]. Hardt et al. [54] employed the Bayes optimal non-discriminant to derive fairness in a classification model. Corbett-Davies et al. [53] considered AI fairness as a constrained optimization problem to maximize accuracy while satisfying group fairness constraints. Menon and Williamson [55] investigated the trade-off between accuracy and group fairness in AI models and proposed a threshold function for the fairness problem. Group fairness often ignores the individual characteristics within a group, which may permit unfairness when training ML models [57].
Individual fairness, on the other hand, expects the prediction model to produce similar predictive results among similar individuals who differ only in protected attributes. Udeshi et al. [58] presented Aequitas, a fully automated and directed test generation framework, to generate test inputs and improve the individual fairness of ML models. Aggarwal et al. [35] employed symbolic execution together with local explainability to identify the factors driving decisions and then generate test inputs. Sun et al. [59] combined input mutation and metamorphic relations to improve the fairness of machine translation.
Other works explore the effectiveness and efficiency of existing ML methods for software fairness [24, 21, 22]. Specifically, researchers focus on improving fairness in ML systems by leveraging mitigation techniques [22], removing biased instances in training data [24], or improving the quality of features in the datasets [21].
Even though AI fairness has been widely studied, its implications for machine unlearning have not been examined. We perform an empirical study on three AI fairness datasets, i.e., Adult, Bank, and COMPAS, to understand the impacts of machine unlearning methods on fairness.
## 8 Conclusion and Future Work
Machine unlearning emerges with the need to implement the _"right to be forgotten"_ (RTBF) efficiently while existing studies overlook its impact on fairness. To the best of our knowledge, we are the first to perform an empirical study on the impacts of machine unlearning methods on fairness. We designed and conducted experiments on two typical machine unlearning methods (SISA and AmnesiacML) along with a retraining method (ORTR) using three fairness datasets under three different deletion strategies. We found that SISA leads to better fairness compared with AmnesiacML and ORTR, while initial training and uniform data deletion do not necessarily affect the fairness of all three methods. Our research has shed light on fairness implications of machine unlearning and provided knowledge for software engineers about the trade-offs when considering machine unlearning methods as a solution for RTBF. In the future, more research efforts are needed to broaden the understanding of fairness implications into other machine unlearning methods as well as investigate the underlying causes of their impact on fairness. |
2310.03141 | The galaxy cluster AC114 -- II. Stellar populations and the
mass-metallicity relation | We investigate the mass-metallicity relation for galaxies in the Abell
cluster AC114 from 7 hours of VIMOS/MR data collected at the ESO-VLT telescope
in 2009. The dynamical analysis completed in our previous paper allowed us to
select cluster members, whose spectra are here analyzed with stellar population
synthesis models. Active and passive galaxies are identified based on the
presence/absence of the [\ion{O}{II}]\lambda3727,
[\ion{O}{III}]\lambda\lambda4959,5007 and/or H\beta emission lines, depending
on the galaxy redshift. We find that active galaxies have lower average masses
than passive ones, and have lower average metallicities. The mass-metallicity
relation (MZR) of the cluster is found to be steeper than that for galaxies in
the local universe. | Ivo Saviane, Irina Yegorova, Dominique Proust | 2023-10-04T20:05:52Z | http://arxiv.org/abs/2310.03141v1 | # The galaxy cluster AC114. II. Stellar populations and the mass-metallicity relation
###### Abstract
We investigate the mass-metallicity relation for galaxies in the Abell cluster AC114 from 7 hours of VIMOS/MR data collected at the ESO-VLT telescope in 2009. The dynamical analysis completed in our previous paper allowed us to select cluster members, whose spectra are here analyzed with stellar population synthesis models. Active and passive galaxies are identified based on the presence/absence of the [O ii]\(\lambda 3727\), [O iii]\(\lambda\lambda 4959,5007\) and/or H\(\beta\) emission lines, depending on the galaxy redshift. We find that active galaxies have lower average masses than passive ones, and have lower average metallicities. The mass-metallicity relation (MZR) of the cluster is found to be steeper than that for galaxies in the local universe.
keywords: galaxies: clusters and metallicities - galaxies: distances and redshifts
## 1 Introduction
The mass-metallicity relation (MZR) is one of the fundamental constraints to galaxy evolution, and for this reason it is the subject of a substantial body of literature (see Maiolino & Mannucci, 2019, for a review), with a section of that literature looking at its evolution with redshift: indeed, studies of the MZR of galaxies are now available up to \(z\sim 8\)(Jones et al., 2020).
However, metallicities at different redshifts are obtained with different methods, owing to abundance-sensitive spectral features moving across the optical-infrared range. This makes a comparison of both absolute and relative trends across cosmic time rather difficult. Furthermore, spectra of high-redshift objects cannot be obtained at high resolution, so metallicity indices calibrated with nearby galaxies must be used. For example in the case of star forming galaxies, abundance indicators are calibrated based on local H ii regions, but physical conditions of the interstellar medium (ISM) at high-redshift might be different than those at present day, thus adding further uncertainty to the process.
Interestingly, up to \(z\sim 0.5\) emission lines that are used to compute abundances and physical conditions of the ISM, still fall in the optical range, and sufficiently good spectra can be obtained with 10m-class telescopes, with reasonable exposure times. Thus a few years ago we identified the galaxy cluster AC114 as an interesting target to obtain emission-line spectra of galaxies spanning a range of masses: the cluster sits at \(z\sim 0.32\), and as explained in Saviane et al. (2014), we expect that star-forming galaxies at that epoch are a factor \(\sim 1.4\) more metal-poor than those in the local universe. Therefore, obtaining gas-phase metallicities of AC114 cluster members gives us the possibility to check the evolution of the MZR in the last \(\sim~{}4\) Gyr, with abundances obtained with the exact same method used for local star-forming regions.
The substantial body of work on the cluster has been reviewed in our previous paper (Proust et al., 2015); nevertheless, we summarize here the key features that make it relevant for the current study. The cluster is classified as Bautz-Morgan type II-III (Abell et al., 1989) with a galaxy distribution that tends to be diffuse. Indeed, X-ray emission has an irregular morphology, with two components dominating the soft part of the spectrum (below 0.5 keV): a tail extends about 400 kpc from the central emission to the southeast (see De Filippis et al., 2004). Its compact core is dominated by a cD galaxy and has strong lensing power with several bright arcs and multiple image sources (Smail et al., 1995; Natarajan et al., 1998; Campusano et al., 2001). Note however that it is also classified as a "non-cool core" cluster by Zhang et al. (2016), meaning that a central supermassive black hole shall not be present. AC114 has a higher fraction of blue, late-type galaxies compared with lower redshift clusters (up to 60% outside the core region; Couch et al., 1998; Sereno et al., 2010), making it a prototypical Butcher-Oemler cluster. Having many active members is a crucial property when studying the MZR based on H ii regions.
In the course of data reduction, we realised that abundance-sensitive absorption features can also be reliably measured in our VIMOS spectra, therefore a study of the stellar component of AC114 galaxies is presented in this work, while ISM abundances will be the subject of a forthcoming paper. It should be noted that a recent study of stellar populations can also be found in Rodriguez Del Pino et al. (2014), who targeted thirteen disk galaxies with integral field spectroscopy. While most galaxies in their sample do not display emission lines, they still host a young stellar population; this is not centrally concentrated, and it is therefore suggested that star formation was recently truncated by gradual processes such as ram-pressure stripping or weak galaxy-galaxy interactions.
The present study builds on a dynamical analysis of the velocities of galaxies in our sample, which was performed in Proust et al. (2015) hereafter referred to as Paper I. As explained in that paper, thanks to a sample that reaches 1 magnitude fainter (\(R\approx 21\)), and which is about 30% larger than those in previous studies, we could revisit the structure and dynamics of the cluster, which helped us select cluster members.
### Main results from Paper I
The mean redshift obtained in Paper I is \(z=0.31665\pm 0.0008\) and the velocity dispersion is \(\sigma=1893^{+73}_{-82}\) km s\({}^{-1}\) based on a catalogue of 524 velocities. The distribution in redshift of all galaxies is shown in Fig. 1 with bins of width \(z=0.01\) (magenta histogram). The cluster has a very elongated main radial filament spanning 12000 km s\({}^{-1}\) in redshift space. In addition, a radial foreground structure was detected within the central 0.5/h Mpc radius, recognizable as a redshift group at the same central redshift value. A background structure could also be identified (see also Fig. 2). Two hundred and sixty-five galaxy members were identified, which yield a dynamical mass \(M_{200}=(4.3\pm 0.7)\times 10^{15}\)M\({}_{\odot}\)/h for AC114 and \(M_{v}=(5.4\pm 0.7\pm 0.6)\times 10^{15}\) M\({}_{\odot}\)/h from the intrinsic velocity dispersion out to a radius of 3.98/h Mpc.
### Analysis of the stellar populations based on Paper I and new data
Once the redshift is known, the spectrum can be corrected to the rest-frame: this exercise led to the identification of a number of galaxies whose redshift value needed revision, mainly because of their low S/N values, in particular when the R value of the correlation peak is lower than or close to 3, except for emission-line objects (see Sect. 2). A few redshifts could be determined from the position of the most prominent emission and absorption lines: [O ii] and Ca ii H and K.
The new distribution in redshift of all galaxies is shown in Fig. 1 (black lines), where objects are clearly clustered in a few redshift intervals whose limits are identified by the vertical lines and horizontal segments. The limits are listed in Table 1. Galaxies belonging to the cluster are defined as those within the lower and upper limits of rows 3 and 4 of Table 1: they span a \(\sim\)5\(\sigma\) range of velocities.
In this paper, we reanalyse the data obtained in Paper I. Section 2 presents the observations and reduction of the data and section 3 analyses the stellar populations properties; these properties are used in section 4 to construct and interpret the mass-metallicity relation, and finally section 5 presents the conclusions based on the global results obtained for AC114.
## 2 Observations and reductions
A complete account of the observations and data reduction is given in paper I, so just a summary is given here. VIMOS (Le Fevre et al., 2003) was used to carry out observations in service mode at the Very Large Telescope, under program 083.A-0566, between August 16 and September 25, 2009 (see Table 2). Seven exposures for each grism HR-red and MR were obtained, for a total shutter time of \(\sim 4.4\)h for each setup. The MR grism has a spectral resolution of 580 for a 1'' slit, over the 500 - 1000 nm spectral range, while the HR-red grism has a spectral resolution of 2500 for a 1'' slit, over the 630 - 870 nm spectral range.
The galaxy selection was made from the pre-imaging frames of the cluster. To construct the masks, the SIMBAD catalogue was initially used without imposing any restriction criteria. As a first step we identified already known galaxies of the cluster. Then we selected the non-stellar objects in this region by eye in order to punch a maximum number of slits in each of the four quadrants. Such a visual inspection allows one to discriminate between extended objects and stars.
\begin{table}
\begin{tabular}{c c c c} \hline \hline \# & \(z\) & \(c\)\(z\) & Notes \\ & & km s\({}^{-1}\) & \\ \hline
1 & 0.1503 & 45,059 & \\
2 & 0.1900 & 56,961 & \\
3 & 0.2840 & 85,141 & lower limit \\
4 & 0.3310 & 99,231 & upper limit \\
5 & 0.4588 & 136,645 & \\
6 & 0.5487 & 164,496 & \\
7 & 0.6048 & 181,314 & \\
8 & 0.6678 & 200,201 & \\ \hline \end{tabular}
\end{table}
Table 1: Redshift limits used to group galaxies. Galaxies belonging to AC114 are defined as those within the lower and upper limits of rows 3 and 4.
Figure 1: Distribution in redshift of all galaxies. The vertical and horizontal segments mark the limits of redshift ranges that are used later in the analysis. These limits are consistent with redshift clustering presented in Paper I, which are based on a larger sample including literature data (magenta histogram).
\begin{table}
\begin{tabular}{c c c c c} \hline \hline Date & OB start & Exp. time (s) & Filter & Grism \\ \hline
2009-09-17 & 02:37 & 2250 & GG475 & HR red \\
2009-09-17 & 03:32 & 2250 & GG475 & HR red \\
2009-09-17 & 04:11 & 2250 & GG475 & HR red \\
2009-09-17 & 04:50 & 2250 & GG475 & HR red \\
2009-09-17 & 05:28 & 2250 & GG475 & HR red \\
2009-09-17 & 06:07 & 2250 & GG475 & HR red \\
2009-09-21 & 03:54 & 2250 & GG475 & HR red \\
2009-08-16 & 05:25 & 2250 & GG475 & MR \\
2009-08-16 & 06:15 & 2250 & GG475 & MR \\
2009-08-16 & 06:55 & 2250 & GG475 & MR \\
2009-08-21 & 07:24 & 2250 & GG475 & MR \\
2009-08-21 & 06:11 & 2250 & GG475 & MR \\
2009-09-25 & 04:02 & 2250 & GG475 & MR \\
2009-09-25 & 04:53 & 2250 & GG475 & MR \\ \hline \end{tabular}
\end{table}
Table 2: Observations log (OB = Observing Block).
### Data reduction
The VIMOS pipeline was used to reduce the data. We reduced separately the data taken with the HR-red and MR grisms. Each scientific frame was bias subtracted and flat field corrected, the cosmic rays were removed, and sky emission lines subtracted (the same procedure was done for the standard stars). The spectra were wavelength calibrated and seven frames for each VIMOS quadrant were combined. This gave us S/N = 20 per pixel on average at the continuum level on the final spectra. Since the continuum of most of the galaxies is too weak, we did not use the spectra extracted by the pipeline. Instead, we extracted one-dimensional spectra with MIDAS. Flux calibration was obtained with spectrophotometric standards from Hamuy et al. (1992, 1994).
In this work, only MR data are used, while HR spectroscopy is used in the forthcoming paper on the nebular abundances, to resolve N ii from H\(\alpha\) emission lines.
Apparent magnitudes in the R-band were obtained by computing the integrated flux of our spectra in the range 4897.5 Å to 9785.5 Å and calibrating it using objects in common with the superCOSMOS database (Maddox et al., 1990a,b). Photometry could also have been obtained from pre-imaging frames, but these are observed under variable sky conditions, so magnitudes would have to be calibrated on the superCOSMOS system anyway. The precision of our calibration was estimated in Paper I as \(\simeq 0.5\) magnitudes, which is comparable with that of superCOSMOS. Furthermore, synthetic photometry can be obtained with vastly more efficient means compared to reducing pre-imaging frames from scratch, and we show in the sections below that luminosities and masses obtained this way yield well-defined mass-metallicity relations. As shown in Paper I, our galaxy sample reaches luminosities as faint as \(R\sim 21\), but it starts to suffer from incompleteness at \(R\sim 20\).
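The synthetic photometry described above reduces to integrating each flux-calibrated spectrum over a fixed wavelength window and tying the resulting instrumental magnitudes to the superCOSMOS scale. A minimal sketch, assuming a simple trapezoidal integration and a single additive zero point derived from the objects in common with superCOSMOS, could look as follows (function and variable names are illustrative, not part of the original reduction):

```python
import numpy as np

def synthetic_r_magnitude(wavelength, flux, zero_point,
                          lam_min=4897.5, lam_max=9785.5):
    """Synthetic magnitude from the flux integrated in a fixed window.

    wavelength : wavelengths in Angstrom
    flux       : flux density in erg cm^-2 s^-1 A^-1
    zero_point : additive constant tying the scale to superCOSMOS R
    """
    in_window = (wavelength >= lam_min) & (wavelength <= lam_max)
    total_flux = np.trapz(flux[in_window], wavelength[in_window])
    return -2.5 * np.log10(total_flux) + zero_point

# The zero point would be fixed by minimising the median offset between
# these synthetic magnitudes and the superCOSMOS R magnitudes of the
# objects in common, as described in the text.
```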
We obtained a total of 163 redshifts, combining and checking those already obtained in Paper I. The complete redshift values with their individual error measurements are published in Table 5 and correspond to the highest R-value obtained from the cross-correlations (Tonry and Davis, 1979). UK-J \(B_{j}\) and UK-J \(R_{j}\) magnitudes are from superCOSMOS.
The content of this table is as follows:
* number of the object in each quadrant from slit position;
* right ascension (J2000);
* declination (J2000);
* UK-J \(B_{j}\) magnitude from superCOSMOS;
* UK-J \(R_{f}\) magnitude from superCOSMOS;
* \(R_{comp}\) computed magnitude from spectra;
* redshift;
* redshift error;
Figure 3: A zoom into the three-dimensional representation of AC114. The three segments show velocity values from 90,000 km s\({}^{-1}\) to 100,000 km s\({}^{-1}\) in steps of 5,000 km s\({}^{-1}\). The transversal scale has been amplified by a factor \(\sim 10\) compared to the radial scale, but it is still clear that the cluster is a very elongated structure. Note that gaps in the object density are artefacts of the partial sky coverage of the VIMOS footprint.
Figure 2: Three-dimensional representation of objects in the sky area covered by our VIMOS observations, with blue dots identifying active galaxies, and red dots identifying passive ones. AC114 is the clump of objects near \(cz=100,000\) km s\({}^{-1}\). The transversal scale has been greatly amplified to better show structures surrounding the cluster: at the redshift of the cluster, the 16 arcmin RA span of VIMOS converts into a linear distance of 4.8 Mpc, while the linear distance spanned by the redshift limits of the cluster is \(\sim 300\) Mpc (see Wright, 2006, for the calculations).
* R value from the cross-correlation (Tonry & Davis, 1979);
* notes
In the course of data analyses, we identified two objects that were removed from the table, as their spectra tricked the cross-correlation algorithm, which returned wrong redshift values; indeed objects Q4/39 and Q4/59 were identified as M-dwarf stars, after finding that their \(D_{n}(4000)\) values were too high for their supposed mass (see below).
Using data from Table 5, Fig. 2 shows the spatial distribution of all galaxies in our VIMOS sample with a measured spectrum, with blue and red dots marking active and passive galaxies, respectively. The cluster is clearly visible as the dense agglomeration of objects near \(cz=100,000\) km s\({}^{-1}\), and it appears that most of its galaxies are not actively forming stars, although in the cluster periphery there are some that do. Couch et al. (2001) note that in AC114 the star formation rate from the H\(\alpha\) emission does not exceed 4 M\({}_{\odot}\) yr\({}^{-1}\) and that the H\(\alpha\) luminosity function is an order of magnitude below that observed for field galaxies at the same redshift. As anticipated in Sec. 1.1, Fig. 2 also suggests the existence of two other structures, which appear as filaments extending for tens of thousands of km s\({}^{-1}\), one of them seemingly connected to the cluster core. Figure 3 shows a zoom into the three-dimensional representation of the cluster. It is clear that the cluster has a very elongated structure spanning \(\simeq 10,000\) km s\({}^{-1}\).
### Classification of galaxies
After correcting spectra to rest-frame the 163 galaxies were classified as 'active' or 'passive' based on the presence/absence of [O ii]\(\lambda 3727\), [O iii]\(\lambda 4959,5007\) and/or H\(\beta\) emission lines, depending on the galaxy redshift, with the following procedure.
The first step consisted in blindly scanning all spectra for the presence of [O iii]\(\lambda 5007\) and/or H\(\beta\): a linear fit to the continuum was subtracted, and a maximum in flux was searched for in the window \(4800\leq\lambda\leq 4841\) for H\(\beta\) and in the window \(4980\leq\lambda\leq 5090\) for [O iii]\(\lambda 5007\). A Gaussian fit was then attempted, provided that the peak in flux is three times larger than the continuum noise (assumed to be the semi-quartile of the flux variation in the spectral window). To estimate errors, the fit was repeated for the nominal continuum level and for a continuum raised or lowered by the amount of noise. The result of this procedure was that, for each spectrum where a fit is possible, and for each of the two lines, the central wavelength, its FWHM and error, and the area of the Gaussian and its error are obtained. After this first screening, a quality parameter \(q\) is assigned to each spectrum, defined as the sum, for the two lines, of their Gaussian area divided by its error. Spectra are then sorted by fit quality, and plotted one by one to visually check the quality of the fit. We find that spectra with \(q\geq 4\) show unambiguous presence of emission lines, so galaxies that satisfy this criterion were classified as 'active', while the rest were classified as 'passive'. While performing the visual checks, a few galaxies preliminarily classified as 'passive' were noticed to have a clear [O ii]\(\lambda 3727\) emission line, and were added to the list of 'active' objects: their spectra are shown in Fig. A1 of Appendix A. While some passive galaxies might have very weak emission lines that are not detected by our methods, the sections below show that the two galaxy classes have very different properties, thus our classification must be broadly correct.
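A minimal sketch of this screening step is given below, assuming rest-frame spectra sampled on a regular wavelength grid in Å; the Gaussian fit relies on scipy.optimize.curve_fit, the search windows are those quoted above, and the error on the line area is a simplified propagation of the fit covariance rather than the continuum-shifting procedure described in the text.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(lam, amp, centre, sigma):
    return amp * np.exp(-0.5 * ((lam - centre) / sigma) ** 2)

def fit_line(lam, flux, window):
    """Fit a Gaussian inside `window`; return (area, error) or None."""
    sel = (lam >= window[0]) & (lam <= window[1])
    x, y = lam[sel], flux[sel]
    cont = np.polyval(np.polyfit(x, y, 1), x)     # linear continuum fit
    resid = y - cont
    # continuum noise ~ semi-quartile of the flux variation in the window
    noise = 0.5 * (np.percentile(resid, 75) - np.percentile(resid, 25))
    if resid.max() < 3.0 * noise:                 # no significant peak
        return None
    p0 = [resid.max(), x[np.argmax(resid)], 2.0]
    popt, pcov = curve_fit(gaussian, x, resid, p0=p0)
    area = popt[0] * abs(popt[2]) * np.sqrt(2.0 * np.pi)
    err = area * np.hypot(np.sqrt(pcov[0, 0]) / popt[0],
                          np.sqrt(pcov[2, 2]) / abs(popt[2]))
    return area, err

def quality_parameter(lam, flux):
    """q = sum over Hbeta and [O III] 5007 of (Gaussian area / its error)."""
    q = 0.0
    for window in ((4800.0, 4841.0), (4980.0, 5090.0)):
        result = fit_line(lam, flux, window)
        if result is not None:
            q += result[0] / result[1]
    return q        # spectra with q >= 4 are classified as 'active'
```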
Table 4 summarises the results, and adds further remarks on some galaxies, such as those actually identified as local M-dwarf stars, and those having low-quality spectra that are not further considered in this paper.
In order to show the redshift evolution of their spectral energy distribution (SED), Figures 4 and 5 show the co-added spectra of the two galaxy classes, in the redshift ranges defined in Table 1.
It can be seen that, in the redshift range of AC114, the majority of galaxies are passive: there are 30 galaxies classified as such (59%), versus 21 classified as active (41%). This confirms the visual impression given by Fig. 2.
Figure 4: For each redshift range from Table 1, this figure shows the co-added spectra of active galaxies. The vertical blue bars mark the position of the following emission lines: [S ii] \(\lambda\lambda 6716,6731\), H\(\alpha\), [O iii] \(\lambda\lambda 4959,5007\), H\(\beta\), H\(\gamma\), H\(\delta\), [O ii] \(\lambda 3727\); and Ca H and K absorption lines (3968Å, 3934Å).
Figure 5: Same as Fig. 4 for co-added spectra of passive galaxies.
## 3 Stellar Populations
### Representative age and metallicity of active and passive galaxies in AC114
As a first step towards the characterisation of galaxies in AC114, we fitted the co-added spectra obtained above in the redshift range of the cluster, with stellar population synthesis models from Bruzual & Charlot (2003), following the prescriptions of Saviane & Jerjen (2007). Briefly, model spectra were degraded to the resolution of our VIMOS data, and then for each model spectrum, the steps were the following:
* select the spectral range from 3800 to 6000A;
* remove emission lines from co-added spectra, before fitting: i.e., remove regions near H\(\beta\) and [O iii] ;
* normalise both model and co-added spectra to maximum flux within the fit range;
* compute ratio of model to observed spectrum;
* compute average and dispersion of spectral ratio;
* select the best fit model as the one that gives the lowest dispersion of the spectral ratio; a minimal sketch of this grid search is given below.
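The sketch assumes the Bruzual & Charlot (2003) spectra have already been degraded to the VIMOS resolution and resampled onto the observed wavelength grid; the model grid is represented as a dictionary keyed by (age, metallicity), and the emission-line windows excluded from the fit are indicative values, both being assumptions about the data layout rather than part of the original procedure.

```python
import numpy as np

def ratio_dispersion(obs_flux, model_flux, mask):
    """Dispersion of the model-to-observation ratio over the fit range."""
    obs = obs_flux / obs_flux[mask].max()        # normalise to maximum flux
    mod = model_flux / model_flux[mask].max()
    return np.std(mod[mask] / obs[mask])

def best_ssp(lam, obs_flux, models,
             emission_windows=((4830.0, 4890.0), (4940.0, 5030.0))):
    """Return the (age, Z) key of the model with the lowest ratio dispersion.

    `models` maps (age_gyr, Z) -> model flux sampled on the same grid as `lam`.
    """
    mask = (lam >= 3800.0) & (lam <= 6000.0)
    for lo, hi in emission_windows:              # exclude Hbeta and [O III]
        mask &= ~((lam >= lo) & (lam <= hi))
    scores = {key: ratio_dispersion(obs_flux, flux, mask)
              for key, flux in models.items()}
    return min(scores, key=scores.get)           # e.g. (10.0, 0.02) for passive galaxies
```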
The results of this procedure are shown in Figures 6 and 7, for active and passive galaxies, respectively: the typical age of active galaxies in the cluster is 2 Gyr, and their metallicity is \(Z=0.008\); while in the case of passive galaxies, the best fit is reached for an age of 10 Gyr, and a metallicity \(Z=0.02\). Galaxies cannot be older than the universe, so it is reassuring to see that the age of the universe at the redshift of AC114 is indeed no less than 10 Gyr (see Figures 15 and 16). The best-fit models are shown overlaid to the co-added spectra in figures 8 and 9. Figures 6 and 7 show that adopting model ages and metallicities closest to the best fits, would also give good results: this uncertainty is included in the calculation of M/L values in Sect. 3.3.
This exercise reveals that passive galaxies must have formed the bulk of their population early on, in a strong burst that quickly enriched their ISM to solar metallicities: indeed, the remarkably good fit of a simple stellar population model (SSP) indicates that those stars were formed in a relatively short time span. On the contrary, the low stellar metallicity of active galaxies is an indication that star formation (SF) proceeded at a low pace in these systems, which are indeed still forming stars. Still, the good fit of a SSP to the observed spectrum is telling us that most stars in active galaxies were also formed rapidly, although later than those in passive galaxies.
Passive galaxies are in general more massive than active ones (see below), but physical sizes do not vary much between the two classes; therefore one expects a higher gas density in passive galaxies, which would explain their higher past SFR, according to the Schmidt-Kennicutt law (Kennicutt, 1998). Figures 2 and 3 show that passive
Figure 8: The co-added spectrum of all passive galaxies in AC114 redshift range (black) fitted with the best model of Bruzual & Charlot (2003) in red. This gives the typical age and metallicity of the underlying stellar population.
Figure 6: To select the best simple stellar population (SSP) model that fits the co-added spectrum of active galaxies in AC114, the dispersion of the model-to-data ratio is plotted as a function of age and metallicity: this figure shows that the minimum value is reached for a model age of 2 Gyr, and a metallicity Z= 0.008.
Figure 7: Same as Fig. 6, for the co-added spectrum of passive galaxies in AC114. In this case, the best model fit is reached for an age of 10 Gyr, and a metallicity Z= 0.02.
Figure 9: Same as figure 8 for active galaxies in the cluster.
galaxies are concentrated near the cluster centre, where mass must have been accumulating in the early formation epochs, thus easing the formation of the largest objects (see also Fig. 6 in Proust et al. 2015). In addition, SF in the densest regions of galaxy clusters is quenched more effectively than in their outskirts (see, e.g., Deshev et al. 2017): therefore the active/passive dichotomy can also be explained, at least partially, by an effect of the environment.
### Comparison with Couch & Sharples (1987)
To confirm the results above, it is instructive to compare them to Couch & Sharples (1987, hereafter CS87). They classify their galaxy sample into 'red' and 'blue' objects, based on their position in the colour-magnitude plot, which is shown in Fig. 10 for the subset of galaxies having published colours in superCOSMOS. In CS87, blue galaxies are defined as having colours smaller than \(B_{\rm J}-R_{\rm F}=2\), which is represented by the dotted line in the figure; the solid line represents the 'red sequence' as defined in Paper I, which is similar to the one found by CS87. Passive and active galaxies are plotted with red or blue symbols, so the figure demonstrates that most active galaxies would be classified as 'blue' by CS87, while passive galaxies gather around the red sequence.
Most red galaxies in CS87 have spectra comparable to nearby E/S0 galaxies, and indeed their comparison to models of population synthesis yields a typical age of 10 Gyr. They also find 15% of red galaxies having strong H\(\delta\) absorption, which are interpreted as hosting a short 0.5 Gyr burst with an age of 1.5 Gyr. These results are consistent with our findings above, where the typical age of passive galaxies was indeed found to be 10 Gyr, with a few exceptions identified in Sec. 3.5.3. This age is used in Sect. 3.3 to compute M/L ratios, therefore we can expect that masses of passive galaxies are on the average correct: indeed the tight sequence found in Sect. 3.5.2 and Fig. 17 validate this conclusion.
As expected from their spread in colour, blue galaxies in CS87 have a more diverse SF history and interestingly, almost half of them are pure absorption-line objects. By comparison to local galaxies and models, their emission line objects are split between having spectra comparable to nearby spiral galaxies (their type 2) and hosting a current SF burst (their type 1). Finally, blue galaxies with no emission lines are interpreted as having experienced a SF burst of varying strength, up to 1.5 Gyr prior to the epoch of observation. The stacked spectrum of our active galaxies, which by definition are emission-line objects, must be a mix of type 1 and type 2 of CS87, therefore the age of 2 Gyr found above is a reasonable guess. Once more, this is reinforced by Fig. 17, which shows a well-defined trend of metallicity index versus masses. Note also that some objects do require a small correction to their ages, as discussed in Sect. 3.5.3.
Only four galaxies are in common between this work and CS87, and they all belong to the red sequence, with normal H\(\delta\) absorption. A larger overlap in the two sets would be needed for a thorough comparison, nevertheless in Appendix C we show that our spectra match the classification of CS87.
In the context of this discussion, it is also interesting to examine 'postage stamps' of our targets as shown in figures 11 and 12, for passive and active galaxies, respectively. As expected for an E/S0 type, most passive galaxies have regular shapes, but there are some galaxies that appear to be interacting (at least in projection) with nearby smaller objects (see image caption). It is interesting to note that two of these potentially interacting galaxies (44 and 49) present an anomalous \(D_{n}(4000)\) index, as discussed in Section 3.5.3, which means that their ages deviate from the typical one of their class. Conversely, galaxies 64 and 21 also have a deviant \(D_{n}(4000)\) index, but they appear to be isolated in Fig. 11. Active galaxies appear more diffuse than passive ones, and contrary to passive ones, objects with anomalous \(D_{n}(4000)\) index do not have close companions but rather appear irregular in shape (numbers 36, 56, 48, 40, and 26). The implications of galaxy morphology will be examined in more detail in a forthcoming paper where nebular abundances will be obtained (Andrade et al., in preparation), and contrasted with the stellar abundances obtained in this work.
### Mass-to-Light ratios
Having in hand the age of the stellar populations, the next step is the computation of their mass-to-light ratios. From the plots in Bruzual & Charlot (2003, hereafter BC03) we obtained Fig. 11, which shows M/L\({}_{V}\) vs. age: as discussed in BC03, the mass-to-light ratio is affected by the choice of the initial mass function (IMF), spectral library, theoretical evolutionary tracks,
Figure 11: V-band mass-to-light ratio as a function of the age of the population, from BC2003: the band represents uncertainties introduced by the choice of different initial IMF, spectral library, theoretical evolutionary tracks, and metallicities. Based on the age returned by the best-fit models, representative M/L ratios can be estimated for passive and active galaxies, which are shown by the dots and error bars. Arrows in the lower right corner show that, for ages greater than \(\sim 0.1\) Gyr, a change in age of 0.5 dex translates into a similar 0.5 dex change in M/L ratio.
Figure 10: Colour-magnitude diagram for AC114 galaxies with photometry in superCOSMOS database (Maddox et al. 1990b). The solid line is the red sequence as defined in Paper I, and the horizontal dotted line represents the separation between red and blue galaxies as in CS87.
and metallicity, so the shaded band in the figure represents those uncertainties, estimated as \(1/4\) of the full variation interval.
As shown in Table 2, the most complete photometric catalogue for our galaxies is in the R-band, therefore mass-to-light ratios were converted from V- to R-band using the expression: \(M/L_{R}=M/L_{V}\times 10^{-0.4\,(V-R)}\), where the \(V-R\) colour of an old population was taken as the average of globular clusters in the Harris (1996) catalogue: \(V-R=0.64\pm 0.17\). The resulting ratios are then \(M/L_{R}=0.64\pm 0.26\) and \(M/L_{R}=2.41\pm 1.12\) for active and passive galaxies, respectively.
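The band conversion is a one-line operation; the short sketch below simply reproduces it with the colour quoted above and the conversion factor it implies.

```python
def ml_v_to_r(ml_v, v_minus_r=0.64):
    """M/L_R = M/L_V * 10^(-0.4 (V - R)), with V - R from Harris (1996)."""
    return ml_v * 10.0 ** (-0.4 * v_minus_r)

# For V - R = 0.64 the conversion factor is 10^(-0.4 * 0.64) ~ 0.55;
# applied to the V-band ratios read off Fig. 11 it gives the quoted
# M/L_R ~ 0.64 (active) and ~ 2.41 (passive).
```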
### Galaxy distances, luminosities, and masses
To compute galaxy luminosities, apparent magnitudes were converted to absolute ones with the usual \(M_{R}=R-\mu\), where the distance modulus is \(\mu=5\log D_{L}-5\) (with \(D_{L}\) expressed in pc). The luminosity distance \(D_{L}\) is a function of redshift, and it was evaluated following Wright (2006), with input parameters \(H_{0}=69.6\), \(\Omega_{\rm matter}=0.286\), and \(\Omega_{\rm vac}=0.714\) (flat universe). After computing the relation for a few selected values, it was interpolated by a quadratic function, as shown in Fig. 12. We find:
\(D_{L}=2245.7\,z^{2}+4628.4\,z-20.838\)
Figure 16: Mass of active galaxies as a function of redshift. The dotted line represents a linear fit through the data, while the contours represent the general distribution of passive galaxies in the same plot of Fig. 15. Errors in mass are taken from Table 3, while errors in redshift are smaller than the symbols.
Figure 12: Luminosity distance, expressed in Mpc, vs. redshift: the curve is a quadratic interpolation of selected values calculated using the cosmology calculator of Wright (2006).
Figure 13: Comparing absolute \(R\) magnitudes vs. integrated flux, for passive galaxies in AC114 redshift range. As expected, the two quantities are proportional to each other, but the correlation breaks down at the lower end of total fluxes. An uncertainty of 0.5 magnitudes has been assumed both for the total flux and the R-band photometry.
Figure 15: Mass of passive galaxies as a function of redshift and age of the universe. The dotted line represents a linear fit through the data. Errors in mass are taken from Table 3, while errors in redshift are smaller than the symbols.
which is valid for \(0.05\leq z\leq 0.7\). At the mean redshift quoted in section 1.1, the distance is \(D_{L}=1670\pm 17\) Mpc
Absolute luminosities were then computed as \(L_{R}=10^{-0.4\,(M_{R}-M_{R_{0}})}\), with \(M_{R_{0}}=4.5\), so that luminosities are expressed in solar units. Finally, luminosities were converted into masses with the mass-to-light ratios computed in Sect. 3.3. Uncertainties on mass values were computed by adding in quadrature the uncertainties from photometry and from the mass-to-light ratios.
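The full chain from apparent magnitude and redshift to stellar mass then takes only a few lines; the sketch below uses the quadratic interpolation of \(D_{L}\), the constant \(M_{R_{0}}=4.5\) and the representative mass-to-light ratios quoted above, with the distance modulus evaluated for \(D_{L}\) in parsec (function names are illustrative).

```python
import numpy as np

def luminosity_distance_mpc(z):
    """Quadratic interpolation of D_L(z) in Mpc, valid for 0.05 <= z <= 0.7."""
    return 2245.7 * z**2 + 4628.4 * z - 20.838

def stellar_mass(r_mag, z, ml_r, m_r0=4.5):
    """Stellar mass in solar units from the apparent R magnitude and redshift."""
    d_l_pc = luminosity_distance_mpc(z) * 1.0e6   # Mpc -> pc
    mu = 5.0 * np.log10(d_l_pc) - 5.0             # distance modulus
    abs_r = r_mag - mu
    lum_r = 10.0 ** (-0.4 * (abs_r - m_r0))       # L_R in solar units
    return ml_r * lum_r

# Example: Q4-15 (R = 20.9, z = 0.308, active, M/L_R = 0.64) comes out at
# ~4.6e9 solar masses, in agreement with Table 4.
print(stellar_mass(20.9, 0.308, 0.64))
```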
As a consistency check between photometric and spectral properties, the integrated spectral flux is compared to absolute \(R\) magnitudes in Figures 13 and 14 for the two classes of galaxies. It is reassuring to see that there is a good correlation between the two quantities: the correlation coefficient is \(-0.94\), with a dispersion of \(\sim 0.24\) magnitudes for passive galaxies, and the same quantities are \(-0.85\) and \(\sim 0.4\) magnitudes for active ones. However the correlation breaks down for galaxies at the faint end of the luminosity range; therefore for the next analysis we retained only objects with \(\Sigma({\rm flux})>10^{-15}\,{\rm erg\,cm^{-2}sec^{-1}}\).
Figure 15 shows that passive galaxies in AC114 span an order of magnitude in mass; interestingly, looser cluster members at higher and lower redshift define a mild tendency for masses to increase towards more recent times: as the age of the universe increases from 9.8 to 10.3 Gyr, their mass increases from \(0.4\times 10^{11}\)\(M_{\odot}\) to \(10^{11}\)\(M_{\odot}\). It could be inferred that some galaxies continued forming stars both before and after a cluster-wide main SF episode.
On the contrary, Fig. 16 shows that active galaxies closer to us have lower masses than objects located in the main body of the cluster, which also span a smaller mass range of \(\sim 0.5\) dex, and are generally less massive than passive galaxies in the same region. This confirms the idea that active galaxies formed later than passive ones, and did not participate in the initial strong galaxy formation epoch.
### Metallicities
#### 3.5.1 Introduction of the D4000 index
Having measured the masses of AC114 galaxies in the previous sections, metallicities were then estimated using the \(D_{n}(4000)\) index, which measures the ratio of the spectral continuum on the red and blue sides of the 4000 Å discontinuity. As in Balogh et al. (1999), the average of the continuum and its dispersion were calculated within the bands 3850-3950 Å and 4000-4100 Å, both for the galaxy sample and the models. If we call \(\overline{F}_{\rm red}\) and \(\overline{F}_{\rm blue}\) the average continuum level in the two bands computed over \(n_{\rm pix}\) spectral elements, and \(\sigma_{\rm red}\) and \(\sigma_{\rm blue}\) their dispersion, then \(D_{n}(4000)\) and its uncertainty are computed as \(D_{n}(4000)=\overline{F}_{\rm red}/\overline{F}_{\rm blue}\) and \(\sigma_{D_{n}(4000)}=D_{n}(4000)\times(\sigma_{\rm F,blue}/\overline{F}_{\rm blue}+\sigma_{\rm F,red}/\overline{F}_{\rm red})\), where, for each band, \(\sigma_{\rm F}=\sigma/\sqrt{n_{\rm pix}}\) is the error on the mean continuum level.
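A minimal sketch of the index measurement, following the definition above and assuming a rest-frame spectrum sampled on a wavelength grid in Å, is given below.

```python
import numpy as np

def d_n_4000(lam, flux):
    """D_n(4000) and its uncertainty from the 3850-3950 and 4000-4100 A bands."""
    def band(lo, hi):
        sel = (lam >= lo) & (lam <= hi)
        mean = flux[sel].mean()
        sigma_f = flux[sel].std() / np.sqrt(sel.sum())   # error on the mean
        return mean, sigma_f

    f_blue, s_blue = band(3850.0, 3950.0)
    f_red, s_red = band(4000.0, 4100.0)
    index = f_red / f_blue
    error = index * (s_blue / f_blue + s_red / f_red)
    return index, error
```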
#### 3.5.2 \(D_{n}(4000)\) vs. mass for galaxy spectra
Figure 17 shows the results of our measurements, with \(D_{n}(4000)\) plotted as a function of galaxy mass. It appears that passive and active galaxies occupy distinct places in such a diagram, with active galaxies generally having lower masses and smaller \(D_{n}(4000)\) values, compared with passive galaxies. In the hypothesis that galaxies within a class have a common age (Sec. 3.1), the variation of the index must be mainly due to metallicity differences, thus the graph can be interpreted as passive galaxies having higher metallicities than active ones.
Figure 17 also shows that some objects lay outside the general trend defined by galaxies of comparable mass, which can be interpreted by using Fig. 19: it is likely that galaxies with larger/smaller values of \(D_{n}(4000)\) have typical ages that are larger/smaller than the typical age inferred in Sec. 3.1.
#### 3.5.3 Special cases of \(D_{n}(4000)\)
An inspection of Fig. 18 shows that such an interpretation must be correct: active galaxies with large \(D_{n}(4000)\) values have red SEDs which resemble those of passive galaxies of similar index values, thus having ages greater than the typical 2 Gyr. Furthermore, their spectra have very weak [O ii]\(\lambda 3727\) and H\(\alpha\) emission lines, which indicates a low SFR. At the opposite extreme, active galaxies with relatively low index values have blue SEDs and very prominent emission lines, indicative of an age younger than 2 Gyr. For these galaxies, a second possibility is a metallicity which is intrinsically lower than that of galaxies of comparable mass, for example due to accretion of fresh gas: indeed the same Fig. 18 shows a low-luminosity tail emerging from object Q4/26, so this option cannot be excluded.
As Fig. 11 demonstrates, an age increase/decrease translates into a larger/smaller M/L ratio, with a change of 0.1 dex in M/L for every change of 0.1 dex in age. Thus, if galaxies with large \(D_{n}(4000)\) are older than the others, their luminosities must be converted into larger masses: from Fig. 17, it can be inferred that corrections of \(\sim\pm 0.3\) dex in mass, i.e. in M/L, would bring deviant galaxies back into the general trend. Such corrections are within the uncertainties of the measurements.
Such a change would bring the age of active galaxies with large \(D_{n}(4000)\) close to that of passive galaxies, in agreement with the conclusion inferred from their spectra. At the other side, active galaxies with a low discontinuity index would re-enter the general trend, if their ages were \(\sim 1\) Gyr.
Looking at passive galaxies, object Q4/49 has a \(D_{n}(4000)\) value
Figure 17: Relationship between \(D_{n}(4000)\) and galaxy mass, for passive (red dots) and active (blue dots) galaxies. The straight lines and shaded areas represent weighted linear fits to the data, and the dispersion around the fit. The top panel shows the original measurements, with the most evident outliers marked with large circles. To these objects, a correction in mass has been applied, as explained in the text, and their new position can be seen in the bottom panel. Their spectra are plotted in Fig. 18.
\begin{table}
\begin{tabular}{l c c c c c c c c c c c} \hline \hline Q\#-s\# & RA & DEC & \(z\) & \(R\) & \(M_{R}\) & \(L_{R}\) & \(\mathcal{M}/\mathcal{M}_{\odot}\pm\) & \(D_{n}(4000)\pm\) & [mH] \(\pm\) \\ \hline Q4-15 & 344.608 & -34.731 & 0.30800 & 20.900 & -20.140 & 0.720 & 0.460 & 0.190 & 1.117 & 0.063 & -2.497 & 0.399 \\ Q4-14 & 344.673 & -34.730 & 0.30460 & 20.700 & -20.320 & 0.840 & 0.540 & 0.220 & 0.931 & 0.071 & -3.674 & 0.449 \\ Q2-38 & 344.846 & -34.663 & 0.29870 & 20.100 & -20.870 & 1.400 & 0.900 & 0.370 & 1.013 & 0.053 & -3.155 & 0.335 \\ Q4-19 & 344.609 & -34.737 & 0.31070 & 20.200 & -20.870 & 1.400 & 0.900 & 0.360 & 0.996 & 0.048 & -3.262 & 0.304 \\ Q4-46 & 344.712 & -34.784 & 0.32950 & 20.100 & -21.110 & 1.760 & 1.130 & 0.460 & 1.155 & 0.056 & -2.256 & 0.354 \\ Q1-18 & 344.846 & -34.751 & 0.31300 & 19.700 & -21.380 & 2.260 & 1.450 & 0.590 & 1.137 & 0.073 & -2.370 & 0.462 \\ Q3-1 & 344.673 & -34.555 & 0.32950 & 19.800 & -21.410 & 2.320 & 1.480 & 0.600 & 1.175 & 0.063 & -2.129 & 0.399 \\ Q1-37 & 344.849 & -34.791 & 0.31600 & 19.600 & -21.510 & 2.530 & 1.620 & 0.660 & 1.290 & 0.056 & -1.402 & 0.354 \\ Q4-26 & 344.675 & -34.748 & 0.32460 & 18.900 & -22.280 & 5.130 & 1.644 & 1.330 & 1.071 & 0.031 & -2.410 & 0.230 \\ Q1-30 & 344.901 & -34.779 & 0.31860 & 19.600 & -21.530 & 2.580 & 1.650 & 0.670 & 1.251 & 0.066 & -1.648 & 0.418 \\ Q2-32 & 344.854 & -34.643 & 0.39019 & 19.500 & -21.550 & 2.640 & 1.690 & 0.690 & 1.174 & 0.055 & -2.136 & 0.348 \\ Q4-22 & 344.666 & -34.741 & 0.31450 & 19.400 & -21.700 & 3.010 & 1.930 & 0.780 & 1.137 & 0.059 & -2.370 & 0.373 \\ Q4-48 & 344.621 & -34.787 & 0.31460 & 20.000 & -21.100 & 1.730 & 2.215 & 0.450 & 1.470 & 0.088 & -0.726 & 0.392 \\ Q3-12 & 344.654 & -34.583 & 0.30940 & 19.000 & -22.060 & 4.190 & 2.680 & 1.090 & 1.227 & 0.051 & -1.800 & 0.323 \\ Q4-40 & 344.604 & -34.772 & 0.31780 & 19.000 & -22.120 & 4.460 & 2.850 & 1.160 & 1.225 & 0.037 & -1.813 & 0.234 \\ Q4-60 & 344.685 & -34.806 & 0.31830 & 19.000 & -22.130 & 4.470 & 2.860 & 1.160 & 1.224 & 0.039 & -1.819 & 0.247 \\ Q1-40 & 344.848 & -34.799 & 0.32280 & 18.100 & -23.060 & 10.580 & 3.393 & 2.750 & 1.081 & 0.047 & -2.335 & 0.348 \\ Q4-36 & 344.650 & -34.764 & 0.31310 & 18.400 & -22.690 & 7.480 & 9.557 & 1.950 & 1.757 & 0.094 & -0.025 & 0.132 \\ Q4-56 & 344.684 & -34.800 & 0.31380 & 18.400 & -22.690 & 7.520 & 9.597 & 1.960 & 1.743 & 0.095 & -0.039 & 0.149 \\ \hline \end{tabular}
\end{table}
Table 4: Main characteristics of active galaxies entering the mass-metallicity relation. Luminosities and masses are given in \(10^{10}\) solar units. Galaxies are sorted by luminosity, from fainter to brighter. Typical error on \(R\) magnitudes is 0.5, which yields \(\sim 50\%\) errors on luminosities and masses, after accounting for the uncertainty in mass-to-light ratios.
\begin{table}
\begin{tabular}{l c c c c c c c c c c c} \hline \hline Q\#-s\# & RA & DEC & \(z\) & \(R\) & \(M_{R}\) & \(L_{R}\) & \(\mathcal{M}/\mathcal{M}_{\odot}\pm\) & \(D_{n}(4000)\pm\) & [mH] \(\pm\) \\ \hline Q4-45 & 344.652 & -34.781 & 0.31390 & 20.600 & -20.490 & 0.990 & 2.390 & 1.110 & 1.589 & 0.099 & -0.668 & 0.308 \\ Q4-23 & 344.646 & -34.743 & 0.32270 & 20.600 & -20.560 & 1.060 & 2.550 & 1.180 & 1.387 & 0.095 & -1.355 & 0.402 \\ Q4-24 & 344.638 & -34.744 & 0.31450 & 20.500 & -20.600 & 1.090 & 2.630 & 1.220 & 1.438 & 0.076 & -1.160 & 0.296 \\ Q4-16 & 344.623 & -34.732 & 0.31850 & 20.400 & -20.730 & 1.230 & 2.970 & 1.380 & 1.660 & 0.121 & -0.481 & 0.336 \\ Q4-21 & 344.711 & -34.738 & 0.30610 & 19.500 & -21.530 & 2.580 & 3.117 & 1.448 & 1.402 & 0.086 & -1.022 & 0.445 \\ Q3-26 & 344.618 & -34.614 & 0.31480 & 20.200 & -20.900 & 1.440 & 3.480 & 1.620 & 1.494 & 0.096 & -0.962 & 0.349 \\ Q4-43 & 344.620 & -34.778 & 0.31720 & 20.200 & -20.920 & 1.470 & 3.540 & 1.650 & 1.394 & 0.084 & -1.327 & 0.350 \\ Q4-63 & 344.719 & -34.811 & 0.31870 & 20.200 & -20.930 & 1.490 & 3.580 & 1.660 & 1.386 & 0.082 & -1.359 & 0.345 \\ Q4-28 & 344.612 & -34.751 & 0.31470 & 20.000 & -21.100 & 1.730 & 4.180 & 1.940 & 1.592 & 0.090 & -0.659 & 0.276 \\ Q2
and SED close to that of objects Q4/44 and Q4/64: again, a correction of \(\sim+0.3\) dex in M/L, corresponding to a correction of \(\sim+0.3\) dex in age, would bring its mass in agreement with the general trend. Conversely, galaxies with relatively low index values have bluer SEDs than galaxies with masses initially estimated to be the same: a reduction in age of a few Gyr would thus move them to the left in the figure, and reconcile \(D_{n}(4000)\) measurements with estimated masses.
The lower panel of Fig. 17 shows how the mass and hence age corrections proposed above improve the definition of the loci occupied by the two classes of galaxies.
#### 3.5.4 \(D_{n}(4000)\) for model spectra
To convert \(D_{n}(4000)\) values to metallicities, a calibration of the index was obtained from BC03 models, as shown in Fig. 19. It can be seen that a given value of \(D_{n}(4000)\) can correspond to a range of metallicities, depending on the age of the population: for most active and passive galaxies, the calibration for ages 2 Gyr and 10 Gyr were used. Calibrations for 5 Gyr and 20 Gyr (in a relative extrapolation) were used for passive galaxies needing correction to their masses as explained in Sect. 3.5.3, while for active galaxies calibrations for 1 Gyr and 5 Gyr were used.
The figure also shows that \(D_{n}(4000)\) measured for active galaxies reaches values lower than those spanned by theoretical models. Although the linear extrapolation looks relatively safe, metallicities lower than [m/H]\(\sim-2.5\) must be regarded as uncertain.
Metallicities can now be added to the main parameters of passive and active galaxies in AC114, which are summarised in Tables 3 and 4. It is also useful to be able to inspect the morphology of each object, therefore Figs. 11 and 12 in Appendix B show images of the same galaxies, extracted from the VIMOS pre-imaging.
## 4 Discussion
Taking the key quantities from the aforementioned tables, we can now use them to define the evolutionary status of galaxies in AC 114. In particular, we can construct the mass-metallicity relation, and interpret it by way of a simple model of chemical evolution: it will tell us that both the mass of a galaxy, and its location within the cluster, are key drivers of evolution.
### Mass-metallicity relation
As subsequent stellar generations are created within a galaxy, they enrich the ISM and build the stellar mass at the expense of gas mass. It is therefore expected that mass and metallicity grow in time, and if galaxies were able to perform the gas conversion at the
Figure 19: Relationship between \(D_{n}(4000)\) and [m/H] from the BC03 model spectra, for ages 1, 2, 5, 10, and 20 Gyr, going from left to right. The shaded areas represent linear (for ages \(\leq\) 2 Gyr) and quadratic fits to the points, and the dispersion around the fits. The vertical blue dotted segment marks the smaller value of \(D_{n}(4000)\) for active galaxies, while the vertical red dotted segments mark the range of \(D_{n}(4000)\) for passive galaxies.
Figure 18: In the left two panels, spectra of passive galaxies having \(D_{n}(4000)\) values that are larger or smaller than those of galaxies of comparable mass, are plotted in the left-hand panels, with index values increasing from bottom to top. The right-hand panels plot spectra of ‘deviant’ active galaxies. Within each panel, the \(D_{n}(4000)\) value is printed in the top left corner, following the galaxy identification. In each spectrum, the blue and red bands highlight the wavelength range that is used to compute the ratio of continuum near the 4000 Å break. In the right two panels, images of the same galaxies, extracted from the VIMOS preimaging are displayed.
same rate, isolated galaxies of the same age would have converted the same fraction of gas into stars, and thus would have the same metallicity, irrespective of mass. But galaxies show a ubiquitous mass-metallicity relation, as testified by a vast literature on the subject (see, e.g., Maiolino & Mannucci, 2019, for a review). This fact can have several explanations, which are illustrated in Fig. 20: galaxies of larger mass might be able to convert larger fractions of gas into stars, in the same amount of time (higher SF efficiency); their stellar generations might be able to produce more metals than those in low-mass galaxies (higher yield); metals produced by low-mass galaxies might be more easily lost into the IGM; or finally, low-mass galaxies might have been originally higher mass objects, whose SF was truncated due to loss of residual gas into the IGM. It is then interesting to investigate the existence of an MZR of AC114 galaxies, and see what it can tell us about the evolutionary history of the cluster.
#### 4.1.1 The MZR and its Interpretation
When \(D_{n}(4000)\) is converted into a metallicity, Fig. 17 is converted into Fig. 21, which shows the location of AC114 galaxies in the [m/H] vs. mass plane. As expected, the trend of \(D_{n}(4000)\) with mass converts into a mass-metallicity relation, for the two galaxy classes. To interpret the MZR, we make use of the so-called closed-box model (Searle & Sargent, 1972), which, despite its simplicity, allows us to draw interesting conclusions on the evolutionary status of the galaxy cluster.
In the closed-box approximation, the evolution of a galaxy is parameterised by the gas mass fraction \(\mu=m_{\rm gas}/m_{\rm tot}=m_{\rm gas}/(m_{\rm gas}+m_{\rm stars})\): a galaxy starts as a mass of pristine gas (\(\mu=1\)), which is converted into stars, thereby producing metals, until \(\mu=0\). All along, the total mass remains constant. In the same framework, according to Pagel (1997) the average abundance of a stellar population starts at zero and evolves with \(\mu\) according to the equation:
\[\langle z\rangle=1+\frac{\mu\,\ln\mu}{1-\mu} \tag{1}\]
where the metallicity is expressed as \(z=Z/p\), and \(p\) is the metal yield of a stellar generation\({}^{1}\): therefore \(\langle Z\rangle\rightarrow p\) when \(\mu\to 0\). At the end of its SF history a galaxy will then have an average metallicity similar to the yield, which can then be estimated in this way. Thus, if passive galaxies have reached the end of their SF history, then Fig. 21 is telling us that their yield depends on galaxy mass. And because the stellar yield only depends on the IMF, what must be changing is the ability of a galaxy to return its metals to the ISM: indeed a common explanation for the MZR is that a less massive object has a shallower gravitational potential from which metals can escape more easily (e.g., Dekel & Silk, 1986).
Footnote 1: The yield of metal \(i\) is given by \(p_{i}=\left[\int_{m(i)}^{m_{U}}m\,q_{i}(m)\;\phi(m)\;dm\right]/S\), where \(q_{i}(m)\) is the yield per stellar mass, and \(\phi(m)\) is the initial mass function; the global yield is then \(p=\Sigma p_{i}\). \(S\) is the mass in stars and remnants of each generation, such that \(p\times S\) is the mass of metals returned to the ISM.
If instead the yield is constant, then \(\mu\) must depend on the galaxy mass: this is shown in the lower panel of Fig. 21, where \(\mu\) has been calculated assuming that the constant yield is close to that of the most metal-rich galaxies, i.e. equal to \(Z_{\odot}\). The plot shows that larger galaxies would have been able to convert a higher fraction of their initial mass into stars. Because passive galaxies do not have associated large gas masses, the unprocessed gas must have been lost into the IGM, thus truncating their SF: again, the gas dispersal would be easier for less massive objects. Of course this would be a stochastic process which could also be influenced by the galaxy location in a cluster, and an MZR might not be expected; but the closed-box tracks of Fig. 21 run almost parallel to the MZR, so even a random truncation of the SF would naturally lead to a relationship. Therefore this is our preferred explanation for the distribution of passive galaxies in Fig. 21.
If active galaxies had the same yield as passive ones, their low metallicities would be explained by a very slow chemical evolution: the closed-box model predicts that only \(2\%\) of the initial gas mass needs to be processed (\(\mu=0.98\)) to reach a metallicity [m/H]=\(-2\). The age of the cluster is \(\sim 10\) Gyr, so if active galaxies continued their evolution at the same pace, today (\(\sim 3.7\) Gyr later) they would have reached [m/H]=\(-1.86\), and consumed only an additional 1% of gas mass.
This means that we should observe a mass of gas associated with active galaxies \(\sim 50\) times the mass of stars, and that their total masses would be more than an order of magnitude larger than those of passive galaxies. Neither deduction is in agreement with observations, so we must conclude that active galaxies have a lower yield compared to passive ones. Indeed this is also the conclusion of Koppen et al. (2007), based on the idea that stars in less massive galaxies are generated in less massive stellar clusters: these are not able to form massive stars, thereby skewing the IMF to lower masses and reducing yields. More massive galaxies can produce more massive clusters because they compress their gas more efficiently, as predicted by the Schmidt-Kennicutt law (Schmidt, 1959; Kennicutt, 1998).
To put some constraints on low-mass galaxy yields, we imposed that the total masses of active galaxies should not be larger than those of passive ones: thus we plotted the track that reaches the highest mass at the end of evolution, and changed the yield until its total mass was similar to that of the most massive passive galaxy. The result is \(p=0.03\,Z_{\odot}\), which we adopted for all active galaxies: this value is well within the yield reduction computed by Koppen et al. (2007).
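The closed-box bookkeeping used in this section is easy to reproduce; the sketch below evaluates Eq. (1), converts the mean abundance to [m/H] for a given yield, and checks the numbers quoted above (\(\mu=0.98\) giving [m/H] \(\approx-2\) for \(p=Z_{\odot}\), and [m/H] \(\approx-1.86\) once a further \(\sim\)1% of the gas is processed). Inverting the relation with brentq is an implementation choice, not part of the original analysis.

```python
import numpy as np
from scipy.optimize import brentq

def mean_z(mu):
    """Mean stellar metallicity in units of the yield, Eq. (1)."""
    return 1.0 + mu * np.log(mu) / (1.0 - mu)

def mh(mu, yield_over_zsun=1.0):
    """[m/H] = log10(<Z>/Z_sun) for a closed box with yield p = yield_over_zsun * Z_sun."""
    return np.log10(yield_over_zsun * mean_z(mu))

def mu_for_mh(target_mh, yield_over_zsun=1.0):
    """Gas fraction needed to reach a given [m/H] (numerical inversion of Eq. 1)."""
    return brentq(lambda m: mh(m, yield_over_zsun) - target_mh, 1e-6, 1.0 - 1e-9)

print(mh(0.98))            # ~ -2.0 : only 2% of the gas processed
print(mh(0.98 - 0.0074))   # ~ -1.86: after ~3.7 Gyr more at the same pace
print(mu_for_mh(-2.0))     # ~ 0.98
```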
Figure 20: Illustration of different ways to obtain an MZR, with boxes representing two galaxies of different mass, and arrows connecting the start and end points of their evolution, assumed to happen in the same time interval. Light blue is pristine gas, darker blue is enriched gas, and red represents metals. In case (A), two galaxies of different mass produce the same fraction of metals over pristine gas, so no MZR is generated. In case (B), the more massive galaxy generates a higher fraction of metals. In case (C), both galaxies produce the same fraction of metals, but the low-mass object ejects more of them into the IGM. Finally in case (D), gas is lost before chemical evolution is complete, in such a way that the less massive galaxy will have generated a smaller fraction of metals, compared with the more massive one.
As was done for passive galaxies, the location of active ones along their tracks constrains their gas mass fractions, which are plotted in the lower panel of Fig. 21. But because these objects are still evolving, we can expect that nowadays they will have reached higher masses and metallicities and lower \(\mu\) values: therefore we also computed the expected evolution from an age of the universe \(t_{10}\simeq 10\) Gyr to the present time, assuming a linear dependence \(\mu(t)=1-(1-\mu_{10})/t_{10}\times t\). This evolution is shown by the magenta symbols and magenta segments in the upper and lower panels of Fig. 21, respectively.
The lower panel of the figure also shows the \(\pm 1\,\sigma\) range of gas fractions in local spiral galaxies, taken from McGaugh & de Blok (1997) (see Fig. 22). The comparison tells us that, at the current pace of evolution, most active galaxies in AC114 will not be able to reach the same level of gas processing as galaxies in the local universe.
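The linear extrapolation of the gas fraction to the present epoch is equally compact; in the snippet below the present age of the universe is taken as \(\simeq 13.7\) Gyr, consistent with the \(\sim 3.7\) Gyr lookback time quoted earlier (an assumption of this sketch, not a number taken from the figure).

```python
def mu_linear(t_gyr, mu_10, t_10=10.0):
    """Linear gas-fraction history mu(t) = 1 - (1 - mu_10)/t_10 * t."""
    return 1.0 - (1.0 - mu_10) / t_10 * t_gyr

# Expected present-day gas fraction of a galaxy observed with mu_10 = 0.98
# when the universe was ~10 Gyr old; feeding this value back into the
# closed-box relation of the previous sketch gives the quoted [m/H] ~ -1.86.
mu_now = mu_linear(13.7, 0.98)     # ~ 0.973
```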
Some active galaxies in the upper panel of Fig. 21 are located in the region of passive galaxies, so a comparable yield was adopted to compute their evolutionary tracks. These are galaxies Q1-30, Q1-37,
Figure 21: The top panel show the location of passive galaxies (red symbols) and active galaxies (blue symbols) in the [mH] vs. stellar mass plane. The tracks are close-box models as explained in the text: for passive galaxies the tracks just reach their representative point, while for active galaxies we also plot the expected evolution until close to terminal gas fraction \(\mu=0.001\). Tracks are not plotted for active galaxies located in the passive galaxy region, which are identified by their quadrant-slit combination. The magenta points on active tracks represent the expected evolution until the present time for a linear \(\mu(t)\) function set by past evolution. The lower panel shows the calculated \(\mu\) as a function of the galaxy mass and metallicity, assuming the yield values displayed in the plot. The grey area shows the typical value of \(\mu\) for local galaxies, taken from Fig. 22. Magenta segments represent the expected evolution until the present time.
Figure 22: Gas mass fraction distribution for local spiral galaxies, with data from McGaugh & de Blok (1997). The curve is a Gaussian with average and dispersion as displayed in the plot.
Q4-36, Q4-48, and Q4-56: from Fig. 14, it appears that the first three objects are relatively large galaxies with a disk/bulge morphology; therefore their high \(D_{n}(4000)\) values might be due to the fact that our VIMOS slits include a large fraction of the central, older and more metal-rich regions. To some extent this also applies to Q4-48, while the case of Q4-36 requires more investigation, because its classification as a high-luminosity galaxy does not seem warranted by its appearance.
#### 4.1.2 Relationships for passive and active galaxies
To compare the MZR of AC114 galaxies with other relations from the literature, they are plotted again in Fig. 23, without model tracks. Weighted linear fits to the data run nearly parallel to the model tracks discussed in the previous section, both for passive and active galaxies. Therefore in our scenario, mass-metallicity relations for AC114 objects are naturally generated by galaxy evolution.
In the lower panel of the figure, dashed blue and red lines are the relationships for active and passive galaxies found by Peng et al. (2015), who analysed the Sloan Digital Sky Survey (SDSS) spectra of local galaxies (\(z\approx 0.05\)). Clearly, galaxies in the AC114 cluster have much lower metallicities than field ones, and for the case of passive galaxies, we must conclude that the dense cluster environment curtailed star formation of its more massive members in a short time after their formation.
Active galaxies are evolving along different tracks than passive ones, probably because of lower metal yields. And they are also evolving more slowly, as can be inferred from their younger ages and larger gas mass fractions. We also expect that, even if allowed to reach the end of their chemical evolution, they would still be \(\sim 1\) dex more metal-poor than local active galaxies. The likely explanation is that the cluster environment facilitates the loss of metals from these lower-mass objects, thus reducing their effective yields even further.
An evolutionary scenario would then be that the first objects to form acquired the largest masses and produced significant amounts of metals in SF bursts that lasted only a few Gyr, cut short through their mutual interactions. The stripped gas stayed in the cluster potential and is now facilitating, via ram-pressure stripping, the loss of metals from the less massive galaxies that formed later.
## 5 Summary
In this paper, we investigate the mass-metallicity relation for galaxies in the Abell cluster AC114 from 7 hours of VIMOS/MR data collected at the ESO-VLT telescope in 2009. The dynamical analysis of Proust et al. (2015) allowed us to select cluster members, whose spectra are here analyzed with models from Bruzual & Charlot (2003).
Active and passive galaxies are identified based on the presence/absence of the [O ii]\(\lambda 3727\), [O iii]\(\lambda\lambda 4959,5007\) and/or H\(\beta\) emission lines, depending on the galaxy redshift, and we conclude that active galaxies have lower average masses and metallicities than passive ones.
We find that the mass-metallicity relation of the cluster is steeper than that for galaxies in the local universe. In a forthcoming paper, the MZR of active galaxies, based on the oxygen abundance of their gaseous component, will be presented and discussed in light of the present results.
Fig. 2 showed that passive and active galaxies tend to occupy central and external regions respectively. Therefore the results above show the well known fact that galaxy evolution in the centre of galaxy clusters is faster than that in their periphery. In the central regions of AC114, and in the course of the last 9.8 Gyr, galaxies reached masses and metallicities that are comparable to present-day galaxies, while in the outskirts galaxies are still forming stars, and in their past they have built substantially less mass than central objects.
The case of large active galaxies highlights the fact that conclusions about their global properties are influenced by population gradients within their bodies. Therefore a conclusive study could only be realised by a two-dimensional mapping of their SEDs, such as can be afforded by IFU instruments: in this respect, MUSE at the VLT would be an obvious choice.
## Acknowledgements
DP thanks ESO in the context of the _Visiting Scientists program_ for its hospitality at Santiago (Chile). We thank the anonymous referee for a careful reading of the manuscript, which greatly improved the presentation of our work. Alain Andrade is also thanked for helping in the revision of the paper.
## Data Availability
The complete redshift values of the 163 galaxies of this work are listed in Table 5 and the description of each column can be found in subsection 2.1.
Figure 23: The upper panel shows the location of active and passive galaxies in the [m/H] vs. mass plane, with the two classes identified by blue and red colour, respectively. A general trend of metallicity increasing with mass can be seen. In the lower panel, symbol sizes are inversely proportional to the measurement errors: the red and blue solid curves are weighted linear fits to the points representing passive and active galaxies. The blue dashed curve is the relation obtained after excluding galaxies with metallicities lower than –3. The red and blue shaded areas mark the location of the tracks taken from Fig. 21. The dashed curves on top of the panel represent mass-metallicity relations from Peng et al. (2015), with red and blue colour distinguishing passive and active galaxies.
2303.03079 | Steane enlargement of Entanglement-Assisted Quantum Error-Correcting
Codes | We introduce a Steane-like enlargement procedure for entanglement-assisted
quantum error-correcting codes (EAQECCs) obtained by considering Euclidean
inner product. We give formulae for the parameters of these enlarged codes and
apply our results to explicitly compute the parameters of enlarged EAQECCs
coming from some BCH codes. | Carlos Galindo, Fernando Hernando, Ryutaroh Matsumoto | 2023-03-06T12:40:24Z | http://arxiv.org/abs/2303.03079v1 | # Steane enlargement of entanglement-assisted quantum error-correcting codes
###### Abstract.
We introduce a Steane-like enlargement procedure for entanglement-assisted quantum error-correcting codes (EAQECCs) obtained by considering Euclidean inner product. We give formulae for the parameters of these enlarged codes and apply our results to explicitly compute the parameters of enlarged EAQECCs coming from some BCH codes.
Key words and phrases: EAQECC; entanglement-assisted quantum error-correcting codes; Steane enlargement; BCH codes. The first two authors were partially funded by MCIN/AEI/10.13039/501100011033, by "ERDF A way of making Europe" and by "European Union NextGeneration EU/PRTR", grants PGC2018-096446-B-C22 and TED2021-130358B-I00, as well as by Universitat Jaume I, grant UJI-B2021-02.
Quantum codes obtained from the CSS procedure enjoy several advantages. Indeed, they can be used for privacy amplification of quantum cryptography [42] and for constructing asymmetric quantum codes [29, 41], but they have mainly computational virtues, among others the smaller size of the supporting alphabet [24].
Another good property of CSS codes is that they can be improved by using the Steane enlargement procedure. This procedure was initially proposed by Steane in the binary case [45] and afterwards generalized to the \(q\)-ary case [27, 33] (see also [20]). Denoting by \(\mathbb{F}_{q}\) the finite field with \(q\) elements, the specific result is the following one:
**Theorem 2**.: _Let \(C\) be a linear code over \(\mathbb{F}_{q}\) of length \(n\) and dimension \(k\). Suppose that \(C^{\perp_{e}}\subseteq C\) and \(C\) can be enlarged to a \(q\)-ary linear code \(C^{\prime}\) of length \(n\) and dimension \(k^{\prime}\geq k+2\). Then, there exists a stabilizer quantum code with parameters \([[n,k+k^{\prime}-n,d]]_{q}\), where \(d\geq\min\{d_{1},\lceil\frac{q+1}{q}d_{2}\rceil\}\), \(d_{1}=\operatorname{wt}\big{(}C\setminus(C^{\prime})^{\perp_{e}}\big{)}\) and \(d_{2}=\operatorname{wt}\big{(}C^{\prime}\setminus(C^{\prime})^{\perp_{e}} \big{)}\)._
As mentioned, self-orthogonal codes (or codes containing their duals) have to be considered for providing stabilizer quantum codes and this fact determines the parameters of the obtained codes. However, one can use any linear code, with no extra condition, whenever encoder and decoder share entanglement, which increases the capacity of communication [8]. These codes are named entanglement-assisted quantum error-correcting codes (EAQECCs). Apart from the parameters length \(n\), dimension \(k\) and minimum distance \(d\) used for quantum codes, EAQECCs include a new one, \(c\), which gives the minimum number of (pairs of) maximally entangled quantum states required; the parameters of these codes are then expressed as \([[n,k,d;c]]_{q}\).
A formula for computing the value \(c\) of binary EAQECCs was first given for the CSS construction [28]. Afterwards, still in the binary case, more general constructions were treated by Wilde and Brun [46]. In the general case, formulae for obtaining the parameters of \(q\)-ary EAQECCs which extend those in [28, 46] can be found in [19]. Using them, parameters of many specific constructions of EAQECCs have recently been given [37, 40, 13, 38, 26, 34, 15, 21].
Next we recall two of the main results on EAQECCs we will use in this article. To begin with, we consider the vector space \(\mathbb{F}_{q}^{2n}\) where \(n\) is a positive integer. The _symplectic product_ of two vectors \((\boldsymbol{x}|\boldsymbol{y})\) and \((\boldsymbol{z}|\boldsymbol{t})\) in \(\mathbb{F}_{q}^{2n}\) is defined as
\[(\boldsymbol{x}|\boldsymbol{y})\cdot_{s}(\boldsymbol{z}|\boldsymbol{t}):= \boldsymbol{x}\cdot_{e}\boldsymbol{t}-\boldsymbol{z}\cdot_{e}\boldsymbol{y},\]
where \(\cdot_{e}\) means Euclidean inner product. The dual space of a vector subspace \(C\subseteq\mathbb{F}_{q}^{2n}\) with respect to the symplectic product \(\cdot_{s}\) is denoted by \(C^{\perp_{s}}\).
The _symplectic weight_ of a vector \((\boldsymbol{x}|\boldsymbol{y})\) in \(\mathbb{F}_{q}^{2n}\) is
\[\operatorname{swt}(\boldsymbol{x}|\boldsymbol{y}):=\#\{j\mid(x_{j},y_{j})\neq (0,0),1\leq j\leq n\},\]
\(\#\) meaning cardinality and \(x_{j}\) (respectively, \(y_{j}\)) being the \(j\)th coordinate of the vector \(\boldsymbol{x}\) (respectively, \(\boldsymbol{y}\)). We define the minimum symplectic distance of a subset \(S\subseteq\mathbb{F}_{q}^{2n}\) as
\[d_{s}\left(S\right):=\min\left\{\operatorname{swt}\left(\boldsymbol{x}| \boldsymbol{y}\right)\ \mid\ \left(\boldsymbol{x}|\boldsymbol{y}\right)\in S\setminus\{( \boldsymbol{0}|\boldsymbol{0})\}\right\}.\]
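These definitions are straightforward to prototype. The following minimal sketch is not part of the paper and, for simplicity, assumes a prime field \(\mathbb{F}_{p}\) so that arithmetic is just modular; it computes the symplectic product, the symplectic weight and the minimum symplectic distance of vectors stored as \((\boldsymbol{x}|\boldsymbol{y})\) tuples.

```python
# Minimal sketch (not from the paper): naive helpers for the symplectic
# product, symplectic weight and minimum symplectic distance over a prime
# field F_p, with vectors in F_p^{2n} stored as integer tuples (x | y).

def symplectic_product(xy, zt, p):
    """(x|y) ._s (z|t) = x.t - z.y  (mod p)."""
    n = len(xy) // 2
    x, y = xy[:n], xy[n:]
    z, t = zt[:n], zt[n:]
    return (sum(xi * ti for xi, ti in zip(x, t))
            - sum(zi * yi for zi, yi in zip(z, y))) % p

def symplectic_weight(xy):
    """Number of positions j with (x_j, y_j) != (0, 0)."""
    n = len(xy) // 2
    return sum(1 for j in range(n) if (xy[j], xy[n + j]) != (0, 0))

def min_symplectic_distance(vectors):
    """d_s(S) taken over the nonzero vectors of S."""
    weights = [symplectic_weight(v) for v in vectors if any(v)]
    return min(weights) if weights else None

# toy check in F_2^{2*3}
v = (1, 0, 1, 0, 1, 0)   # x = (1,0,1), y = (0,1,0)
w = (0, 1, 0, 1, 0, 1)
print(symplectic_product(v, w, 2), symplectic_weight(v))
```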
Our first result, which can be found in [19, Theorem 2], determines the parameters of the EAQECC that one can get from a linear code \(C\subseteq\mathbb{F}_{q}^{2n}\) over \(\mathbb{F}_{q}\). Suppose that \(C\) has dimension \(n-k\). One desires to obtain a symplectic self-orthogonal \(\mathbb{F}_{q}\)-vector space \(\tilde{C}\subseteq\mathbb{F}_{q}^{2n+2c}\) whose projection is \(C\), where \(c\) is the smallest number of maximally entangled quantum states in \(\mathbb{C}^{q}\otimes\mathbb{C}^{q}\). \(\tilde{C}\) provides the quantum circuit which, by means of \(c\) maximally entangled pairs, encodes \(k+c\) logical qudits into \(n\) physical qudits.
**Theorem 3**.: _Let \(C\subseteq\mathbb{F}_{q}^{2n}\) be a linear code which is generated by the rows of a matrix \((H_{X}|H_{Z})\) of size \((n-k)\times 2n\). Then, \(C\) gives rise to an EAQECC with parameters \([[n,k+c,d;c]]_{q}\), where_
\[2c=\operatorname{rank}\left(H_{X}H_{Z}^{T}-H_{Z}H_{X}^{T}\right)=\dim_{ \mathbb{F}_{q}}C-\dim_{\mathbb{F}_{q}}\left(C\cap C^{\perp_{s}}\right)\]
_and \(d=d_{s}\left(C^{\perp_{s}}\setminus(C\cap C^{\perp_{s}})\right)\)._
The second mentioned result on EAQECCs [19, Theorem 4] is a specialization of Theorem 3 and it shows how the CSS construction can be used for providing \(q\)-ary EAQECCs. To state it, consider two \(\mathbb{F}_{q}\)-linear codes \(C_{1},C_{2}\subseteq\mathbb{F}_{q}^{n}\) of dimensions \(k_{1}\) and \(k_{2}\), and generator matrices \(H_{1}\) and \(H_{2}\), respectively. The specific result is the following one, where \(d_{H}\) means Hamming distance, \(\perp_{e}\) Euclidean dual and, for a matrix \(M\), \(M^{T}\) denotes its transpose.
**Theorem 4**.: _With the above notation, the code \(C_{1}\times C_{2}\subseteq\mathbb{F}_{q}^{2n}\) determines an EAQECC with parameters \([[n,n-k_{1}-k_{2}+c,d;c]]_{q}\), where_
\[c=\operatorname{rank}\left(H_{1}H_{2}^{T}\right)=\dim_{\mathbb{F}_{q}}C_{1}- \dim_{\mathbb{F}_{q}}\left(C_{1}\cap C_{2}^{\perp_{e}}\right),\]
_and_
\[d=\min\left\{d_{H}\left(C_{1}^{\perp_{e}}\setminus(C_{2}\cap C_{1}^{\perp_{e} })\right),d_{H}\left(C_{2}^{\perp_{e}}\setminus(C_{1}\cap C_{2}^{\perp_{e}}) \right)\right\}.\]
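As an illustration of Theorem 4, the sketch below (an assumption-laden prototype, not the authors' code, and restricted to a prime field \(\mathbb{F}_{p}\)) computes the entanglement parameter \(c=\operatorname{rank}\left(H_{1}H_{2}^{T}\right)\) by Gaussian elimination; the matrices `H1` and `H2` are hypothetical generator matrices chosen only for the toy run.

```python
# Illustrative sketch (assumption: prime field F_p, so arithmetic is mod p).
# It computes c = rank(H1 H2^T) of Theorem 4 by Gaussian elimination over F_p.

def rank_mod_p(M, p):
    """Rank of an integer matrix M over the prime field F_p."""
    A = [[x % p for x in row] for row in M]
    rank, rows, cols = 0, len(A), len(A[0]) if A else 0
    for c in range(cols):
        pivot = next((r for r in range(rank, rows) if A[r][c]), None)
        if pivot is None:
            continue
        A[rank], A[pivot] = A[pivot], A[rank]
        inv = pow(A[rank][c], p - 2, p)          # inverse of the pivot mod p
        A[rank] = [(x * inv) % p for x in A[rank]]
        for r in range(rows):
            if r != rank and A[r][c]:
                f = A[r][c]
                A[r] = [(a - f * b) % p for a, b in zip(A[r], A[rank])]
        rank += 1
    return rank

def matmul_mod_p(A, B, p):
    return [[sum(a * b for a, b in zip(row, col)) % p for col in zip(*B)]
            for row in A]

# toy example over F_2: H1 = H2 = generator matrix of the [3,1] repetition code
H1 = [[1, 1, 1]]
H2 = [[1, 1, 1]]
H2T = [list(col) for col in zip(*H2)]
c = rank_mod_p(matmul_mod_p(H1, H2T, 2), 2)
print(c)   # H1 H2^T = (1*1+1*1+1*1 mod 2) = (1), so c = 1
```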
In Section 2 of this paper we prove that, with a similar procedure to that given by Steane, one can enlarge the EAQECCs provided by Theorem 4 whenever the involved linear codes \(C_{1}\) and \(C_{2}\) coincide. Setting \(C=C_{1}=C_{2}\) and expressing \(C\) as a direct sum of two linear spaces \(\langle B_{r}\rangle\) and \(\langle B_{t}\rangle\), Theorem 6 shows how to compute the parameters of the corresponding enlarged EAQECC. This enlarged code uses an invertible \(t\times t\) matrix \(A\), where \(t\) is the dimension of the space \(\langle B_{t}\rangle\). Under the additional hypothesis that \(A\) has no eigenvalue in the supporting field and \(\langle B_{t}\rangle\subseteq C^{\perp_{e}}\), Theorem 7 determines the parameters of the enlarged EAQECC. In particular, Theorem 7 proves that the enlarged code keeps the same parameter \(c\) and enlarges the dimension with respect to the original EAQECC. Theorem 10, Corollary 12 and Remark 13 study and show the advantages produced by the Steane enlargement of EAQECCs obtained when the spaces \(\langle B_{r}\rangle\) and \(\langle B_{t}\rangle\) are Euclidean orthogonal.
Our procedure can be carried out for any decomposition of a linear code as mentioned. As a specific case, Section 3 explains how to put our results into practice through certain BCH codes. Regarding BCH codes as subfield-subcodes and applying Theorem 7 and previous results, Theorems 17 and 20 state explicitly those parameters of the Steane enlargement of the EAQECCs given by suitable BCH codes. Finally, in Theorems 21 and 22 and Remark 23, by considering another families of BCH codes, we determine the corresponding parameters of EAQECCs deduced from Theorem 10, Corollary 12 and Remark 13.
In this section we have given a brief introduction to QECCs and EAQECCs, recalling some of the main known results which will be used later. The goal of this article is to explain how a Steane enlargement of EAQECCs can be achieved. We also specialize this construction to certain BCH codes and, as a consequence, we obtain new EAQECCs which enjoy interesting computational advantages (see the last part of the paragraph after Theorem 1). Section 2 introduces the Steane enlargement of EAQECCs and proves several results about their parameters, while Section 3 applies the results in Section 2 to compute the parameters of enlarged codes of EAQECCs associated to BCH codes. Some good codes deduced from our procedure are also presented at the end of this last section.
## 2. Steane enlargement of EAQECCs
The goal of this section is to prove that a procedure like the Steane enlargement shown in Theorem 2 can be used in the framework of EAQECCs. Firstly we will see how it applies to codes as described in Theorem 4 and then we will show that, under certain conditions, we are able to determine the parameters of the enlarged EAQECCs and how they improve the original ones.
Let \(C\subseteq\mathbb{F}_{q}^{n}\) be an \(\mathbb{F}_{q}\)-linear code with parameters \([n,k,\delta]_{q}\). Let \(B\) be a matrix with entries in \(\mathbb{F}_{q}\) whose rows are linearly independent vectors of \(\mathbb{F}_{q}^{n}\). Throughout this paper, for convenience, we denote by \(\langle B\rangle\) the vector subspace of \(\mathbb{F}_{q}^{n}\) generated by the rows of \(B\). Assume that \(C=\langle B_{r}\rangle\oplus\langle B_{t}\rangle\), where \(B_{r}\) (respectively, \(B_{t}\)) are \(r\times n\) (respectively, \(t\times n\)) generator matrices of the \(\mathbb{F}_{q}\)-linear subcodes \(\langle B_{r}\rangle\) (respectively, \(\langle B_{t}\rangle\)) of \(C\), and where \(r=\dim_{\mathbb{F}_{q}}\langle B_{r}\rangle\) and \(t=\dim_{\mathbb{F}_{q}}\langle B_{t}\rangle\). Applying Theorem 4 for \(C_{1}=C_{2}=C\), one obtains an EAQECC, \(\tilde{C}\), with parameters
\[[[n,n-2k+c,d;c]]_{q}, \tag{1}\]
where \(k=r+t\), \(d=d_{H}(C^{\perp_{e}}\setminus C)\) and
\[c=\mathrm{rank}\left[\left(\begin{array}{c}B_{r}\\ B_{t}\end{array}\right)(B_{r}^{T}B_{t}^{T})\right] = \mathrm{rank}\left[\left(\begin{array}{cc}B_{r}B_{r}^{T}&B_{r}B _{t}^{T}\\ B_{t}B_{r}^{T}&B_{t}B_{t}^{T}\end{array}\right)\right]. \tag{2}\]
Next, we introduce the linear code \(D_{A}\) we desire to use in order to provide the Steane enlargement of \(\tilde{C}\).
**Definition 5**.: Given a code \(C=\langle B_{r}\rangle\oplus\langle B_{t}\rangle\) as above and such that \(\dim_{\mathbb{F}_{q}}\langle B_{t}\rangle\geq 2\), we define the code \(D_{A}\) as the \(\mathbb{F}_{q}\)-linear code \(D_{A}\subseteq\mathbb{F}_{q}^{2n}\) whose generator matrix is
\[\left(\begin{array}{cc}B_{t}&AB_{t}\\ B_{r}&0\\ 0&B_{r}\end{array}\right), \tag{3}\]
where \(A\) is a \(t\times t\) invertible matrix over \(\mathbb{F}_{q}\).
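A quick way to visualise Definition 5 is to assemble the generator matrix (3) explicitly. The snippet below is only a sketch with hypothetical inputs `Bt`, `Br` and `A` (a prime field is assumed so that reduction mod \(p\) suffices).

```python
# Sketch only: assembling the generator matrix (3) of D_A with numpy.
# Bt (t x n), Br (r x n) and the invertible t x t matrix A are hypothetical
# example inputs; arithmetic is reduced mod p for a prime field F_p.
import numpy as np

def generator_of_DA(Bt, Br, A, p):
    t, n = Bt.shape
    r = Br.shape[0]
    zeros_r = np.zeros((r, n), dtype=int)
    top = np.hstack([Bt, (A @ Bt) % p])      # ( B_t | A B_t )
    mid = np.hstack([Br, zeros_r])           # ( B_r |  0   )
    bot = np.hstack([zeros_r, Br])           # (  0  | B_r  )
    return np.vstack([top, mid, bot]) % p    # (t + 2r) x 2n matrix

Bt = np.array([[1, 0, 1, 1], [0, 1, 1, 0]])
Br = np.array([[1, 1, 1, 1]])
A  = np.array([[0, 1], [1, 1]])              # char. poly X^2+X+1: no eigenvalue in F_2
print(generator_of_DA(Bt, Br, A, 2))
```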
Our general result on Steane enlargement is the following one:
**Theorem 6**.: _Let \(C=\langle B_{r}\rangle\oplus\langle B_{t}\rangle\subseteq\mathbb{F}_{q}^{n}\) be an \(\mathbb{F}_{q}\)-linear code such that \(\dim_{\mathbb{F}_{q}}\langle B_{t}\rangle\geq 2\). Assume that \(A\) is a \(t\times t\) invertible matrix over \(\mathbb{F}_{q}\) which has no eigenvalue in \(\mathbb{F}_{q}\). Then \(D_{A}\) gives rise to an EAQECC, \(\tilde{D}_{A}\), which is a Steane enlargement of the EAQECC, \(\tilde{C}\), with parameters_
\[[[n,n-2r-t+c^{\prime},d^{\prime};c^{\prime}]]_{q},\]
_where \(d^{\prime}\geq\min\left\{\delta_{1},\left\lceil\left(1+\frac{1}{q}\right) \delta_{2}\right\rceil\right\}\), \(\delta_{1}=d_{H}(C^{\perp_{e}})\) and \(\delta_{2}=d_{H}\left(\langle B_{r}\rangle^{\perp_{e}}\right)\), and_
\[c^{\prime}=\frac{1}{2}\mathrm{rank}\left(\begin{array}{ccc}B_{t}B_{t}^{T}A^{ T}-AB_{t}B_{t}^{T}&-AB_{t}B_{r}^{T}&B_{t}B_{r}^{T}\\ B_{r}B_{t}^{T}A^{T}&0&B_{r}B_{r}^{T}\\ -B_{r}B_{t}^{T}&-B_{r}B_{r}^{T}&0\end{array}\right). \tag{4}\]
Proof.: The proof follows by applying Theorem 3 to the linear code \(D_{A}\subseteq\mathbb{F}_{q}^{2n}\). The size of its generator matrix is \((n-(n-2r-t))\times 2n\). By Theorem 3, the dimension of the obtained EAQECC is \(n-2r-t+c^{\prime}\) and the number of maximally entangled pairs \(c^{\prime}\) is given by the formula
\[2c^{\prime}\ =\ \mathrm{rank}\left[\left(\begin{array}{c}B_{t}\\ B_{r}\\ 0\end{array}\right)\left(B_{t}^{T}A^{T}\ \ 0\ \ B_{r}^{T}\right)\ \ -\ \ \left(\begin{array}{c}AB_{t}\\ 0\\ B_{r}\end{array}\right)\left(B_{t}^{T}\ \ B_{r}^{T}\ \ 0\right)\right].\]
Then, we have proved that, with the exception of the minimum distance, the parameters of the Steane enlargement of the EAQECC, \(\tilde{C}\), are as in the statement.
With respect to the minimum distance \(d^{\prime}\), we are going to prove that
\[d_{s}\left(D_{A}^{\perp_{s}}\setminus D_{A}\cap D_{A}^{\perp_{s}}\right)\geq \min\left\{\delta_{1},\left\lceil\left(1+\frac{1}{q}\right)\delta_{2}\right\rceil \right\},\]
which, again by Theorem 3, finishes our proof.
Denote by \(G\) a generator matrix of the Euclidean dual \(C^{\perp_{e}}\) with size \((n-(r+t))\times n\) and set \(\left(G^{\prime}\mid G\right)^{T}\) a generator matrix of the dual vector space \(\langle B_{r}\rangle^{\perp_{e}}\). Then, by the proof of Theorem 2.6 of [33] (see also [27]), the symplectic dual \(D_{A}^{\perp_{s}}\) of \(D_{A}\) has
\[\left(\begin{array}{cc}\bar{A}G^{\prime}&G^{\prime}\\ G&0\\ 0&G\end{array}\right),\]
as a generator matrix, where \(\bar{A}:=G^{\prime}B_{t}^{T}\left(A^{T}\right)^{-1}\left(G^{\prime}B_{t}^{T} \right)^{-1}\).
Now, \(\bar{A}\) has no eigenvalue in \(\mathbb{F}_{q}\): indeed, \(A\) has no eigenvalue in \(\mathbb{F}_{q}\), and if an invertible matrix \(M\) with entries in \(\mathbb{F}_{q}\) has an eigenvalue \(\lambda\in\mathbb{F}_{q}\), then \(M^{T}\), \(M^{-1}\) and \(PMQ\), where \(P\) and \(Q\) are invertible matrices with entries in \(\mathbb{F}_{q}\), also have an eigenvalue in \(\mathbb{F}_{q}\). To conclude, again by the proof of Theorem 2.6 of [33],
\[d_{s}\left(D_{A}^{\perp_{s}}\setminus D_{A}\cap D_{A}^{\perp_{s}}\right)\geq \min\left\{d_{H}\left(\langle G\rangle\right),d_{H}\left(\langle B_{r}\rangle ^{\perp_{e}}\right)\right\},\]
proving the lower bound for \(d^{\prime}\) in the statement.
The following subsections study specific cases where one can give more information about the advantages of using Steane enlargement of EAQECCs. An application of these results by considering BCH codes will be given in Section 3.
### The case when \(\langle B_{t}\rangle\) and \(C\) are Euclidean orthogonal
In this subsection we consider the Steane enlargement \(\tilde{D}_{A}\) of the EAQECC, \(\tilde{C}\), defined by an \(\mathbb{F}_{q}\)-linear code \(C=\langle B_{r}\rangle\oplus\langle B_{t}\rangle\) such that \(\langle B_{t}\rangle\subseteq C^{\perp_{e}}\). Keeping the above notation, our result is the following one.
**Theorem 7**.: _Let \(C=\langle B_{r}\rangle\oplus\langle B_{t}\rangle\subseteq\mathbb{F}_{q}^{n}\) be an \(\mathbb{F}_{q}\)-linear code such that \(\langle B_{t}\rangle\subseteq C^{\perp_{e}}\) and \(\dim_{\mathbb{F}_{q}}\langle B_{t}\rangle\geq 2\). Set \(t=\dim_{\mathbb{F}_{q}}\langle B_{t}\rangle\) and assume that \(A\) is a \(t\times t\) invertible matrix over \(\mathbb{F}_{q}\) which has no eigenvalue in \(\mathbb{F}_{q}\). Denote by \(c\) the minimum required number of maximally entangled states in the EAQECC, \(\tilde{C}\), determined by \(C\). Then, the linear code \(D_{A}\), introduced in Definition 5, gives rise to an EAQECC, \(\tilde{D}_{A}\), which is a Steane enlargement of \(\tilde{C}\), with parameters_
\[[[n,n-2r-t+c,d^{\prime};c]]_{q},\]
_where \(d^{\prime}\geq\min\left\{\delta_{1},\left\lceil\left(1+\frac{1}{q}\right) \delta_{2}\right\rceil\right\}\), \(\delta_{1}=d_{H}\left(C^{\perp_{e}}\right)\) and \(\delta_{2}=d_{H}\left(\langle B_{r}\rangle^{\perp_{e}}\right)\)._
Proof.: Theorem 6 determines the parameters given in the statement of the EAQECC \(\tilde{D}_{A}\) with the exception of the fact that the minimum required number of maximally entangled states \(c^{\prime}\) in \(\tilde{D}_{A}\) equals \(c\). Now, \(\langle B_{t}\rangle\subseteq C^{\perp_{e}}\) and then \(B_{t}B_{t}^{T}=0\) and \(B_{r}B_{t}^{T}=0\). Then, by (2),
\[c=\operatorname{rank}\left(\begin{array}{cc}B_{r}B_{r}^{T}&0\\ 0&0\end{array}\right)=\operatorname{rank}\left(B_{r}B_{r}^{T}\right).\]
Finally, by (4) and the above equalities, it holds that
\[c^{\prime}=\frac{1}{2}\text{rank}\left(\begin{array}{ccc}0&0&0\\ 0&0&B_{r}B_{r}^{T}\\ 0&-B_{r}B_{r}^{T}&0\end{array}\right)=\text{rank}\left(B_{r}B_{r}^{T}\right)=c,\]
which concludes the proof.
The above result shows that suitable choices of linear codes \(C\) allow us to get Steane enlargements of the EAQECC given by \(C\) that enlarge its dimension (by \(t\)) and keep the number \(c\) of required maximally entangled states.
To finish this subsection we introduce a class of matrices which allows us to get matrices \(A\) as required in Theorems 6 and 7. This class of matrices will also be used in the forthcoming Subsection 2.2. Let \(j\geq 2\) be an integer and consider the monic polynomial
\[h_{\boldsymbol{a}}(X):=X^{j}+a_{j-1}X^{j-1}+\cdots+a_{1}X+a_{0}\in\mathbb{F}_ {q}[X] \tag{5}\]
and its corresponding \(j\times j\) (companion) matrix with entries in \(\mathbb{F}_{q}\):
\[L_{j}(h_{\boldsymbol{a}})=\left(\begin{array}{cccccc}0&1&0&\cdots&0&0\\ 0&0&1&\cdots&0&0\\ \vdots&\vdots&\ddots&\ddots&\cdots&\vdots\\ \vdots&\vdots&\vdots&\ddots&\ddots&\vdots\\ 0&0&0&\cdots&0&1\\ -a_{0}&-a_{1}&-a_{2}&\cdots&-a_{j-2}&-a_{j-1}\end{array}\right).\]
Then the following proposition holds.
**Proposition 8**.: \(h_{\boldsymbol{a}}(X)\) _is the characteristic polynomial of the matrix \(L_{j}(h_{\boldsymbol{a}})\)._
Proof.: A proof can be found in [27, Lemma 7] or in [33, Lemma 2.5].
**Remark 9**.: Let \(t\geq 2\) be a positive integer. Consider the polynomial \(\bar{h}_{t}:=X^{t}+X^{t-1}\in\mathbb{F}_{q}[X]\) which gives rise to the map \(\varphi_{\bar{h}_{t}}:\mathbb{F}_{q}\to\mathbb{F}_{q}\), \(\varphi_{\bar{h}_{t}}(x)=\bar{h}_{t}(x)\). Clearly \(\varphi_{\bar{h}_{t}}\) is not one-to-one and there exists \(0\neq\xi\in\mathbb{F}_{q}\) which is not in the image of \(\varphi_{\bar{h}_{t}}\). Then the polynomial \(h_{t}:=\bar{h}_{t}-\xi\in\mathbb{F}_{q}[X]\) has no roots in \(\mathbb{F}_{q}\). As a consequence \(L_{t}(h_{t})\) is a suitable choice of a matrix \(A\) for Theorem 7.
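The construction of Remark 9 can be carried out mechanically. The following sketch is not taken from the paper and assumes a prime field \(\mathbb{F}_{p}\): it finds a value \(\xi\) outside the image of \(x\mapsto x^{t}+x^{t-1}\) and returns the companion matrix of \(h_{t}=X^{t}+X^{t-1}-\xi\), which therefore has no eigenvalue in \(\mathbb{F}_{p}\).

```python
# Sketch (not from the paper) of the construction in Remark 9 over a prime
# field F_p: find xi outside the image of x -> x^t + x^{t-1}, build the
# companion matrix of h_t = X^t + X^{t-1} - xi and check it has no root.

def companion_matrix(coeffs, p):
    """Companion matrix L_j of X^j + a_{j-1}X^{j-1} + ... + a_0 over F_p;
    coeffs = [a_0, ..., a_{j-1}]."""
    j = len(coeffs)
    L = [[0] * j for _ in range(j)]
    for i in range(j - 1):
        L[i][i + 1] = 1
    L[j - 1] = [(-a) % p for a in coeffs]
    return L

def remark9_matrix(t, p):
    image = {(pow(x, t, p) + pow(x, t - 1, p)) % p for x in range(p)}
    xi = next(v for v in range(1, p) if v not in image)
    # h_t(X) = X^t + X^{t-1} - xi has no root in F_p by construction:
    assert all((pow(x, t, p) + pow(x, t - 1, p) - xi) % p for x in range(p))
    return companion_matrix([(-xi) % p] + [0] * (t - 2) + [1], p)

print(remark9_matrix(4, 3))   # a 4x4 matrix over F_3 with no eigenvalue in F_3
```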
### The case when \(\langle B_{r}\rangle\) and \(\langle B_{t}\rangle\) are Euclidean orthogonal
This subsection studies the Steane enlargement of the EAQECC \(\tilde{C}\) given by a code \(C=\langle B_{r}\rangle\oplus\langle B_{t}\rangle\subseteq\mathbb{F}_{q}^{n}\) satisfying that the codes \(\langle B_{r}\rangle\) and \(\langle B_{t}\rangle\) are Euclidean orthogonal. Note that Subsection 2.1 studies a special case of the situation considered in the present subsection, for which we are going to give the parameters of the Steane enlargement \(\tilde{D}_{A}\). In this case, additional conditions will be required of the matrix \(A\).
Set \(\langle B_{t}\rangle=\langle B_{t_{\ell}}\rangle\oplus\langle B_{t_{Q}}\rangle\), where \(\langle B_{t_{Q}}\rangle\subseteq\langle B_{t}\rangle^{\perp_{e}}\). We denote by \(t_{\ell}\) the dimension of the linear code \(\langle B_{t_{\ell}}\rangle\). Without loss of generality, assume that the rows of \(B_{t_{\ell}}\) are compatible with a geometric decomposition of \(\mathbb{F}_{q}^{n}\) (see [39]); then, by [19, Section 2.4], \(Z:=B_{t_{\ell}}B_{t_{\ell}}^{T}\) is a matrix whose entries are all zero except for diagonal boxes of the form
\[\left(\begin{array}{cc}0&1\\ 1&0\end{array}\right),\]
or \((z_{i})\), \(z_{i}\neq 0\), which in characteristic \(2\) may include a box as follows:
\[\left(\begin{array}{cc}0&1\\ 1&1\end{array}\right).\]
That is,
\[Z=\left(\begin{array}{ccccccccc}0&1&&&&&&&\\ 1&0&&&&&&\\ &&\ddots&&&&\\ &&&0&1&&&&\\ &&&1&0&&&&\\ &&&&z_{1}&&&&\\ &&&&\ddots&&&&\\ &&&&z_{s}&&&\\ &&&&&&&0&1\\ &&&&&&&1&1\end{array}\right),\]
where the last box may only appear in characteristic \(2\).
Now consider an invertible matrix
\[A=\left(\begin{array}{cc}A_{0}Z^{-1}&0\\ 0&A_{1}\end{array}\right),\]
where \(A_{0}\) (respectively, \(A_{1}\)) are \(t_{\ell}\times t_{\ell}\) (respectively, \((t-t_{\ell})\times(t-t_{\ell})\)) matrices with entries in \(\mathbb{F}_{q}\). We also assume that \(A\) has no eigenvalue in the finite field \(\mathbb{F}_{q}\). Then, we are ready to state our main result in this subsection.
**Theorem 10**.: _Let \(C=\langle B_{r}\rangle\oplus\langle B_{t}\rangle\subseteq\mathbb{F}_{q}^{n}\) be an \(\mathbb{F}_{q}\)-linear code as before. Denote by \(c\) the minimum required number of maximally entangled states in the EAQECC \(\tilde{C}\) determined by \(C\). Then, the linear code \(D_{A}\) gives rise to an EAQECC \(\tilde{D}_{A}\), which is a Steane enlargement of \(\tilde{C}\), with parameters_
\[[[n,n-2r-t+c^{\prime},d^{\prime};c^{\prime}]]_{q},\]
_where_
\[c^{\prime}=(c-t_{\ell})+\frac{1}{2}\mathrm{rank}\left(A_{0}-A_{0}^{T}\right)= c-\mathrm{rank}\left(B_{t}B_{t}^{T}\right)+\frac{1}{2}\mathrm{rank}\left(A_{0}-A_{0} ^{T}\right)\]
_and \(d^{\prime}\geq\min\left\{\delta_{1},\left\lceil\left(1+\frac{1}{q}\right)\delta_{2}\right\rceil\right\}\), \(\delta_{1}=d_{H}\left(C^{\perp_{e}}\right)\) and \(\delta_{2}=d_{H}\left(\langle B_{r}\rangle^{\perp_{e}}\right)\)._
Proof.: Theorem 6 shows the parameters in the statement with the exception of the formula for \(c^{\prime}\). The number of maximally entangled states \(c^{\prime}\) depends on the rank of the matrix in (4). We have assumed that the codes \(\langle B_{r}\rangle\) and \(\langle B_{t}\rangle\) are Euclidean orthogonal, therefore \(B_{r}B_{t}^{T}=0\) and the boxes in positions \((1,2)\), \((1,3)\), \((2,1)\) and \((3,1)\) of the matrix in (4) vanish. In addition, recalling that \(Z=B_{t_{\ell}}B_{t_{\ell}}^{T}\) is an invertible square matrix of size \(t_{\ell}\), it holds that
\[B_{t}B_{t}^{T}=\left(\begin{array}{cc}Z&0\\ 0&0\end{array}\right)\]
and then, the box in position \((1,1)\) of the matrix in (4), which is \(B_{t}B_{t}^{T}A^{T}-AB_{t}B_{t}^{T}\), is equal to
\[\left(\begin{array}{cc}Z&0\\ 0&0\end{array}\right)\left(\begin{array}{cc}(Z^{-1})^{T}A_{0}^{T}&0\\ 0&A_{1}^{T}\end{array}\right)-\left(\begin{array}{cc}A_{0}Z^{-1}&0\\ 0&A_{1}\end{array}\right)\left(\begin{array}{cc}Z&0\\ 0&0\end{array}\right)=\left(\begin{array}{cc}A_{0}^{T}-A_{0}&0\\ 0&0\end{array}\right).\]
Therefore, it holds that
\[c^{\prime}=\frac{1}{2}\left(\mathrm{rank}\left(A_{0}-A_{0}^{T}\right)\right)+ \mathrm{rank}\left(B_{r}B_{r}^{T}\right). \tag{6}\]
Now, taking into account that \(\langle B_{r}\rangle\) and \(\langle B_{t}\rangle\) are Euclidean orthogonal, by Equality (2), we get
\[c=\operatorname{rank}\left(B_{r}B_{r}^{T}\right)+\operatorname{rank}\left(B_{t}B _{t}^{T}\right)=\operatorname{rank}\left(B_{r}B_{r}^{T}\right)+t_{\ell}. \tag{7}\]
Combining equalities (6) and (7), the equalities for \(c^{\prime}\) in the statement are proved.
We finish this subsection by showing that a matrix \(A\) given in terms of the above matrices \(L_{j}\) is suitable for our purposes.
**Lemma 11**.: _Let \(h_{\boldsymbol{a}}(X)\) be a polynomial as in (5) and consider the companion matrix \(L_{j}:=L_{j}(h_{\boldsymbol{a}})\). Then_
\[\operatorname{rank}(L_{j}-L_{j}^{T})\geq j-2.\]
_Moreover, \(\operatorname{rank}(L_{j}-L_{j}^{T})=j-1\) if \(j\) is odd; otherwise, \(\operatorname{rank}(L_{j}-L_{j}^{T})=j\) if and only if_
\[1+a_{0}+a_{2}+a_{4}+\cdots+a_{j-2}\neq 0.\]
Proof.: For a start, it holds that
\[L_{j}-L_{j}^{T}\] \[=\left(\begin{array}{ccccccccc}0&1&0&0&0&\cdots&0&0&0&a_{0}\\ -1&0&1&0&0&\cdots&0&0&0&a_{1}\\ 0&-1&0&1&0&\cdots&0&0&0&a_{2}\\ \vdots&\vdots&\vdots&\vdots&\vdots&\cdots&\vdots&\vdots&\vdots&\vdots\\ 0&0&0&0&0&\cdots&-1&0&1&a_{j-3}\\ 0&0&0&0&0&\cdots&0&-1&0&1+a_{j-2}\\ -a_{0}&-a_{1}&-a_{2}&-a_{3}&-a_{4}&\cdots&-a_{j-4}&-a_{j-3}&-1-a_{j-2}&0\\ \end{array}\right).\]
The matrix \(L_{j}-L_{j}^{T}\) is skew-symmetric. This matrix is also alternate even in characteristic \(2\) because the elements of its diagonal vanish. Then its rank is even [14]. In addition, if one deletes the last two rows and the first and the last column, one gets a triangular matrix with ones in the diagonal. Then \(\operatorname{rank}(L_{j}-L_{j}^{T})\geq j-2\) and we have proved our first statement. Since the rank is even, the statement that \(\operatorname{rank}(L_{j}-L_{j}^{T})=j-1\) when \(j\) is odd is clear.
Finally assume that \(j\) is even. Then, \(\operatorname{rank}(L_{j}-L_{j}^{T})\) equals either \(j\) or \(j-2\). We are going to prove that this rank is \(j\) if and only if \(\det(L_{j}-L_{j}^{T})\neq 0\) if and only if \((1+a_{0}+a_{2}+a_{4}+\cdots+a_{j-2})^{2}\neq 0\) if and only if \((1+a_{0}+a_{2}+a_{4}+\cdots+a_{j-2})\neq 0\), which concludes the proof. Thus, the fact to prove is that, for \(j\) even,
\[\det(L_{j}-L_{j}^{T})=(1+a_{0}+a_{2}+a_{4}+\cdots+a_{j-2})^{2}.\]
Indeed, adding the odd rows to the \((j-1)\)th row, we get a new matrix with the same determinant and the same rows with the exception of the \((j-1)\)th one; all the entries of this \((j-1)\)th row are zeros with the exception of the last one, which is
\[1+a_{0}+a_{2}+a_{4}+\cdots+a_{j-2}.\]
Now we use the Laplace expansion along the \((j-1)\)th row, getting a unique non-vanishing summand. Next we again perform Laplace expansions using, successively, those rows having a single nonzero entry, equal to one (these are the odd rows of the initial matrix). Then,
it remains to compute the minor given by the determinant of the matrix
\[\left(\begin{array}{cccccccc}-1&1&0&0&\cdots&0&0&0\\ 0&-1&1&0&\cdots&0&0&0\\ 0&0&-1&1&\cdots&0&0&0\\ \vdots&\vdots&\vdots&\vdots&\cdots&\vdots&\vdots&\vdots\\ 0&0&0&0&\cdots&0&-1&1\\ -a_{0}&-a_{2}&-a_{4}&-a_{6}&\cdots&-a_{j-6}&-a_{j-4}&-1-a_{j-2}\end{array} \right).\]
Finally, adding all the columns in the above matrix to the last column, we get the following matrix (with the same determinant):
\[\left(\begin{array}{cccccccc}-1&1&0&0&\cdots&0&0&0\\ 0&-1&1&0&\cdots&0&0&0\\ 0&0&-1&1&\cdots&0&0&0\\ \vdots&\vdots&\vdots&\vdots&\cdots&\vdots&\vdots&\vdots\\ 0&0&0&\cdots&0&-1&0\\ -a_{0}&-a_{2}&-a_{4}&-a_{6}&\cdots&-a_{j-6}&-a_{j-4}&-(1+a_{0}+a_{2}+\cdots+a_ {j-2})\end{array}\right).\]
This proves that \(\det(L_{j}-L_{j}^{T})=\left(1+a_{0}+a_{2}+a_{4}+\cdots+a_{j-2}\right)^{2}\), and it finishes the proof.
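Lemma 11 is easy to check numerically for small cases. The sketch below is illustrative only; it reuses the hypothetical helpers `companion_matrix` and `rank_mod_p` introduced in the earlier sketches and verifies the predicted rank of \(L_{j}-L_{j}^{T}\) over a prime field.

```python
# Quick numerical sanity check of Lemma 11 (a sketch, not part of the paper);
# it reuses rank_mod_p and companion_matrix from the sketches above.

def check_lemma11(coeffs, p):
    j = len(coeffs)
    L = companion_matrix(coeffs, p)
    D = [[(L[i][k] - L[k][i]) % p for k in range(j)] for i in range(j)]
    rk = rank_mod_p(D, p)
    if j % 2 == 1:
        assert rk == j - 1
    else:
        # full rank iff 1 + a_0 + a_2 + ... + a_{j-2} is nonzero in F_p
        full = (1 + sum(coeffs[0::2])) % p != 0
        assert rk == (j if full else j - 2)
    return rk

print(check_lemma11([1, 0, 0, 1], 2))   # j = 4, 1 + a_0 + a_2 = 0 in F_2 -> rank 2
print(check_lemma11([1, 1, 1], 3))      # j = 3 over F_3 -> rank j - 1 = 2
```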
Keep the notation as in Remark 9. One can use the matrix \(L_{t-t_{\ell}}\left(h_{t-t_{\ell}}\right)\) as the box \(A_{1}\) in the matrix \(A\) considered in Theorem 10.
We look for matrices \(A_{0}\) of the type \(L_{j}\) to guarantee that \(A\) has no eigenvalue in \(\mathbb{F}_{q}\). Our next proposal allows us to get matrices of the mentioned type such that \(\operatorname{rank}(A_{0}-A_{0}^{T})\) is maximum. Note that, by Theorem 10, this fact enlarges the entanglement but also the dimension of the obtained codes.
To begin with, suppose that either the characteristic of the field \(\mathbb{F}_{q}\) is odd or it is even and \(t_{\ell}>2\). Then, set \(\tilde{\phi}_{t_{\ell}}:=X^{t_{\ell}}-X^{t_{\ell}-2}\in\mathbb{F}_{q}[X]\). The attached map
\[\varphi_{\tilde{\phi}_{t_{\ell}}}:\mathbb{F}_{q}\rightarrow\mathbb{F}_{q}\]
is \(\varphi_{\tilde{\phi}_{t_{\ell}}}(x)=\tilde{\phi}_{t_{\ell}}(x)\). As in Remark 9, \(\varphi_{\tilde{\phi}_{t_{\ell}}}\) is not bijective and one can find a nonzero element \(\xi_{1}\) in \(\mathbb{F}_{q}\) such that the polynomial \(\phi_{t_{\ell}}:=\tilde{\phi}_{t_{\ell}}-\xi_{1}\in\mathbb{F}_{q}[X]\) has no roots in \(\mathbb{F}_{q}\). In this case, \(L_{t_{\ell}}(\phi_{t_{\ell}})\) is a suitable matrix for being the matrix \(A_{0}\) involved in the box \((1,1)\) of \(A\).
When \(q\neq 2\), the characteristic of the field \(\mathbb{F}_{q}\) is two and \(t_{\ell}=2\), it suffices to consider the matrix \(L_{t_{2}}(\phi_{2})\) defined by \(\tilde{\phi}_{2}=\bar{h}_{2}\), where \(\bar{h}_{2}\) is the polynomial given in Remark 9, and consider \(\phi_{2}=\tilde{\phi}_{2}-\xi\), where \(\xi\neq 0,1\) is a suitable value in \(\mathbb{F}_{q}\). Notice that, in this case, a polynomial \(X^{2}+X-\varsigma\), \(\varsigma\in\mathbb{F}_{q}\), either has two different roots or it is irreducible.
**Corollary 12**.: _Let \(A_{0}\) and \(A_{1}\) be matrices as described in the above paragraphs. Then, the matrix_
\[A=\left(\begin{array}{cc}A_{0}Z^{-1}&0\\ 0&A_{1}\end{array}\right)\]
_can be used to provide a Steane enlargement \(\tilde{D}_{A}\) of the EAQECC \(\tilde{C}\) given in Theorem 10. Furthermore, the rank of the matrix \(A_{0}-A_{0}^{T}\) is as large as possible. That is to say, it is \(t_{\ell}\) (respectively, \(t_{\ell}-1\)) whenever \(t_{\ell}\) is even (respectively, odd)._
Proof.: The proof follows from the fact that our choice of \(A_{0}\) and \(A_{1}\) implies that they have no eigenvalue in \(\mathbb{F}_{q}\) and, then, the same happens to \(A\). Finally by Lemma 11, if \(t_{\ell}\) is even, the value \(1+a_{0}+a_{2}+a_{4}+\cdots+a_{j-2}\) corresponding to \(\phi_{t_{\ell}}\) is \(-\xi_{1}\neq 0\) and its rank is maximum.
**Remark 13**.: When \(t_{\ell}\) is odd, \(\operatorname{rank}(A_{0}-A_{0}^{T})=t_{\ell}-1\) is the only possibility if one considers matrices \(A_{0}\) of the type \(L_{j}\).
Assume now that \(t_{\ell}\) is even. If \(t_{\ell}\geq q\) and the characteristic of \(\mathbb{F}_{q}\) is two, we consider the polynomial \(\eta_{t_{\ell}}:=X^{t_{\ell}}-X^{t_{\ell}-q+1}-1\). Now assume that \(q\) is odd. Below we will show that there exists \(0\neq a\in\mathbb{F}_{q}\) such that the polynomial \(X^{2}+aX-1\in\mathbb{F}_{q}[X]\) has no roots in \(\mathbb{F}_{q}\). As a consequence, the polynomial \(\eta_{2w}=X^{2w}+aX^{w}-1\), \(w\) being an odd positive integer, has no roots in \(\mathbb{F}_{q}\). Therefore, if \(t_{\ell}\geq q\) when the characteristic is two and, otherwise, when \(t_{\ell}=2w\), the polynomials \(\eta_{t_{\ell}}\) provide matrices \(A_{0}:=L_{t_{\ell}}(\eta_{t_{\ell}})\) which are suitable for our purposes and satisfy \(\operatorname{rank}(A_{0}-A_{0}^{T})=t_{\ell}-2\). For the remaining cases, we have no generic candidate for \(A_{0}\) such that \(\operatorname{rank}(A_{0}-A_{0}^{T})=t_{\ell}-2\); however, for specific cases and moderate values of \(q\) and \(t_{\ell}\), it is not hard to find suitable polynomials \(\eta_{t_{\ell}}\) and attached matrices \(A_{0}\) with the above mentioned rank. Note also that the propagation rules stated in [35] do not work here since our codes do not come from the Hermitian construction.
It remains to prove that \(X^{2}+aX-1\) is irreducible for some \(a\neq 0\). It holds if and only if there exists \(0\neq a\in\mathbb{F}_{q}\) such that \(a^{2}+4\neq b^{2}\) for all \(b\in\mathbb{F}_{q}\). Consider the attached equation (1): \(x^{2}+4=y^{2}\), \(x,y\in\mathbb{F}_{q}\) and new variables \(x_{1}=(x+y)/2\) and \(y_{1}=(y-x)/2\) which satisfy \(x=x_{1}-y_{1}\) and \(y=x_{1}+y_{1}\). It follows that \((x,y)\) is a solution of (1) if and only if \((x_{1},y_{1})\) is a solution of the equation (2): \(x_{1}y_{1}=1\). Thus the map \(\zeta:\mathbb{F}_{q}^{*}\to\mathbb{F}_{q}\) defined \(\zeta(x_{1})=x_{1}-x_{1}^{-1}=x_{1}^{-1}(x_{1}^{2}-1)\) takes the solutions of (2) into the solutions of (1) and \(x_{1}=1\) and \(x_{1}=-1\) give solutions of (1) with \(x=0\). This proves that there is some value \(a\neq 0\) in \(\mathbb{F}_{q}\) satisfying that \(a^{2}+4=y^{2}\) has no solution in \(\mathbb{F}_{q}\), which concludes the proof.
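The existence claim can also be confirmed by brute force for moderate field sizes. The snippet below is illustrative only and assumes a prime field \(\mathbb{F}_{p}\) with \(p\) odd; it searches for a nonzero \(a\) such that \(X^{2}+aX-1\) has no root.

```python
# Brute-force check (illustrative sketch, prime field F_p with p odd) of the
# claim above: some nonzero a makes X^2 + aX - 1 root-free in F_p.
def shift_without_roots(p):
    for a in range(1, p):
        if all((x * x + a * x - 1) % p for x in range(p)):
            return a
    return None   # should not happen for odd p, by the argument above

print([shift_without_roots(p) for p in (3, 5, 7, 11, 13)])
```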
## 3. Steane enlargement of EAQECCs given by BCH codes
In this last section we study the Steane enlargement of EAQECCs given by some BCH codes. We divide it in two subsections. The first one shows our results while the second one gives a few examples.
### Results
BCH codes are cyclic codes but we prefer to regard them as subfield-subcodes of certain evaluation codes [6, 12]. In fact our BCH codes are \(J\)-affine variety codes in one variable with \(J=\{1\}\) as introduced in [16], and we will use some results from this source.
Set \(q=p^{m}\), \(m\geq 1\) and \(n=p^{m}-1\). We consider the evaluation map
\[\operatorname{ev}:\frac{\mathbb{F}_{q}[X]}{\langle X^{n}-1\rangle}\to\mathbb{F }_{q}^{n},\]
given by \(\operatorname{ev}(h)=(h(R_{1}),\ldots,h(R_{n}))\), where \(\{R_{i}\}_{i=1}^{n}\) is the set of \(n\)th roots of unity in the finite field \(\mathbb{F}_{q}\). Now define \(H:=\{0,1,\ldots,n-1\}\) and, for sets \(\emptyset\neq\Delta\subseteq H\), denote by \(C_{\Delta}\) the linear code over \(\mathbb{F}_{q}\) generated by \(\{\operatorname{ev}(X^{i}):i\in\Delta\}\). Consider also a positive integer \(s\neq m\) that divides \(m\); then BCH codes over \(\mathbb{F}_{p^{s}}\) are subfield-subcodes of codes \(C_{\Delta}\), that is, codes of the form \(\mathbb{F}_{p^{s}}^{n}\cap C_{\Delta}\). Given \(a\in H\), a minimal cyclotomic coset (with respect to \(n\) and \(s\)) is a set of elements \(\mathcal{I}_{a}:=\{ap^{\ell s}:\ell\geq 0\}\), where the products are carried out modulo \(n\) (i.e., both \(a\) and the elements in \(\mathcal{I}_{a}\) are representatives in \(H\) of classes in the congruence ring \(\mathbb{Z}_{n}\)). Denote by \(i_{a}\) the cardinality of \(\mathcal{I}_{a}\) and set \(\mathcal{I}_{a}^{R}=\mathcal{I}_{n-a}\), which we name the reciprocal coset of \(\mathcal{I}_{a}\). Moreover, \(\mathcal{I}_{a}\) is called symmetric when \(\mathcal{I}_{a}=\mathcal{I}_{a}^{R}\) and, otherwise, it is said to be asymmetric.
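For concrete computations it is handy to generate the minimal cyclotomic cosets directly. The helpers below are only a sketch (not the authors' code); they return each coset as a set, keyed by its minimal representative, together with a symmetry test via the reciprocal coset.

```python
# Illustrative helpers (not from the paper) for minimal cyclotomic cosets
# with respect to n = p^m - 1 and s: the coset of a, its cardinality i_a,
# the reciprocal coset, and the symmetric/asymmetric distinction.

def cyclotomic_coset(a, n, q):
    """I_a = {a q^l mod n : l >= 0}, here with the multiplier q = p^s."""
    coset, x = set(), a % n
    while x not in coset:
        coset.add(x)
        x = (x * q) % n
    return frozenset(coset)

def minimal_cosets(n, q):
    """All minimal cyclotomic cosets, keyed by their minimal representative."""
    seen, cosets = set(), {}
    for a in range(n):
        if a in seen:
            continue
        c = cyclotomic_coset(a, n, q)
        seen |= c
        cosets[min(c)] = c
    return cosets

def is_symmetric(coset, n):
    """I_a is symmetric iff it equals its reciprocal coset I_{n-a}."""
    return coset == frozenset((n - x) % n for x in coset)

# example: p = 2, m = 4, s = 1, so n = 15 and multiplier q = 2
cosets = minimal_cosets(15, 2)
print(sorted(cosets))                    # minimal representatives a_0 < a_1 < ...
print(cosets[3], is_symmetric(cosets[3], 15))
```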
Within each minimal cyclotomic coset, we take its minimal element for the natural ordering and denote by \(\mathcal{A}=\{a_{0}=0<a_{1}<\cdots<a_{z}\}\) the set of these minimal representatives. Then \(\{\mathcal{I}_{a_{\nu}}\}_{\nu=0}^{z}\) is the set of minimal cyclotomic cosets (with respect to \(n\) and \(s\)). We will use the following result which can be found in [21, propositions 1 and 2].
**Proposition 14**.: _Keep the above notation._
1. _Assume that_ \(\Delta=\cup_{\nu=\ell_{1}}^{\ell_{2}}\mathcal{I}_{a_{\nu}}\)_, where_ \(\ell_{1}<\ell_{2}\) _are in_ \(\{0,1,\ldots,z\}\)_. Then, the dimension of the subfield-subcode_ \(\mathbb{F}_{p^{s}}^{n}\cap C_{\Delta}\) _is equal to_ \(\sum_{\nu=\ell_{1}}^{\ell_{2}}i_{a_{\nu}}\)_. The same result is true when_ \(\Delta\) _is a union of nonconsecutive minimal cyclotomic cosets._
2. _Assume now that_ \(\ell_{1}=0\) _and denote by_ \(\delta\) _the minimum distance of the Euclidean dual code of_ \(\mathbb{F}_{p^{s}}^{n}\cap C_{\Delta}\)_, then_ \(\delta\geq a_{\ell_{2}+1}+1\)_._
For \(0\leq\ell<z\), denote \(\Delta(\ell)=\cup_{\nu=0}^{\ell}\mathcal{I}_{a_{\omega}}\) and \(\Delta(\ell)^{\perp_{e}}=H\setminus\cup_{\nu=0}^{\ell}\mathcal{I}_{a_{\nu}}^{R}\). Then, by [21, page 5], the Euclidean dual of \(\mathbb{F}_{p^{s}}^{n}\cap C_{\Delta(\ell)}\) coincides with the subfield-subcode \(\mathbb{F}_{p^{s}}^{n}\cap C_{\Delta(\ell)^{\perp_{e}}}\). We decompose \(\Delta(\ell)=\Delta_{r}\sqcup\Delta_{L}\), where \(\Delta_{r}\) consists of the asymmetric cosets in \(\Delta(\ell)\) whose reciprocal coset is not in \(\Delta(\ell)\) and \(\Delta_{L}\) is the union of the remaining asymmetric cosets and the symmetric ones. It is clear that \(\Delta(\ell)=\Delta_{r}\cup\Delta_{L}\) and \(\Delta_{r}\cap\Delta_{L}=\emptyset\).
The following result follows by applying Theorem 4 (where one considers \(C_{1}=C_{2}=\mathbb{F}_{p^{s}}^{n}\cap C_{\Delta(\ell)}\)) and the first displayed formula in [21, page 7].
**Theorem 15**.: _With the above notation, there exists an EAQECC whose parameters are_
\[\left[\left[n,n-2\sum_{\nu=0}^{\ell}i_{a_{\nu}}+c,\geq a_{\ell+1}+1;c\right] \right]_{p^{s}},\]
_where \(c=\#\Delta_{L}\), \(\#\) meaning cardinality._
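Using the coset helpers sketched above, the parameters promised by Theorem 15 can be evaluated for a concrete choice of \(p\), \(m\), \(s\) and \(\ell\). The code below is only an illustration of the formulae (the functions `minimal_cosets` and `is_symmetric` are the hypothetical helpers introduced earlier); for \(p=2\), \(m=4\), \(s=1\) and \(\ell=2\) it returns the parameters \([[15,2,\geq 6;5]]_{2}\).

```python
# Sketch (reusing the coset helpers above; not from the paper): parameters of
# the EAQECC of Theorem 15 for Delta(l) = I_{a_0} u ... u I_{a_l}, namely
# n, k = n - 2*sum(i_{a_nu}) + c, the bound d >= a_{l+1} + 1 and c = #Delta_L,
# where Delta_L collects the symmetric cosets and the asymmetric ones whose
# reciprocal coset also lies in Delta(l).

def theorem15_parameters(p, m, s, ell):
    n, q = p**m - 1, p**s
    cosets = minimal_cosets(n, q)
    reps = sorted(cosets)                          # a_0 = 0 < a_1 < ... < a_z
    chosen = [cosets[reps[v]] for v in range(ell + 1)]
    delta = set().union(*chosen)
    delta_L = sum(len(c) for c in chosen
                  if is_symmetric(c, n)
                  or frozenset((n - x) % n for x in c) <= delta)
    k = n - 2 * len(delta) + delta_L
    return n, k, reps[ell + 1] + 1, delta_L        # (n, k, d >=, c)

print(theorem15_parameters(2, 4, 1, 2))   # Delta(2) = I_0 u I_1 u I_3 -> (15, 2, 6, 5)
```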
The goal of this section is to compute the parameters of the Steane enlargement of some EAQECCs provided either by Theorem 15 or by the same procedure as in that theorem but associated to another codes \(C_{\Delta}\). Given a prime number \(p\) and distinct positive integers \(m\) and \(s\) as above, we define the positive integer \(B(p,m,s)\) as
\[B(p,m,s):=\begin{cases}(p^{s})^{\frac{m}{2s}}-1&\text{if $\frac{m}{s}$ is even},\\ (p^{s})^{\lceil\frac{m}{2s}\rceil}-p^{s}+1&\text{otherwise}.\end{cases}\]
We will use this integer frequently.
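For reference, \(B(p,m,s)\) is a one-line helper; the sketch below simply transcribes the definition.

```python
# Direct transcription of the definition of B(p, m, s) (a sketch).
def B(p, m, s):
    if (m // s) % 2 == 0:
        return (p ** s) ** (m // (2 * s)) - 1
    return (p ** s) ** (-(-m // (2 * s))) - p ** s + 1   # exponent is ceil(m / 2s)

print(B(2, 4, 1), B(2, 5, 1), B(2, 6, 1))   # 3, 7, 7
```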
The following result follows from Theorem 3 and Lemma 8 in [1].
**Proposition 16**.: _Let \(p,m\) and \(s\) be integers as above and consider another integer \(b\) such that \(0<b<B(p,m,s)\). Then,_
**a):**: \(\mathcal{I}_{b}\subseteq\mathcal{I}_{b}^{\perp_{e}}:=H\setminus\mathcal{I}_{b}^ {R}\)_._
**b):**: \(\#\mathcal{I}_{b}=\frac{m}{s}\) _and this equality also holds when_ \(b=B(p,m,s)\)_._
We are ready for stating our first new result in this section. It falls within the case described in Subsection 2.1. We state it and afterwards we give two corollaries.
**Theorem 17**.: _Let \(p\) be a prime number and \(m\) and \(s\) distinct positive integers as above. Set \(n=p^{m}-1\) and \(\mathcal{A}=\{a_{0}=0<a_{1}<\cdots<a_{z}\}\) the set of minimal representatives of the family of minimal cyclotomic cosets (with respect to \(n\) and \(s\)). Pick indices \(\ell_{1}<\ell_{2}<z\) such that \(a_{\ell_{2}}<B(p,m,s)\). Then, there is a Steane enlargement of an EAQECC as in Theorem 15 whose parameters are_
\[\left[\left[n,n-\frac{m}{s}(\ell_{1}+\ell_{2})-1,d^{\prime};1\right]\right]_{p^{ s}},\]
_where \(d^{\prime}\geq\min\left\{a_{\ell_{2}+1}+1,\left[\left(\frac{p^{s}+1}{p^{s}} \right)(a_{\ell_{1}+1}+1)\right]\right\}\)._
Proof.: With the notation as in Theorem 7, set \(C:=\mathbb{F}_{p^{s}}\cap C_{\Delta(\ell_{2})}\) and \(B_{r}\) (respectively, \(B_{t}\)) a generator matrix of the codes \(\mathbb{F}_{p^{s}}\cap C_{\Delta(\ell_{1})}\) (respectively, \(\mathbb{F}_{p^{s}}\cap C_{\Delta^{\prime}}\), where \(\Delta^{\prime}=\cup_{\nu=\ell_{1}+1}^{\ell_{2}}\mathcal{I}_{a_{\nu}}\)). Then, taking into account that the evaluation of a monomial \(X^{a}\) is orthogonal to that of \(X^{b}\) except when \(a+b\equiv 0\mod n\) (see [16, Proposition 2.2]), by Proposition 16 one deduces that \(\Delta_{L}=\mathcal{I}_{0}\) in the decomposition \(\Delta(\ell_{2})=\Delta_{r}\sqcup\Delta_{L}\) given before Theorem 15. Thus the value \(c\) of the EAQECC given by \(C\) is \(c=1\). Finally, if one applies Theorem 7 and considers propositions 14 and 16, one obtains a Steane enlargement of the EAQECC given by \(C\) with parameters
\[\left[\left[n,n-2\left(\frac{m\ell_{1}}{s}+1\right)-\left(\frac{m(\ell_{2}- \ell_{1})}{s}\right)+1,d^{\prime};1\right]\right]_{p^{s}},\]
where \(d^{\prime}\geq\min\left\{a_{\ell_{2}+1}+1,\left[\left(\frac{p^{s}+1}{p^{s}} \right)(a_{\ell_{1}+1}+1)\right]\right\}\). This concludes the proof.
Taking \(\ell_{1}=0\) and \(\ell_{2}=1\) in the above theorem, one gets the following result.
**Corollary 18**.: _Consider a prime number \(p\) and a positive integer \(n=p^{m}-1\) given by \(m>0\). Set \(s\neq m\) a positive integer such that \(s\) divides \(m\). Then, there exists a Steane enlargement of an EAQECC as in Theorem 15 with parameters \(\left[\left[n,n-\frac{m}{s}-1,3;1\right]\right]_{p^{s}}\)._
The following result is a bit weaker than Theorem 17 but depends only on an element in \(\mathcal{A}\).
**Corollary 19**.: _Keep the notation as in Theorem 17 and pick \(a_{\ell}\in\mathcal{A}\) such that \(a_{\ell}<B(p,m,s)\). Then, there is a Steane enlargement of an EAQECC as in Theorem 15 whose parameters are_
\[\left[\left[n,n-\frac{m}{s}(2\ell-1)-1,a_{\ell+1};1\right]\right]_{p^{s}}.\]
Proof.: Apply Theorem 17 for \(\ell_{1}=\ell-1\) and \(\ell_{2}=\ell\). The proof follows from the fact that \(a_{\ell+1}=a_{\ell}+2\) if \(a_{\ell}+1\) is a multiple of \(p^{s}\) and \(a_{\ell+1}=a_{\ell}+1\) otherwise. Indeed, it is clear that \(a_{i}=i\) whenever \(i<p^{s}\) and that, when \(a_{\ell}+1\) is a multiple of \(p^{s}\), then \(a_{\ell+1}>a_{\ell}+1\). Thus, when \(m/s\) is even, it suffices to prove that the elements of the form \(b+\lambda p^{s}<p^{m/2}-1\), \(b,\lambda\) positive integers and \(0\neq b<p^{s}\) are minimal representatives in \(\mathcal{A}\). This is true because \(b\) is the first element in the \(p^{s}\)-adic expansion of \(b+\lambda p^{s}\) and, expressing the \(p^{s}\)-adic expansion \(a_{0}+a_{1}p^{s}+\cdots+a_{\lfloor\frac{n}{s}\rfloor}p^{\lfloor\frac{n}{s}\rfloor s}\) of an integer as \((a_{0},a_{1},\ldots,a_{\lfloor\frac{n}{s}\rfloor})\), the \(p^{s}\)-adic expansions of the elements in the coset \(\mathcal{I}_{b+\lambda p^{s}}\) are obtained by successively shifting the \(p^{s}\)-adic expansion of \(b+\lambda p^{s}\). For a while, these shifts are \(p^{s}\)-adic expansions with a zero in the first position, and when one obtains a nonzero in the first position, the corresponding value is larger than \(p^{m/2}-1\). An analogous reasoning proves the result in the case when \(m/s\) is odd.
The following result is also supported by Theorem 7. We keep the same conditions as in Theorem 17. That is, we consider a prime number \(p\), and \(m\) and \(s\) different positive integers such that \(s\) divides \(m\). Let \(\mathcal{A}=\{a_{0}=0<a_{1}<\cdots<a_{z}\}\) be the set of minimal representatives of the minimal cyclotomic cosets (with respect to \(n=p^{m}-1\) and \(s\)).
**Theorem 20**.: _Let \(\ell_{1}<\ell_{2}<z\) be two indices such that \(a_{\ell_{2}}<B(p,m,s)\). Then, there is a Steane enlargement of an EAQECC determined by a code \(C_{\Delta}\) with parameters:_
\[\left[\left[n,n-\frac{m}{s}(\ell_{1}+\ell_{2})-1,d^{\prime};c\right]\right]_{p^ {s}},\]
_where \(d^{\prime}\geq\min\left\{a_{\ell_{1}+1}+a_{\ell_{2}+1},\left[2\left(\frac{p^{s }+1}{p^{s}}\right)a_{\ell_{1}+1}\right]\right\}\) and \(c=1+2\frac{m}{s}\ell_{1}\)._
Proof.: For a nonnegative integer \(\ell\), define \(\Delta(\ell,R):=\cup_{\nu=0}^{\ell}\mathcal{I}_{a_{\nu}}\bigcup\cup_{\nu=1}^{ \ell}\mathcal{I}_{a_{\nu}}^{R}\). Now, with the notation as in Theorem 7, set \(C:=\mathbb{F}_{p^{s}}\cap C_{\Delta(\ell_{1},R)\cup\Delta^{\prime}}\), where \(\Delta^{\prime}=\cup_{\nu=\ell_{1}+1}^{\ell_{2}}\mathcal{I}_{a_{\nu}}\). Fix \(B_{r}\) (respectively, \(B_{t}\)) a generator matrix of the code \(\mathbb{F}_{p^{s}}\cap C_{\Delta(\ell_{1},R)}\) (respectively, \(\mathbb{F}_{p^{s}}\cap C_{\Delta^{\prime}}\)).
To compute \(c\), we reason as in the proof of Theorem 17 and \(c=1+2\frac{m}{s}\ell_{1}\), because the set \(\Delta_{L}\) in the decomposition \(\Delta(\ell_{1},R)\cup\Delta^{\prime}=\Delta_{r}\sqcup\Delta_{L}\) given before Theorem 15 is \(\Delta(\ell_{1},R)\). With respect to the distance, we notice that the values in \(\Delta(\ell,R)\) contain all the consecutive integers from \(0\) to \(a_{\ell+1}-1\) and their opposites modulo \(n\). Then, using the \(*\) product defined by \((x_{1},\ldots,x_{n})*(y_{1},\ldots,y_{n})=(x_{1}y_{1},\ldots,x_{n}y_{n})\), we deduce that there is a code which is isometric to \(C_{\Delta(\ell,R)}\) containing the evaluation of consecutive powers of \(X\). This proves that the minimum Hamming distance of \(\langle B_{r}\rangle^{\perp_{e}}\) is larger than or equal to \(2a_{\ell_{1}+1}\). A similar reasoning shows that \(d_{H}(C^{\perp_{e}})\geq a_{\ell_{1}+1}+a_{\ell_{2}+1}\), which ends the proof.
Our last results fit with the case described in Subsection 2.2. We start with the following one.
**Theorem 21**.: _Let \(p\) be a prime number and \(m\) and \(s\) positive integers such that \(s\) divides \(m\), \(s\neq m\). Assume that \(m\) and \(m/s\) are even. Set \(n=p^{m}-1\) and \(\mathcal{A}=\{a_{0}=0<a_{1}<\cdots<a_{z}\}\) the set of minimal representatives of the family of minimal cyclotomic cosets (with respect to \(n\) and \(s\)). Let \(0<\ell_{2}<z\) be the index such that \(a_{\ell_{2}}=p^{\frac{m}{2}}-1\). Consider an index \(0\leq\ell_{1}<\ell_{2}\). Then, there is a Steane enlargement of an EAQECC as in Theorem 15 whose parameters are_
\[\left[\left[n,n-\frac{m}{s}(\ell_{1}+\ell_{2})-1+\frac{m}{2s},d^{\prime};\frac {m}{2s}+1\right]\right]_{p^{s}},\]
_where \(d^{\prime}\geq\min\left\{a_{\ell_{2}+1}+1,\left[\left(\frac{p^{s}+1}{p^{s}} \right)(a_{\ell_{1}+1}+1)\right]\right\}\)._
Proof.: We desire to use Theorem 10. Consider \(C:=\mathbb{F}_{p^{s}}\cap C_{\Delta(\ell_{2})}\) and a suitable generator matrix \(B_{r}\) (respectively, \(B_{t}\)) of the subfield-codes \(\mathbb{F}_{p^{s}}\cap C_{\Delta(\ell_{1})}\) (respectively, \(\mathbb{F}_{p^{s}}\cap C_{\Delta^{\prime}}\), where \(\Delta^{\prime}=\cup_{\nu=\ell_{1}+1}^{\ell_{2}}\mathcal{I}_{a_{\nu}}\)). With the notation used after Proposition 14 and by [16, Remark 3.4], it holds that \(\Delta_{L}=\mathcal{I}_{0}\cup\mathcal{I}_{a_{\ell_{2}}}\) and \(\Delta_{r}=\cup_{\nu=1}^{\ell_{2}-1}\mathcal{I}_{a_{\nu}}\) in the decomposition \(\Delta(\ell_{2})=\Delta_{r}\sqcup\Delta_{L}\). Theorem 15 and Proposition 16 show that the value \(c\) of the EAQECC given by \(C\) is \(c=1+\#\mathcal{I}_{a_{\ell_{2}}}=1+(m/s)\).
Finally, the proof follows by applying Theorem 10 and Corollary 12 after noticing that \(m/s\) is the rank of a suitable matrix \(A_{0}-A_{0}^{T}\) as introduced in Subsection 2.2 and \(\operatorname{rank}(B_{r}B_{r}^{T})=1\). Thus we have proved that there is a Steane enlargement of the EAQECC given by \(C\) with parameters as in the statement.
As in the proof of Theorem 20, we can add reciprocal cosets and the obtained result is the following one.
**Theorem 22**.: _The following statements hold._
**i):**: _Keep the same notation and requirements as in Theorem_ 20_, then there exists a Steane enlargement of an EAQECC determined by a code_ \(C_{\Delta}\) _with parameters:_
\[\left[\left[n,n-\frac{m}{s}(\ell_{1}+\ell_{2})-1,d^{\prime};c^{\prime}\right] \right]_{p^{s}},\]
_where_ \(d^{\prime}\geq\min\left\{2a_{\ell_{2}+1},\left[2\left(\frac{p^{s}+1}{p^{s}} \right)a_{\ell_{1}+1}\right]\right\}\) _and_ \(c^{\prime}=1+\frac{m}{s}(\ell_{1}+\ell_{2})\)_._
**ii):**: _If we are under the notation and conditions of Theorem_ 21_, then there is a Steane enlargement of an EAQECC with parameters_
\[\left[\left[n,n-\frac{m}{s}\left(\ell_{1}+\ell_{2}-\frac{1}{2}\right)-1,d^{ \prime};c^{\prime}\right]\right]_{p^{s}},\]
_where_
\[d^{\prime}\geq\min\left\{a_{\ell_{2}+1}+a_{\ell_{2}}-1,\left[2\left(\frac{p^{ s}+1}{p^{s}}\right)a_{\ell_{1}+1}\right]\right\}\]
_and_ \(c^{\prime}=1+\frac{m}{s}(\ell_{1}+\ell_{2}-\frac{1}{2})\)_._
Proof.: To prove the first statement, it suffices to consider \(C:=\mathbb{F}_{p^{s}}\cap C_{\Delta(\ell_{2},R)}\) and suitable matrices \(B_{r}\) and \(B_{t}\), where \(B_{r}\) generates the subfield-code \(\mathbb{F}_{p^{s}}\cap C_{\Delta(\ell_{1},R)}\) and, then, apply the same procedure as in the proof of Theorem 21. A proof for the second statement is analogous after taking into account that the coset \(\mathcal{I}_{a_{\ell_{2}}}\) is symmetric. Recall that in this last case \(a_{\ell_{2}}=p^{\frac{m}{2}}-1\).
**Remark 23**.: Under the conditions described in the second paragraph of Remark 13, we can also obtain Steane enlargements of EAQECCs whose parameters coincide with those stated in Theorems 21 and 22 except for the entanglement \(c^{\prime}\) and the dimension, which are one unit smaller. To do it, it suffices to consider matrices \(A_{0}\) defined by polynomials \(\eta_{t_{\ell}}\) as in the mentioned Remark 13.
### Examples
We conclude this section and the paper by providing parameters of EAQECCs obtained from some results given in Subsection 3.1. Note that Steane enlarged codes enjoy the interesting computational advantages described after Theorem 1. We present some EAQECCs with parameters that we have not found in the literature and cannot be obtained from existing ones by means of propagation rules.
We give three tables of \(q\)-ary EAQECCs for \(q=2,4,9\). Our tables show the parameters of the codes and the results used to deduce them. The tables also present the values \(m\), \(s\), \(a_{\ell_{1}}\) and \(a_{\ell_{2}}\) used in our computations.
Table 1 shows some examples in the binary case. To enhance understanding we explain three cases in detail. The same procedure allows us to obtain the parameters of all our tables. The first binary code in the table has parameters \([[15,4,6;3]]_{2}\), and these follow from the formula in Theorem 21, where \(m=4\), \(s=1\), \(\ell_{1}=1\), \(a_{\ell_{1}}=1\), \(\ell_{2}=2\), \(a_{\ell_{2}}=3\) and \(a_{3}=5\). If one applies Theorem 17 for \(m=5\), \(s=1\), \(\ell_{1}=1\), \(a_{\ell_{1}}=1\), \(\ell_{2}=2\), \(a_{\ell_{2}}=3\) and \(a_{3}=5\), then one gets a code with parameters \([[31,15,6;1]]_{2}\) as in the second line of the table. Finally, the binary code \([[63,32,14;31]]_{2}\) in the eighth row of the table can be constructed from Theorem 22 by noticing that \(m=6\), \(s=1\), \(\ell_{1}=2\), \(a_{\ell_{1}}=3\), \(\ell_{2}=3\), \(a_{\ell_{2}}=5\) and \(a_{4}=7\). To the best of our knowledge, EAQECCs in Table 1 are new with the exception of those marked with a *. We think they are good since in some cases they compare well with others in the literature. Indeed, our code \([[15,4,6;3]]_{2}\) (respectively, \([[63,44,8;13]]_{2}\)) is better than \([[15,4,6;5]]_{2}\) (respectively, \([[63,42,8;14]]_{2}\)) appearing in [25]. This last reference also contains the codes marked with a *, which are the best known codes with those parameters. We add them to show that we are able to get good codes.
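The three worked cases above can be reproduced directly from the parameter formulae of Theorems 17, 21 and 22(i). The sketch below is not the authors' code; the distance entries are the lower bounds given by the theorems (here they are already integers, so whether one reads the brackets as a ceiling is immaterial).

```python
# Sketch: reproducing the three quoted table entries from the parameter
# formulae of Theorems 17, 21 and 22(i). Here a_l1p1 stands for a_{l1+1} and
# a_l2p1 for a_{l2+1}; each function returns (n, k, d_bound, c).
from math import ceil

def thm17(p, s, m, l1, l2, a_l1p1, a_l2p1):
    n, q = p**m - 1, p**s
    k = n - (m // s) * (l1 + l2) - 1
    d = min(a_l2p1 + 1, ceil((q + 1) / q * (a_l1p1 + 1)))
    return n, k, d, 1

def thm21(p, s, m, l1, l2, a_l1p1, a_l2p1):
    n, q, half = p**m - 1, p**s, m // (2 * s)
    k = n - (m // s) * (l1 + l2) - 1 + half
    d = min(a_l2p1 + 1, ceil((q + 1) / q * (a_l1p1 + 1)))
    return n, k, d, half + 1

def thm22i(p, s, m, l1, l2, a_l1p1, a_l2p1):
    n, q = p**m - 1, p**s
    k = n - (m // s) * (l1 + l2) - 1
    d = min(2 * a_l2p1, ceil(2 * (q + 1) / q * a_l1p1))
    return n, k, d, 1 + (m // s) * (l1 + l2)

print(thm21(2, 1, 4, 1, 2, a_l1p1=3, a_l2p1=5))   # -> (15, 4, 6, 3)
print(thm17(2, 1, 5, 1, 2, a_l1p1=3, a_l2p1=5))   # -> (31, 15, 6, 1)
print(thm22i(2, 1, 6, 2, 3, a_l1p1=5, a_l2p1=7))  # -> (63, 32, 14, 31)
```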
Table 2 (respectively, 3) provides some examples of new 4-ary (respectively, 9-ary) EAQECCs obtained with our results. Furthermore, according to Remark 23, the values \(k\) and \(c\) can be decreased by one unit whenever we apply Theorems 21 and 22. Indeed, one can use the polynomials \(\eta_{t_{\ell}}\) given in Remark 13 after noticing that \(t_{\ell}=6\) in the corresponding cases of Table 3.
## Conflict of interest
The authors declare they have no conflict of interest.
\begin{table}
\begin{tabular}{|c|c|c|c|c||c|c|c|} \hline \(n\) & \(k\) & \(d\geq\) & \(c\) & Result & \((m,s)\) & \(a_{\ell_{1}}\) & \(a_{\ell_{2}}\) \\ \hline
15 & 4 & 6 & 3 & Theorem 21 & (4,1) & 1 & 3 \\
31 & 15 & 6 & \(1^{*}\) & Theorem 17 & (5,1) & 1 & 3 \\
31 & 15 & 8 & 11 & Theorem 20 & (5,1) & 1 & 3 \\
63 & 44 & 8 & 13 & Theorem 20 & (6,1) & 1 & 3 \\
63 & 32 & 8 & \(1^{*}\) & Theorem 17 & (6,1) & 3 & 5 \\
63 & 44 & 9 & 19 & Theorem 22 & (6,1) & 1 & 3 \\
63 & 38 & 9 & 13 & Theorem 20 & (6,1) & 1 & 5 \\
63 & 32 & 14 & 31 & Theorem 22 & (6,1) & 3 & 5 \\
127 & 70 & 12 & 1 & Theorem 17 & (7,1) & 5 & 9 \\
127 & 84 & 14 & 29 & Theorem 20 & (7,1) & 3 & 7 \\
127 & 70 & 18 & 43 & Theorem 20 & (7,1) & 5 & 7 \\
255 & 214 & 12 & 33 & Theorem 20 & (8,1) & 3 & 5 \\
255 & 214 & 14 & 41 & Theorem 22 & (8,1) & 3 & 5 \\
255 & 190 & 18 & 49 & Theorem 20 & (8,1) & 5 & 9 \\ \hline \end{tabular}
\end{table}
Table 1. Parameters of binary EAQECCs
\begin{table}
\begin{tabular}{|c|c|c|c||c|c|c|c|} \hline \(n\) & \(k\) & \(d\geq\) & \(c\) & Result & \((m,s)\) & \(a_{\ell_{1}}\) & \(a_{\ell_{2}}\) \\ \hline
63 & 59 & 3 & 1 & Theorem 17 & (6,2) & 0 & 1 \\
63 & 26 & 17 & 31 & Theorem 20 & (6,2) & 6 & 9 \\
63 & 23 & 19 & 37 & Theorem 20 & (6,2) & 7 & 9 \\
63 & 23 & 20 & 40 & Theorem 22 & (6,2) & 7 & 9 \\
63 & 14 & 23 & 43 & Theorem 20 & (6,2) & 9 & 11 \\
63 & 11 & 26 & 52 & Theorem 22 & (6,2) & 10 & 11 \\
1023 & 1017 & 3 & 1 & Theorem 17 & (10,2) & 0 & 1 \\
1023 & 1007 & 4 & 1 & Theorem 17 & (10,2) & 1 & 2 \\
1023 & 892 & 36 & 121 & Theorem 20 & (10,2) & 15 & 18 \\
1023 & 887 & 37 & 131 & Theorem 20 & (10,2) & 17 & 18 \\
1023 & 887 & 38 & 136 & Theorem 22 & (10,2) & 17 & 18 \\
1023 & 877 & 40 & 141 & Theorem 20 & (10,2) & 18 & 19 \\
1023 & 877 & 42 & 146 & Theorem 22 & (10,2) & 18 & 19 \\
1023 & 847 & 50 & 176 & Theorem 22 & (10,2) & 22 & 23 \\
1023 & 602 & 113 & 411 & Theorem 20 & (10,2) & 54 & 57 \\
1023 & 597 & 115 & 421 & Theorem 20 & (10,2) & 55 & 57 \\ \hline \end{tabular}
\end{table}
Table 2. Parameters of 4-ary EAQECCs |
2303.02367 | Perirobot space representation for HRI: measuring and designing
collaborative workspace coverage by diverse sensors | Two regimes permitting safe physical human-robot interaction, speed and
separation monitoring and safety-rated monitored stop, depend on reliable
perception of the space surrounding the robot. This can be accomplished by
visual sensors (like cameras, RGB-D cameras, LIDARs), proximity sensors, or
dedicated devices used in industrial settings like pads that are activated by
the presence of the operator. The deployment of a particular solution is often
ad hoc and no unified representation of the interaction space or its coverage
by the different sensors exists. In this work, we make first steps in this
direction by defining the spaces to be monitored, representing all sensor data
as information about occupancy and using occupancy-based metrics to calculate
how a particular sensor covers the workspace. We demonstrate our approach in
two (multi-)sensor-placement experiments in three static scenes and one
experiment in a dynamic scene. The occupancy representation allows us to compare
the effectiveness of various sensor setups. Therefore, this approach can serve
as a prototyping tool to establish the sensor setup that provides the most
efficient coverage for the given metrics and sensor representations. | Jakub Rozlivek, Petr Svarny, Matej Hoffmann | 2023-03-04T10:03:17Z | http://arxiv.org/abs/2303.02367v2 | Perirobot space representation for HRI: measuring and designing collaborative workspace coverage by diverse sensors
###### Abstract
Two regimes permitting safe physical human-robot interaction, speed and separation monitoring and safety-rated monitored stop, depend on reliable perception of the space surrounding the robot. This can be accomplished by visual sensors (like cameras, RGB-D cameras, LIDARs), proximity sensors, or dedicated devices used in industrial settings like pads that are activated by the presence of the operator. The deployment of a particular solution is often ad hoc and no unified representation of the interaction space or its coverage by the different sensors exists. In this work, we make first steps in this direction by defining the spaces to be monitored, representing all sensor data as information about occupancy and using occupancy-based metrics to calculate how a particular sensor covers the workspace. We demonstrate our approach in two (multi-)sensor-placement experiments in three static scenes and one experiment in a dynamic scene. The occupancy representation allows us to compare the effectiveness of various sensor setups. Therefore, this approach can serve as a prototyping tool to establish the sensor setup that provides the most efficient coverage for the given metrics and sensor representations.
## I Introduction
Safety is a fundamental requirement of physical human-robot interaction. While there are multiple approaches to ascertain it [1], these were funneled in the industrial context into four collaborative operations by the technical specification for collaborative robots ISO 10218 [2]. Two of those operations (so-called safety-rated monitored stop and speed and separation monitoring) depend on sensors that monitor the space surrounding the robot.
Risk assessment is a crucial part of the safety evaluation of a robot application according to the standard [2]. However, in practice, this risk assessment relies on experience and general guidelines, not on a formal approach. This article proposes to use a unified occupancy-based modeling approach for the space surrounding the robot, i.e., the _perirobot space_ (PeRS). This modeling allows the comparison of various sensor setups and the formal evaluation of their workspace coverage. An illustration of the scenarios studied in this work is in Fig. 1.
**Contribution.** The contributions of this paper are the following:
* A unified robot surrounding space representation approach.
* Demonstration of the main benefits of this approach: the delimitation of occlusions, explicit definition of sensor data representation, and a clear description of the space monitored by the sensors.
* Novel use of established metrics to evaluate coverage of regions of interest.
* Application of the approach on an optimal (multi-) sensor coverage task on four different scenes.
* A publicly available code repository with the perirobot space implementation1. Footnote 1: [https://github.com/ctu-vras/perirobot-space](https://github.com/ctu-vras/perirobot-space)
## II Related work
The origin of our research is in safe human-robot interaction (HRI). While much work in safety research has been done on the side of the robot control algorithms, we focus in this article on workspace monitoring. Space that is not monitored is unknown to the robot, and any activity (or lack thereof) cannot affect the robot's behavior. Four fields of research are relevant to this problem.
Fig. 1: Studied scenes represented as occupancy models. Height is color-coded in the simulation output.

First, sensor coverage approaches are relevant (see, for example, [3, 4]). Nevertheless, they often aim at coverage from a theoretical perspective (thus using only 2D coverage) and do not combine multiple types of sensors. An exception in this respect is the recently published work by Oščádal et al. [5]. They present an approach where they determine the importance of individual voxels in the shared human-robot workspace and arrange cameras to provide the highest coverage of the monitored space. However, they do not take into account occlusions caused by the human or other objects in the scene.
The second research field, occlusion mitigation or gaze control, as in [6, 7] or [8], is also related as it shares the aim to monitor a region of space efficiently. However, this region, as opposed to coverage approaches, is small (usually merely a target point). For safe interaction, all the relevant robot surrounding space needs to be considered.
Sensor fusion approaches, the third relevant research area, do not address coverage, but focus on the fusion of various sensor inputs (see [9]). Still, safety-related works such as [10, 11] leverage sensor fusion to ascertain sufficient workspace coverage. An interesting addition in this respect is [12] which evaluates two types of sensors (albeit not used together in the same setup) for 2D and 3D coverage.
The last research area that we discuss deals with the safety aspect. While collaboration methods were introduced already in [2], safe HRI was investigated more after the technical specification for collaborative robots ISO/TS 15066 [13] provided details about them. The specification describes two methods for safe operation that rely on workspace monitoring: safety-rated monitored stop (SRMS) and speed and separation monitoring (SSM). The SRMS specification demands that the system detects the presence of a person within the intended workspace and keeps the robot in a monitored stop state if a person is present. For SSM, it is necessary to maintain at least the protective separation distance between the human and robot at all times. Thus, SSM allows for a higher resolution where, for example, individual human keypoints can be monitored. SSM-motivated monitoring is a vivid field of research, where methods of implementing efficient sensing and distance measurement algorithms are studied (e.g., Marvel [14, 15]).
Specific aspects of workspace monitoring have also been studied. For this work, investigations of the nature of the relative distances between human and robot keypoints are highly relevant; see [16] for a comparison of approaches and their depth-field-based method. Also relevant are studies of the robot representation and its possible dynamic adjustment based on the human or robot velocity: from the kinetostatic danger field to sphere-swept-lines-based bounding volumes [10, 17, 18, 19, 20, 21]. These approaches, however, focus on robot control and not on monitoring itself.
Finally, let us mention that the majority of the presented papers investigate camera-based monitoring (see also [22]). However, there are other alternatives for safety-related monitoring, e.g., time-of-flight sensors [23] or proximity sensing [24]. Proximity sensing in particular is a promising and recent avenue of research; it focuses on robot-mounted sensors that can detect obstacles up to 0.5 m from the robot surface, see [25] for a thorough review.
## III Proposed approach
Our interest in the space surrounding the robot is motivated by safe HRI. The inspiration comes from the so-called peripersonal space [26] which was also described as a "margin of safety" when avoidance behavior is concerned. However, perirobot space aims to be a more general approach that is not limited by the human-likeness of peripersonal space (e.g., sensors are not limited only to the robot's body). We first discuss the region of interest for the proposed approach. Then we present the modeling and representation of sensors and the notion of perirobot space itself. Finally, we present the metrics used for its evaluation and the modeling approach.
### _Region of interest_
We always study a given region of interest, the volume where HRI can occur. In our approach, we study one robot-centered and one human-centered region. The robot-centered region is defined simply as the (semi-)sphere centered at the robot whose radius is given by the robot's end-effector's maximal reach, see Fig. 1(a). This definition of the 'robot' space signifies that for safe interaction it is necessary to cover the robot's full reach. We designate as the 'human' space the bounding box enclosing the human that is next to the robot, see space \(h\) in Fig. 1(b). This representation means that this space needs to be efficiently monitored in order to ascertain safe interaction.
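As a rough illustration of these two regions, the sketch below voxelizes a robot-centered (semi-)sphere and a human bounding box. The grid resolution, robot base position, reach, and box corners are made-up placeholder values, not taken from the experiments.

```python
import numpy as np

voxel = 0.05                                    # assumed voxel edge length [m]
shape = (80, 80, 40)                            # assumed 4 x 4 x 2 m workspace
centres = (np.indices(shape).reshape(3, -1).T + 0.5) * voxel

base, reach = np.array([2.0, 2.0, 0.8]), 0.85   # assumed robot base and max reach [m]
# 'robot' space: semi-sphere of the robot's reach above the table plane
robot_roi = ((np.linalg.norm(centres - base, axis=1) <= reach)
             & (centres[:, 2] >= base[2])).reshape(shape)

lo, hi = np.array([0.5, 2.5, 0.0]), np.array([1.1, 3.1, 1.9])   # assumed human box [m]
# 'human' space: axis-aligned bounding box around the person next to the robot
human_roi = np.all((centres >= lo) & (centres <= hi), axis=1).reshape(shape)
```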
The surrounding space can also be defined with respect to the task, e.g., focusing on the human partner. Following definitions from [27, Chapter 3], for example, we could also distinguish either the robot's workspace (i.e., positions reachable by the end-effector) or the robot's envelope (i.e., the total volume of space occupied by the robot during these positions). There are even more elaborate approaches to capture the space surrounding the robot, see also Sec. II.
### _Sensor modeling_
However, none of the mentioned definitions consider the sensors and their effect. The monitoring of the surrounding space, together with the robot's properties, determines how the robot will be controlled. For example, a sensor that should trigger the robot's stop at full speed in order to prevent a collision needs to be placed so that it gives the robot sufficient time to brake. Therefore, sensor information should also be part of the representation of the space surrounding the robot. However, sensor monitoring also introduces new challenges: occlusions and, especially, sensor data representation.
The data from the sensors monitoring the robot surrounding space are always represented in some manner. This representation is a deliberate choice and is not defined only by the collected raw sensor data itself. For example, a line laser sensor can be both a switch (i.e., something passed in front of the sensor) or it can serve as a so-called profiler (i.e., measure the profile of the object under the laser).
Additionally, the representation of the sensor reading is specific to the robot, the task, and the available sensors. The choice of representation can lead to different control decisions for the robot. For example, the collision classification [28] for a pressure sensor on the robot can have at least two representations. A collision can be interpreted as a stop signal or an impulse to move away.
Our approach is to model the representation of the sensor readings as information about space occupancy. While safety-related research focuses on detecting the human, we aim to represent the space surrounding the robot as a whole as it is perceived by the sensors. Therefore our approach represents any relevant sensor data as occupied space, be it a human, robot or an object. We can distinguish three main ways of sensor representation:
* **Naive representation.** Pure occupancy information provided from the sensor.
* **Volume-based.** The sensor information is represented as occupancy of a predetermined volume (e.g., pressing a floor pad leads to the assumption that the whole space above the pad is occupied).
* **Feature-based.** Sensor data are used to determine features (often human keypoints) and their locations. This representation can be further extended by determining a bounding box around the human, the feature neighborhoods (e.g., surrounding spheres), or volumes between features (e.g., cylinders connecting human keypoints) which are all considered as occupied space.
We analyze five sensors in this study. Our assumption is that, thanks to the simple representation, any relevant sensor could be represented in a similar manner. Namely, each sensor fits roughly into one of three types:
* **Area.** Sensors activated by events in a given area in the workspace.
* **Range.** Sensors activated based on the range from the robot.
* **Zone.** Sensors safeguarding a given zone but not necessarily monitoring the zone itself, e.g., a gate monitoring the entrance to the zone.
All of the analyzed sensors are defined by their 6D poses and additional defining parameters. The sensors and their defining parameters provided in the parentheses are the following:
* **RGB camera.** Pinhole camera model, simulated as ray-casting (field of view, resolution), without the depth information provided.
* **RGB-D camera.** Pinhole camera model, simulated as ray-casting (field of view, resolution), see Fig. 2(a).
* **LIDAR sensor.** Arc of rays (field of view, range), see Fig. 2(b).
* **Pressure pads.** Defined by their area and active if there are any contact points, i.e., occupied voxels right above the pad, designates all space above the pad as occupied (dimensions), see 'P' in Fig. 2(c).
* **Robot proximity cover.** Inflation of the robot model volume (inflation value), see 'RP' in Fig. 2(c).
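To make the volume-based pad representation above concrete, here is a minimal sketch of how an activated pressure pad can be turned into occupancy information. The integer label constants, the function name, and the rule that an inactive pad marks its column as free are our assumptions for illustration, not the released implementation.

```python
import numpy as np

OCC, FREE, UNKNOWN = 0, 1, 2   # assumed voxel labels

def pad_prediction(truth, pad_mask):
    """Volume-based representation of a floor pad.

    truth: 3D array of ground-truth labels (z is the last axis).
    pad_mask: 2D boolean footprint of the pad on the floor.
    The pad is active if any voxel of the contact layer above it is occupied;
    an active pad marks the whole column above it as occupied, an inactive
    pad marks it as free, and everything outside the footprint stays unknown.
    """
    pred = np.full(truth.shape, UNKNOWN, dtype=np.int8)
    active = np.any(truth[:, :, 0][pad_mask] == OCC)   # contact-layer check
    pred[pad_mask] = OCC if active else FREE           # fill the column above the pad
    return pred
```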
Fig. 3: Visualization of occupancy based representations. The space monitored by the sensors is in green, while obstacles are in blue.
Fig. 2: Two regions of interest with a single sensor—a camera—and its perception space \(m\). We distinguish four types of the space in the region of interest: occupied (red), free (green), unknown monitored free space (striped), unknown monitored occupied space (grey) and not monitored free space in the region of interest (dotted). Note that the occupied space is drawn this large merely for illustration purposes. In practice, only the surfaces of the captured objects are registered.
The combinations of sensors and representations used in this paper are listed in Tab. I.
### _Perirobot space_
The _perirobot space_ (PeRS) arises from this combination of a region of interest and occupancy-represented sensor data. A simplified representation of PeRS is in Fig. 2. This schema shows the monitored space \(m\) of the camera and two regions of interest, the robot and human space. Notice, however, that the robot itself changes what space is monitored by the camera due to occlusions, and thus the capability to represent its surroundings.
### _Coverage metrics_
We chose two established metrics for classification tasks to evaluate the efficiency of the established PeRS in a specific region of interest, namely the F-score (\(F_{1}\)) and Cohen's Kappa (\(\kappa\)). We distinguish three classes--positive (occupied space), negative (free space), and unknown (not monitored). The last class appears only in predictions. The F-score is given as:
\[F_{1}=\frac{2\mathrm{TP}}{2\mathrm{TP}+\mathrm{FP}+\mathrm{FN}+\mathrm{UF}+ \mathrm{U}\mathrm{O}}\, \tag{1}\]
where UF and UO represent the not monitored free and occupied space, respectively. TP, FP, and FN are defined in Fig. 4. Cohen's Kappa relates the relative observed agreement \(p_{o}\) to the hypothetical probability of chance agreement \(p_{e}\) via \(\kappa=(p_{o}-p_{e})/(1-p_{e})\); in the multi-class version it can be written as:
\[\kappa=\frac{s(TP+TN)-\sum_{k}p_{k}t_{k}}{s^{2}-\sum_{k}p_{k}t_{k}}\, \tag{2}\]
where \(s\) is the total number of voxels, \(p_{k}\) is the number of voxels where class \(k\) is predicted and \(t_{k}\) is the true number of voxels of class \(k\).
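A minimal sketch of how these two scores can be computed from voxel labels follows; the integer class encoding and the function name are illustrative choices, not part of the published implementation.

```python
import numpy as np

OCC, FREE, UNKNOWN = 0, 1, 2   # assumed voxel labels; UNKNOWN occurs only in predictions

def coverage_scores(pred, truth):
    """F-score (Eq. 1) and multi-class Cohen's kappa (Eq. 2) over a region of interest."""
    tp = np.sum((pred == OCC) & (truth == OCC))
    tn = np.sum((pred == FREE) & (truth == FREE))
    fp = np.sum((pred == OCC) & (truth == FREE))
    fn = np.sum((pred == FREE) & (truth == OCC))
    uo = np.sum((pred == UNKNOWN) & (truth == OCC))    # not monitored, occupied
    uf = np.sum((pred == UNKNOWN) & (truth == FREE))   # not monitored, free
    f1 = 2 * tp / (2 * tp + fp + fn + uf + uo)
    s = pred.size
    chance = sum(np.sum(pred == k) * np.sum(truth == k) for k in (OCC, FREE, UNKNOWN))
    kappa = (s * (tp + tn) - chance) / (s ** 2 - chance)
    return f1, kappa
```

With this encoding, a sensor setup that leaves the whole region unmonitored scores \(F_{1}=0\) and \(\kappa=0\).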
### _OctoMap modeling_
The core notion of our approach is occupancy in the surrounding space. All sensors are represented as providing information about the occupancy of the space they perceive. Specifically, we model the scene and, thereafter, all sensors using OctoMap [29] and octrees.
OctoMap and the occupancy representation also allow us to model the limitations of various approaches. For example, camera occlusions are considered because ray-casting from the simulated sensor is blocked by occupied space. Ray-casting also simulates the limited field of view of the cameras. Additionally, we introduced noise for the range-based sensors (LIDAR, RGB-D camera, and proximity cover) added to the measured distance.
As mentioned earlier, feature-based sensors create a representation of the detected human keypoints. We distinguish three models of the detected human keypoints based on the usual practice in safe HRI. Namely, a bounding box encompassing all the keypoints, the detected keypoints enclosed in spheres or cylindrical connections between the keypoints. These are shown in Fig. 5.
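A compact sketch of the three keypoint-based human models follows. The chain of consecutive keypoints used for the cylinders and the 0.1 m body radius are simplifying assumptions for illustration; a real skeleton would use a predefined edge list.

```python
import numpy as np

def human_occupancy(grid_shape, voxel, keypoints, mode="cylinders", r=0.10):
    """Mark voxels covered by a human model built from 3D keypoints.

    keypoints: (N, 3) array of positions [m]; r: assumed body radius [m];
    mode: 'box' (bounding box), 'spheres' (balls around keypoints) or
    'cylinders' (capsules along a simplified keypoint chain).
    """
    centres = (np.indices(grid_shape).reshape(3, -1).T + 0.5) * voxel
    occ = np.zeros(centres.shape[0], dtype=bool)
    if mode == "box":
        lo, hi = keypoints.min(axis=0) - r, keypoints.max(axis=0) + r
        occ = np.all((centres >= lo) & (centres <= hi), axis=1)
    elif mode == "spheres":
        for p in keypoints:
            occ |= np.linalg.norm(centres - p, axis=1) <= r
    else:  # capsules between consecutive keypoints (simplified chain)
        for a, b in zip(keypoints[:-1], keypoints[1:]):
            ab = b - a
            t = np.clip((centres - a) @ ab / (ab @ ab), 0.0, 1.0)
            occ |= np.linalg.norm(centres - (a + t[:, None] * ab), axis=1) <= r
    return occ.reshape(grid_shape)
```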
Our modeling approach consists of the following steps:
1. Create a model of the scene and determine the region of interest.
2. Model sensors in the scene.
3. Calculate the sensor-generated occupancy.
4. Calculate the metrics. (see Sec. III-D)
In our initial approach, we model a single manipulator placed on a table and a human in various poses next to it. As mentioned earlier, we investigate two regions of interest.
Fig. 4: Example of the coverage metric representation for a monitored volume with two pads. The active pad designates all the volume above it as occupied. Only the voxels occupied by the person are truly occupied (TP, white on red), the voxels not occupied by the person are falsely marked as occupied (FP, red). The volume above an inactive pad is considered empty. The majority of the voxels are truly empty (TN, green), but the human arm voxels are falsely considered as empty space (FN, white on green). Last, there is also space not monitored by the pads containing empty space (dotted) and occupied space (white on dots).
| Sensor | Type | Naive | Volume | Feature |
| --- | --- | --- | --- | --- |
| RGB Camera | Zone | | | x |
| RGB-D Camera | Range | x | | x |
| LIDAR | Range | x | | |
| Pressure pad | Area | | x | |
| Proximity | Range | x | | |

TABLE I: Representations and types of sensors (the last three columns give the representation).
Fig. 5: Ground truth and human models for the used keypoint representations. Height is color-coded in the simulation output.
While these could be determined in a task-dependent fashion, we chose regions that can be defined in a straightforward way for demonstration purposes.
## IV Results
To demonstrate our approach, we present three experiments (parameters in Tab. II) evaluated in a series of HRI scenes (shown in Fig. 1). Although the presented scenes focus on HRI, the problem can be understood more generally, as the robot's workspace can contain unforeseen objects that could harm the robot or be damaged by the robot.
### _Different data interpretation experiment (Exp 1)_
In this experiment, we compare the performance of the RGB-D camera observing the workspace based on different interpretations of the measured data. Those interpretations are: 1) 2D keypoint detection only--if a human is detected, the defined space around the robot is marked as occupied; 2) Raw 3D point cloud; 3) Raw 3D point cloud with added points from the keypoints surroundings (spheres); 4) Raw 3D point cloud with added points from keypoints cylinder connections; 5) Raw 3D point cloud with added points from the keypoints bounding box. The keypoint representations are shown in Fig. 5.
We placed the RGB-D camera in 172 different positions covering all four walls and ceiling (see Fig. 7 for all positions) and rotated the camera to five different orientations in each position, resulting in 860 measurement poses. We computed the \(F_{1}\) and \(\kappa\) scores for each measurement pose for the 'robot' and 'human' spaces and evaluated the distribution of the metrics for each scene to evaluate the performance of each interpretation.
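The evaluation over placements reduces to simple aggregations over a score array. In the sketch below, the random numbers are placeholders standing in for the \(F_{1}\) or \(\kappa\) values computed per pose; the real values come from the simulated scenes.

```python
import numpy as np

rng = np.random.default_rng(0)
# placeholder scores: 172 positions x 5 orientations (860 poses, as in Exp 1)
scores = rng.uniform(0.0, 1.0, size=(172, 5))

distribution = scores.ravel()        # per-interpretation distribution over all poses (Fig. 6)
heatmap = scores.max(axis=1)         # best orientation per position (Fig. 7)
best = int(heatmap.argmax())         # overall best placement
print(np.median(distribution), best, heatmap[best])
```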
As shown in Fig. 6, the trends are very similar for both metrics. The 2D detection is outperformed by others in both metrics and in both spaces. The results of the other interpretations indicate that the enhancement of the point cloud by keypoints improves the performance. The addition of cylinders has the highest median for both metrics and for both spaces. Moreover, for the 'human' space, the addition of cylinders has the highest maxima as well. For the 'robot' space, the addition of bounding boxes has the highest maxima. The addition of spheres has the worst results from the additions, but still outperforms the raw 3D point cloud.
In addition to the interpretation evaluation, we can analyze the RGB-D camera placement for 'robot' space coverage.
| Exp. (Scenes) | Sensor (# of poses) | Parameter | Value |
| --- | --- | --- | --- |
| 1 | RGB(-D) camera (172 pos. × 5 ori.) | FoV (hor × ver) [°] | 87 × 58 |
| | | Res. (hor × ver) [px] | 1280 × 720 |
| | | Range (RGB-D only) [m] | 0.6 – 6 |
| 2 | Pressure pad (24) | Dimensions [m] | 1.0 × 0.75 |
| | Robot proximity cover | Inflation [m] | 0.1 / 0.2 / 0.3 |
| | RGB-D camera (9) | FoV (hor × ver) [°] | 87 × 58 |
| | | Res. (hor × ver) [px] | 1280 × 720 |
| | | Range [m] | 0.3 – 3 |
| 3 (Dynamic) | LIDAR sensor (172) | FoV (hor × ver) [°] | 360 × 45 |
| | | Range [m] | 0.5 – 20 |
| | | Ang. res. (hor / ver) [°] | 0.7 / 0.7 |

TABLE II: Experiment parameters.
Fig. 6: \(F_{1}\) (top two rows) and \(\kappa\) results (bottom two rows) in three static scenes for different interpretations of data from RGB-D camera—2D keypoint detection (Zone); raw 3D point cloud (PC); raw 3D point cloud with added spheres (PC+Sph), Raw 3D point cloud with added cylinders (PC+Cyl); raw 3D point cloud with added bounding box (PC+Box).
Fig. 7: RGB-D camera placement heatmap for ‘robot’ space in Scene 3; \(F_{1}\) (top) and \(\kappa\) (bottom) values.
We created a heatmap, where each camera position is represented by its highest \(F_{1}\) or \(\kappa\) value of the five orientations. Figure 7 shows the heat maps for the point cloud with the added keypoint cylinders in Scene 3. As can be seen, there are many solid placements based on the \(F_{1}\) metric all around the walls. Moreover, it is clearly visible that placements close to the human, whose view is obscured by the operator, have a much lower \(F_{1}\) score than the other placements around them. The \(\kappa\) metric results emphasize the positions only on the right side of the perimeter, with the best positions on the front wall. The simulated point clouds for the RGB-D camera placement with the highest 'robot' space values of \(F_{1}\) and \(\kappa\) for Scene 1 and Scene 2 are shown in Fig. 8. The camera is placed on the right wall (in the same notation as for the heatmap) for the highest value of \(F_{1}\) in Scene 1. In other cases, the camera is placed on the front wall, and the placement for the highest value of \(\kappa\) is the same for both scenes.
### _Multi-sensor coverage experiment (Exp 2)_
The second experiment aims to evaluate the integration of several different sensors together and to determine how the combinations improve the coverage of the space. We integrated a pressing pad, a robot proximity cover, and an RGB-D camera placed on the robot base facing forward. We tested 24 pressing pad positions, three proximity cover ranges (0.1, 0.2, 0.3 m), and 9 different orientations for the RGB-D camera (from -40\({}^{\circ}\) to 40\({}^{\circ}\) around the z-axis with a step of 10\({}^{\circ}\)). We evaluated the 7 possible combinations of the sensors: a triplet of sensors, 3 pairs of sensors, and 3 individual sensors. Unlike in the previous experiment, we integrated the known pose of the robot into the occupied representation to show that both possibilities are available.
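One way to combine the occupancy predictions of several sensors is a conservative voxel-wise fusion. The rule below (occupied wins over free, free wins over unknown) is our assumption for illustration; the paper does not spell out its exact fusion rule.

```python
import numpy as np

OCC, FREE, UNKNOWN = 0, 1, 2   # assumed voxel labels

def fuse(predictions):
    """Voxel-wise fusion of per-sensor occupancy predictions (all arrays share one shape)."""
    stack = np.stack(predictions)
    fused = np.full(stack.shape[1:], UNKNOWN, dtype=np.int8)
    fused[np.any(stack == FREE, axis=0)] = FREE    # at least one sensor observed it free
    fused[np.any(stack == OCC, axis=0)] = OCC      # any occupied report dominates
    return fused
```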
In this scenario, we compare only the maximum \(F_{1}\) and \(\kappa\) as we look for the best placement only for the combination of sensors. Figure 9 shows the maxima of the \(F_{1}\) and \(\kappa\) metrics for the sensor combinations relative to the triplet values.
From the F-score point of view, we can see that the combination of all three sensors does not outperform the others remarkably. Moreover, in the case of the evaluation of the 'human' space, the \(F_{1}\) values are almost the same for all combinations containing the pressing pad. Interestingly, the pair without the pressing pad is the second-best combination for the 'robot' space, followed by the combination of the pressing pad and the robot proximity cover.
Fig. 8: Point clouds for the RGB-D camera poses with the highest metrics value for ‘robot’ space; x-axis coordinate is color-coded for better visibility.
Fig. 10: Occupancy voxels of occupied space (blue) and free space (green) for best variants of sensor combinations for both metrics.
Fig. 9: Maxima of \(F_{1}\) (top) and Kappa (bottom) values in three static scenes and mean over scenes for all possible combinations of the pressing pad (pad), robot proximity cover (prox), and RGB-D camera (cam) sensors.
The \(F_{1}\) results show the difference between individual sensors, e.g., the pressing pad is more suitable for the 'human' space than the two other sensors and even their combination.
For the \(\kappa\) results, the trends are very similar to those of the \(F_{1}\) results. However, the relative differences between combinations are larger, mainly for the 'robot' space. The triplet is followed by the pairs containing the pressing pad and the pressing pad alone. These results suggest that using a pair of sensors instead of all three sensors can have a better price-performance ratio.
Figure 10 shows the occupancy voxels for the triplet, the pressing pad with the robot proximity cover, and the pressing pad with the RGB-D camera combinations. For each combination, the variant with the highest 'human' space values of \(F_{1}\) and \(\kappa\) for Scene 2 is shown.
### _Dynamic scene experiment (Exp 3)_
In the third experiment, we look for the best position of the LIDAR sensor in the dynamic scene represented by 27 snapshots of the scene where the human and the robot were moving. We computed the coverage of the LIDAR sensor in the same 172 positions as those used in Exp 1.
In this experiment, we evaluate only the LIDAR placement for the 'robot' space coverage. Similarly to the first experiment, we created a heatmap, where each sensor position is represented by its highest \(F_{1}\) or \(\kappa\) value. Figure 11 shows the heat maps. Again, we have differences between the best positions based on the \(F_{1}\) and \(\kappa\) values. The \(F_{1}\) results suggest placing the LIDAR in the right half of the ceiling, which is totally different compared to the placement of the RGB-D camera in the first experiment (see Fig. 7). From the ceiling positions, the \(\kappa\) results emphasize only a few places close to the center of the ceiling, and they also propose placements on the right wall. The simulated point clouds for the LIDAR placement with the highest 'robot' space values of \(F_{1}\) and \(\kappa\) are shown in Fig. 12 for two different snapshots. As can be seen, the best position based on the \(F_{1}\) metric is on the ceiling. On the other hand, the position with the highest \(\kappa\) values is on the right wall (in the same notation as for the heatmap).
## V Conclusion
We introduced the definition of the perirobot space (PeRS)--the monitored region of interest for robot interaction where sensor data are represented as occupancy information. This study presented the formalization and evaluation of PeRS. We demonstrated its use in an RGB-D camera placement experiment, a multi-sensor coverage experiment, and a dynamic scene experiment. The occupancy representation allowed us to compare the effectiveness of various sensor setups and we used the well-established metrics of \(F_{1}\) and \(\kappa\) score to evaluate the coverage of regions of interest. Therefore, our approach can serve as a prototyping tool to establish the sensor setup that provides the most efficient coverage with respect to the given metrics and sensor representations. For that reason, we made the implementation publicly available 2.
Footnote 2: [https://github.com/ctu-vras/perirobot-space](https://github.com/ctu-vras/perirobot-space)
## VI Discussion
The central idea of our approach is the simple evaluation of integrated sensors with different properties. We consider this work a first step that could be extended in multiple ways. Our approach can be interpreted in two ways: first, given a sensor setup, what are the proper, i.e., safe and efficient, ways of achieving the task?
Fig. 11: LIDAR sensor placement heatmap for ‘robot’ space in dynamic scene; \(F_{1}\) (top) and \(\kappa\) (bottom) values.
Fig. 12: Point clouds for the LIDAR sensor position with the highest ‘robot’ space values of \(F_{1}\) (sensor on the ceiling) and \(\kappa\) (sensor on the wall) for the dynamic scene; x-axis coordinate is color-coded for better visibility.
Or, given a task, what is the optimal sensor setup? For our presented multi-sensor grid search to determine the optimal sensor setup, we could consider additional criteria (e.g., space limitations or sensor costs) to find the best price-performance ratio.
Our approach can capture the differences between sensors of the same type, as revealed by different suggestions for placing the LIDAR sensor and the RGB-D camera (see Figs. 7, 11). Moreover, the known robot pose can be integrated into the occupied space, as shown in the multi-sensor experiment (see Fig. 10), to disfavor sensor variants observing only the robot and not the space around. We found a difference between the \(F_{1}\) and \(\kappa\) scores for the best placements for the sensors. This is probably caused by the \(F_{1}\) score overlooking the correctly detected free space.
While we presented only a few sensors in this paper, our occupancy-based approach allows the easy addition of new sensors. For a detailed evaluation of a sensor setup, all appropriate representations should be considered.
The presented approach dealt with regions of interest surrounding the robot and the human. However, one could take into account the instantaneous robot speed to determine an appropriate surrounding volume for the robot to be considered as the monitored space. Or, following the peripersonal space approaches, one could 'enact' PeRS by training an artificial neural network on sensor data.
Other possible additions include additional sensors (e.g., moving sensors, sensors on humans), further representations (e.g., swept volumes for the human), taking into account the human attention (e.g., by tracking human gaze and incorporating this attention as a factor), using the occupancy representation of sensors for control (e.g., evasion of the occupied space). Also, we would like to extend the framework with a user interface to allow its easier use and optimize the code for faster computation.
|
2308.00602 | Construction of free quasi-idempotent differential Rota-Baxter algebras
by Gröbner-Shirshov bases | Differential operators and integral operators are linked together by the
first fundamental theorem of calculus. Based on this principle, the notion of a
differential Rota-Baxter algebra was proposed by Guo and Keigher from an
algebraic abstraction point of view. Recently, the subject has attracted more
attention since it is associated with many areas in mathematics, such as
integro-differential algebras. This paper considers differential algebras,
Rota-Baxter algebras and differential Rota-Baxter algebras in the
quasi-idempotent operator context. We establish a Gr\"obner-Shirshov basis for
free commutative quasi-idempotent differential algebras (resp. Rota-Baxter
algebras, resp. differential Rota-Baxter algebras). This provides a linear
basis of free object in each of the three corresponding categories by
Composition-Diamond lemma. | Huizhen Qiu, Shanghua Zheng, Yangfan Dan | 2023-07-08T09:40:22Z | http://arxiv.org/abs/2308.00602v1 | # Construction of free quasi-idempotent differential
Rota-Baxter algebras by Gröbner-Shirshov bases

###### Abstract.
Differential operators and integral operators are linked together by the first fundamental theorem of calculus. Based on this principle, the notion of a differential Rota-Baxter algebra was proposed by Guo and Keigher from an algebraic abstraction point of view. Recently, the subject has attracted more attention since it is associated with many areas in mathematics, such as integro-differential algebras. This paper considers differential algebras, Rota-Baxter algebras and differential Rota-Baxter algebras in the quasi-idempotent operator context. We establish a Grobner-Shirshov basis for free commutative quasi-idempotent differential algebras (resp. Rota-Baxter algebras, resp. differential Rota-Baxter algebras). This provides a linear basis of the free object in each of the three corresponding categories by the Composition-Diamond lemma.
Key words and phrases: Rota-Baxter algebra; Differential algebra; Differential Rota-Baxter algebra; Grobner-Shirshov basis; Quasi-idempotent operator 2010 Mathematics Subject Classification: 13A99, 16W99, 13P10, 08B20, 12H05
###### Contents
* 1 Introduction
* 1.1 Differential Rota-Baxter algebras
* 1.2 Grobner-Shirshov bases for free \(\Omega\)-operated algebras
* 1.3 Quasi-idempotent operators and outline of the paper
* 2 Basic properties and examples
* 3 Grobner-Shirshov bases for free commutative \(\Omega\)-operated algebras
* 3.1 Free commutative \(\Omega\)-operated algebras
* 3.2 Composition-Diamond lemma
* 3.3 A monomial order on \(\mathbb{C}(Y)\)
* 4 Free commutative quasi-idempotent differential Rota-Baxter algebras
* 4.1 Linear bases of free commutative quasi-idempotent differential algebras
* 4.2 Linear bases of free commutative quasi-idempotent Rota-Baxter algebras
* 4.3 Linear bases of free commutative quasi-idempotent differential Rota-Baxter algebras
## 1. Introduction
The aim of this paper is to give linear bases of free commutative quasi-idempotent differential algebras (resp. Rota-Baxter algebras, resp. differential Rota-Baxter algebras) by means of Grobner-Shirshov bases.
In particular, the GS bases theory for free commutative and noncommutative \(\Omega\)-operated algebras has been successfully established in [10, 35]. Later, this theory was used to construct the free commutative and noncommutative Morita Rota-Baxter algebras of weight \(\lambda\). Furthermore, GS bases were applied to the explicit construction of free objects in the categories of differential type algebras, Rota-Baxter type algebras, integro-differential algebras and Lie differential Rota-Baxter algebras [9, 17, 18, 29, 33, 34, 32].
This paper is devoted to refine some aspects of the above works in the quasi-idempotent operators context by employing the general method of GS bases.
### Quasi-idempotent operators and outline of the paper
There are intimate relations among quasi-idempotent operators, Rota-Baxter algebras and Hopf algebras. Jian [26] constructed a Rota-Baxter operator by using quasi-idempotent elements. Based on this result, it follows that each finite dimensional Hopf algebra possesses a Rota-Baxter algebra structure. Dually, Ma, Li and Yang [30] proved that every finite dimensional Hopf algebra contains a Rota-Baxter coalgebra (bialgebra) in the same way. Recently, Zheng, Guo and Zhang [45] characterized quasi-idempotent operators by generic Rota-Baxter paired modules of weight \(\lambda\).
On the other hand, Aguiar and Moreira [1] gave the explicit construction of free noncommutative quasi-idempotent Rota-Baxter algebras on one generator by using angularly decorated rooted trees. More importantly, the free tridendriform (resp. dendriform) algebra on one generator can be embedded into the free quasi-idempotent Rota-Baxter algebras (resp. with an idempotent generator).
Motivated by a differential operator being a left inverse of a Rota-Baxter operator, it is desirable to introduce a notion of a quasi-idempotent differential algebra and consider the free object in the corresponding category. Consequently differential operators, Rota-Baxter operators and quasi-idempotent operators combine together to provide a quasi-idempotent differential Rota-Baxter algebra. We shall give an answer to Shirshov's question for free quasi-idempotent differential Rota-Baxter algebras by GS bases, with emphasis on the commutative case.

This is the main motivation of this paper.
The outline of this paper is as follows. Section 2 gives some properties and examples of quasi-idempotent differential Rota-Baxter algebras. Section 3 recalls the explicit construction of free commutative \(\Omega\)-operated algebras on a set, and establishes the Composition-Diamond lemma for free commutative \(\Omega\)-operated algebras. A monomial order on free commutative \(\Omega\)-operated monoids is given for GS bases for free commutative quasi-idempotent differential Rota-Baxter algebras. With these results in hand, Section 4 first obtains a GS basis for free commutative quasi-idempotent differential (resp. Rota-Baxter) algebras of weight \(\lambda\). Based on this, we supply a GS basis for free commutative quasi-idempotent differential Rota-Baxter algebras of weight \(\lambda\). As a consequence, a linear basis of the free commutative quasi-idempotent differential algebras (resp. Rota-Baxter algebras, resp. differential Rota-Baxter algebras) is obtained.
**Convention.** Throughout this paper, \(\mathbf{k}\) is taken to be a field of characteristic 0. All algebras and linear maps are taken over \(\mathbf{k}\), unless the contrary is specified. By an algebra we mean an associative unitary \(\mathbf{k}\)-algebra.
## 2. Basic properties and examples
We first introduce notions of a quasi-idempotent differential algebra, a quasi-idempotent Rota-Baxter algebra and a quasi-idempotent differential Rota-Baxter algebra. Then some examples of each of these three algebras are given, respectively.
**Definition 2.1**.: Let \(R\) be an algebra. Let \(0\neq\lambda\in\mathbf{k}\) be fixed.
1. A linear operator \(P:R\to R\) is called a **quasi-idempotent operator of weight \(\lambda\)** on \(R\) if \(P^{2}=-\lambda P\).
2. A nonzero element \(\xi\in R\) is called a **quasi-idempotent element of weight \(\lambda\)** if \(\xi^{2}=-\lambda\xi\).
3. A **quasi-idempotent differential algebra of weight \(\lambda\)** is an algebra \(R\) together with a quasi-idempotent operator \(d\) of weight \(\lambda^{-1}\) (that is, \(d^{2}=-\lambda^{-1}d\)) on \(R\) satisfying Eqs. (1) and (2). Then \(d\) is called a **quasi-idempotent differential operator of weight \(\lambda\)**.
4. A **quasi-idempotent Rota-Baxter algebra of weight \(\lambda\)** is an algebra \(R\) together with a quasi-idempotent operator \(P\) of weight \(\lambda\) on \(R\) satisfying Eq. (3). Then \(P\) is called a **quasi-idempotent Rota-Baxter operator of weight \(\lambda\)**.
5. A **quasi-idempotent differential Rota-Baxter algebra of weight \(\lambda\)** is an algebra \(R\) together with a quasi-idempotent differential operator \(d\) of weight \(\lambda\) and a quasi-idempotent Rota-Baxter operator \(P\) of weight \(\lambda\) such that (4) \[d\circ P=\mathrm{id}_{R}.\]
6. If a quasi-idempotent operator \(d\) of weight \(\lambda^{-1}\) satisfies only Eq. (1), then \(d\) is called a **weak quasi-idempotent differential operator of weight \(\lambda\)**. Particularly, a weak quasi-idempotent differential operator of weight \(\lambda\) with \(d(1)\neq 0\) is called a **degenerate quasi-idempotent differential operator of weight \(\lambda\)**.
For simplicity, we will also use the notion of a quasi-idempotent operator (resp. element, resp. algebra) interchangeably with the notion of a quasi-idempotent operator (resp. element, resp. algebra) of weight \(\lambda\). We next give some properties of quasi-idempotent Rota-Baxter algebras.
**Proposition 2.2**.: _Let \(P\) be a quasi-idempotent Rota-Baxter operator of weight \(\lambda\). Then \(\bar{P}:=-\lambda\mathrm{id}-P\) is also a quasi-idempotent Rota-Baxter operator of weight \(\lambda\)._
Proof.: By [20, Proposition 1.1.12], \(\bar{P}\) is a Rota-Baxter operator of weight \(\lambda\). By \(P^{2}(x)=-\lambda P(x)\),
\[\bar{P}^{2}(x)=\lambda^{2}x+2\lambda P(x)+P^{2}(x)=(-\lambda)(-\lambda \mathrm{id}-P)(x)=-\lambda\bar{P}(x).\]
Thus \(\bar{P}\) is a quasi-idempotent operator of weight \(\lambda\).
**Proposition 2.3**.: _Let \(P\) be a linear operator on an algebra \(R\). Then \(P\) is a quasi-idempotent Rota-Baxter operator of nonzero weight \(\lambda\) if and only if there exists a vector space direct sum decomposition_
\[R=R_{1}\oplus R_{2}\]
_of \(R\) into \(\mathbf{k}\)-subalgebras \(R_{1}\) and \(R_{2}\) of \(R\) such that for all \(a:=a_{1}+a_{2}\) with \(a_{1}\in R_{1},a_{2}\in R_{2}\),_
\[P:R\to R_{1},\;a\mapsto-\lambda a_{1}.\]
Proof.: Suppose that \(P\) is a quasi-idempotent Rota-Baxter operator of nonzero weight \(\lambda\). Then by Proposition 2.2, \(\bar{P}=-\lambda\mathrm{id}-P\) is also a quasi-idempotent Rota-Baxter operator of weight \(\lambda\). Let \(R_{1}:=P(R)\) and let \(R_{2}=\bar{P}(R)\). Then by [20, Proposition 1.1.12], \(R_{1}\) and \(R_{2}\) are subalgebras of \(R\). We next verify that \(R=R_{1}\oplus R_{2}\). Firstly, we have
\[a=-\lambda^{-1}P(a)+(-\lambda^{-1})(-\lambda a-P(a)).\]
Thus \(R=R_{1}+R_{2}\). Furthermore, we let \(x\in R_{1}\cap R_{2}\). Then for some \(r_{1},r_{2}\in R\),
\[x=P(r_{1})=-\lambda r_{2}-P(r_{2}).\]
Thus
\[x=-\lambda^{-1}P^{2}(r_{1})=(-\lambda^{-1})P(-\lambda r_{2}-P(r_{2}))=P(r_{2}) -P(r_{2})=0.\]
Thus \(R=R_{1}\oplus R_{2}\). Assume that \(a=a_{1}+a_{2}\) for some \(a_{1}\in R_{1}\) and \(a_{2}\in R_{2}\). Since \(a=-\lambda^{-1}P(a)+(-\lambda^{-1})(-\lambda a-P(a))\), we know that \(P(a)=-\lambda a_{1}\).
Conversely, by direct computation, we get
\[P^{2}(a)=P(-\lambda a_{1})=\lambda^{2}a_{1}=-\lambda P(a).\]
From the proof of [20, Theorem 1.1.14], we obtain that \(P\) is a Rota-Baxter operator of weight \(\lambda\). Thus \(P\) is a quasi-idempotent Rota-Baxter operator of nonzero weight \(\lambda\).
The intimate relationship between quasi-idempotent Rota-Baxter algebras and Nijenhuis algebras is presented as below.
**Definition 2.4**.: A **Nijenhuis algebra** is a pair \((R,P)\) consisting of an algebra \(R\) and a linear operator \(P\) on \(R\) satisfying the **Nijenhuis identity**
\[P(x)P(y)=P(xP(y))+P(P(x)y)-P^{2}(xy),\quad\forall x,y\in R. \tag{5}\]
**Proposition 2.5**.: _Let \((R,P)\) be a quasi-idempotent Rota-Baxter algebra of weight \(\lambda\). Then \((R,P)\) is a Nijenhuis algebra._
Proof.: Since \(P\) is a Rota-Baxter operator of weight \(\lambda\), we have \(P(x)P(y)=P(xP(y))+P(P(x)y)+\lambda P(xy)\). Then by \(P^{2}(xy)=-\lambda P(xy)\), we get \(P(x)P(y)=P(xP(y))+P(P(x)y)-P^{2}(xy)\), proving the statement.
But the converse of the above proposition is false. For example, by [13, Example 2.3], the left multiplication operator \(L_{a}:R\to R,x\mapsto ax\) is a Nijenhuis operator. If \(a^{2}\neq-\lambda a\), then \(L_{a}^{2}(x)=a^{2}x\neq-\lambda ax=-\lambda L_{a}(x)\) for some \(x\in R\). Thus \(L_{a}\) is not quasi-idempotent, and so is not a quasi-idempotent Rota-Baxter operator of weight \(\lambda\).
Next we consider some examples of quasi-idempotent differential Rota-Baxter algebras. By [22, Proposition 2.10], there is a classical differential Rota-Baxter algebra structure on \(\lambda\)-Hurwitz series. It is desirable to consider the quasi-idempotent case. For this, we recall the notion of \(\lambda\)-Hurwitz series. Let \(A\) be a \(\mathbf{k}\)-algebra. Denote
\[A^{\mathbb{N}}:=\{f:\mathbb{N}\to A\}:=\{(f(n)):=(f(0),f(1),f(2),\cdots,f(n), \cdots)\,|\,f(n)\in A,\;n\geq 0\}.\]
Addition and scalar multiplication on \(A^{\mathbb{N}}\) are defined by: for all \(f,g\in A^{\mathbb{N}}\),
\[f+g=((f+g)(n)),\quad\text{where}\quad(f+g)(n)=f(n)+g(n)\] \[kf=((kf)(n)),\quad\text{ where}\quad(kf)(n)=kf(n).\]
The product on \(A^{\mathbb{N}}\) is defined by:
\[fg=((fg)(n)),\quad\text{where}\quad(fg)(n)=\sum_{k=0}^{n}\sum_{j=0}^{n-k} \binom{n}{k}\binom{n-k}{j}\lambda^{k}f(n-j)g(k+j).\]
Then \(A^{\mathbb{N}}\) becomes a unitary commutative \(\mathbf{k}\)-algebra, called the \(\lambda\)**-Hurwitz series algebra over \(A\)** or simply the \(\lambda\)**-Hurwitz series ring over \(A\)**, also denoted by \(A^{\mathbb{N}}\). The identity \(1\) of \(A^{\mathbb{N}}\) is given by \(1(0)=1_{A}\) and \(1(n)=0\) if \(n>0\). Define a linear map
\[\partial_{A}:A^{\mathbb{N}}\to A^{\mathbb{N}},\quad(\partial_{A}(f))(n)=f(n+1 ),\quad n\in\mathbb{N},\;f\in A^{\mathbb{N}}.\]
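For instance, unwinding the product formula at \(n=1\) gives
\[(fg)(1)=f(1)g(0)+f(0)g(1)+\lambda f(1)g(1),\]
so that
\[(\partial_{A}(fg))(0)=(\partial_{A}(f))(0)\,g(0)+f(0)\,(\partial_{A}(g))(0)+\lambda\,(\partial_{A}(f))(0)\,(\partial_{A}(g))(0),\]
which is the weight-\(\lambda\) Leibniz rule in degree \(0\).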
Then by [22, Proposition 2.7], \((A^{\mathbb{N}},\partial_{A})\) is a differential algebra of weight \(\lambda\). If \(0\neq\lambda\in\mathbf{k}\), we set
\[\overline{A^{\mathbb{N}}}=\{f:\mathbb{N}\to A\;|\;f(n)=-\lambda^{-1}f(n-1),n \geq 1\}\subseteq A^{\mathbb{N}}.\]
Thus we can write \(f\in\overline{A^{\mathbb{N}}}\) as
\[f=(f(0),-\lambda^{-1}f(0),(-\lambda^{-1})^{2}f(0),\cdots(-\lambda^{-1})^{n}f(0), \cdots).\]
Denote by \(\bar{\partial}_{A}\) the restriction of \(\partial_{A}\) to \(\overline{A^{\mathbb{N}}}\). A direct computation shows that \(\overline{A^{\mathbb{N}}}\) together with \(\bar{\partial}_{A}\) forms a quasi-idempotent differential subalgebra of \(A^{\mathbb{N}}\). Furthermore, we define a linear map
\[\bar{\pi}_{A}:\overline{A^{\mathbb{N}}}\to\overline{A^{\mathbb{N}}},\quad(\bar {\pi}_{A}(f))(0)=-\lambda f(0),\quad(\bar{\pi}_{A}(f))(n)=f(n-1),\,n\geq 1,\quad f \in\overline{A^{\mathbb{N}}}.\]
**Proposition 2.6**.: _The triple \((\overline{A^{\mathbb{N}}},\bar{\partial}_{A},\bar{\pi}_{A})\) is a quasi-idempotent differential Rota-Baxter algebra of weight \(\lambda\)._
Proof.: For all \(f\in\overline{A^{\mathbb{N}}}\), we have
\[\bar{\pi}_{A}(f)=(-\lambda f(0),f(0),-\lambda^{-1}f(0),\cdots,(-\lambda^{-1}) ^{n-1}f(0),\cdots)=-\lambda f, \tag{6}\]
proving \(\bar{\pi}_{A}(f)\) is in \(\overline{A^{\mathbb{N}}}\). Then
\[\bar{\pi}_{A}^{2}(f)=(\lambda^{2}f(0),-\lambda f(0),f(0),-\lambda^{-1}f(0), \cdots,(-\lambda^{-1})^{n-2}f(0),\cdots)=\lambda^{2}f,\]
Thus \(\bar{\pi}_{A}^{2}(f)=-\lambda\bar{\pi}_{A}(f)\), proving that \(\bar{\pi}_{A}\) is a quasi-idempotent operator of weight \(\lambda\). In addition, we have
\[((\bar{\partial}_{A}\circ\bar{\pi}_{A})(f))(0)=(\bar{\partial}_{A}(-\lambda f ))(0)=(-\lambda f)(1)=f(0),\]
and
\[((\bar{\partial}_{A}\circ\bar{\pi}_{A})(f))(n)=(\bar{\partial}_{A}(\bar{\pi}_ {A}(f)))(n)=(\bar{\partial}_{A}(f))(n-1)=f(n),\,n\geq 1.\]
This gives \(\bar{\partial}_{A}\circ\bar{\pi}_{A}=\mathrm{id}_{\overline{A^{\mathbb{N}}}}\). Then it remains to verify that \(\bar{\pi}_{A}\) is a Rota-Baxter operator of weight \(\lambda\) on \(\overline{A^{\mathbb{N}}}\). Define
\[h_{0}:=\bar{\pi}_{A}(f)\bar{\pi}_{A}(g)-\bar{\pi}_{A}(\bar{\pi}_{A}(f)g)-\bar {\pi}_{A}(f\bar{\pi}_{A}(g))-\lambda\bar{\pi}_{A}(fg).\]
Then
\[\bar{\partial}_{A}(h_{0}) = \bar{\partial}_{A}\Big{(}\bar{\pi}_{A}(f)\bar{\pi}_{A}(g)-\bar{ \pi}_{A}(\bar{\pi}_{A}(f)g)-\bar{\pi}_{A}(f\bar{\pi}_{A}(g))-\lambda\bar{\pi}_ {A}(fg)\Big{)}\] \[= \bar{\partial}_{A}\Big{(}\bar{\pi}_{A}(f)\bar{\pi}_{A}(g)\Big{)} -\bar{\partial}_{A}\Big{(}\bar{\pi}_{A}(\bar{\pi}_{A}(f)g)\Big{)}-\bar{ \partial}_{A}\Big{(}\bar{\pi}_{A}(f\bar{\pi}_{A}(g))\Big{)}-\bar{\partial}_{A} \Big{(}\lambda\bar{\pi}_{A}(fg)\Big{)}\] \[= \bar{\partial}_{A}(\bar{\pi}_{A}(f))\bar{\pi}_{A}(g)+\bar{\pi}_{ A}(f)\bar{\partial}_{A}(\bar{\pi}_{A}(g))+\lambda\bar{\partial}_{A}(\bar{\pi}_{A}(f)) \bar{\partial}_{A}(\bar{\pi}_{A}(g))-\bar{\partial}_{A}\Big{(}\bar{\pi}_{A}( \bar{\pi}_{A}(f)g)\Big{)}\] \[-\bar{\partial}_{A}\Big{(}\bar{\pi}_{A}(f\bar{\pi}_{A}(g))\Big{)} -\bar{\partial}_{A}\Big{(}\lambda\bar{\pi}_{A}(fg)\Big{)}\quad(\text{by }\bar{ \partial}_{A}\text{ being a differential operator of weight }\lambda)\] \[= f\bar{\pi}_{A}(g)+\bar{\pi}_{A}(f)g+\lambda fg-\bar{\pi}_{A}(f)g -f\bar{\pi}_{A}(g)-\lambda fg\quad(\text{by }\bar{\partial}_{A}\circ\bar{\pi}_{A}=\mathrm{id}_{\overline{A^{ \mathbb{N}}}})\] \[= 0.\]
Thus \(h_{0}=(h_{0}(0),0,\cdots 0,\cdots)\). By Eq. (6), we get
\[h_{0}(0) = \bar{\pi}_{A}(f)\bar{\pi}_{A}(g)(0)-\bar{\pi}_{A}(\bar{\pi}_{A}(f) g)(0)-\bar{\pi}_{A}(f\bar{\pi}_{A}(g))(0)-\lambda\bar{\pi}_{A}(fg)(0)\] \[= (-\lambda)f(0)(-\lambda)g(0)-\Big{(}-\lambda\Big{(}-\lambda f(0)g (0)\Big{)}\Big{)}-\Big{(}-\lambda\Big{(}-\lambda f(0)g(0)\Big{)}\Big{)}- \lambda\Big{(}-\lambda\Big{(}f(0)g(0)\Big{)}\Big{)}\] \[= \lambda^{2}f(0)g(0)-\lambda^{2}f(0)g(0)-\lambda^{2}f(0)g(0)+ \lambda^{2}f(0)g(0)\] \[= 0.\]
Thus \(h_{0}=0\), and so \(\bar{\pi}_{A}\) is a Rota-Baxter operator of weight \(\lambda\) on \(\overline{A^{\mathbb{N}}}\).
**Example 2.7**.: Let \(R\) be an algebra and let \(\lambda\neq 0\). Define a linear operator
\[d_{\lambda}:R\to R,\quad x\mapsto-\lambda^{-1}x,\quad\forall x\in R.\]
Firstly, \(d_{\lambda}(1)=-\lambda^{-1}\neq 0\). Then for all \(x,y\in R\),
\[d_{\lambda}^{2}(x)=(-\lambda^{-1})^{2}x=-\lambda^{-1}d_{\lambda}(x)\]
and
\[d_{\lambda}(x)y+xd_{\lambda}(y)+\lambda d_{\lambda}(x)d_{\lambda}(y)=(- \lambda^{-1}x)y+x(-\lambda^{-1}y)+\lambda(-\lambda^{-1}x)(-\lambda^{-1}y)=d_{ \lambda}(xy).\]
Then \(d_{\lambda}\) is a degenerate quasi-idempotent differential operator of weight \(\lambda\).
**Example 2.8**.: Let \(R\) be an algebra and let \(\lambda\neq 0\). Define a linear operator
\[P_{\lambda}:R\to R,\quad x\mapsto-\lambda x,\quad\forall x\in R.\]
Let \(x,y\in R\). Then
\[P_{\lambda}(P_{\lambda}(x)y)+P_{\lambda}(xP_{\lambda}(y))+\lambda P_{\lambda} (xy)=\lambda^{2}xy+\lambda^{2}xy-\lambda^{2}xy=P_{\lambda}(x)P_{\lambda}(y),\]
and
\[P_{\lambda}^{2}(x)=P_{\lambda}(-\lambda x)=-\lambda P_{\lambda}(x).\]
Then \(P_{\lambda}\) is a quasi-idempotent Rota-Baxter operator of weight \(\lambda\). Furthermore, \((d_{\lambda}\circ P_{\lambda})(x)=x\), where \(d_{\lambda}\) is given in Example 2.7. Thus, \((R,d_{\lambda},P_{\lambda})\) is a degenerate quasi-idempotent differential Rota-Baxter algebra of weight \(\lambda\).
At the end of this section, we will see that every quasi-idempotent element of \(R\) can induce a degenerate quasi-idempotent differential operator, a quasi-idempotent Rota-Baxter operator and further a quasi-idempotent differential Rota-Baxter algebra structure.
**Proposition 2.9**.: _[_26_, Proposition 2.2]_ _Let \(R\) be an algebra. Let \(\xi\in R\) be a quasi-idempotent element of weight \(\lambda\). Then the linear operator \(P_{\xi}:R\to R\) given by \(x\mapsto\xi x\), is a quasi-idempotent Rota-Baxter operator of weight \(\lambda\) on \(R\)._
Similarly, we have
**Proposition 2.10**.: _Let \(R\) be a commutative \(\mathbf{k}\)-algebra. If \(\xi\in R\) is an invertible quasi-idempotent element of weight \(\lambda\), then the linear operator \(d_{\xi}:R\to R\) given by \(x\mapsto\xi^{-1}x\) is a degenerate quasi-idempotent differential operator of weight \(\lambda\)._
Proof.: By \(\xi^{2}=-\lambda\xi\), we get \(\lambda(\xi^{-1})^{2}=-\xi^{-1}\) and \(\lambda\neq 0\). Then for all \(x,y\in R\), we have
\[d_{\xi}^{2}(x)=(\xi^{-1})^{2}x=-\lambda^{-1}d_{\xi}(x),\quad d_{\xi}(1)=\xi^{ -1}\neq 0,\]
and
\[d_{\xi}(x)y+xd_{\xi}(y)+\lambda d_{\xi}(x)d_{\xi}(y)=\xi^{-1}xy+\xi^{-1}xy+ \lambda(\xi^{-1})^{2}xy=d_{\xi}(xy).\]
Thus \(d_{\xi}\) is a degenerate quasi-idempotent differential operator of weight \(\lambda\).
**Proposition 2.11**.: _Let \(R\) be a commutative \(\mathbf{k}\)-algebra. Let \(\xi\in R\) be an invertible quasi-idempotent element of weight \(\lambda\). Let \(d_{\xi}\) and \(P_{\xi}\) be as above. Then_
1. _The pair_ \((R,d_{\xi})\) _is a commutative degenerate quasi-idempotent differential_ \(\mathbf{k}\)_-algebra of weight_ \(\lambda\)_._
2. _The pair_ \((R,P_{\xi})\) _is a commutative quasi-idempotent Rota-Baxter_ \(\mathbf{k}\)_-algebra of weight_ \(\lambda\)_._
3. _The triple_ \((R,d_{\xi},P_{\xi})\) _is a commutative degenerate quasi-idempotent differential Rota-Baxter_ \(\mathbf{k}\)_-algebra of weight_ \(\lambda\)
Proof.: \((a)\) follows from Proposition 2.10.
\((b)\) follows from Proposition 2.9.
\((c)\) For all \(x\in R\), we have
\[(d_{\xi}\circ P_{\xi})(x)=d_{\xi}(\xi x)=\xi^{-1}(\xi x)=x,\]
proving \(d_{\xi}\circ P_{\xi}=\operatorname{id}_{R}\). Then by Items \((a)\) and \((b)\), \((R,d_{\xi},P_{\xi})\) is a degenerate quasi-idempotent differential Rota-Baxter algebra of weight \(\lambda\).
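For instance, take \(R=\mathbf{k}\) and \(\xi:=-\lambda\). Then \(\xi\) is invertible with \(\xi^{-1}=-\lambda^{-1}\), and \(\xi^{2}=\lambda^{2}=-\lambda\xi\), so \(\xi\) is a quasi-idempotent element of weight \(\lambda\). The induced operators are
\[d_{\xi}(x)=-\lambda^{-1}x\quad\text{and}\quad P_{\xi}(x)=-\lambda x,\]
recovering the operators \(d_{\lambda}\) and \(P_{\lambda}\) of Examples 2.7 and 2.8.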
## 3. Grobner-Shirshov bases for free commutative \(\Omega\)-operated algebras
In this section, we first recall from [19, 35] the construction of free commutative \(\Omega\)-operated unitary algebras on a set \(Y\), and the Composition-Diamond lemma for free commutative \(\Omega\)-operated algebras.
### Free commutative \(\Omega\)-operated algebras
An \(\Omega\)**-operated algebra** (also called an **algebra with multiple operators**) is defined to be an algebra \(R\) equipped with a set \(\Omega\) of multiple linear operators.
Let \(Y\) be a set. Denote by \(C(Y)\) the free commutative monoid on \(Y\) with the identity \(1\). Let
\[\Omega=\bigcup_{n=1}^{\infty}\Omega_{n},\]
where \(\Omega_{n}\) is the set of \(n\)-ary operators. We first construct the free commutative \(\Omega\)-operated monoid on \(Y\) by a direct system
\[\{\,\iota_{k}:\mathfrak{C}_{k}\to\mathfrak{C}_{k+1}\,\}_{k=0}^{\infty}\]
of free commutative monoids \(\mathfrak{C}_{n}\), where \(\iota_{n}\) is the natural embedding.
Let
\[Y_{0}:=Y\quad\text{and}\quad\mathfrak{C}_{0}:=C(Y_{0}).\]
Then we define
\[Y_{1}:=Y\sqcup\Omega(\mathfrak{C}_{0})\quad\text{and}\quad\mathfrak{C}_{1}:=C( Y_{1}),\]
where
\[\Omega(\mathfrak{C}_{0})=\bigcup_{n=1}^{\infty}\{\,\omega_{n}(u_{1},u_{2}, \cdots,u_{n})\,|\,\omega_{n}\in\Omega_{n},u_{i}\in\mathfrak{C}_{0},i=1,2, \cdots,n\}.\]
The injection \(\iota_{0}:Y_{0}\to Y_{1}\) induces an embedding from \(\mathfrak{C}_{0}\) to \(\mathfrak{C}_{1}\), still denoted by \(\iota_{0}\). For a given \(k\geq 0\), assume by induction that we have defined the commutative \(\Omega\)-operated monoid \(\mathfrak{C}_{i}\) with the properties that \(\mathfrak{C}_{i}=C(Y\sqcup\Omega(\mathfrak{C}_{i-1}))\) and the natural embedding \(\iota_{i-1}:\mathfrak{C}_{i-1}\to\mathfrak{C}_{i}\) for \(0\leq i\leq k\). Set
\[Y_{k+1}:=Y\sqcup\Omega(\mathfrak{C}_{k})\quad\text{and}\quad\mathfrak{C}_{k+1 }:=C(Y_{k+1}),\]
where
\[\Omega(\mathfrak{C}_{k})=\bigcup_{n=1}^{\infty}\{\,\omega_{n}(u_{1},u_{2}, \cdots,u_{n})\,|\,\omega_{n}\in\Omega_{n},u_{i}\in\mathfrak{C}_{k},i=1,2, \cdots,n\}.\]
Then the identity map on \(Y\) and \(\iota_{k-1}\) together induce an injection
\[\iota_{k}:Y\sqcup\Omega(\mathfrak{C}_{k-1})\to Y\sqcup\Omega(\mathfrak{C}_{k}).\]
Then by the functoriality of \(C\), we get an embedding, also denoted by \(\iota_{k}\),
\[\iota_{k}:\mathfrak{C}_{k}\to\mathfrak{C}_{k+1}.\]
This completes the desired direct system. Then by taking the direct limit, we obtain a commutative monoid
\[\mathfrak{C}(Y):=\lim_{\longrightarrow}\mathfrak{C}_{k}=\bigcup_{k\geq 0} \mathfrak{C}_{k}.\]
Applying the direct limit on both sides of the equation \(\mathfrak{C}_{k}=C(Y\sqcup\Omega(\mathfrak{C}_{k-1}))\), we have
\[\mathfrak{C}(Y)=C(Y\sqcup\Omega(\mathfrak{C}(Y))).\]
It follows that \(\Omega(\mathfrak{C}(Y))\subseteq\mathfrak{C}(Y)\). Then every nonunit element \(u\in\mathfrak{C}(Y)\) has a unique expression of the form
\[u=u_{1}\cdots u_{r}, \tag{7}\]
where \(r\geq 1\) and \(u_{i}\in Y\sqcup\Omega(\mathfrak{C}(Y)),i=1,\cdots,r\); each such \(u_{i}\) is called a **prime \(\Omega\)-word**. We call \(r\) the **breadth** of \(u\), denoted by \(|u|\), with the convention that \(|u|=0\) if \(u=1\). The **depth** \(\mathrm{dep}(u)\) of \(u\) is defined to be the least \(n\geq 0\) such that \(u\) is contained in \(\mathfrak{C}_{n}\). Thus \(\mathrm{dep}(u)=0\) if and only if \(u\in\mathfrak{C}_{0}=C(Y)\).
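For instance, if \(\omega\in\Omega_{1}\) is a unary operator and \(x,y\in Y\), then the commutative \(\Omega\)-word \(u=x\,\omega(y\,\omega(x))\) has the prime factors \(x\) and \(\omega(y\,\omega(x))\), so \(|u|=2\), while \(\omega(x)\in\mathfrak{C}_{1}\) and \(\omega(y\,\omega(x))\in\mathfrak{C}_{2}\setminus\mathfrak{C}_{1}\), whence \(\mathrm{dep}(u)=2\).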
Define a set map
\[\bigsqcup_{\omega_{n}}:\mathfrak{C}(Y)^{n}\to\mathfrak{C}(Y),\quad(u_{1},u_{ 2},\cdots,u_{n})\mapsto\omega_{n}(u_{1},u_{2},\cdots,u_{n}),\quad\forall \omega_{n}\in\Omega_{n}.\]
Let \(\bigsqcup_{\Omega_{n}}:=\{\bigsqcup_{\omega_{n}}\,|\,\omega_{n}\in\Omega_{n}\}\) and let \(\bigsqcup_{\Omega}:=\bigcup_{n=1}^{\infty}\bigsqcup_{\Omega_{n}}\). Then \(\mathfrak{C}(Y)\) together with \(\bigsqcup_{\Omega}\) forms a commutative \(\Omega\)-operated monoid. Denote by \(\mathbf{k}\mathfrak{C}(Y)\) the \(\mathbf{k}\)-space with basis \(\mathfrak{C}(Y)\). Extending the multiplication on \(\mathfrak{C}(Y)\) by bilinearity and the set maps \(\bigsqcup_{\Omega}\) by multilinearity, also denoted by \(\bigsqcup_{\Omega}\), we obtain a commutative \(\Omega\)-operated algebra \(\mathbf{k}\mathfrak{C}(Y)\). An element of \(\mathfrak{C}(Y)\) (resp. \(\mathbf{k}\mathfrak{C}(Y)\)) is called a **commutative \(\Omega\)-word** (resp. **commutative \(\Omega\)-polynomial**). If \(Y\) is a finite set, we can also just list its elements, as in \(\mathbf{k}\mathfrak{C}(x,y)\) when \(Y=\{x,y\}\). Let \(j_{Y}:Y\to\mathbf{k}\mathfrak{C}(Y)\) be the natural embedding.
**Proposition 3.1**.: _[_17, 19, 35_]_ _Let \(j_{Y}\), \(\bigsqcup_{\Omega}\) and \(\mathbf{k}\mathfrak{C}(Y)\) be as above. Then \(\mathbf{k}\mathfrak{C}(Y):=(\mathbf{k}\mathfrak{C}(Y),\bigsqcup_{\Omega},j_{Y})\) is the free commutative \(\Omega\)-operated unitary algebra on set \(Y\)._
Now we set \(X=\{x_{1},\cdots,x_{n}\}\). Let \(\phi(x_{1},x_{2},\cdots,x_{n})\) be a commutative \(\Omega\)-polynomial in \(\mathbf{k}\mathfrak{C}(X)\). Let \((R,P_{\Omega})\) be a commutative \(\Omega\)-operated algebra, where \(P_{\Omega}:=\{P_{\omega_{n}}:R^{n}\to R\,|\,\omega_{n}\in\Omega_{n},n\geq 1\}\) is the set of multiple linear operators on \(R\). Define a set map
\[f:X\to R,\quad x_{i}\mapsto r_{i},\quad i=1,\cdots,n.\]
By the universal property of \(\mathbf{k}\mathfrak{C}(X)\), there exists a unique \(\Omega\)-operated algebra morphism \(\bar{f}:\mathbf{k}\mathfrak{C}(X)\to R\). Then we denote
\[\phi_{R}(r_{1},r_{2},\cdots,r_{n}):=\bar{f}\Big{(}\phi(x_{1},\cdots,x_{n}) \Big{)}.\]
**Definition 3.2**.: _[_32, 43_]_ _Let \(X=\{x_{1},\cdots,x_{n}\}\) and let \(\phi(x_{1},\cdots,x_{n})\in\mathbf{k}\mathfrak{C}(X)\)._
1. _We say that a commutative_ \(\Omega\)_-operated algebra_ \((R,P_{\Omega})\) _is a_ **commutative \(\phi\)-algebra**_, if_ \[\phi_{R}(r_{1},\cdots,r_{n})=0\quad\text{for all }r_{1},\cdots,r_{n}\in R.\] _In this case,_ \(P_{\Omega}\) _is called a_ \(\phi\)_-operator_._
2. _We call_ \(\phi(x_{1},\cdots,x_{n})=0\) _(or simply_ \(\phi(x_{1},\cdots,x_{n})\)_) a **commutative \(\Omega\)-operated polynomial identity** (short for \(\Omega\)-COPI)._
3. _Let_ \(\Phi\subset\mathbf{k}\mathfrak{C}(X)\) _be a family of_ \(\Omega\)_-COPIs. A commutative_ \(\Omega\)_-operated algebra_ \((R,P_{\Omega})\) _is called a_ **commutative \(\Phi\)-algebra** _if it is a_ \(\phi\)_-algebra for all_ \(\phi\in\Phi\)_._
4. _Let \(S\) be a subset of \(R\). The smallest operated ideal of \(R\) containing \(S\) is called the \(\Omega\)**-operated ideal generated by \(S\)**, denoted by \(\mathrm{Id}(S)\)._
**Proposition 3.3**.: _[_19, 32_]_ _Let \(\Phi\subset\mathbf{k}\mathfrak{C}(X)\) be a family of \(\Omega\)-COPIs. Let \(R=\mathbf{k}\mathfrak{C}(Z)\) be the free commutative \(\Omega\)-operated algebra on the set \(Z\) with the natural embedding \(j_{Z}:Z\to\mathbf{k}\mathfrak{C}(Z)\). Let \(\mathrm{Id}_{\Phi}(Z)\) be the \(\Omega\)-operated ideal in \(R\) generated by the following set_
\[\{\phi_{R}(r_{1},\cdots,r_{n})\,|\,r_{1},\cdots,r_{n}\in R,\,\phi(x_{1},\cdots,x_{n})\in\Phi\}.\]
_Let \(\Pi_{\Phi}:R\to R/\mathrm{Id}_{\Phi}(Z)\) be the quotient morphism. Then the quotient \(\Omega\)-operated algebra \(R/\mathrm{Id}_{\Phi}(Z)\), together with \(i_{Z}:=\Pi_{\Phi}\circ j_{Z}\) and the operator \(P_{\Omega}\) induced by \(\lfloor\,\rfloor_{\Omega}\), is the free commutative \(\Phi\)-algebra on \(Z\)._
From this, we obtain
**Proposition 3.4**.: _Let \(X=\{x,y\}\) and let \(\Omega=\{\lfloor\,\rfloor_{\mathrm{d}},\,\lfloor\,\rfloor_{\mathrm{p}}\}\) be a set of two distinct linear operators on \(\mathbf{k}\mathfrak{C}(x,y)\). Let \(R=\mathbf{k}\mathfrak{C}(Z)\) be the free commutative \(\Omega\)-operated algebra on the set \(Z\) with the natural embedding \(j_{Z}:Z\to\mathbf{k}\mathfrak{C}(Z)\). Let \(0\neq\lambda\in\mathbf{k}\). Denote_
\[\Phi_{\mathrm{d}}:=\left\{\begin{array}{l}\phi_{1}(x,y):=\lfloor x\rfloor_{\mathrm{d}}\lfloor y\rfloor_{\mathrm{d}}+\lambda^{-1}\lfloor x\rfloor_{\mathrm{d}}y+\lambda^{-1}x\lfloor y\rfloor_{\mathrm{d}}-\lambda^{-1}\lfloor xy\rfloor_{\mathrm{d}},\\ \phi_{2}(x,y):=\lfloor\lfloor x\rfloor_{\mathrm{d}}\rfloor_{\mathrm{d}}+\lambda^{-1}\lfloor x\rfloor_{\mathrm{d}}\end{array}\right\}\subset\mathbf{k}\mathfrak{C}(x,y).\]
\[\Phi_{\mathrm{p}}:=\left\{\begin{array}{l}\phi_{3}(x,y):=\lfloor x\rfloor_{\mathrm{p}}\lfloor y\rfloor_{\mathrm{p}}-\lfloor x\lfloor y\rfloor_{\mathrm{p}}\rfloor_{\mathrm{p}}-\lfloor\lfloor x\rfloor_{\mathrm{p}}y\rfloor_{\mathrm{p}}-\lambda\lfloor xy\rfloor_{\mathrm{p}},\\ \phi_{4}(x,y):=\lfloor\lfloor x\rfloor_{\mathrm{p}}\rfloor_{\mathrm{p}}+\lambda\lfloor x\rfloor_{\mathrm{p}}\end{array}\right\}\subset\mathbf{k}\mathfrak{C}(x,y).\]
1. Let \(\Pi_{\Phi_{\mathrm{d}}}:R\to R/\mathrm{Id}_{\Phi_{\mathrm{d}}}(Z)\) be the quotient morphism. Then the quotient \(\Omega\)-operated algebra \(R/\mathrm{Id}_{\Phi_{\mathrm{d}}}(Z)\), together with \(i_{Z}:=\Pi_{\Phi_{\mathrm{d}}}\circ j_{Z}\) and the operator \(d\) induced by \(\lfloor\,\rfloor_{\mathrm{d}}\), is the free commutative quasi-idempotent differential algebra on \(Z\).
2. Let \(\Pi_{\Phi_{\mathrm{p}}}:R\to R/\mathrm{Id}_{\Phi_{\mathrm{p}}}(Z)\) be the quotient morphism. Then the quotient \(\Omega\)-operated algebra \(R/\mathrm{Id}_{\Phi_{\mathrm{p}}}(Z)\), together with \(i_{Z}:=\Pi_{\Phi_{\mathrm{p}}}\circ j_{Z}\) and the operator \(P\) induced by \(\lfloor\,\rfloor_{\mathrm{p}}\), is the free commutative quasi-idempotent Rota-Baxter algebra on \(Z\).
3. Let \(\Phi_{\mathrm{drb}}:=\Phi_{\mathrm{d}}\cup\Phi_{\mathrm{p}}\cup\{\phi_{5}(x,y):=\lfloor\lfloor x\rfloor_{\mathrm{p}}\rfloor_{\mathrm{d}}-x\}\) and let \(\Pi_{\Phi_{\mathrm{drb}}}:R\to R/\mathrm{Id}_{\Phi_{\mathrm{drb}}}(Z)\) be the quotient morphism. Then the quotient \(\Omega\)-operated algebra \(R/\mathrm{Id}_{\Phi_{\mathrm{drb}}}(Z)\), together with \(i_{Z}:=\Pi_{\Phi_{\mathrm{drb}}}\circ j_{Z}\) and the operators \(d\) and \(P\) induced by \(\lfloor\,\rfloor_{\mathrm{d}}\) and \(\lfloor\,\rfloor_{\mathrm{p}}\), is the free commutative quasi-idempotent differential Rota-Baxter algebra on \(Z\).

**Definition 3.5**.: Let \(\leq\) be a monomial order on \(\mathfrak{C}(Y)\) and let \(f,g\in\mathbf{k}\mathfrak{C}(Y)\) be two monic commutative \(\Omega\)-polynomials.
1. If there exist \(\omega,\mu,\nu\in\mathfrak{C}(Y)\) such that \(\omega=\bar{f}\mu=\nu\bar{g}\) with \(\max\{\,|\bar{f}|,\,|\bar{g}|\,\}<|\omega|<|\bar{f}|+|\bar{g}|\), we call \[(f,g)_{\omega}^{\mu,\nu}:=f\mu-\nu g\] the **intersection composition of \(f\) and \(g\) with respect to \((\mu,\nu)\)**.
2. If there exist \(q\in\mathfrak{C}^{\star}(Y)\) and \(\omega\in\mathfrak{C}(Y)\) such that \(\omega=\bar{f}=q|_{\bar{g}}\), we call \[(f,g)_{\omega}^{q}:=f-q|_{g}\] the **including composition of \(f\) and \(g\) with respect to \(q\)**.
The \(\Omega\)-word \(\omega\) presented in the above definition is called the **ambiguity** of \((f,g)_{\omega}^{\mu,\nu}\) or \((f,g)_{\omega}^{q}\).
**Definition 3.6**.: Let \(\leq\) be a monomial order on \(\mathfrak{C}(Y)\). Let \(S\) be a set of monic commutative \(\Omega\)-polynomials in \(\mathbf{k}\mathfrak{C}(Y)\). Let \(\omega\in\mathfrak{C}(Y)\).
1. A commutative \(\Omega\)-polynomial \(f\) is called **trivial modulo**\((S,\omega)\) if \[f=\sum_{i}c_{i}q_{i}|_{s_{i}},\] where \(c_{i}\in\mathbf{k}\), \(q_{i}\in\mathfrak{C}^{\star}(Y)\), \(s_{i}\in S\) and \(q_{i}|_{\bar{s}_{i}}<\omega\), and we denote it by \[f\equiv 0\mod(S,\omega).\]
2. For any commutative \(\Omega\)-polynomials \(f\) and \(g\), a pair \((f,g)\) is called **congruent modulo**\((S,\omega)\), denoted by \(f\equiv g\mod(S,\omega)\), if \(f-g\) is trivial modulo \((S,\omega)\).
3. The set \(S\) is called a **Grobner-Shirshov basis with respect to \(\leq\)** if for all \(f,g\in S\), both \((f,g)_{\omega}^{\mu,\nu}\) and \((f,g)_{\omega}^{q}\) are trivial modulo \((S,\omega)\).
**Theorem 3.7**.: (Composition-Diamond Lemma)[35, Theorem 3.2] _Let \(S\subseteq\mathbf{k}\mathfrak{C}(Y)\) be a set of monic commutative \(\Omega\)-polynomials. Let \(\leq\) be a monomial order on \(\mathfrak{C}(Y)\). Then the following statements are equivalent:_
1. \(S\) _is a Grobner-Shirshov basis._
2. _As_ \(\mathbf{k}\)_-vector spaces,_ \[\mathbf{k}\mathfrak{C}(Y)=\mathbf{k}\mathrm{Irr}(S)\oplus\mathrm{Id}(S),\] \[\text{where }\mathrm{Irr}(S):=\mathfrak{C}(Y)\setminus\{\,q|_{\bar{s}}\,|\,q\in\mathfrak{C}^{\star}(Y),s\in S\}\text{, and so }\mathrm{Irr}(S)\text{ is a }\mathbf{k}\text{-basis of }\mathbf{k}\mathfrak{C}(Y)/\mathrm{Id}(S).\]
### A monomial order on \(\mathfrak{C}(Y)\)
In this subsection, we give a monomial order on \(\mathfrak{C}(Y)\) when the well-ordered set \(\Omega=\{\lfloor\,\rfloor_{\mathrm{d}},\lfloor\,\rfloor_{\mathrm{p}}\}\) is composed of two distinct linear operators on \(\mathbf{k}\mathfrak{C}(Y)\), ordered by \(\lfloor\,\rfloor_{\mathrm{d}}<\lfloor\,\rfloor_{\mathrm{p}}\).

**Definition 3.8**.: Let \(u,v\in\mathfrak{C}(Y)\).

1. Define \[u\leq_{\mathrm{dgu}}v\;\Leftrightarrow\;\deg_{Y}(u)\leq\deg_{Y}(v),\] where \(\deg_{Y}(u)\) is the number of occurrences of elements of \(Y\) in \(u\).
2. Define \[u\leq_{\mathrm{br\omega}}v\;\Leftrightarrow\;|u|_{\Omega}\leq|v|_{\Omega},\] where \(|u|_{\Omega}\) (resp. \(|v|_{\Omega}\)) is the \(\Omega\)-breadth of \(u\) (resp. \(v\)) defined in Eq. (9).
We next construct a monomial order \(\leq_{\mathrm{cdb}}\) on \(\mathfrak{C}(Y)\). Since \(\mathfrak{C}(Y)=\bigcup_{k\geq 0}\mathfrak{C}_{k}\), we do this by induction on \(k\geq 0\). Let \((Y,\leq)\) be a well-ordered set.
1. Let \(u,v\in\mathfrak{C}_{0}(=C(Y))\). Write \[u=u_{1}\cdots u_{r}\quad\text{and}\quad v=v_{1}\cdots v_{s},\] where \(u_{1},\cdots,u_{r},v_{1},\cdots,v_{s}\in Y\). Then define \[u\leq_{0}v\;\Leftrightarrow\;u\leq_{\mathrm{dlex}}v,\] where \(\leq_{\mathrm{dlex}}\) is the degree lexicographical order, that is, \[u\leq_{\mathrm{dlex}}v\;\Leftrightarrow\;(\deg_{Y}(u),u_{1},\cdots,u_{r}) \leq(\deg_{Y}(v),v_{1},\cdots,v_{s}).\] Here \(\deg_{Y}(u)\) is the number of occurrence of \(y\in Y\) in \(u\).
2. For a given \(k\geq 1\), suppose that a well order \(\leq_{k}\) on \(\mathfrak{C}_{k}\) has been defined. Let \(u,v\in\mathfrak{C}_{k+1}\). Since \(\mathfrak{C}_{k+1}=C(Y\sqcup\Omega(\mathfrak{C}_{k}))\), we can write \[u=u_{0}[u_{1}^{*}]_{\alpha_{1}}u_{1}[u_{2}^{*}]_{\alpha_{2}}\cdots[u_{r}^{*}] _{\alpha_{r}}u_{r}\quad\text{and}\quad v=v_{0}[v_{1}^{*}]_{\beta_{1}}v_{1}[v_{ 2}^{*}]_{\beta_{2}}\cdots[v_{s}^{*}]_{\beta_{s}}v_{s},\] where \(u_{0},v_{0},u_{i},v_{j}\in C(Y)\), \(\alpha_{i},\beta_{i}\in\{\mathrm{d},\mathrm{p}\}\) and \(u_{i}^{*},v_{j}^{*}\in\mathfrak{C}_{k}\) for \(i=1,\cdots,r\) and \(j=1,\cdots,s\). We first assume that \(r=s\). Then define \[u\leq_{\mathrm{lex}_{k+1}}v\;\Leftrightarrow (\alpha_{1},\alpha_{2},\cdots,\alpha_{r},u_{1}^{*},u_{2}^{*}, \cdots,u_{r}^{*},u_{0},u_{1},\cdots,u_{r})\] \[\leq(\beta_{1},\beta_{2},\cdots,\beta_{r},v_{1}^{*},v_{2}^{*}, \cdots,v_{r}^{*},v_{0},v_{1},\cdots,v_{r}).\] Then by the induction hypothesis and [44, Lemma 5.4.(b)], \(\leq_{\mathrm{lex}_{k+1}}\) is a well order. If \(\Omega\) is a singleton, then \(\alpha_{i}=\beta_{i}\) for \(1\leq i\leq r\). So \(\leq_{\mathrm{lex}_{k+1}}\) is the same as the case of the order \(\leq_{\mathrm{db}}\) defined in [44, Lemma 5.5]. This is the only difference between \(\leq_{\mathrm{cdb}}\) and \(\leq_{\mathrm{db}}\). Now define \[u\leq_{k+1}v\;\Leftrightarrow\;\left\{\begin{array}{l}u<_{\mathrm{dgu}}v,\\ \text{or }u=_{\mathrm{dgu}}v\text{ and }u<_{\mathrm{br\omega}}v,\\ \text{or }u=_{\mathrm{dgu}}v\text{, }u=_{\mathrm{br\omega}}v\text{ }(=r)\text{ and }u<_{\mathrm{lex}_{k+1}}v.\end{array}\right.\] Then by [44, Lemma 5.4.(a)], \(\leq_{k+1}\) is a well order on \(\mathfrak{C}_{k+1}\). Finally define the order (10) \[\leq_{\mathrm{cdb}}:=\bigcup_{k\geq 0}\leq_{k}.\]
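To make the comparison concrete, here is a minimal Python sketch of the "degree, then breadth, then lexicographic" pattern used in the construction above. The tuple encoding of \(\Omega\)-words, the recursive counting used for \(\deg_{Y}\), and the top-level counting used for the \(\Omega\)-breadth are our own illustrative assumptions (the actual \(\Omega\)-breadth is the one of Eq. (9)); the sketch is not the formal definition of \(\leq_{\mathrm{cdb}}\).

```python
# A minimal sketch (our own encoding, not the paper's formal definition):
# an Omega-word is a tuple whose entries are plain symbols of Y (strings)
# or bracketed letters ("d", w) / ("p", w), where w is again an Omega-word.

OP_ORDER = {"d": 0, "p": 1}  # assumed order on Omega: |_|_d < |_|_p

def deg_Y(u):
    """Number of occurrences of letters of Y in u, counted recursively."""
    return sum(deg_Y(x[1]) if isinstance(x, tuple) else 1 for x in u)

def breadth(u):
    """Assumed Omega-breadth: number of bracketed letters at the top level."""
    return sum(1 for x in u if isinstance(x, tuple))

def key_cdb(u):
    """Comparison key mirroring the 'degree, breadth, lexicographic' stages."""
    ops = tuple(OP_ORDER[x[0]] for x in u if isinstance(x, tuple))
    inner = tuple(key_cdb(x[1]) for x in u if isinstance(x, tuple))
    plain = tuple(sorted(x for x in u if not isinstance(x, tuple)))
    return (deg_Y(u), breadth(u), ops, inner, plain)

def leq_cdb(u, v):
    return key_cdb(u) <= key_cdb(v)

# Example: |x|_p  vs  |x|_d y  in C({x, y}).
u = (("d", ("x",)), "y")   # |x|_d y   (Y-degree 2)
v = (("p", ("x",)),)       # |x|_p     (Y-degree 1)
print(leq_cdb(v, u))       # True: v has smaller Y-degree
```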
**Proposition 3.9**.: _The order \(\leq_{\mathrm{cdb}}\) is a monomial order on \(\mathfrak{C}(Y)\)._
Proof.: It follows from the same proof of [44, Theorem 5.8].
## 4. Free commutative quasi-idempotent differential Rota-Baxter algebras
In this section we will give a linear basis of free commutative quasi-idempotent differential algebras (resp. Rota-Baxter algebras, resp. differential Rota-Baxter algebras) by using the Composition-Diamond lemma.
### Linear bases of free commutative quasi-idempotent differential algebras
In this subsection, we give a Grobner-Shirshov basis for free commutative quasi-idempotent differential algebras, which leads to a linear basis of these algebras. For this, \(\Omega\) is taken to be the singleton \(\{\lfloor\,\rfloor_{\mathrm{d}}\}\). We present the main result of this subsection as below.
**Theorem 4.1**.: _Let \(Y\) be a well-ordered set. Let \(\lambda\neq 0\) and let_
\[S_{\mathrm{d}}:=\left\{\begin{array}{l}\phi_{1}(u,v):=[u]_{\mathrm{d}}[v]_{ \mathrm{d}}+\lambda^{-1}[u]_{\mathrm{d}}v+\lambda^{-1}u[v]_{\mathrm{d}}- \lambda^{-1}[uv]_{\mathrm{d}}\\ \phi_{2}(u,v):=[u]_{\mathrm{d}}^{2}+\lambda^{-1}[u]_{\mathrm{d}}\end{array} \right|u,v\in\mathfrak{C}(Y)\right\}.\]
_With the monomial order \(\leq_{\mathrm{cdb}}\) defined in Eq. (10),_
1. \(S_{\mathrm{d}}\) _is a Grobner-Shirshov basis in_ \(\mathbf{k}\mathfrak{C}(Y)\)_._
2. \[\mathrm{Irr}(S_{\mathrm{d}}):=\mathfrak{C}(Y)\setminus\left\{q_{1}|_{[u]_{\mathrm{d}}[v]_{\mathrm{d}}},q_{2}|_{[u]_{\mathrm{d}}^{2}}\,\Big{|}\,q_{1},q_{2}\in\mathfrak{C}^{\star}(Y),u,v\in\mathfrak{C}(Y)\right\}\] _is a linear basis of the free commutative quasi-idempotent differential algebra on_ \(Y\)_._
Proof.: \((a)\) Denote by \(i\wedge j\) the composition of \(\Omega\)-polynomials of types \(\phi_{i}(u,v)\) and \(\phi_{j}(u,v)\) for \(i,j=1,2\). The ambiguities of all possible compositions of commutative \(\Omega\)-polynomials in \(S\) are presented as below.
\[\begin{array}{|c|c|c|}\hline i\wedge j&1&2\\ \hline 1&[u]_{\mathrm{d}}[v]_{\mathrm{d}}[w]_{\mathrm{d}},\ \ \lfloor q|_{[u]_{\mathrm{d}}[v]_{\mathrm{d}}}\rfloor_{\mathrm{d}}[w]_{\mathrm{d}}&\lfloor q|_{[u]_{\mathrm{d}}^{2}}\rfloor_{\mathrm{d}}[w]_{\mathrm{d}}\\ \hline 2&\lfloor q|_{[u]_{\mathrm{d}}[v]_{\mathrm{d}}}\rfloor_{\mathrm{d}}^{2}&\lfloor q|_{[u]_{\mathrm{d}}^{2}}\rfloor_{\mathrm{d}}^{2}\\ \hline\end{array}\]
We only verify the first case: \(1\wedge 1\). The other cases are easy to check. Let
\[f:=\phi_{1}(u,v)=[u]_{\mathrm{d}}[v]_{\mathrm{d}}+\lambda^{-1}[u]_{\mathrm{d} }v+\lambda^{-1}u[v]_{\mathrm{d}}-\lambda^{-1}[uv]_{\mathrm{d}},\]
\[g:=\phi_{1}(z,w)=[z]_{\mathrm{d}}[w]_{\mathrm{d}}+\lambda^{-1}[z]_{\mathrm{d}}w+\lambda^{-1}z[w]_{\mathrm{d}}-\lambda^{-1}[zw]_{\mathrm{d}}.\]
By the monomial order \(\leq_{\mathrm{cdb}}\), we obtain \(\bar{f}=[u]_{\mathrm{d}}[v]_{\mathrm{d}}\) and \(\bar{g}=[z]_{\mathrm{d}}[w]_{\mathrm{d}}\).
**(The case of intersection compositions)**. Suppose that \(\omega=\bar{f}\mu=\nu\bar{g}\) with \(\max\{|\bar{f}|,\,|\bar{g}|\,\}<|\omega|<|\bar{f}|+|\bar{g}|\). Then \(\mu=[w]_{\mathrm{d}}\) and \(\nu=[u]_{\mathrm{d}}\). Thus \(v=z\), and \(\omega=[u]_{\mathrm{d}}[v]_{\mathrm{d}}[w]_{\mathrm{d}}\).
\[(f,g)_{\omega}^{\mu,\nu} = f\mu-\nu g\] \[= \Big{(}[u]_{\mathrm{d}}[v]_{\mathrm{d}}+\lambda^{-1}[u]_{\mathrm{d}}v+\lambda^{-1}u[v]_{\mathrm{d}}-\lambda^{-1}[uv]_{\mathrm{d}}\Big{)}[w]_{\mathrm{d}}\] \[-[u]_{\mathrm{d}}\Big{(}[v]_{\mathrm{d}}[w]_{\mathrm{d}}+\lambda^{-1}[v]_{\mathrm{d}}w+\lambda^{-1}v[w]_{\mathrm{d}}-\lambda^{-1}[vw]_{\mathrm{d}}\Big{)}\] \[= \lambda^{-1}u[v]_{\mathrm{d}}[w]_{\mathrm{d}}-\lambda^{-1}[uv]_{\mathrm{d}}[w]_{\mathrm{d}}-\lambda^{-1}[u]_{\mathrm{d}}[v]_{\mathrm{d}}w+\lambda^{-1}[u]_{\mathrm{d}}[vw]_{\mathrm{d}}\] \[\equiv \lambda^{-1}u\Big{(}-\lambda^{-1}[v]_{\mathrm{d}}w-\lambda^{-1}v[w]_{\mathrm{d}}+\lambda^{-1}[vw]_{\mathrm{d}}\Big{)}\] \[-\lambda^{-1}\Big{(}-\lambda^{-1}[uv]_{\mathrm{d}}w-\lambda^{-1}uv[w]_{\mathrm{d}}+\lambda^{-1}[uvw]_{\mathrm{d}}\Big{)}\] \[-\lambda^{-1}\Big{(}-\lambda^{-1}[u]_{\mathrm{d}}v-\lambda^{-1}u[v]_{\mathrm{d}}+\lambda^{-1}[uv]_{\mathrm{d}}\Big{)}w\] \[+\lambda^{-1}\Big{(}-\lambda^{-1}[u]_{\mathrm{d}}vw-\lambda^{-1}u[vw]_{\mathrm{d}}+\lambda^{-1}[uvw]_{\mathrm{d}}\Big{)}\] \[\equiv -\lambda^{-2}u[v]_{\mathrm{d}}w-\lambda^{-2}uv[w]_{\mathrm{d}}+\lambda^{-2}u[vw]_{\mathrm{d}}\] \[+\lambda^{-2}[uv]_{\mathrm{d}}w+\lambda^{-2}uv[w]_{\mathrm{d}}-\lambda^{-2}[uvw]_{\mathrm{d}}\] \[+\lambda^{-2}[u]_{\mathrm{d}}vw+\lambda^{-2}u[v]_{\mathrm{d}}w-\lambda^{-2}[uv]_{\mathrm{d}}w\]
\[\begin{array}{rcl}&&-\lambda^{-2}[u]_{\rm d}vw-\lambda^{-2}u[vw]_{\rm d}+\lambda^{-2}[uvw]_{\rm d}\\ &\equiv&0\qquad\bmod(S_{\rm d},\omega).\end{array}\]
Thus \((f,g)^{\mu,\nu}_{\omega}\) is trivial modulo \((S_{\rm d},\omega)\).
(**The case of including compositions**). Suppose that \(\omega=\bar{f}=q^{\prime}|_{\bar{g}}\). To keep the notation of variables consistent with the above table, we set \(f:=\phi_{1}(z,w)\) and \(g:=\phi_{1}(u,v)\). This gives \([z]_{\mathrm{d}}[w]_{\mathrm{d}}=q^{\prime}|_{[u]_{\mathrm{d}}[v]_{\mathrm{d}}}\). If \(q^{\prime}=\star\), then \([z]_{\mathrm{d}}=[u]_{\mathrm{d}}\) and \([w]_{\mathrm{d}}=[v]_{\mathrm{d}}\). This gives \((f,g)^{q^{\prime}}_{\omega}=0\), which is trivial modulo \((S_{\rm d},\omega)\). Now let \(q^{\prime}\neq\star\). Then \(q^{\prime}=\lfloor q\rfloor_{\mathrm{d}}[w]_{\mathrm{d}}\) or \(q^{\prime}=[z]_{\mathrm{d}}\lfloor q\rfloor_{\mathrm{d}}\), where \(q\) is in \(\mathfrak{C}^{\star}(Y)\). Thus \(z=q|_{[u]_{\mathrm{d}}[v]_{\mathrm{d}}}\) or \(w=q|_{[u]_{\mathrm{d}}[v]_{\mathrm{d}}}\). Since the multiplication of \(\mathbf{k}\mathfrak{C}(Y)\) is commutative, we only consider the first case when \(z=q|_{[u]_{\mathrm{d}}[v]_{\mathrm{d}}}\). Thus \(\omega=\lfloor q|_{[u]_{\mathrm{d}}[v]_{\mathrm{d}}}\rfloor_{\mathrm{d}}[w]_{\mathrm{d}}\) and \(q^{\prime}=\lfloor q\rfloor_{\mathrm{d}}[w]_{\mathrm{d}}\).
\[\begin{array}{rcl}(f,g)^{q^{\prime}}_{\omega}&=&f-q^{\prime}|_{g}\\ &=&\lfloor q|_{[u]_{\mathrm{d}}[v]_{\mathrm{d}}}\rfloor_{\mathrm{d}}[w]_{\mathrm{d}}+\lambda^{-1}\lfloor q|_{[u]_{\mathrm{d}}[v]_{\mathrm{d}}}\rfloor_{\mathrm{d}}w+\lambda^{-1}q|_{[u]_{\mathrm{d}}[v]_{\mathrm{d}}}[w]_{\mathrm{d}}-\lambda^{-1}\lfloor q|_{[u]_{\mathrm{d}}[v]_{\mathrm{d}}}w\rfloor_{\mathrm{d}}\\ &&-\lfloor q|_{[u]_{\mathrm{d}}[v]_{\mathrm{d}}+\lambda^{-1}[u]_{\mathrm{d}}v+\lambda^{-1}u[v]_{\mathrm{d}}-\lambda^{-1}[uv]_{\mathrm{d}}}\rfloor_{\mathrm{d}}[w]_{\mathrm{d}}\\ &=&\lambda^{-1}\lfloor q|_{[u]_{\mathrm{d}}[v]_{\mathrm{d}}}\rfloor_{\mathrm{d}}w+\lambda^{-1}q|_{[u]_{\mathrm{d}}[v]_{\mathrm{d}}}[w]_{\mathrm{d}}-\lambda^{-1}\lfloor q|_{[u]_{\mathrm{d}}[v]_{\mathrm{d}}}w\rfloor_{\mathrm{d}}\\ &&-\lambda^{-1}\lfloor q|_{[u]_{\mathrm{d}}v}\rfloor_{\mathrm{d}}[w]_{\mathrm{d}}-\lambda^{-1}\lfloor q|_{u[v]_{\mathrm{d}}}\rfloor_{\mathrm{d}}[w]_{\mathrm{d}}+\lambda^{-1}\lfloor q|_{[uv]_{\mathrm{d}}}\rfloor_{\mathrm{d}}[w]_{\mathrm{d}}\\ &\equiv&-\lambda^{-2}\lfloor q|_{[u]_{\mathrm{d}}v}\rfloor_{\mathrm{d}}w-\lambda^{-2}\lfloor q|_{u[v]_{\mathrm{d}}}\rfloor_{\mathrm{d}}w+\lambda^{-2}\lfloor q|_{[uv]_{\mathrm{d}}}\rfloor_{\mathrm{d}}w\\ &&-\lambda^{-2}q|_{[u]_{\mathrm{d}}v}[w]_{\mathrm{d}}-\lambda^{-2}q|_{u[v]_{\mathrm{d}}}[w]_{\mathrm{d}}+\lambda^{-2}q|_{[uv]_{\mathrm{d}}}[w]_{\mathrm{d}}\\ &&+\lambda^{-2}\lfloor q|_{[u]_{\mathrm{d}}v}w\rfloor_{\mathrm{d}}+\lambda^{-2}\lfloor q|_{u[v]_{\mathrm{d}}}w\rfloor_{\mathrm{d}}-\lambda^{-2}\lfloor q|_{[uv]_{\mathrm{d}}}w\rfloor_{\mathrm{d}}\\ &&+\lambda^{-2}\lfloor q|_{[u]_{\mathrm{d}}v}\rfloor_{\mathrm{d}}w+\lambda^{-2}q|_{[u]_{\mathrm{d}}v}[w]_{\mathrm{d}}-\lambda^{-2}\lfloor q|_{[u]_{\mathrm{d}}v}w\rfloor_{\mathrm{d}}\\ &&+\lambda^{-2}\lfloor q|_{u[v]_{\mathrm{d}}}\rfloor_{\mathrm{d}}w+\lambda^{-2}q|_{u[v]_{\mathrm{d}}}[w]_{\mathrm{d}}-\lambda^{-2}\lfloor q|_{u[v]_{\mathrm{d}}}w\rfloor_{\mathrm{d}}\\ &&-\lambda^{-2}\lfloor q|_{[uv]_{\mathrm{d}}}\rfloor_{\mathrm{d}}w-\lambda^{-2}q|_{[uv]_{\mathrm{d}}}[w]_{\mathrm{d}}+\lambda^{-2}\lfloor q|_{[uv]_{\mathrm{d}}}w\rfloor_{\mathrm{d}}\\ &\equiv&0\qquad\bmod(S_{\rm d},\omega).\end{array}\]
Thus \((f,g)^{q^{\prime}}_{\omega}\) is trivial modulo \((S_{\rm d},\omega)\).
(\(b\)) By Proposition 3.4 (\(a\)), \(\mathbf{k}\mathfrak{C}(Y)/\mathrm{Id}(S_{\rm d})\) is the free commutative quasi-idempotent differential algebra on \(Y\). Then by Item (\(a\)) and Theorem 3.7 (\(b\)), \(\mathrm{Irr}(S_{\rm d})\) is a linear basis of \(\mathbf{k}\mathfrak{C}(Y)/\mathrm{Id}(S_{\rm d})\).
### Linear bases of free commutative quasi-idempotent Rota-Baxter algebras
In order to obtain a linear basis of free commutative quasi-idempotent Rota-Baxter algebras, we will establish a Grobner-Shirshov basis for such algebras. For this, \(\Omega\) is taken to be the singleton \(\{\lfloor\,\rfloor_{\rm p}\}\).
**Theorem 4.2**.: _Let \(Y\) be a well-ordered set. Let_
\[S_{\rm rb}:=\left\{\begin{array}{l}\phi_{3}(u,v):=\lfloor u\rfloor_{\rm p} \lfloor v\rfloor_{\rm p}-\lfloor u\lfloor v\rfloor_{\rm p}\rfloor_{\rm p}- \lfloor\lfloor u\rfloor_{\rm p}v\rfloor_{\rm p}-\lambda\lfloor uv\rfloor_{ \rm p}\\ \phi_{4}(u,v):=\lfloor u\rfloor_{\rm p}^{2}+\lambda\lfloor u\rfloor_{\rm p} \end{array}\right|u,v\in\mathfrak{C}(Y)\right\}.\]
_With the monomial order \(\leq_{\rm cdb}\) defined in Eq. (10),_
1. \(S_{\rm rb}\) _is a Grobner-Shirshov basis in_ \(\mathbf{k}\mathfrak{C}(Y)\)_._
2. \[\mathrm{Irr}(S_{\rm rb}):=\mathfrak{C}(Y)\setminus\left\{q_{1}|_{\lfloor u\rfloor_{\mathrm{p}}\lfloor v\rfloor_{\mathrm{p}}},q_{2}|_{\lfloor u\rfloor_{\mathrm{p}}^{2}}\,\Big{|}\,q_{1},q_{2}\in\mathfrak{C}^{\star}(Y),u,v\in\mathfrak{C}(Y)\right\}\] _is a linear basis of the free commutative quasi-idempotent Rota-Baxter algebra on_ \(Y\)_._
Proof.: (\(a\)) We also denote by \(i\wedge j\) the composition of \(\Omega\)-polynomials of types \(\phi_{i}(u,v)\) and \(\phi_{j}(u,v)\) for \(i,j=3,4\). The ambiguities of all possible compositions of commutative \(\Omega\)-polynomials in
\(S_{\rm rb}\) are presented as below.
\[\begin{array}{|c|c|c|}\hline i\wedge j&3&4\\ \hline 3&\lfloor u\rfloor_{\mathrm{p}}\lfloor v\rfloor_{\mathrm{p}}\lfloor w\rfloor_{\mathrm{p}},\ \ \lfloor q|_{\lfloor u\rfloor_{\mathrm{p}}\lfloor v\rfloor_{\mathrm{p}}}\rfloor_{\mathrm{p}}\lfloor w\rfloor_{\mathrm{p}}&\lfloor q|_{\lfloor u\rfloor_{\mathrm{p}}^{2}}\rfloor_{\mathrm{p}}\lfloor w\rfloor_{\mathrm{p}}\\ \hline 4&\lfloor q|_{\lfloor u\rfloor_{\mathrm{p}}\lfloor v\rfloor_{\mathrm{p}}}\rfloor_{\mathrm{p}}^{2}&\lfloor q|_{\lfloor u\rfloor_{\mathrm{p}}^{2}}\rfloor_{\mathrm{p}}^{2}\\ \hline\end{array}\]

(\(b\)) By Proposition 3.4 (\(b\)), \(\mathbf{k}\mathfrak{C}(Y)/\mathrm{Id}(S_{\rm rb})\) is the free commutative quasi-idempotent Rota-Baxter algebra on \(Y\). Then by Item (\(a\)) and Theorem 3.7 (\(b\)), \(\mathrm{Irr}(S_{\rm rb})\) is a linear basis of \(\mathbf{k}\mathfrak{C}(Y)/\mathrm{Id}(S_{\rm rb})\).

### Linear bases of free commutative quasi-idempotent differential Rota-Baxter algebras

In this subsection, \(\Omega\) is taken to be \(\{\lfloor\,\rfloor_{\mathrm{d}},\lfloor\,\rfloor_{\mathrm{p}}\}\).

**Theorem 4.3**.: _Let \(Y\) be a well-ordered set. Let \(\lambda\neq 0\) and let_

\[S_{\mathrm{drb}}:=S_{\mathrm{d}}\cup S_{\mathrm{rb}}\cup\left\{\phi_{5}(u,v):=\lfloor\lfloor u\rfloor_{\mathrm{p}}\rfloor_{\mathrm{d}}-u\,\Big{|}\,u,v\in\mathfrak{C}(Y)\right\}.\]

_With the monomial order \(\leq_{\mathrm{cdb}}\) defined in Eq. (10),_

1. \(S_{\mathrm{drb}}\) _is a Grobner-Shirshov basis in_ \(\mathbf{k}\mathfrak{C}(Y)\)_._
2. \[\mathrm{Irr}(S_{\mathrm{drb}}):=\mathfrak{C}(Y)\setminus\left\{\begin{array}{l}q_{1}|_{\lfloor u\rfloor_{\mathrm{p}}\lfloor v\rfloor_{\mathrm{p}}},\,q_{2}|_{\lfloor u\rfloor_{\mathrm{p}}^{2}},\,q_{3}|_{\lfloor u\rfloor_{\mathrm{d}}\lfloor v\rfloor_{\mathrm{d}}},\\ q_{4}|_{\lfloor u\rfloor_{\mathrm{d}}^{2}},\,q_{5}|_{\lfloor\lfloor u\rfloor_{\mathrm{p}}\rfloor_{\mathrm{d}}}\end{array}\ \Big{|}\ q_{i}\in\mathfrak{C}^{\star}(Y),1\leq i\leq 5,u,v\in\mathfrak{C}(Y)\right\}\] _is a linear basis of the free commutative quasi-idempotent differential Rota-Baxter algebra on_ \(Y\)_._
Proof.: \((a)\) Denote by \(i\wedge j\) the composition of \(\Omega\)-polynomials of types \(\phi_{i}(u,v),\phi_{j}(u,v)\in S_{\mathrm{drb}}\) for \(i,j=1,2,3,4,5\). All ambiguities of compositions of commutative \(\Omega\)-polynomials in \(S\) are given in the following table.
Firstly, by the proofs of Theorem 4.1 and Theorem 4.2, the eight cases: \(1\wedge 1\), \(1\wedge 2\), \(2\wedge 1\), \(2\wedge 2\), \(3\wedge 3\), \(3\wedge 4\), \(4\wedge 3\) and \(4\wedge 4\), have been proved. By the proof of [35, Theorem 6.1], the three cases: \(3\wedge 5\), \(5\wedge 3\) and \(5\wedge 5\), are already validated. We need to check the remaining fourteen cases. We only verify two cases: \(1\wedge 3\) and \(2\wedge 3\). The others are similar. Let
\[f_{1}:=\phi_{1}(u_{1},v_{1})=\lfloor u_{1}\rfloor_{\mathrm{d}}\lfloor v_{1}\rfloor_{\mathrm{d}}+\lambda^{-1}\lfloor u_{1}\rfloor_{\mathrm{d}}v_{1}+\lambda^{-1}u_{1}\lfloor v_{1}\rfloor_{\mathrm{d}}-\lambda^{-1}\lfloor u_{1}v_{1}\rfloor_{\mathrm{d}}\] \[f_{2}:=\phi_{2}(u_{2},v_{2})=\lfloor u_{2}\rfloor_{\mathrm{d}}^{2}+\lambda^{-1}\lfloor u_{2}\rfloor_{\mathrm{d}}\] \[f_{3}:=\phi_{3}(u_{3},v_{3})=\lfloor u_{3}\rfloor_{\mathrm{p}}\lfloor v_{3}\rfloor_{\mathrm{p}}-\lfloor u_{3}\lfloor v_{3}\rfloor_{\mathrm{p}}\rfloor_{\mathrm{p}}-\lfloor\lfloor u_{3}\rfloor_{\mathrm{p}}v_{3}\rfloor_{\mathrm{p}}-\lambda\lfloor u_{3}v_{3}\rfloor_{\mathrm{p}}\] \[f_{4}:=\phi_{4}(u_{4},v_{4})=\lfloor u_{4}\rfloor_{\mathrm{p}}^{2}+\lambda\lfloor u_{4}\rfloor_{\mathrm{p}}\] \[f_{5}:=\phi_{5}(u_{5},v_{5})=\lfloor\lfloor u_{5}\rfloor_{\mathrm{p}}\rfloor_{\mathrm{d}}-u_{5}.\]
By the monomial order \(\leq_{\mathrm{cdb}}\), we obtain \(\bar{f}_{1}=\lfloor u_{1}\rfloor_{\mathrm{d}}\lfloor v_{1}\rfloor_{\mathrm{d}}\), \(\bar{f}_{2}=\lfloor u_{2}\rfloor_{\mathrm{d}}^{2}\), \(\bar{f}_{3}=\lfloor u_{3}\rfloor_{\mathrm{p}}\lfloor v_{3}\rfloor_{\mathrm{p}}\), \(\bar{f}_{4}=\lfloor u_{4}\rfloor_{\mathrm{p}}^{2}\) and \(\bar{f}_{5}=\lfloor\lfloor u_{5}\rfloor_{\mathrm{p}}\rfloor_{\mathrm{d}}\).
**Case: \(1\wedge 3\)**.
(**The case of intersection compositions**). Suppose that \(\omega=\bar{f}_{1}\mu=\nu\bar{f}_{3}\) with \(\max\{|\bar{f}_{1}|,\,|\bar{f}_{3}|\}<|\omega|<|\bar{f}_{1}|+|\bar{f}_{3}|\). Then \(|\omega|=3\), and so \(\omega=\lfloor u_{1}\rfloor_{\mathrm{d}}\lfloor v_{1}\rfloor_{\mathrm{d}}\mu=\nu\lfloor u_{3}\rfloor_{\mathrm{p}}\lfloor v_{3}\rfloor_{\mathrm{p}}\). Thus \(\lfloor v_{1}\rfloor_{\mathrm{d}}=\lfloor u_{3}\rfloor_{\mathrm{p}}\), a contradiction. Thus there are no intersection compositions of \(f_{1}\) and \(f_{3}\).
(**The case of including compositions**). Rewrite \(f_{1}=\phi_{1}(z,w)\) and \(f_{3}=\phi_{3}(u,v)\). Suppose that \(\omega=\bar{f}_{1}=q^{\prime}|_{\bar{f}_{3}}\). This gives \(\lfloor z\rfloor_{\mathrm{d}}\lfloor w\rfloor_{\mathrm{d}}=q^{\prime}|_{\lfloor u\rfloor_{\mathrm{p}}\lfloor v\rfloor_{\mathrm{p}}}\). Then \(q^{\prime}=\lfloor q\rfloor_{\mathrm{d}}\lfloor w\rfloor_{\mathrm{d}}\) or \(q^{\prime}=\lfloor z\rfloor_{\mathrm{d}}\lfloor q\rfloor_{\mathrm{d}}\), where \(q\) is in \(\mathfrak{C}^{\star}(Y)\). Thus \(z=q|_{\lfloor u\rfloor_{\mathrm{p}}\lfloor v\rfloor_{\mathrm{p}}}\) or \(w=q|_{\lfloor u\rfloor_{\mathrm{p}}\lfloor v\rfloor_{\mathrm{p}}}\). We only check the first case when \(\omega=\lfloor q|_{\lfloor u\rfloor_{\mathrm{p}}\lfloor v\rfloor_{\mathrm{p}}}\rfloor_{\mathrm{d}}\lfloor w\rfloor_{\mathrm{d}}\) and \(q^{\prime}=\lfloor q\rfloor_{\mathrm{d}}\lfloor w\rfloor_{\mathrm{d}}\).
\[\begin{array}{rcl}(f_{1},f_{3})^{q^{\prime}}_{\omega}&=&f_{1}-q^{\prime}|_{f_{3}}\\ &=&\lfloor q|_{\lfloor u\rfloor_{\mathrm{p}}\lfloor v\rfloor_{\mathrm{p}}}\rfloor_{\mathrm{d}}\lfloor w\rfloor_{\mathrm{d}}+\lambda^{-1}\lfloor q|_{\lfloor u\rfloor_{\mathrm{p}}\lfloor v\rfloor_{\mathrm{p}}}\rfloor_{\mathrm{d}}w+\lambda^{-1}q|_{\lfloor u\rfloor_{\mathrm{p}}\lfloor v\rfloor_{\mathrm{p}}}\lfloor w\rfloor_{\mathrm{d}}-\lambda^{-1}\lfloor q|_{\lfloor u\rfloor_{\mathrm{p}}\lfloor v\rfloor_{\mathrm{p}}}w\rfloor_{\mathrm{d}}\\ &&-\lfloor q|_{\lfloor u\rfloor_{\mathrm{p}}\lfloor v\rfloor_{\mathrm{p}}-\lfloor u\lfloor v\rfloor_{\mathrm{p}}\rfloor_{\mathrm{p}}-\lfloor\lfloor u\rfloor_{\mathrm{p}}v\rfloor_{\mathrm{p}}-\lambda\lfloor uv\rfloor_{\mathrm{p}}}\rfloor_{\mathrm{d}}\lfloor w\rfloor_{\mathrm{d}}\\ &=&\lambda^{-1}\lfloor q|_{\lfloor u\rfloor_{\mathrm{p}}\lfloor v\rfloor_{\mathrm{p}}}\rfloor_{\mathrm{d}}w+\lambda^{-1}q|_{\lfloor u\rfloor_{\mathrm{p}}\lfloor v\rfloor_{\mathrm{p}}}\lfloor w\rfloor_{\mathrm{d}}-\lambda^{-1}\lfloor q|_{\lfloor u\rfloor_{\mathrm{p}}\lfloor v\rfloor_{\mathrm{p}}}w\rfloor_{\mathrm{d}}\\ &&+\lfloor q|_{\lfloor u\lfloor v\rfloor_{\mathrm{p}}\rfloor_{\mathrm{p}}}\rfloor_{\mathrm{d}}\lfloor w\rfloor_{\mathrm{d}}+\lfloor q|_{\lfloor\lfloor u\rfloor_{\mathrm{p}}v\rfloor_{\mathrm{p}}}\rfloor_{\mathrm{d}}\lfloor w\rfloor_{\mathrm{d}}+\lambda\lfloor q|_{\lfloor uv\rfloor_{\mathrm{p}}}\rfloor_{\mathrm{d}}\lfloor w\rfloor_{\mathrm{d}}\\ &\equiv&\lambda^{-1}\lfloor q|_{\lfloor u\lfloor v\rfloor_{\mathrm{p}}\rfloor_{\mathrm{p}}}\rfloor_{\mathrm{d}}w+\lambda^{-1}\lfloor q|_{\lfloor\lfloor u\rfloor_{\mathrm{p}}v\rfloor_{\mathrm{p}}}\rfloor_{\mathrm{d}}w+\lfloor q|_{\lfloor uv\rfloor_{\mathrm{p}}}\rfloor_{\mathrm{d}}w\\ &&+\lambda^{-1}q|_{\lfloor u\lfloor v\rfloor_{\mathrm{p}}\rfloor_{\mathrm{p}}}\lfloor w\rfloor_{\mathrm{d}}+\lambda^{-1}q|_{\lfloor\lfloor u\rfloor_{\mathrm{p}}v\rfloor_{\mathrm{p}}}\lfloor w\rfloor_{\mathrm{d}}+q|_{\lfloor uv\rfloor_{\mathrm{p}}}\lfloor w\rfloor_{\mathrm{d}}\\ &&-\lambda^{-1}\lfloor q|_{\lfloor u\lfloor v\rfloor_{\mathrm{p}}\rfloor_{\mathrm{p}}}w\rfloor_{\mathrm{d}}-\lambda^{-1}\lfloor q|_{\lfloor\lfloor u\rfloor_{\mathrm{p}}v\rfloor_{\mathrm{p}}}w\rfloor_{\mathrm{d}}-\lfloor q|_{\lfloor uv\rfloor_{\mathrm{p}}}w\rfloor_{\mathrm{d}}\\ &&-\lambda^{-1}\lfloor q|_{\lfloor u\lfloor v\rfloor_{\mathrm{p}}\rfloor_{\mathrm{p}}}\rfloor_{\mathrm{d}}w-\lambda^{-1}q|_{\lfloor u\lfloor v\rfloor_{\mathrm{p}}\rfloor_{\mathrm{p}}}\lfloor w\rfloor_{\mathrm{d}}+\lambda^{-1}\lfloor q|_{\lfloor u\lfloor v\rfloor_{\mathrm{p}}\rfloor_{\mathrm{p}}}w\rfloor_{\mathrm{d}}\\ &&-\lambda^{-1}\lfloor q|_{\lfloor\lfloor u\rfloor_{\mathrm{p}}v\rfloor_{\mathrm{p}}}\rfloor_{\mathrm{d}}w-\lambda^{-1}q|_{\lfloor\lfloor u\rfloor_{\mathrm{p}}v\rfloor_{\mathrm{p}}}\lfloor w\rfloor_{\mathrm{d}}+\lambda^{-1}\lfloor q|_{\lfloor\lfloor u\rfloor_{\mathrm{p}}v\rfloor_{\mathrm{p}}}w\rfloor_{\mathrm{d}}\\ &&-\lfloor q|_{\lfloor uv\rfloor_{\mathrm{p}}}\rfloor_{\mathrm{d}}w-q|_{\lfloor uv\rfloor_{\mathrm{p}}}\lfloor w\rfloor_{\mathrm{d}}+\lfloor q|_{\lfloor uv\rfloor_{\mathrm{p}}}w\rfloor_{\mathrm{d}}\\ &\equiv&0\qquad{\rm mod}\ (S_{\mathrm{drb}},\omega).\end{array}\]
**Case: \(2\wedge 3\)**.
(**The case of intersection compositions**). Suppose that \(\omega=\bar{f}_{2}\mu=\nu\bar{f}_{3}\) with \(\max\{\,|\bar{f}_{2}|,\,|\bar{f}_{3}|\,\}<|\omega|<|\bar{f}_{2}|+|\bar{f}_{3}|\). Then \(|\omega|=3\), which forces \(\lfloor u_{2}\rfloor_{\mathrm{d}}=\lfloor u_{3}\rfloor_{\mathrm{p}}\), a contradiction. Thus there are no intersection compositions of \(f_{2}\) and \(f_{3}\).
(**The case of including compositions**). Rewrite \(f_{2}=\phi_{2}(z,w)\) and \(f_{3}=\phi_{3}(u,v)\). Suppose that \(\omega=\bar{f}_{2}=q^{\prime}|_{\bar{f}_{3}}\). This gives \(\lfloor z\rfloor_{\mathrm{d}}^{2}=q^{\prime}|_{\lfloor u\rfloor_{\mathrm{p}}\lfloor v\rfloor_{\mathrm{p}}}\). Thus \(q^{\prime}=\lfloor q\rfloor_{\mathrm{d}}^{2}\) and \(z=q|_{\lfloor u\rfloor_{\mathrm{p}}\lfloor v\rfloor_{\mathrm{p}}}\), where \(q\) is in \(\mathfrak{C}^{\star}(Y)\). Hence \(\omega=\lfloor q|_{\lfloor u\rfloor_{\mathrm{p}}\lfloor v\rfloor_{\mathrm{p}}}\rfloor_{\mathrm{d}}^{2}\).
\[\begin{array}{rcl}(f_{2},f_{3})^{q^{\prime}}_{\omega}&=&f_{2}-q^{\prime}|_{f_{3}}\\ &=&\lfloor q|_{\lfloor u\rfloor_{\mathrm{p}}\lfloor v\rfloor_{\mathrm{p}}}\rfloor_{\mathrm{d}}^{2}+\lambda^{-1}\lfloor q|_{\lfloor u\rfloor_{\mathrm{p}}\lfloor v\rfloor_{\mathrm{p}}}\rfloor_{\mathrm{d}}-\lfloor q|_{\lfloor u\rfloor_{\mathrm{p}}\lfloor v\rfloor_{\mathrm{p}}-\lfloor u\lfloor v\rfloor_{\mathrm{p}}\rfloor_{\mathrm{p}}-\lfloor\lfloor u\rfloor_{\mathrm{p}}v\rfloor_{\mathrm{p}}-\lambda\lfloor uv\rfloor_{\mathrm{p}}}\rfloor_{\mathrm{d}}^{2}\\ &=&\lambda^{-1}\lfloor q|_{\lfloor u\rfloor_{\mathrm{p}}\lfloor v\rfloor_{\mathrm{p}}}\rfloor_{\mathrm{d}}+\lfloor q|_{\lfloor u\lfloor v\rfloor_{\mathrm{p}}\rfloor_{\mathrm{p}}}\rfloor_{\mathrm{d}}^{2}+\lfloor q|_{\lfloor\lfloor u\rfloor_{\mathrm{p}}v\rfloor_{\mathrm{p}}}\rfloor_{\mathrm{d}}^{2}+\lambda\lfloor q|_{\lfloor uv\rfloor_{\mathrm{p}}}\rfloor_{\mathrm{d}}^{2}\\ &\equiv&\lambda^{-1}\lfloor q|_{\lfloor u\lfloor v\rfloor_{\mathrm{p}}\rfloor_{\mathrm{p}}}\rfloor_{\mathrm{d}}+\lambda^{-1}\lfloor q|_{\lfloor\lfloor u\rfloor_{\mathrm{p}}v\rfloor_{\mathrm{p}}}\rfloor_{\mathrm{d}}+\lfloor q|_{\lfloor uv\rfloor_{\mathrm{p}}}\rfloor_{\mathrm{d}}\\ &&-\lambda^{-1}\lfloor q|_{\lfloor u\lfloor v\rfloor_{\mathrm{p}}\rfloor_{\mathrm{p}}}\rfloor_{\mathrm{d}}-\lambda^{-1}\lfloor q|_{\lfloor\lfloor u\rfloor_{\mathrm{p}}v\rfloor_{\mathrm{p}}}\rfloor_{\mathrm{d}}-\lfloor q|_{\lfloor uv\rfloor_{\mathrm{p}}}\rfloor_{\mathrm{d}}\\ &\equiv&0\qquad{\rm mod}\ (S_{\mathrm{drb}},\omega).\end{array}\]
(\(b\)) By Proposition 3.4\((c)\), \({\bf k}\mathfrak{C}(Y)/{\rm Id}(S_{\rm drb})\) is the free commutative quasi-idempotent differential Rota-Baxter algebra on \(Y\). Then by Item \((a)\) and Theorem 3.7\((b)\), \({\rm Irr}(S_{\rm drb})\) is a linear basis of \({\bf k}\mathfrak{C}(Y)/{\rm Id}(S_{\rm drb})\).
**Acknowledgements**: This work was supported by the National Natural Science Foundation of China (Grant Nos. 11601199 and 11961031) and Jiangxi Provincial Natural Science Foundation (Grant No. 20224BAB201003).
|
2308.10411 | In-Rack Test Tube Pose Estimation Using RGB-D Data | Accurate robotic manipulation of test tubes in biology and medical industries
is becoming increasingly important to address workforce shortages and improve
worker safety. The detection and localization of test tubes are essential for
the robots to successfully manipulate test tubes. In this paper, we present a
framework to detect and estimate poses for the in-rack test tubes using color
and depth data. The methodology involves the utilization of a YOLO object
detector to effectively classify and localize both the test tubes and the tube
racks within the provided image data. Subsequently, the pose of the tube rack
is estimated through point cloud registration techniques. During the process of
estimating the poses of the test tubes, we capitalize on constraints derived
from the arrangement of rack slots. By employing an optimization-based
algorithm, we effectively evaluate and refine the pose of the test tubes. This
strategic approach ensures the robustness of pose estimation, even when
confronted with noisy and incomplete point cloud data. | Hao Chen, Weiwei Wan, Masaki Matsushita, Takeyuki Kotaka, Kensuke Harada | 2023-08-21T01:35:06Z | http://arxiv.org/abs/2308.10411v1 | # In-Rack Test Tube Pose Estimation Using RGB-D Data
###### Abstract
Accurate robotic manipulation of test tubes in biology and medical industries is becoming increasingly important to address workforce shortages and improve worker safety. The detection and localization of test tubes are essential for the robots to successfully manipulate test tubes. In this paper, we present a framework to detect and estimate poses for the in-rack test tubes using color and depth data. The methodology involves the utilization of a YOLO object detector to effectively classify and localize both the test tubes and the tube racks within the provided image data. Subsequently, the pose of the tube rack is estimated through point cloud registration techniques. During the process of estimating the poses of the test tubes, we capitalize on constraints derived from the arrangement of rack slots. By employing an optimization-based algorithm, we effectively evaluate and refine the pose of the test tubes. This strategic approach ensures the robustness of pose estimation, even when confronted with noisy and incomplete point cloud data.
## I Introduction
Object detection and pose estimation play a vital role in various applications, like autonomous driving, industrial automation, and augmented reality. They function as the eyes of intelligent systems, allowing them to identify objects and determine their position and orientation. This paper specifically focuses on detecting and estimating poses of test tubes within a rack, which is a key task for automation in biology and medicine.
Detecting in-rack test tubes has unique challenges. The tubes in racks are often tightly spaced, resulting in partial occlusions from neighboring tubes that hinder detection and pose estimation. Furthermore, transparent or semi-transparent test tubes create complexity through refractive effects that degrade depth sensor data quality. Together, these intricacies have obstructed precise and robust test tube detection and localization.
Much research has been conducted on object detection and pose estimation. Traditional methods first detect and estimate coarse poses for target objects using feature or template matching [1, 2]. Then, the coarse pose is refined using iterative closest point (ICP) [3]. These methods often require exhaustively searching the input data to match templates, which is inefficient and prone to failure with background clutter and sensor noise. Recent advances in machine learning use neural networks to extract features and can work well in occluded and cluttered environments. Some end-to-end pose estimation networks like YOLO6d [4] can estimate poses of objects in cluttered environments in real time. However, learning-based methods require large amounts of training data to achieve good performance, which can be time-consuming and requires considerable manual work to label data.
In this paper, we developed a framework to detect and estimate poses of test tubes. The framework follows a two-stage pipeline: a YOLO object detector first classifies and localizes test tubes and tube racks in the color image. Then the pose of the tube rack is estimated using traditional point cloud registration. After that, an optimization-based feature fitting method leverages the extracted point clouds of the test tubes and the pose of the test tube rack to estimate the poses of the test tubes. The method is efficient and precise. Even when the point clouds of transparent or semi-transparent test tubes are corrupted, the proposed method can still estimate their poses.
The contributions of this paper are two-fold. (1) We develop a framework to achieve effective in-rack test tube detection that can be utilized in various test tube manipulation tasks. (2) We propose an optimization-based feature fitting method that utilizes the pose of the test tube rack as a prior to estimate poses for test tubes, which enables accurate pose estimation for test tubes even in the presence of noisy and incomplete point cloud data.
The remaining part of this paper is structured as follows: Section II reviews related work. Section III outlines the workflow of the proposed method. Section IV-B delivers
Fig. 1: The workflow of the system involves taking 2D images and point clouds as input. The outputs are the estimated poses of the test tubes and racks. A key innovation of our system is that we incorporate the correlation between each test tube and its corresponding rack slot as a prior constraint for test tube pose estimation. This enables robust and precise detection of test tubes even when the point cloud data of test tubes is noisy or incomplete.
technical details. Section V shows experiments and analysis. Section VI draws conclusions.
## II Related Work
Most recent research treats object detection and pose estimation as two sequential tasks. Object detectors first classify and localize objects in an image, then pose estimators determine precise spatial locations and orientations for the detected objects. In this section, we review research on object detection and pose estimation.
### _Object Detection_
Early object detection methods relied on handcrafted features [5][6] along with SVM classifiers [7]. However, traditional methods lacked effective image representations, which limited their accuracy. The advent of convolutional neural networks (CNNs) changed this landscape dramatically [8]. R-CNN [9] pioneered region proposals for localization coupled with CNNs for classification. Later one-stage detectors like YOLO [10] eliminated region proposals, using a single network for bounding box prediction and classification. Transformers like DETR have also emerged, replacing hand-designed anchors and non-maximum suppression (NMS) with attention mechanisms. Modern object detectors [11][12] can achieve high accuracy while maintaining real-time performance.
In this paper, we employ YOLOv5 [13] to classify and localize the tube rack and various test tubes in 2D images. YOLOv5 excels in both speed and accuracy for object detection tasks. While learning-based object detectors require extensive training data to perform well, acquiring such data is a laborious and time-consuming process. In our previous work, we introduced an intuitive robotic data collection system that efficiently gathers training data for the object detector without requiring human intervention [14].
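As a rough illustration of this detection stage, the sketch below loads a custom-trained YOLOv5 model through PyTorch Hub and separates the tube and rack detections. The weight file name and class names are hypothetical placeholders, not the ones used in this work.

```python
import torch

# Load a custom-trained YOLOv5 detector (hypothetical weight file "tube_rack.pt").
model = torch.hub.load("ultralytics/yolov5", "custom", path="tube_rack.pt")

results = model("scene.png")           # RGB image aligned with the depth data
detections = results.pandas().xyxy[0]  # xmin, ymin, xmax, ymax, confidence, class, name

# Separate rack and tube detections for the later pose-estimation stages
# ("rack" and "tube*" are assumed class names).
rack_boxes = detections[detections["name"] == "rack"]
tube_boxes = detections[detections["name"].str.startswith("tube")]
print(tube_boxes[["xmin", "ymin", "xmax", "ymax", "confidence", "name"]])
```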
### _Pose Estimation_
Pose estimation methods can be broadly categorized into two approaches: feature-based methods and template-based methods.
In feature-based methods, early research focused on estimating poses by establishing correspondences between the 2D input image and the 3D object model through the matching of 2D handcrafted features [1, 15]. However, these 2D feature matching methods tend to struggle when dealing with texture-less or symmetric objects. With the advent of consumer-grade depth sensors like Kinect and Realsense, feature-based methods such as [16, 17] shifted towards establishing correspondences between point clouds and 3D models using 3D handcrafted features. Nonetheless, similar to methods based on 2D features, those relying on 3D features also exhibit performance limitations when dealing with objects exhibiting symmetrical properties.
In template-based methods, a prototype image representation--known as a template--is compared to various locations within the input image. At each location, a similarity score is computed, and the best match is determined by comparing these similarity scores. A popular example of this approach is LINEMOD [2], where each template is represented by a map of color gradient orientations and surface normal orientations. However, it's worth noting that template matching methods are prone to issues of occlusion between objects, as the similarity score of the template is likely to be low if the object is occluded.
Recently, the integration of deep learning has significantly enhanced the performance of 6D pose estimation algorithms. PoseCNN [18], a deep learning architecture, garnered attention for its proficiency in predicting 3D object poses from single RGB images. Subsequently, methods like YOLO6d [4] have enabled real-time 6D pose estimation. Nonetheless, it's important to highlight that learning-based methods demand substantial training data to achieve satisfactory performance. This requirement can be time-consuming and often necessitates substantial manual effort for data labeling.
In this study, we propose a feature-based method for in-rack test tube pose estimation. The method evaluates the test tube poses from point clouds based on cylinder-shaped feature fitting, and it novelly incorporates the correlation between each tube and its corresponding rack slot as a prior constraint. Even when the point cloud of a test tube is noisy or incomplete, our proposed method can still perform well. The experimental results clearly demonstrate the advantages of this method for detecting in-rack test tubes, as compared to prior work.
## III Systematic Workflow
Fig. 1 illustrates the comprehensive workflow of the system. The system's inputs consist of two elements: a 2D image and point clouds. These data are sourced from a vision sensor. The resulting output comprises estimated poses for both the test tubes and the test tube rack. The proposed vision system operates under three key assumptions: 1) The test tube rack is assumed to be opaque. 2) The test tubes are assumed to have their bottom centers approximately aligned with the central rack slot. This assumption stems from the fact that most tube racks incorporate a specific indentation at the bottom of each slot to facilitate test tube alignment. 3) The input gray images and point clouds have been previously calibrated by the manufacturer, ensuring that each pixel in the gray image corresponds to a point in the point cloud. If the image sensor and depth sensor lack calibration, the method outlined in [19] can be employed for calibration.
The system's initial step involves employing YOLO to detect both the test tubes and the test tube rack within the 2D image. YOLO is highly effective at identifying bounding boxes for these objects, subsequently enabling extraction of their corresponding point clouds. In our previous work [14], a data collection system was developed, combining robotic in-hand observation and data synthesis to automatically generate training data for the YOLO object detector. The object detector trained with the dataset prepared by our method can achieve an impressive success rate of 98.8% for the in-rack test tube detection task. Subsequent pre-processing steps, including outlier removal and the removal of any additional
areas detected by YOLO, are applied to the extracted point clouds. Following this, the system estimates the pose of the tube rack by fitting a pre-collected point cloud template of the rack to the version obtained from the vision sensor. This fitting process involves utilizing the Oriented Bounding Box (OBB) and the Iterative Closest Point (ICP) algorithm. An initial transformation for the tube rack is determined by analyzing its OBB, and the ICP algorithm further refines the transformation to achieve higher accuracy.
Finally, the system evaluates the poses of the test tubes through an optimization-based approach that takes into account their point clouds and the pose of the tube rack. The method begins by projecting the point clouds of the test tubes onto the upper surface of the test tube rack, thereby identifying the corresponding slots for the tubes. Subsequently, the method determines the pose of each test tube by optimizing a rotation matrix. This matrix aligns the maximum number of points with the surface of a cylinder, which has a radius equivalent to that of the test tube. The optimization-based pose estimation method employs the hole center's position and the test tube's radius to calculate the test tube's pose. Further details regarding this method are elaborated upon in the subsequent section.
## IV Optimization-based Pose Estimation Considering Rack Constraints
In this section, the details of the pose estimation are discussed. We use the example shown in Fig. 2 to demonstrate the proposed method.
### _Tube Rack Pose Estimation_
In this subsection, we present our method for estimating the pose of the test tube rack using point cloud template matching. As shown in Fig. 3 (c), we initially create a template using the top surface point cloud of the test tube rack. This template is chosen due to its dense point distribution and minimal occlusions.
The process of estimating the test tube rack pose involves two key steps. The initial phase calculates an approximate transformation between the template point cloud and the point cloud acquired from the vision sensor. To achieve this, we employ the Oriented Bounding Box (OBB) algorithm, a well-established technique that identifies the box with the least area containing all the given points. For reference, Fig. 3 (a) and (b) provide an instance of the OBB applied to a point cloud cluster. The pose of the OBB serves as an initial estimate for the template point cloud's position.
Subsequently, we enhance the transformation using the ICP algorithm. ICP is employed to iteratively minimize the reprojection error associated with fixed correspondences between the template point cloud and the captured point cloud until convergence is reached. This process ensures an accurate estimation of the test tube rack's pose. To visualize the results, Fig. 4 demonstrates the rough estimate derived from OBB and the refined estimate achieved through ICP. Notably, the yellow point cloud represents data from the vision sensor, while the blue point cloud depicts the template. Once the pose of the test tube rack is determined, we can subsequently deduce the poses of the individual test tubes from this information.
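A minimal Open3D sketch of this two-step registration is given below; it assumes the rack template and the extracted rack cloud are already available as point clouds, and the ICP threshold is an illustrative value rather than the one used in the experiments.

```python
import numpy as np
import open3d as o3d

def estimate_rack_pose(template_pcd, scene_pcd, icp_threshold=0.005):
    """Sketch of Sec. IV-A: OBB-based initial guess followed by ICP refinement."""
    # Step 1: rough transformation from the oriented bounding boxes of both clouds.
    obb_t = template_pcd.get_oriented_bounding_box()
    obb_s = scene_pcd.get_oriented_bounding_box()
    init = np.eye(4)
    init[:3, :3] = obb_s.R @ obb_t.R.T                     # align box orientations
    init[:3, 3] = obb_s.center - init[:3, :3] @ obb_t.center
    # Step 2: refine with point-to-point ICP.
    result = o3d.pipelines.registration.registration_icp(
        template_pcd, scene_pcd, icp_threshold, init,
        o3d.pipelines.registration.TransformationEstimationPointToPoint())
    return result.transformation
```

Note that the correspondence between the box axes of the two OBBs is ambiguous up to 180-degree flips, so in practice several initial guesses may need to be tried and the one with the best ICP fitness kept.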
### _Optimization-Based Tube Pose Estimation_
In this subsection, we present the method to estimate the poses of test tubes. The method first finds the containing hole for each test tube by projecting the convex hull of its point cloud onto the top surface of the test tube rack. Since the pose of the test tube rack is already known, the position of each hole on the rack can be deduced from it. Fig. 7(b) shows the projection result for the example. The gray rectangles are the holes. The blue convex shapes are the projections of the convex hulls of the test tube point clouds. A test tube is assigned to the hole containing the largest portion of the area of its convex shape.
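A small sketch of this slot-assignment step is shown below, using Shapely to intersect the projected convex hull with each slot rectangle. The slot size, the rack-frame representation of the points, and the function names are illustrative assumptions.

```python
import numpy as np
from shapely.geometry import MultiPoint, box

def assign_slot(tube_points_rack_frame, slot_centers_xy, slot_size=(0.016, 0.016)):
    """Assign a tube to the slot whose rectangle overlaps most of the projected hull.

    tube_points_rack_frame: (N, 3) tube points expressed in the rack frame;
    slot_centers_xy: list of (x, y) slot centers on the rack top surface;
    slot_size: placeholder slot dimensions in meters.
    """
    # Project onto the rack top surface (drop z) and take the 2D convex hull.
    hull = MultiPoint([tuple(p) for p in tube_points_rack_frame[:, :2]]).convex_hull
    best_idx, best_area = None, 0.0
    for idx, (cx, cy) in enumerate(slot_centers_xy):
        rect = box(cx - slot_size[0] / 2, cy - slot_size[1] / 2,
                   cx + slot_size[0] / 2, cy + slot_size[1] / 2)
        overlap = hull.intersection(rect).area
        if overlap > best_area:
            best_idx, best_area = idx, overlap
    return best_idx
```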
Suppose we have \(N\) test tubes on the rack. The point cloud of the \(i\)th test tube is denoted as \(\mathbf{P}^{i}=\{\mathbf{p}_{1}^{i},\ldots,\mathbf{p}_{j}^{i},\ldots,\mathbf{p}_{M_{i}}^{i}\}\), where \(\mathbf{p}_{j}^{i}\in\mathbb{R}^{3}\) and \(M_{i}\) is the number of points in the \(i\)th point cloud. The radius of the \(i\)th test tube is \(r_{i}\). Suppose the dimensions of the holes of the test tube rack are identical, and denote their length, width, and height by \(2L\), \(2W\), and \(H\), respectively.
As shown in Fig. 5, the local coordinate frame of the test tube is attached to the center of its bottom. To estimate the pose of the \(i\)th test tube, we use two parameters \(\alpha_{i}\), \(\beta_{i}\) to represent the orientation, where \(\alpha_{i}\), \(\beta_{i}\) are the Euler angles about the \(x\) axis and \(y\) axis of the \(i\)th test tube coordinate frame. Then the orientation of
Fig. 4: The left image shows roughly estimated transformation by utilizing the OBB. The right image shows the pose refine by the ICP. The yellow point cloud is obtained from the vision sensor. The blue point cloud is the template.
Fig. 3: (a) The extracted tube rack point cloud. (b) The OBB for the extracted tube rack point cloud. (c) The template tube rack point cloud used for point cloud registration.
Fig. 2: The image on the left is an example of semi-transparent test tube detection. The right image shows the point cloud of the example.
the \(i\)th test tube can be defined as:
\[\mathbf{R}(\alpha_{i},\beta_{i}) =\begin{bmatrix}cos\beta_{i}&0&sin\beta_{i}\\ 0&1&0\\ -sin\beta_{i}&0&cos\beta_{i}\end{bmatrix}\begin{bmatrix}1&0&0\\ 0&cos\alpha_{i}&-sin\alpha_{i}\\ 0&sin\alpha_{i}&cos\alpha_{i}\end{bmatrix} \tag{1}\] \[=\begin{bmatrix}cos\beta_{i}&sin\beta_{i}sin\alpha_{i}&sin\beta_ {i}cos\alpha_{i}\\ 0&cos\alpha_{i}&-sin\alpha_{i}\\ -sin\beta_{i}&\cos\beta_{i}sin\alpha_{i}&cos\beta_{i}cos\alpha_{i}\end{bmatrix}\]
The distance from \(\mathbf{p}_{j}\) to the line that starts from the origin point and points in the \(z\) direction of the \(i\)th test tube coordinate frame is:
\[D(\alpha_{i},\beta_{i},\mathbf{p}_{j},\mathbf{o}_{i})=||(\mathbf{o}_{i}- \mathbf{p}_{j})\times\begin{bmatrix}sin\beta_{i}cos\alpha_{i}\\ -sin\alpha_{i}\\ cos\beta_{i}cos\alpha_{i}\end{bmatrix}|| \tag{2}\]
where \(\mathbf{o}_{i}\) is the origin point of the \(i\)th test tube coordinate frame in the world coordinate frame, which can be deduced from the pose of the test tube rack. We use a cylinder with the same radius as the test tube to represent a simplified geometric feature of the test tube. The pose of the cylinder is considered to be the same as that of the test tube. The orientation of the cylinder is controlled by the parameters \(\alpha\) and \(\beta\). Although some parts of the point cloud of a transparent test tube are missing, the remaining part of the point cloud still has enough features for the cylinder to fit. Fig. 6 shows an example of using the cylinder to fit the point cloud. We suppose that the cylinder is in the correct pose when the point cloud is located near the surface of the cylinder. By rotating the cylinder, we can determine the optimal pose for which the point cloud is well distributed near its surface. The following equation is used to find this pose.
\[\min_{\alpha_{i},\beta_{i}\in(-\pi,\pi]}\ \ \ \sum_{k=1}^{M_{i}}\big{|}D(\alpha_{i},\beta_{i},\mathbf{p}_{k}^{i},\mathbf{o}_{i})-r_{i}\big{|} \tag{3}\]
We define \(\alpha_{i}^{*}\) and \(\beta_{i}^{*}\) as the solution that minimizes the deviation of the extracted point cloud from the surface of the simplified cylinder model. The final pose of the \(i\)th test tube is
\[i\text{th test tube pose}=\begin{bmatrix}\mathbf{R}(\alpha_{i}^{*},\beta_{i}^{* })&\mathbf{o}_{i}\\ \mathbf{0}&1\end{bmatrix} \tag{4}\]
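A minimal sketch of this optimization using SciPy is given below. It evaluates the point-to-axis distance of Eq. (2) and minimizes the absolute deviation from the cylinder surface as in Eq. (3); the initial guess and the choice of the Nelder-Mead solver are our own assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def tube_axis(alpha, beta):
    """z-axis of R(alpha, beta), i.e. the third column of Eq. (1)."""
    return np.array([np.sin(beta) * np.cos(alpha),
                     -np.sin(alpha),
                     np.cos(beta) * np.cos(alpha)])

def fit_tube_orientation(points, origin, radius):
    """Minimize Eq. (3): keep the points close to the surface of a cylinder of the
    tube radius whose axis starts at the slot-bottom origin o_i."""
    def cost(x):
        d = np.linalg.norm(np.cross(origin - points, tube_axis(x[0], x[1])), axis=1)  # Eq. (2)
        return np.abs(d - radius).sum()
    res = minimize(cost, x0=np.zeros(2), method="Nelder-Mead")  # assumed solver/initial guess
    return res.x  # (alpha*, beta*), to be plugged into Eq. (4)
```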
Fig. 7(c) shows a result of the pose estimation for the example. In addition, we need to define a function to reject wrong cases. For wrong cases in which the point clouds of test tubes are seriously corrupted, the simplified cylinder geometry feature no longer exists in the point cloud, so the estimated pose is meaningless. Such meaningless estimates can be effectively identified by examining the geometric constraints between the rack hole and the test tube. These constraints require \(\alpha_{i}\) and \(\beta_{i}\) to be bounded in the following range so that the \(i\)th test tube and the test tube rack have no overlap.
\[tan(\alpha_{i})(W-\frac{r_{i}}{sin\alpha_{i}})>H \tag{5}\] \[tan(\pi-\alpha_{i})(W-\frac{r_{i}}{sin(\pi-\alpha_{i})})>H\] \[tan(\beta_{i})(L-\frac{r_{i}}{sin\beta_{i}})>H\] \[tan(\pi-\beta_{i})(L-\frac{r_{i}}{sin(\pi-\beta_{i})})>H\]
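The rejection test of Eq. (5) can be sketched as follows; the handling of the (nearly) upright case, where the tan/sin terms degenerate, is our own assumption.

```python
import numpy as np

def tilt_within_slot(alpha, beta, r, L, W, H):
    """Check the four bounds of Eq. (5); returns False for meaningless estimates.

    The inequalities are applied as written in Eq. (5); the shortcut for a
    (nearly) upright tube is an assumption made for this sketch.
    """
    def ok(angle, half_extent):
        if np.isclose(np.sin(angle), 0.0):
            return True  # (nearly) upright about this axis
        return np.tan(angle) * (half_extent - r / np.sin(angle)) > H
    return (ok(alpha, W) and ok(np.pi - alpha, W) and
            ok(beta, L) and ok(np.pi - beta, L))
```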
## V Experiments
### _Setup and Evaluation Metrics_
In this section, we assess our proposed pose estimation algorithm for test tubes. Fig. 8(a) depicts the experimental setup employed for evaluating the accuracy of our pose estimation approach. In this configuration, a Photoneo Phoxi M 3D scanner is positioned above. The test tubes and the rack are positioned on a flat table.
We conducted evaluations on three types of test tubes. These test tubes, along with their corresponding point clouds, are illustrated in Fig. 8(b). Among these, the "Tube 1" posed the greatest challenge due to its semi-transparent materials, resulting in corrupted and incomplete point cloud data. In
Fig. 5: The demonstration of the test tube in the rack hole. The red, green, blue arrows represent the x, y, z-axis of the test tube local coordinate. The left image shows an ideal case: the central axis of the transparent test tube and the central axis of the hole containing the test tube are coincident. The right image shows a case that the test tube reaches the boundary of the test tube hole.
Fig. 6: The left image shows an incomplete point cloud of the transparent test tube. The right image shows the fitted cylinder for the point cloud.
Fig. 7: (a) The point cloud of the example viewed from the top view. (b) The projection of the convex hulls of the point clouds onto the top surface of the test tube rack. (c) The result of the proposed pose estimation method.
contrast, the remaining two test tubes have opaque tube caps, resulting in a relatively simpler detection process.
To evaluate the poses of the test tubes, we designed custom 3D-printed caps, as shown in Fig. 9(a). Each cap features a square surface at its top, which can be easily extracted by employing plane segmentation algorithms. After extraction, these square features were used as target point clouds onto which a square point cloud template was fitted via the ICP algorithm. This process yielded accurate transformations that were employed as the ground truth for the orientations of the test tubes. To further determine the translation of the test tubes, we measured the physical distance between the center of the square top surface and the origin point of the test tubes. With the ground truth data in hand, we replaced the 3D-printed caps with the original caps of the test tubes while keeping each test tube's pose unchanged. Subsequently, we performed scans to capture point cloud data for the test tubes with their original caps. This approach enabled a comprehensive comparison of our pose estimation algorithm's performance under realistic conditions. Note that while a highly accurate sensor is not strictly required for our pose estimation algorithm, for the purpose of our evaluations we opted for a high-precision sensor in order to acquire point clouds with minimal distortion. This choice ensured a precise ground truth for the poses of the test tubes.
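A minimal sketch of this ground-truth pipeline using the Open3D library is given below; the thresholds, the identity initial alignment, and the way the square template point cloud is supplied are assumptions made for illustration rather than our exact settings.

```python
import numpy as np
import open3d as o3d

def ground_truth_from_printed_cap(scan_pcd, square_template_pcd, icp_threshold=0.002):
    """Estimate a ground-truth cap pose: segment the flat square top of the
    3D-printed cap, then register a square template onto it with ICP."""
    # RANSAC plane segmentation extracts the flat square top surface of the cap.
    _, inliers = scan_pcd.segment_plane(distance_threshold=0.001,
                                        ransac_n=3,
                                        num_iterations=1000)
    top_surface = scan_pcd.select_by_index(inliers)

    # Fit the square template to the extracted surface with point-to-point ICP.
    init = np.eye(4)  # a rough initial alignment is assumed to be available
    reg = o3d.pipelines.registration.registration_icp(
        square_template_pcd, top_surface, icp_threshold, init,
        o3d.pipelines.registration.TransformationEstimationPointToPoint())
    return reg.transformation  # 4x4 transform used as the ground-truth orientation
```

The translation of the ground truth is then recovered by offsetting the center of the fitted square toward the tube origin by the measured physical distance.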
Regarding the evaluation metrics, we assessed the average rotational and translational errors along each axis. To compute the rotational error, we converted the estimated and ground-truth orientations (quaternions) to Euler angles and measured the angular difference about each rotation axis. Notably, since we defined the rotational symmetry axis of the test tube as the Z-axis, our assessment focused solely on the rotational errors about the X and Y axes. The translational error was determined as the difference between the estimated and ground-truth translations along each axis.
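A possible implementation of these metrics is sketched below; the use of `scipy.spatial.transform.Rotation` and the XYZ Euler convention are assumptions of this example.

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def pose_errors(T_est, T_gt):
    """Per-axis rotational errors (degrees, X and Y only, since Z is the
    symmetry axis) and per-axis translational errors between two 4x4 poses."""
    rel = np.linalg.inv(T_gt) @ T_est                     # relative transform
    euler = R.from_matrix(rel[:3, :3]).as_euler("xyz", degrees=True)
    rot_err_xy = np.abs(euler[:2])                        # ignore rotation about Z
    trans_err = np.abs(T_est[:3, 3] - T_gt[:3, 3])
    return rot_err_xy, trans_err
```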
### _Comparison with ICP_
The results of the pose estimation for the test tubes are presented in Table I. To facilitate a comprehensive evaluation, we have chosen the popular pose estimation technique of point cloud registration for comparison. This technique utilizes the global registration + ICP algorithm from the Open3D library, which efficiently establishes a transformation between the template point cloud and the target point cloud. The template point cloud is generated from the CAD model of the respective test tube.
When considering the pose estimation results for "Tube 1," it is evident that our proposed method significantly outperforms the global registration + ICP approach. This notable improvement can be attributed to the substantial dissimilarity between the realistic point cloud of the semi-transparent "Tube 1" and the point cloud template derived from the CAD model. This disparity often leads to the failure of global registration and, consequently, incorrect outcomes. In contrast, for "Tube 2" and "Tube 3," the presence of opaque caps in their point clouds allows for easy identification through the cap features. Moreover, when evaluating the rotational error, incorporating constraints derived from the rack slot reduces it further; our consideration of rack-based constraints helps mitigate the impact of noise.
However, when examining the translational error associated with "Tube 3," our method demonstrates a relatively larger discrepancy in translational accuracy compared to the ICP method. This variance can be attributed to our reliance on the assumption that the bottom centers of the test tubes are roughly aligned with the central rack slot. In future iterations, we plan to refine our algorithm by removing this constraint, thereby enhancing its versatility and applicability across various scenarios.
Fig. 8: (a) The system configuration. (b) The image of test tubes and rack. (c) The point cloud of test tubes and rack obtained with the Phoxi 3D scanner above.
Fig. 9: (a) The custom 3D-printed caps. (b.1) The point cloud of a 3D-printed cap. (b.2) The point clouds in red are extracted using the plane segmentation algorithm from the Open3D library [20]. (c) A ground truth pose generated by point cloud registration. The red, green and blue arrows represent the X, Y, Z axes, respectively.
### _Time Costs_
In this subsection, we evaluate the time costs of our proposed method, including the time for object detection on the 2D image and the subsequent pose estimation on the 3D point cloud. The specifications of our computing setup are as follows: an Intel i9-13900K CPU and an Nvidia RTX 4090 GPU. Note that the speed of the object detection stage primarily hinges on the GPU device, while the pose estimation stage is influenced by the CPU device. The results are shown in Table II. It includes the time taken for object detection per image, the rack pose estimation time per rack, the tube pose estimation time per tube using our method, and the tube pose estimation time per tube using point cloud registration, which is included for comparison.
The results illustrate that the most time-consuming part of our framework is the rack pose estimation component, which necessitates performing the ICP algorithm on an extensive point cloud to ensure the precision of pose estimation. Another significant observation is the notable speed enhancement achieved by our proposed tube pose estimation method, showcasing an approximately threefold increase in speed compared to the traditional point cloud registration approach. This distinction becomes particularly significant when dealing with a substantial number of test tubes placed on the rack, a scenario frequently encountered in real-world conditions.
## VI Conclusions
In this paper, we developed a framework for the detection and pose estimation of in-rack test tubes. Through a two-stage process, we achieve good accuracy and efficiency in detecting both the rack structure and the test tubes. The experimental results showed that our proposed method exhibits substantial improvements in both accuracy and computational efficiency compared with the traditional point cloud registration method. These results underscore the significant potential of our approach in advancing the field of in-rack test tube detection.
One potential limitation of the framework is that inaccuracies in rack pose estimation could consequently impact the accuracy of tube pose estimation. For future work, the framework could be refined by focusing solely on analyzing the rack slots from the point cloud data and using the estimated rack slot information as priors for tube pose estimation. This approach would eliminate the need for accurate rack pose estimation.
|
2305.01286 | Cartan calculus in string topology | In this manuscript, we investigate a Cartan calculus on the homology of free
loop spaces which is introduced by Kuribayashi, Wakatsuki, Yamaguchi and the
author. In particular, it is proved that the Cartan calculus can be described
by the loop product and bracket in string topology. Moreover, by using the
descriptions, we show that the loop product behaves well with respect to the
Hodge decomposition of the homology of free loop spaces. | Takahito Naito | 2023-05-02T09:38:33Z | http://arxiv.org/abs/2305.01286v1 | # Cartan calculus in string topology
###### Abstract.
In this manuscript, we investigate a Cartan calculus on the homology of free loop spaces which is introduced by Kuribayashi, Wakatsuki, Yamaguchi and the author. In particular, it is proved that the Cartan calculus can be described by the loop product and bracket in string topology. Moreover, by using the descriptions, we show that the loop product behaves well with respect to the Hodge decomposition of the homology of free loop spaces.
Key words and phrases:Cartan calculus, String topology, Free loop space, Rational homotopy theory 2010 Mathematics Subject Classification: Primary 55P50; Secondary 55P62
## 1. Introduction and Results
Throughout this manuscript, we assume that \(M\) is a closed oriented smooth manifold of dimension \(m\) and that the coefficients of singular (co)homology are taken in a field \(\mathbb{K}\) with \(\operatorname{char}\mathbb{K}=0\). Let \(LM=\operatorname{Map}(S^{1},M)\) be the free loop space of \(M\) and \(\operatorname{aut}_{1}(M)\) the connected component of the mapping space \(\operatorname{Map}(M,M)\) containing the identity map of \(M\). Here, we always identify \(S^{1}\) with \(\mathbb{R}/\mathbb{Z}\).
The classical Cartan calculus for differential geometry consists of three types of derivations on \(\Omega^{*}(M)\) the de Rham complex of \(M\): the Lie derivative \(L_{X}\), the contraction (interior product) \(i_{X}\) with a vector field \(X\) on \(M\) and the exterior derivative \(d\). The Lie derivative and the contraction induce actions of the space of vector fields on the de Rham complex. Moreover, these derivations satisfy Cartan (magic) formula \(L_{X}=[d,i_{X}]\) for any vector field \(X\), where \([\,\ ]\) denotes the commutator bracket.
This structure is formulated by Fiorenza and Kowalzig in [8] as a homotopy Cartan calculus. In [10], Kuribayashi, Wakatsuki, Yamaguchi and the author investigated homotopy Cartan calculi relating to the free loop spaces. We gave a structure of homotopy Cartan calculi on the Hochschild chain complex of \(\Omega^{*}(M)\). Moreover, as a geometric description of the structure, we constructed operators \(L\), \(e\) from \(\pi_{*}(\operatorname{aut}_{1}(M))\otimes\mathbb{K}\) to \(\operatorname{End}(H^{*}(LM))\). In this manuscript, we focus on a homologically defined version of the description
\[L,e:\pi_{*}(\operatorname{aut}_{1}(M))\otimes\mathbb{K}\longrightarrow \operatorname{End}(H_{*}(LM)); \tag{1.1}\]
see Section 4 for more details.
On the other hand, the homology of \(LM\) has rich algebraic structures coming from string topology, initiated by Chas and Sullivan. In [4], they defined a Batalin-Vilkovisky algebra structure on the shifted homology \(\mathbb{H}_{*}(LM):=H_{*+m}(LM)\) with respect to a multiplication \(\bullet\) called the loop product and the Batalin-Vilkovisky (BV) operator \(\Delta\) which is given by the rotation of loops. In particular, \(\mathbb{H}_{*}(LM)\) is a Gerstenhaber algebra with the loop bracket \(\{\,\ \}\); see Section 2 for details about the algebraic structures.
The aim of this manuscript is to investigate a relation between the loop product (bracket) and the operations (1.1). In particular, we show that the operations (1.1) can be described by using the loop product and bracket. In order to describe the
result, we recall the morphism \(\Gamma_{1}:\pi_{*}(\Omega\operatorname{aut}_{1}(M))\otimes\mathbb{K}\to\mathbb{H}_{*} (LM)\) due to Felix and Thomas [6]; see also Section 5 for the definition. They proved that \(\Gamma_{1}\) is injective when \(M\) is simply-connected. By using the morphism \(\Gamma_{1}\), we prove the following theorem, which is the main result of this manuscript. Here, the notations \(L_{f}\) and \(e_{f}\) mean the values of \(L\) and \(e\) at \(f\in\pi_{*}(\operatorname{aut}_{1}(M))\), respectively.
**Theorem 1.1**.: _Let \(h\in\pi_{n}(\Omega\operatorname{aut}_{1}(M))\) for \(n\geq 1\) and \(a\in\mathbb{H}_{*}(LM)\). Then, the loop product \(\Gamma_{1}(h)\bullet a\) and the loop bracket \(\{\Gamma_{1}(h),a\}\) satisfy the identities_
1. \(\Gamma_{1}(h)\bullet a=(-1)^{n}e_{\partial(h)}(a)\) _and_
2. \(\{\Gamma_{1}(h),a\}=L_{\partial(h)}(a)-(-1)^{n}\Delta\Gamma_{1}(h)\bullet a\)__
_in \(\mathbb{H}_{*}(LM)\). Here, \(\partial:\pi_{n}(\Omega\operatorname{aut}_{1}(M))\stackrel{{ \cong}}{{\longrightarrow}}\pi_{n+1}(\operatorname{aut}_{1}(M))\) is the adjoint map. Moreover, if \(M\) is simply-connected, then the following identity holds;_
1. \(\{\Gamma_{1}(h),a\}=L_{\partial(h)}(a)\)_._
A proof of Theorem 1.1 is stated in Section 6. The identities in Theorem 1.1 give us some applications with respect to the loop product. The following corollary follows immediately from Theorem 1.1(3).
**Corollary 1.2**.: _Let \(f\in\pi_{*}(\operatorname{aut}_{1}(M))\). If \(M\) is simply-connected, then the operator \(L_{f}:\mathbb{H}_{*}(LM)\to\mathbb{H}_{*}(LM)\) is a derivation with respect to the loop product._
We also discuss the behavior of the loop product with respect to the _Hodge decomposition_ of \(H_{*}(LM)\). When \(M\) is simply-connected, the homology of \(LM\) admits a direct sum decomposition \(H_{*}(LM)\cong\bigoplus_{i}H_{*}^{(i)}(LM)\) and each summand \(H_{*}^{(i)}(LM)\) is given as an eigenspace; see [12]. Felix and Thomas [7] proved that the loop product \(\bullet\) behaves well with respect to the Hodge decomposition in the following sense:
\[\bullet:\mathbb{H}_{*}^{(i)}(LM)\otimes\mathbb{H}_{*}^{(j)}(LM)\longrightarrow \mathbb{H}_{*}^{(\leq i+j)}(LM).\]
An equivariant version of the result is given by Berest, Ramadoss and Zhang [2] when the manifold \(M\) is rationally elliptic. Note that the eigenspace \(H_{*}^{(i)}(LM)\) can be defined even if \(M\) is not simply-connected. We show the following behavior of the loop product of non-simply connected manifolds in the Hodge decomposition.
**Theorem 1.3**.: _Let \(h\in\pi_{n}(\Omega\operatorname{aut}_{1}(M))\) for \(n\geq 1\) and \(a\in\mathbb{H}_{*}^{(i)}(LM)\). Then, the loop product \(\Gamma_{1}(h)\bullet a\) is contained in \(\mathbb{H}_{*}^{(i+1)}(LM)\), that is, the loop product \(\bullet\) induces_
\[\bullet:\operatorname{Im}\Gamma_{1}\otimes\mathbb{H}_{*}^{(i)}(LM)\longrightarrow \mathbb{H}_{*}^{(i+1)}(LM).\]
This manuscript is organized as follows. In Section 2, we recall a homotopy theoretic construction of the loop product and the loop bracket. In Section 3, geometric and algebraic definitions of the Hodge decomposition of \(H_{*}(LM)\) are described. The definition of the operators (1.1) is introduced in Section 4. The morphism \(\Gamma_{1}\) due to Felix and Thomas is stated in Section 5. Moreover, some properties about \(\Gamma_{1}\) with respect to the Hodge decomposition are also observed. Section 6 is devoted to proving Theorem 1.1, Corollary 1.2 and Theorem 1.3. In Section 7, we give some examples of the image of \(\Gamma_{1}\) when \(M\) is a sphere.
## 2. Loop product and loop bracket
In this section, we first introduce a construction of shriek maps in a general setting, in order to recall a homotopy theoretic description of the loop product due to Cohen
and Jones [3]. Consider the pullback diagram of connected spaces
Here \(N_{i}\) is a compact oriented smooth manifold of dimension \(n_{i}\), \(p\) is a fibration and \(i\) is an embedding. Observe that \(j\) is an embedding as topological spaces. Consider the associated disk bundle \(\pi:D(\nu)\to N_{2}\) of the normal bundle of \(i\) and an embedding \(D(\nu)\hookrightarrow N_{1}\). We identify \(D(\nu)\) with the embedding image in \(N_{1}\) and simply write \(\tilde{D}(\nu):=p^{-1}(D(\nu))\) and \(\partial\tilde{D}(\nu):=p^{-1}(\partial D(\nu))\). Since \(p\) is a fibration, the homotopy lifting property shows that there exists a map \(\tilde{\pi}:\tilde{D}(\nu)\to E_{2}\) such that \(q\circ\tilde{\pi}=\pi\circ p|_{\tilde{D}(\nu)}\).
Let \(u\in H^{n_{1}-n_{2}}(D(\nu),\partial D(\nu))\) be the Thom class and denote by \(\tilde{u}\) the pullback of the cohomology class \(p^{*}(u)\) in \(H^{n_{1}-n_{2}}(\tilde{D}(\nu),\partial\tilde{D}(\nu))\). Then the _shriek map_ of \(j\), denoted by \(j_{!}\), is defined as the composite
where \(\cap\tilde{u}\) denotes the cap product with \(\tilde{u}\).
Let \(LM\times_{M}LM\) denote the subspace of the product \(LM\times LM\) consisting of pairs of loops having the same basepoint, that is, there exists the pullback diagram
in which \(j\) is the inclusion, \(\operatorname{ev}_{0}\) is the evaluation map at \(0\) and Diag is the diagonal map. Let \(\operatorname{comp}:LM\times_{M}LM\to LM\) be the concatenation of loops defined by
\[\operatorname{comp}(\gamma_{1},\gamma_{2})(t)=\left\{\begin{array}{ll}\gamma _{1}(2t)&\left(0\leq t\leq\frac{1}{2}\right)\\ \gamma_{2}(2t-1)&\left(\frac{1}{2}\leq t\leq 1\right)\end{array}\right.\]
for \((\gamma_{1},\gamma_{2})\in LM\times_{M}LM\). Then the _loop product_, denoted by \(\operatorname{Lp}\), is defined as the composite
\[\operatorname{Lp}:H_{*}(LM)\otimes H_{*}(LM)\xrightarrow{\ \times\ }H_{*}(LM\times LM)\xrightarrow{\ j_{!}\ }H_{*-m}(LM\times_{M}LM)\xrightarrow{\ \operatorname{comp}_{*}\ }H_{*-m}(LM)\,,\]
where \(\times\) denotes the cross product. The loop product \(\operatorname{Lp}\) induces a multiplication on the shifted homology \(\mathbb{H}_{*}(LM):=H_{*+m}(LM)\) defined by
\[a\bullet b:=(-1)^{m\|a\|}\operatorname{Lp}(a\otimes b)=(-1)^{m(|a|+m)} \operatorname{Lp}(a\otimes b)\]
for \(a\), \(b\in\mathbb{H}_{*}(LM)\), where \(\|a\|\) stands for the degree of \(a\) in \(\mathbb{H}_{*}(LM)\). The definition of \(\bullet\) implies that the grading shift morphism \(s^{m}:H_{*}(LM)\to\mathbb{H}_{*}(LM)\), \(s^{m}(a)=a\) of degree \(m\) fits into the commutative diagram
It is well known that \(\bullet\) is an associative, unital and commutative multiplication, and moreover, the homology class \(c_{*}([M])\in\mathbb{H}_{0}(LM)\) is the unit with respect to \(\bullet\), where \([M]\) is the fundamental class of \(M\) and \(c:M\to LM\) is the map which assigns to an element \(x\in M\) the constant loop at \(x\). In particular, we have
\[\operatorname{Lp}\left(c_{*}([M])\otimes a\right)=(-1)^{m}a \tag{2.1}\]
in the non-shifted homology \(H_{*}(LM)\).
Next, we recall the Batalin-Vilkovisky (BV) operator \(\Delta\) on the homology \(H_{*}(LM)\). Let \(r:S^{1}\times LM\to LM\) be the \(S^{1}\)-action on \(LM\) induced by the rotation of loops. Explicitly, \(r\) is defined by \(r(s,\gamma)(t)=\gamma(t+s)\) for \(s,t\in S^{1}\) and \(\gamma\in LM\). Then, we define \(\Delta\) as the composite
\[\Delta:H_{*}(LM)\xrightarrow{\ [S^{1}]\times(-)\ }H_{*+1}(S^{1}\times LM)\xrightarrow{\ r_{*}\ }H_{*+1}(LM)\,,\]
where \([S^{1}]\) is the fundamental class of \(S^{1}\).
Chas and Sullivan [4] showed that the loop product \(\bullet\) and the BV operator \(\Delta\) turn \(\mathbb{H}_{*}(LM)\) into a BV-algebra; see also [11], [14]. In general, from the result due to Getzler [9], any BV-algebras have a structure of Gerstenhaber algebras. Precisely, the bracket \(\{\,\ \}\) on \(\mathbb{H}_{*}(LM)\) defined by
\[\{a,b\}:=(-1)^{\|a\|}\Delta(a\bullet b)-(-1)^{\|a\|}\Delta(a)\bullet b-a\bullet \Delta(b)\]
is a Lie bracket which satisfies the Poisson identity
\[\{a,b_{1}\bullet b_{2}\}=\{a,b_{1}\}\bullet b_{2}+(-1)^{\|b_{1}\|(\|a\|+1)}b_{ 1}\bullet\{a,b_{2}\}. \tag{2.2}\]
This bracket is called the _loop bracket_.
## 3. Hodge decomposition of the homology of free loop space
In this section, we recall geometric and algebraic definitions for the Hodge decomposition of \(H_{*}(LM)\) and compare them. Let \(k\geq 2\) be an integer and \(p_{k}:S^{1}\to S^{1}\) the \(k\)-fold covering given by \(p_{k}(t)=kt\) for \(t\in S^{1}\). We denote by \(\varphi_{k}:LM\to LM\) the map induced by \(p_{k}\), and by \(H_{*}^{(i)}(LM)=\{a\in H_{*}(LM)\mid\varphi_{k*}(a)=k^{i}a\}\) the eigenspace of \(\varphi_{k*}\) the induced map in homology corresponding to the eigenvalue \(k^{i}\) for an integer \(i\geq 0\). Remark that the definition of \(H_{*}^{(i)}(LM)\) does not depend on the choice of \(k\) since \(\operatorname{char}\mathbb{K}=0\).
**Lemma 3.1**.: _The image of \(H_{*}^{(i)}(LM)\) under \(\Delta\) is contained in \(H_{*}^{(i-1)}(LM)\), namely, \(\Delta\left(H_{*}^{(i)}(LM)\right)\subset H_{*}^{(i-1)}(LM).\)_
Proof.: Let \(r:S^{1}\times LM\to LM\) be the \(S^{1}\)-action stated in Section 2. By definition, it is easy to check that the following diagram is commutative:
(3.1)
Observe that \(p_{k*}[S^{1}]=k[S^{1}]\) in \(H_{1}(S^{1})\). For any \(a\in H_{*}^{(i)}(LM)\), the definition of \(\Delta\) and the commutativity of the diagram (3.1) yield that
\[k\cdot\varphi_{k*}\left(\Delta a\right) =\varphi_{k*}\circ r_{*}\left(p_{k*}[S^{1}]\times a\right)\] \[=r_{*}\circ\left(1\times\varphi_{k}\right)_{*}\left([S^{1}]\times a\right)\] \[=r_{*}([S^{1}]\times k^{i}a)\] \[=k^{i}\cdot\Delta(a),\]
which completes the proof.
_Remark 3.2_.: In [7, Theorem 2], Felix and Thomas proved the same assertion of Lemma 3.1 by algebraic way when \(M\) is simply-connected.
Next we recall an algebraic definition for the Hodge decomposition which is described by using a Sullivan model for \(LM\) due to Vigue-Poirrier and Sullivan [13] in rational homotopy theory. We refer the reader to the book [5] for details about notations and terminology from rational homotopy theory.
From now on, we assume that \(M\) is simply-connected in this section. Let \(\wedge V=(\wedge V,d)\) be a minimal Sullivan model for \(M\) and \(\mathcal{L}=(\wedge V\otimes\wedge\overline{V},D)\) the Sullivan model for \(LM\) described in [5, SS15(c)]. Here, \(\overline{V}^{i}=V^{i+1}\) is the suspension of \(V\). We denote by \(\bar{v}\in\overline{V}\) the element which corresponds to \(v\in V\). Let \(s\) be a derivation of degree \(-1\) on \(\wedge V\otimes\wedge\overline{V}\) defined by \(s(v)=\bar{v}\) and \(s(\bar{v})=0\). The differential \(D\) of \(\mathcal{L}\) is the unique extension of \(d\) which satisfies the condition \(Ds+sd=0\).
From the definition of \(D\), we have a direct sum decomposition \(\mathcal{L}=\bigoplus_{i}\mathcal{L}_{(i)}\) of complexes, where \(\mathcal{L}_{(i)}=(\wedge V\otimes\wedge^{i}\overline{V},D)\). Applying the homology functor to the decomposition yields
\[H^{*}(\mathcal{L})\cong\bigoplus_{i\geq 0}H^{*}(\mathcal{L}_{(i)}). \tag{3.2}\]
We here consider \(H^{*}_{(i)}(LM)=\{\alpha\in H^{*}(LM)\mid\varphi_{k}^{*}(\alpha)=k^{i}\alpha\}\), the cohomological version of \(H^{(i)}_{*}(LM)\). The following proposition asserts that \(H^{*}(\mathcal{L}_{(i)})\) is an algebraic construction for \(H^{*}_{(i)}(LM)\). It is a well-known result; however, we provide a proof for the sake of completeness.
**Proposition 3.3**.: _The homology \(H^{*}(\mathcal{L}_{(i)})\) is isomorphic to \(H^{*}_{(i)}(LM)\)._
Proof.: First consider a morphism \(\mathcal{M}_{\varphi_{k}}:\mathcal{L}\to\mathcal{L}\) defined by \(\mathcal{M}_{\varphi_{k}}(v)=v\) and \(\mathcal{M}_{\varphi_{k}}(\bar{v})=k\bar{v}\) for \(v\in V\). It is easy to check that \(\alpha\in H^{*}(\mathcal{L})\) belongs to the direct summand \(H^{*}(\mathcal{L}_{(i)})\) if and only if \(\mathcal{M}_{\varphi_{k}}(\alpha)=k^{i}\alpha\) holds. Moreover, the result [1, Theorem 3.2] asserts that \(\mathcal{M}_{\varphi_{k}}\) is a Sullivan representative for \(\varphi_{k}\). Therefore,
\[H^{*}_{(i)}(LM)\cong\{\alpha\in H^{*}(\mathcal{L})\mid\mathcal{M}_{\varphi_{k }}^{*}(\alpha)=k^{i}\alpha\}=H^{*}(\mathcal{L}_{(i)}),\]
and the proof is complete.
In the rest of this section, we compare the homological definition \(H^{(i)}_{*}(LM)\) with the cohomological definition \(H^{*}_{(i)}(LM)\). Let \(\langle\,\ \rangle:H^{n}(LM)\otimes H_{n}(LM)\to\mathbb{K}\) be the Kronecker pairing. Since the characteristic of \(\mathbb{K}\) is zero, it is a non-degenerate pairing by the universal coefficient theorem; see [5, Proposition 5.3] for example. Let us consider a pairing
\[\langle\,\ \rangle_{ij}:H^{n}_{(i)}(LM)\otimes H^{(j)}_{n}(LM)\longrightarrow \mathbb{K}\]
induced by \(\langle\,\ \rangle\). Then we have the following.
**Lemma 3.4**.: _The pairing \(\langle\,\ \rangle_{ij}\) is non-degenerate if and only if \(i=j\), that is, \(H^{n}_{(i)}(LM)\cong\operatorname{Hom}(H^{(i)}_{n}(LM),\mathbb{K})\)._
Proof.: For any \(\alpha\in H^{n}_{(i)}(LM)\) and \(a\in H^{(j)}_{n}(LM)\), we have
\[k^{i}\langle\alpha,a\rangle_{ij}=\langle k^{i}\alpha,a\rangle=\langle \varphi_{k}^{*}(\alpha),a\rangle=\langle\alpha,\varphi_{k*}(a)\rangle=\langle \alpha,k^{j}a\rangle=k^{j}\langle\alpha,a\rangle_{ij}.\]
Hence \(\langle\,\ \rangle_{ij}=\delta_{ij}\langle\,\ \rangle\) holds, where \(\delta_{ij}\) is the Kronecker delta. Since \(\langle\,\ \rangle\) is non-degenerate, the assertion is proved from the direct sum decomposition (3.2).
**Corollary 3.5**.: _If \(M\) is simply-connected, then the evaluation map \(\operatorname{ev}_{0}:LM\to M\) induces an isomorphism \(\operatorname{ev}_{0*}:H^{(0)}_{*}(LM)\to H_{*}(M)\)._
Proof.: It is easy to show that the canonical inclusion \(\wedge V\hookrightarrow\mathcal{L}\) induces an isomorphism \(H^{*}(\wedge V)\cong H^{*}(\mathcal{L}_{(0)})\). Since the inclusion is a Sullivan representative for \(\operatorname{ev}_{0}\), the assertion follows from Proposition 3.3 and Lemma 3.4.
## 4. Geometric Cartan calculus on free loop spaces
In this section, we give the Cartan calculus on \(H_{*}(LM)\) introduced in Section 1 as the operators (1.1). Note that it is a homologically defined version of the Cartan calculus due to Kuribayashi, Wakatsuki, Yamaguchi and the author in [10].
We first consider two homology classes \(c_{*}([S^{n}])\) and \(\eta_{n}\) in \(H_{*}(LS^{n})\) for \(n\geq 2\). The first one is the homology class in \(H_{n}(LS^{n})\) which is obtained by \([S^{n}]\) the fundamental class of \(S^{n}\) via the constant loop map \(c:S^{n}\hookrightarrow LS^{n}\). The second one is defined by \(\eta_{n}=(\operatorname{ad}_{1})_{*}([S^{n-1}])\), where \(\operatorname{ad}_{1}:S^{n-1}\to\Omega S^{n}\hookrightarrow LS^{n}\) is the adjoint of the identity \(1:S^{n-1}\wedge S^{1}\cong S^{n}\to S^{n}\) given by \(\operatorname{ad}_{1}(u)(t)=u\wedge t\) for \(u\in S^{n-1}\) and \(t\in S^{1}\).
**Lemma 4.1**.: _The homology class \(\eta_{n}\) has the following properties._
1. \(\eta_{n}\in H_{n-1}^{(1)}(LS^{n})\)_._
2. \(\Delta(\eta_{n})=c_{*}([S^{n}])\)_._
Proof.: Let \(\tilde{p}_{k}:S^{n-1}\to S^{n-1}\) be the composite
\[S^{n-1}\cong S^{n-2}\wedge S^{1}\xrightarrow{\text{\tiny$1\wedge p_{k}$}}S^{n -2}\wedge S^{1}\cong S^{n-1}\]
and consider the diagram
where \(p_{k}\) and \(\varphi_{k}\) are the maps stated in Section 3. Obviously, the right-hand side square is commutative. We can show that the elements of the homotopy set \([S^{n},S^{n}]\) corresponding to \(\varphi_{k}\circ\operatorname{ad}_{1}\) and \(\operatorname{ad}_{1}\circ\tilde{p}_{k}\) under the adjoint bijection \([S^{n-1},\Omega S^{n}]\cong[S^{n},S^{n}]\) coincide. It follows that the left-hand side square is homotopy commutative. Since \(\tilde{p}_{k*}([S^{n-1}])=k[S^{n-1}]\) in \(H_{n-1}(S^{n-1})\), the assertion (1) follows from the homotopy commutativity of the diagram.
By Lemma 3.1 and the assertion (1), \(\Delta(\eta_{n})\) is contained in \(H_{n}^{(0)}(LS^{n})\). The commutative diagram
yields that \(\operatorname{ev}_{0*}\circ\Delta(\eta_{n})=[S^{n}]=\operatorname{ev}_{0*} \circ c_{*}([S^{n}])\) in \(H_{n}(S^{n})\). Since \(S^{n}\) is simply-connected for \(n\geq 2\), \(\operatorname{ev}_{0*}:H_{*}^{(0)}(LS^{n})\xrightarrow{\cong}H_{*}(S^{n})\) is an isomorphism from Corollary 3.5, which completes the proof.
Given \(f\in\pi_{n}(\operatorname{aut}_{1}(M))\) which is represented by \(f:S^{n}\to\operatorname{aut}_{1}(M)\). Let \(\operatorname{ad}_{f}:S^{n}\times M\to M\) be the adjoint of \(f\) defined by \(\operatorname{ad}_{f}(u,x)=f(u)(x)\) for \(u\in S^{n}\), \(x\in M\) and denote by \(L(\operatorname{ad}_{f}):LS^{n}\times LM\to LM\) the induced map between the free loop spaces. By using the homology classes \(c_{*}([S^{n}])\) and \(\eta_{n}\), we define morphisms
\[L,\ e:\pi_{*}(\operatorname{aut}_{1}(M))\otimes\mathbb{K}\longrightarrow \operatorname{End}(H_{*}(LM))\]
by \(L(f)(a)=L(\operatorname{ad}_{f})_{*}(c_{*}([S^{n}])\times a)\), \(e(f)(a)=L(\operatorname{ad}_{f})_{*}(\eta_{n}\times a)\) for \(a\in H_{*}(LM)\), respectively. Here the notation \(\times\) means the cross product. We will simply write \(L_{f}:=L(f)\) and \(e_{f}:=e(f)\).
**Lemma 4.2**.: _The operators \(L_{f}\), \(e_{f}\) and \(\Delta\) satisfy Cartan formula, namely,_
\[L_{f}=\Delta\circ e_{f}-(-1)^{n-1}e_{f}\circ\Delta.\]
Proof.: First, it is easily seen that the following diagram is commutative;
(4.1)
The commutativity of (4.1) and Lemma 4.1 (2) show that
\[\Delta\circ e_{f}(a) =\Delta\circ L(\operatorname{ad}_{f})_{*}(\eta_{n}\times a)\] \[=L(\operatorname{ad}_{f})_{*}\left(\Delta(\eta_{n})\times a+(-1)^ {n-1}\eta_{n}\times\Delta(a)\right)\] \[=L_{f}(a)+(-1)^{n-1}e_{f}\circ\Delta(a)\]
for \(a\in H_{*}(LM)\).
**Proposition 4.3**.: _The operators \(L_{f}\) and \(e_{f}\) induce_
\[L_{f}:H_{*}^{(i)}(LM)\longrightarrow H_{*}^{(i)}(LM)\quad\text{and}\quad e_{ f}:H_{*}^{(i)}(LM)\longrightarrow H_{*}^{(i+1)}(LM).\]
Proof.: Naturality of the cross product \(\times\) asserts that it induces
\[\times:H_{*}^{(i)}(LS^{n})\otimes H_{*}^{(j)}(LM)\longrightarrow H_{*}^{(i+j) }(LS^{n}\times LM).\]
Moreover, \(L(\operatorname{ad}_{f})_{*}\) preserves the degree with respect to the Hodge decomposition. Therefore, the assertion follows from \(c_{*}([S^{n}])\in H_{*}^{(0)}(LS^{n})\) and Lemma 4.1 (1).
## 5. The morphism \(\Gamma_{1}\) and the Hodge decomposition
In this section, we begin by recalling the morphism \(\Gamma_{1}\) due to Felix and Thomas [6]. Let \(g:\Omega\operatorname{aut}_{1}(M)\times M\to LM\) be the map defined by \(g(\gamma,x)(t)=\gamma(t)(x)\) for \(\gamma\in\Omega\operatorname{aut}_{1}(M)\), \(x\in M\) and \(t\in S^{1}\). Then the map \(\Gamma_{1}\) is defined as the composite
\[\Gamma_{1}:\pi_{*}(\Omega\operatorname{aut}_{1}(M))\otimes\mathbb{K}\xrightarrow{\ \operatorname{Hur}\ }H_{*}(\Omega\operatorname{aut}_{1}(M))\xrightarrow{\ \times[M]\ }H_{*+m}(\Omega\operatorname{aut}_{1}(M)\times M)\xrightarrow{\ g_{*}\ }H_{*+m}(LM)=\mathbb{H}_{*}(LM)\,,\]
where \([M]\in H_{m}(M)\) is the fundamental class and Hur is the Hurewicz map, that is, \(\operatorname{Hur}(h)=h_{*}([S^{n}])\) for \(h:S^{n}\to\Omega\operatorname{aut}_{1}(M)\) in \(\pi_{n}(\Omega\operatorname{aut}_{1}(M))\).
**Lemma 5.1**.: _The image of \(\Gamma_{1}\) is contained in \(\mathbb{H}_{*}^{(1)}(LM)\)._
Proof.: Let \(\varphi_{k}^{\prime}:\Omega\operatorname{aut}_{1}(M)\to\Omega\operatorname{aut}_{1}(M)\) be the map induced by \(p_{k}\) as stated in Section 3. Then \(\varphi_{k}\circ g=g\circ(\varphi_{k}^{\prime}\times 1)\) and it follows that the diagram
\[\begin{array}{ccc}\Omega\operatorname{aut}_{1}(M)\times M&\xrightarrow{\ \ g\ \ }&LM\\ {\scriptstyle\varphi_{k}^{\prime}\times 1}\big\downarrow&&\big\downarrow{\scriptstyle\varphi_{k}}\\ \Omega\operatorname{aut}_{1}(M)\times M&\xrightarrow{\ \ g\ \ }&LM\end{array}\]
commutes. Since \(\varphi^{\prime}_{k}\) coincides with the composite
the induced map between homotopy groups \(\varphi^{\prime}_{k*}\) satisfies \(\varphi^{\prime}_{k*}(h)=kh\) for any \(h\in\pi_{*}(\Omega\operatorname{aut}_{1}(M))\), which completes the proof.
**Proposition 5.2**.: _If \(M\) is simply-connected, then the composite_
\[\Delta\circ\Gamma_{1}:\pi_{n}(\Omega\operatorname{aut}_{1}(M))\otimes\mathbb{ K}\to\mathbb{H}_{n+1}(LM)\]
_is zero for \(n\geq 0\)._
Proof.: From Lemma 3.1 and Proposition 5.1, the image of \(\Delta\circ\Gamma_{1}\) is contained in \(\mathbb{H}_{n+1}^{(0)}(LM)\). Moreover, \(\mathbb{H}_{n+1}^{(0)}(LM)\cong H_{n+1+m}(M)=\{0\}\) from Corollary 3.5.
## 6. Proofs of Theorem 1.1, Corollary 1.2 and Theorem 1.3
We first investigate a relation between the loop product and the morphism induced by \(g\) in homology stated in Section 5. In this section, we often regard \(H_{*}(M)\) as a vector subspace of \(H_{*}(LM)\) through the morphism induced by the constant loop map \(c:M\hookrightarrow LM\) in homology. Let \(g^{\prime}:\Omega\operatorname{aut}_{1}(M)\times LM\to LM\times_{M}LM\) be a map defined by \(g^{\prime}(\gamma_{1},\gamma_{2})=(g(\gamma_{1},\gamma_{2}(0)),\gamma_{2})\) for \(\gamma_{1}\in\Omega\operatorname{aut}_{1}(M)\) and \(\gamma_{2}\in LM\) and put \(\operatorname{comp}^{\prime}:=\operatorname{comp}\circ g^{\prime}\).
**Lemma 6.1**.: _The following diagram is commutative:_
Proof.: It is easy to check the commutativity of the following diagram:
where \(\operatorname{pr}_{23}\) is the projection on the second and third factors. Note that the fiber product \(M\times_{M}LM\) is identified with \(LM\) by a homeomorphism given by the composite
It follows that the following diagram is commutative:
Therefore, the commutativity of the diagram and the definition of the loop product prove the lemma.
Proof of Theorem 1.1.: In order to prove the identity (1) in the assertion, it is enough to show that the following diagram is commutative;
(6.1)
Here \(e^{\prime}\) is the adjoint of the operator \(e\) in Section 4. Observe that the composite of the left-hand side vertical arrows coincides with \(\Gamma_{1}\otimes s^{m}\). We see that the top square is commutative by definition. From Lemma 6.1 and the formula (2.1), the commutativity of the middle triangle in (6.1) is shown. Given \(a\in H_{*}(LM)\) and \(h\in\pi_{n-1}(\Omega\operatorname{aut}_{1}(M))\) which is represented by a map \(h:S^{n-1}\to\Omega\operatorname{aut}_{1}(M)\). Then we have
\[\operatorname{comp}^{\prime}_{*}\circ(\operatorname{Hur}\otimes 1)(h\otimes a)=\left(\operatorname{comp}^{\prime}\circ(h\times 1)\right)_{*}([S^{n-1}]\times a). \tag{6.2}\]
On the other hand, let \(f:=\partial(h):S^{n}\cong S^{n-1}\wedge S^{1}\to\operatorname{aut}_{1}(M)\) be the adjoint of \(h\) given by \(f(u\wedge t)(x):=h(u)(t)(x)\) for \(u\in S^{n-1}\), \(t\in S^{1}\) and \(x\in M\). By the definition of \(e^{\prime}\), we have
\[e^{\prime}\circ(\partial\otimes 1)(h\otimes a)=L(\operatorname{ad}_{f})_{*}( \eta_{n}\times a)=(L(\operatorname{ad}_{f})\circ(\operatorname{ad}_{1}\times 1 ))_{*}([S^{n-1}]\times a), \tag{6.3}\]
where \(\operatorname{ad}_{f}:S^{n}\times M\to M\) is the adjoint of \(f\). Observe that, for \(\gamma\in LM\),
\[L(\operatorname{ad}_{f})\circ(\operatorname{ad}_{1}\times 1)(u,\gamma)(t)=h(u)(t )(\gamma(t))\]
and
\[\operatorname{comp}^{\prime}\circ(h\times 1)(u,\gamma)(t)=\left\{\begin{array}{ll} \gamma(2t)&\left(0\leq t\leq\frac{1}{2}\right)\\ h(u)(2t-1)(\gamma(0))&\left(\frac{1}{2}\leq t\leq 1\right).\end{array}\right.\]
Now define three homotopies \(H_{i}:S^{n-1}\times LM\times I\to LM\) for \(i=1,2,3\) by
\[H_{1}(u,\gamma,s)(t)=\left\{\begin{array}{ll}\gamma(0)&\left(0\leq t\leq\frac{2}{3}s\right)\\ h(u)\left(\frac{3t-2s}{3-2s}\right)\left(\gamma\left(\frac{3t-2s}{3-2s}\right)\right)&\left(\frac{2}{3}s\leq t\leq 1\right),\end{array}\right.\]
\[H_{2}(u,\gamma,s)(t)=\left\{\begin{array}{ll}\gamma(3st)&\left(0\leq t\leq \frac{1}{3}\right)\\ h(u)\left(3st-s\right)\left(\gamma\left(s\right)\right)&\left(\frac{1}{3}\leq t \leq\frac{2}{3}\right)\\ h(u)\left(3t+3s-3st-2\right)\left(\gamma\left(3t+3s-3st-2\right)\right)&\left( \frac{2}{3}\leq t\leq 1\right)\end{array}\right.\]
and
\[H_{3}(u,\gamma,s)(t)=\left\{\begin{array}{ll}\gamma\left(\frac{6t}{s+2} \right)&\left(0\leq t\leq\frac{s+2}{6}\right)\\ h(u)\left(\frac{6t-s-2}{s+2}\right)\left(\gamma\left(0\right)\right)&\left( \frac{s+2}{6}\leq t\leq\frac{s+2}{3}\right)\\ \gamma(0)&\left(\frac{s+2}{3}\leq t\leq 1\right).\end{array}\right.\]
It is easy to check that \(L(\operatorname{ad}_{f})\circ(\operatorname{ad}_{1}\times 1)=H_{1}|_{s=0}\), \(H_{1}|_{s=1}=H_{2}|_{s=0}\), \(H_{2}|_{s=1}=H_{3}|_{s=0}\) and \(H_{3}|_{s=1}=\operatorname{comp}^{\prime}\circ(h\times 1)\), and these imply that \(L(\operatorname{ad}_{f})\circ(\operatorname{ad}_{1}\times 1)\) is homotopic to \(\operatorname{comp}^{\prime}\circ(h\times 1)\). Therefore, from (6.2) and (6.3), we show that the diagram (6.1) is commutative.
The identity (2) follows from (1) and Lemma 4.2. Indeed, we have
\[\{\Gamma_{1}(h),a\} =(-1)^{|\Gamma_{1}(h)|}\Delta(\Gamma_{1}(h)\bullet a)-(-1)^{| \Gamma_{1}(h)|}\Delta(\Gamma_{1}(h))\bullet a-\Gamma_{1}(h)\bullet\Delta(a)\] \[=\left(\Delta e_{\partial(h)}+(-1)^{|\partial(h)|}e_{\partial(h)} \Delta\right)(a)-(-1)^{|h|}\Delta(\Gamma_{1}(h))\bullet a\] \[=L_{\partial(h)}(a)-(-1)^{|h|}\Delta(\Gamma_{1}(h))\bullet a.\]
If \(M\) is simply-connected, \(\Delta\circ\Gamma_{1}=0\) from Proposition 5.2. Therefore, we see that the identity (3) holds from (2).
Proof of Corollary 1.2.: Recall that the loop bracket satisfies Poisson identity (2.2). By virtue of Theorem 1.1(3), we have
\[L_{f}(a\bullet b) =\{\Gamma_{1}\circ\partial^{-1}(f),a\bullet b\}\] \[=\{\Gamma_{1}\circ\partial^{-1}(f),a\}\bullet b+(-1)^{\|a\|(\| \Gamma_{1}\circ\partial^{-1}(f)\|+1)}a\bullet\{\Gamma_{1}\circ\partial^{-1}( f),b\}\] \[=L_{f}(a)\bullet b+(-1)^{\|L_{f}\|\|a\|}a\bullet L_{f}(b).\]
This implies that \(L_{f}\) is a derivation with respect to the loop product.
Proof of Theorem 1.3.: By Theorem 1.1(1) and Proposition 4.3, we have
\[\Gamma_{1}(h)\bullet a=(-1)^{n}e_{\partial(h)}(a)\in\mathbb{H}_{*}^{(i+1)}(LM)\]
for \(h\in\pi_{n}(\Omega\operatorname{aut}_{1}(M))\) and \(a\in\mathbb{H}_{*}^{(i)}(LM)\), which proves the theorem.
## 7. Examples of the image of \(\Gamma_{1}\)
In this section, we discuss the image of the morphism \(\Gamma_{1}\) in the case where \(M\) is a simply-connected sphere \(S^{n}\) for \(n=2,3\). Since \(\Gamma_{1}\) is injective by [6], it is enough to consider nontrivial elements in the homotopy group \(\pi_{*}(\operatorname{aut}_{1}(S^{n}))\).
_Example 7.1_.: Let \(S^{3}\) be the \(3\)-dimensional sphere. We may regard \(S^{3}\) as the unit sphere in the quaternions \(\mathbb{H}\). Consider a multiplication \(\mu:S^{3}\times S^{3}\to S^{3}\) induced by the multiplication of \(\mathbb{H}\). The adjoint of \(\mu\) induces a map \(\operatorname{ad}_{\mu}:S^{3}\to\operatorname{aut}_{1}(S^{3})\), and it is a representative of a nonzero element in \(\pi_{3}(\operatorname{aut}_{1}(S^{3}))\otimes\mathbb{K}\). Since \(\Gamma_{1}\) is injective from [6, Theorem 2], \(\Gamma_{1}(\partial^{-1}(\operatorname{ad}_{\mu}))\) is a nonzero homology class in \(H_{5}(LS^{3})\). Explicitly, by the definition of \(\Gamma_{1}\), the homology class is obtained by the value of a morphism induced by the composite
(7.1)
in homology at the fundamental class of \(S^{2}\times S^{3}\).
On the other hand, since \(S^{3}\) is a Lie group, it is well-known that \(LS^{3}\) splits as the product \(\Omega S^{3}\times S^{3}\) with a homeomorphism \(\psi:\Omega S^{3}\times S^{3}\to LS^{3}\) defined by \(\psi(\gamma,u)(t)=\mu(\gamma(t),u)\) for \(\gamma\in\Omega S^{3}\), \(u\in S^{3}\) and \(t\in S^{1}\). We here recall the adjoint \(\operatorname{ad}_{1}:S^{2}\to\Omega S^{3}\) and the homology class \(\eta_{3}\in H_{2}(\Omega S^{3})\) stated in Section 4. Then, it is easy to check that the composite (7.1) coincides with
\[S^{2}\times S^{3}\xrightarrow{\operatorname{ad}_{1}\times 1}\Omega S^{3} \times S^{3}\xrightarrow{\psi}LS^{3}\]
and therefore \(\Gamma_{1}(\partial^{-1}(\operatorname{ad}_{\mu}))=\psi_{*}(\eta_{3}\times[S^ {3}])\) in \(H_{5}(LS^{3})\).
_Example 7.2_.: Let \(S^{2}\) be the \(2\)-dimensional sphere, regarded as the unit sphere in \(\mathbb{R}^{3}\). Consider the \(S^{3}\)-action \(\mu^{\prime}:S^{3}\times S^{2}\to S^{2}\) induced by the conjugation action \(\mathbb{H}\times\mathbb{R}^{3}\to\mathbb{R}^{3}\). Here \(\mathbb{R}^{3}\) is regarded as the subspace of \(\mathbb{H}\) consisting of pure quaternions, that is, quaternions with vanishing scalar part. It is a well-known fact that the restriction map \(\mu^{\prime}|_{S^{3}}:S^{3}\cong S^{3}\times\{*\}\to S^{2}\) is the Hopf fibration. Hence the adjoint of \(\mu^{\prime}\), denoted by \(\operatorname{ad}_{\mu^{\prime}}:S^{3}\to\operatorname{aut}_{1}(S^{2})\), is a representative of a nonzero element in
\(\pi_{3}(\operatorname{aut}_{1}(S^{2}))\otimes\mathbb{K}\). Therefore, by the injectivity of \(\Gamma_{1}\), we obtain a nonzero homology class \(\Gamma_{1}(\partial^{-1}(\operatorname{ad}_{\mu^{\prime}}))\) which is obtained by the value of a morphism induced by the composite
in homology at the fundamental class of \(S^{2}\times S^{2}\).
## Acknowledgement
We would like to thank Katsuhiko Kuribayashi, Shun Wakatsuki and Toshihiro Yamaguchi for valuable discussions and suggestions.
|
2304.08835 | Spin-Gravity Coupling in a Rotating Universe | The coupling of intrinsic spin with the nonlinear gravitomagnetic fields of
Goedel-type spacetimes is studied. We work with Goedel-type universes in order
to show that the main features of spin-gravity coupling are independent of
causality problems of the Goedel universe. The connection between the
spin-gravitomagnetic field coupling and Mathisson's spin-curvature force is
demonstrated in the Goedel-type universe. That is, the gravitomagnetic
Stern--Gerlach force due to the coupling of spin with the gravitomagnetic field
reduces in the appropriate correspondence limit to the classical Mathisson
spin-curvature force. | Bahram Mashhoon, Masoud Molaei, Yuri N. Obukhov | 2023-04-18T09:04:22Z | http://arxiv.org/abs/2304.08835v2 | # Spin-Gravity Coupling in a Rotating Universe
###### Abstract
The coupling of intrinsic spin with the nonlinear gravitomagnetic fields of Godel-type spacetimes is studied. We work with Godel-type universes in order to show that the main features of spin-gravity coupling are independent of causality problems of the Godel universe. The connection between the spin-gravitomagnetic field coupling and Mathisson's spin-curvature force is demonstrated in the Godel-type universe. That is, the gravitomagnetic Stern-Gerlach force due to the coupling of spin with the gravitomagnetic field reduces in the appropriate correspondence limit to the classical Mathisson spin-curvature force.
Keywords: spin-gravity coupling, Godel-type universe. PACS: 04.20.Cv
## I Introduction
Inertia is the intrinsic tendency of matter to remain in a given condition. The state of matter in spacetime is determined by its mass and spin; indeed, mass and spin characterize the irreducible unitary representations of the Poincare group [1]. Therefore, mass and spin
determine the inertial properties of a particle. In classical physics, the inertial forces that act on a particle are proportional to its inertial mass; moreover, the moment of inertia is the rotational analogue of mass. Inertial effects of intrinsic spin are independent of the inertial mass of the particle and depend purely on intrinsic spin. Inertia of intrinsic spin is of quantum origin and its properties therefore complement the inertial characteristics of mass and orbital angular momentum of the particle.
It turns out that the intrinsic spin \(\mathbf{S}\) of a particle couples to the rotation of a noninertial observer resulting in a Hamiltonian of the form \(\mathcal{H}_{sr}=-\,\mathbf{S}\cdot\mathbf{\Omega}\), where \(\mathbf{\Omega}\) is the angular velocity of the observer's local spatial frame with respect to a nonrotating (i.e., Fermi-Walker) transported frame. A general account of the spin-rotation coupling is contained in [2] and more recent discussions of its observational basis can be found in [3; 4; 5; 6]. A similar phenomenon occurs in a gravitational field [7; 8]. The spin-rotation effect can be theoretically extended to the spin-gravity coupling via the gravitational Larmor theorem [9; 10], which is the rotational side of Einstein's principle of equivalence. Imagine a free test gyroscope with its center of mass held at rest in a gravitational field; then, the locally measured components of the gyroscope's spin vector undergo a precessional motion with an angular velocity that is given by the locally measured gravitomagnetic field. The Gravity Probe B (GP-B) space experiment has measured the gravitomagnetic field of the Earth [11; 12].
According to the gravitational Larmor theorem, the gravitomagnetic field of a rotating system is locally equivalent to a rotation resulting in a Hamiltonian for intrinsic spin-gravity coupling of the form \(\mathcal{H}_{sg}=\mathbf{S}\cdot\mathbf{B}\), where \(\mathbf{B}\) is the relevant gravitomagnetic field [13]. For prospects regarding the measurement of intrinsic spin-gravity coupling, see [14; 15; 16; 17; 18]. In general, \(\mathbf{B}\) depends on position and the intrinsic spin-gravity coupling leads to a measured gravitomagnetic Stern-Gerlach force of the form \(-\mathbf{\nabla}(\mathbf{S}\cdot\mathbf{B})\). It has been shown [19], within the framework of _linearized_ general relativity, that the gravitomagnetic Stern-Gerlach force associated with spin-gravity coupling reduces in the correspondence limit to Mathisson's classical spin-curvature force [20; 21]. It would be interesting to extend this result to the nonlinear regime. The purpose of the present work is to study further the inertial effects of intrinsic spin by investigating the intrinsic spin-gravity coupling for spinning test particles in Godel-type spacetimes. For background material, Ref. [19] and the references cited therein should be consulted for further important information regarding the topic of spin-rotation-gravity coupling and its experimental basis.
Gravitomagnetism in the Godel-type universe
With respect to spacetime coordinates \(x^{\mu}=(ct,x,y,z)\), the metric of the Godel solution [22] of Einstein's gravitational field equations arises as a special case in the class of the so-called Godel-type models [23; 24; 25] described by the line element
\[ds^{2}=g_{\mu\nu}dx^{\mu}dx^{\nu}=-\,dt^{2}-2\sqrt{\sigma}\,e^{\mu x}\,dt\,dy+ dx^{2}+\kappa\,e^{2\mu x}\,dy^{2}+dz^{2}\,, \tag{1}\]
with arbitrary constant parameters \(\mu,\sigma\) and \(\kappa\). In our conventions, the speed of light \(c=1\) and Planck's constant \(\hbar=1\), unless specified otherwise; moreover, the metric signature is \(+2\) and Greek indices run from \(0\) to \(3\), while Latin indices run from \(1\) to \(3\). The system of coordinates in metric (1) is admissible provided
\[\sigma+\kappa>0\,. \tag{2}\]
Moreover, we assume throughout that \(\sigma>0\). In general, the Godel-type universe contains closed timelike curves, which could lead to problems with causality. However, one can demonstrate [23] that closed timelike curves are absent in model (1), provided
\[\kappa\geq 0\,. \tag{3}\]
Specifically, for the Godel universe \(\kappa=-1\) in metric (1); therefore, closed timelike curves do exist in the Godel universe. To ensure that our considerations regarding spin-gravity coupling are independent of the causality difficulties of the Godel universe, we use metric (1) for our main calculations in this paper.
The Godel-type universe is a regular stationary and spatially homogeneous spacetime that contains rotating matter. Consider the class of observers that are all spatially at rest in this spacetime. Each such observer has a velocity \(4\)-vector \(u^{\mu}=\delta^{\mu}_{0}\) that is free of acceleration, expansion and shear; however, it is rotating in the negative sense about the \(z\) axis and its vorticity \(4\)-vector
\[\omega^{\mu}=\frac{1}{2}\eta^{\mu\nu\rho\sigma}u_{\nu}u_{\rho;\sigma}\,, \tag{4}\]
is purely spatial \(\omega^{\mu}=(0,\mathbf{\omega})\), with the \(3\)-vector
\[\mathbf{\omega}=-\,\Omega\,\partial_{z}\,,\qquad\Omega=\frac{\mu}{2}\sqrt{\frac{ \sigma}{\sigma+\kappa}}\,. \tag{5}\]
For the sake of definiteness, we henceforth assume that \(\Omega>0\); then, Eq. (5) implies that \(\mu>0\) as well. Here, \(\eta_{\alpha\beta\gamma\delta}=(-g)^{1/2}\epsilon_{\alpha\beta\gamma\delta}\) is the Levi-Civita tensor and \(\epsilon_{\alpha\beta\gamma\delta}\) is the alternating
symbol with \(\epsilon_{0123}=1\). It is interesting to note that in nonrelativistic fluid mechanics the vorticity vector \(\mathbf{\omega}_{N}\) is defined as \(\mathbf{\omega}_{N}=\mathbf{\nabla}\times\mathbf{v}\), where \(\mathbf{v}\) is the flow velocity. If the fluid rotates with spatially uniform angular velocity \(\mathbf{\Omega}\) such that \(\mathbf{v}=\mathbf{\Omega}\times\mathbf{x}\), then \(\mathbf{\omega}_{N}=2\,\mathbf{\Omega}\). In this paper, we follow the relativistic definition of vorticity.
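Equation (5) can also be checked directly from definition (4). The only nonvanishing covariant components of the observer's 4-velocity are \(u_{0}=-1\) and \(u_{2}=-\sqrt{\sigma}\,e^{\mu x}\), the Christoffel symbols drop out of Eq. (4) by antisymmetry, and \(\eta^{3021}=-1/\sqrt{-g}\) with \(\sqrt{-g}=\sqrt{\sigma+\kappa}\,e^{\mu x}\); hence
\[\omega^{3}=\frac{1}{2}\,\eta^{3\nu\rho\sigma}\,u_{\nu}\,\partial_{\sigma}u_{\rho}=\frac{1}{2}\,\eta^{3021}\,u_{0}\,\partial_{1}u_{2}=\frac{1}{2}\left(\frac{-1}{\sqrt{\sigma+\kappa}\,e^{\mu x}}\right)(-1)\left(-\sqrt{\sigma}\,\mu\,e^{\mu x}\right)=-\,\frac{\mu}{2}\sqrt{\frac{\sigma}{\sigma+\kappa}}=-\,\Omega\,,\]
while the remaining components of \(\omega^{\mu}\) vanish, in agreement with Eq. (5).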
The geometry of the Godel-type model has been studied by a number of authors [26; 27; 28]. The Weyl curvature of Godel-type spacetime is of type D in the Petrov classification. The Godel-type universe admits five Killing vector fields, namely, \(\partial_{t}\), \(\partial_{y}\), \(\partial_{z}\), \(\partial_{x}-\mu y\,\partial_{y}\) and [23; 29]
\[K=\frac{2\sqrt{\sigma}\,e^{-\,\mu x}}{\sigma+\kappa}\,\partial_{t}-2\mu y\, \partial_{x}+\left(\mu^{2}y^{2}-\frac{e^{-2\mu x}}{\sigma+\kappa}\right) \partial_{y}\,. \tag{6}\]
We are interested in the measurements of an observer that is free and spatially at rest in spacetime with 4-velocity vector \(u^{\mu}=dx^{\mu}/d\tau\) and proper time \(\tau\), where \(\tau=t+\)constant. The observer carries along its geodesic world line a natural tetrad frame \(e^{\mu}{}_{\hat{\alpha}}\) that is orthonormal, namely,
\[g_{\mu\nu}\,e^{\mu}{}_{\hat{\alpha}}\,e^{\nu}{}_{\hat{\beta}}=\eta_{\hat{\alpha}\hat{\beta}}\,, \tag{7}\]
where \(\eta_{\mu\nu}=\mathrm{diag}(-1,1,1,1)\) is the Minkowski metric tensor. Indeed,
\[e_{\hat{0}}=\partial_{t}\,,\qquad e_{\hat{1}}=\partial_{x}\,,\qquad e_{\hat{2 }}=-\,\sqrt{\frac{\sigma}{\sigma+\kappa}}\,\partial_{t}+\frac{e^{-\,\mu x}}{ \sqrt{\sigma+\kappa}}\,\partial_{y}\,,\qquad e_{\hat{3}}=\partial_{z}\,, \tag{8}\]
where the spatial axes of the observer's frame are primarily along the background coordinate axes. Introducing the dual coframe \(\vartheta^{\hat{\alpha}}\),
\[\vartheta^{\widehat{0}}=dt+\sqrt{\sigma}\,e^{\mu x}dy,\qquad\vartheta^{\widehat {1}}=dx,\qquad\vartheta^{\widehat{2}}=\sqrt{\sigma+\kappa}\,e^{\mu x}dy, \qquad\vartheta^{\widehat{3}}=dz, \tag{9}\]
such that \(e_{\hat{\alpha}}\rfloor\vartheta^{\hat{\beta}}=\delta^{\beta}_{\alpha}\), line element (1) is recast into
\[ds^{2}=-\,\left(dt+\sqrt{\sigma}\,e^{\mu x}\,dy\right)^{2}+dx^{2}+\left(\sigma +\kappa\right)e^{2\mu x}\,dy^{2}+dz^{2}\,. \tag{10}\]
Let \(\lambda^{\mu}{}_{\hat{\alpha}}\) be the orthonormal tetrad frame that is parallel transported along the observer's geodesic world line such that \(D\lambda^{\mu}{}_{\hat{\alpha}}/d\tau=0\). We find that
\[\lambda^{\mu}{}_{\hat{1}}=e^{\mu}{}_{\hat{1}}\cos\Omega\tau+e^{\mu}{}_{\hat{2}}\sin\Omega\tau\,,\qquad\lambda^{\mu}{}_{\hat{2}}=-e^{\mu}{}_{\hat{1}}\sin\Omega\tau+e^{\mu}{}_{\hat{2}}\cos\Omega\tau\,, \tag{11}\]
while \(\lambda^{\mu}{}_{\hat{3}}=e^{\mu}{}_{\hat{3}}\) and naturally \(\lambda^{\mu}{}_{\hat{0}}=e^{\mu}{}_{\hat{0}}=u^{\mu}\). It is simple to check these results using the Christoffel symbols
\[\Gamma^{0}_{10}=\sqrt{\frac{\sigma}{\sigma+\kappa}}\,\Omega\,,\qquad\Gamma^{1} _{20}=\sqrt{\sigma+\kappa}\,e^{\mu x}\,\Omega\,,\qquad\Gamma^{2}_{10}=-\, \frac{e^{-\,\mu x}}{\sqrt{\sigma+\kappa}}\,\Omega\, \tag{12}\]
which are the only nonzero components of \(\Gamma^{\mu}_{\nu 0}\). Therefore, the observer's natural frame rotates with respect to the parallel-transported frame about their common \(z\) axis with frequency \(-\Omega\), which is consistent with vorticity (5).
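As an explicit check of Eq. (11), note that for an observer at rest \(D\lambda^{\mu}{}_{\hat{\alpha}}/d\tau=d\lambda^{\mu}{}_{\hat{\alpha}}/d\tau+\Gamma^{\mu}_{\nu 0}\,\lambda^{\nu}{}_{\hat{\alpha}}\), since \(dx^{\nu}/d\tau=\delta^{\nu}_{0}\). Using Eq. (12) together with the frame (8), one finds
\[\Gamma^{\mu}_{\nu 0}\,e^{\nu}{}_{\hat{1}}=-\,\Omega\,e^{\mu}{}_{\hat{2}}\,,\qquad\Gamma^{\mu}_{\nu 0}\,e^{\nu}{}_{\hat{2}}=\Omega\,e^{\mu}{}_{\hat{1}}\,,\]
so that
\[\frac{d\lambda^{\mu}{}_{\hat{1}}}{d\tau}+\Gamma^{\mu}_{\nu 0}\,\lambda^{\nu}{}_{\hat{1}}=\Omega\left(-\,e^{\mu}{}_{\hat{1}}\sin\Omega\tau+e^{\mu}{}_{\hat{2}}\cos\Omega\tau\right)+\Omega\left(e^{\mu}{}_{\hat{1}}\sin\Omega\tau-e^{\mu}{}_{\hat{2}}\cos\Omega\tau\right)=0\,,\]
and similarly \(D\lambda^{\mu}{}_{\hat{2}}/d\tau=0\), confirming Eq. (11).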
Let us now consider the special case of metric (1) with parameters
\[\mu=\sqrt{2}\,\Omega,\qquad\sigma=2,\qquad\kappa=-\,1\,. \tag{13}\]
With these parameters, metric (1) reduces to the Godel line element
\[ds^{2}=-\,dt^{2}-2\sqrt{2}\,e^{\sqrt{2}\Omega x}\,dt\,dy+dx^{2}-e^{2\sqrt{2} \Omega x}\,dy^{2}+dz^{2}\,. \tag{14}\]
For the Godel universe, Einstein's field equations
\[R_{\mu\nu}-\frac{1}{2}g_{\mu\nu}R+\Lambda g_{\mu\nu}=8\pi G\,T_{\mu\nu} \tag{15}\]
have a perfect fluid source
\[T_{\mu\nu}=(\rho+p)u_{\mu}u_{\nu}+pg_{\mu\nu}\,, \tag{16}\]
where \(\rho\) is the energy density, \(p\) is the pressure and \(u^{\mu}=\delta^{\mu}_{0}\) is the 4-velocity vector of the perfect fluid. In this special case, \(R_{\mu\nu}=2\Omega^{2}u_{\mu}u_{\nu}\) and
\[2\,\Omega^{2}=8\pi G(\rho+p)\,,\qquad\Lambda+\Omega^{2}=8\pi Gp\,. \tag{17}\]
In the absence of the cosmological constant \(\Lambda\), we have as the source of the Godel universe a perfect fluid with a stiff equation of state \(\rho=p=\Omega^{2}/(8\pi G)\). Another possibility is dust (\(p=0\)), with \(4\pi G\rho=-\Lambda=\Omega^{2}\). It follows from Eq. (17) that \(-\Lambda=4\pi G(\rho-p)\); therefore, in any realistic situation, the cosmological constant of the Godel universe must be negative or zero (\(\Lambda\leq 0\)).
The spinning test particle in the Godel universe is immersed in the perfect fluid source and its intrinsic spin couples to the vorticity of the fluid. The nature of the spin-gravity coupling and its connection with Mathisson's classical spin-curvature force provided the original motivation for the present work.
After this brief digression regarding the Godel universe, we return to the Godel-type metric with explicit components
\[(g_{\mu\nu})=\begin{bmatrix}-1&0&-\sqrt{\sigma}\,W&0\\ 0&1&0&0\\ -\sqrt{\sigma}\,W&0&\kappa\,W^{2}&0\\ 0&0&0&1\end{bmatrix}\,,\quad(g^{\mu\nu})=\begin{bmatrix}-\frac{\kappa}{ \sigma+\kappa}&0&-\frac{\sqrt{\sigma}}{\sigma+\kappa}W^{-1}&0\\ 0&1&0&0\\ -\frac{\sqrt{\sigma}}{\sigma+\kappa}W^{-1}&0&\frac{1}{\sigma+\kappa}W^{-2}&0\\ 0&0&0&1\end{bmatrix}\,, \tag{18}\]
where \(W(x)=e^{\mu x}\) and \(\sqrt{-g}=\sqrt{\sigma+\kappa}\,W(x)\).
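These expressions lend themselves to a quick symbolic cross-check. The following Python (SymPy) sketch recomputes the inverse metric of Eq. (18), \(\sqrt{-g}\), and the Christoffel symbols \(\Gamma^{\mu}_{\nu 0}\) of Eq. (12) directly from metric (1); it is an illustrative verification only, and the symbol names and simplification choices are assumptions of this example.

```python
import sympy as sp

t, x, y, z = sp.symbols('t x y z')
mu, sigma = sp.symbols('mu sigma', positive=True)
kappa = sp.symbols('kappa', real=True)
W = sp.exp(mu * x)

# Metric (1) in the coordinates (t, x, y, z).
g = sp.Matrix([[-1, 0, -sp.sqrt(sigma) * W, 0],
               [0, 1, 0, 0],
               [-sp.sqrt(sigma) * W, 0, kappa * W**2, 0],
               [0, 0, 0, 1]])
coords = [t, x, y, z]

ginv = sp.simplify(g.inv())                    # compare with Eq. (18)
sqrt_minus_g = sp.sqrt(sp.simplify(-g.det()))  # expect sqrt(sigma+kappa)*W

def Gamma(a, b, c):
    """Christoffel symbol Gamma^a_{bc} of the metric g."""
    return sp.simplify(sum(ginv[a, d] * (sp.diff(g[d, b], coords[c]) +
                                         sp.diff(g[d, c], coords[b]) -
                                         sp.diff(g[b, c], coords[d])) / 2
                           for d in range(4)))

# Nonzero components Gamma^mu_{nu 0}; compare with Eq. (12).
print(Gamma(0, 1, 0), Gamma(1, 2, 0), Gamma(2, 1, 0))
```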
## III Mathisson's spin-curvature force
To connect Mathisson's classical spin-curvature force in the correspondence limit with intrinsic spin that is purely of quantum origin, it proves useful to introduce a classical model of intrinsic spin. To simplify matters, we permanently attach a free spin vector \(\mathbf{S}\) to a Newtonian point particle resulting in a "pole-dipole" particle. The particle thus carries the spin vector along its world line and the corresponding equations of motion in a gravitational field are the Mathisson-Papapetrou pole-dipole equations [20; 30],
\[\frac{DP^{\mu}}{d\varsigma}=-\,\frac{1}{2}R^{\mu}\,_{\nu\alpha\beta}\,U^{\nu}S ^{\alpha\beta}\,, \tag{19}\]
\[\frac{DS^{\mu\nu}}{d\varsigma}=P^{\mu}U^{\nu}-P^{\nu}U^{\mu}\,, \tag{20}\]
where \(U^{\mu}=dx^{\mu}/d\varsigma\) is the 4-velocity of the pole-dipole particle, \(U^{\mu}U_{\mu}=-\,1\), and \(\varsigma\) is its proper time. The particle's 4-momentum is \(P^{\mu}\) and its spin tensor is \(S^{\mu\nu}\), which satisfies the Frenkel-Pirani supplementary condition [31; 32]
\[S^{\mu\nu}\,U_{\nu}=0\,. \tag{21}\]
In this system, the inertial mass of the particle \(m\), \(m:=-P^{\mu}U_{\mu}\), and the magnitude of its spin \(s\), \(s^{2}:=\frac{1}{2}S^{\mu\nu}S_{\mu\nu}\), are constants of the motion. Moreover, Pirani has shown that the spin vector \(S^{\mu}\), \(S^{\mu}\,U_{\mu}=0\),
\[S^{\mu}=-\,\frac{1}{2}\,\eta^{\mu\nu\rho\sigma}\,U_{\nu}S_{\rho\sigma}\,,\qquad S ^{\alpha\beta}=\eta^{\alpha\beta\gamma\delta}\,U_{\gamma}S_{\delta}\,, \tag{22}\]
is Fermi-Walker transported along the particle's world line [32]. That is, the Mathisson-Papapetrou equations for a spinning test particle together with the Frenkel-Pirani supplementary condition imply that the spin vector of a test pole-dipole particle is nonrotating in this classical model consistent with the inertia of intrinsic spin. Furthermore, the Mathisson-Papapetrou equations together with the Frenkel-Pirani supplementary condition imply that in the massless limit, the spinning massless test particle follows a null geodesic with the spin vector parallel or antiparallel to its direction of motion [33]. Hence, our classical model is consistent with physical expectations.
What is the influence of the inertia of intrinsic spin on the motion of the spinning particle? From Eq. (20), we find
\[P^{\mu}=m\,U^{\mu}+S^{\mu\nu}\,\frac{DU_{\nu}}{d\varsigma}\,, \tag{23}\]
so that in the absence of spin, \(P^{\mu}=m\,U^{\mu}\), and the particle simply follows a timelike geodesic of the background gravitational field. In the presence of spin, on the other hand, the Mathisson spin-curvature force \(\mathcal{F}^{\mu}\), \(\mathcal{F}^{\mu}U_{\mu}=0\),
\[\mathcal{F}^{\mu}=-\,\frac{1}{2}R^{\mu}{}_{\nu\alpha\beta}\,U^{\nu}S^{\alpha \beta}=\ ^{*}R^{\mu}{}_{\nu\rho\sigma}\,U^{\nu}\,S^{\rho}\,U^{\sigma}\,,\qquad^{*}R_{ \mu\nu\rho\sigma}=\frac{1}{2}\,\eta_{\mu\nu\alpha\beta}\,R^{\alpha\beta}{}_{ \rho\sigma}\,, \tag{24}\]
must be taken into account [21]. It follows from Eq. (23) that \(P^{\mu}-m\,U^{\mu}\) is of second order in spin; hence, the Mathisson-Papapetrou equations of motion to first order in spin become [34]
\[\frac{DS^{\mu\nu}}{d\varsigma}\approx 0 \tag{25}\]
and
\[m\,\frac{DU^{\mu}}{d\varsigma}\approx\mathcal{F}^{\mu}=-\,\frac{1}{2}R^{\mu}{ }_{\nu\alpha\beta}\,U^{\nu}S^{\alpha\beta}\,. \tag{26}\]
## IV Spin-Vorticity-Gravity Coupling
We now turn to the behavior of spinning test particles in the Godel-type spacetime. Within the framework of linearized general relativity, it can be shown in general that in source-free Ricci-flat regions of the gravitational field the Mathisson force corresponds to the Stern-Gerlach force associated with the spin-gravitomagnetic field coupling [19]. In the Godel-type universe, on the other hand, the spinning particle is immersed in the source of the gravitational field. Is \(\mathcal{F}_{\mu}=-\partial_{\mu}(\mathcal{H}_{sg})\) still valid for the Godel-type spacetime?
Let us consider a spinning test particle held at rest in space at fixed \((x,y,z)\) coordinates in the Godel-type spacetime. According to the free reference observer with adapted tetrad frames \(e^{\mu}{}_{\hat{\alpha}}\) and \(\lambda^{\mu}{}_{\hat{\alpha}}\) at the same location, the spin vector to linear order stays fixed with respect to the parallel-propagated frame as a consequence of Eq. (25); that is, \(S_{\hat{i}}\), \(i=1,2,3\), are constants of the motion, where \(S_{\hat{\alpha}}=S_{\mu}\,\lambda^{\mu}{}_{\hat{\alpha}}\); hence,
\[S_{\hat{0}}=0\,,\qquad S_{\hat{i}}=S_{\mu}\,\lambda^{\mu}{}_{\hat{i}}\,. \tag{27}\]
The motion of the comoving observer has vorticity in accordance with Eq. (4) and we therefore expect that the spin should couple to the vorticity resulting in the spin-vorticity
Hamiltonian given by
\[{\cal H}_{sv}=-\mathbf{S}\cdot\mathbf{\omega}=\Omega\,S^{\hat{3}}\,. \tag{28}\]
Furthermore, the spin vector precesses with frequency \(\Omega\,\partial_{z}\) with respect to the observer's natural frame \(e^{\mu}{}_{\hat{i}}\) based on the spatial coordinate axes. The Hamiltonian associated with this motion is the spin-gravity Hamiltonian given by
\[{\cal H}_{sg}=\mathbf{S}\cdot\mathbf{B}\,, \tag{29}\]
where \(\mathbf{B}=\mathbf{\Omega}=\Omega\,\partial_{z}\) is the gravitomagnetic field in this case. The result is,
\[{\cal H}_{sg}=\Omega\,S^{\hat{3}}\,. \tag{30}\]
The spin-gravity coupling is indeed the same as spin-vorticity coupling in this case, since the spinning particle, while engulfed by the source of the gravitational field, is fixed in space and comoving with the observer. It is clear that in this case \(\partial_{\mu}({\cal H}_{sg})=0\), so that the Stern-Gerlach force vanishes. To calculate the Mathisson force in this case, we need to find the Riemann curvature tensor for the Godel-type universe since the Mathisson force is directly proportional to spacetime curvature.
In metric (1), the nonzero components of the Riemann tensor can be obtained from
\[R_{0101}=\Omega^{2},\;R_{0202}=(\kappa+\sigma)e^{2\mu x}\Omega^{2},\;R_{0112}= -\sqrt{\sigma}e^{\mu x}\Omega^{2},\;R_{1212}=-\,\kappa\Big{(}\frac{4\kappa}{ \sigma}+5\Big{)}e^{2\mu x}\Omega^{2}\,. \tag{31}\]
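A reader wishing to verify these curvature components can do so with the sympy sketch below, which assumes the relation \(\mu=2\Omega\sqrt{(\sigma+\kappa)/\sigma}\) implicit in Eqs. (58) and (68); the positivity declarations only streamline simplification and the names are our own.

```python
# Symbolic check of the curvature components in Eq. (31) for the metric (18),
# with mu = 2*Omega*sqrt((sigma+kappa)/sigma) assumed (cf. Eqs. (58) and (68)).
import sympy as sp

t, x, y, z = sp.symbols('t x y z', real=True)
Om, sig, kap = sp.symbols('Omega sigma kappa', positive=True)
mu = 2*Om*sp.sqrt((sig + kap)/sig)
W = sp.exp(mu*x)
X = (t, x, y, z)
n = 4

g = sp.Matrix([[-1, 0, -sp.sqrt(sig)*W, 0],
               [0, 1, 0, 0],
               [-sp.sqrt(sig)*W, 0, kap*W**2, 0],
               [0, 0, 0, 1]])
ginv = g.inv()

Gam = [[[sp.simplify(sum(ginv[a, d]*(sp.diff(g[d, b], X[c]) + sp.diff(g[d, c], X[b])
                         - sp.diff(g[b, c], X[d])) for d in range(n))/2)
         for c in range(n)] for b in range(n)] for a in range(n)]

def riemann_low(a, b, c, d):
    """Fully covariant R_{abcd} in the standard convention behind Eq. (15)."""
    Rup = [sp.diff(Gam[e][d][b], X[c]) - sp.diff(Gam[e][c][b], X[d])
           + sum(Gam[e][c][f]*Gam[f][d][b] - Gam[e][d][f]*Gam[f][c][b] for f in range(n))
           for e in range(n)]
    return sp.simplify(sum(g[a, e]*Rup[e] for e in range(n)))

expected = {(0, 1, 0, 1): Om**2,
            (0, 2, 0, 2): (kap + sig)*W**2*Om**2,
            (0, 1, 1, 2): -sp.sqrt(sig)*W*Om**2,
            (1, 2, 1, 2): -kap*(4*kap/sig + 5)*W**2*Om**2}
for idx, val in expected.items():
    # each difference should vanish (up to the overall curvature sign convention)
    print(idx, sp.simplify(riemann_low(*idx) - val))
```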
We are interested in the components of the curvature tensor projected onto the orthonormal tetrad frame \(\lambda^{\mu}{}_{\hat{\alpha}}\) adapted to our fiducial observer, namely,
\[R_{\hat{\alpha}\hat{\beta}\hat{\gamma}\hat{\delta}}=R_{\mu\nu\rho\sigma}\, \lambda^{\mu}{}_{\hat{\alpha}}\,\lambda^{\nu}{}_{\hat{\beta}}\,\lambda^{\rho}{ }_{\hat{\gamma}}\,\lambda^{\sigma}{}_{\hat{\delta}}\,. \tag{32}\]
The measured components of the Riemann tensor can be expressed via the symmetries of the Riemann tensor as a \(6\times 6\) matrix in the standard manner with indices that range over the set \(\{01,02,03,23,31,12\}\). The end result is of the general form
\[\left[\begin{array}{cc}\mathbb{E}&\mathbb{H}\\ \mathbb{H}^{T}&\mathbb{S}\end{array}\right]\,, \tag{33}\]
where \(\mathbb{E}\), \(\mathbb{H}\) and \(\mathbb{S}\) represent the measured gravitoelectric, gravitomagnetic and spatial components of the Riemann curvature tensor, respectively, and \(\mathbb{E}\) and \(\mathbb{S}\) are symmetric matrices,
while \(\mathbb{H}\) is traceless. In the case of Godel-type spacetime (1), we find \(\mathbb{H}=0\) and
\[(\mathbb{E}_{\hat{i}\hat{j}})=\begin{bmatrix}\Omega^{2}&0&0\\ 0&\Omega^{2}&0\\ 0&0&0\end{bmatrix}\,,\qquad(\mathbb{S}_{\hat{i}\hat{j}})=\begin{bmatrix}0&0&0\\ 0&0&0\\ 0&0&-\left(1+\frac{4\kappa}{\sigma}\right)\Omega^{2}\end{bmatrix}\,. \tag{34}\]
These results are equally valid if the curvature tensor is projected onto the natural frame \(e^{\mu}{}_{\hat{\alpha}}\) of the reference observer.
We find that the Mathisson force, given by Eq. (24), can be expressed as
\[\mathcal{F}^{\mu}=\lambda^{\mu}{}_{\hat{\alpha}}\,\mathcal{F}^{\hat{\alpha}}\,,\qquad\mathcal{F}^{\hat{0}}=0\,,\qquad\mathcal{F}^{\hat{i}}=\mathbb{H}^{\hat {i}\hat{j}}S_{\hat{j}}\,. \tag{35}\]
However, \(\mathbb{H}=0\); therefore, the measured components of the Mathisson force vanish as well. That is,
\[\mathcal{F}_{\mu}=-\,\partial_{\mu}(\mathcal{H}_{sg})=0\,. \tag{36}\]
It is important to verify this result in a quasi-inertial Fermi normal coordinate system established about the world line of an arbitrary reference observer that is spatially at rest.
## V Fermi coordinates in Godel-type spacetimes
To explore spin-gravity coupling in Fermi coordinates, it is convenient to set up a quasi-inertial system of coordinates based on the nonrotating spatial frame adapted to a fiducial geodesic observer that is at rest in space with fixed \((x,y,z)\) coordinates and 4-velocity vector \(u^{\mu}=\delta^{\mu}_{0}\) in Godel-type spacetime (1). The reference observer establishes in the neighborhood of its world line a Fermi normal coordinate system based on the parallel-propagated spatial frame \(\lambda^{\mu}{}_{\hat{i}}\), \(i=1,2,3\), given by Eq. (11). That is, at each event \(\bar{x}^{\mu}(\tau)\) on its world line, there is a local hypersurface formed by all spacelike geodesic curves that are orthogonal to the observer's world line at \(\bar{x}^{\mu}(\tau)\). Consider an event with coordinates \(x^{\mu}\) on this hypersurface that can be connected to \(\bar{x}^{\mu}(\tau)\) by a unique spacelike geodesic of proper length \(\ell\). Then, the reference observer can assign Fermi coordinates \(X^{\mu}=(T,X^{i})\) to \(x^{\mu}\) such that
\[T:=\tau\,,\qquad X^{i}:=\ell\,\xi^{\mu}\lambda_{\mu}{}^{\hat{i}}(\tau)\,. \tag{37}\]
Here, \(\xi^{\mu}\), \(\xi^{\mu}\,u_{\mu}=0\), is a unit spacelike vector tangent to the unique spacelike geodesic at \(\bar{x}^{\mu}(\tau)\).
For the case of Godel's universe, one can find the exact Fermi metric coefficients [29]. The previous results are generalized here for Godel-type spacetime (1). For the spacelike geodesics \(x^{\mu}(\ell)\), we use Killing vector fields \(\partial_{t},\partial_{y}\) and \(\partial_{z}\) to derive the equations of motion
\[t^{\prime}+\sqrt{\sigma}\,e^{\mu x}\,y^{\prime}=E\,,\quad\sqrt{ \sigma}\,e^{\mu x}\,t^{\prime}-\kappa\,e^{2\mu x}\,y^{\prime}=k\,, \tag{38}\] \[z^{\prime}=h\,,\quad-\,t^{\prime 2}-2\sqrt{\sigma}\,e^{\mu x}\,t^{ \prime}\,y^{\prime}+x^{\prime 2}+\kappa e^{2\mu x}\,y^{\prime 2}+z^{\prime 2}=1\,. \tag{39}\]
Here, \(E,k\) and \(h\) are integration constants; moreover, a prime denotes the derivative of a spacetime coordinate with respect to proper length \(\ell\), e.g., \(t^{\prime}=dt/d\ell\). The condition \(\xi_{\mu}\lambda^{\mu}{}_{\hat{0}}=0\), where \(\xi^{\mu}=dx^{\mu}/d\ell\), implies \(E=0\). Then, with \(z=h\,\ell\) and \(E=0\), we find

\[t^{\prime}=\frac{\sqrt{\sigma}\,e^{-\,\mu x}\,k}{\sigma+\kappa}\,, \tag{40}\] \[y^{\prime}=-\,\frac{e^{-\,2\mu x}\,k}{\sigma+\kappa}\,,\] (41) \[x^{\prime 2}+\frac{e^{-\,2\mu x}\,k^{2}}{\sigma+\kappa}=1-h^{2}\,. \tag{42}\]
Ordinary differential equation (42) has the general solution for \(x(\ell)\) given by
\[e^{\mu x}=\alpha_{0}\,\cosh\left(a\ell+b\right), \tag{43}\]
where the constant parameters are fixed as
\[\alpha_{0}\,a=\frac{\left|k\right|\mu}{\sqrt{\sigma+\kappa}}\,,\qquad a=\mu \sqrt{1-h^{2}} \tag{44}\]
and the condition
\[\alpha_{0}\,\cosh b=1 \tag{45}\]
is imposed to satisfy \(x(0)=0\).
Substituting Eq. (43) into Eqs. (40) and (41), we find the solutions for \(t(\ell)\) and \(y(\ell)\):
\[t-\tau=\frac{2}{\mu}\,\sqrt{\frac{\sigma}{\sigma+\kappa}}\frac{k }{\left|k\right|}\,\left[\arctan e^{a\ell+b}-\arctan e^{b}\right]\,, \tag{46}\] \[y=-\,\frac{1}{\mu\,\alpha_{0}}\,\frac{1}{\sqrt{\sigma+\kappa}} \,\frac{k}{\left|k\right|}\,\left[\tanh\left(a\ell+b\right)-\tanh b\right]\,. \tag{47}\]
Then, making use of Eqs. (8)-(11) and (37), we derive for the Fermi coordinates
\[T=\tau\,,\qquad Z=\ell\,h\,, \tag{48}\] \[X\cos(\Omega T)-Y\sin(\Omega T)=\frac{\ell\left|k\right|}{\sqrt{ \sigma+\kappa}}\,\sinh b\,,\] (49) \[X\sin(\Omega T)+Y\cos(\Omega T)=-\,\frac{\ell\,k}{\sqrt{\sigma+ \kappa}}\,. \tag{50}\]
As in [29], we introduce cylindrical coordinates
\[X=\rho\cos\theta\,,\qquad Y=\rho\sin\theta \tag{51}\]
and recast Eqs. (49)-(50) into
\[\cos(\theta+\Omega T) =\tanh b\,, \tag{52}\] \[\sin(\theta+\Omega T) =-\,\frac{|k|}{k\,\cosh b}\,,\] (53) \[\mu\rho =\ell\,a\,. \tag{54}\]
As a result, we rewrite the solutions (43), (47) and (46) as
\[e^{\mu x} =\cosh(\mu\rho)+\sinh(\mu\rho)\,\cos(\theta+\Omega T)\,, \tag{55}\] \[\sqrt{\sigma+\kappa}\,\mu\,y =\frac{\tanh(\mu\rho)\,\sin(\theta+\Omega T)}{1+\tanh(\mu\rho)\, \cos(\theta+\Omega T)}\,,\] (56) \[\tan\left[\frac{\mu}{2}\sqrt{\frac{\sigma+\kappa}{\sigma}}\,(T-t )\right] =\frac{(e^{\mu\rho}-1)\sin(\theta+\Omega T)}{1-\cos(\theta+\Omega T )+\left[1+\cos(\theta+\Omega T)\right]e^{\mu\rho}}\,. \tag{57}\]
Finally, the transformation from \((t,x,y,z)\) to Fermi coordinates \((T,X,Y,Z)\) can be conveniently written in terms of the new variables
\[\mathfrak{R}=\mu\rho\,,\qquad\mathfrak{F}=\theta+\Omega T\,, \tag{58}\]
as follows:
\[e^{\mu x} =\cosh\mathfrak{R}+\sinh\mathfrak{R}\,\cos\mathfrak{F}\,, \tag{59}\] \[\sqrt{\sigma+\kappa}\,\mu\,y =\frac{\sinh\mathfrak{R}\,\sin\mathfrak{F}}{\cosh\mathfrak{R}+ \sinh\mathfrak{R}\,\cos\mathfrak{F}}\,,\] (60) \[\tan\left[\frac{\mu}{2}\sqrt{\frac{\sigma+\kappa}{\sigma}}\,(T-t )\right] =\frac{\left(e^{\mathfrak{R}}-1\right)\sin\mathfrak{F}}{1-\cos \mathfrak{F}+\left(1+\cos\mathfrak{F}\right)e^{\mathfrak{R}}}\,. \tag{61}\]
By differentiation, we get
\[dt+\sqrt{\sigma}\,e^{\mu x}dy =dT+\frac{1}{\mu}\sqrt{\frac{\sigma}{\sigma+\kappa}}\left(\cosh \mathfrak{R}-1\right)d\mathfrak{F}\,, \tag{62}\] \[dx^{2}+\left(\kappa+\sigma\right)e^{2\mu x}\,dy^{2} =\frac{1}{\mu^{2}}\left(d\mathfrak{R}^{2}+\sinh^{2}\mathfrak{R} \,d\mathfrak{F}^{2}\right)\,. \tag{63}\]
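These differential identities can be spot-checked numerically, as in the following sympy sketch; the sample parameter values are arbitrary choices of ours and the coefficients of \(dT\), \(d\mathfrak{R}\) and \(d\mathfrak{F}\) are compared term by term.

```python
# Numerical spot-check of the identities (62)-(63) implied by the transformation (59)-(61).
import sympy as sp

T, R, F = sp.symbols('T R F', real=True)               # Fermi time and the variables (58)
sig, kap, mu = sp.symbols('sigma kappa mu', positive=True)

ex = sp.cosh(R) + sp.sinh(R)*sp.cos(F)                  # e^{mu x}, Eq. (59)
x = sp.log(ex)/mu
y = sp.sinh(R)*sp.sin(F)/(ex*sp.sqrt(sig + kap)*mu)     # Eq. (60)
t = T - (2/mu)*sp.sqrt(sig/(sig + kap))*sp.atan(
    (sp.exp(R) - 1)*sp.sin(F)/(1 - sp.cos(F) + (1 + sp.cos(F))*sp.exp(R)))   # Eq. (61)

# dt + sqrt(sigma) e^{mu x} dy, expanded in dT, dR, dF; compare with Eq. (62):
lhs = [sp.diff(t, v) + sp.sqrt(sig)*ex*sp.diff(y, v) for v in (T, R, F)]
rhs = [sp.Integer(1), sp.Integer(0), (sp.cosh(R) - 1)*sp.sqrt(sig/(sig + kap))/mu]

# dx^2 + (kappa+sigma) e^{2 mu x} dy^2 as a quadratic form in (dR, dF); compare with Eq. (63):
J = sp.Matrix([[sp.diff(x, R), sp.diff(x, F)],
               [sp.diff(y, R), sp.diff(y, F)]])
Q = J.T*sp.diag(1, (kap + sig)*ex**2)*J
Q_expected = sp.diag(1, sp.sinh(R)**2)/mu**2

vals = {T: 0.0, R: 0.7, F: 0.4, sig: 2.3, kap: 0.5, mu: 1.9}   # arbitrary sample point
print([sp.N((a - b).subs(vals), 8) for a, b in zip(lhs, rhs)])   # ~ [0, 0, 0]
print(sp.N((Q - Q_expected).subs(vals), 8))                      # ~ zero 2x2 matrix
```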
It remains to substitute these results into Eq. (10) to derive the line element of the Godel-type universe in terms of Fermi coordinates. We find
\[ds^{2}= -\,\left(1+\mathbb{L}\right)dT^{2}-2\Omega\,\mathbb{K}\,dT\left( XdY-YdX\right)\] \[+dX^{2}+dY^{2}+dZ^{2}+\frac{\mathbb{F}}{X^{2}+Y^{2}}(XdY-YdX)^{2}\,, \tag{64}\]
where
\[\mathbb{L} =\frac{\sigma}{4(\sigma+\kappa)}\left[\sinh^{2}\mathfrak{R}-\frac{ \sigma+2\kappa}{\sigma+\kappa}(\cosh\mathfrak{R}-1)^{2}\right]\,, \tag{65}\] \[\mathbb{K} =-\,\frac{\kappa}{\sigma+\kappa}\,\frac{(\cosh\mathfrak{R}-1)^{2 }}{\mathfrak{R}^{2}}\,,\] (66) \[\mathbb{F} =\frac{\sinh^{2}\mathfrak{R}}{\mathfrak{R}^{2}}-1-\frac{\sigma}{ \sigma+\kappa}\,\frac{(\cosh\mathfrak{R}-1)^{2}}{\mathfrak{R}^{2}} \tag{67}\]
are functions of the variable
\[\mathfrak{R}=2\Omega\,\sqrt{\frac{\sigma+\kappa}{\sigma}}\,(X^{2}+Y^{2})^{1/2}\,. \tag{68}\]
## VI Spin-gravity coupling in Fermi coordinates
In general, the spacetime metric in the Fermi system is given by
\[ds^{2}=\hat{g}_{\mu\nu}\,dX^{\mu}dX^{\nu}\,, \tag{69}\]
where
\[\hat{g}_{00}=-1-R_{\hat{0}\hat{i}\hat{0}\hat{j}}(T)\,X^{i}X^{j}+\cdots\,,\quad \hat{g}_{0i}=-\frac{2}{3}\,R_{\hat{0}\hat{j}\hat{i}\hat{k}}(T)\,X^{j}X^{k}+\cdots \tag{70}\]
and
\[\hat{g}_{ij}=\delta_{ij}-\frac{1}{3}\,R_{\hat{i}\hat{k}\hat{j}\hat{l}}(T)\,X^ {k}X^{l}+\cdots\,. \tag{71}\]
In these expansions in powers of spatial Fermi coordinates, the coefficients are in general functions of \(T\) and consist of components of the Riemann curvature tensor and its covariant derivatives as measured by the reference observer that permanently occupies the spatial origin of the Fermi coordinate system. That is, the metric of the Fermi normal coordinate system established on the basis of a parallel-propagated spatial frame along the world line of a geodesic observer is the Minkowski metric plus perturbations caused by the curvature of spacetime. Fermi coordinates are admissible within a cylindrical spacetime region around the world line of the fiducial observer, and the radius of this cylinder is given by an appropriate radius of curvature of spacetime [29].
As defined in Eq. (32), \(R_{\hat{\alpha}\hat{\beta}\hat{\gamma}\hat{\delta}}\) are evaluated at the origin of spatial Fermi coordinates via the projection of the Riemann tensor on the tetrad frame \(\lambda^{\mu}{}_{\hat{\alpha}}\) of the fiducial observer; indeed, for the stationary Godel-type spacetime the nonzero components of \(R_{\hat{\alpha}\hat{\beta}\hat{\gamma}\hat{\delta}}\) are constants and
can be obtained from
\[R_{\hat{0}\hat{1}\hat{0}\hat{1}}=R_{\hat{0}\hat{2}\hat{0}\hat{2}}=\Omega^{2}, \qquad R_{\hat{1}\hat{2}\hat{1}\hat{2}}=-\,\left(1+\frac{4\kappa}{\sigma}\right) \Omega^{2} \tag{72}\]
via the symmetries of the Riemann curvature tensor.
We define the curvature-based gravitoelectric potential \(\hat{\Phi}\) and gravitomagnetic vector potential \(\hat{\mathbf{A}}\) via \(\hat{g}_{00}=-1+2\hat{\Phi}\) and \(\hat{g}_{0i}=-2\hat{A}_{i}\)[35; 36]. Indeed,
\[\hat{\Phi}=-\frac{1}{2}\,R_{\hat{0}\hat{i}\hat{0}\hat{j}}\,X^{i}X^{j}+\cdots, \qquad\hat{A}_{i}=\frac{1}{3}\,R_{\hat{0}\hat{j}\hat{i}\hat{k}}\,X^{j}X^{k}+\cdots\,. \tag{73}\]
The corresponding fields are given by
\[\hat{\mathbf{E}}=-\mathbf{\nabla}\hat{\Phi}\,,\qquad\hat{\mathbf{B}}=\mathbf{\nabla}\times\hat {\mathbf{A}}\,, \tag{74}\]
as expected; more explicitly,
\[\hat{E}_{i}=R_{\hat{0}\hat{i}\hat{0}\hat{j}}\,X^{j}+\cdots\,,\qquad\hat{B}_{i }=-\frac{1}{2}\,\epsilon_{ijk}\,R^{\hat{j}\hat{k}}_{\hat{0}\hat{l}}\,X^{l}+\cdots\,. \tag{75}\]
To lowest order, the gravitomagnetic field vanishes in the Godel-type spacetime; therefore, we need to compute higher-order terms.
In Section V, we derived the exact Fermi metric coefficients for the Godel-type universe. They are given explicitly by
\[\hat{g}_{00}= -1-\frac{\sigma}{4(\sigma+\kappa)}\left[\sinh^{2}\mathfrak{R}- \frac{\sigma+2\kappa}{\sigma+\kappa}(\cosh\,\mathfrak{R}-1)^{2}\right], \tag{76}\] \[\hat{g}_{0i}= -\frac{\kappa}{\sigma+\kappa}\,\Omega\,\frac{(\cosh\mathfrak{R} -1)^{2}}{\mathfrak{R}^{2}}(Y,-X,0) \tag{77}\]
and
\[(\hat{g}_{ij})=\begin{bmatrix}1+\mathbb{A}&-\mathbb{C}&0\\ -\,\mathbb{C}&1+\mathbb{B}&0\\ 0&0&1\end{bmatrix}\,, \tag{78}\]
where
\[\mathbb{A}=\mathbb{F}\,\frac{Y^{2}}{X^{2}+Y^{2}}\,,\qquad\mathbb{B}=\mathbb{F }\,\frac{X^{2}}{X^{2}+Y^{2}}\,,\qquad\mathbb{C}=\mathbb{F}\,\frac{XY}{X^{2}+Y ^{2}}\,. \tag{79}\]
The exact Fermi coordinate system has been established around the fiducial observer fixed at \(X=Y=Z=0\).
For \(\kappa\geq 0\), there are no closed timelike curves. In the special case of the Godel universe with parameters (13), there are no closed timelike curves within a cylindrical region about the \(Z\) axis with
\[\mathfrak{R}=\sqrt{2}\,\Omega\,(X^{2}+Y^{2})^{1/2}\leq\mathfrak{R}_{\max}\,, \qquad\mathfrak{R}_{\max}=2\ln(1+\sqrt{2})\,. \tag{80}\]
Indeed, a circle in the \((X,Y)\) plane inside this domain is spacelike; however, it becomes null for \(\mathfrak{R}=\mathfrak{R}_{\rm max}\) and timelike for \(\mathfrak{R}>\mathfrak{R}_{\rm max}\).
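This statement is easy to check: by metric (64), a circle \(X=\rho\cos\theta\), \(Y=\rho\sin\theta\) at fixed \(T\) and \(Z\) has squared length \(\rho^{2}(1+\mathbb{F})\,d\theta^{2}\), and Eq. (67) gives \(1+\mathbb{F}=[\sinh^{2}\mathfrak{R}-\tfrac{\sigma}{\sigma+\kappa}(\cosh\mathfrak{R}-1)^{2}]/\mathfrak{R}^{2}\). The short sympy sketch below (with our own variable names) locates the null circle in the Godel case.

```python
# Godel case sigma/(sigma+kappa) = 2: the circle is null when sinh^2 R = 2 (cosh R - 1)^2.
import sympy as sp

c = sp.symbols('c', positive=True)                          # c = cosh(R)
print(sp.solve(sp.Eq((c - 1)*(c + 1), 2*(c - 1)**2), c))    # [1, 3]  ->  cosh(R_max) = 3
print(sp.N(sp.acosh(3) - 2*sp.log(1 + sp.sqrt(2))))         # ~ 0, reproducing Eq. (80)
# For kappa >= 0 the prefactor sigma/(sigma+kappa) <= 1, so 1 + F stays positive for all
# R > 0 and no such null (or timelike) circles occur, in line with the text.
```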
The stationary and divergence-free gravitomagnetic vector field of the Godel-type universe is given by \(\hat{B}_{1}=\hat{B}_{2}=0\) and
\[\hat{B}_{3}=-\,\frac{\kappa}{\sigma+\kappa}\,\Omega(\cosh\mathfrak{R}-1)\frac{ \sinh\mathfrak{R}}{\mathfrak{R}}\,. \tag{81}\]
It is interesting to note that \(\hat{B}_{3}\) and its first derivative with respect to \(\mathfrak{R}\) vanish at \(\mathfrak{R}=0\); then, \(\hat{B}_{3}\) monotonically increases with increasing \(\mathfrak{R}\) and diverges as \(\mathfrak{R}\to\infty\). More explicitly,
\[\hat{B}_{3}=-\,\frac{2\kappa}{\sigma}\Omega^{3}(X^{2}+Y^{2})\left[1+\Omega^{2 }\Big{(}\frac{\sigma+\kappa}{\sigma}\Big{)}(X^{2}+Y^{2})+\frac{2\Omega^{4}}{5 }\Big{(}\frac{\sigma+\kappa}{\sigma}\Big{)}^{2}(X^{2}+Y^{2})^{2}+\cdots \right]\,, \tag{82}\]
so that the fiducial observer measures a null gravitomagnetic field at its location (\(X=Y=Z=0\)). Furthermore, the gravitomagnetic field away from the \(Z\) axis points along \(Z\) and is cylindrically symmetric; indeed, it vanishes all along \(Z\), but increases monotonically away from the \(Z\) axis and eventually diverges as the radius of the cylinder about the \(Z\) axis approaches infinity.
Within the Fermi coordinate system, it is useful to define the class of fundamental observers that remain at rest in space, each with fixed \((X,Y,Z)\) coordinates. For our present purposes, we concentrate on the set of fundamental observers that occupy a cylindrical region in the neighborhood of the \(Z\) axis. Specifically, in this region we can express the metric tensor in Fermi coordinates as
\[\hat{g}_{\mu\nu}=\eta_{\mu\nu}+\hat{h}_{\mu\nu}\,, \tag{83}\]
where the nonzero components of the gravitational potentials are given by
\[\hat{h}_{00}=-\,\Omega^{2}(X^{2}+Y^{2})\,,\quad\hat{h}_{01}=-\,\frac{\kappa} {\sigma}\Omega^{3}(X^{2}+Y^{2})Y\,,\quad\hat{h}_{02}=\frac{\kappa}{\sigma} \Omega^{3}(X^{2}+Y^{2})X\,, \tag{84}\]
and
\[\hat{h}_{11}=\frac{1}{3}\Big{(}1+\frac{4\kappa}{\sigma}\Big{)}\Omega^{2}Y^{2} \,,\quad\hat{h}_{12}=-\,\frac{1}{3}\Big{(}1+\frac{4\kappa}{\sigma}\Big{)} \Omega^{2}XY\,,\quad\hat{h}_{22}=\frac{1}{3}\Big{(}1+\frac{4\kappa}{\sigma} \Big{)}\Omega^{2}X^{2}\,. \tag{85}\]
That is, for the sake of simplicity we confine our considerations to a cylindrical region about the \(Z\) axis such that \(\Omega\left|X\right|,\ \Omega\left|Y\right|\lesssim\varepsilon\), where \(0<\varepsilon\ll 1\) and all terms of order \(\varepsilon^{4}\) and higher are neglected in our analysis.
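The truncated potentials can be recovered from the exact Fermi metric coefficients (76)-(79) by a series expansion, as in the sympy sketch below; the helper names and the extraction of the quadratic coefficient in \(\mathfrak{R}\) are our own bookkeeping devices.

```python
# Series check that the exact Fermi potentials (76)-(79), with R given by Eq. (68),
# reduce to the quadratic and cubic potentials (84)-(85) near the Z axis.
import sympy as sp

X, Y, Rs = sp.symbols('X Y R', positive=True)
Om, sig, kap = sp.symbols('Omega sigma kappa', positive=True)
r2 = X**2 + Y**2
R2 = 4*Om**2*(sig + kap)/sig*r2                 # R^2 from Eq. (68)

L_ = sig/(4*(sig + kap))*(sp.sinh(Rs)**2 - (sig + 2*kap)/(sig + kap)*(sp.cosh(Rs) - 1)**2)
K_ = -kap/(sig + kap)*(sp.cosh(Rs) - 1)**2/Rs**2
F_ = sp.sinh(Rs)**2/Rs**2 - 1 - sig/(sig + kap)*(sp.cosh(Rs) - 1)**2/Rs**2

def quadratic_in_R(expr):
    """Leading (R^2) term of an even function of R, re-expressed through Eq. (68)."""
    coeff = sp.limit(sp.series(expr, Rs, 0, 4).removeO()/Rs**2, Rs, 0)
    return sp.simplify(coeff*R2)

h00 = -quadratic_in_R(L_)                  # g_00 = -(1 + L)
h01 = quadratic_in_R(K_)*Om*Y              # g_0X = Omega K Y from the (X dY - Y dX) term of (64)
h02 = -quadratic_in_R(K_)*Om*X             # g_0Y = -Omega K X
h11 = quadratic_in_R(F_)*Y**2/r2           # g_XX - 1 = F Y^2/(X^2 + Y^2)

print(sp.simplify(h00 + Om**2*r2))                          # 0, cf. Eq. (84)
print(sp.simplify(h01 + kap/sig*Om**3*r2*Y))                # 0, cf. Eq. (84)
print(sp.simplify(h02 - kap/sig*Om**3*r2*X))                # 0, cf. Eq. (84)
print(sp.simplify(h11 - (1 + 4*kap/sig)/3*Om**2*Y**2))      # 0, cf. Eq. (85)
```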
In the cylindrical neighborhood of the fiducial observer under consideration, fundamental observers have access to adapted orthonormal tetrad frames \(\varphi^{\mu}{}_{\hat{\alpha}}\) given in the Fermi coordinate \((T,X,Y,Z)\) system by
\[\varphi^{\mu}{}_{\hat{0}}=(1+\tfrac{1}{2}\hat{h}_{00},0,0,0)\,, \qquad\varphi^{\mu}{}_{\hat{1}}=(\hat{h}_{01},1-\tfrac{1}{2}\hat{h}_{11},0,0)\,, \tag{86}\] \[\varphi^{\mu}{}_{\hat{2}}=(\hat{h}_{02},-\hat{h}_{12},1-\tfrac{1}{ 2}\hat{h}_{22},0)\,,\qquad\varphi^{\mu}{}_{\hat{3}}=(0,0,0,1)\,. \tag{87}\]
These tetrad axes are primarily along the Fermi coordinate directions; indeed, for \(X=Y=0\), \(\varphi^{\mu}{}_{\hat{\alpha}}\to\lambda^{\mu}{}_{\hat{\alpha}}\). According to these fundamental observers, a spinning particle held at rest in space has a 4-velocity vector in the Fermi system given by \(\hat{U}^{\mu}=\varphi^{\mu}{}_{\hat{0}}\); moreover, its spin vector has measured components
\[\hat{S}_{\hat{0}}=0\,,\qquad\hat{S}_{\hat{i}}=\hat{S}_{\mu}\,\varphi^{\mu}{}_ {\hat{i}}\,, \tag{88}\]
since \(\hat{S}^{\mu}\,\hat{U}_{\mu}=0\). Furthermore, the gravitomagnetic field at the location of the spin is given by
\[\hat{B}_{1}=0\,,\qquad\hat{B}_{2}=0\,,\qquad\hat{B}_{3}=-\,\frac{1}{2}(\partial _{X}\,\hat{h}_{02}-\partial_{Y}\,\hat{h}_{01})=-\,\frac{2\kappa}{\sigma}\Omega ^{3}(X^{2}+Y^{2})\,, \tag{89}\]
in agreement with Eq. (82) within our approximation scheme. The Hamiltonian for spin-gravity coupling in the Fermi frame is thus given by
\[\hat{\mathcal{H}}_{sg}=\boldsymbol{\hat{S}}\cdot\boldsymbol{\hat{B}}=-\, \frac{2\kappa}{\sigma}\Omega^{3}(X^{2}+Y^{2})\hat{S}^{\hat{3}}\,, \tag{90}\]
which reduces in our approximation to \(-\,\frac{2\kappa}{\sigma}\Omega^{3}(X^{2}+Y^{2})S^{\hat{3}}\), where \(S^{\hat{3}}\) is a constant. The corresponding Stern-Gerlach force is then
\[-\,\partial_{\mu}\,\hat{\mathcal{H}}_{sg}=\frac{4\kappa}{\sigma}\,\Omega^{3} \,S^{\hat{3}}(0,X,Y,0)\,. \tag{91}\]
Next, we need to compute the Mathisson force in the Fermi frame, namely,
\[\hat{\mathcal{F}}_{\mu}=-\,\frac{1}{2}\,\hat{R}_{\mu\nu\alpha\beta}\hat{U}^{ \nu}\,\hat{S}^{\alpha\beta}\,. \tag{92}\]
For metric (83), the curvature tensor to first order in the perturbation is given by
\[\hat{R}_{\mu\nu\alpha\beta}=\frac{1}{2}\,(\hat{h}_{\mu\beta,\,\nu\alpha}+\hat {h}_{\nu\alpha,\,\mu\beta}-\hat{h}_{\nu\beta,\,\mu\alpha}-\hat{h}_{\mu\alpha, \,\nu\beta})\,. \tag{93}\]
We are interested in the gravitomagnetic components of this curvature tensor as measured by the fundamental observers. Projection of this tensor on the tetrad frame \(\varphi^{\mu}{}_{\hat{\alpha}}\) does not
affect its components in our approximation scheme. We find in this case
\[(\hat{\mathbb{H}}_{\hat{i}\hat{j}})=\begin{bmatrix}0&0&\kappa_{1}\\ 0&0&\kappa_{2}\\ 0&0&0\end{bmatrix}\,, \tag{94}\]
where
\[\kappa_{1}=\frac{1}{2}\partial_{X}\left(\partial_{X}\,\hat{h}_{02}-\partial_{Y }\,\hat{h}_{01}\right),\qquad\kappa_{2}=\frac{1}{2}\partial_{Y}\left(\partial _{X}\,\hat{h}_{02}-\partial_{Y}\,\hat{h}_{01}\right). \tag{95}\]
Hence, \(\hat{\mathcal{F}}_{\hat{0}}=0\) and \(\hat{\mathcal{F}}_{\hat{i}}=\hat{\mathbb{H}}_{\hat{i}\hat{j}}\hat{S}^{\hat{j}}= (\kappa_{1},\kappa_{2},0)S^{\hat{3}}\) at the level of approximation under consideration here. Moreover, Eq. (89) implies
\[\kappa_{1}=\frac{4\kappa}{\sigma}\,\Omega^{3}X\,,\qquad\kappa_{2}=\frac{4 \kappa}{\sigma}\,\Omega^{3}Y\,. \tag{96}\]
Therefore, \(\hat{\mathcal{F}}_{\mu}=-\,\partial_{\mu}\,\hat{\mathcal{H}}_{sg}\) as measured by the fundamental observers within the cylindrical domain in the Fermi frame.
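The equality of the two forces within this approximation can be confirmed by direct differentiation of the potentials (84), as in the short sympy sketch below (variable names are ours).

```python
# Check that the potentials (84) give the gravitomagnetic field (89), the curvature
# coefficients (96) and a Stern-Gerlach force equal to the Mathisson force, Eq. (91).
import sympy as sp

X, Y, S3 = sp.symbols('X Y S3', real=True)
Om, sig, kap = sp.symbols('Omega sigma kappa', positive=True)

h01 = -kap/sig*Om**3*(X**2 + Y**2)*Y          # Eq. (84)
h02 = kap/sig*Om**3*(X**2 + Y**2)*X

curl = sp.diff(h02, X) - sp.diff(h01, Y)
B3 = -curl/2                                                   # Eq. (89)
k1 = sp.diff(curl, X)/2                                        # Eq. (95)
k2 = sp.diff(curl, Y)/2

print(sp.simplify(B3 + 2*kap/sig*Om**3*(X**2 + Y**2)))                            # 0
print(sp.simplify(k1 - 4*kap/sig*Om**3*X), sp.simplify(k2 - 4*kap/sig*Om**3*Y))   # 0 0, cf. (96)

H_sg = B3*S3                                                   # Eq. (90) with constant S^3
force_SG = [-sp.diff(H_sg, v) for v in (X, Y)]                 # Eq. (91)
force_Mathisson = [k1*S3, k2*S3]                               # F_i = H_ij S^j, Eq. (94)
print([sp.simplify(a - b) for a, b in zip(force_SG, force_Mathisson)])   # [0, 0]
```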
We have thus far relied on the classical pole-dipole model for the evaluation of spin-gravity coupling. It is important to demonstrate that our considerations are consistent with the solutions of the Dirac equation in the Godel-type universe.
## VII Dirac equation in the Godel-type universe
Let us start with the Dirac equation in the form [37; 38]
\[\left(i\gamma^{\alpha}\nabla_{\alpha}-m\right)\Psi=0\,,\qquad\nabla_{\mu}= \partial_{\mu}+\Gamma_{\mu}\,, \tag{97}\]
where the fermion wave function \(\Psi\) is a 4-component spacetime scalar variable composed of the pair of 2-spinors \(\varphi\) and \(\chi\):
\[\Psi=\begin{bmatrix}\varphi\\ \chi\end{bmatrix},\qquad\varphi=\begin{bmatrix}\varphi_{1}\\ \varphi_{2}\end{bmatrix},\qquad\chi=\begin{bmatrix}\chi_{1}\\ \chi_{2}\end{bmatrix}\,. \tag{98}\]
As before, we assume the observer in the gravitational field has a natural adapted orthonormal tetrad field and
\[\gamma^{\alpha}=e^{\alpha}_{\ \hat{\beta}}\,\gamma^{\hat{\beta}}\,,\qquad\{ \gamma^{\mu},\gamma^{\nu}\}=-2g^{\mu\nu}(x)I_{4}\,, \tag{99}\]
where \(I_{n}\) is the \(n\)-dimensional identity matrix and
\[\gamma^{\hat{0}}=\begin{bmatrix}I_{2}&0\\ 0&-I_{2}\end{bmatrix}\,,\qquad\gamma^{\hat{i}}=\begin{bmatrix}0&\sigma_{i}\\ -\sigma_{i}&0\end{bmatrix}\,. \tag{100}\]
Here, \(\sigma_{i}\) are Pauli matrices, namely,
\[\sigma_{1}=\begin{bmatrix}0&1\\ 1&0\end{bmatrix}\,,\qquad\sigma_{2}=\begin{bmatrix}0&-i\\ i&0\end{bmatrix}\,,\qquad\sigma_{3}=\begin{bmatrix}1&0\\ 0&-1\end{bmatrix}\,. \tag{101}\]
The spin connection \(\Gamma_{\mu}\) (also known as Fock-Ivanenko coefficients) is given by
\[\Gamma_{\mu}=-\,\frac{i}{4}\,e^{\nu}{}_{\hat{\alpha}}\,e_{\nu\hat{\beta};\mu}\,\sigma^{\hat{\alpha}\hat{\beta}}\,,\qquad\sigma^{\hat{\alpha}\hat{\beta}}:=\frac{i}{2}[\gamma^{\hat{\alpha}},\gamma^{\hat{\beta}}]\,. \tag{102}\]
Making use of tetrad frame (8), we find, after some algebra, the explicit form of the Dirac equation (97) in Godel-type spacetime (1):
\[\Big{[}\Big{(}\gamma^{\hat{0}}-\sqrt{\frac{\sigma}{\sigma+\kappa} }\,\gamma^{\hat{2}}\Big{)}i\partial_{t}-\gamma^{\hat{1}}\,p_{x}-\frac{e^{-\mu x }}{\sqrt{\kappa+\sigma}}\,\gamma^{\hat{2}}\,p_{y}-\gamma^{\hat{3}}\,p_{z}\] \[\qquad\qquad\qquad+\frac{i\mu}{2}\gamma^{\hat{1}}+\frac{\mu}{4} \sqrt{\frac{\sigma}{\sigma+\kappa}}\,\gamma^{\hat{0}}\,\Sigma^{\hat{3}}-m \Big{]}\Psi=0\,. \tag{103}\]
Here, as usual, the momentum operator is \(\mathbf{p}=-\,i\mathbf{\nabla}\) and the spin operator \(\mathbf{\Sigma}\) is given by the matrix
\[\Sigma^{\hat{i}}=\begin{bmatrix}\sigma_{i}&0\\ 0&\sigma_{i}\end{bmatrix}\,. \tag{104}\]
Next, due to the symmetries of Godel-type spacetime, we assume a solution of the form
\[\Psi=\psi(x)\exp(-i\,\omega\,t+i\,k_{2}\,y+i\,k_{3}\,z)\,, \tag{105}\]
where the four components of \(\psi(x)\) satisfy ordinary differential equations, namely,
\[\frac{d\psi}{dx}=\mathcal{M}\psi\,, \tag{106}\]
where \(\mathcal{M}\) is the \(4\times 4\) matrix
\[\mathcal{M}=\begin{bmatrix}\mathcal{A}_{+}&ik_{3}&0&i\mathcal{B}_{+}\\ -ik_{3}&-\mathcal{A}_{-}&i\mathcal{B}_{-}&0\\ 0&i\mathcal{B}_{+}&\mathcal{A}_{+}&ik_{3}\\ i\mathcal{B}_{-}&0&-ik_{3}&-\mathcal{A}_{-}\end{bmatrix}+im\begin{bmatrix}0& 0&0&1\\ 0&0&1&0\\ 0&-1&0&0\\ -1&0&0&0\end{bmatrix}\,. \tag{107}\]
Here, \(\mathcal{A}_{\pm}\) and \(\mathcal{B}_{\pm}\) are given by
\[\mathcal{A}_{\pm}=\omega\sqrt{\frac{\sigma}{\sigma+\kappa}}\pm\Omega\,\sqrt{ \frac{\sigma+\kappa}{\sigma}}+k_{2}\,e^{-\,\mu x}\,,\qquad\mathcal{B}_{\pm}= \omega\pm\frac{\Omega}{2}\,. \tag{108}\]
The spin-vorticity-gravity coupling is evident in the way the frequency of the radiation is changed by \(\pm\Omega/2\) in agreement with previous results [36; 39; 40]. If \(k_{2}=0\), the waves can only travel parallel or antiparallel to the rotation axis. In this case, matrix \(\mathcal{M}\) has constant elements and the general solution of Eq. (106) can be expressed in terms of the eigenvalues and eigenfunctions of \(\mathcal{M}\). It turns out that no propagation can occur in this case due to the requirement that the wave amplitude be finite at all times [40]. These general results for the Dirac equation are consistent with the propagation of scalar and electromagnetic waves in the Godel-type universe; for brief accounts of these latter topics, see the appendices at the end of this paper.
To deal with the general case, we henceforth assume \(k_{2}\neq 0\) and change to \(\xi=e^{-\,\mu x}\) instead of \(x\) as the independent variable. Let us recall here that \(\mu>0\), since we have explicitly assumed \(\Omega>0\). For \(\infty>x>-\infty\), we find \(\xi\) goes from zero to \(+\infty\); hence, \(\xi\) is a radial coordinate. In terms of \(\xi\), Eq. (106) takes the form
\[\xi\,\frac{d\psi}{d\xi}=\mathbb{M}\psi\,, \tag{109}\]
where matrix \(\mathbb{M}\) is simply related to \(\mathcal{M}\), namely,
\[\mathbb{M}=\begin{bmatrix}-\bar{\mathcal{A}}_{+}&-i\gamma&0&-i\bar{\mathcal{B }}_{+}\\ i\gamma&\bar{\mathcal{A}}_{-}&-i\bar{\mathcal{B}}_{-}&0\\ 0&-i\bar{\mathcal{B}}_{+}&-\bar{\mathcal{A}}_{+}&-i\gamma\\ -i\bar{\mathcal{B}}_{-}&0&i\gamma&\bar{\mathcal{A}}_{-}\end{bmatrix}-\frac{ im}{2\Omega}\,\sqrt{\frac{\sigma}{\sigma+\kappa}}\begin{bmatrix}0&0&0&1\\ 0&0&1&0\\ 0&-1&0&0\\ -1&0&0&0\end{bmatrix}\,. \tag{110}\]
Here, \(\bar{\mathcal{A}}_{\pm}\) and \(\bar{\mathcal{B}}_{\pm}\) are given by
\[\bar{\mathcal{A}}_{\pm}=\frac{\omega}{2\Omega}\left(\frac{\sigma}{\sigma+ \kappa}\right)\pm\frac{1}{2}+\beta\,\xi\,,\qquad\bar{\mathcal{B}}_{\pm}= \frac{1}{2}\,\sqrt{\frac{\sigma}{\sigma+\kappa}}\left(\frac{\omega}{\Omega} \pm\frac{1}{2}\right) \tag{111}\]
and we have introduced dimensionless parameters
\[\beta=\frac{k_{2}}{2\Omega}\,\sqrt{\frac{\sigma}{\sigma+\kappa}}\,,\qquad \gamma=\frac{k_{3}}{2\Omega}\,\sqrt{\frac{\sigma}{\sigma+\kappa}}\,. \tag{112}\]
To clarify the structure of the resulting system (109)-(110), we note that the 4-spinor (98) can be decomposed into the sum of the left and right spinors,
\[\psi=\psi^{L}+\psi^{R}\,,\quad\psi^{L}=\frac{1}{2}(1-\gamma_{5})\psi\,,\quad \psi^{R}=\frac{1}{2}(1+\gamma_{5})\psi\,, \tag{113}\]
where \(\gamma_{5}:=i\,\gamma^{0}\gamma^{1}\gamma^{2}\gamma^{3}\). By definition, the left and right spinors are eigenstates of the \(\gamma_{5}\) matrix: \(\gamma_{5}\psi^{L}=-\,\psi^{L}\) and \(\gamma_{5}\psi^{R}=\psi^{R}\). Furthermore, we decompose the left and right spinors
into the eigenstates of the \(\Sigma^{\hat{3}}\) spin matrix (i.e., "spin-up" and "spin-down" states):
\[\psi^{L}=\psi^{L}_{+}+\psi^{L}_{-}\,,\quad\psi^{R}=\psi^{R}_{+}+\psi^{R}_{-}\,, \quad\Sigma^{\hat{3}}\psi^{L}_{\pm}=\pm\psi^{L}_{\pm}\,,\quad\Sigma^{\hat{3}} \psi^{R}_{\pm}=\pm\psi^{R}_{\pm}\,. \tag{114}\]
After these steps, we thus have
\[\psi^{L}_{+}=\mathcal{L}_{+}\begin{bmatrix}1\\ 0\\ 1\\ 0\end{bmatrix},\quad\psi^{L}_{-}=\mathcal{L}_{-}\begin{bmatrix}0\\ 1\\ 0\\ 1\end{bmatrix},\quad\psi^{R}_{+}=\mathcal{R}_{+}\begin{bmatrix}1\\ 0\\ -1\\ 0\end{bmatrix},\quad\psi^{R}_{-}=\mathcal{R}_{-}\begin{bmatrix}0\\ 1\\ 0\\ -1\end{bmatrix}, \tag{115}\]
where explicitly
\[\mathcal{L}_{+}=\frac{1}{2}(\varphi_{1}+\chi_{1})\,,\qquad \mathcal{L}_{-}=\frac{1}{2}(\varphi_{2}+\chi_{2})\,, \tag{116}\] \[\mathcal{R}_{+}=\frac{1}{2}(\varphi_{1}-\chi_{1})\,,\qquad \mathcal{R}_{-}=\frac{1}{2}(\varphi_{2}-\chi_{2})\,. \tag{117}\]
Taking these definitions into account, we can straightforwardly recast system (109)-(110) into an equivalent but more transparent form:
\[\Big{(}\xi\frac{d}{d\xi}+\bar{\mathcal{A}}_{+}\Big{)}\mathcal{L} _{+}= -i(\bar{\mathcal{B}}_{+}+\gamma)\mathcal{L}_{-}+i\frac{m}{2 \Omega}\,\sqrt{\frac{\sigma}{\sigma+\kappa}}\,\mathcal{R}_{-}\,, \tag{118}\] \[\Big{(}\xi\frac{d}{d\xi}-\bar{\mathcal{A}}_{-}\Big{)}\mathcal{L} _{-}= -i(\bar{\mathcal{B}}_{-}-\gamma)\mathcal{L}_{+}+i\frac{m}{2 \Omega}\,\sqrt{\frac{\sigma}{\sigma+\kappa}}\,\mathcal{R}_{+}\,,\] (119) \[\Big{(}\xi\frac{d}{d\xi}+\bar{\mathcal{A}}_{+}\Big{)}\mathcal{R} _{+}= i(\bar{\mathcal{B}}_{+}-\gamma)\mathcal{R}_{-}-i\frac{m}{2 \Omega}\,\sqrt{\frac{\sigma}{\sigma+\kappa}}\,\mathcal{L}_{-}\,,\] (120) \[\Big{(}\xi\frac{d}{d\xi}-\bar{\mathcal{A}}_{-}\Big{)}\mathcal{R} _{-}= i(\bar{\mathcal{B}}_{-}+\gamma)\mathcal{R}_{+}-i\frac{m}{2 \Omega}\,\sqrt{\frac{\sigma}{\sigma+\kappa}}\,\mathcal{L}_{+}\,. \tag{121}\]
The nontrivial mass mixes the left and right modes. However, for the massless (\(m=0\)) case or in the high-energy approximation (\(\frac{mc^{2}}{\hbar\Omega}\ll 1\)) we can neglect the last terms on the right-hand sides. As a result, the left modes \(\mathcal{L}_{\pm}\) decouple from the right modes \(\mathcal{R}_{\pm}\) and the system reduces to
\[\Big{(}\xi\frac{d}{d\xi}+\bar{\mathcal{A}}_{+}\Big{)}\mathcal{L} _{+}= -i(\bar{\mathcal{B}}_{+}+\gamma)\mathcal{L}_{-}\,, \tag{122}\] \[\Big{(}\xi\frac{d}{d\xi}-\bar{\mathcal{A}}_{-}\Big{)}\mathcal{L} _{-}= -i(\bar{\mathcal{B}}_{-}-\gamma)\mathcal{L}_{+}\,,\] (123) \[\Big{(}\xi\frac{d}{d\xi}+\bar{\mathcal{A}}_{+}\Big{)}\mathcal{R} _{+}= i(\bar{\mathcal{B}}_{+}-\gamma)\mathcal{R}_{-}\,,\] (124) \[\Big{(}\xi\frac{d}{d\xi}-\bar{\mathcal{A}}_{-}\Big{)}\mathcal{R} _{-}= i(\bar{\mathcal{B}}_{-}+\gamma)\mathcal{R}_{+}\,. \tag{125}\]
It is interesting to mention that in this approximation scheme Eq. (109) can also be solved by a different approach that is briefly described in Appendix A.
### Explicit solutions
Multiplying Eq. (122) by \(-\,i(\bar{\mathcal{B}}_{-}-\gamma)\) and Eq. (123) by \(-\,i(\bar{\mathcal{B}}_{+}+\gamma)\), we derive the second-order equations for the _left modes_:
\[\Big{(}\xi\frac{d}{d\xi}+\bar{\mathcal{A}}_{+}\Big{)}\Big{(}\xi \frac{d}{d\xi}-\bar{\mathcal{A}}_{-}\Big{)}\mathcal{L}_{-} =\big{[}-\,\bar{\mathcal{B}}_{+}\bar{\mathcal{B}}_{-}+\gamma( \bar{\mathcal{B}}_{+}-\bar{\mathcal{B}}_{-})+\gamma^{2}\big{]}\,\mathcal{L}_{- }\,, \tag{126}\] \[\Big{(}\xi\frac{d}{d\xi}-\bar{\mathcal{A}}_{+}\Big{)}\Big{(}\xi \frac{d}{d\xi}+\bar{\mathcal{A}}_{-}\Big{)}\mathcal{L}_{+} =\big{[}-\,\bar{\mathcal{B}}_{+}\bar{\mathcal{B}}_{-}+\gamma( \bar{\mathcal{B}}_{+}-\bar{\mathcal{B}}_{-})+\gamma^{2}\big{]}\,\mathcal{L}_{ +}\,. \tag{127}\]
In Eq. (111), it is useful to introduce a dimensionless parameter \(\alpha\),
\[\alpha:=\frac{\omega}{2\Omega}\left(\frac{\sigma}{\sigma+\kappa}\right)\,, \qquad\bar{\mathcal{A}}_{\pm}=\alpha+\beta\,\xi\pm\frac{1}{2}\,; \tag{128}\]
then,
\[\bar{\mathcal{A}}_{+}\bar{\mathcal{A}}_{-}=(\alpha+\beta\xi)^{2} -\frac{1}{4}\,,\qquad\bar{\mathcal{A}}_{+}-\bar{\mathcal{A}}_{-}=1\,, \tag{129}\] \[\bar{\mathcal{B}}_{+}\bar{\mathcal{B}}_{-}=\frac{\sigma}{\sigma+ \kappa}\Big{[}\frac{\omega^{2}}{(2\Omega)^{2}}-\frac{1}{16}\Big{]}\,,\qquad \bar{\mathcal{B}}_{+}-\bar{\mathcal{B}}_{-}=\frac{1}{2}\,\sqrt{\frac{\sigma }{\sigma+\kappa}}\,. \tag{130}\]
Employing the ansatz
\[\mathcal{L}_{\pm}=\xi^{-1}\,u_{\mp\frac{1}{2}}\,, \tag{131}\]
we can recast Eqs. (126) and (127) into the form
\[\xi^{2}\frac{d^{2}}{d\xi^{2}}u_{s}+\Big{[}\frac{1}{4}-\tilde{\mu}_{f}^{2}- \beta^{2}\xi^{2}-2\beta\xi\,(\alpha+s)\Big{]}\,u_{s}=0\,, \tag{132}\]
where \(s=\pm\frac{1}{2}\) and
\[\tilde{\mu}_{f}^{2} =\alpha^{2}+\gamma^{2}-\,\bar{\mathcal{B}}_{+}\bar{\mathcal{B}}_ {-}+\gamma(\bar{\mathcal{B}}_{+}-\bar{\mathcal{B}}_{-})\] \[=\frac{1}{\mu^{2}}\Big{[}-\,\omega^{2}\,\frac{\kappa}{\sigma+ \kappa}+\left(k_{3}-\Omega/2\right)^{2}\Big{]}\,. \tag{133}\]
With a new independent variable \(\tilde{\xi}=2|\beta|\xi\), Eq. (132) can be reduced to Whittaker's equation [41]
\[\frac{d^{2}u_{s}}{d\tilde{\xi}^{2}}+\left[-\,\frac{1}{4}+\frac{\tilde{\kappa} _{f}}{\tilde{\xi}}+\frac{\frac{1}{4}-\tilde{\mu}_{f}^{2}}{\tilde{\xi}^{2}} \right]u_{s}=0\,, \tag{134}\]
where
\[\tilde{\kappa}_{f}=-\,\frac{\beta}{|\beta|}\,(\alpha+s). \tag{135}\]
The Dirac field is a linear perturbation on the Godel-type spacetime; therefore, \(\psi(x)\) should be bounded. Demanding that \(\psi(x)\) be finite everywhere, the acceptable solution of Whittaker's equation is given via the confluent hypergeometric functions by
\[u_{s}=u_{s}^{0}\,\exp(-\tfrac{1}{2}\tilde{\xi})\,\tilde{\xi}^{\tfrac{1}{2}+\tilde {\mu}_{f}}\,_{1}F_{1}(\tfrac{1}{2}+\tilde{\mu}_{f}-\tilde{\kappa}_{f},1+2 \tilde{\mu}_{f};\tilde{\xi})\,, \tag{136}\]
where
\[\frac{1}{2}+\tilde{\mu}_{f}-\tilde{\kappa}_{f}=-\,n\,,\quad n=0,1,2,\ldots\,. \tag{137}\]
In this case, the confluent hypergeometric function can be expressed in terms of the associated Laguerre polynomial.
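The regularity of this solution can be verified numerically; the mpmath sketch below inserts Eq. (136) into Whittaker's equation (134) for arbitrary illustrative parameter values satisfying the quantization condition (137).

```python
# Numerical check that the bound solution (136) satisfies Whittaker's equation (134).
import mpmath as mp

def u(xi, mu_f, kap_f):
    return mp.exp(-xi/2)*xi**(mp.mpf('0.5') + mu_f)*mp.hyp1f1(0.5 + mu_f - kap_f, 1 + 2*mu_f, xi)

def residual(xi, mu_f, kap_f):
    d2 = mp.diff(lambda s: u(s, mu_f, kap_f), xi, 2)
    return d2 + (-mp.mpf('0.25') + kap_f/xi + (mp.mpf('0.25') - mu_f**2)/xi**2)*u(xi, mu_f, kap_f)

mu_f, n = mp.mpf('0.8'), 2                        # arbitrary sample values
kap_f = mp.mpf('0.5') + mu_f + n                  # quantization condition (137)
for xi in ('0.5', '1.0', '2.5'):
    print(xi, residual(mp.mpf(xi), mu_f, kap_f))  # ~ 0, up to numerical-differentiation error
```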
For \(k_{2}<0\), \(\beta\) is negative and \(\tilde{\kappa}_{f}=\alpha+s=\frac{\omega}{2\Omega}\left(\frac{\sigma}{\sigma+ \kappa}\right)+s\), with \(s=\pm\tfrac{1}{2}\). Then, combining Eqs. (137) and (133), we derive the dispersion relation
\[\omega=(2n+1+2s)\Omega\pm\left[(k_{3}-\Omega/2)^{2}-\frac{\kappa}{\sigma}(2n +1+2s)^{2}\Omega^{2}\right]^{1/2}\,. \tag{138}\]
Note that solutions with both signs of energy are admissible.
Similarly, multiplying Eq. (124) by \(i(\vec{\mathcal{B}}_{-}+\gamma)\) and Eq. (125) by \(i(\vec{\mathcal{B}}_{+}-\gamma)\), we derive the second-order equations for the _right modes_:
\[\Big{(}\xi\frac{d}{d\xi}+\vec{\mathcal{A}}_{+}\Big{)}\Big{(}\xi \frac{d}{d\xi}-\vec{\mathcal{A}}_{-}\Big{)}\mathcal{R}_{-} =\left[-\,\vec{\mathcal{B}}_{+}\vec{\mathcal{B}}_{-}-\gamma(\vec{ \mathcal{B}}_{+}-\vec{\mathcal{B}}_{-})+\gamma^{2}\right]\mathcal{R}_{-}\,, \tag{139}\] \[\Big{(}\xi\frac{d}{d\xi}-\vec{\mathcal{A}}_{+}\Big{)}\Big{(}\xi \frac{d}{d\xi}+\vec{\mathcal{A}}_{-}\Big{)}\mathcal{R}_{+} =\left[-\,\vec{\mathcal{B}}_{+}\vec{\mathcal{B}}_{-}-\gamma( \vec{\mathcal{B}}_{+}-\vec{\mathcal{B}}_{-})+\gamma^{2}\right]\mathcal{R}_{+}\,. \tag{140}\]
Using the ansatz
\[\mathcal{R}_{\pm}=\xi^{-1}\,v_{\mp\frac{1}{2}}\,, \tag{141}\]
we recast Eqs. (139) and (140) into
\[\xi^{2}\frac{d^{2}}{d\xi^{2}}v_{s}+\left[\frac{1}{4}-\bar{\mu}_{f}^{2}-\beta^{ 2}\xi^{2}-2\beta\xi\,(\alpha+s)\right]v_{s}=0\,, \tag{142}\]
where \(s=\pm\tfrac{1}{2}\), as before, but now we have
\[\bar{\mu}_{f}^{2} =\alpha^{2}+\gamma^{2}-\,\vec{\mathcal{B}}_{+}\vec{\mathcal{B}}_{ -}-\gamma(\vec{\mathcal{B}}_{+}-\vec{\mathcal{B}}_{-})\] \[=\frac{1}{\mu^{2}}\Big{[}-\omega^{2}\,\frac{\kappa}{\sigma+ \kappa}+(k_{3}+\Omega/2)^{2}\Big{]}\,. \tag{143}\]
With the independent variable \(\tilde{\xi}=2|\beta|\xi\), Eq. (142) can again be reduced to Whittaker's equation
\[\frac{d^{2}v_{s}}{d\tilde{\xi}^{2}}+\left[-\,\frac{1}{4}+\frac{\tilde{\kappa}_ {f}}{\tilde{\xi}}+\frac{\frac{1}{4}-\bar{\mu}_{f}^{2}}{\tilde{\xi}^{2}}\right] v_{s}=0\,, \tag{144}\]
where \(\tilde{\kappa}_{f}\) is given by Eq. (135). The regular solution of Eq. (144) is given by
\[v_{s}=v_{s}^{0}\,\exp(-\tfrac{1}{2}\tilde{\xi})\,\tilde{\xi}^{\tfrac{1}{2}+\bar{ \mu}_{f}}\,_{1}F_{1}(\tfrac{1}{2}+\bar{\mu}_{f}-\tilde{\kappa}_{f},1+2\bar{\mu} _{f};\tilde{\xi})\,, \tag{145}\]
where
\[\frac{1}{2}+\bar{\mu}_{f}-\tilde{\kappa}_{f}=-\,n\,,\quad n=0,1,2,\ldots\,. \tag{146}\]
As before, we can combine Eqs. (146) and (143) to derive the dispersion relation
\[\omega=(2n+1+2s)\Omega\pm\left[(k_{3}+\Omega/2)^{2}-\frac{\kappa}{\sigma}(2n+ 1+2s)^{2}\Omega^{2}\right]^{1/2}\,. \tag{147}\]
The motion of Dirac waves in the Godel-type universe is in general agreement with the corresponding results for the scalar and electromagnetic wave propagation described in Appendices B and C.
### Dealing with subtle points of Dirac theory on curved spacetimes
In order to have a correct quantum-mechanical interpretation, Dirac equation (97) should be recast into the form of the Schrodinger equation
\[i\frac{\partial\Psi}{\partial t}=\mathcal{H}\Psi\,. \tag{148}\]
In flat spacetime with the Minkowski metric \(\eta_{\mu\nu}=\text{diag}(-1,1,1,1)\), the trivial frame \(e^{\mu}{}_{\hat{\alpha}}=\delta^{\mu}_{\alpha}\) and vanishing spin connection \(\Gamma_{\mu}=0\), this is straightforward. Multiplying Eq. (97) by \(\gamma^{\hat{0}}\), we derive the Schrodinger equation (148) with the Hermitian Hamiltonian
\[\mathcal{H}=\beta_{\text{D}}\,m+\mathbf{\alpha}_{\text{D}}\cdot\mathbf{p}\,. \tag{149}\]
Here we denote, as usual, the matrices
\[\beta_{\text{D}}:=\gamma^{\hat{0}}\,,\qquad\alpha_{\text{D}}^{\hat{i}}:=\gamma ^{\hat{0}}\gamma^{\hat{i}}=\begin{bmatrix}0&\sigma_{i}\\ \sigma_{i}&0\end{bmatrix}\,,\qquad i=1,2,3\,. \tag{150}\]
In addition, one also needs a quantum-probabilistic picture which is related to the normalization of the wave function. As is well-known, a direct consequence of the Dirac equation (97) is the conservation of the vector current, which in flat spacetime can be expressed as
\[\partial_{\mu}J^{\mu}=0\,,\qquad J^{\mu}=\overline{\Psi}\gamma^{\mu}\Psi\,. \tag{151}\]
Integration over 3-space yields a global conservation law
\[\int d^{3}x\,J^{0}=\int d^{3}x\Psi^{\dagger}\Psi=\text{constant}=1\,. \tag{152}\]
The physical interpretation of the Dirac fermion dynamics is based on Eqs. (148) and (152), especially when the fermionic particle interacts with external fields.
Dirac theory on curved manifolds, however, involves a number of subtleties. In particular, the differential conservation law (151) is replaced by its curved version
\[\nabla_{\mu}J^{\mu}=\frac{1}{\sqrt{-g}}\,\partial_{\mu}\left(\sqrt{-g}J^{\mu} \right)=0\,,\qquad J^{\mu}=e^{\mu}{}_{\hat{\alpha}}\,\overline{\Psi}\gamma^{ \hat{\alpha}}\Psi\,, \tag{153}\]
which yields the global conservation law
\[\int d^{3}x\,\sqrt{-g}\,J^{0}=\int d^{3}x\sqrt{-g}\,e^{0}{}_{\hat{\alpha}}\Psi^{\dagger}\gamma^{\hat{\alpha}}\Psi=\text{constant}=1\,. \tag{154}\]
For the natural Godel-type tetrad frame (8), we have \(e^{0}{}_{\hat{\alpha}}\gamma^{\hat{0}}\gamma^{\hat{\alpha}}=1-\sqrt{\frac{\sigma}{\sigma+\kappa}}\,\alpha_{\text{D}}^{\hat{2}}\); therefore, the physical interpretation of the solutions is unclear. Besides that, the Dirac equation (103) obviously cannot be directly recast into the form of the Schrodinger wave equation (148).
Both issues are related to the choice of the tetrad frame, which is defined up to an arbitrary local Lorentz transformation. The choice (8) corresponds to the so-called Landau-Lifshitz gauge with \(e_{0}{}^{\hat{i}}=0\) and \(e^{i}{}_{\hat{0}}=0\). The situation is essentially improved when one chooses the Schwinger gauge for the frame, where \(e_{i}{}^{\hat{0}}=0\) and \(e^{0}{}_{\hat{i}}=0\). Then, Eq. (154) reduces to an "almost flat" form
\[\int d^{3}x\sqrt{-g}\,e^{0}{}_{\hat{0}}\,\Psi^{\dagger}\Psi=1\,, \tag{155}\]
and the Dirac equation is straightforwardly recast into the Schrodinger form [37; 38].
This suggests replacing the original tetrad frame (8) by a new one
\[\widetilde{e}_{\hat{0}}=\sqrt{\frac{\kappa}{\sigma+\kappa}}\Big{(}\partial_{t }+\frac{\sqrt{\sigma}}{\kappa}e^{-\mu x}\partial_{y}\Big{)}\,,\quad \widetilde{e}_{\hat{1}}=\partial_{x}\,,\quad\widetilde{e}_{\hat{2}}=\frac{e^{ -\mu x}}{\sqrt{\kappa}}\partial_{y}\,,\quad\widetilde{e}_{\hat{3}}=\partial_ {z}, \tag{156}\]
where we assume \(\kappa>0\). Obviously, this choice corresponds to the Schwinger gauge \(\widetilde{e}_{i}{}^{\hat{0}}=0\) and \(\widetilde{e}^{\,0}{}_{\hat{i}}=0\), for \(i=1,2,3\).
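That this frame is indeed orthonormal for metric (1) can be confirmed with a few lines of sympy, as in the sketch below (our own variable names); the check reconstructs the inverse metric (18) from the frame vectors.

```python
# Check that the Schwinger-gauge frame (156) reproduces the inverse metric of Eq. (18):
# g^{mu nu} = eta^{ab} e~^mu_a e~^nu_b (kappa > 0).
import sympy as sp

x = sp.symbols('x', real=True)
sig, kap, mu = sp.symbols('sigma kappa mu', positive=True)
W = sp.exp(mu*x)

e0 = sp.Matrix([sp.sqrt(kap/(sig + kap)), 0, sp.sqrt(kap/(sig + kap))*sp.sqrt(sig)/(kap*W), 0])
e1 = sp.Matrix([0, 1, 0, 0])
e2 = sp.Matrix([0, 0, 1/(sp.sqrt(kap)*W), 0])
e3 = sp.Matrix([0, 0, 0, 1])

ginv_from_frame = -e0*e0.T + e1*e1.T + e2*e2.T + e3*e3.T
ginv = sp.Matrix([[-kap/(sig + kap), 0, -sp.sqrt(sig)/((sig + kap)*W), 0],
                  [0, 1, 0, 0],
                  [-sp.sqrt(sig)/((sig + kap)*W), 0, 1/((sig + kap)*W**2), 0],
                  [0, 0, 0, 1]])                   # Eq. (18)

print(sp.simplify(ginv_from_frame - ginv))         # -> zero matrix
```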
For Godel-type spacetimes, the two frames (8) and (156) are related by the Lorentz transformation,
\[\widetilde{e}_{\,\hat{\alpha}}=\Lambda^{\hat{\beta}}{}_{\hat{\alpha}}\,e_{ \hat{\beta}}, \tag{157}\]
where explicitly
\[\Lambda^{\hat{\alpha}}{}_{\hat{\beta}}=\left(\begin{array}{cccc}\sqrt{\frac{ \sigma+\kappa}{\kappa}}&0&\sqrt{\frac{\sigma}{\kappa}}&0\\ 0&1&0&0\\ \sqrt{\frac{\sigma}{\kappa}}&0&\sqrt{\frac{\sigma+\kappa}{\kappa}}&0\\ 0&0&0&1\end{array}\right)\,. \tag{158}\]
Interestingly, the transformation with constant matrix elements is global, whereas in general only local Lorentz transformations are possible.
The change of a frame on the spacetime affects the fermionic wave function
\[\Psi\quad\longrightarrow\quad\widetilde{\Psi}=L^{-1}\,\Psi \tag{159}\]
via the spinor matrix \(L\) that satisfies
\[L^{-1}\gamma^{\hat{\alpha}}L=\Lambda^{\hat{\alpha}}{}_{\hat{\beta}}\,\gamma^{ \hat{\beta}}. \tag{160}\]
Using a convenient parametrization with \(\cosh\zeta=\sqrt{\frac{\sigma+\kappa}{\kappa}}\) and \(\sinh\zeta=\sqrt{\frac{\sigma}{\kappa}}\), we easily derive
\[L=\cosh(\zeta/2)\,I_{4}+\sinh(\zeta/2)\,\alpha_{\rm D}^{\hat{2}}=\begin{bmatrix} \cosh(\zeta/2)\,I_{2}&\sinh(\zeta/2)\,\sigma_{2}\\ \sinh(\zeta/2)\,\sigma_{2}&\cosh(\zeta/2)\,I_{2}\end{bmatrix}\,. \tag{161}\]
The spinor transformation (159) mixes the spin-up and spin-down states (\(\mathcal{L}_{\pm}\)) for the left modes (and similarly for the right modes) and an appropriate normalization of the solutions should be fixed for the squares \(\widetilde{\Psi}^{\dagger}\widetilde{\Psi}\) of the transformed wave functions.
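The spinor representation (161) of the constant boost (158) can be verified numerically; the short numpy sketch below checks Eq. (160) for sample values of \(\sigma\) and \(\kappa\) chosen by us.

```python
# Numerical check of Eqs. (158)-(161): L^{-1} gamma^a L = Lambda^a_b gamma^b.
import numpy as np

sig, kap = 2.0, 0.5                                       # any kappa > 0 will do
ch, sh = np.sqrt((sig + kap)/kap), np.sqrt(sig/kap)       # cosh(zeta), sinh(zeta)

I2, O2 = np.eye(2), np.zeros((2, 2))
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]])
s3 = np.array([[1, 0], [0, -1]], dtype=complex)

g0 = np.block([[I2, O2], [O2, -I2]])                      # Eq. (100)
gam = [g0] + [np.block([[O2, s], [-s, O2]]) for s in (s1, s2, s3)]
alpha2 = g0 @ gam[2]                                      # alpha_D^2 = gamma^0 gamma^2

Lam = np.eye(4, dtype=complex)
Lam[0, 0] = Lam[2, 2] = ch
Lam[0, 2] = Lam[2, 0] = sh                                # Eq. (158)

L = np.sqrt((ch + 1)/2)*np.eye(4) + np.sqrt((ch - 1)/2)*alpha2   # Eq. (161), half-angle form
Linv = np.linalg.inv(L)

for a in range(4):
    lhs = Linv @ gam[a] @ L
    rhs = sum(Lam[a, b]*gam[b] for b in range(4))
    print(a, np.max(np.abs(lhs - rhs)))                   # ~ 1e-15 for each a
```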
## VIII Dirac equation in Fermi frame
Let us next consider the Dirac equation in the quasi-inertial Fermi frame of Section VI. We are interested in the propagation of Dirac particles as described by fundamental observers that are all spatially at rest in the Fermi frame and occupy the limited cylindrical region about the \(Z\) axis such that \(\Omega|X|,\ \Omega|Y|\lesssim\varepsilon\). As before, we ignore all terms of order \(\varepsilon^{4}\) and higher. The preferred observers have adapted orthonormal tetrad frames \(\varphi^{\mu}{}_{\hat{\alpha}}\) given in Eqs. (86)-(87). Let us note that \(\varphi_{\mu\,\hat{\alpha}}\) can be written in the \((T,X,Y,Z)\) coordinate system as
\[\varphi_{\mu\,\hat{0}}=(-1+\tfrac{1}{2}\hat{h}_{00},\hat{h}_{01},\hat{h}_{02}, 0)\,,\qquad\varphi_{\mu\,\hat{1}}=(0,1+\tfrac{1}{2}\hat{h}_{11},\hat{h}_{12}, 0)\,, \tag{162}\]
\[\varphi_{\mu\,\hat{2}}=(0,0,1+\tfrac{1}{2}\hat{h}_{22},0)\,,\qquad\varphi_{ \mu\,\hat{3}}=(0,0,0,1)\,. \tag{163}\]
We employ perturbations beyond Minkowski spacetime in our Fermi frame; hence, in the absence of \(\hat{h}_{\mu\nu}\), we have \(\varphi^{\mu}{}_{\dot{\alpha}}\to\delta^{\mu}_{\alpha}\). To simplify matters even further, we assume henceforth that the deviation from Minkowski spacetime is only due to the gravitomagnetic potentials \(\hat{h}_{01}=-\,\frac{\kappa}{\sigma}\,\Omega^{3}Y(X^{2}+Y^{2})\) and \(\hat{h}_{02}=\frac{\kappa}{\sigma}\,\Omega^{3}X(X^{2}+Y^{2})\) that give rise to the gravitomagnetic field \(\hat{\mathbf{B}}=(0,0,\hat{B}_{3})\), where \(\hat{B}_{3}=-\,2\frac{\kappa}{\sigma}\,\Omega^{3}(X^{2}+Y^{2})\).
With these assumptions, the spin connection (102) can be computed using the tetrad system \(\varphi_{\mu\,\dot{\alpha}}\) that is adapted to our reference observers and we find
\[\gamma^{\mu}\hat{\Gamma}_{\mu}=\frac{i}{2}\hat{B}_{3}\begin{bmatrix}\sigma_{3 }&0\\ 0&-\sigma_{3}\end{bmatrix}\,. \tag{164}\]
That is, the spin connection is proportional to the gravitomagnetic field of the Godel-type universe in the Fermi frame under consideration here.
For the sake of simplicity, we assume a solution of the Dirac equation that propagates along the \(Z\) axis and is of the form
\[\hat{\Psi}=\hat{\psi}(X,Y)\exp(-i\,\omega\,T+i\,k_{3}\,Z)\,. \tag{165}\]
Moreover, it is convenient to define
\[\hat{\mathbb{X}}=\begin{pmatrix}\hat{\psi}_{1}\\ \hat{\psi}_{3}\end{pmatrix},\qquad\hat{\mathbb{Y}}=\begin{pmatrix}\hat{\psi}_{ 2}\\ \hat{\psi}_{4}\end{pmatrix}. \tag{166}\]
In this case, Dirac's equation reduces to
\[\left[\partial_{X}+i\partial_{Y}+\frac{\kappa}{\sigma}\,\omega\Omega^{3}(X^{2} +Y^{2})(X+iY)\right]\hat{\mathbb{X}}=-\,\frac{i\kappa}{\sigma}\,\Omega^{3}(X^ {2}+Y^{2})\sigma_{1}\hat{\mathbb{Y}}+i\begin{bmatrix}k_{3}&\omega+m\\ \omega-m&k_{3}\end{bmatrix}\hat{\mathbb{Y}} \tag{167}\]
and
\[\left[\partial_{X}-i\partial_{Y}-\frac{\kappa}{\sigma}\,\omega\Omega^{3}(X^{2 }+Y^{2})(X-iY)\right]\hat{\mathbb{Y}}=\frac{i\kappa}{\sigma}\,\Omega^{3}(X^{ 2}+Y^{2})\sigma_{1}\hat{\mathbb{X}}+i\begin{bmatrix}-k_{3}&\omega+m\\ \omega-m&-k_{3}\end{bmatrix}\hat{\mathbb{X}}\,. \tag{168}\]
Here, \(\partial_{X}:=\partial/\partial X\), etc.; furthermore, we note that
\[(\partial_{X}\pm i\partial_{Y})(X^{2}+Y^{2})^{2}=4(X^{2}+Y^{2})(X\pm iY)\,, \tag{169}\]
\[(\partial_{X}\pm i\partial_{Y})[(X^{2}+Y^{2})(X\mp iY)]=4(X^{2}+Y^{2})\,. \tag{170}\]
In the absence of the gravitational perturbation, positive-frequency plane wave solutions of the free Dirac equation propagating in the \(Z\) direction are given by
\[\hat{w}^{\pm}\,e^{-i\,\omega\,T+i\,k_{3}\,Z}\,, \tag{171}\]
where the spin of the Dirac particle is either parallel (\(\hat{w}^{+}\)) or antiparallel (\(\hat{w}^{-}\)) to the \(Z\) direction; that is,
\[\hat{w}^{+}=N^{\Uparrow}\begin{bmatrix}1\\ 0\\ \varrho\\ 0\end{bmatrix}\,,\qquad\hat{w}^{-}=N^{\Downarrow}\begin{bmatrix}0\\ 1\\ 0\\ -\varrho\end{bmatrix}\,. \tag{172}\]
Here, \(N^{\Uparrow}\) and \(N^{\Downarrow}\) are positive normalization constants, \(\omega=(m^{2}+k_{3}^{2})^{1/2}\) and
\[\varrho:=\frac{k_{3}}{\omega+m}=\frac{\omega-m}{k_{3}}\,. \tag{173}\]
With these background states, we solve Eqs. (167) and (168) to linear order in the gravitomagnetic perturbation and obtain, after some algebra,
\[\hat{\Psi}^{+}=N^{\Uparrow}\left[\begin{array}{c}\exp[-\frac{3\kappa}{8\sigma }\omega\Omega^{3}(X^{2}+Y^{2})^{2}]\\ \frac{i\kappa}{4\sigma}\Omega^{3}\varrho(X^{2}+Y^{2})(X+iY)\\ \varrho\exp[-\frac{3\kappa}{8\sigma}\omega\Omega^{3}(X^{2}+Y^{2})^{2}]\\ \frac{i\kappa}{4\sigma}\Omega^{3}(X^{2}+Y^{2})(X+iY)\end{array}\right]e^{-i\, \omega\,T+i\,k_{3}\,Z}\,, \tag{174}\]
\[\hat{\Psi}^{-}=N^{\Downarrow}\left[\begin{array}{c}\frac{i\kappa}{4\sigma} \Omega^{3}\varrho(X^{2}+Y^{2})(X-iY)\\ \exp[\frac{3\kappa}{8\sigma}\omega\Omega^{3}(X^{2}+Y^{2})^{2}]\\ -\frac{i\kappa}{4\sigma}\Omega^{3}(X^{2}+Y^{2})(X-iY)\\ -\varrho\exp[\frac{3\kappa}{8\sigma}\omega\Omega^{3}(X^{2}+Y^{2})^{2}]\end{array} \right]e^{-i\,\omega\,T+i\,k_{3}\,Z}\,. \tag{175}\]
These solutions of Dirac's equation exhibit the coupling of spin with the gravitomagnetic field of the Godel-type universe and may be compared and contrasted with the results of Appendix C for the propagation of circularly polarized electromagnetic waves along the \(Z\) axis in the Fermi frame.
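That Eq. (174) solves the system to linear order can be confirmed component by component; the sympy sketch below checks Eq. (167) for the spin-up solution, with the overall factor \(N^{\Uparrow}e^{-i\omega T+ik_{3}Z}\) already separated off as in Eq. (165). The bookkeeping symbol \(q\) stands for \((\kappa/\sigma)\Omega^{3}\) and the names are our own.

```python
# First-order check that the spin-up solution (174) satisfies Eq. (167).
import sympy as sp

X, Y = sp.symbols('X Y', real=True)
m, k3, q = sp.symbols('m k_3 q', positive=True)
w = sp.sqrt(m**2 + k3**2)                      # omega of the background plane wave
rho = k3/(w + m)                               # Eq. (173)
r2 = X**2 + Y**2

psi1 = sp.exp(-sp.Rational(3, 8)*w*q*r2**2)    # components of (174)
psi2 = sp.I*q/4*rho*r2*(X + sp.I*Y)
psi3 = rho*psi1
psi4 = sp.I*q/4*r2*(X + sp.I*Y)

def D(f):                                      # operator on the left-hand side of Eq. (167)
    return sp.diff(f, X) + sp.I*sp.diff(f, Y) + w*q*r2*(X + sp.I*Y)*f

lhs = sp.Matrix([D(psi1), D(psi3)])
rhs = (-sp.I*q*r2*sp.Matrix([psi4, psi2])                       # sigma_1 swaps the components
       + sp.I*sp.Matrix([[k3, w + m], [w - m, k3]])*sp.Matrix([psi2, psi4]))

res = [(lhs - rhs)[i] for i in range(2)]
print([sp.simplify(sp.series(r, q, 0, 2).removeO()) for r in res])   # [0, 0] through O(q)
```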
## IX Discussion
We have investigated in detail the coupling of intrinsic spin with the gravitomagnetic fields of a three-parameter class of Godel-type spacetimes. These stationary and homogeneous
rotating universes are characterized by the set of constant parameters \((\kappa,\sigma,\mu)\); for \(\kappa<0\), there are closed timelike curves (CTCs) in spacetime, while for \(\kappa\geq 0\), CTCs are absent. For \((\kappa,\sigma,\mu)\to(-1,2,\sqrt{2}\,\Omega)\), we recover Godel's rotating universe model, where \(\Omega>0\) is the frequency of rotation. On the background Godel-type spacetimes, we have studied Dirac's equation and worked out its solutions; furthermore, we have extended our results to exact Fermi normal coordinate systems in these universes. We have shown that the Stern-Gerlach force due to the coupling of intrinsic spin with the gravitomagnetic field of a Godel-type spacetime is in agreement in the correspondence limit with the classical Mathisson spin-curvature force. This is a nonlinear generalization of previous work that focused on linearized general relativity [19]. Our main results turn out to be independent of the possible causality difficulties of the Godel-type spacetimes.
## Appendix A Alternative Solution of Eq. (109)
The purpose of this appendix is to present a different approach to the solution of Eq. (109).
We can write Eq. (109) in the form
\[\xi\,\frac{d(\mathcal{U}\psi)}{d\xi}=\mathcal{U}\,\mathbb{M}\,\mathcal{U}^{-1 }(\mathcal{U}\psi)\,, \tag{109}\]
where \(\mathcal{U}\) is a constant unitary matrix given by
\[\mathcal{U}=\frac{1}{\sqrt{2}}\begin{bmatrix}I_{2}&-I_{2}\\ I_{2}&I_{2}\end{bmatrix}\,. \tag{110}\]
Under this similarity transformation, we have
\[\mathcal{U}\,\gamma^{\hat{0}}\,\mathcal{U}^{\dagger}=\begin{bmatrix}0&I_{2}\\ I_{2}&0\end{bmatrix}=\gamma_{5}\,,\qquad\mathcal{U}\,\gamma^{\hat{i}}\, \mathcal{U}^{\dagger}=\gamma^{\hat{i}}\,. \tag{111}\]
That is, the standard representation of Dirac matrices is thus transformed to the chiral (Weyl) representation. Employing this representation, we find
\[\mathcal{U}\,\mathbb{M}\,\mathcal{U}^{-1}=\begin{bmatrix}-\bar{\mathcal{A}}_ {+}&i\bar{\mathcal{B}}_{+}-i\gamma&0&0\\ i\bar{\mathcal{B}}_{-}+i\gamma&\bar{\mathcal{A}}_{-}&0&0\\ 0&0&-\bar{\mathcal{A}}_{+}&-i\bar{\mathcal{B}}_{+}-i\gamma\\ 0&0&-i\bar{\mathcal{B}}_{-}+i\gamma&\bar{\mathcal{A}}_{-}\end{bmatrix}-\frac{ im}{2\Omega}\,\sqrt{\frac{\sigma}{\sigma+\kappa}}\begin{bmatrix}0&0&0&1\\ 0&0&1&0\\ 0&-1&0&0\\ -1&0&0&0\end{bmatrix}\,, \tag{112}\]
where \(\bar{\mathcal{A}}_{\pm}\) and \(\bar{\mathcal{B}}_{\pm}\) are given by Eq. (111). Expressing \(\mathcal{U}\psi\) in the form
\[\mathcal{U}\psi=\sqrt{2}\,\begin{bmatrix}\mathcal{R}\\ \mathcal{L}\end{bmatrix},\qquad\mathcal{R}=\begin{bmatrix}\mathcal{R}_{+}\\ \mathcal{R}_{-}\end{bmatrix},\qquad\mathcal{L}=\begin{bmatrix}\mathcal{L}_{+} \\ \mathcal{L}_{-}\end{bmatrix}\,, \tag{100}\]
where \(\mathcal{R}\) and \(\mathcal{L}\) are now right-handed and left-handed two-component Weyl spinors, we recover system of equations (118)-(121). The rest of the analysis would follow the treatment presented in Section VII.
## Appendix B Scalar Waves in the Godel-type Universe
Consider first a scalar field \(\phi\) of inertial mass \(m\) propagating on the background Godel-type spacetime (1). The wave equation is
\[g^{\mu\nu}\phi_{;\mu\nu}-\frac{m^{2}c^{2}}{\hbar^{2}}\phi=0\,, \tag{101}\]
where \(\hbar/(mc)\) is the Compton wavelength of the particle. The back reaction is of second order in the perturbation and can be neglected. The scalar wave equation can be written as
\[\frac{1}{\sqrt{-g}}\,\frac{\partial}{\partial x^{\mu}}\left(\sqrt{-g}\,g^{\mu \nu}\frac{\partial\phi}{\partial x^{\nu}}\right)-\frac{m^{2}c^{2}}{\hbar^{2}} \phi=0\,, \tag{102}\]
where for metric (1), \(\sqrt{-g}=e^{\mu x}\sqrt{\sigma+\kappa}\). Moreover, \(\partial_{t}\), \(\partial_{y}\) and \(\partial_{z}\) are Killing vector fields; therefore, we assume
\[\phi(x)=e^{-i\omega t+ik_{2}y+ik_{3}z}\,\bar{\phi}(\xi)\,,\qquad\xi:=e^{-\,\mu x }\,, \tag{103}\]
where \(\xi\) increases from \(0\) to \(\infty\) as the \(x\) coordinate decreases from \(+\infty\) to \(-\infty\). In terms of the new radial variable \(\xi\), the equation for \(\bar{\phi}\) reduces to
\[\frac{d^{2}\bar{\phi}}{d\xi^{2}}-\left[\alpha_{s}^{2}+\frac{\beta_{s}}{\xi}+ \frac{\zeta_{s}(\zeta_{s}+1)}{\xi^{2}}\right]\bar{\phi}=0\,, \tag{104}\]
where
\[\alpha_{s}=\frac{k_{2}}{\mu\sqrt{\sigma+\kappa}}\,,\quad\beta_{s}=\frac{2 \omega\,k_{2}}{c\,\mu^{2}}\,\frac{\sqrt{\sigma}}{\sigma+\kappa},\quad\zeta_{s }(\zeta_{s}+1)=\frac{1}{\mu^{2}}\left(-\,\frac{\omega^{2}}{c^{2}}\,\frac{ \kappa}{\sigma+\kappa}+k_{3}^{2}+\frac{m^{2}c^{2}}{\hbar^{2}}\right). \tag{105}\]
Let us assume \(\zeta_{s}>0\) and note that for \(k_{2}=0\), Eq. (104) for \(\bar{\phi}\) has solutions of the form \(\xi^{-\zeta_{s}}\) and \(\xi^{\zeta_{s}+1}\) that diverge at \(\xi=0\) and \(\xi=\infty\), respectively. However, the scalar perturbation must be finite everywhere; therefore, waves cannot freely propagate parallel or antiparallel
to the axis of rotation of the Godel-type spacetime. Next, for \(k_{2}\neq 0\), we introduce a new variable \(\bar{\xi}:=\frac{|k_{2}|\sqrt{\sigma}}{\Omega(\sigma+\kappa)}\,\xi\), in terms of which Eq. (104) takes the form of Whittaker's equation [41],
\[\frac{d^{2}\bar{\phi}}{d\bar{\xi}^{2}}+\left[-\,\frac{1}{4}+\frac{\bar{\kappa}_{ s}}{\bar{\xi}}+\frac{\frac{1}{4}-\bar{\mu}_{s}^{2}}{\bar{\xi}^{2}}\right]\bar{ \phi}=0\,, \tag{26}\]
where
\[\bar{\kappa}_{s}=-\,\frac{\omega}{2\Omega}\,\frac{k_{2}}{|k_{2}|}\,\frac{ \sigma}{\sigma+\kappa}\,,\qquad\bar{\mu}_{s}=\pm(\zeta_{s}+\tfrac{1}{2})\,. \tag{27}\]
In terms of the confluent hypergeometric functions, bounded solutions of this equation can be expressed up to proportionality constants by
\[\exp(-\tfrac{1}{2}\bar{\xi})\,\bar{\xi}^{\zeta_{s}+1}\,_{1}F_{1}(-n,2\zeta_{s}+2; \bar{\xi})\,,\qquad n=0,1,2,\ldots\,. \tag{28}\]
Here, \(\zeta_{s}>0\), \(\bar{\mu}_{s}=\zeta_{s}+1/2\) and
\[\zeta_{s}+1+\frac{\omega}{2\Omega}\,\frac{k_{2}}{|k_{2}|}\,\frac{\sigma}{ \sigma+\kappa}=-\,n\,,\qquad\omega=\pm\,2\Omega\,(n+\zeta_{s}+1)\,\frac{ \sigma+\kappa}{\sigma}\,, \tag{29}\]
for \(k_{2}<0\) (upper plus sign) or \(k_{2}>0\) (lower minus sign), respectively. Negative frequency in the case of \(k_{2}>0\) indicates that waves traveling forward in time move backward along the \(y\) direction. Finally, we note that only certain frequencies are allowed for the scalar waves; for instance, for \(k_{2}<0\), we have \(\omega_{n}=2\Omega\,(n+\zeta_{s}+1)(\sigma+\kappa)/\sigma\). That is,
\[\omega_{n}^{\pm}=(2n+1)\Omega\pm\left[\Big{(}-4n(n+1)\,\frac{\kappa}{\sigma} +1\Big{)}\Omega^{2}+k_{3}^{2}+\frac{m^{2}c^{2}}{\hbar^{2}}\right]^{1/2}\,, \tag{30}\]
where \(\omega_{n}^{+}>0\) for all \(n\) by definition, while \(\omega_{n}^{-}>0\) for \(n=1,2,3,\ldots\), only if
\[\omega_{n}^{+}\,\omega_{n}^{-}=4n(n+1)\,\frac{\sigma+\kappa}{\sigma}\,\Omega^ {2}-k_{3}^{2}-\frac{m^{2}c^{2}}{\hbar^{2}}>0\,. \tag{31}\]
For further work on the scalar perturbations of the Godel-type universe and its extensions, see [48; 49; 50].
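As a purely numerical illustration of Eqs. (30) and (31), the short script below evaluates the allowed frequencies for a few values of \(n\) in the Godel case \(\kappa/\sigma=-1/2\); the values of \(k_{3}\) and \(m\) are arbitrary sample choices and units with \(c=\hbar=\Omega=1\) are assumed.

```python
# Sample evaluation of the scalar-wave frequencies (30) and the condition (31).
import numpy as np

kappa_over_sigma = -0.5        # Godel case
k3, m = 0.3, 0.1               # arbitrary illustrative values (units c = hbar = Omega = 1)

for n in range(4):
    disc = (-4*n*(n + 1)*kappa_over_sigma + 1) + k3**2 + m**2
    w_plus = (2*n + 1) + np.sqrt(disc)
    w_minus = (2*n + 1) - np.sqrt(disc)
    product = 4*n*(n + 1)*(1 + kappa_over_sigma) - k3**2 - m**2       # Eq. (31)
    print(n, round(w_plus, 4), round(w_minus, 4),
          'omega_minus > 0' if product > 0 else 'omega_minus <= 0')
```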
## Appendix C Electromagnetic Waves in the Godel-type Universe
Propagation of electromagnetic radiation in the Godel universe was originally investigated in the search for the coupling of photon helicity with the rotation of matter [51]. In Godel-type spacetimes, Maxwell's equations can be reduced to an equation of the form of Eq. (104), where instead of the quantities in Eq. (105), we find [52]
\[\alpha_{s}\to\alpha_{em}=\frac{K_{2}^{\pm}}{\mu\sqrt{\sigma+\kappa}}\,,\qquad \beta_{s}\to\beta_{em}=\frac{2\omega\,K_{2}^{\pm}}{c\,\mu^{2}}\,\frac{\sqrt{ \sigma}}{\sigma+\kappa} \tag{32}\]
and \(\zeta_{s}\to\zeta_{em}\), where
\[\zeta_{em}(\zeta_{em}+1)=\frac{1}{\mu^{2}}\left(-\,\frac{\omega^{2}}{c^{2}}\,\frac {\kappa}{\sigma+\kappa}+(K_{3}^{\pm})^{2}\mp 2\Omega\,K_{3}^{\pm}\right)\,, \tag{10}\]
since the photon is massless (\(m=0\)). The helicity coupling evident in Eq. (10) is consistent with the spin-vorticity-gravity coupling described in Section IV. That is, based on the results of Section IV, we would expect the corresponding Hamiltonian for a photon to be proportional to \(\pm\,\hbar K_{3}^{\pm}\Omega/\omega\), so that in terms of frequency we would have \(\pm\,K_{3}^{\pm}\Omega/\omega\). The effect should disappear in the case of a null geodesic, consistent with the eikonal limit \(\omega\to\infty\). For further extensions and generalizations to Gödel-type universes, see [52; 53; 54; 55; 56; 57; 58].
### EM Waves in the Fermi Frame
We consider the propagation of electromagnetic radiation on the background quasi-inertial Fermi normal coordinate system. In terms of the Faraday tensor \(F_{\mu\nu}\), the source-free Maxwell equations can be expressed as
\[F_{[\mu\nu,\rho]}=0\,,\qquad(\sqrt{-g}\,F^{\mu\nu})_{,\nu}=0\,. \tag{11}\]
Using the same approach as in [51], we replace the gravitational field by a hypothetical optical medium that occupies Euclidean space with Cartesian Fermi coordinates \((X,Y,Z)\). The electromagnetic field equations (11) reduce to the traditional form of Maxwell's equations in a medium with the decompositions
\[F_{\mu\nu}\to(\tilde{\mathbf{E}},\tilde{\mathbf{B}})\,,\qquad\sqrt{-g}\,F^{\mu\nu}\to( -\tilde{\mathbf{D}},\tilde{\mathbf{H}})\,. \tag{12}\]
That is, \(F_{0i}=-\tilde{E}_{i}\) and \(F_{ij}=\epsilon_{ijk}\,\tilde{B}_{k}\); similarly, \(\sqrt{-g}\,F^{0i}=\tilde{D}_{i}\) and \(\sqrt{-g}\,F^{ij}=\epsilon_{ijk}\,\tilde{H}_{k}\). Here, \(\epsilon_{ijk}\) is the totally antisymmetric symbol with \(\epsilon_{123}=1\). The corresponding optical medium turns out to be gyrotropic with constitutive relations [59; 60; 61; 62; 63]
\[\tilde{D}_{i}=\hat{\epsilon}_{ij}\,\tilde{E}_{j}-(\hat{\mathbf{G}}\times\tilde{ \mathbf{H}})_{i}\,,\qquad\tilde{B}_{i}=\hat{\mu}_{ij}\,\tilde{H}_{j}+(\hat{\mathbf{G }}\times\tilde{\mathbf{E}})_{i}\,, \tag{13}\]
where the characteristics of the medium are conformally invariant and are given by
\[\hat{\epsilon}_{ij}=\hat{\mu}_{ij}=-\sqrt{-\hat{g}}\,\frac{\hat{g}^{ij}}{\hat {g}_{00}}\,,\qquad\hat{G}_{i}=-\frac{\hat{g}_{0i}}{\hat{g}_{00}}\,. \tag{14}\]
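As an illustrative check (not part of the original analysis), the following sympy sketch evaluates the medium characteristics of Eq. (14) for a stationary, weak-field metric whose only nontrivial off-diagonal components are gravitomagnetic potentials \(\hat{h}_{01}\), \(\hat{h}_{02}\) of the form used near the rotation axis later in this appendix; the symbols \(\kappa\), \(\sigma\), and \(\Omega\) are treated purely as formal parameters.

```python
import sympy as sp

# Symbolic check of the constitutive medium characteristics of Eq. (14) for a
# weak-field stationary metric with gravitomagnetic potentials h_{01}, h_{02}.
T, X, Y, Z = sp.symbols('T X Y Z', real=True)
kappa, sigma, Omega = sp.symbols('kappa sigma Omega', positive=True)

h01 = -kappa / sigma * Omega**3 * (X**2 + Y**2) * Y
h02 = +kappa / sigma * Omega**3 * (X**2 + Y**2) * X

g = sp.Matrix([[-1, h01, h02, 0],
               [h01, 1, 0, 0],
               [h02, 0, 1, 0],
               [0, 0, 0, 1]])

ginv = g.inv()
sqrt_minus_g = sp.sqrt(-g.det())

# epsilon_ij = mu_ij = -sqrt(-g) g^{ij} / g_{00}   and   G_i = -g_{0i} / g_{00}
eps = sp.simplify(-sqrt_minus_g * ginv[1:, 1:] / g[0, 0])
Gvec = sp.simplify(-sp.Matrix([g[0, 1], g[0, 2], g[0, 3]]) / g[0, 0])

# eps differs from the identity only at second order in the potentials,
# while Gvec reproduces the gyration vector quoted later in this appendix.
print(sp.simplify(eps - sp.eye(3)))
print(Gvec)
```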
Expressing electromagnetic fields in the standard complex form and introducing the Riemann-Silberstein vectors,
\[\tilde{\mathbf{F}}^{\pm}=\tilde{\mathbf{E}}\pm i\,\tilde{\mathbf{H}}\,,\qquad\tilde{\mathbf{S}}^ {\pm}=\tilde{\mathbf{D}}\pm i\,\tilde{\mathbf{B}}\,, \tag{109}\]
the wave propagation equation can be expressed as the Dirac equation for photons in the gravitational field. That is,
\[\mathbf{\nabla}\times\tilde{\mathbf{F}}^{\pm}=\pm\,i\,\frac{\partial\tilde{\mathbf{S}}^{ \pm}}{\partial t}\,,\qquad\mathbf{\nabla}\cdot\tilde{\mathbf{S}}^{\pm}=0\,, \tag{110}\]
where
\[\tilde{S}^{\pm}_{p}=\hat{\epsilon}_{pq}\,\tilde{F}^{\pm}_{q}\pm i\,(\hat{\mathbf{ G}}\times\tilde{\mathbf{F}}^{\pm})_{p}\,. \tag{111}\]
The Dirac-type equation implies \(\partial_{t}(\mathbf{\nabla}\cdot\tilde{\mathbf{S}}^{\pm})=0\); therefore, if \(\mathbf{\nabla}\cdot\tilde{\mathbf{S}}^{\pm}=0\) initially, then it is valid for all time.
To interpret the physical meaning of these results, it proves useful to consider plane electromagnetic waves of frequency \(\omega\) propagating along the \(z\) axis in a global inertial frame with coordinates \(x^{\mu}=(t,\mathbf{x})\) in Minkowski spacetime. Maxwell's equations are linear; therefore, we can use complex electric and magnetic fields and adopt the convention that only the real parts correspond to measurable quantities. The waves can have two independent orthogonal linear polarization states along the \(\hat{\mathbf{x}}\) and \(\hat{\mathbf{y}}\) directions, where \(\hat{\mathbf{x}}\) is a unit vector along the \(x\) axis, etc. The circular polarization states are constructed from the linear polarization states via superposition; in this case, the electric (\(\mathbf{e}\)) and magnetic (\(\mathbf{b}\)) fields can be expressed as
\[\mathbf{e}_{\pm}=\frac{1}{2}a_{\pm}\,(\hat{\mathbf{x}}\pm i\,\hat{\mathbf{y}})\,e^{-i \omega\,(t-z)}\,,\qquad\mathbf{b}_{\pm}=\mp\,\frac{i}{2}\,a_{\pm}\,(\hat{\mathbf{x}} \pm i\,\hat{\mathbf{y}})\,e^{-i\omega\,(t-z)}\,, \tag{112}\]
where \(a_{+}\) and \(a_{-}\) are constant complex amplitudes. Here, the upper (lower) sign represents waves in which the orthogonal electric and magnetic fields rotate in the positive (negative) sense about the direction of wave motion. In the case of a photon with positive (negative) circular polarization, the photon has positive (negative) helicity, namely, its spin is \(+\hbar\) (\(-\hbar\)) along its direction of propagation. The Riemann-Silberstein vectors have interesting behaviors for helicity states of the photon; in fact, for _positive-helicity_ radiation,
\[\mathbf{e}_{+}+i\,\mathbf{b}_{+}=a_{+}\,(\hat{\mathbf{x}}+i\,\hat{\mathbf{y}})\,e^{-i\omega\, (t-z)}\,,\qquad\mathbf{e}_{+}-i\,\mathbf{b}_{+}=0\,, \tag{113}\]
while for radiation with _negative-helicity_,
\[\mathbf{e}_{-}+i\,\mathbf{b}_{-}=0\,,\qquad\mathbf{e}_{-}-i\,\mathbf{b}_{-}=a_{-}\,(\hat{\mathbf{ x}}-i\,\hat{\mathbf{y}})\,e^{-i\omega\,(t-z)}\,. \tag{114}\]
Hence, \(\mathbf{e}+i\,\mathbf{b}\) (\(\mathbf{e}-i\,\mathbf{b}\)) represents in essence an electromagnetic wave with positive (negative) helicity. It is important to note that Eqs. (107) and (108) that represent the propagation of electromagnetic test fields in a gravitational field completely decouple for different helicity states.
Imagine the propagation of electromagnetic waves with definite helicity along the \(Z\) axis in the Fermi normal coordinate system in the Godel-type spacetime. The universe rotates in the negative sense about the \(Z\) axis. We confine our considerations to the cylindrical region near the rotation axis where the perturbation analysis contained in Eqs. (83)-(85) is valid. To simplify matters, we will take into account only the gravitomagnetic potentials \(\hat{h}_{01}\) and \(\hat{h}_{02}\) and ignore the other potentials; therefore, in Eq. (106) we have
\[\hat{\epsilon}_{ij}=\hat{\mu}_{ij}\approx 1\,,\qquad\hat{\mathbf{G}}\approx-\,\frac{ \kappa}{\sigma}\Omega^{3}(X^{2}+Y^{2})(Y,-X,0)\,. \tag{109}\]
It is straightforward to show that in this case the field Eqs. (107) and (108) have the solution
\[\tilde{F}_{1}^{\pm}=\hat{a}_{\pm}\exp[-i\omega(T-Z)\mp\frac{\kappa}{4\sigma} \omega\Omega^{3}\,(X^{2}+Y^{2})^{2}]\,, \tag{110}\]
\[\tilde{F}_{2}^{\pm}=\pm i\hat{a}_{\pm}\exp[-i\omega(T-Z)\mp\frac{\kappa}{4 \sigma}\omega\Omega^{3}\,(X^{2}+Y^{2})^{2}] \tag{111}\]
and \(\tilde{F}_{3}^{\pm}=0\). Here, \(\hat{a}_{+}\) and \(\hat{a}_{-}\) are constant amplitudes for the positive- and negative-helicity waves in the Fermi frame, respectively. If the wave propagates along the axis of rotation (i.e., the \(-Z\) direction), then in Eqs. (110) and (111) we have \(Z\rightarrow-Z\) and \(\pm\rightarrow\mp\) in the exponents of these equations as well as in the coefficient of the latter equation. For \(\Omega=0\), the Fermi frame reduces to a global inertial frame in Minkowski spacetime and we recover waves of the form given in Eqs. (106)-(107).
The helicity-gravitomagnetic field coupling is evident in these results and corresponds to Eqs. (89) and (90) of Section VI; indeed, the form of this coupling is reminiscent of the helicity-twist coupling studied in [64].
|
2307.11834 | General relativistic pulsations of ultra-massive ZZ Ceti stars | Ultra-massive white dwarf stars are currently being discovered at a
considerable rate, thanks to surveys such as the {\it Gaia} space mission.
These dense and compact stellar remnants likely play a major role in type Ia
supernova explosions. It is possible to probe the interiors of ultra-massive
white dwarfs through asteroseismology. In the case of the most massive white
dwarfs, General Relativity could affect their structure and pulsations
substantially. In this work, we present results of relativistic pulsation
calculations employing relativistic ultra-massive ONe-core white dwarf models
with hydrogen-rich atmospheres and masses ranging from $1.29$ to $1.369
M_{\odot}$ with the aim of assessing the impact of General Relativity on the
adiabatic gravity ($g$)-mode period spectrum of very-high mass ZZ Ceti stars.
Employing the relativistic Cowling approximation for the pulsation analysis, we
find that the critical buoyancy (Brunt-V\"ais\"al\"a) and acoustic (Lamb)
frequencies are larger for the relativistic case, compared to the Newtonian
case, due to the relativistic white dwarf models having smaller radii and
higher gravities for a fixed stellar mass. In addition, the $g$-mode periods
are shorter in the relativistic case than in the Newtonian computations, with
relative differences of up to $\sim 50$ \% for the highest-mass models ($1.369
M_{\odot}$) and for effective temperatures typical of the ZZ Ceti instability
strip. Hence, the effects of General Relativity on the structure, evolution,
and pulsations of white dwarfs with masses larger than $\sim 1.29 M_{\odot}$
cannot be ignored in the asteroseismological analysis of ultra-massive ZZ Ceti
stars. | Alejandro H. Córsico, S. Reece Boston, Leandro G. Althaus, Mukremin Kilic, S. O. Kepler, María E. Camisassa, Santiago Torres | 2023-07-21T18:06:57Z | http://arxiv.org/abs/2307.11834v1 | # General relativistic pulsations of ultra-massive ZZ Ceti stars
###### Abstract
Ultra-massive white dwarf stars are currently being discovered at a considerable rate, thanks to surveys such as the _Gaia_ space mission. These dense and compact stellar remnants likely play a major role in type Ia supernova explosions. It is possible to probe the interiors of ultra-massive white dwarfs through asteroseismology. In the case of the most massive white dwarfs, General Relativity could affect their structure and pulsations substantially. In this work, we present results of relativistic pulsation calculations employing relativistic ultra-massive ONe-core white dwarf models with hydrogen-rich atmospheres and masses ranging from 1.29 to 1.369\(M_{\sun}\) with the aim of assessing the impact of General Relativity on the adiabatic gravity (\(g\))-mode period spectrum of very-high mass ZZ Ceti stars. Employing the relativistic Cowling approximation for the pulsation analysis, we find that the critical buoyancy (Brunt-Vaisala) and acoustic (Lamb) frequencies are larger for the relativistic case, compared to the Newtonian case, due to the relativistic white dwarf models having smaller radii and higher gravities for a fixed stellar mass. In addition, the \(g\)-mode periods are shorter in the relativistic case than in the Newtonian computations, with relative differences of up to \(\sim 50\) % for the highest-mass models (1.369\(M_{\sun}\)) and for effective temperatures typical of the ZZ Ceti instability strip. Hence, the effects of General Relativity on the structure, evolution, and pulsations of white dwarfs with masses larger than \(\sim 1.29M_{\sun}\) cannot be ignored in the asteroseismological analysis of ultra-massive ZZ Ceti stars.
keywords: stars: evolution -- stars: interiors -- stars: white dwarfs -- stars: pulsations -- asteroseismology -- relativistic processes
## 1 Introduction
ZZ Ceti variables are pulsating DA (H-rich atmosphere) white dwarf (WD) stars with effective temperatures in the range \(10\,500\lesssim T_{\rm eff}\lesssim 13\,500\) K and surface gravities in the interval \(7.5\lesssim\log g\lesssim 9.35\). They exhibit periods from \(\sim 100\) s to \(\sim 1400\) s due to nonradial gravity (\(g\)) modes with harmonic degree \(\ell=1\) and \(\ell=2\) (Winget & Kepler, 2008; Fontaine & Brassard, 2008; Althaus et al., 2010). The interiors of these compact stars, which constitute the evolutionary end of most stars in the Universe, can be investigated through the powerful tool of asteroseismology by comparing the observed periods with theoretical periods computed using large grids of WD stellar models (e.g. Corsico et al., 2019).
Although most ZZ Ceti stars have masses between \(\sim 0.5\) and \(\sim 0.8\)\(M_{\sun}\), at least seven ultra-massive (\(M_{\star}\gtrsim 1.05M_{\sun}\)) ZZ Ceti stars have been discovered so far: BPM 37093 (\(M_{\star}=1.13M_{\sun}\); Kanaan et al., 1992; Bedard et al., 2017), GD 518 (\(M_{\star}=1.24\)\(M_{\sun}\); Hermes et al., 2013), SDSS J084021.23+522217.4 (\(M_{\star}=1.16\)\(M_{\sun}\); Curd et al., 2017), WD J212402.03\(-\)600100.05 (\(M_{\star}=1.16\)\(M_{\sun}\); Rowan et al., 2019), J0204+8713 and J0551+4135 (\(M_{\star}=1.05\)\(M_{\odot}\) and \(M_{\star}=1.13\)\(M_{\sun}\), respectively; Vincent et al., 2020), and WD J004917.14\(-\)252556.81 (\(M_{\star}\sim 1.30\)\(M_{\sun}\); Kilic et al., 2023b). With such a high stellar mass, the latter is the most massive pulsating WD currently known. The discovery and characterisation of pulsating ultra-massive WDs through asteroseismology is important for understanding type Ia supernova explosions. We know that accreting CO-core WDs are the progenitors of these explosions (e.g., Nugent et al., 2011; Maoz et al., 2014), but we have not been able to probe the interior structure of such WDs near the Chandrasekhar limit.
Modern photometric data of pulsating WDs collected by spacecrafts such as the ongoing Transiting Exoplanet Survey Satellite mission (_TESS_; Ricker et al., 2014) and the already finished _Kepler/K2_ space mission (Borucki et al., 2010; Howell et al., 2014), brought along revolutionary improvements to the field of WD asteroseismology in at least two aspects (Corsico, 2020, 2022). First, the space missions provide pulsation periods with an unprecedented precision. Indeed, the observational precision limit of _TESS_ for the pulsation periods is
of order \(\sim 10^{-4}\) s or even smaller (Giammichele et al., 2022). Second, these space missions also enable the discovery of large numbers of new pulsating WDs. For example, Romero et al. (2022) used the _TESS_ data from the first three years of the mission, for Sectors 1 through 39, to identify 74 new ZZ Ceti stars, which increased the number of already known ZZ Cetis by \(\sim 20\) per cent. It is likely that many more pulsating WDs, not only average-mass (\(M_{\star}\sim 0.60\)\(M_{\sun}\)) objects, but also ultra-massive WDs, will be identified by _TESS_ and other future space telescopes such as the Ultraviolet Transient Astronomy Satellite (ULTRASAT, Ben-Ami et al., 2022) in the coming years, though _TESS_'s relatively small aperture limits its ability to observe intrinsically fainter massive WDs. In addition, large-scale wide-field ground-based photometric surveys like the Vera C. Rubin Observatory's Legacy Survey of Space and Time and the BlackGEM (Groot et al., 2022) will significantly increase the population of WD pulsators, including massive WDs.
The use of space telescopes for WD asteroseismology has opened up a new window into the interiors of these stars and led to some new and interesting questions. For example, the availability of pulsation periods with high precision supplied by modern space-based photometric observations has, for the first time, raised the question of whether it is possible to detect very subtle effects in the observed period patterns, such as the signatures of the current experimental \({}^{12}\)C(\(\alpha,\gamma\))\({}^{16}\)O reaction rate probability distribution function (Chidester et al., 2022), or the possible impact of General Relativity (GR) on the pulsation periods of ZZ Ceti stars (Boston et al., 2023). In particular, the possibility that relativistic effects can be larger than the uncertainties in the observed periods when measured with space missions has led Boston et al. (2023) to conclude that, for average mass WDs, the relative differences between periods in the Newtonian and relativistic calculations can be larger than the observational precision with which the periods are measured. Hence, to fully exploit the unprecedented quality of the observational data from _TESS_ and similar space missions, it is necessary to take into account the GR effects on the structure and pulsations of WDs.
The impact of GR is stronger as we consider more massive WD configurations. In particular, WDs with masses close to the Chandrasekhar mass (\(M_{\rm Ch}\sim 1.4M_{\sun}\)). Carvalho et al. (2018); Nunes et al. (2021) and Althaus et al. (2022) used static WD models and evolutionary ONe-core WD configurations, respectively, to explore the effects of GR on the structure of ultra-massive WDs. These investigations found that GR strongly impacts the radius and surface gravity of ultra-massive WDs. In addition, Althaus et al. (2022) found that GR leads to important changes in cooling ages and in mass-radius relationships when compared with Newtonian computations. Furthermore, Althaus et al. (2023) have extended the relativistic computations to CO-core ultra-massive WD models.
In the present work, we aim to assess the impact of GR on the g-mode period spectra of ultra-massive ZZ Ceti stars with masses \(\gtrsim 1.29M_{\sun}\). This is the lower limit for the WD mass from which the effects of GR begin to be relevant (Althaus et al., 2022). Our analysis is complementary to that of Boston et al. (2023), which is focused on average-mass pulsating DA WDs (\(\sim 0.60M_{\sun}\); the bulk of pulsating WD population). For these average-mass DA WDs, the difference of Newtonian physics and GR was shown to be on the order of the surface gravitational redshift \(z\sim 10^{-4}\), though for stars with very high central concentration of mass this difference could be an order of magnitude larger. Since the ultra-massive WDs are highly centrally condensed, GR might be even more important for these objects. The study of ultra-massive WDs is of particular interest at present, given the increasing rate of discovery of these objects (Gagne et al., 2018; Kilic et al., 2020, 2021; Hollands et al., 2020; Caiazzo et al., 2021; Torres et al., 2022; Kilic et al., 2023) and the prospect of finding pulsating ultra-massive WDs more massive than WD J004917.14\(-\)252556.81 (Kilic et al., 2023). This last point is particularly relevant in view of the capabilities of the current (e.g. _TESS_) and upcoming (e.g., _ULTRASAT, LSST, BlackGEM_) surveys.
The formalism of stellar pulsations in GR began with Thorne & Campolattaro (1967), using the Regge-Wheeler gauge to treat the pulsations as linear perturbations on top of a static, spherically symmetric background (Regge & Wheeler, 1957). The result was a reduction in the Einstein Field Equations (EFE) that describe spacetime curvature in GR to only five complex-valued equations for the perturbation amplitudes. Further theoretical work showed this system was only 4th-complex-order, with two degrees of freedom describing the fluid perturbations and two describing the gravitational perturbations (Ipser & Thorne, 1973). Later, Detweiler & Lindblom (1985) (see also Lindblom & Detweiler, 1983) reduced the perturbed EFE to the explicit form of four 1st-order complex-valued equations describing the normal mode perturbations. For quadrupole modes or higher (\(\ell\geq 2\)), the two gravitational degrees of freedom at the surface produce outgoing gravitational radiation (i.e. gravitational waves) which will gradually damp any excitations, so that stellar perturbations in GR can be at best quasinormal.
In asteroseismology, the outgoing gravitational radiation is largely an undesired complication, requiring specialized methods to avoid carrying the boundary condition out to spatial infinity (Chandrasekhar & Detweiler, 1975; Fackerell, 1971; Lindblom et al., 1997; Andersson et al., 1995). The outgoing gravitational waves can be easily removed using a form of the Cowling approximation within GR, first developed by McDermott et al. (1983) and further studied by Lindblom & Splinter (1990), McDermott et al. (1985), and Yoshida & Lee (2002). In this _relativistic Cowling_ approximation, the gravitational degrees of freedom are set to zero, retaining only the fluid perturbations. Further, there is no intrinsic damping, so that the problem becomes real-valued and the modes are stationary. This treatment is widely used to study the pulsation and stability of compact stellar objects in situations where knowledge of the outgoing gravitational waves is irrelevant, and especially in stars with surface crystallization (Flores et al., 2017; Yoshida & Lee, 2002). Another approach to include the relativistic effects in stellar pulsations is to use the _Post-Newtonian approximation_(Cutler, 1991; Poisson & Will, 2014; Boston et al., 2023). This approach is able to include gravitational perturbations in the form of two scalar potentials and a vector potential, without also producing gravitational radiation (Boston, 2022).
Most interest in pulsations of relativistic stars has focused on neutron stars (e.g. McDermott et al., 1988; Cutler & Lindblom, 1992; Lindblom & Splinter, 1989). The earliest calculations of pulsations in WDs involving GR tried to address the origin of radio sources discovered by Hewish et al. (1968), as an alternative to a neutron star origin (Thorne & Ipser, 1968). These studies, which date back to the late 1960s, were devoted to computing the fundamental radial pulsation mode of Hamada-Salpeter WD models (Hamada & Salpeter, 1961) including GR effects (Faulkner & Gribbin, 1968; Skilling, 1968; Cohen et al., 1969). Boston et al. (2023) have recently renewed interest in this topic by focusing on relativistic pulsations of ZZ Ceti stars and other pulsating WDs, concentrating on average-mass WDs. In the present paper, we study the impact of GR on realistic evolutionary stellar models of ultra-massive DA WDs computed by Althaus et al. (2022), which are representative of very high-mass ONe-core ZZ Ceti stars. As a first step, in this work we adopt the relativistic Cowling approximation described above to incorporate relativistic effects in the pulsation calculations, following the treatment provided in Yoshida & Lee (2002). In future papers we plan to examine
the Post-Newtonian and full 4th-order GR equations, applied to ultra-massive ONe-core WDs and to ultra-massive CO-core WDs (Corsico et al., 2023b, in prep.).
The paper is organised as follows. In Sect. 2 we briefly describe the relativistic WD models computed by Althaus et al. (2022), emphasising the impact of GR on the stellar structure. We devote Sect. 3 to describe our approach for the relativistic nonradial stellar pulsations, particularly the formalism of the relativistic Cowling approximation (Sect. 3.1, 3.2, 3.3 and 3.4). The pulsation results for our ultra-massive WD models are described in Sect. 4. Finally, in Sect. 5 we summarise our findings. We present in Appendix A a derivation of the relativistic version of the "modified Ledoux" treatment of the Brunt-Vaisala frequency, and in Appendix B the results of a validation of the main results of the paper using a toy model based on Chandrasekhar's models.
## 2 Relativistic ultra-massive WD models
To determine whether to employ GR or Newtonian gravity in a system like a star, a qualitative general criterion commonly used is to assess the magnitude of the "relativistic correction factor", \(\varepsilon\), defined as \(\varepsilon=GM_{\star}/c^{2}R_{\star}\), where \(G\) is the Newtonian gravitational constant, \(c\) is the speed of light, and \(M_{\star}\) and \(R_{\star}\) are the stellar mass and radius, respectively (Poisson and Will, 2014)1. The larger \(\varepsilon\), the worse the approximation of Newtonian gravity. For instance, for a neutron star, \(\varepsilon\) is of order \(\sim 0.1\), while for a black hole, \(\varepsilon\sim 1\). For average mass (\(\sim 0.6M_{\sun}\)) WDs, \(\varepsilon\) is \(\sim 10^{-4}\), and that is why until recently the relativistic effects have been neglected in the calculation of their structures. If we instead consider an ultra-massive WD star with \(M\sim 1.3M_{\sun}\) and \(\varepsilon\sim 0.001\), at first glance, it is not clear if the relativistic effects should be included or not. However, Carvalho et al. (2018) showed that for the most massive WDs, the importance of GR for their structure and evolution cannot be ignored. In fact, numerous works based on static WD structures have shown that GR effects are relevant for the determination of the radius of massive WDs (Rotondo et al., 2011; Mathew and Nandy, 2017; Carvalho et al., 2018; Nunes et al., 2021). In particular, these studies have demonstrated that for fixed values of mass, deviations of up to 50% in the Newtonian WD radius are expected compared to the GR WD radius. Recently, Althaus et al. (2022) have presented the first set of constant rest-mass ONe-core ultra-massive WD evolutionary models with masses greater than \(\sim 1.30M_{\sun}\) (and up to \(1.369M_{\sun}\)) that fully take into account the effects of GR. This study demonstrates that the GR effects must be considered to assess the structural and evolutionary properties of the most massive WDs. This analysis has been extended recently by Althaus et al. (2023) to ultra-massive WDs with CO cores that result from the complete evolution of single progenitor stars that avoid C-ignition (Althaus et al., 2021; Camisassa et al., 2022).
Footnote 1: The parameter \(\varepsilon\) is nothing more than the surface gravitational redshift in the Newtonian limit, \(z\).
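To make the criterion concrete, a minimal back-of-the-envelope evaluation of \(\varepsilon=GM_{\star}/c^{2}R_{\star}\) is sketched below; the ultra-massive WD radius is the GR value listed in Table 1 below, whereas the average-mass WD and neutron-star parameters are only typical fiducial numbers assumed here for illustration.

```python
# Order-of-magnitude estimates of the correction factor epsilon = G M / (c^2 R).
# The ultra-massive WD radius is the GR value of Table 1; the 0.6 Msun WD and
# the neutron-star parameters are only typical fiducial numbers assumed here.
G = 6.674e-8      # cm^3 g^-1 s^-2
c = 2.998e10      # cm s^-1
Msun = 1.989e33   # g

objects = {
    "average-mass WD (0.6 Msun, R ~ 8.8e8 cm)": (0.6 * Msun, 8.8e8),
    "ultra-massive WD (1.369 Msun, R = 1.051e8 cm)": (1.369 * Msun, 1.051e8),
    "neutron star (1.4 Msun, R ~ 1.2e6 cm)": (1.4 * Msun, 1.2e6),
}

for name, (M, R) in objects.items():
    print(f"{name}: epsilon = {G * M / (c**2 * R):.1e}")
```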
Althaus et al. (2022) employed the LPCODE stellar evolution code, appropriately modified to take into account relativistic effects. They considered initial chemical profiles as predicted by the progenitor evolutionary history (Siess, 2007, 2010; Camisassa et al., 2019), and computed model sequences of \(1.29\), \(1.31\), \(1.33\), \(1.35\), and \(1.369M_{\sun}\) WDs. The standard equations of stellar structure and evolution were generalised to include the effects of GR following Thorne (1977). In particular, the modified version of LPCODE computes the dimensionless GR correction factors \(\mathcal{H},\mathcal{G},\mathcal{V}\), and \(\mathcal{R}\) which turn to unity in the Newtonian limit. These factors correspond, respectively, to the enthalpy, gravitational acceleration, volume, and redshift correction parameters. For comparison purposes, Althaus et al. (2022) have also computed the same WD sequences for the Newtonian gravity case. All these sequences included the energy released during the crystallisation process, both due to latent heat and the induced chemical redistribution, as in Camisassa et al. (2019).
We briefly describe below some of the properties of the representative models of ultra-massive ONe-core WD stars, emphasising the impact of GR on their structure. We refer the reader to the paper by Althaus et al. (2022) for a detailed description of the effects of GR on the structural properties of these models. Here, we choose two template WD models characterised by stellar masses \(M_{\star}=1.29M_{\sun}\) and \(M_{\star}=1.35M_{\sun}\), H envelope thickness of \(\log(M_{\rm H}/M_{\star})\sim-6\), and an effective temperature of \(T_{\rm eff}\sim 12\,000\) K, typical of the ZZ Ceti instability strip. We distinguish two cases: one in which we consider Newtonian WD models (N case), and another one in which the WD structure is relativistic (GR case). In Fig. 1 we plot the run of the stellar radius and gravity in terms of the outer mass fraction coordinate, corresponding to WD models with \(1.29M_{\sun}\) (left panels) and \(1.35M_{\sun}\) (right panels), for the GR case (black curves) and the N case (red curves). Clearly, GR induces smaller radii and larger gravities, and this effect is much more pronounced for larger stellar masses.
In Table 1, which is a shortened version of Table 1 of Althaus et al. (2022), we include the values of the stellar radius and the surface gravity for models with \(T_{\rm eff}=10\,000\) K and masses between \(1.29M_{\sun}\) and \(1.369M_{\sun}\) in the GR and N cases. As can be seen, the impact of GR on the radius and gravity of the models is noticeable. In Fig. 2 we plot the relative differences \(\Delta R_{\star}=|R_{\star}^{\rm GR}-R_{\star}^{\rm N}|/R_{\star}^{\rm GR}\) (left panel) and \(\Delta g=(g^{\rm GR}-g^{\rm N})/g^{\rm GR}\) (right panel) in terms of the stellar mass. The stellar radius is lower by \(\sim 3\%\) (\(1.29M_{\sun}\)) to \(\sim 34\%\) (\(1.369M_{\sun}\)), and the surface gravity is higher by \(\sim 6\%\) (\(1.29M_{\sun}\)) to \(\sim 44\%\) (\(1.369M_{\sun}\)) compared to the case where GR is neglected. The typical observational uncertainties in the radii and surface gravities of the most massive WDs in the
Figure 1: The stellar radius (upper panels) and gravity (bottom panels) in terms of the outer mass fraction coordinate corresponding to ultra-massive DA WD models with \(M_{\star}=1.29M_{\sun}\) (left) and \(M_{\star}=1.35M_{\sun}\) (right), the GR case (black curves) and for the N case (red curves) (\(T_{\rm eff}\sim 12\,000\) K).
Montreal White Dwarf Database 100 pc sample (Kilic et al., 2021) are 3% and 6%, respectively. Hence, the differences between the GR and N cases can be detected observationally for WDs with masses above \(\sim 1.3~{}M_{\odot}\). These discrepancies must have important consequences for the pulsational properties of ultra-massive WDs, as we will see in Sect. 4.2.
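The percentage differences quoted above can be reproduced directly from the entries of Table 1; the short sketch below is only a consistency check of those numbers.

```python
import numpy as np

# Consistency check of the quoted relative differences, using the Table 1 entries.
mass    = np.array([1.29, 1.31, 1.33, 1.35, 1.369])          # Msun
R_gr    = np.array([2.609, 2.326, 2.005, 1.543, 1.051])      # 10^8 cm
R_n     = np.array([2.685, 2.426, 2.157, 1.829, 1.409])      # 10^8 cm
logg_gr = np.array([9.401, 9.507, 9.643, 9.878, 10.217])     # log cm s^-2
logg_n  = np.array([9.375, 9.470, 9.579, 9.728, 9.961])      # log cm s^-2

dR = np.abs(R_gr - R_n) / R_gr                 # |R_GR - R_N| / R_GR
dg = (10**logg_gr - 10**logg_n) / 10**logg_gr  # (g_GR - g_N) / g_GR
for m, r, g in zip(mass, dR, dg):
    print(f"M = {m:5.3f} Msun:  dR/R = {100 * r:4.1f} %,  dg/g = {100 * g:4.1f} %")
```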
## 3 Relativistic Nonradial Stellar Pulsations in WDs
In order to incorporate the relativistic effects in the pulsations of WDs, we adopt the relativistic Cowling approximation in the form developed by Yoshida & Lee (2002), and follow the GR formalism provided in Boston (2022).
### The relativistic Cowling approximation
The Cowling approximation of Newtonian nonradial pulsations (named after T. G. Cowling's pioneering paper; Cowling, 1941) is based on neglecting the gravitational potential perturbations during the fluid oscillations. This approximation has been widely used in Newtonian nonradial pulsation computations in the past, because it constitutes a 2nd-order differential eigenvalue problem, thus simplifying the complete 4th-order problem (Unno et al., 1989). It is also a very good approximation to periods of \(g\)-modes in WDs, which are primarily envelope modes (Montgomery et al., 1999). The Cowling approximation has been frequently used in asymptotic treatments of stellar pulsations (see, for instance, Tassoul, 1980), and also in numerical treatments of \(g\)-mode pulsations in rapidly rotating WDs (e.g., Saio, 2019; Kumar & Townsley, 2023), although it has fallen out of use in the context of present-day numerical calculations of Newtonian nonradial stellar pulsations and asteroseismology. The _relativistic_ Cowling approximation (McDermott et al., 1983), on the other hand, is generally employed in the field of pulsations of relativistic objects such as neutron stars (Lindblom & Splinter, 1989; Yoshida & Lee, 2002; Sotani & Takiwaki, 2020) and hybrid (hadron plus quark matter phases) neutron stars (Tonetto & Lugones, 2020; Zheng et al., 2023).
In the next sections, we first describe the relativistic correction factors involved in the pulsation problem. Then, we provide relativistic expressions to calculate the critical frequencies (Brunt-Vaisala and Lamb frequencies), after which we assess the coefficients of the pulsation differential equations in the relativistic Cowling form. Finally, we provide the two first order differential equations to be solved, along with the boundary conditions of the eigenvalue problem.
### Relativistic correction factors \(\mathcal{R}\), \(\mathcal{V}\), and potentials \(\nu\), \(\lambda\)
We start by considering the Schwarzschild metric of GR for space-time inside and around a star (Thorne, 1977):
\[ds^{2}=-e^{2\Phi/c^{2}}c^{2}dt^{2}+\left(1-\frac{2Gm}{c^{2}r}\right)^{-1}dr^{ 2}+r^{2}d\Omega^{2}, \tag{1}\]
where \(m\) is the "total mass inside radius \(r\)", which includes the rest mass, nuclear binding energy, internal energy, and gravity. \(\Phi\) is a gravitational potential, which in the Newtonian limit \(c\rightarrow\infty\) corresponds to the scalar Newtonian potential.
Following Thorne (1977) in his treatment of relativistic stellar interiors, it is convenient to write the metric in the form
\[ds^{2}=-\mathcal{R}^{2}c^{2}dt^{2}+\mathcal{V}^{2}dr^{2}+r^{2}d\Omega^{2}, \tag{2}\]
where the redshift correction factor \(\mathcal{R}\), and the volume correction factor \(\mathcal{V}\), are defined as (Thorne, 1977):
\[\mathcal{R}=e^{\Phi/c^{2}},\ \ \ \mathcal{V}=\left(1-\frac{2Gm}{c^{2}r}\right)^{-1/ 2}. \tag{3}\]
The metric is usually written also as a function of two relativistic gravitational potentials \(\nu\) and \(\lambda\)(Tolman, 1939; Oppenheimer & Volkoff, 1939), so that:
\[ds^{2}=-e^{\nu}c^{2}dt^{2}+e^{\lambda}dr^{2}+r^{2}d\Omega^{2}. \tag{4}\]
Equating Eqs. (1) and (4), we have
\[\nu=\frac{2\Phi}{c^{2}},\ \ \ \text{and}\ \ \ \lambda=-\ln\left(1-\frac{2Gm}{c^{ 2}r}\right). \tag{5}\]
We obtain \(\nu\) and \(\lambda\) in terms of the variables \(\mathcal{R}\) and \(\mathcal{V}\), that are the output of the relativistic LPCODE version (Althaus et al., 2022) by equating Eqs. (2) and (4):
\[\mathcal{R}^{2}=e^{\nu},\ \ \ \mathcal{V}^{2}=e^{\lambda}, \tag{6}\]
so that,
\begin{table}
\begin{tabular}{l c c c c}
\hline \hline
\(M_{\star}/M_{\odot}\) & \(R_{\star}^{\text{GR}}\) & \(R_{\star}^{\text{N}}\) & \(\log g^{\text{GR}}\) & \(\log g^{\text{N}}\) \\
 & [\(\times 10^{8}\) cm] & [\(\times 10^{8}\) cm] & [cm/s\({}^{2}\)] & [cm/s\({}^{2}\)] \\
\hline
1.29 & 2.609 & 2.685 & 9.401 & 9.375 \\
1.31 & 2.326 & 2.426 & 9.507 & 9.470 \\
1.33 & 2.005 & 2.157 & 9.643 & 9.579 \\
1.35 & 1.543 & 1.829 & 9.878 & 9.728 \\
1.369 & 1.051 & 1.409 & 10.217 & 9.961 \\
\hline
\end{tabular}
\end{table}
Table 1: Stellar masses, radii, and surface gravities of the ultra-massive ONe-core WD models at \(T_{\text{eff}}=10\,000\) K in the relativistic and Newtonian cases.
Figure 2: Left: the absolute relative difference between relativistic and Newtonian stellar radius versus stellar mass. Right: the relative difference between the relativistic and Newtonian surface gravity in terms of the stellar mass.
\[\nu=2\ln\mathcal{R},\quad\lambda=2\ln\mathcal{V}\,. \tag{7}\]
In the Newtonian limit, we have \(\mathcal{R}=e^{\nu/2}\to 1\) and \(\mathcal{V}=e^{\lambda/2}\to 1\), so that \(\nu,\lambda\to 0\).
We compute the derivatives of \(\nu\) and \(\lambda\) by calculating the numerical derivatives of \(\mathcal{R}\) and \(\mathcal{V}\) as:
\[\nu^{\prime}\equiv\frac{d\nu}{dr}=\frac{2}{\mathcal{R}}\left(\frac{d\mathcal{ R}}{dr}\right),\quad\lambda^{\prime}\equiv\frac{d\lambda}{dr}=\frac{2}{ \mathcal{V}}\left(\frac{d\mathcal{V}}{dr}\right)\,. \tag{8}\]
The numerical derivatives of \(\mathcal{R}\) and \(\mathcal{V}\), as well as \(\nu^{\prime}\) and \(\lambda^{\prime}\), are usually noisy when computed following Eqs. (8). To avoid this, we compute \(\nu^{\prime}\) and \(\lambda^{\prime}\) by employing solutions to the Einstein field equations for a static, spherically symmetric distribution of matter, given by Tolman (1939) and Oppenheimer & Volkoff (1939) (see also Tooper 1964):
\[e^{-\lambda}\left(\frac{1}{r^{2}}-\frac{\lambda^{\prime}}{r}\right)-\frac{1} {r^{2}}=-\frac{8\pi G}{c^{4}}\rho c^{2}, \tag{9}\]
\[e^{-\lambda}\left(\frac{1}{r^{2}}+\frac{\nu^{\prime}}{r}\right)-\frac{1}{r^{2} }=\frac{8\pi G}{c^{4}}P, \tag{10}\]
\[\frac{1}{2}e^{-\lambda}\left[\nu^{\prime\prime}+\frac{1}{2}\left(\nu^{\prime} -\lambda^{\prime}\right)\left(\nu^{\prime}+\frac{2}{r}\right)\right]=\frac{8 \pi G}{c^{4}}P, \tag{11}\]
where \(P\) is the pressure and \(\rho\) is the mass-energy density (not just the mass density). With some rearranging, we can write:
\[\lambda^{\prime}=\frac{1}{r}+\left(\frac{8\pi G}{c^{2}}\rho r-\frac{1}{r} \right)e^{\lambda} \tag{12}\]
\[\nu^{\prime}=-\frac{1}{r}+\left(\frac{8\pi G}{c^{4}}Pr+\frac{1}{r}\right)e^{\lambda} \tag{13}\]
\[\nu^{\prime\prime}=\frac{16\pi G}{c^{4}}Pe^{\lambda}-\frac{1}{2}\left(\nu^{ \prime}-\lambda^{\prime}\right)\left(\nu^{\prime}+\frac{2}{r}\right)\equiv \frac{d^{2}\nu}{dr^{2}} \tag{14}\]
To summarise, in our numerical treatment we employ Eqs. (12) and (13) to compute \(\lambda^{\prime}\) and \(\nu^{\prime}\) using the value of \(\lambda\) calculated with Eq. (7). We employ Eq. (14) to assess \(\nu^{\prime\prime}\) using \(\lambda\), \(\lambda^{\prime}\), and \(\nu^{\prime}\) derived above. The quantity \(\nu^{\prime\prime}\) is required to compute one of the coefficients of the pulsation differential equations (Sect. 3.4).
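A minimal sketch of this procedure is given below; it assumes that the equilibrium profiles \(r\), \(\rho\), \(P\) and the metric function \(\lambda\) (obtained from \(\mathcal{V}\) through Eq. 7) are available as arrays on the model grid, and it is only an illustration, not the actual LPCODE implementation.

```python
import numpy as np

# Given equilibrium profiles r, rho (mass-energy density), P and the metric
# function lambda (from V via Eq. (7)), evaluate lambda', nu' and nu'' from
# Eqs. (12)-(14) instead of differentiating noisy numerical data. The arrays
# are placeholders for quantities taken from a stellar model.
G = 6.674e-8   # cm^3 g^-1 s^-2
c = 2.998e10   # cm s^-1

def metric_derivatives(r, rho, P, lam):
    """Return (lambda', nu', nu'') on the model grid, following Eqs. (12)-(14)."""
    e_lam = np.exp(lam)
    lam_p = 1.0 / r + (8.0 * np.pi * G / c**2 * rho * r - 1.0 / r) * e_lam   # Eq. (12)
    nu_p = -1.0 / r + (8.0 * np.pi * G / c**4 * P * r + 1.0 / r) * e_lam     # Eq. (13)
    nu_pp = (16.0 * np.pi * G / c**4) * P * e_lam \
            - 0.5 * (nu_p - lam_p) * (nu_p + 2.0 / r)                        # Eq. (14)
    return lam_p, nu_p, nu_pp
```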
### Relativistic adiabatic exponent, sound speed, Lamb and Brunt-Vaisala frequencies
The relativistic adiabatic exponent, defined as \(\Gamma_{1}=\left(\frac{\partial\log P}{\partial\log n}\right)_{\textrm{ad}}\), where \(n\) is the baryon number density, can be expressed as (Thorne 1967; Meltzer & Thorne 1966):
\[\Gamma_{1}=\frac{\rho+(P/c^{2})}{P}\left(\frac{\partial P}{\partial\rho} \right)_{\textrm{ad}}=\frac{\rho+(P/c^{2})}{\rho}\left(\frac{\partial\log P}{ \partial\log\rho}\right)_{\textrm{ad}}, \tag{15}\]
This should be compared with the Newtonian case, where \(\Gamma_{1}=\left(\frac{\partial\log P}{\partial\log\rho}\right)_{\rm ad}\). The relativistic sound speed, \(v_{s}\), is given by (Curtis, 1950):
\[v_{s}^{2}=\frac{\Gamma_{1}P}{\rho+(P/c^{2})}, \tag{16}\]
whereas in the Newtonian case, \(v_{s}^{2}=\left(\frac{\partial P}{\partial\rho}\right)_{\rm ad}=\Gamma_{1}\, \frac{P}{\rho}\).
The squared Lamb and Brunt-Vaisala critical frequencies of the nonradial stellar pulsations, \(L_{\ell}^{2}\) and \(N^{2}\), can be written as (Boston, 2022):
\[L_{\ell}^{2}=\ell(\ell+1)\frac{v_{s}^{2}}{r^{2}}, \tag{17}\]
\[N^{2}=\frac{c^{2}\,\nu^{\prime}}{2r}\,e^{-\lambda}\left[\frac{1}{\Gamma_{1}}\left(\frac{d\log P}{d\log r}\right)-\frac{\rho}{\rho+(P/c^{2})}\left(\frac{d\log\rho}{d\log r}\right)\right]. \tag{18}\]
This expression for \(N^{2}\) is analogous to the Newtonian version of \(N^{2}\), with the replacement \(g\rightarrow\frac{c^{2}}{2}\nu^{\prime}\) and an additional relativistic correction factor \(e^{-\lambda}\).
The relativistic prescription given by Eq. (18) for the assessment of the Brunt-Vaisala frequency is numerically ill-behaved, due to the high degree of electronic degeneracy prevailing in the core of the ultra-massive WDs, similar to the case of Newtonian pulsations (Brassard et al., 1991). In particular, the use of Eq. (18) leads to unacceptable numerical noise in \(N\), which can lead to miscalculations of the adiabatic \(g\)-mode periods. To avoid this problem, we employ a numerically convenient relativistic expression, analogous to the Newtonian recipe known as the _modified Ledoux_ prescription (Tassoul et al., 1990). The appropriate relativistic expression for \(N^{2}\), which is derived in Appendix A, is:
\[N^{2}=e^{-\lambda}\left(\frac{c^{2}}{2}\nu^{\prime}\right)^{2}\frac{\rho+(P/c^{2})}{P}\frac{\chi_{\rm T}}{\chi_{n}}\left[\nabla_{\rm ad}-\nabla+B\right], \tag{19}\]
where \(B\) is the Ledoux term, defined as:
\[B=-\frac{1}{\chi_{\rm T}}\sum_{i=1}^{M-1}\chi_{X_{i}}\,\frac{d\ln X_{i}}{d\ln P}, \tag{20}\]
\(M\) being the number of different atomic species with fractional abundances \(X_{i}\) that satisfy the constraint \(\sum_{i=1}^{M-1}X_{i}+X_{M}=1\). The compressibilities \(\chi_{\rm T}\), \(\chi_{n}\), and \(\chi_{X_{i}}\) are defined, similarly to the Newtonian problem, as:
Figure 6: Upper panels: abundances by mass of the different chemical species as a function of the fractional mass, corresponding to the template WD models with masses \(M_{\bullet}=1.29M_{\odot}\) (left) and \(M_{\bullet}=1.35M_{\odot}\) (right), and effective temperature \(T_{\rm eff}\sim 12\,000\) K. Lower panels: logarithm of the squared Brunt-Väisälä and Lamb (\(\ell=1\)) frequencies for the GR case (solid lines) and the N case (dashed lines). The grey areas correspond to the crystallised core regions, in which \(g\) modes cannot propagate.
\[\chi_{\rm T}=\left(\frac{\partial\ln P}{\partial\ln T}\right)_{n,\{X_{i}\}},\ \ \chi_{n}=\left(\frac{\partial\ln P}{\partial\ln n}\right)_{T,\{X_{i}\}},\ \ \chi_{X_{i}}=\left(\frac{\partial\ln P}{\partial\ln X_{i}}\right)_{T,n}. \tag{21}\]
Using \((d\ln\rho/d\ln n)=(\rho+P/c^{2})/\rho\) (Eq. 5.90 of Boston 2022, see also Thorne 1967), the compressibility \(\chi_{n}\) can be computed as:
\[\chi_{n}=\frac{\rho+(P/c^{2})}{\rho}\chi_{\rho}, \tag{22}\]
where \(\chi_{\rho}=\left(\frac{\partial\ln P}{\partial\ln\rho}\right)_{T,\{X_{i}\}}\). Here, \(\nabla_{\rm ad}\) and \(\nabla\) are the adiabatic and actual temperature gradients, respectively, defined as:
\[\nabla_{\rm ad}=\left(\frac{\partial\ln T}{\partial\ln P}\right)_{{\rm ad},\{ X_{i}\}},\ \ \ \ \nabla=\frac{d\ln T}{d\ln P}. \tag{23}\]
Eq. (19) is completely analogous to the Newtonian expression for the squared Brunt-Vaisala frequency, \(N^{2}=g^{2}(\rho/P)(\chi_{\rm T}/\chi_{\rho})\left[\nabla_{\rm ad}-\nabla+B\right]\). In the relativistic formula, \(g\) has been replaced by \(c^{2}\nu^{\prime}/2\), and the ratio \(\rho/P\) becomes \((\rho+P/c^{2})/P\). There is an additional relativistic factor, \(e^{-\lambda}\), and the compressibility \(\chi_{\rho}\) is replaced by \(\chi_{n}\), where \(n\) is the baryonic number density.
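For illustration, a minimal sketch of how Eqs. (16), (17), (19) and (22) translate into code is given below; all thermodynamic inputs (\(\Gamma_{1}\), \(\chi_{\rm T}\), \(\chi_{\rho}\), \(\nabla_{\rm ad}\), \(\nabla\), \(B\)) are assumed to be supplied by the equation of state along the model grid, and the function names are placeholders rather than the actual LP-PUL routines.

```python
import numpy as np

# Sketch of the relativistic critical frequencies: the Lamb frequency from
# Eqs. (16)-(17) and the "modified Ledoux" Brunt-Vaisala frequency of
# Eqs. (19) and (22). All thermodynamic inputs are assumed to come from the
# equation of state; the names are placeholders.
C_LIGHT = 2.998e10  # cm s^-1

def lamb2(ell, Gamma1, P, rho, r, c=C_LIGHT):
    vs2 = Gamma1 * P / (rho + P / c**2)      # Eq. (16)
    return ell * (ell + 1) * vs2 / r**2      # Eq. (17)

def brunt2_ledoux(lam, nu_p, P, rho, chi_T, chi_rho, grad_ad, grad, B, c=C_LIGHT):
    g_rel = 0.5 * c**2 * nu_p                       # g -> (c^2/2) nu'
    chi_n = (rho + P / c**2) / rho * chi_rho        # Eq. (22)
    return (np.exp(-lam) * g_rel**2 * (rho + P / c**2) / P
            * (chi_T / chi_n) * (grad_ad - grad + B))   # Eq. (19)
```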
### Differential equations of the relativistic Cowling approximation
Here, we formulate the system of differential equations of the nonradial pulsations in the relativistic Cowling approximation form, that results when we ignore Eulerian metric perturbations in the pulsation equations (McDermott et al., 1983). This reduces the 4th-complex-order problem of nonradial pulsations in GR to a 2nd-real-order problem, which can be written as two real, 1st-order differential equations. Following Yoshida & Lee (2002), we define the dimensionless variables \(\omega\), \(y_{1}\), and \(y_{2}\), analogous to Dziembowski's variables in Newtonian pulsations (Dziembowski, 1971)2:
Footnote 2: At variance with Boston (2022), we use \(\sigma\) for the physically meaningful oscillation frequency, and \(\omega\) for the dimensionless frequency, following Unno et al. (1989).
\[\omega^{2}=\frac{R_{\star}^{3}}{GM_{\star}}\sigma^{2},\ \ \ y_{1}=\frac{\xi_{r}}{r}e^{-i\sigma t},\ \ \ y_{2}=\xi_{h}C_{1}\omega^{2}e^{-i\sigma t}, \tag{24}\]
where \(\xi_{r}\) and \(\xi_{h}\) correspond to the Lagrangian radial and horizontal displacements, respectively. We also define the following dimensionless functions, analogous to Dziembowski's coefficients (Dziembowski, 1971), calculated with respect to the stellar equilibrium model (Boston, 2022):
\[V_{g}(r)=-\frac{1}{\Gamma_{1}}\left(\frac{d\log P}{d\log r}\right)=\frac{\rho +(P/c^{2})}{\Gamma_{1}P}\frac{rc^{2}}{2}\nu^{\prime}, \tag{25}\]
\[U_{1}(r)=2+r\frac{\nu^{\prime\prime}}{\nu^{\prime}}, \tag{26}\]
\[U_{2}(r)=r\lambda^{\prime}, \tag{27}\]
\[C_{1}(r)=\frac{GM_{\star}}{c^{2}R_{\star}^{3}}\frac{2r}{\nu^{\prime}}e^{-\nu}, \tag{28}\]
and (Thorne, 1966)
\[A^{\star}(r)=\frac{1}{\Gamma_{1}}\left(\frac{d\log P}{d\log r}\right)-\frac{ \rho}{\rho+(P/c^{2})}\left(\frac{d\log\rho}{d\log r}\right)=\frac{rN^{2}}{c^{ 2}\nu^{\prime}/2}e^{\lambda}. \tag{29}\]
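A schematic evaluation of these coefficients on the equilibrium-model grid is sketched below; the profile arrays and names are placeholders, not the LP-PUL implementation.

```python
import numpy as np

# Schematic evaluation of the dimensionless coefficients of Eqs. (25)-(29).
# The profile arrays (r, rho, P, Gamma1, lam, nu, nu_p, nu_pp, lam_p, brunt2)
# are placeholders for quantities taken from the relativistic model.
def cowling_coefficients(r, rho, P, Gamma1, lam, nu, nu_p, nu_pp, lam_p,
                         brunt2, M_star, R_star, G=6.674e-8, c=2.998e10):
    Vg = (rho + P / c**2) / (Gamma1 * P) * 0.5 * r * c**2 * nu_p            # Eq. (25)
    U1 = 2.0 + r * nu_pp / nu_p                                             # Eq. (26)
    U2 = r * lam_p                                                          # Eq. (27)
    C1 = (G * M_star / (c**2 * R_star**3)) * 2.0 * r / nu_p * np.exp(-nu)   # Eq. (28)
    Astar = r * brunt2 / (0.5 * c**2 * nu_p) * np.exp(lam)                  # Eq. (29)
    return Vg, U1, U2, C1, Astar
```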
In the Newtonian limit, \(A^{\star}\), \(V_{g}\), and \(C_{1}\) reduce to their conventional expressions (Unno et al., 1989), \(U_{1}\) tends to the quantity \(U\) defined in Unno et al. (1989), and \(U_{2}\to 0\). Using these definitions, and defining \(x=r/R_{\star}\), the resulting differential equations for the relativistic Cowling approximation (McDermott et al., 1983; Lindblom & Splinter, 1989; Yoshida & Lee, 2002; Boston, 2022) are:
\[x\frac{dy_{1}}{dx}=\left(V_{g}-3+U_{2}\right)y_{1}+\left(\frac{\ell(\ell+1)}{C _{1}\omega^{2}}-V_{g}\right)y_{2}, \tag{30}\]
\[x\frac{dy_{2}}{dx}=\left(e^{\lambda}C_{1}\omega^{2}-A^{\star}\right)y_{1}+ \left(1+A^{\star}-U_{1}\right)y_{2}. \tag{31}\]
In the Newtonian limit, we have \(e^{\lambda}\to 1\) and \(U_{2}\to 0\), and the equations adopt exactly the form of the Newtonian Cowling approximation (Cowling, 1941; Unno et al., 1989). The boundary conditions for this system of differential equations are, at the stellar (fluid) center (\(x=0\)):
\[y_{1}C_{1}\omega^{2}-\ell y_{2}=0, \tag{32}\]
and at the stellar surface (\(x=1\)):
\[y_{1}-y_{2}=0,\ \ \ {\rm and}\ \ y_{1}=1\ \ {\rm(normalisation\ condition)}. \tag{33}\]
These are the same boundary conditions as for the Newtonian Cowling approximation.
For the ultra-massive WD models considered in this work, the stellar core is crystallised, so that the so called "hard-sphere" boundary conditions (Montgomery et al., 1999) may be adopted, which exclude the \(g\)-mode oscillations from the solid core regions. In that case, Eq. (32) is replaced by the condition:
\[y_{1}=0\ \ \ {\rm and}\ \ \ y_{2}=\ {\rm arbitrary} \tag{34}\]
at the radial shell \(x=x_{\rm crys}\) associated with the outward-moving crystallisation front, instead of the center of the star (\(x=0\)). To maintain consistency between Newtonian and GR calculations and for a clean comparison, we assume the same internal boundary condition for the GR case as for the N case, that the eigenfunctions are approximately zero in the solid core, and can be treated with a hard-sphere boundary condition.
In this work, to take into account the relativistic effects on \(g\)-mode pulsations of crystallised ultra-massive WD models, the LP-PUL pulsation code (Corsico & Althaus, 2006) has been appropriately modified to solve the problem of relativistic pulsations in the Cowling approximation as given by Eqs. (30) and (31), with boundary conditions given by Eqs. (33) and (34).
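Schematically, the eigenvalue problem defined above can be illustrated with the following shooting-method sketch; it is only an outline under the stated boundary conditions, not the LP-PUL implementation, and the coefficient interpolants as well as the integration tolerances are placeholders.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Minimal shooting-method sketch for Eqs. (30)-(31) with the hard-sphere inner
# condition (34) and the surface condition (33). The coefficient interpolants
# Vg(x), U1(x), U2(x), C1(x), Astar(x) and lam(x) are assumed to be built from
# the equilibrium model; all names here are placeholders.
def rhs(x, y, omega2, ell, Vg, U1, U2, C1, Astar, lam):
    y1, y2 = y
    dy1 = ((Vg(x) - 3.0 + U2(x)) * y1
           + (ell * (ell + 1) / (C1(x) * omega2) - Vg(x)) * y2) / x         # Eq. (30)
    dy2 = ((np.exp(lam(x)) * C1(x) * omega2 - Astar(x)) * y1
           + (1.0 + Astar(x) - U1(x)) * y2) / x                             # Eq. (31)
    return [dy1, dy2]

def surface_mismatch(omega2, ell, x_crys, coeffs):
    # hard-sphere condition at the crystallisation front: y1 = 0, y2 arbitrary
    sol = solve_ivp(rhs, (x_crys, 1.0), [0.0, 1.0],
                    args=(omega2, ell) + coeffs, rtol=1e-8)
    y1_s, y2_s = sol.y[:, -1]
    return y1_s - y2_s   # Eq. (33): this vanishes at an eigenfrequency

# Eigenfrequencies are located by scanning omega2 and bracketing sign changes of
# surface_mismatch (e.g. with scipy.optimize.brentq); coeffs is the tuple
# (Vg, U1, U2, C1, Astar, lam) of interpolants.
```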
## 4 Pulsation results
### Properties of template models
It is illustrative to examine the metric parameters \(\nu\), \(\lambda\), \(\nu^{\prime}\), \(\lambda^{\prime}\) and \(\nu^{\prime\prime}\). In Figs. 3, 4, and 5 we show \(\nu\) and \(\lambda\) and their derivatives \(\nu^{\prime}\), \(\lambda^{\prime}\) and \(\nu^{\prime\prime}\), in terms of the outer mass fraction coordinate, corresponding to the two template WD models with masses \(M_{\star}=1.29M_{\sun}\) (left) and \(M_{\star}=1.35M_{\sun}\) (right), and effective temperature \(T_{\rm eff}\sim 12\,000\) K. As can be seen, \(\nu\) and \(\lambda\) are very small throughout the star, of the same order as the relativistic correction factor \(\varepsilon\sim 0.001\). However, in the center the metric functions are more extreme than near the surface, pointing to the high central concentration of the mass of these stars.
The chemical profiles (abundances by mass, \(X_{i}\)) of the different nuclear species corresponding to the template models are plotted in the upper panels of Fig. 6 as a function of the fractional outer mass. In the lower panels, we depict the logarithm of the squared Brunt-Vaisala
Figure 11: Same as Fig. 7, but for the logarithm of the quantity \(A^{\star}\) (Eq. 29).
Figure 8: Same as Fig. 7, but for the quantity \(U_{1}\) (Eq. 26).
Figure 10: Same as Fig. 7, but for the quantity \(C_{1}\) (Eq. 28).
Figure 7: Logarithm of the quantity \(V_{g}\) for the GR case (black solid lines), defined by Eq. (25), as a function of the fractional mass, corresponding to the template WD models with masses \(M_{\star}=1.29M_{\sun}\) (left) and \(M_{\star}=1.35M_{\sun}\) (right), and effective temperature \(T_{\rm eff}\sim 12\,000\) K. For comparison, we include the function \(V_{g}\) computed for the N case (red dashed lines). The vertical dashed blue line indicates the location of the boundary of the crystallised core region (grey zone).
Figure 9: Same as Fig. 7, but for the quantity \(U_{2}\) (Eq. 27). Since \(U_{2}\) has no counterpart in the Newtonian pulsation equations, only the GR case is plotted (black curves).
(black lines) and dipole (\(\ell=1\)) Lamb (red lines) frequencies for the GR case (solid lines) and the N case (dashed lines). We have emphasised the crystallised regions of the core with grey. The chemical interface of \({}^{12}\)C, \({}^{16}\)O, and \({}^{20}\)Ne, which is located at \(-\log{(1-M_{r}/M_{\star})}\sim 1.5\), is embedded in the crystalline part of the core for both template models. Since we assume that \(g\)-mode eigenfunctions cannot penetrate the solid regions (due to the hard-sphere boundary condition, Eq. 34), this chemical interface is not relevant for the mode-trapping properties of the models. The chemical transition region between \({}^{12}\)C, \({}^{16}\)O, and \({}^{4}\)He [\(-\log{(1-M_{r}/M_{\star})}\sim 4.5\)], which is located in the fluid region in both models, also does not have a significant impact on the mode-trapping properties. Thus, mode-trapping properties are almost entirely determined by the presence of the \({}^{4}\)He/\({}^{1}\)H transition, which is located in the fluid external regions, at \(-\log{(1-M_{r}/M_{\star})}\sim 6\).
By closely inspecting Fig. 6, we conclude that the Brunt-Vaisala and Lamb frequencies for the N and GR cases are similar for the model with \(1.29M_{\sun}\), although they are significantly different for the \(M_{\star}=1.35M_{\sun}\) model, with both critical frequencies being higher for the GR case than for the N case. Because of this, the \(g\)-mode frequencies are expected to shift to larger values, so that all periods experience a global offset towards shorter values in the relativistic case compared to the Newtonian case. This will be verified with the calculations of the \(g\)-mode period spectra in both situations (Sect. 4.2).
We close this section by comparing the coefficients of the differential equations of the relativistic Cowling approximation with their Newtonian counterparts. In Figs. 7 to 11 we depict with black curves the dimensionless functions \(V_{g},U_{1},U_{2},C_{1}\) and \(A^{*}\) in the GR case, as defined by Eqs. (25) to (29), along with the same quantities corresponding to the N case (red curves), computed according to their definition (see, e.g., Unno et al., 1989). We include the cases of the two template WD models with \(M_{\star}=1.29M_{\sun}\) (left panel) and \(M_{\star}=1.35M_{\sun}\) (right panel). We marked the crystallised region in each model with a grey area with a dashed blue boundary. These figures demonstrate that the dimensionless quantities in the GR case are very similar to the ones for the N case, and this is true for both of the representative models. This is not surprising, since the relativistic correction factors \(\nu\) and \(\lambda\) and their derivatives \(\nu^{\prime},\lambda^{\prime}\) and \(\nu^{\prime\prime}\), that are included in the calculation of the dimensionless coefficients, are small. For the specific case of \(A^{\star}\), some numerical noise is observed in the core regions. This is irrelevant for the purposes of this investigation, since those regions are contained in the crystallised core and do not affect the \(g\) modes, which are prevented from propagating in the solid phase.
### Newtonian and relativistic \(g\)-mode period spectra
We computed N and GR nonradial \(g\)-mode \(\ell=1\) adiabatic pulsation periods in the range \(50\lesssim\Pi\lesssim 2000\) s using an updated version of the LP-PUL pulsation code that includes the capability to solve the pulsations equations in the relativistic Cowling approximation described in Sect. 3.1. The N-case pulsation periods were calculated by solving the differential problem of the Newtonian nonradial stellar pulsations (Unno et al., 1989). We emphasise that in the GR case we are using evolutionary WD models calculated in GR with relativistic, 2nd-order Cowling mode pulsations that ignore gravitational (i.e. spacetime) perturbations, while in the N case we are using evolutionary WD models calculated with Newtonian gravity and Newtonian, 4th-order mode pulsations that include gravitational perturbations3. We have also computed Newtonian periods by solving the 2nd-order Newtonian Cowling approximation (Unno et al., 1989). For \(g\)-modes, these 2nd-order periods are sufficiently similar to the 4th-order periods used in the N case, that the results are not impacted.
Footnote 3: This is at variance with the preliminary results presented in Corsico et al. (2023), in which Newtonian equations were used for the \(g\)-modes, with a fully relativistic WD as the background.
In the analysis below, to study the dependence of the relativistic effects on the stellar mass, we compare the \(g\)-mode period spectra calculated according to the N and GR cases for ultra-massive WD models of different stellar masses at effective temperatures typical of the ZZ Ceti instability strip.
Before analysing the behaviour of the periods, we first examine the impact of GR on the period spacing of \(g\) modes. According to the asymptotic theory of stellar pulsations, and in the absence of chemical gradients, the pulsation periods of the \(g\) modes with high radial order \(k\) (long periods) are expected to be uniformly spaced with a constant period separation given by (Tassoul, 1980; Tassoul et al., 1990):
\[\Delta\Pi_{\ell}^{\mathrm{a}}=\Pi_{0}/\sqrt{\ell(\ell+1)}, \tag{35}\]
where
\[\Pi_{0}=2\pi^{2}\left[\int_{\mathrm{fluid}}\frac{N}{r}dr\right]^{-1}, \tag{36}\]
with the integral in Eq. (36) calculated only in the fluid part of the star. Fig. 12 depicts the asymptotic period spacing for the sequences of \(1.29,1.31,1.33,1.35\) and \(1.369\)\(M_{\sun}\) WD models in terms of the effective temperature along the ZZ Ceti instability strip. We find that \(\Delta\Pi_{\ell}^{\mathrm{a}}\), the asymptotic period spacing, is smaller for the relativistic WD
Figure 12: Dipole \(\ell=1\) asymptotic period spacing for ultra-massive WD sequences with different stellar masses for the relativistic (GR) and Newtonian (N) cases in terms of \(T_{\mathrm{eff}}\) through the whole ZZ Ceti instability strip (light grey area).
sequences compared to the Newtonian sequences. This is expected, since the asymptotic period spacing is inversely proportional to the integral of the Brunt-Vaisala frequency divided by the radius. Since the Brunt-Vaisala frequency is larger for the relativistic case (see Fig. 6), the integral is larger and its inverse is smaller than in the Newtonian case. The differences of \(\Delta\Pi_{\ell}^{\rm a}\) between the GR and the N cases are larger for higher stellar masses, reaching a minimum difference of \(\sim 0.6\) s (which represents a relative variation in period spacing of \(\sim 3\)%) for \(1.29\)\(M_{\sun}\), and a maximum difference of \(\sim 3\) s (that constitutes a relative variation of \(\sim 48\)%) for \(1.369\)\(M_{\sun}\) for effective temperatures within the ZZ Ceti instability strip.
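A minimal sketch of the evaluation of Eqs. (35)-(36) is given below; the arrays and the location of the crystallisation front are placeholders for quantities taken from a model.

```python
import numpy as np

# Asymptotic period spacing of Eqs. (35)-(36): the integral of N/r is
# restricted to the fluid region outside the crystallised core. The arrays
# r and brunt2 (= N^2) and the front location r_crys are placeholders.
def asymptotic_spacing(r, brunt2, r_crys, ell=1):
    fluid = (r > r_crys) & (brunt2 > 0.0)
    integral = np.trapz(np.sqrt(brunt2[fluid]) / r[fluid], r[fluid])
    Pi0 = 2.0 * np.pi**2 / integral               # Eq. (36)
    return Pi0 / np.sqrt(ell * (ell + 1))         # Eq. (35)
```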
Since there are substantial differences in the separation of \(g\)-mode periods in the GR and N cases, it is natural to expect significant differences in the individual pulsation periods (\(\Pi\)). In the upper panels of Figs. 13 and 14, we compare the periods of the GR and N cases for the least massive (\(1.29\)\(M_{\sun}\)) and the most massive (\(1.369\)\(M_{\sun}\)) WD models considered in this work (\(T_{\rm eff}\sim 12\,000\) K). \(M_{\star}=1.369\)\(M_{\sun}\) corresponds to the maximum possible value in the calculations of Althaus et al. (2022), above which the models become unstable with respect to GR effects. It is clear from these figures that the periods in the relativistic case are shorter than those in the Newtonian case, with the absolute differences becoming larger with increasing \(k\). This is mainly due to the structural differences of the equilibrium models in the GR case in relation to the N case (smaller radii and larger gravities characterising the relativistic WD models, see Fig. 1), and to a much lesser extent, due to the differences in the relativistic treatment of the pulsations in comparison with the Newtonian one.
To quantify the impact of GR on the period spectrum, we have plotted in the lower panel of each figure the absolute value of the relative differences between the GR periods and the N periods, \(\delta=|\Pi_{\rm GR}-\Pi_{\rm N}|/\Pi_{\rm GR}\), versus the radial order. These differences are smaller than \(\sim 0.035\) for the less massive model (\(1.29\)\(M_{\sun}\), Fig. 13), but they become as large as \(\sim 0.5\) for the most massive model (\(1.369\)\(M_{\sun}\), Fig. 14). We conclude that, for ultra-massive WDs with masses in the range \(1.29\leq M_{\star}/M_{\sun}\leq 1.369\), the impact of GR on the pulsations is important, resulting in changes from \(\sim 4\)% to \(\sim 50\)% in the values of \(g\)-mode periods.
Another way to visualise the impact of GR on the pulsation periods is to plot the periods for the GR and N cases in terms of stellar mass. We display in the upper panel of Fig. 15 the periods of selected \(g\) modes (with radial order \(k=5,10,20,40\) and \(70\)) in terms of the stellar mass for the GR and the N cases. In the lower panel, we show the absolute value of the relative difference \(\delta\) (percentage %) between the relativistic and Newtonian periods, as a function of the stellar mass. The relative differences in the periods exhibit an exponential growth with stellar mass, without appreciable dependence on the radial order (see also Figs. 13 and 14). The behaviour of \(\delta\) with the stellar mass visibly mirrors the exponential increase in the relative differences between the relativistic and Newtonian stellar radii and surface gravities, as seen in Fig. 2.
At first glance, the relative differences \(\delta\) might seem larger than expected, given recent work on periods in average-mass WDs by Boston et al. (2023). For the simple models considered there, it was shown that \(\delta\sim z\sim 10^{-4}\) for a WD with \(M_{\star}\approx 0.6M_{\sun}\). However, their Figure 4 shows that \(\delta\) can be larger than \(z\) for stars with high central concentrations, such as ultra-massive WDs, consistent with our present findings. To confirm this, we also carried out pulsational calculations on a simplified stratified Chandrasekhar-type equilibrium model that mimics a \(\sim 1.3\)\(M_{\sun}\) ultra-massive WD, in the case of Newtonian gravity and in the Post-Newtonian approximation, following the process in Boston et al. (2023). These calculations and their results are presented in Appendix B. The comparison of the periods in both cases indicates a relative difference of the order of \(10^{-2}\), in complete agreement with the results obtained here for our WD models of \(1.29\)\(M_{\sun}\) and \(1.31\)\(M_{\sun}\) (see Figs. 13 and 15).
It is interesting to examine how the period spacings versus periods change depending on whether we consider the GR case or the N case. We define the forward period spacing as \(\Delta\Pi=\Pi_{k+1}-\Pi_{k}\). The dipole (\(\ell=1\)) forward period spacing in terms of the periods is plotted in Figs. 16 to 20 for WD models with stellar masses between \(1.29\) and \(1.369\)\(M_{\sun}\) and \(T_{\rm eff}=12\,000\) K. We have adopted the same range in the \(y-\)axis in order to make the comparison of the results between the different stellar masses clearer. These figures show that, in general, the period spacing is larger in the N case than in the GR case, and that this difference becomes larger as the stellar mass increases. This
Figure 14: Same as Fig. 13, but for the case of a WD model with \(M_{\bullet}=1.369\)\(M_{\sun}\).
Figure 13: Upper panel: \(\ell=1\)\(g\)-mode periods in terms of the radial order (\(k\)) for a WD model with \(M_{\bullet}=1.29\)\(M_{\sun}\) and \(T_{\rm eff}\sim 12\,000\) K for the relativistic (GR, black dots) and Newtonian (N, red dots) cases. Lower panel: the absolute value of the relative difference between the periods of the GR and the N cases, \(\delta=|\Pi_{\rm GR}-\Pi_{\rm N}|/\Pi_{\rm GR}\) (blue).
is expected based on the behaviour of the asymptotic period spacing (see Fig. 12), which is indicated with horizontal dashed lines.
### The case of the ultra-massive ZZ Ceti star WD J0049\(-\)2525
The ultra-massive DA WD star WD J004917.14\(-\)252556.81 (\(T_{\rm eff}=13\,020\pm 460\) K, \(\log g=9.341\pm 0.036\)) is the most massive pulsating WD known to date (Kilic et al., 2023b). It shows only two periods, at \(\sim 209\) s and \(\sim 221\) s, which are insufficient to find a single seismological model that would give us details of its internal structure. Extensive
Figure 16: Dipole (\(\ell=1\)) forward period spacing (\(\Delta\Pi=\Pi_{k+1}-\Pi_{k}\)) versus periods, corresponding to a WD model with \(M_{\bullet}=1.29M_{\sun}\) and \(T_{\rm eff}\sim 12\,000\) K for the relativistic case (red), and the Newtonian case (black). The horizontal dashed lines correspond to the asymptotic period spacings for both cases.
Figure 17: Same as Fig. 16, but for a WD model with \(M_{\bullet}=1.31M_{\sun}\).
Figure 18: Same as Fig. 16, but for a WD model with \(M_{\bullet}=1.33M_{\sun}\).
Figure 20: Same as Fig. 16, but for a WD model with \(M_{\bullet}=1.369M_{\sun}\).
Figure 15: Upper panel: the periods of selected \(\ell=1\)\(g\) modes (\(k=5\), 10, 20, 40 and 70) in terms of the stellar mass, for the GR case (solid lines with filled dots) and the N case (dashed lines with hollow dots). Lower panel: the absolute relative difference \(\delta\) (percentage %) between the relativistic and Newtonian periods, as a function of the stellar mass.
follow-up time-series photometry could allow discoveries of a significant number of additional pulsation periods that would help to probe its interior. Considering the ONe-core WD evolutionary models of Althaus et al. (2022), WD J0049\(-\)2525 has \(M_{\star}=1.283\pm 0.008\ M_{\sun}\) in Newtonian gravity, or \(M_{\star}=1.279\pm 0.007\ M_{\sun}\) if we adopt the GR treatment. This heavyweight ZZ Ceti, in principle, could be considered as an ideal target to explore the relativistic effects on ultra-massive WD pulsations. However, the difference between the relativistic and Newtonian mass of this target is tiny: a difference of only \(0.004M_{\sun}\) is even smaller than the uncertainties in the mass estimates. This small difference is due to the star being just slightly below the lower mass limit for the relativistic effects to be important4.
Footnote 4: That is, \(M_{\star}\sim 1.3M_{\sun}\), the lower limit of the mass regime of the so-called “_relativistic ultra-massive WDs_” (Althaus et al., 2023).
Fig. 15 (see also Fig. 16) demonstrates that the effects of GR on the \(g\)-mode periods of WD J0049\(-\)2525 are less than \(\sim 1\%\). Although extremely important for being the most massive pulsating WD star known, WD J0049\(-\)2525 is not massive enough for the exploration of the GR effects on WD pulsations. We conclude that, to be able to study the effects of GR on WD pulsations, we have to wait for the discovery and monitoring of even more massive pulsating WDs, especially the ones with \(M_{\star}\gtrsim 1.33\ M_{\sun}\).
### Prospects for finding pulsating WDs where GR effects are significant
Figure 21 shows the masses and effective temperatures for high probability (\(P_{\rm WD}\geq 0.9\)) WD candidates with \(M_{\star}\geq 1.3\ M_{\sun}\) in the Gaia EDR3 WD sample from Gentile Fusillo et al. (2021) assuming CO cores. Here we limit the sample to the temperature range near the ZZ Cei instability strip. The blue and red lines show the boundaries of the instability strip from Vincent et al. (2020) extrapolated to higher masses. There are 78 objects in this sample, including 7 spectroscopically confirmed DA WDs (labelled in the figure) and 6 magnetic or DC WDs. Kilic et al. (2023) found that only 48% of the \(M_{\star}\approx 1.3\ M_{\sun}\) WDs within 100 pc are DA WDs, with the rest being strongly magnetic (40% of the sample) or WDs with unusual atmospheric compositions (hot DQ, DBA, DC etc). Hence, follow-up spectroscopy is required to identify the DA WDs in this sample.
Kilic et al. (2023) presented time-series photometry for the five DA WDs cooler than \(13\,000\) K in Fig. 21. They did not detect any significant variations in four of the targets, and their observations were inconclusive for J0959\(-\)1828. Nevertheless, there are a number of relativistic ultra-massive WD candidates that may fall within the ZZ Ceti instability strip, and therefore may exhibit pulsations. The masses shown here are based on the CO-core evolutionary models; for ONe cores the masses would be lower on average by \(0.04-0.05\ M_{\sun}\). Even then, there are 9 candidates with \(M_{\star}>1.35\ M_{\sun}\) and up to \(1.39\ M_{\sun}\) (assuming a CO core) near the instability strip. If confirmed, such targets would be prime examples of objects where GR effects would have a significant impact on their pulsation properties.
Unfortunately, the observational errors in temperatures and masses of these targets based on _Gaia_ photometry and parallaxes (Gentile Fusillo et al., 2021) are too large to effectively identify the best targets for follow-up. For example, 64 of the 78 objects shown here have temperature errors larger than 2000 K, roughly the width of the instability strip, and 62 have errors in mass that are larger than \(0.1\ M_{\sun}\). Hence, further progress on understanding the GR effects on WD pulsation will require spectroscopic and time-series observations of a relatively large sample of candidates to identify genuine pulsating ultra-massive WDs with \(M_{\star}\gtrsim 1.33\ M_{\sun}\). In addition, the median \(G-\)band magnitude for these 78 objects is 20.25 mag. Hence, 4-8m class telescopes would be needed to confirm pulsating DA WDs in this sample.
## 5 Summary and conclusions
In this paper, we have assessed for the first time the impact of GR on the \(g\)-mode period spectra of ultra-massive ZZ Ceti stars. To this end, we pulsationally analysed fully evolutionary ONe-core ultra-massive WD models with masses from 1.29 to \(1.369M_{\sun}\) computed in the frame of GR (Althaus et al., 2022). We employed the LPCODE and LP-PUL evolutionary and pulsation codes, respectively, adapted for relativistic calculations. In particular, for the pulsation analysis, we considered the relativistic Cowling approximation. Our study is consistent with Boston et al. (2023), considering the high central compactness of the stars studied here. The study of pulsating ultra-massive WDs in the context of GR is timely considering the increasing rate of discovery of very high-mass objects (e.g., Kilic et al., 2020, 2021; Hollands et al., 2020; Caiazzo et al., 2021; Torres et al., 2022; Kilic et al., 2023), the discovery of the ZZ Ceti WD J004917.14\(-\)252556.81 (the most massive pulsating WD currently known, Kilic et al., 2023), and the possibility of finding even more massive pulsating objects in the near future. This is particularly relevant in view of the space-based surveys like _TESS_ and _ULTRASAT_ and wide-field ground-based surveys like the LSST and BlackGEM.
We find that the Brunt-Vaisala and Lamb frequencies are larger for the relativistic case compared to the Newtonian case, as a result of relativistic models having smaller radii and higher gravities. This
Figure 21: Masses and effective temperatures for high probability WD candidates with \(M_{\star}\geq 1.3\ M_{\sun}\) in the _Gaia_ EDR3 WD sample from Gentile Fusillo et al. (2021) assuming CO cores. For ONe cores, the masses would be lower on average by \(0.04-0.05\ M_{\sun}\). The blue and red lines show the empirical boundaries of the ZZ Ceti instability strip from Vincent et al. (2020) extrapolated to higher masses. Blue and red dots show the spectroscopically confirmed DA and DC/magnetic WDs, respectively.
has the important consequence that the typical separation between consecutive \(g\)-mode periods is smaller in the relativistic case than in the Newtonian computations, with percentage differences of up to 48% in the case of the most massive model (1.369 \(M_{\sun}\)). We assessed the dipole period spectrum of \(g\) modes of our ultra-massive WD models for the Newtonian and the relativistic cases, and found that the periods in the GR case are shorter than in the Newtonian computations. In particular, for the less massive model (1.29 \(M_{\sun}\)), these relative differences are smaller than \(\sim 0.035\), but the variations reach values as large as \(\sim 0.5\) for the most massive model (1.369 \(M_{\sun}\)).
We conclude that, for ultra-massive DA WD models with masses in the range that we have considered in this paper (\(1.29\leq M_{\star}/M_{\sun}\leq 1.369\)) and effective temperatures typical of the ZZ Ceti instability strip, GR does matter in computing the adiabatic \(g\)-mode pulsations, resulting in periods that are between \(\sim 4\) and \(\sim 50\)% shorter, depending on the stellar mass, when a relativistic treatment is adopted instead of a Newtonian one. This suggests that the effects of GR on the structure and pulsations of WDs with masses \(\gtrsim 1.29M_{\sun}\) cannot be ignored in asteroseismological analyses of ultra-massive ZZ Ceti stars and likely other classes of pulsating WDs.
## Acknowledgements
We wish to thank the suggestions and comments of an anonymous referee that improved the original version of this work. Part of this work was supported by AGENCIA through the Programa de Modernizacion Tecnologica BID 1728/OC-AR, by the PIP 112-200801-00940 grant from CONICET, by the National Science Foundation under grants AST-2205736, PHY-2110335, the National Aeronautics and Space Administration under grant 80NSSC22K0479, and the US DOE under contract DE-AC05-00OR22725. ST acknowledges support from MINECO under the PID2020-117252GB-I00 grant and by the AGAUR/Generalitat de Catalunya grant SGR-386/2021. MC acknowledges grant RYC2021-032721-I, funded by MCIN/AEI/10.13039/ 501100011033 and by the European Union NextGenerationEU/PRTR. This research has made use of NASA Astrophysics Data System.
## Data Availability Statement
The data underlying this article are available upon request.
|
2305.11095 | Prompting the Hidden Talent of Web-Scale Speech Models for Zero-Shot
Task Generalization | We investigate the emergent abilities of the recently proposed web-scale
speech model Whisper, by adapting it to unseen tasks with prompt engineering.
We selected three tasks: audio-visual speech recognition (AVSR), code-switched
speech recognition (CS-ASR), and speech translation (ST) on unseen language
pairs. We design task-specific prompts, by either leveraging another
large-scale model, or simply manipulating the special tokens in the default
prompts. Experiments show that compared to the default prompts, our proposed
prompts improve performance by 10% to 45% on the three zero-shot tasks, and
even outperform SotA supervised models on some datasets. In addition, our
experiments reveal many interesting properties of Whisper, including its
robustness to prompts, bias on accents, and the multilingual understanding in
its latent space. Code is available at
https://github.com/jasonppy/PromptingWhisper | Puyuan Peng, Brian Yan, Shinji Watanabe, David Harwath | 2023-05-18T16:32:58Z | http://arxiv.org/abs/2305.11095v3 | # Prompting the Hidden Talent of Web-Scale Speech Models
###### Abstract
We investigate the emergent abilities of the recently proposed web-scale speech model **Whisper**, by adapting it to unseen tasks with prompt engineering. We selected three tasks: audio-visual speech recognition (AVSR), code-switched speech recognition (CS-ASR), and speech translation (ST) on unseen language pairs. We design task-specific prompts, by either leveraging another large-scale model, or simply manipulating the special tokens in the default prompts. Experiments show that compared to the default prompts, our proposed prompts improve performance by \(10\%\) to \(45\%\) on the three zero-shot tasks, and even outperform SotA supervised models on some datasets. In addition, our experiments reveal many interesting properties of Whisper, including its robustness to prompts, bias on accents, and the multilingual understanding in its latent space. Code is available at https://github.com/jasonppy/PromptingWhisper
Puyuan Peng\({}^{1}\), Brian Yan\({}^{2}\), Shinji Watanabe\({}^{2}\), David Harwath\({}^{1}\)\({}^{1}\)Department of Computer Science, The University of Texas at Austin, USA
\({}^{2}\)Language Technology Institute, Carnegie Mellon University, USA
[email protected]
**Index Terms**: speech recognition, audio-visual speech recognition, speech translation, zero-shot learning, task adaptation, web-scale speech models
## 1 Introduction
The study of large scale foundation models [1] has become ubiquitous in many areas of AI, such as large language models for natural language processing [2, 3, 4] and vision-and-language models for computer vision [5, 6]. One of the most intriguing aspects of these large scale pretrained models is their **emergent ability**[7], usually invoked by **prompting**[8], to generalize to unseen data or tasks [2, 5]. In addition to its scientific value, the zero-shot generalization capability of large scale models alleviates the burden of collecting specialized datasets or training special-purpose models for new tasks and domains, resulting in tremendous impact on the application of AI.
In the field of audio and speech processing, prompt engineering has only recently started to attract attention. Gao et al. [9] finetuned a wav2vec2 model [10] to produce tokens as prompt for the frozen GPT-2 [11] to do speech and audio classification tasks. Concurrently, Chang et al. [12] studied gradient-based prompt tuning on a pre-trained speech unit language model [13] for speech classification and generation tasks. Kim et al. [14] combined learnable prompts and adapters for efficient finetuning of audio models. Xue et al. [15] is the most similar work to ours. In that paper, the authors trained a Transformer-Transducer model using in-house data on a comparable scale to Whisper, and they ran test time gradient-based adaptation to fine-tune the model for speech translation on unseen language pairs. Our work is different from theirs because our adaptation methods are prompt-based and gradient-free, and we study three different zero-shot tasks instead of just one.
_Our work reveals and analyzes the hidden talent and weaknesses of **Whisper**[16]._ It is the first of its kind that studies **gradient-free zero-shot task generalization** abilities of web-scale speech models. We show that **Whisper can be easily adapted to unseen tasks by simply modifying its prompt**. The effectiveness of our proposed prompts are validated on three tasks - **audio-visual speech recognition (AVSR), code-switched speech recognition (CS-ASR)**, and **speech translation (ST) on unseen language pairs**.
## 2 The Whisper model
Here we briefly describe the Whisper model family [16] with an emphasis on the structure of its default prompt. Whisper is a family of Transformer-based encoder-decoder models [17] with parameters ranging from 39M (Tiny and Tiny.en) to 1.55B (Large and LargeV2). Whisper models can be categorized into two classes based on languages and tasks: English-only models and multilingual models. The multilingual models are trained on 630k hours of web-scraped speech data for multilingual automatic speech recognition (ASR), X\(\rightarrow\)En speech translation (ST), language identification (LID), and timestamp prediction. The English models are trained on the English subset of the data (438k hours) for ASR and timestamp prediction. The encoder of Whisper models takes in a log Mel spectrogram and produces features for the decoder. The decoder consumes encoder features, positional embeddings, and a prompt token sequence. It then produces the transcription of the input speech, or alternatively its translation, depending on the prompt. The prompt used in the original Whisper paper is the following: <|sop|>previous text<|sot|><|language|><|task|><|notimestamps|>1. Those encapsulated in <|> are special tokens. previous text represents the transcript of the previous utterance, and is optional. For multilingual models, <|language|> should be replaced by one of the 99 language tokens that Whisper encountered during training. When the input language is unknown at inference, Whisper will first run LID, which results in a probability distribution over the 99 languages, and the language with the highest probability is chosen to fill the <|language|> token. <|task|> will be replaced by either <|asr|> or <|st|> depending on whether the model should perform ASR or ST. We keep <|notimestamps|> in all prompts as our tasks do not need Whisper to produce timestamps2.
Footnote 1: We use <|sop|> to abbreviate <|startofprev|>, and <|sot|> for <|startoftranscript|>. Also <|asr|> for <|transcribe|>, and <|st|> for <|translate|> later.
Footnote 2: and therefore we omit this token in the rest of the paper
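For concreteness, the default prompt described above can be reproduced and manipulated through public Whisper interfaces. The snippet below is a minimal sketch of ours using the Hugging Face transformers API, not the paper's released code; the model size, the Spanish language token, and the silent dummy audio are placeholder assumptions, and newer library versions also accept language/task arguments passed directly to generate.

```python
# Build and override the default decoder prompt <|sot|><|language|><|task|><|notimestamps|>.
import numpy as np
from transformers import WhisperProcessor, WhisperForConditionalGeneration

processor = WhisperProcessor.from_pretrained("openai/whisper-small")
model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-small")

audio = np.zeros(16000, dtype=np.float32)           # placeholder: 1 s of silence at 16 kHz
inputs = processor(audio, sampling_rate=16000, return_tensors="pt")

# Default-style prompt: transcribe, assuming LID (or the user) picked Spanish.
default_ids = processor.get_decoder_prompt_ids(language="spanish", task="transcribe")
# Prompt engineering: keep the audio fixed but swap the special tokens, e.g. the task token.
modified_ids = processor.get_decoder_prompt_ids(language="spanish", task="translate")

pred = model.generate(inputs.input_features, forced_decoder_ids=modified_ids)
print(processor.batch_decode(pred, skip_special_tokens=True)[0])
```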
In all three zero-shot tasks that we consider in this paper, **we only modify the prompt to the Whisper decoder without modifying the model weights or architecture.** See Table 1 for a summary of our proposed prompts. |
2301.03166 | Improving Energy Saving of One-sided Matrix Decompositions on CPU-GPU
Heterogeneous Systems | One-sided dense matrix decompositions (e.g., Cholesky, LU, and QR) are the
key components in scientific computing in many different fields. Although their
design has been highly optimized for modern processors, they still consume a
considerable amount of energy. As CPU-GPU heterogeneous systems are commonly
used for matrix decompositions, in this work, we aim to further improve the
energy saving of one-sided matrix decompositions on CPU-GPU heterogeneous
systems. We first build an Algorithm-Based Fault Tolerance protected
overclocking technique (ABFT-OC) to enable us to exploit reliable overclocking
for key matrix decomposition operations. Then, we design an energy-saving
matrix decomposition framework, Bi-directional Slack Reclamation(BSR), that can
intelligently combine the capability provided by ABFT-OC and DVFS to maximize
energy saving and maintain performance and reliability. Experiments show that
BSR is able to save up to 11.7% more energy compared with the current best
energy saving optimization approach with no performance degradation and up to
14.1% Energy * Delay^2 reduction. Also, BSR enables the Pareto efficient
performance-energy trade-off, which is able to provide up to 1.43x performance
improvement without costing extra energy. | Jieyang Chen, Xin Liang, Kai Zhao, Hadi Zamani Sabzi, Laxmi Bhuyan, Zizhong Chen | 2023-01-09T04:42:37Z | http://arxiv.org/abs/2301.03166v2 | # Improving Energy Saving of One-sided Matrix Decompositions on CPU-GPU Heterogeneous Systems
###### Abstract
One-sided dense matrix decompositions (e.g., Cholesky, LU, and QR) are the key components in scientific computing in many different fields. Although their design has been highly optimized for modern processors, they still consume a considerable amount of energy. As CPU-GPU heterogeneous systems are commonly used for matrix decompositions, in this work, we aim to further improve the energy saving of one-sided matrix decompositions on CPU-GPU heterogeneous systems. We first build an Algorithm-Based Fault Tolerance protected overclocking technique (ABFT-OC) to enable us to exploit reliable overclocking for key matrix decomposition operations. Then, we design an energy-saving matrix decomposition framework, Bi-directional Slack Reclamation (BSR), that can intelligently combine the capability provided by ABFT-OC and DVFS to maximize energy saving and maintain performance and reliability. Experiments show that BSR is able to save up to 11.7% more energy compared with the current best energy saving optimization approach with no performance degradation and up to 14.1% \(Energy\times Delay^{2}\) reduction. Also, BSR enables the Pareto efficient performance-energy trade-off, which is able to provide up to 1.43\(\times\) performance improvement without costing extra energy.
Although a lot of research effort has been made to improve the energy saving of matrix decomposition on CPU-GPU heterogeneous systems, it is still desirable to further improve their energy saving since matrix decompositions still consume a considerable amount of energy. Improving the energy saving of matrix decomposition can lead to more energy-efficient scientific computing. However, the major challenge, as pointed out in [27, 28, 38, 51, 52], is that aggressive energy-saving optimizations can weaken the reliability of the system and cause performance degradation, which is unacceptable for time-sensitive and mission-critical scientific applications.
In this work, we aim to further improve the energy saving of one-sided matrix decompositions on CPU-GPU heterogeneous systems while maintaining performance and reliability. We first build an Algorithm-Based Fault Tolerance protected overclocking technique (ABFT-OC) to enable us to exploit reliable overclocking for key matrix decomposition operations. Then, we design an energy-saving matrix decomposition framework, Bi-directional Slack Reclamation (BSR), that can intelligently combine the capability provided by ABFT-OC and Dynamic Voltage and Frequency Scaling (DVFS) to maximize energy saving and maintain performance and reliability. Also, BSR enables the Pareto efficient performance-energy trade-off. Specifically, our contributions are listed as follows:
* We propose the first adaptive algorithm-based fault tolerance protected overclocking technique (ABFT-OC) for matrix decompositions on CPU-GPU heterogeneous systems. Overclocking with an optimized voltage guard-band can enable us to exploit higher clock frequencies with higher energy efficiency. However, aggressive overclocking can decrease system reliability, so we propose to couple ABFT with overclocking to enable trustable computation. To reduce fault tolerance overhead, we further propose a lightweight adaptive-ABFT technique that automatically adjusts its fault tolerance strength according to the error rate.
* Bi-directional Slack Reclamation (BSR), which aims to exploit _slack_, processor idle time, to save energy and enable flexible Pareto efficient performance-energy trade-off. Different from existing works, BSR reclaims slack in both directions using both ABFT-OC and DVFS to save more energy and enable performance improvement.
* We implement our BSR on three key one-sided matrix decompositions: Cholesky, LU, and QR. We evaluate our implementation on a modern CPU-GPU heterogeneous system with Nvidia GPU. Experiments show that BSR is able to save up to 11.7% more energy compared with the current best energy saving optimization approach with no performance degradation and up to 14.1% \(Energy\times Delay^{2}\) reduction. Also, BSR enables the Pareto efficient performance-energy trade-off, which is able to provide up to 1.43\(\times\) performance improvement without costing extra energy.
## 2 Related Works and Problem Statement
In this section, we first introduce the design of state-of-the-art matrix decomposition on CPU-GPU heterogeneous systems and we focus on discussing their key computing characteristics. Then, we review how existing works leverage such computing characteristics to optimize for energy efficiency. Finally, we formulate our research problem and challenges.
### State-of-the-art matrix decompositions
The state-of-the-art matrix decompositions for CPU-GPU heterogeneous systems use the blocked version matrix decomposition algorithms. Blocks, logically divided sub-matrices, form _Panel_ and _Trailing Matrix_. The decomposition process begins from the up left corner of the matrix and moves towards the down right corner iteratively. An illustration of one iteration of the LU decomposition is shown in **Figure 1(a)**. Each iteration includes three major operations: 1 _Panel decomposition_ (PD): \(L\cdot 1\times U11\gets A\cdot 1\); 2 _Panel update_
Figure 1: One iteration of LU decomposition
Figure 2: Slacks occur when decomposing a \(30720\times 30720\) matrix on our heterogeneous system. Block size is optimized for performance. Positive values represent slacks on the CPU side and negative values represent slacks on the GPU side.
(PU): \(U12\leftarrow(L11)^{-1}\times A12\); and 3 _Trailing matrix update_ (TMU): \(A^{\prime}22\gets A22-L21\times U12\). Cholesky, LU, and QR decomposition all share three similar operations. On CPU-GPU heterogeneous systems, the three operations are assigned to different processors based on their characteristics. PD is assigned to the CPUs since it is highly sequential. PU and TMU are assigned to the GPUs as they are highly parallelizable. As illustrated in **Figure 1(b)**, to overlap the computation on CPUs and GPUs, a look-ahead optimization [26] is used that allows the partial PU and TMU to be done first (i.e., PU' and TMU'), so that the PD of the next iteration can be done with the rest of PU and TMU concurrently. Depending on the computational power of the CPU/GPU and the amount of workload assigned during decomposition, those concurrent tasks may finish at different times, which leads to idle computing cycles on the CPU or GPU. This idle time is called _slack_. **Figure 2** shows how slack length can change during Cholesky, LU, and QR decompositions on our test platform.
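As a concrete reference for the PD/PU/TMU structure, the following is a minimal NumPy sketch of right-looking blocked LU without pivoting (our illustration only; production libraries use pivoted LAPACK/cuBLAS kernels plus the look-ahead scheduling described above, with PD on the CPU and PU/TMU on the GPU).

```python
import numpy as np

def blocked_lu(A, b):
    """In-place blocked LU (no pivoting): A is overwritten by unit-lower L and upper U."""
    n = A.shape[0]
    for s in range(0, n, b):
        e = min(s + b, n)
        # PD: unblocked factorization of the panel A[s:, s:e] (sequential -> CPU).
        for j in range(s, e):
            A[j+1:, j] /= A[j, j]
            A[j+1:, j+1:e] -= np.outer(A[j+1:, j], A[j, j+1:e])
        if e < n:
            # PU: U12 <- L11^{-1} A12 (triangular solve -> GPU).
            L11 = np.tril(A[s:e, s:e], -1) + np.eye(e - s)
            A[s:e, e:] = np.linalg.solve(L11, A[s:e, e:])
            # TMU: A22 <- A22 - L21 @ U12 (large GEMM -> GPU).
            A[e:, e:] -= A[e:, s:e] @ A[s:e, e:]

# quick check on a diagonally dominant matrix (so no pivoting is needed)
rng = np.random.default_rng(0)
n, b = 12, 4
M = rng.standard_normal((n, n)) + n * np.eye(n)
A = M.copy()
blocked_lu(A, b)
L, U = np.tril(A, -1) + np.eye(n), np.triu(A)
assert np.allclose(L @ U, M)
```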
### Existing slack-based energy saving
Matrix decompositions have been designed to maximize their usage on highly optimized BLAS-3 GPU kernels, so their energy efficiency is inherently high, which leaves limited room for further optimization. As for now, the most effective class of energy-saving optimizations for matrix decompositions on CPU-GPU heterogeneous systems is DVFS-based approaches, which aim to exploit different energy-saving techniques when there are slacks.
There are two main strategies for optimizing energy costs: _Race-to-Halt (R2H)_[22, 37, 39] and _Slack Reclamation (SR)_[7]. As shown in **Figure 3**, the main idea of R2H is to reduce the clock frequency to the minimum as soon as the tasks on the non-critical path finish. The processor maintains its minimum clock frequency during the slack to save energy. This strategy is usually implemented by the hardware or the operating system leveraging their workload monitoring capabilities. SR saves energy by slowing down the tasks on the non-critical path. The reason this strategy can save energy is the relation between the dynamic power of the processor and its clock frequency, \(P_{dynamic}\propto f^{2.4}\)[17]. Theoretically, SR is able to save more energy compared with R2H [7]. Since the processor's clock frequency needs to be adjusted before the execution of each task and the length of slack can change as shown in **Figure 2**, some form of computation pattern prediction is necessary. In the state-of-the-art SR [7], the authors propose to predict computation patterns leveraging algorithmic knowledge in matrix decompositions.
### Motivation of further improving energy saving
Although a lot of research effort has been made to improve the energy saving of matrix decomposition on CPU-GPU heterogeneous systems, it is still desirable to improve it further, since matrix decompositions are heavily used in many scientific applications. Thus, improving the energy saving of matrix decomposition can lead to more energy-efficient scientific computing.
### Challenges of further improving energy saving
#### 2.4.1 Performance degradation
DVFS is designed to enable a performance-energy trade-off while maintaining processor reliability. So, existing DVFS-based energy-saving techniques can only be applied to tasks on the non-critical path to avoid negatively impacting the overall performance. This has already been extensively exploited by existing works. To save even more energy, the only other choice is to apply DVFS-based energy-saving techniques to tasks on the critical path; however, this will inevitably degrade the overall decomposition performance, since modern CPU and GPU processors tend to have better energy efficiency when running at lower clock frequencies.
#### 2.4.2 Reliability degradation
Other approaches such as processor undervolting can also be used to reduce the energy cost of computation. Since it works by decreasing the core supply voltage while maintaining the same clock frequencies, it can save energy without performance degradation. However, they can decrease system reliability [2, 51, 52]. Such reliability degradation can be manifested as hard errors (e.g., process or system crash) or SDCs (e.g., incorrect calculation, bit-flips in memory cells), which can seriously decrease the reliability of matrix decomposition. Although ABFT has been used with undervolting in [2, 51] to improve the energy efficiency of matrix-matrix multiplications and ensure computing correctness, applying existing ABFT techniques can still bring considerable performance overhead. This overhead can be especially high for matrix decompositions since the iterative computing fashion is prone to error propagation, which needs the strongest variant of ABFT, _full checksum ABFT_[4], to provide sufficient protection.
### Research questions
In this work, we try to answer the following research questions:
Figure 3: Existing slack-based energy saving
**RQ:1** How to further improve energy saving of matrix decompositions on CPU-GPU heterogeneous system beyond the state-of-the-art works?
**RQ:2** How to maximize energy saving for matrix decomposition while maintaining both performance and reliability at the same time?
**RQ:3** How to enable performance-energy trade-off in matrix decomposition?
## 3 Design of Energy-Saving Matrix Decomposition
In this work, we propose to build a matrix decomposition framework that maximizes energy saving while maintaining both performance and reliability at the same time. **Figure**4 shows the overview of our framework. We first focus on enabling reliable computation when overclocking by coupling ABFT with overclocking - ABFT-OC. To reduce fault tolerance overhead, we further propose a lightweight adaptive-ABFT technique that automatically adjusts its fault tolerance strength according to the error rate. Next, based on ABFT-OC, we propose a novel slack-based energy saving optimization framework - BSR, which aims to exploit slack, to save energy and enable flexible Pareto efficient performance-energy trade-off. Different from existing works, BSR reclaims slack in both directions using both ABFT-OC and DVFS to save more energy and enable performance improvement.
### Adaptive Algorithm-Based Fault Tolerance Protected Overclocking (ABFT-OC)
To design a technique that maximizes energy saving for matrix decompositions, we seek hardware energy optimization techniques beyond DVFS. DVFS has been extensively used for energy saving by both hardware and applications. It reduces energy consumption by lowering the core voltage (\(V_{dd}\)) together with the clock frequency. However, lowering the frequency inevitably causes performance degradation. Processor voltage guardband optimization largely mitigates this issue by allowing the core voltage to be lowered without decreasing the clock frequency, or allowing overclocking without violating the hardware power limit.
#### 3.1.1 Voltage guardband optimization for overclocking
In this work, we define _overclocking_ as the processor state in which it sustains a higher-than-default clock frequency. **Figure** 5 (a) shows the achievable overclocking frequency range and the energy efficiency of our test GPU at different clock frequencies after we apply voltage guardband optimization. Please note that unlike previous works based on the Windows GPU driver [27, 29, 51], where the core voltage can be directly adjusted and monitored, the Linux GPU driver does not allow us to directly control and monitor the GPU core voltage. Nevertheless, we find that optimizing the voltage guardband of the GPU on Linux is still achievable through the clock offset command of the NVML API on the
Figure 4: Overview of our energy-saving matrix decomposition framework
Figure 5: Profiling results of our testing CPU and GPU
Linux-based GPU driver. We omit the details due to the page limit. CPU undervolting can be directly achieved on the Linux system. We set the offset of the CPU core voltage using a third-party tool intel-undervolt. **Figure**5 (c) shows the CPU energy efficiency before and after we set the optimized voltage guardband. Please note unlike our testing GPU, our testing CPU can achieve overclocking with the default guardband, but an optimized guardband can help us achieve higher energy efficiency.
Finding the optimized guardband is done by gradually lowering specific power settings of CPU/GPU to the point where energy efficiency is maximized without process or OS crash. The whole process can be done in less than 30 minutes and it only needs to be done once during software installation. As optimized guardband can be workload-dependent, we specifically use the workload in matrix decomposition i.e., TMU on GPU and PD on CPU to find optimized guardband. Also, as shown in **Figure**5 (b), we observe that setting to extreme high clock frequencies for the GPU can weaken the reliability of computation e.g., SDCs. So, we propose to incorporate fault tolerance with overclocking by designing ABFT-OC.
#### 3.1.2 Design of ABFT-OC
Reliable computation is the foundation of our optimized matrix decomposition. As overclocking achieved through the use of an optimized guardband can lead to SDCs, we propose to use ABFT [4, 5, 6, 10, 11, 12, 14, 19, 20, 32, 33, 40, 41, 47, 48, 49, 50] to handle SDCs during matrix decompositions. Since the processor power state is under our control and the corresponding SDC error rate is known, the SDC error rate is predictable during matrix decompositions. So we propose the first ABFT that can adjust its fault tolerance strength and overhead at runtime based on the predicted error rate to minimize fault tolerance overhead and ensure correctness. SDC refers to the kind of error that only causes incorrect calculation results without a process or system crash. When using our optimized guardband, SDCs are caused by insufficient core voltage supply at high clock frequencies. The rate of SDCs can increase as we increase the clock frequency while applying an optimized guardband at the same time.
Depending on where the hardware fault occurs, it may be manifested as different kinds of SDC. For example, calculation error is usually caused by faults in the logic part of ALU or FPU. Memory storage error is usually caused by faults (e.g., bit flips) in the storage cells of DRAM, cache, or registers. For matrix operations, matrix elements can be repeatedly accessed to obtain final results. If an element whose value is corrupted gets repeatedly referenced, it may cause error propagation. Depending on the cause of the error and the computation pattern (i.e., how data is used/reused) of a matrix operation, the error pattern can be different. The degrees of error propagation [4] can be classified as: 0D, 1D, and 2D. **0D**: a single standalone error with no error propagation; **1D**: an error propagates to entire/part of one row/column; **2D**: an error propagates beyond one row/column. So, we distinguish different degrees of error propagation in **Figure**5.
```
1 Function ABFT-OC():
  In: Desired ABFT fault coverage \(FC_{desired}\)
  In: Desired GPU clock freq. \(F_{desired}^{GPU}\)
  In: Default GPU clock freq. \(F_{BASE}^{GPU}\)
  In: Predicted operation execution time \(T^{GPU}\)
2   \(SingleABFTCheck\gets FALSE\)
3   \(FullABFTCheck\gets FALSE\)
4   while (\(\lambda_{F_{desired}^{GPU},0D}>0\) || \(\lambda_{F_{desired}^{GPU},1D}>0\) || \(\lambda_{F_{desired}^{GPU},2D}>0\)) && !SingleABFTCheck && !FullABFTCheck do
5     \(T_{projected}^{GPU}=T^{GPU}\times\frac{F_{BASE}^{GPU}}{F_{desired}^{GPU}}\)
6     if \(FC_{single}(F_{desired}^{GPU},T_{projected}^{GPU})\geqslant FC_{desired}\) then
7       \(SingleABFTCheck=TRUE\)
8     else if \(FC_{full}(F_{desired}^{GPU},T_{projected}^{GPU})\geqslant FC_{desired}\) then
9       \(FullABFTCheck=TRUE\)
10    else
11      \(F_{desired}^{GPU}=F_{desired}^{GPU}-100MHz\)
12    end if
13  end while
14  return \(F_{desired}^{GPU}\), \(SingleABFTCheck\), \(FullABFTCheck\)
```
**Algorithm 1**Adaptive-ABFT strategy
ABFT is based on the idea that if we encode a certain amount of matrix information in checksums before a matrix operation and apply the same matrix operation to checksums,
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|} \hline Iter & ABFT & 1800MHz & 1900MHz & 2000MHz & 2100MHz & 2200MHz \\ \hline \(5^{th}\) & Single & Fault-free & Full Coverage & 99.866\% & 97.51\% & 96.45\% \\ & Full & Fault-free & Full Coverage & Full Coverage & Full Coverage & Full Coverage \\ \hline \(10^{th}\) & Single & Fault-free & Full Coverage & 99.94\% & 98.92\% & 98.46\% \\ & Full & Fault-free & Full Coverage & Full Coverage & Full Coverage & Full Coverage \\ \hline \(15^{th}\) & Single & Fault-free & Full Coverage & 99.98\% & 99.76\% & 99.65\% \\ & Full & Fault-free & Full Coverage & Full Coverage & Full Coverage & Full Coverage \\ \hline \end{tabular}
\end{table}
Table 1: Theoretical estimation on ABFT fault coverage (FC) on the TMU operation of the \(5^{th}\), \(10^{th}\), and \(15^{th}\) iteration of LU decomposition if we apply different clock frequencies.
Figure 6: ABFT checksum for detecting and correcting SDCs in matrix operations
the checksum relation would still hold for the resulting matrix. By verifying the checksum relations after the operation, we can detect and correct errors in the result matrix. Depending on how much information is encoded in the checksums, the fault tolerance strength is different. As shown in **Figure 6**, there are two commonly used schemes for checksum encoding: 1 Single-side checksum encodes matrices along either rows or columns. Since it only encodes the matrix in one dimension, it brings relatively lower overhead. However, it can only efficiently tolerate the 0D error pattern. 2 Full checksum encodes matrices along both rows and columns at the same time. Since it encodes matrices in both dimensions, it brings stronger protection, i.e., it tolerates both 0D and 1D error patterns. However, it also brings higher fault tolerance overhead.
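The following NumPy sketch (ours, for illustration only, not the paper's GPU implementation) shows the full-checksum idea on a matrix-matrix product: encode row and column checksums, run the operation, then use the checksum residuals to locate and correct a single 0D error in the result.

```python
import numpy as np

def abft_matmul(A, B):
    """Return the full-checksum encoded product of A (n x n) and B (n x n)."""
    n = A.shape[0]
    e = np.ones((1, n))
    Ac = np.vstack([A, e @ A])        # column-checksum encoded A (one extra row)
    Br = np.hstack([B, B @ e.T])      # row-checksum encoded B (one extra column)
    return Ac @ Br                    # checksums propagate through the multiplication

def detect_and_correct(Cf, tol=1e-8):
    """Detect a single corrupted element of C = Cf[:-1, :-1] and repair it."""
    C = Cf[:-1, :-1]
    row_res = Cf[:-1, -1] - C.sum(axis=1)     # row checksum residuals
    col_res = Cf[-1, :-1] - C.sum(axis=0)     # column checksum residuals
    bad_rows = np.where(np.abs(row_res) > tol)[0]
    bad_cols = np.where(np.abs(col_res) > tol)[0]
    if len(bad_rows) == 1 and len(bad_cols) == 1:   # a single 0D error: correctable
        i, j = bad_rows[0], bad_cols[0]
        C[i, j] += row_res[i]
    return C

rng = np.random.default_rng(0)
A, B = rng.random((4, 4)), rng.random((4, 4))
Cf = abft_matmul(A, B)
Cf[1, 2] += 5.0                                     # inject one SDC into the result
assert np.allclose(detect_and_correct(Cf), A @ B)   # the fault is located and repaired
```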
Given that the fault tolerance strength is limited, we must determine suitable ABFT protection according to the error rate and limit the clock frequency range to ensure all errors can be detected and corrected with a high probability. Otherwise, undetected or uncorrected errors would cause serious error propagation later, which requires recovery with high overhead. In this work, we find that it is useful to estimate the probability that a certain kind of ABFT can detect and correct all errors given different error rates at different overclocking frequencies. In order to do that, we first define an error rate function \(R\) given the clock frequency, derived from our profiling results in **Figure** 5: \(\lambda_{f,ErrType}=R(f,ErrType)\), where \(\lambda\) is the error rate of a certain error type (\(ErrType\)). The error type can be 0D, 1D, or 2D. \(f\) is the processor clock frequency. Assuming the rate is constant for a given clock frequency, we treat the number of errors that occur during a period of time as Poisson distributed. So, the probability of having \(k\) errors of a certain type during a period of time \(T\) can be estimated using the Poisson distribution function: \(p=\frac{e^{-\lambda_{f,ErrType}T}(\lambda_{f,ErrType}T)^{k}}{k!}\). Both single-side and full checksum encode the matrix for each matrix block individually. They cannot tolerate more than one fault strike to a matrix block during one error detection interval (i.e., one iteration of matrix decomposition). Assuming the matrix is of size \(n\) with matrix block size \(b\), single-side checksum ABFT can tolerate up to \(S=\frac{n}{b}\times\frac{n}{b}\) 0D errors, as long as two 0D errors do not strike the same matrix block within one iteration of matrix decomposition. Full checksum ABFT can tolerate up to \(S\) 0D and 1D errors combined, as long as two 0D/1D errors do not strike the same matrix block within one iteration of matrix decomposition. Assuming errors occur randomly and uniformly in time and space, we provide the theoretical estimation on the probability that ABFT can detect and correct all errors in one detection interval (i.e., the _Fault Coverage (FC)_).
\[FC_{single}(f,T)=\left(\sum_{k=0}^{S}\frac{e^{-\lambda_{f,0D}T}(\lambda_{f,0D}T)^{k}}{k!}\prod_{i=0}^{k}\frac{S-i}{S}\right)e^{-\lambda_{f,1D}T}e^{-\lambda_{f,2D}T}\] \[FC_{full}(f,T)=\left(\sum_{k=0}^{S}\sum_{j=0}^{S-k}\frac{e^{-\lambda_{f,0D}T}(\lambda_{f,0D}T)^{k}}{k!}\frac{e^{-\lambda_{f,1D}T}(\lambda_{f,1D}T)^{j}}{j!}\prod_{i=0}^{k+j}\frac{S-i}{S}\right)e^{-\lambda_{f,2D}T}\]
**Table 1** shows example estimation results based on different GPU overclocking frequencies and the execution time of the TMU operation in three selected iterations of the LU decomposition. We define \(FC>99.9999\%\) as _Full Coverage_. Having the capability of fault coverage estimation, we propose an adaptive-ABFT scheme. Unlike existing ABFT works, which enable ABFT during the entire matrix decomposition process, our adaptive-ABFT only enables ABFT error detection and correction when the error rate is above 0. **Algorithm 1** shows the adaptive-ABFT strategy. We first check the error rate function in Line 4. If the rate of any kind of error is above zero, we check if applying ABFT can provide enough fault coverage (Line 5 - 9). We prioritize single-side ABFT over full ABFT to lower the fault tolerance overhead. If none of the ABFT schemes can provide enough fault coverage, we progressively lower the GPU clock frequency (Line 11) until enough fault coverage is provided. Finally, we return the adjusted clock frequency together with flags indicating if we need to do a single or full ABFT check. Please note that ABFT-OC would also work for the CPU. We apply it exclusively to the GPU in our algorithm since SDCs only occur on the GPU on our test system.
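A small Python sketch of the fault-coverage estimates defined above is shown below (our illustration, not the authors' code). The error rates and the operation time passed in are hypothetical placeholders, and the sums are truncated at a modest kmax because high-order Poisson terms are negligible.

```python
import math

def poisson_pmf(k, mu):
    """P[K = k] for K ~ Poisson(mu), computed iteratively to avoid huge factorials."""
    p = math.exp(-mu)
    for i in range(1, k + 1):
        p *= mu / i
    return p

def distinct_block_prob(k, S):
    """Probability that k errors all strike distinct matrix blocks (S blocks in total)."""
    p = 1.0
    for i in range(k + 1):
        p *= (S - i) / S
    return p

def fc_single(lam0, lam1, lam2, T, S, kmax=50):
    """FC_single(f, T): only 0D errors are tolerable; any 1D or 2D error breaks coverage."""
    tol = sum(poisson_pmf(k, lam0 * T) * distinct_block_prob(k, S)
              for k in range(min(S, kmax) + 1))
    return tol * math.exp(-lam1 * T) * math.exp(-lam2 * T)

def fc_full(lam0, lam1, lam2, T, S, kmax=50):
    """FC_full(f, T): 0D and 1D errors are tolerable; 2D errors are not."""
    tol = 0.0
    for k in range(min(S, kmax) + 1):
        for j in range(min(S - k, kmax) + 1):
            tol += (poisson_pmf(k, lam0 * T) * poisson_pmf(j, lam1 * T)
                    * distinct_block_prob(k + j, S))
    return tol * math.exp(-lam2 * T)

# hypothetical error rates (errors/s) at an overclocked frequency, with S = (n/b)^2 blocks
print(fc_single(lam0=0.02, lam1=0.001, lam2=0.0, T=1.5, S=60 * 60))
print(fc_full(lam0=0.02, lam1=0.001, lam2=0.0, T=1.5, S=60 * 60))
```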
### Bi-directional slack reclamation (BSR)
The current best energy-saving approach, single directional slack reclamation (SR) [7], saves energy by slowing down tasks on the non-critical paths via DVFS. This work proposes a novel Bi-directional slack reclamation (BSR) energy-saving technique that reclaims slacks in two directions at the same time using both ABFT-OC and DVFS. Specifically, BSR reclaims slacks by simultaneously slowing down tasks on the non-critical path using DVFS and speeding up tasks on the critical path using ABFT-OC. An illustration of BSR is shown in **Figure 7**. Compared with SR, BSR brings three major advantages: 1 potential higher energy saving through both DVFS and ABFT-OC at the same time; 2 performance improvement in addition to energy saving optimization; 3 enabling performance-energy consumption trade-off.
#### 3.2.1 Enhanced Algorithmic-based Slack Prediction
Slack prediction is critical for making correct power status adjustments so that energy saving can be maximized. As BSR enables more opportunities for slack reclamation, it is more critical for it to make accurate slack predictions. The state-of-the-art algorithmic slack prediction was first proposed by [7].
Figure 7: Bi-directional slack reclamation (BSR)
It mainly works by profiling the tasks in the \(1^{st}\) iteration of decomposition and using the profiled time together with ratios of computational time complexity between \(k^{th}\) iteration and the \(1^{st}\) to predict the execution time of tasks in the \(k^{th}\) iteration of decomposition. By leveraging algorithmic knowledge and profiling results, algorithmic slack prediction can achieve much higher prediction accuracy compared with statistical-learning-based approaches and hardware-based approaches.
However, we find that the accuracy of current algorithmic slack prediction highly relies on the profiling accuracy of the \(1^{st}\) iteration and the assumption that computational efficiency stays constant across different iterations on a given processor. As the measurement of the \(1^{st}\) iteration can be inaccurate (e.g., when it is short) and the computational efficiency of tasks can also change considerably throughout the decomposition process, all these inaccuracies can accumulate and cause large prediction errors in the latter part of the decomposition process, which lead to wrong slack reclamation decisions.
In BSR, we propose an enhanced algorithmic-based slack prediction that greatly improves slack prediction accuracy. The enhanced algorithmic-based slack prediction relies on the profiled execution time of the \(p\) last neighbor iterations to predict the execution time of the current iteration, which reduces the negative impacts brought by inaccurate profiling and changes in computational efficiency, since tasks in neighbor iterations tend to have similar input sizes and thus similar computational efficiencies. Since a closer neighbor gives a more accurate estimation of computational efficiency, we apply different weights to different profiling results in our enhanced algorithmic-based slack prediction. Specifically, the execution time of a task in the \(k^{th}\) iteration (\(T_{k}^{OP}\)) is predicted as:
\[T_{k}^{\prime OP}=w_{1}r_{k-1,k}^{OP}T_{k-1}^{OP}+w_{2}r_{k-2,k}^{OP}T_{k-2}^{OP}+...+w_{p}r_{k-p,k}^{OP}T_{k-p}^{OP}\]
where \(r_{j,k}^{OP}\) is the ratio of theoretical time complexity of \(OP\) between the \(j^{th}\) and \(k^{th}\) iterations, which can be calculated based on the algorithm time complexity and the relative change of the input sizes of \(OP\). **Table 2** shows the ratios of key components of matrix decompositions. We omit the calculation process due to the page limit. \(T_{k-i}^{OP}\) is the actual profiled execution time of \(OP\) of the \(i^{th}\) last neighbor. \(w_{i}\) is the weight applied to the \(i^{th}\) last neighbor. Through empirical study, we find that \(p=4\) and \(w_{1}=\frac{1}{2},w_{2}=\frac{1}{4},w_{3}=\frac{1}{8},w_{4}=\frac{1}{8}\) provide enough prediction accuracy for energy saving. When ABFT is applied, the slack of the \(k^{th}\) iteration is predicted as:
\[\begin{split} slack_{k}=&\;T_{k}^{\prime TMU}+T_{k}^{\prime TMU\ checksum\ update}+T_{k}^{\prime TMU\ checksum\ verify}\\ &+T_{k}^{\prime PU}+T_{k}^{\prime PU\ checksum\ update}+T_{k}^{\prime PU\ checksum\ verify}\\ &-T_{k}^{\prime PD}-T_{k}^{\prime PD\ checksum\ update}-T_{k}^{\prime PD\ checksum\ verify}\\ &-T_{k}^{\prime Data\ Transfer}-T_{k}^{\prime Transfer\ checksum}\end{split}\]
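A compact sketch of this weighted prediction is given below (our illustration). The weights follow the empirical values quoted above, while the complexity ratios and measured neighbor times are assumed inputs supplied by the caller (e.g., computed from Table 2).

```python
# Weighted neighbor-based prediction: T'_k = sum_i w_i * r_{k-i,k} * T_{k-i}.
WEIGHTS = (0.5, 0.25, 0.125, 0.125)     # w_1 .. w_4 from the empirical study (p = 4)

def predict_time(measured, ratios, weights=WEIGHTS):
    """measured[i]: profiled time of the (i+1)-th most recent iteration (T_{k-1-i});
       ratios[i]  : theoretical complexity ratio r_{k-1-i,k} scaling it to iteration k."""
    p = min(len(weights), len(measured), len(ratios))
    return sum(weights[i] * ratios[i] * measured[i] for i in range(p))

def predict_slack(t_gpu, t_cpu, t_transfer):
    """Positive slack: the CPU idles (GPU on the critical path); negative: the GPU idles."""
    return t_gpu - t_cpu - t_transfer
```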
#### 3.2.2 Bi-directional slack reclamation strategies
Compared with SR, BSR offers more flexibility by reclaiming slacks from both directions, so the fractions of slacks that are reclaimed by the two tasks are adjustable, which in turn controls the performance-energy efficiency trade-off. So, we define _reclamation ratio_ (\(r\)) to be the fraction of the slack we try to reclaim by speeding up the task on the critical path and \(1-r\) to be the fraction we try to reclaim by slowing down the task on the non-critical path. **Algorithm 2** shows our BSR algorithm that makes decisions at the beginning of each matrix decomposition iteration. The execution time of tasks and slack are predicted in Line 3 - 4 using our enhanced algorithmic-based slack prediction. Given reclamation ratio \(r\), we calculate the desired execution time of tasks on CPU and GPU in Line 5 - 11. We also consider the overhead of DVFS operations in our calculation to minimize the impact on performance. Line 12 - 15 calculate the desired CPU/GPU clock frequencies and limit them within the available frequency range. Line 16 - 17 calculates the projected execution time if we apply the desired frequencies. Note that the projected time may be different from the desired time since desired frequencies could be out of the available range. Finally, we make decisions on whether or not we adjust CPU/GPU clock frequencies in Line 18 - 22. If the projected time suggests that it can make a negative impact on the performance, it will skip frequency adjustment for this iteration i.e., setting AdjustCPU/GPU to FALSE. Note that this does not mean we do not reclaim slack of this iteration. Since we still keep the adjusted CPU/GPU frequencies from the last iteration, the partial of slack can still be reclaimed. This strategy ensures we reclaim most of the slacks while minimizing performance impact. Line 23 invokes our adaptive-ABFT strategy for overclocking. Finally, we return the final decisions regarding CPU/GPU clock frequency adjustments and ABFT protection strength for the current iteration.
\begin{table}
\begin{tabular}{|p{56.9pt}|p{56.9pt}|p{56.9pt}|p{56.9pt}|} \hline Operation & Computation \& Chocksum & Data Transfer \& Checksum Verification \\ & Update & & \\ \hline PD-Cho. & 1 & 1 & 1 \\ \hline TMU-Cho. & \((1+k)(1-\frac{p}{k-1})\) & N/A & \(1-\frac{p}{n-kb-p}\) \\ \hline PD-LU & \(1-\frac{p}{3k-1}(3k-1)\) & \(1-\frac{1}{n-kb}\) & \(1-\frac{p}{n-kb}\) \\ \hline PU-LU & \(1-\frac{p}{n-kb}\) & N/A & \(1-\frac{p-kb-p}{n-kb-p}\) \\ \hline TMU-LU & \(1-\frac{p}{n-kb}\) & N/A & \(1-\frac{p-kb}{n-kb-p}\) \\ \hline PD-QR & \(1-\frac{p}{6n-(6k-1)(6k-1)}\) & \(1-\frac{p}{n-kb-p}\) & \(1-\frac{p}{n-kb-p}\) \\ \hline TMU-QR & \(1-\frac{p}{n-kb-6}\) & \(-\) N/A & \(1-\frac{p}{n-kb-6}\) \\ & \(\frac{p}{n-kb-6}\) & \(+\) & \(\frac{p}{n-kb+6}\) & \(+\) \\ & \(\frac{p^{2}}{(n-kb-1)(n-kb+6)}\) & & \(\frac{p^{2}}{(n-kb-1)(n-kb+6)}\) \\ \hline \end{tabular}
\end{table}
Table 2: Ratios of time complexity of PD, PU, TMU, transfer size, and ABFT-related operations between \(k^{th}\) and \(k+1^{th}\) iteration. \(n\) and \(b\) are the total size and the block size of the input matrix respectively. PU of Cholesky and QR are omitted since they do not affect the slack
#### 3.2.3. Theoretical performance improvement and energy saving analysis
Next, we provide a theoretical analysis of performance improvement and energy saving. Without loss of generality, we assume in the following discussion that the slack is on the CPU side (i.e., the GPU task is on the critical path) for simplification. The performance improvement mainly comes from speeding up the tasks on the critical path. So, the performance improvement of iteration \(k\) can be simply calculated as: \(\Delta T=T_{k}^{old}-T_{k}^{new}=T_{k}^{GPU}-(T_{k}^{GPU}-slack_{k}\times r)=slack_{k}\times r\). This suggests that higher \(r\) leads to higher performance. As for energy consumption, the theoretical amount of energy saving on the CPU when adopting BSR with reclamation ratio \(r\) in iteration \(k\) can be estimated as:
\[\Delta E_{k}^{CPU}=\Delta E_{k}^{CPU\_dynamic}+\Delta E_{k}^{CPU\_static}\]
\[\begin{split}\Delta E_{k}^{CPU\_dynamic}&=E_{k}^{CPU\_dynamic\_old}-E_{k}^{CPU\_dynamic\_new}\\ &=d^{CPU}P_{total}^{CPU}T_{k}^{CPU}\\ &\quad-\alpha^{CPU}\left(\frac{f^{CPU\_new}}{f^{CPU\_old}}\right)^{2.4}d^{CPU}P_{total}^{CPU}\left(T_{k}^{CPU}+slack_{k}(1-r)\right)\\ &=d^{CPU}P_{total}^{CPU}T_{k}^{CPU}\\ &\quad-\alpha^{CPU}\left(\frac{T_{k}^{CPU}}{T_{k}^{CPU}+slack_{k}(1-r)}\right)^{2.4}d^{CPU}P_{total}^{CPU}\left(T_{k}^{CPU}+slack_{k}(1-r)\right)\\ &=\left(1-\alpha^{CPU}\frac{(T_{k}^{CPU})^{1.4}}{(T_{k}^{CPU}+slack_{k}\times(1-r))^{1.4}}\right)d^{CPU}P_{total}^{CPU}T_{k}^{CPU}\end{split}\]
\[\Delta E_{k}^{CPU\_static}=\left(T_{k}^{CPU}-\alpha^{CPU}\left(T_{k}^{CPU}+slack_{k}(1-r)\right)\right)\left(1-d^{CPU}\right)P_{total}^{CPU}\]
Similarly, we can estimate the energy saving on GPUs as follows:
\[\begin{split}\Delta E_{k}^{GPU}=&\left(1-\alpha^{GPU}\frac{(T_{k}^{GPU})^{1.4}}{(T_{k}^{GPU}-slack_{k}\times r)^{1.4}}\right)d^{GPU}P_{total}^{GPU}T_{k}^{GPU}\\ &+\left(T_{k}^{GPU}-\alpha^{GPU}(T_{k}^{GPU}-slack_{k}\times r)\right)\left(1-d^{GPU}\right)P_{total}^{GPU}\end{split}\]
where \(\alpha^{CPU/GPU}\) are the total power reduction factors when we use the optimized guardband of the CPU/GPU. We measure them in our hardware profiling work (**Figure**5). For clock frequencies out of the default range, we estimate using constant values equal to the last measured value (dashed line). \(T_{k}^{CPU/GPU}\) are the original task execution times on the CPU/GPU. \(P_{total}^{CPU/GPU}\) are the total power of the CPU/GPU at the default guardband and clock frequencies. \(d^{CPU/GPU}\) are the ratios of the CPU/GPU dynamic power in the total power consumption. The change of CPU/GPU dynamic power is estimated using \(P_{dynamic}\propto f^{2.4}\)[17]. When the critical path is on the GPU, energy saving on the CPU is guaranteed. However, whether or not we can save energy on the GPU depends on \(\alpha^{GPU}\) and \(r\). Assuming the power reduction factor \(\alpha^{GPU}\) is fixed and minimized by applying the optimized processor guardband, the reclamation ratio \(r\) controls the trade-off between performance improvement and energy consumption. Higher \(r\) leads to higher performance but less energy saving, and vice versa. The highest energy saving is achieved with \(r_{max\_energy}=0\), which brings no performance improvement. The maximum \(r\) that achieves the best performance without a net increase in energy consumption is hard to solve for directly, so we use a numerical approach: by solving \(\Delta E_{k}^{CPU}+\Delta E_{k}^{GPU}=0\) using Newton's method, we obtain estimated solutions. For example, for decompositions with input \(30720\times 30720\), the averaged reclamation ratios across all iterations are 0.28 for Cholesky, 0.26 for LU, and 0.31 for QR, which approximately matches our experimental results in **Figure**11.
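As an illustration, the energy-neutral ratio can be found numerically from the model above. The sketch below uses a Newton iteration with a numerical derivative; all parameter values are placeholders to be supplied from profiling, not the paper's measurements.

```python
# Hedged sketch: find the largest reclamation ratio r with non-negative total
# energy saving, i.e. dE_cpu(r) + dE_gpu(r) = 0, following the model above.
# Inputs (alpha, d, P, task times) come from profiling; names are illustrative.

def delta_e_cpu(r, T_cpu, slack, alpha, d, P):
    T_new = T_cpu + slack * (1 - r)
    dyn = (1 - alpha * (T_cpu / T_new) ** 1.4) * d * P * T_cpu
    sta = (T_cpu - alpha * T_new) * (1 - d) * P
    return dyn + sta

def delta_e_gpu(r, T_gpu, slack, alpha, d, P):
    T_new = T_gpu - slack * r
    dyn = (1 - alpha * (T_gpu / T_new) ** 1.4) * d * P * T_gpu
    sta = (T_gpu - alpha * T_new) * (1 - d) * P
    return dyn + sta

def solve_r(T_cpu, T_gpu, slack, a_c, a_g, d_c, d_g, P_c, P_g,
            iters=50, eps=1e-6):
    f = lambda r: (delta_e_cpu(r, T_cpu, slack, a_c, d_c, P_c)
                   + delta_e_gpu(r, T_gpu, slack, a_g, d_g, P_g))
    r = 0.5  # initial guess
    for _ in range(iters):
        df = (f(r + eps) - f(r - eps)) / (2 * eps)  # numerical derivative
        if abs(df) < 1e-12:
            break
        r = min(max(r - f(r) / df, 0.0), 1.0)       # Newton step, kept in [0, 1]
    return r
```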
## 4 Experimental Evaluation
### Evaluation Methodology
We compare BSR with two state-of-the-art energy-saving approaches R2H and SR together with the original design in the MAGMA library.
* Original: The original matrix decompositions in the state-of-the-art MAGMA library. We keep the CPU/GPU clock frequency fixed at the default (autoboost disabled).
* R2H: The original matrix decompositions in the state-of-the-art MAGMA library with CPU/GPU autoboost feature enabled. The processor clock frequency is dynamically set according to the workload.
* SR: The state-of-the-art energy efficient matrix decompositions using single directional slack reclamation [7].
* BSR: Our proposed matrix decomposition with BSR energy efficiency optimization and ABFT-OC. Clock frequencies can reach a greater range where SDCs can occur but are correctable by ABFT.
All the above versions are implemented for Cholesky, LU, and QR decomposition for double precision inputs with block size tuned for performance.
### Experimental Environment
All experiments are performed on a power-aware CPU-GPU server. **Table**3 lists the hardware configuration of the experimental platform and the system tools used for adjusting CPU/GPU guardband/clock frequencies and for measuring the energy consumption of the CPU and GPU. Limited by the capability of our test platform, we only measure the energy consumption of the CPU package and the GPU device. For accurate measurement of energy consumption and a stable SDC error rate at reduced guardband, we adjust the external cooling system to stabilize the CPU/GPU temperature at \(45^{\circ}\)C and \(55^{\circ}\)C respectively. From the software perspective, all matrix decomposition versions are built with GCC 7.4.0 and CUDA 11.6 with the highest optimization flags turned on. NVIDIA cuBLAS 11.1 and Intel MKL 2020 are used as linear algebra computing kernels. MKL is configured to use all CPU cores. The operating system is Ubuntu 18.04.
### Evaluation Results
#### 4.3.1 Online slack prediction accuracy comparison
**Figure**8 shows the relative online prediction error of the approach that uses only the first iteration for prediction [7] vs. our enhanced slack prediction approach proposed in this work. We can see both approaches give less than 10% relative error for the first 2/3 of the iterations. However, since [7] depends only on the profiling result of the first iteration, the error caused by profiling and prediction accumulates and becomes significant (about 11.4% on average) as the decomposition progresses. Our enhanced algorithmic slack prediction uses an online calibration approach that effectively prevents error accumulation, reducing the relative prediction error to around 4% on average.
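A possible reading of this calibration idea is sketched below: each iteration's measured task times are re-used as the baseline and scaled by the analytic per-iteration ratios of Table 2, instead of extrapolating everything from the first iteration. The task names, the dictionary layout, and the simplified slack expression are assumptions for illustration only.

```python
# Hedged sketch of online-calibrated slack prediction: re-anchor on the times
# measured in iteration k and scale them by the analytic ratios T_{k+1}/T_k
# (Table 2) to predict iteration k+1. Task names are illustrative.

def predict_next_times(measured_times, ratios_k):
    """measured_times: {task: seconds measured in iteration k}
    ratios_k: {task: analytic ratio between iteration k+1 and k}"""
    return {task: t * ratios_k[task] for task, t in measured_times.items()}

def predict_slack(times):
    # Simplified slack between GPU-side and CPU-side work of one iteration,
    # following the structure of the slack expression given earlier.
    gpu_side = times["TMU"] + times["TMU_checksum"] + times["PU"]
    cpu_side = times["PD"] + times["PD_checksum"] + times["data_transfer"]
    return gpu_side - cpu_side
```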
#### 4.3.2 ABFT overhead and correctness comparison
**Figure**9 shows the computational overhead and the probability of computing correctness when different ABFT schemes are applied. We use double precision LU decomposition with BSR reclamation ratio \(r=0.25\) as an example. The correctness is estimated by repeating the decomposition 100,000 times and comparing the results. We observe similar results on other types of decompositions. Due to the relatively short slack in the later part of the decomposition, higher GPU clock frequencies are needed, which reach degrees of overclocking that can cause SDC errors. If we do not apply any fault tolerance, only 23.28% of the overall matrix decomposition tests output correct results. If we apply single-side checksum ABFT, it improves the percentage of tests with correct output to 76.11% since 0D errors can be effectively detected and corrected. However, 1D errors cannot be handled by single-side checksum ABFT; in contrast, full checksum ABFT
\begin{table}
\begin{tabular}{|l|c|c|} \hline Processor & Intel Core i7-9700K & NVIDIA RTX 2080 Ti \\ \hline Base Clock & 3.5(Two).1GHz & 1.3(Two).1GHz \\ \hline Overclocking & 3.6-4.5(Two).1GHz & 1.4-2.2(Two) 0.1GHz \\ \hline Memory & 32 GB RAM & 12 GB RAM \\ \hline Default guardband & Vcore offset: 0mV & Graphics clock offset: 0 \\ \hline Optimized guardband & Vcore offset: -150mV & Graphics clock offset: +200 \\ \hline \end{tabular}
\end{table}
Table 3: Hardware/System Configuration for Experiments.
Figure 8: Slack prediction error of the LU decomposition using different approaches
Figure 9: Comparing overhead and correctness when different ABFT scheme is applied in double precision LU decomposition with reclamation ratio \(r=0.25\)
can ensure all decomposition tests are correct, but it also brings 12% overhead. Our adaptive-ABFT can adaptively apply the necessary level of fault tolerance to ensure high reliability and low overhead. For example, when we set the reclamation ratio \(r=0.25\), the first 41 iterations run at fault-free clock frequencies (1700Mhz), so adaptive-ABFT completely disables ABFT to eliminate unnecessary fault tolerance overhead. For the \(42^{nd}-49^{th}\) iterations, the slacks need to be reduced by BSR using more aggressive overclocking (up to 1900Mhz), so it applies single-side checksum ABFT. Finally, it applies full checksum ABFT after the \(50^{th}\) iteration since higher clock frequencies are used (up to 2200Mhz). So, with adaptive-ABFT, we can still ensure all decomposition tests are correct with only 4% fault tolerance overhead.
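The frequency-driven scheme selection described here can be summarized by a small helper like the one below. The thresholds are the example values quoted for \(r=0.25\) and would in practice come from hardware error-rate profiling; the function name and structure are illustrative, not the paper's code.

```python
# Illustrative mapping from the requested GPU clock to the ABFT scheme, as in
# the adaptive-ABFT example above (r = 0.25). Thresholds are profiling-derived
# example values from the text, not universal constants.

def abft_scheme(gpu_clock_mhz, fault_free_max=1700, zero_d_only_max=1900):
    if gpu_clock_mhz <= fault_free_max:
        return "none"         # no SDCs expected: skip checksum maintenance
    if gpu_clock_mhz <= zero_d_only_max:
        return "single-side"  # 0D errors possible: one-sided checksums suffice
    return "full"             # 1D errors possible: full (two-sided) checksums
```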
#### 4.3.3 Per iteration performance and energy comparison.
To understand how each of the different approaches affects the performance and energy efficiency of matrix decompositions, we show the profiling results of the \(2^{nd}\) and \(50^{th}\) iterations of the LU decomposition in terms of time and energy cost breakdown in **Figure**10. For the original version, we can see the slack occurs on the CPU side for the \(2^{nd}\) iteration and on the GPU side for the \(50^{th}\) iteration. For clarity, in the following discussion we refer to the case where the slack is on the CPU side as case A and the case where the slack is on the GPU side as case B. For R2H, we observe noticeable energy saving in both case A and
Figure 11. Pareto efficient performance-energy consumption trade-off enabled by adjusting the reclamation ratio. Input size: \(30720\times 30720\) double precision
Figure 12. Overall energy saving and ED2P Reduction compared with the original design. Input size: \(30720\times 30720\).
Figure 10. Time and energy saving breakdown of the \(2^{nd}\) and \(50^{th}\) iteration of the LU decomposition (Input size: \(30720\times 30720\)). Energy saving is compared with the original design. Positive values represent energy saving and negative values represent extra energy costs.
case B, due to reduced energy consumption on the CPU side and GPU side respectively. For SR, we see the slack is fully reclaimed in case B, but not fully reclaimed in case A due to the limited clock frequency range on the CPU and the longer slack length. For BSR, we test different reclamation ratios \(r\) and mark their values under the bars. We set \(r\) from 0 up to the value that leads to the maximum achievable performance. This maximum \(r\) is higher for case A since the GPU has greater overclocking capabilities than the CPU in our system when we apply the optimized guardband. We can see that maximum energy saving is achieved when \(r=0\), which is consistent with our previous theoretical analysis. Maximum performance is reached at \(r=0.25\) for case A and case B, which is close to our theoretical estimation. When we increase \(r\), we see an increase in energy consumption for the processor on the critical path due to the increase in clock frequency. For case A, we observe a slight increase in energy saving since the slack is long enough for the CPU to always run at the lowest clock frequency, and reducing the total execution time saves more CPU static energy. We also observe a slight decrease in energy saving in case B, mainly due to the slight increases in clock frequencies. Even so, it can still save energy since 1) the clock frequencies are low and 2) the optimized guardband brings a power reduction. Finally, thanks to ABFT-OC, we can exploit higher overclocking frequencies, achieving higher performance and energy efficiency in case A.
#### 4.3.4 Overall energy saving and energy efficiency comparison
Next, we show the overall energy-saving capability of different approaches in **Figure 12(a)**. We evaluate all three matrix decompositions with an input size of \(30720\times 30720\). All four versions of each type of matrix decomposition produce a similar performance. To maximize energy saving the reclamation ratio of BSR is set to 0. We can see that compared with the state-of-the-art MAGMA library, our BSR is able to save energy by 30.7% for Cholesky, 28.2% for LU, and 28.8% for QR. That is \(1.31\times-1.49\times\) more energy saving compared with the current state-of-the-art SR energy saving approach and \(2.03\times-2.20\times\) more energy saving compared with R2H. In addition, we use \(Energy\times Delay^{2}\) (ED2P) to measure the energy efficiency of matrix decompositions. As shown in **Figure 12(b)**, compared with the original design, our BSR is able to reduce ED2P by 29.3%-31.6%. Compared with R2H, BSR is able to reduce ED2P by 18.6%-20.7%. Finally, compared with SR, BSR is able to reduce ED2P by 10.8%-14.1%.
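For reference, the \(Energy\times Delay^{2}\) metric used here is straightforward to compute; lower values mean better energy efficiency. The small helper below is only a restatement of the definition, with names chosen for illustration.

```python
# Energy-Delay-Squared Product (ED2P): lower is better.
def ed2p(energy_joules, time_seconds):
    return energy_joules * time_seconds ** 2

def ed2p_reduction(baseline, optimized):
    """Fractional ED2P reduction of 'optimized' relative to 'baseline';
    each argument is an (energy_joules, time_seconds) tuple."""
    return 1.0 - ed2p(*optimized) / ed2p(*baseline)
```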
#### 4.3.5 Overall energy saving on different input sizes
In **Figure 13**, we show the results of applying the energy-saving approaches to LU decomposition with different input sizes. Limited by the page space, we only show the results for LU decomposition; other matrix decompositions behave similarly. We can see our BSR is able to stably save energy across different input matrix sizes of \(5120\times 5120\) and above. Note that it is hard to save energy on smaller matrices since they either lead to high fault tolerance overhead or small slacks that are hard to reclaim.
#### 4.3.6 Overall Pareto efficient performance-energy consumption trade-off
Finally, we show the overall Pareto efficient performance-energy consumption trade-off enabled by adjusting the reclamation ratio in BSR. As shown in **Figure 11**, by adjusting the reclamation ratio to the minimum of 0, we achieve maximum energy saving with performance similar to the original design. In this case, compared with the original design, BSR is able to save energy by 28.2%-30.7%. Compared with R2H, BSR is able to save energy by 17.1%-18.9%. Compared with SR, BSR is able to save energy by 9.6%-11.7%. By increasing the reclamation ratio, we are able to adjust the performance or energy consumption of matrix decompositions. For example, with equal or less energy consumption, compared with the original design BSR is able to improve the performance by 1.38\(\times\)-1.51\(\times\). Also, compared with R2H, BSR is able to improve the performance by 1.33\(\times\)-1.43\(\times\). In addition, compared with SR, BSR is able to improve the performance by 1.36\(\times\)-1.43\(\times\). Finally, we see that the results of BSR with different reclamation ratios form a Pareto set such that we cannot improve energy saving and performance at the same time without reliability degradation.
## 5 Conclusion
In this work, we focused on further improving the energy saving of matrix decompositions on CPU-GPU heterogeneous systems beyond existing state-of-the-art works. To achieve our goal, we first proposed ABFT-OC, a novel overclocking technique that is protected by ABFT to enable reliable computation for key operations in matrix decompositions when overclocking. Next, based on ABFT-OC, we proposed BSR, a novel matrix decomposition framework, that aims to maximize energy saving while maintaining performance and reliability. We evaluated BSR on three key matrix decomposition algorithms - Cholesky, LU, and QR. Experiments show that BSR is able to save up to 11.7% more energy compared with the current best energy saving optimization approach with no performance degradation and up to 14.1% ED2P reduction. Also, BSR enables the Pareto efficient performance-energy trade-off, which is able to provide up to 1.43\(\times\) performance improvement without costing extra energy.
## 6 Acknowledgement
This work was supported by the U.S. Department of Energy, Office of Science, Office of Advanced Scientific Computing Research, Scientific Discovery through the Advanced Computing (SciDAC) program under Award DESC0022209. The research was also partly supported by NSF Grant 1907401. |
2306.08094 | Can ChatGPT Enable ITS? The Case of Mixed Traffic Control via
Reinforcement Learning | The surge in Reinforcement Learning (RL) applications in Intelligent
Transportation Systems (ITS) has contributed to its growth as well as
highlighted key challenges. However, defining objectives of RL agents in
traffic control and management tasks, as well as aligning policies with these
goals through an effective formulation of Markov Decision Process (MDP), can be
challenging and often require domain experts in both RL and ITS. Recent
advancements in Large Language Models (LLMs) such as GPT-4 highlight their
broad general knowledge, reasoning capabilities, and commonsense priors across
various domains. In this work, we conduct a large-scale user study involving 70
participants to investigate whether novices can leverage ChatGPT to solve
complex mixed traffic control problems. Three environments are tested,
including ring road, bottleneck, and intersection. We find ChatGPT has mixed
results. For intersection and bottleneck, ChatGPT increases number of
successful policies by 150% and 136% compared to solely beginner capabilities,
with some of them even outperforming experts. However, ChatGPT does not provide
consistent improvements across all scenarios. | Michael Villarreal, Bibek Poudel, Weizi Li | 2023-06-13T19:27:18Z | http://arxiv.org/abs/2306.08094v2 | # Can ChatGPT Enable ITS? The Case of Mixed Traffic Control via Reinforcement Learning
###### Abstract
The surge in Reinforcement Learning (RL) applications in Intelligent Transportation Systems (ITS) has contributed to its growth as well as highlighted key challenges. However, defining objectives of RL agents in traffic control and management tasks, as well as aligning policies with these goals through an effective formulation of Markov Decision Process (MDP), can be challenging and often require domain experts in both RL and ITS. Recent advancements in Large Language Models (LLMs) such as GPT-4 highlight their broad general knowledge, reasoning capabilities, and commonsense priors across various domains. In this work, we conduct a large-scale user study involving 70 participants to investigate whether novices can leverage ChatGPT to solve complex mixed traffic control problems. Three environments are tested, including ring road, bottleneck, and intersection. We find ChatGPT has mixed results. For intersection and bottleneck, ChatGPT increases number of successful policies by 150% and 136% compared to solely beginner capabilities, with some of them even outperforming experts. However, ChatGPT does not provide consistent improvements across all scenarios.
## I Introduction
Large Language Models (LLMs) represent a significant advancement in artificial intelligence. Their usefulness as a general-purpose tool is anticipated to have a profound societal impact. Experiments with LLMs such as GPT-4 [1, 2], LLaMA [3], and PALM2 [4] demonstrate their strong reasoning and common sense abilities across domains such as math, science, and coding. To achieve this success, LLMs leverage Reinforcement Learning from Human Feedback (RLHF) [5, 6] to solve the alignment problem, i.e., follow user intent, by fine-tuning them on human feedback. While there is a strong focus on improving LLMs through Reinforcement Learning (RL), the possibility of leveraging LLMs to assist RL problems is in its nascent stages.
The RL framework is inherently challenging because it demands an understanding of Markov Decision Processes (MDPs). Any RL problem must be formulated as an MDP by designing the state, action, reward, and discount factor, among other components [7]. This is usually a tedious task that requires numerous experiments and careful analysis, often by a domain expert. Specifically, in research areas such as Intelligent Transportation Systems (ITS), it is challenging for novices to understand and design effective MDPs. Moreover, no standard technique exists for creating general-purpose MDPs that will work across environments and satisfy various goals.
Over the last decade, RL has been adopted to address complex control problems in ITS such as traffic management and autonomous driving [8, 9]. As autonomous agents, including vehicles and traffic lights, become more prevalent [10] in ITS, they introduce new challenges and opportunities. One emerging topic is mixed traffic control that uses RL-empowered robot vehicles (RVs) to regulate human-driven vehicles (HVs), thus improving the overall traffic flow [11, 12, 13, 14]. This surge in research interest has attracted a broader audience to participate in the topic, resulting in a growing demand for creative decision-making and control strategies enabled by RL. However, the initial technical barrier, specifically formulating MDP components to align with a control strategy, poses a challenge. LLMs, with their broad knowledge, commonsense priors, and creative capacity, show promise in reducing these barriers and simplifying the process.
In this project, we explore whether ChatGPT (with GPT-4 backend) can assist non-experts in ITS research to solve mixed traffic control problems. We conduct a large-scale user study with 70 participants who have no prior experience in ITS research. The participants are tasked to develop MDP components (state and reward) for three mixed traffic scenarios: a ring road, an intersection, and a bottleneck as shown in Fig. 1. We split participants into a control group, where participants attempt to solve problems solely based on common sense and prior knowledge, and a study group, where participants can prompt ChatGPT unrestricted. Participants are provided a manuscript with a general overview of RL, a few examples of MDP components, a reference bank of metrics related to traffic, and images and descriptions of the mixed traffic control environments. The formulated MDPs are then used to train policies and evaluate performance. From the study, we find that:
1. In the intersection environment, using ChatGPT can lead to a performance better than an expert.
2. In the bottleneck and intersection environments, using ChatGPT results in an increase in number of successful policies by 150% and 136%, respectively.
3. ChatGPT's creativity enables a 363% increase in utilization of new metrics, although the use of these new metrics does not always result in a successful policy.
4. In the ring environment, ChatGPT's help does not increase the policy success rate.
The wide range of results observed, from no success to outperforming the expert, across the mixed traffic control environments indicates that the effectiveness of utilizing
ChatGPT in ITS depends on the complexity of the specific problem, as well as the extent and manner in which ChatGPT is utilized.
## II Related Work
Recent works have studied how Large Language Models (LLMs) can be leveraged in a variety of RL tasks. In robotics, LLMs are utilized in planning and navigation by representing the robotic agent's input (state) in natural language, additionally incorporating the input with visual or raw sensor data for grounding. This approach demonstrates high data efficiency and generalization to unseen environments [16, 17].
Limited research exists on minimizing the human effort required to generate an effective policy. Some works use LLMs to substitute components of MDPs, such as making an LLM a surrogate or proxy reward function. By using the in-context learning capability of LLMs, hard-to-specify reward functions (e.g., versatility, fairness) have been attempted [18]. Meanwhile, other studies have used LLMs to replace a policy entirely [19, 20]. However, to the best of our knowledge, no previous work has directly evaluated the effectiveness of LLMs as a tool to reduce the effort of designing MDP components by novices, comparing the policies obtained with and without LLM assistance against those from experts.
## III Preliminaries
In the following, we introduce the formulation of mixed traffic control as reinforcement learning (RL) tasks and discuss the corresponding test environments.
### _Reinforcement Learning_
We model mixed traffic control as a Partially Observable Markov Decision Process (POMDP) represented by a tuple (\(S,A\), \(P\), \(R\), \(p_{0}\), \(\gamma\), \(T\), \(\Omega\), \(O\)) where \(S\) is the state space; \(A\) is the action space; \(P(s^{\prime}|s,a)\) is the transition probability function; \(R\) is the reward function; \(p_{0}\) is the initial state distribution; \(\gamma\in(0,1]\) is the discount factor; \(T\) is the episode length (horizon); \(\Omega\) is the observation space; and \(O\) is the probability distribution of retrieving an observation \(\omega\in\Omega\) from a state \(s\in S\). At each timestep \(t\in[1,T]\), a robot vehicle (RV) uses its policy \(\pi_{\theta}(a_{t}|s_{t})\) to take an action \(a_{t}\in A\), given the state \(s_{t}\)\(\in S\). The RV's environment provides feedback from taking action \(a_{t}\) by calculating a reward \(r_{t}\) and transitioning the agent into the next state \(s_{t+1}\). The RV's goal is to learn a policy \(\pi_{\theta}\) that maximizes the discounted sum of rewards, i.e., return, \(R_{t}=\sum_{i=t}^{T}\gamma^{i-t}r_{i}\). Proximal Policy Optimization [21] is used to learn \(\pi_{\theta}\).
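As a small illustration of the objective above, the discounted return can be computed recursively from one episode's rewards; this is standard RL bookkeeping, not code from the study.

```python
# Discounted return R_t = sum_{i=t}^{T} gamma^{i-t} r_i, computed backwards
# over an episode's reward sequence.
def discounted_return(rewards, gamma=0.99):
    R = 0.0
    for r in reversed(rewards):
        R = r + gamma * R
    return R

# Example: three timesteps of reward
print(discounted_return([1.0, 0.5, 2.0], gamma=0.9))  # 1.0 + 0.9*0.5 + 0.81*2.0
```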
### _Mixed Traffic Control Environments_
#### Iii-B1 Ring
The ring environment (shown in Fig. 1 top) consists of a single-lane circular road network and 22 vehicles (21 HVs and one RV). It simulates how perturbations due to imperfections in human driving behavior can amplify and propagate, leading to an eventual standstill for some vehicles. This situation, known as 'stop-and-go traffic', acts as a wave that propagates continually through the ring, opposite to the direction of travel. The RV's goal is to prevent the formation of these waves. Ring is a widely used benchmark in traffic control [22]. An expert's [11] state space (major metrics given in Table I) is:
\[\mathrm{s}=\left\{\frac{v_{RV}}{v_{\max}},\frac{v_{\mathrm{lead}}-v_{RV}}{v _{\max}},f\left(x_{\mathrm{lead}}-x_{RV}\right)\right\}. \tag{1}\]
The difference between \(x_{\mathrm{lead}}\) and \(x_{RV}\) is passed through a normalization function \(f\). An expert's reward function encourages high average velocity and low control actions (acceleration) through a weighted combination given by:
\[\mathrm{r}=\frac{1}{n}\sum_{i}v_{i}-\alpha*\left|a_{RV}\right|, \tag{2}\]
where \(n=22\) and \(\alpha\) is chosen empirically.
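For concreteness, Eqs. (1)-(2) can be written as the sketch below. The form of the normalization \(f\) and the value of \(\alpha\) are not fixed by the text, so both are assumptions here.

```python
# Sketch of the expert ring-road observation (Eq. 1) and reward (Eq. 2).
# The normalization f(.) and alpha are illustrative assumptions.
import numpy as np

def ring_observation(v_rv, v_lead, x_rv, x_lead, v_max, ring_length):
    headway = (x_lead - x_rv) % ring_length          # gap to the leader on the ring
    f_val = headway / ring_length                    # assumed normalization f(.)
    return np.array([v_rv / v_max, (v_lead - v_rv) / v_max, f_val])

def ring_reward(all_velocities, a_rv, alpha=0.1):
    """Mean speed of the n = 22 vehicles minus a penalty on RV acceleration."""
    return float(np.mean(all_velocities)) - alpha * abs(a_rv)
```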
#### Iii-B2 Bottleneck
The bottleneck environment (shown in Fig. 1 middle) simulates vehicles experiencing a capacity drop [23] where a road's outflow significantly decreases after the road's inflow surpasses a threshold. The RVs' goal is to improve outflow. Bottleneck represents a bridge with lanes decreasing from \(4\times l\) to \(2\times l\) to \(l\) (where \(l\) is a scaling factor and is one for our work). The RV penetration rate is 10%. An expert's [24] state space is:
\[\omega_{\mathrm{precise}}=\left\{\overline{X}_{HV},\overline{V}_{HV}, \overline{X}_{RV},\overline{V}_{RV},o_{20}\right\}, \tag{3}\]
where the mean positions and velocities of both vehicle types are considered across user-defined segments of the road network. An expert's reward function rewards increasing bottleneck outflow:
\[\mathrm{r}=o_{10}. \tag{4}\]
Fig. 1: Three mixed traffic control environments [11] (a deep reinforcement learning framework for traffic management), Ring, Bottleneck, and Intersection, are provided to the study participants. Robot vehicles are red and are controlled by learnt RL policies. Human-driven vehicles are white and are modeled by the Intelligent Driver Model [15].
#### Iii-B3 Intersection
The intersection environment (shown in Fig. 1 bottom) represents an unsignalized intersection where east/westbound traffic flow is less than north/southbound traffic flow. This flow discrepancy leads to east/westbound traffic queues, as crossing the intersection would otherwise be unsafe. RVs drive in the north/south directions with a 20% penetration rate. The RVs' objective is minimizing east/west queues and increasing average vehicle velocity. An expert's [24] state space is:
\[\mathrm{s}=\{V_{\mathrm{all}},I_{\mathrm{all}},E_{\mathrm{all}},D_{\mathrm{edge }},\overline{V}_{\mathrm{edge}}\}, \tag{5}\]
where \(I_{\mathrm{all}}\) is each vehicle's distance to the intersection, \(E_{\mathrm{all}}\) is each vehicle's edge number, \(D_{\mathrm{edge}}\) is the density of each edge, and \(\overline{V}_{\mathrm{edge}}\) is the average vehicle velocity of each edge. There are eight edges for each direction on both sides of the intersection. An expert's reward function penalizes vehicle delay and vehicle standstills in traffic:
\[\mathrm{r}=-\frac{t*\sum((V_{\mathrm{max}}-V_{\mathrm{all}})/V_{\mathrm{max}})} {\mathrm{n}+\mathrm{eps}}-(\mathrm{gain}*ss_{n}), \tag{6}\]
where \(t\) is current timestep, \(n\) is number of vehicles, \(eps\) prevents zero division, \(gain\) is 0.2, and \(ss_{n}\) is the number of standstill vehicles.
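A direct transcription of Eq. (6) is sketched below; the speed threshold used to count a vehicle as standing still is an assumption, since the text only fixes \(gain=0.2\).

```python
# Sketch of the expert intersection reward (Eq. 6). The standstill speed
# threshold is an assumed value; gain = 0.2 follows the text.
import numpy as np

def intersection_reward(t, speeds, v_max, gain=0.2, eps=1e-6, stop_thresh=0.1):
    speeds = np.asarray(speeds, dtype=float)
    n = len(speeds)
    delay = t * np.sum((v_max - speeds) / v_max) / (n + eps)  # normalized delay term
    ss_n = int(np.sum(speeds < stop_thresh))                  # standstill vehicles
    return -delay - gain * ss_n
```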
## IV User Study
### _Participants and Cohorts_
We recruit 70 graduate students as participants (54 male, 16 female). The male/female participants have a median age of 23/22 and average 4.93/4.26 years of driving experience.
The 70 participants are split into two cohorts: a control group (38 participants) that only use prior non-expert knowledge and the manuscript, and a study group (32 participants) that additionally has access to ChatGPT [25, 1]. The control group helps assess whether ChatGPT can assist non-experts in solving traffic problems by providing baseline capabilities. We find the control group heavily uses collision-prevention metrics, including \(g_{lead}/g_{fol}\), \(ttc\), or \(st_{dist}/st_{\tau}\), with consistent use of \(x_{RV}\), \(v_{RV}\), and \(a_{RV}\) across all environments. Additionally, the control group tends to include objective-oriented metrics such as \(o\) or \(\overline{ss_{\tau}}\) for bottleneck and metrics about queue length/time standstill in the east/west directions for intersection. These metrics are straightforward and intuitive, aligning with our expectations for non-experts in ITS research.
We poll the study group's ChatGPT [25] use frequency with the following descending frequencies (number of participant selections): daily (2), several times a week (13), once a week (6), several times a month (5), once a month or less (2), and never (3).
### _ChatGPT Setup_
The study group participants use GPT-4 with 8k context length and temperature \(=0.7\), a static model provided by OpenAI with no additional fine-tuning. To simulate the practical use of ChatGPT and enable in-context learning, a chat interface is employed using TypingMind [26].
### _Manuscript_
We develop two versions of manuscripts1 (for different cohorts) with the following sections. Study duration is 95 minutes.
Footnote 1: [https://github.com/tmyllrrl/its-study](https://github.com/tmyllrrl/its-study)
#### Iv-C1 General Instructions and Background Questions
The general instructions outline participant conduct for the study's duration. Participants complete the background questions discussed in Sec. IV-A.
#### Iv-C2 Reinforcement Learning Overview
The RL overview section provides a brief explanation of the state space and reward function RL components. We provide transportation-related examples and explanations to ensure common understanding of RL across all participants.
#### Iv-C3 Answer Instructions and Bank of Metrics
The answer instructions cover how their answers should be formatted with examples. The bank of metrics is provided in Table I. The participants can use any new metric with explanation.
#### Iv-C4 Mixed Traffic Environment Descriptions and Questions
Each of the three traffic environments (shown in Fig. 1; details in Sec. III-B) receives a general description, problem explanation, and the mixed traffic objective of the RVs. The general description includes details such as the number of RVs present, general flow behavior of traffic, ratio of RVs to human-driven vehicles. We provide a supplementary video2 demonstrating the mixed traffic environments. We ask the participants three questions per environment: the state space, the reward function, and briefly explain the rationale behind the reward function. The state spaces participants provide are observation spaces; however, we solely use "state space" to prevent additional complexity/avoid confusion.
Footnote 2: [https://youtu.be/qfqgf176Fao](https://youtu.be/qfqgf176Fao)
## V Results
Next, we analyze the participants' answers. Then, we briefly discuss experiment setup and the results in the three mixed traffic control environments.
### _Participant Answers Analysis_
We find that ChatGPT [25, 1] impacts the provided state spaces/reward functions by significantly decreasing invalid answers compared to using prior knowledge alone. We consider an answer valid if it contains metrics provided in the metrics bank or metrics with well-defined explanations. Table II presents the
number of valid/invalid answers for all three environments. On average, only 61% of answers from the control group are valid, while 82% are valid for the study group, a 21% increase. This illustrates ChatGPT's capabilities in guiding participants to create valid state spaces/reward functions.
Another impact of ChatGPT is that the study group uses (in ring, bottleneck, intersection order) 35, 63, and 59 new metrics compared to the control group's 8, 17, and 21 new metrics that do not exist in the metrics bank. On average, this is a 363% increase in new metrics. This significant increase implies that ChatGPT can provide new perspectives to solving the mixed traffic problems.
We provide example state space and reward function in Fig. 2. The left image is a control group (Novice) state space, while the right image is a reward function from the study group (Novice + ChatGPT). The novice state space heavily uses existing metrics (a trend with the control group), while the ChatGPT reward shows the intricate metrics ChatGPT generates. ChatGPT also offers explanations that appear reasonable with the terms. For example, for the "maintain safe time to collision" in Fig. 2, ChatGPT states, "Since your state space includes the time-to-collision (TTC) for the front and rear vehicles, you can encourage the agent to maintain a safe TTC with both vehicles.... The reward is applied only when the actual TTC is less than the determined \(safeTTC\)."
### _Experiment Setup_
We train an RL policy for each valid answer using Proximal Policy Optimization (PPO) [21] with default RLlib hyperparameters [27]. The HVs use the Intelligent Driver Model (IDM) [15] with stochastic noise in the range [\(-0.2\), \(0.2\)] added to account for heterogeneous driving behaviors. Policies are trained for 200 episodes. A fully-connected neural network with 2 hidden layers of size 8 is used for the ring and intersection environments, and of size 16 for the bottleneck environment. Experiments are conducted with an Intel i7-12700k CPU and 32G RAM.
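The human-driver model referenced here is the standard IDM; a hedged sketch with uniform noise in \([-0.2,0.2]\) added to the commanded acceleration is shown below. The parameter values are typical IDM defaults, not necessarily those used in the study.

```python
# Intelligent Driver Model (IDM) acceleration with uniform noise, as used for
# the human-driven vehicles. Parameter values are common defaults, assumed here.
import numpy as np

def idm_acceleration(v, v_lead, gap, v0=30.0, T=1.0, a_max=1.0, b=1.5,
                     s0=2.0, delta=4, noise=0.2):
    dv = v - v_lead                                            # closing speed
    s_star = s0 + max(0.0, v * T + v * dv / (2 * np.sqrt(a_max * b)))
    accel = a_max * (1.0 - (v / v0) ** delta - (s_star / max(gap, 1e-3)) ** 2)
    return accel + np.random.uniform(-noise, noise)            # driver heterogeneity
```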
Fig. 3: Normalized reward curves during training are shown for the three traffic control environments. Each environment consists of three curves for the control group (Novices), the study group (Novices + ChatGPT), and an expert. For Novices and Novices + ChatGPT, the solid line indicates the average for all participants, with the shaded region representing variance. In both groups across all networks, average rewards increase over the course of training. This validates that both groups are able to develop state spaces and reward functions that are trainable using RL.
Fig. 2: Example state space from novice (left) and reward function from novice with ChatGPT (right) for ring. The reward function contains complex terms generated by ChatGPT. ChatGPT provides sound reasoning for using the reward function metrics (see Sec. V-A for details).
### _Experiment Results_
#### Iv-C1 Training
We supply training curves for control (Novices) and study (Novices + ChatGPT) groups in Fig. 3. The curves are normalized to [0,1] with the mean reward values averaged across the two respective groups at each episode. We also supply the expert's training curve. For all three environments and both groups, the curves show reward improving during training, validating participants were able to develop trainable policies as a result of the RVs' actions. Increasing rewards does not guarantee the RV achieves the environment's objective, as the RV may pursue actions that enhance rewards not in line with the goal.
#### Iv-C2 Ring
Results are given in Fig. 4 (left). Due to the task's complexity, we plot a policy's best performance over 10 tests for each trained ring policy. All vehicle average speed (x-axis; meters/second) and minimum speed of any vehicle in the ring (y-axis) for the last 100 seconds (total testing period is 600 seconds) are considered. We consider policies successful (outlined) if their minimum speed is greater than zero while maintaining relatively good average speed. An expert's performance [11] is also provided for comparison.
When using ChatGPT's assistance, five policies are successful, while seven policies are successful using only non-expert knowledge. This result defies expectations given ChatGPT's ability to increase valid answers and inject new metrics into the state spaces/reward functions for the ring environment. One explanation is that, despite the addition of new metrics, the metrics do not improve the robot vehicle's ability to prevent stop-and-go traffic. Another conjecture is that while the study group is encouraged to use ChatGPT, the level of usage among participants varies from asking ChatGPT a few questions to relying on it completely. This impacts ChatGPT's ability to help the participants. Additionally, the high number of non-expert successful policies is unexpected. While none of the given policies reach the expert's level, the anticipated number is close to zero given the task's complexity. Non-experts have more capability than originally hypothesized.
#### Iv-C3 Bottleneck
Fig. 4 (middle) shows the bottleneck results. Each trained RL policy is tested 10 times with the average reported. Outflow (x-axis; vehicles/hour) is considered, and an expert's performance is given as a pink, vertical line. We consider a policy successful if the policy's outflow is greater than 1400.
While the outflow range between novices and novices with ChatGPT is similar, we observe an increase in successful policies when using ChatGPT's aid. The control group has 14 successful policies, while the study group has 19 successful policies, a 136% increase. Similar to ring, no participant-provided policy outperforms the expert, with the best falling nearly 100 vehicles/hour short.
#### Iv-C4 Intersection
Intersection results are given in Fig. 4 (right). The trained RL policy is evaluated 10 times with average results reported. All vehicle average speed (x-axis; meters/second) and east/westbound queue length (y-axis; number of waiting vehicles) are considered. We plot an expert's performance [11] and consider the nearest neighbors as successes (outlined). Four policies are successful with only non-expert knowledge, but six policies are successful (a 150% increase) when using ChatGPT aid. Additionally, of the six with-ChatGPT-help policies, four of them outperform the expert policy by a significant margin. This result is significant and showcases how ChatGPT can give non-experts the ability to compete with ITS domain experts. Two non-expert policies outperform the expert policy, an unexpected outcome, though only half as many as with ChatGPT's help.
For both bottleneck and intersection, we observe a 136% and 150% increase, respectively, in successful policies. Examples of successful policies are provided in Fig. 5. ChatGPT is new and its training set is not disclosed, meaning ChatGPT may not have been trained on a sufficient
Fig. 4: Results for the three mixed traffic environments with successful RL policies denoted. For ring, five policies using ChatGPT's help are successes, while seven policies are successful using only non-expert knowledge. Using ChatGPT sees a decrease in successful policies by two compared to only using non-expert knowledge, illustrating that ChatGPT needs better prompting or further improvement to be useful in this task. For bottleneck, 14 policies are successful without ChatGPT's assistance, while 19 are successful with ChatGPT, a 136% increase. For intersection, only using non-expert knowledge results in four successful policies, while ChatGPT increases successes to six (150% increase), with four (the rightmost green markers) of those outperforming the expert. The bottleneck and intersection increases illustrate how ChatGPT can enable more non-experts to solve complex mixed traffic control tasks. However, the number of increases is lower than expected, potentially showing ChatGPT needs better prompts or further improvement.
amount of RL data, ITS data, or both to provide even more assistance. While the level of ChatGPT assistance is participant-determined, we observe that a significant number of policies developed with extensive ChatGPT use are not successful.
## VI Conclusions
In this work, we conduct a large-scale user study involving non-experts in intelligent transportation systems (ITS) research trying to provide quality reinforcement learning (RL) state spaces and reward functions for three mixed traffic control environments. Our study finds that using ChatGPT can increase the number of successful policies by 150% and 136% in the intersection and bottleneck environments, respectively. However, using ChatGPT does not increase successes in the ring environment. Additionally, the improvement rate from using ChatGPT is less than originally theorized. This potentially means that an insufficient amount of RL and ITS problems was provided to ChatGPT during training.
In the future, we aim to advance this study in several directions. First, we are interested in testing whether ChatGPT can assist the design of other ITS applications such as traffic state estimation and simulation [28, 29, 30, 31, 32, 33, 34], vehicle motion and control [35, 36, 37, 38], and robust driving [39, 40, 41]. Second, we are also interested in studying mixed traffic control under other mobility topics such as traffic safety [42] and micromobility [43]. Finally, we would like to test our approach on other RL tasks, including graph sparsification [44] and quality species selection [45].
|
2302.02728 | Analysis of high-energy drop impact onto deep liquid pool | The present work is devoted to the analysis of drop impact onto a deep liquid
pool. It is focused on the effects of high-energy splash regimes, caused by the
impact of large raindrops at high velocities. Such cases are characterized by
short time scales and complex mechanisms, and they have thus received little
attention until now. The BASILISK open-source solver is used to perform
three-dimensional Direct Numerical Simulations (DNS). The capabilities of the
octree adaptive mesh refinement techniques enable to capture the small-scale
features of the flow, while the Volume of Fluid (VOF) approach combined with a
balanced force surface tension calculation is applied to advect the volume
fraction of the liquids and reconstruct the interfaces. The numerical results
compare well with experimental visualizations: both the evolution of crown and
cavity, the emanation of ligaments, the formation of bubble canopy, and the
growth of the downward spiral jet that pierces through the cavity bottom, are
correctly reproduced. Reliable quantitative agreements are also obtained
regarding the time evolution of rim positions, cavity dimensions and droplet
distributions through an observation window. Furthermore, simulation gives
access to various aspects of the internal flows (velocity, vorticity, pressure,
energy budget), which allows to better explain the corresponding physical
phenomena. Details of the early dynamics of bubble ring entrapment and
splashing performance, the formation/collapse of bubble canopy, and the
spreading of drop liquid, are discussed. The statistics of droplet size show
the bimodal distribution in time, corroborating distinct primary mechanisms of
droplet production at different stages. | Hui Wang, Shuo Liu, Annie-Claude Bayeul-Lainé, David Murphy, Joseph Katz, Olivier Coutier-Delgosha | 2023-02-06T12:09:04Z | http://arxiv.org/abs/2302.02728v1 | # Analysis of high-energy drop impact onto deep liquid pool
###### Abstract
The present work is devoted to the analysis of drop impact onto a deep liquid pool. It is focused on the effects of high-energy splash regimes, caused by the impact of large raindrops at high velocities. Such cases are characterized by short time scales and complex mechanisms, and they have thus received little attention until now. The BASILISK open-source solver is used to perform three-dimensional Direct Numerical Simulations (DNS). The capabilities of the octree adaptive mesh refinement techniques enable to capture the small-scale features of the flow, while the Volume of Fluid (VOF) approach combined with a balanced force surface tension calculation is applied to advect the volume fraction of the liquids and reconstruct the interfaces. The numerical results compare well with experimental visualizations: both the evolution of crown and cavity, the emanation of ligaments, the formation of bubble canopy, and the growth of the downward spiral jet that pierces through the cavity bottom, are correctly reproduced. Reliable quantitative agreements are also obtained regarding the time evolution of rim positions, cavity dimensions and droplet distributions through an observation window. Furthermore, simulation gives access to various aspects of the internal flows (velocity, vorticity, pressure, energy budget), which allows to better explain the corresponding physical phenomena. Details of the early dynamics of bubble ring entrapment and splashing performance, the formation/collapse of bubble canopy, and the spreading of drop liquid, are discussed. The statistics of droplet size show the bimodal distribution in time, corroborating distinct primary mechanisms of droplet production at different stages.
Footnote †: journal: Fluid Mech.
## 1 Introduction
The impact of raindrops on a deep liquid pool has been extensively studied since the initial works of Worthington (1883, 1908). Various behaviours are obtained, depending primarily on the miscible or immiscible character of the two liquids (Lhuissier _et al._, 2013; Castillo-Orozco _et al._, 2016), but also on their difference of density (Thomson & Newall, 1886; Manzello & Yang, 2002; Villermaux, 2022), the depth of the receiving volume (Macklin & Hobbs, 1969; Wang & Chen, 2000; Fedorchenko & Wang, 2004), and the angle of the impact (Zhbankova & Kolpakov, 1990; Okawa _et al._, 2008; Gielen _et al._, 2017; Liu, 2018), to mention only a few parameters.
In the specific case of identical liquids of density \(\rho\), with a \(90^{\circ}\) impact of the drop falling in the air on a deep volume of target liquid, very different phenomena are still observed, when the liquid viscosity and surface tension with air, the diameter \(d\) of the drop and the speed of the impact \(V\) are varied. Schotland (1960) has first reported the primary effect of the Weber number \(We=\rho V^{2}d/\sigma\) on the physics of the impact, i.e. the ratio of the drop kinetic energy to the energy required to deform the target liquid surface.
A few years later, Engel (1966, 1967) has provided a detailed description of various behaviours obtained from waterdrop impact on water pool when \(We\) is varied in the range \(3000\sim 20000\). Based on visualizations of the impact and using white particles in the target liquid and red ink in the impacting drop, she has shown that the drop impact creates a cavity in the flat free surface, and subsequently the rise of the target liquid in a cylindrical shape, from the edges of the cavity. A bubble-thin cylindrical sheet of liquid is erected at the upper edge of this crown, which eventually necks in and closes in a bubble dome in the most energetic test cases. After that, the cavity shallows and a jet forms at the cavity floor, which flows through the centre of the crown in case it is open or merges with a downward jet coming from the top of the bubble dome in case the crown has closed. The author suggests that the cavity may vibrate at its maximal expansion if all the drop kinetic energy is not yet transformed into potential energy, and she also assumes that: (i) the maximum possible bubble height is equal to the cavity diameter, (ii) the pressure evolution below the cavity floor should explain the formation of the upward jet, (iii) the liquid of the initial drop is carried by this jet and will eventually form a secondary bubble at the centre of the cavity.
In the following decades, this analysis has been progressively completed by additional experiments with increasing visualization capabilities (Rodriguez & Mesler, 1985; Pumphrey _et al._, 1989; Rein, 1996), leading to characteristic laws of cavity size and growth time (Leng, 2001) and an improved understanding of the phenomena mentioned hereabove (Rein, 1993). Indeed, these studies were usually focused on a specific mechanism obtained in a small range of \(We\) and Froude number \(Fr=V^{2}/gd\), like for example the crater formation and collapse (Prosperetti _et al._, 1989), the break-up of the upward jet for \(We\) between 5 and 200 (Manzello & Yang, 2002; Castillo Orozco, 2015), the bubble entrapment under the floor of the cavity and the subsequent rising thin jet (Thoroddsen _et al._, 2003; Deng _et al._, 2007; Hendrix _et al._, 2016), the breakup of the crown rim (Deegan _et al._, 2008), and the air layer trapped below the impacting drop when the respective surfaces of liquid are deformed due to the local pressure changes at the early stage of the process (Tran _et al._, 2013).
In the most recent works, attention has been especially focused on these first stages of the impact, where the entrapped air layer under the drop is ruptured, resulting in the generation of secondary bubbles (Pumphrey & Elmore, 1990; Thoroddsen _et al._, 2003, 2012). Immediately after the impact, the contact line between the drop and the receiving liquid, also called the "neck region", moves radially at high speed. It results in a liquid sheet, composed of water coming from the target liquid, which is ejected horizontally outwards at the base of the contact surface (Weiss & Yarin, 1999; Thoroddsen, 2002; Josserand & Zaleski, 2003). This
ejecta eventually breaks up into ligaments and droplets that result in the splashing of very fine sprays (Thoroddsen _et al._, 2011; Zhang _et al._, 2012_b_). For a sufficiently high Reynolds number, this phenomenon comes earlier than the sheet-like jet that is directly produced by the drop impact and will later generate the crown formation, as observed by Zhang _et al._ (2012_a_) using fast X-ray phase contrast imaging. The combination of these two jets was analysed by Agbaglah _et al._ (2015) who used a combination of X-ray imaging and axisymmetric simulations to discuss the interaction between the two jets.
One important output is that this interaction may be strongly related to instabilities observed in the neck region. Indeed, the base of the ejecta may become unstable and generate some alternate vortex shedding in the liquid below the neck region (Thoraval _et al._, 2012; Castrejon-Pita _et al._, 2012). A strong interplay between this von Karman vortex street and the inception/break-up of the ejecta sheet is suggested by several recent studies (Agbaglah _et al._, 2015; Li _et al._, 2018), but the mechanisms are still not elucidated. Non-axisymmetric behaviours have been observed in the vortex shedding and the ejecta formation processes from the bottom visualizations in cases of impact on thin water film (Thoraval _et al._, 2013; Li _et al._, 2018), which shows that these phenomena break the axisymmetry at the fine scale, and may be related to three-dimensional mechanisms. Local streamwise vortices, generated at the sharp corners at the basis of the ejecta at an early stage of the drop impact, are mentioned by these authors to possibly explain these non-axisymmetric effects. Using ultra-fast imaging, an early azimuthal instability was also detected by Li _et al._ (2018), which consists of small azimuthal waves that grow at the edge of the outer line of the contact, before the inception of the shedding. This clearly poses a new challenge to numerical studies, which mostly assume axisymmetry (Thoraval _et al._, 2012; Agbaglah _et al._, 2015). The authors point out the need of three-dimensional simulations with sufficient resolutions of the finest scale to explore the vortex structure in the neck region and the interaction between the two corners of the base of the ejecta sheet, which may be a primary mechanism of this instability.
As for the later stages after impact, phenomena occurring specifically at high Weber numbers, like the formation of ligaments at the edges of the crown, the ejection of droplets, the closure of the upper rim, and the subsequent downward jet, have been also investigated by Engel (1966), Deegan _et al._ (2007), Bisighini _et al._ (2010), Castillo Orozco (2015), and most recently by Murphy _et al._ (2015), who provided detailed information about the length of the ligament according to time, and the population of aerosolized droplets.
An exhaustive review of these previous works devoted to the impact of a drop on a deep volume of the same low-viscosity liquid has been conducted by Murphy _et al._ (2015), as shown in figure 1. Five different regimes have been identified by authors, depending on the values of the Froude and Weber numbers. The two directions of variations of the impact speed \(V\) and the drop diameter \(d\) are plotted in figure 1, as well as the specific case of raindrops of different sizes falling at terminal speed (solid black line labelled "Raindrop TS").
As can be seen in figure 1, if the drop size and the impact speed increase (moving typically from the bottom left to the top right on the chart), the following successive regimes are obtained: (1) slow drop that coalesces with the liquid volume and generates a vortex ring that moves downwards as the drop sinks, (2) formation of a cavity together with a surface wave, (3) the crater formed by increasing the drop diameter and the surface wave forms a crown rim from which the secondary droplets and ligaments are ejected, (4) for the highest speeds and the largest raindrop diameters, a large cavity is obtained, around which extends vertically a very thin liquid film, whose elevation and radius keep increasing and it eventually collapses in a very short time. At the upper part of the film, droplets and ligaments are ejected upwards and outwards.
This last type of behaviour (black diamonds in figure 1) is the most energetic and only received very limited attention during the past twenty years (Pan _et al._, 2008; Bisighini _et al._,
2010; Sochan _et al._, 2018; Lherm _et al._, 2021). This high-energy regime is characterized by shorter time scales and increased complexity of the phenomena involved in the splashing, which makes both experimental and numerical approaches more challenging, and may explain why it has received less attention until now. However, it is the case likely to produce the greatest number of aerosol droplets. The present study is focused on this highly energetic case of large raindrops falling near terminal speed, which is representative of raindrops at the surface of the ocean.
This configuration has been studied by Murphy _et al._ (2015) in detail, primarily as a reference case for a study focused on the influence of oil slicks and oil dispersants on the impact. High-speed videos have enabled to characterise the time evolution of the external shape of the cavity, the crown and its upper rim, while microscopic holography has provided some statistics of droplet population ejected at different stages of the process. These previous experiments are used to validate the numerical strategies conducted in the present study in section 3, which is focused on the same conditions of drop diameter and speed at the impact. Further attention is given here to the early-time bubble ring entrapment and splashing near the impact neck, the internal mechanisms of crown formation as well as its violent closure, the liquid jets formed successively upward and downward and the subsequent entrainment of a large air bubble, and the ejection of droplets at the edge of the upper rim, in various directions that depend on the stage of the process. Indeed, it was estimated in the experiments that such a large raindrop produces at least 2000 micro-droplets. However, the details of the production of these tiny droplets and statistics on their size, speed, and ejection direction remain to be found, which motivated the use of numerical simulations in this paper.
Multiphase flow calculations of drop impacts usually consist of laminar simulations, as the Reynolds number \(Re=\rho Vd/\mu\) based on the drop motion in the air at the impact velocity is below \(10^{4}\), so there is no transition to turbulence, and all motions resulting from the impact are characterized by short characteristic times that do not involve any turbulence effect. The challenge of such computations is thus mostly related to the prediction of the multiphase structure. Early numerical works (Harlow & Shannon, 1967; Ferreira _et al._, 1985) based on Marker and Cell and SOLA-VOF techniques of interface tracking could provide some first predictions of the main features of drop impact but could not resolve the small-scale
Figure 1: Various configurations of splashing reported in the literature according to the \(We\) and \(Fr\) numbers reproduced from Murphy _et al._ (2015) with the authorization of the authors.
mechanisms. Oguz & Prosperetti (1990) later included the surface tension effects and Morton _et al._ (2000) solved the full Navier-Stokes equations, opening the way to modern simulations that are based either on interface tracking (level set, front tracking) or on interface reconstruction, using a transport equation for the volume fraction of one component (VOF method).
The recent simulations of liquid drops impinging onto the liquid surface are mostly based on the latter category of models. Both axisymmetric simulations (Morton _et al._, 2000; Berberovic _et al._, 2009; Ervik _et al._, 2014; Ray _et al._, 2015; Agbaglah _et al._, 2015; Deka _et al._, 2017; Fudge _et al._, 2021) and three-dimensional simulations (Rieber & Frohn, 1999; Brambilla & Guardone, 2013; Cheng & Lou, 2015; Shin _et al._, 2017) can be found in literature, using mostly DNS approaches. For three-dimensional calculations, whose primary objective is to capture the non-axisymmetric mechanisms involved in the splashing, efforts are focused on dynamic refinement techniques (Nikolopoulos _et al._, 2007; Brambilla & Guardone, 2015), in order to resolve the multiple interfaces resulting from the splashing, while ensuring a reasonable grid size. In the latter study, the authors show that a minimum cell size of about 50 to 100 \(\upmu\)m is sufficient to capture most of the features of the splash. However, that conclusion was drawn for Weber numbers around 250, so it may not directly apply to the high-energy configuration investigated in the present study.
The paper is organized as follows: the numerical methods and problem statement are described in section 2, the detailed validation of the numerical strategies is conducted by comparing the numerical results with the experimental data in section 3, the analysis of several mechanisms involved in the splashing is presented in section 4, and the statistics of airborne droplets are finally analysed in section 5.
## 2 Numerical methods and flow configurations
### Main features of the solver
The gas-liquid system of the drop impact is treated as incompressible flow, governed by the mass balance equation, the momentum balance equation and an advection equation for the one-fluid marker, hereafter called the colour function.
\[\nabla\boldsymbol{\cdot}\boldsymbol{U}=0 \tag{1}\]
\[\rho\left(\frac{\partial\boldsymbol{U}}{\partial t}+(\boldsymbol{U}\boldsymbol{\cdot}\nabla)\boldsymbol{U}\right)=-\nabla P+\nabla\boldsymbol{\cdot}(\mu\boldsymbol{D})+\rho\boldsymbol{a}+\sigma\kappa\delta_{s}\boldsymbol{n} \tag{2}\]
\[\frac{\partial C}{\partial t}+\nabla\boldsymbol{\cdot}(C\boldsymbol{U})=C\,\nabla\boldsymbol{\cdot}\boldsymbol{U} \tag{3}\]
where \(\boldsymbol{U}\) is the velocity vector, \(\rho\) is the density, \(P\) is the pressure, \(\mu\) is the viscosity, \(\boldsymbol{D}\) is the deformation tensor whose components are \(D_{ij}=\partial u_{i}/\partial x_{j}+\partial u_{j}/\partial x_{i}\), with \(u_{i}\) and \(x_{i}\), \(i=1\) to 3, the components of \(\boldsymbol{U}\) and of the position \(\boldsymbol{X}\), respectively. \(\boldsymbol{a}\) is the body force along the impact direction. The last term in equation 2 is the surface tension force, with \(\kappa\) the curvature of the interface, \(\sigma\) the surface tension coefficient, which is taken as constant in the present study, and \(\boldsymbol{n}\) the unit vector normal to the interface. This term is zero everywhere except at the gas/liquid interface, as controlled by the Dirac function \(\delta_{s}\). The colour function \(C\) is transported by equation 3. For incompressible flow, the right-hand side of equation 3 is zero by virtue of equation 1.
The BASILISK framework is a flow solver developed by Popinet (2015) and available for use and development under a free software GPL license (Popinet & collaborators, 2013-2023). It solves the time-dependent compressible/incompressible variable-density Euler, Stokes,
or Navier-Stokes equations with second-order space and time accuracy. The Momentum-Conserving Volume-of-Fluid (MCVOF) approach implemented by Fuster & Popinet (2018) is employed here to simulate the gas-liquid two-phase flow. The colour function equation 3 is solved with the volume-fraction advection scheme proposed by Weymouth & Yue (2010), which exhibits complete mass conservation and makes it ideal for highly energetic free-surface flows. The interface is then represented using the Piecewise Linear Interface Calculation (PLIC) VOF method (Rudman, 1998). For equation 2, the Crank-Nicolson discretization of the viscous terms is second-order accurate in time and unconditionally stable, while the convective terms are computed using the Bell-Colella-Glaz (BCG) second-order unsplit upwind scheme (Bell _et al._, 1989), which is stable for CFL numbers smaller than one. The calculation of the surface tension is one of the most challenging steps of the process, since no continuous definition of the interface is available in the VOF approach. Here the balanced-force surface-tension calculation is used (Francois _et al._, 2006), which is based on the Continuum-Surface-Force (CSF) approach originally proposed by Brackbill _et al._ (1992). In addition, a second-order accurate calculation of the curvature is performed, using the Height-Function technique developed by Popinet (2009). More detailed descriptions of the numerical schemes can be found in Fuster & Popinet (2018), Pairetti _et al._ (2020) and Zhang _et al._ (2020).
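To give an order of magnitude of the time-step restrictions implied by these schemes, the sketch below evaluates the convective CFL limit and the standard capillary time-step limit at the finest cell size used in this work. It is a rough illustration only: the fluid properties, the maximum velocity and the O(1) prefactor of the capillary constraint are assumptions, not quantities taken from the solver.

```python
import math

# Nominal properties (assumed values, close to seawater and air at room temperature)
rho_w, rho_a = 1025.0, 1.2   # kg/m^3
sigma = 0.073                # N/m
u_max = 120.0                # m/s, order of the fastest early ejecta (~17 U0, see section 4.1)
dx = 3.9e-6                  # m, finest cell size quoted in section 2.3 (level 14)

dt_convective = 0.5 * dx / u_max   # CFL number of 0.5 (assumed)
dt_capillary = math.sqrt((rho_w + rho_a) * dx**3 / (4.0 * math.pi * sigma))  # O(1) prefactor

print(f"convective limit ~ {dt_convective:.1e} s, capillary limit ~ {dt_capillary:.1e} s")
# Both limits shrink as the cell size decreases, which is why increasing the maximum
# refinement level also reduces the admissible time step (see section 2.3).
```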
Cubic finite volumes organized hierarchically in an octree are used for space discretization. The octree structure was developed initially for image processing (Samet, 1990) and later applied to CFD (Khokhlov, 1998) and multiphase flows (Popinet, 2003). The basic organization is the following (all details can be found in Popinet (2015)): when a cell is refined, it is divided into 8 cubic cells whose edges are half those of the parent cell. The base of the tree is the root cell and the last cells with no child are the leaf cells. The cell level is the number of times it has been refined, compared with the root cell (level 0). To avoid too much complexity in the gradient and flux calculations, the levels of directly and diagonally neighbouring cells are constrained and cannot differ by more than one, and all cells directly neighbouring a mixed cell must be at the same level.
BASILISK employs Adaptive Mesh Refinement (AMR) (van Hooft _et al._, 2018) to adaptively refine/coarsen the grid based on the wavelet-estimated numerical error of the local dynamics, which makes it especially appropriate for the present application, where multiple interfaces and numerous droplets and bubbles are expected. The resolution is adapted at every time step according to the estimated discretisation error of the spatially discretized fields (volume fraction, velocity, curvature, etc.). The mesh is refined wherever the wavelet-estimated error exceeds the given threshold, which eventually leads to a multi-level spatial resolution, from the minimum refinement level \(L_{min}\) to the maximum refinement level \(L_{max}\), over the entire domain.
The BASILISK code has already been applied to the study of drop impact onto liquid films (Josserand & Zaleski, 2003; Reijers _et al._, 2019; Wu _et al._, 2021; Fudge _et al._, 2021; Sanjay _et al._, 2022), and the parallel capability and computational efficiency of the solver were discussed in Wu _et al._ (2021). Furthermore, the capabilities of the BASILISK solver have been extensively validated on various multiphase flow problems (Popinet & collaborators, 2013-2023).
### Initial flow configurations
The configuration of drop impact investigated in the present study mimics the one studied by Murphy _et al._ (2015), using high-speed video. In the experiments, the drop falls in a \(15.2\times 15.2\) cm\({}^{2}\) tank filled with seawater to a depth of 8 cm. The measured horizontal and vertical diameters of the drop just before the impact are \(d_{h}=4.3\) mm and \(d_{v}=3.8\) mm, respectively, which results in an effective drop diameter \(d=(d_{v}d_{h}^{2})^{1/3}=4.1\) mm. The ratio
between the width of the tank \(L\) and the drop diameter \(d\) is therefore \(L/d=38\). The speed just before impact is \(U_{0}=7.2\) m/s, which is \(81\%\) of the drop terminal speed.
In our numerical simulation, the computational domain is reduced to a cube with a side length \(L=16d\), which represents about \(1/14\) of the water tank volume in the experiments, as shown in figure 2(_a_). This has been found to be the best compromise between avoiding any effect of the boundary conditions on the splashing and keeping the dimensions of the domain as small as possible. The free surface is located midway between the bottom and the top, so the depth of the pool is \(H=8d\), which gives enough space to the aerosolized droplets without any interaction with the boundaries. A free outflow boundary condition is imposed on the top of the domain, while the default slip boundary condition (symmetry) is applied to the four side walls and the bottom. A zoomed-in view of the initial flow set-up in the vicinity of the drop is depicted in figure 2(_b_). The initial gap between drop and pool is \(\delta=0.1d\), allowing the observation of air sheet entrainment near the contact line. The same water is used in the drop and in the tank, which is assumed to have almost no effect on the splashing, as the properties of the freshwater drop and of the target seawater in the experiments are very close. The density and viscosity ratios between seawater and air are \(\rho_{w}/\rho_{a}=1018\) and \(\mu_{w}/\mu_{a}=180\), which leads to a system of drop-pool impact with dimensionless numbers \(Re=28800\), \(We=2893\) and \(Fr=1322\).
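As a quick consistency check, the effective diameter and the impact dimensionless numbers can be recomputed from nominal seawater properties; the property values below are assumptions, so small deviations from the quoted numbers are expected.

```python
# Nominal seawater properties at room temperature (assumed values)
rho = 1025.0      # density, kg/m^3
mu = 1.05e-3      # dynamic viscosity, Pa s
sigma = 0.073     # surface tension, N/m
g = 9.81          # gravitational acceleration, m/s^2

d_h, d_v = 4.3e-3, 3.8e-3              # measured drop diameters, m
d = (d_v * d_h**2) ** (1.0 / 3.0)      # effective diameter, ~4.1 mm
U0 = 7.2                               # impact speed, m/s

Re = rho * U0 * d / mu                 # ~2.9e4
We = rho * U0**2 * d / sigma           # ~3.0e3
Fr = U0**2 / (g * d)                   # ~1.3e3

print(f"d = {d*1e3:.2f} mm, Re = {Re:.0f}, We = {We:.0f}, Fr = {Fr:.0f}")
```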
As illustrated in figure 2(_a_), the initial mesh configuration around the drop is generated by the AMR algorithm using the estimated discretization error of the volume fraction (\(f_{Err}=1e-6\)) and velocity (\(u_{Err}=1e-4\)) fields, which ensures a smooth geometric description of the initial drop and lowers the memory requirement of the initialization. The mesh is
Figure 2: Initial numerical configuration of the three-dimensional simulation. (_a_) Overall view of the computational domain and the initial mesh structure at a plane across the centre of the impacting drop (z = 0). (_b_) Closeup view of the initial flows around the impacting drop. (_c_) Mesh refinement strategy at the initial stage (_S_1). A higher maximum level of refinement (\(L_{max}=14\)) is imposed near the neck region to capture the early-time splashing.
coarsened gradually down to the given minimum level of refinement (\(L_{min}\)) away from the drop interface. The region around the free surface of the pool is refined at \(L_{max}=11\) to avoid any divergence issue. Once the simulation starts, the mesh is redistributed adaptively by the AMR algorithm using the volume fraction field with tolerance \(f_{Err}=1e-4\) and the velocity field with tolerance \(u_{Err}=1e-1\) as adaptation criteria. Additionally, we remove the droplets that approach the boundaries of the computational domain, since these tiny droplets have little effect on the evolution of the main impact but are expensive to track. In real experiments, they would eventually evaporate, and their fate is not the focus of this paper. Using the initial contact centre as the reference, any microdroplet whose centroid lies outside a hemispherical region of diameter \(D_{remove}=15d\) is removed from the simulation. The effect of gravity is taken into consideration in this study.
### Minimum spatial resolution
Direct Numerical Simulation (DNS) consists in resolving all scales of the flow, from the smallest relevant ones (ideally here the smallest water droplets ejected in the air or air bubbles entrained in the water) up to the large scales of the problem (here the volume of the water tank that receives the initial impacting drop). Ideally, it means that the grid resolution of the air/water domain should be fine enough to capture all air/water interfaces created at all steps of the splashing process. We have carried out extensive tests to explore the effects of the minimum spatial resolution in comparison with the available experimental data. It was found that the early dynamics of the liquid sheet and of the air entrapment in the neck region significantly affect the subsequent phenomena, such as the formation of the crown and its closure at high \(Re\) and \(We\) numbers. Our preliminary study has shown that a grid resolution of at least 1024 cells per drop diameter is necessary to capture the correct regime of early splashing for the present work (see Appendix A for details). This requires applying a maximum level of refinement of \(L_{max}=14\), which corresponds to an equivalent uniform grid of more than 4.3 trillion \(\left[\left(2^{14}\right)^{3}\right]\) cells. Note that the more the maximum refinement level is increased, the more the time steps are reduced to comply with the CFL (Courant-Friedrichs-Lewy) condition, which means that the calculation CPU time is not proportional to the number of cells. Although the total number of cells is reduced significantly by employing the AMR algorithm, it is still far too expensive to perform a long-term simulation in a full 3D configuration at this level.
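The figures quoted above follow directly from the octree construction, since the cell edge at level \(L\) is the domain side divided by \(2^{L}\); a minimal sketch using the cell sizes quoted in the text:

```python
d = 4.1e-3                                   # effective drop diameter, m
dx = {12: 15.6e-6, 13: 7.8e-6, 14: 3.9e-6}   # cell edge per refinement level, m (from the text)

for level, h in dx.items():
    cells_per_diameter = d / h
    uniform_cells = (2**level) ** 3          # equivalent uniform grid over the whole cube
    print(f"Lmax = {level}: {cells_per_diameter:5.0f} cells per drop diameter, "
          f"{uniform_cells:.1e} cells if the grid were uniform")

# Lmax = 14 gives ~1050 cells per diameter and ~4.4e12 equivalent uniform cells
# (more than 4.3 trillion), which is why adaptive refinement is indispensable.
```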
Therefore, based on the physical characteristics during the impact, we divide the simulation into three consecutive stages, namely the early-time splashing stage at \(t\leq 0.2\) ms (\(S1\)), the crown formation stage at \(0.2<t<4\) ms (\(S2\)) and the bubble canopy stage at \(t\geq 4\) ms (\(S3\)). At stage \(S1\), a very thin liquid sheet emerges in the neck region and interacts strongly with the surfaces of the drop and the pool, producing numerous very fine droplets and bubbles; the primary objective is thus to capture the flow dynamics near the contact region at the finest scales. Figure 2(\(c\)) shows the mesh refinement strategy employed at the initial stage. A higher maximum level \(L_{max}=14\) (3.9 \(\upmu\)m) is used in the vicinity of the free surface to resolve the dynamics of the neck region at smaller scales, while \(L_{max}=13\) (7.8 \(\upmu\)m) is used for the rest of the domain to capture the general dynamics. At stage \(S2\), a coherent liquid sheet has developed above the pool and subsequently grows into the thin-walled crown. During this rapid expansion stage, a great number of secondary droplets are sprayed incessantly from the top of the crown (Murphy _et al._, 2015). The extra layer of refinement with \(L_{max}=14\) is thus removed at this stage and \(L_{max}=13\) is used for the entire computational domain to capture the statistics of droplets and bubbles. At stage \(S3\), the droplets shed from the top of the crown are much fewer and generally larger, compared with the ones from the previous stages. Therefore, we restart the simulation with \(L_{max}=12\) (15.6 \(\upmu\)m) over the whole computational domain,
which allows capturing the main physical dynamics of the crown and cavity until the end of the simulation (t = 48 ms). This mesh refinement strategy ensures that the simulation is feasible in a full three-dimensional configuration and can be accomplished in a "reasonable" time, given the computational resources currently available.
All the numerical results presented in the main text of this paper were obtained on 1024 cores over 33.5 days, consuming more than \(8.21\times 10^{5}\) CPU-hours in total, using the computational resources of Advanced Research Computing (ARC) at Virginia Tech.
## 3 Comparisons with the experiments
### Morphology
Figure 3(\(a\)) shows the global evolution of the air-water interfaces generated by the high-energy drop-pool impact during the 48 ms after contact. Side views of the numerical results (bottom) are compared with the experimental high-speed images (top) at matching times. The time of contact is defined as \(t=0\) ms. At \(t=-1\) ms, the drop is initialised above the surface of the pool. Once the simulation starts, the drop falls downwards and hits the liquid surface, driven by the combination of the initial impact speed and the gravitational force. Directly after impact (\(t=1\) ms), a cylindrical wave around a flat-bottomed disk-like cavity is produced by the violent penetration of the drop, and substantial liquid ligaments emanate almost horizontally from the thickened rim of the crown, spraying a large number of micro-droplets
Figure 3: Qualitative comparison between experiments and simulations. (\(a\)) Overall dynamics of the air-water interface during 48 ms after impact. From left to right, the experiment shows -1, 1, 3, 7, 12, 18, 41 and 52 ms after impact, and the simulation shows -1, 1, 3, 7, 12, 18, 37 and 48 ms after impact. The red stars indicate the tracked positions of the upper rim of the crown. The scale bar is 10 mm long. (\(b\)) Closeup view of the early-time splashing during 450 μs after impact. From left to right, the experiment shows 49, 148, 246, 345 and 443 μs after first contact, and the simulation shows 50, 150, 250, 350 and 450 μs after first contact. The scale bar is 1 mm long. The qualitative comparisons show that the simulation successfully reproduced all the distinctive features observed in the experiments.
into the air. At this stage, most of the drop liquid is concentrated at the bottom of the cavity and the submergence speed of the drop/pool interface can be approximated as half of the initial drop impact velocity (Fedorchenko & Wang 2004). In the following few milliseconds, the drop keeps expanding radially and eventually spreads into a thin layer of liquid that is distributed along the interior surface of the cavity, stretching the cavity into the typical hemispherical shape that has been widely reported in the literature (Engel 1966; Berberovic _et al._ 2009; Bisighini _et al._ 2010; Ray _et al._ 2015), as seen at \(t=3\) ms. By \(t=7\) ms, the upper rim of the crown has started to proceed inwards and the orientation of the ligaments has transitioned from almost horizontal to vertical, which eventually leads to the upcoming closure event of the upper part. At the time when the crown necks in (\(t\approx 12\) ms), flows from the sidewalls of the cylindrical wave meet along the impact axis and a large volume of air is encapsulated, generating a central liquid jet that moves spirally from the merging point (\(t=18\) ms). The downward-moving jet keeps growing and eventually pierces the cavity bottom, as shown at \(t=37\) ms, disturbing the retraction of the cavity bottom. Finally, the rebound of the cavity produces a broad upward jet that merges with the previous central column of fluid, leaving several air bubbles inside the liquid tank, as evidenced at \(t=48\) ms.
Furthermore, a close-up view of the early-time splashing near the contact region during the first \(450\) \(\upmu\)s after impact is provided and compared with the high-speed experimental holograms in figure 3(\(b\)). An immediately ruptured "liquid sheet" appears right after contact (\(t=50\) \(\upmu\)s), known as the "prompt splash" (Deegan _et al._ 2007). In this process, a great number of very fine droplets are scattered in the air, which has been considered one of the primary sources of marine aerosols and may raise potential health issues for the public under certain circumstances (Murphy _et al._ 2015). More liquid is subsequently pushed out from the pool to form the liquid wall of the crown. The morphological behaviours of the early-time splashing captured by the simulation are quite consistent with the experimental observations.
These qualitative comparisons show that our simulation reproduced all the distinctive features observed in the experiments. The correct prediction of the early-time splashing, of the transition of the ligaments' orientation, and of the exact times when the upper rim of the crown necks in and when the central spiral jet pierces the cavity bottom is especially convincing. Overall, good agreement with the experiments is obtained.
### Kinematics of crown and cavity
The kinematic behaviours of the crown and the cavity are manually tracked from the experiments and simulations, and compared quantitatively in figure 4. The impact centre where the drop first contacts the pool is used as the reference point. The time evolution of the upper rim of the crown (marked by red stars in figure 3), namely its radial distance (figure 4_a_) and height (figure 4_b_), is measured during the first \(24\) ms after impact. The height is defined as the distance between the initial quiescent free surface of the pool and the position where ligaments are forming. The trajectory of the upper rim of the crown is then plotted in figure 4(\(c\)). For the first few milliseconds, the crown expands rapidly along the horizontal radial direction and reaches its maximum radius very soon (\(t\approx 3\) ms). After the maximum horizontal position, the rim starts almost immediately to proceed inwards, primarily under the effect of surface tension. Meanwhile, the leading edge of the crown rises continuously in the vertical direction and approaches its maximum height right before it necks in (\(t\approx 12\) ms). After the closure of the upper part, this point vibrates slightly near the impact axis along with the shrinking large toroidal air bubble.
The evolution of the submerged cavity has been thoroughly discussed in experiments, simulations and theories, and substantial attention has been particularly given to the estimation of the geometric dimensions of the cavity in previous studies (Engel 1967; Prosperetti & Oguz 1993; Leng 2001; Berberovic _et al._ 2009; Bisighini _et al._ 2010; Jain
_et al._ 2019). Figure 4(\(d\)) demonstrates the temporal variation of the horizontal radius at the intersection edge between the cavity and the initial free surface. The cavity grows rapidly in the horizontal plane at the beginning, owing to the initial violent impingement, and the outward expansion speed decreases gradually. After the closure of the crown, the increase of the cavity radius slows down and becomes nearly linear. Starting from \(t\approx 3\) ms, the horizontal radius at the bottom of the crown becomes wider than at the top, which may presumably play an important role in the subsequent evolution of the crown, redirecting its
Figure 4: Analysis of the quantitative data measured with respect to the initial impact centre. (\(a\)) Evolution of the crown radius. (\(b\)) Evolution of the crown height. (\(c\)) Trajectory of the upper rim of the crown. (\(d\)) Evolution of the cavity radius. (\(e\)) Evolution of the cavity height. (\(f\)) Evolution of the cavity volume. The black dashed line in (\(e\)) shows the theoretical prediction of the penetration depth using the proposed model by Bisighini _et al._ (2010).
overall momentum towards the central axis and thus propelling more liquid from the receiving pool to the sidewalls of the crater. Figure 4(_e_) shows the time evolution of the maximum depth of the cavity. The cavity keeps expanding in depth and reaches its maximum position 24 ms after impact, which takes almost twice as long as the crown takes to reach its maximum position (\(t\approx 12\) ms). The volume of the cavity, shown in figure 4(_f_), is calculated as half of an ellipsoid, as in Murphy _et al._ (2015).
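The half-ellipsoid approximation used for figure 4(_f_) reduces to a one-line formula combining the horizontal radius of figure 4(_d_) and the depth of figure 4(_e_); the sample values below are purely illustrative.

```python
import math

def cavity_volume(radius, depth):
    """Half of an ellipsoid of revolution with semi-axes (radius, radius, depth)."""
    return 0.5 * (4.0 / 3.0) * math.pi * radius**2 * depth

# Illustrative input (not measured values): a 15 mm radius, 20 mm deep cavity
print(f"V ~ {cavity_volume(15e-3, 20e-3) * 1e6:.1f} mL")
```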
Figure 4 shows that both the trajectory of the upper rim of the crown and the geometric dimensions of the cavity are in very good agreement with the experimental measurements; a very reliable quantitative agreement is thus obtained between simulations and experiments.
### Droplets
Studies of the production of secondary droplets and of their distribution induced by normal (perpendicular) (Okawa _et al._, 2006; Guildenbecher _et al._, 2014; Li _et al._, 2019; Wu _et al._, 2020)
Figure 5: Comparisons of the droplet statistics between numerical and experimental data captured in a specific field of view. (_a_) Overall schematic view of the relative position of the observation window. (_b_) Closeup view of the secondary droplets in the observation window. (_c_) Vertical distribution of secondary droplets. (_d_) Size distribution of secondary droplets. The numerical droplet statistics presented in (_c_) and (_d_) are time-averaged statistics using 9 time slices over the time period \(3\sim 4\) ms. The experimental data are ensemble-averaged using more than 25 replications as originally presented in figure 17 and 18 in Murphy _et al._ (2015).
and oblique (Okawa _et al._, 2008; Liu, 2018) drop impact on a liquid surface can be found in the literature, focusing mostly on the fairly "smooth" splashing in the relatively lower range of \(Re\) and \(We\) numbers. The rather violent splashing behaviours observed under high-energy impact conditions suggest that the creation of these tiny droplets might be associated with more complicated "irregular" interfacial deformations and breakups (Thoraval _et al._, 2012; Murphy _et al._, 2015). Nevertheless, the understanding of the governing mechanisms and of the droplet population remains insufficient, and the available statistical data on droplet production are very limited in the literature.
Therefore, to further validate the present numerical strategies, the statistics of secondary droplets are extracted from the simulations and compared with the experimental data. The specific location of the observation window is shown in figure 5(\(a\)), where a \(10\times 10\) mm\({}^{2}\) square field of view is placed 13 mm above the free surface of the pool. In the experiments, the holographic frames for droplet analysis were selected at the time when the first upward-rising droplet exits the top of the observation window (\(t\approx 3\sim 4\) ms), and the droplet statistics were then ensemble-averaged over all replicates (more than 25), as explained in Murphy _et al._ (2015). For the simulation, the time-averaged droplet statistics are obtained using 9 time slices over the time period \(3\sim 4\) ms. Figure 5(\(c\)) shows the droplet distribution in the vertical direction. Similar to the experimental observations, most of the droplets are still concentrated at lower positions, which can be clearly seen in figure 5(\(b\)). Figure 5(\(d\)) plots the time-averaged droplet diameter distribution. Compared with the bimodal distribution of the experiments, a less apparent second peak can still be found in the numerical results when relatively wider bins are used (double the size of the bins used in Murphy _et al._ (2015)). Two primary plateaux centred around 50 \(\upmu\)m and 225 \(\upmu\)m become more pronounced, which is in good agreement with the experimental data. The successful reproduction of the droplet statistics in a specific observation window is highly encouraging, as it gives confidence for the further comprehensive and in-depth analysis over the broader spatial domain and distinct temporal stages in the following sections.
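The droplet statistics shown in figure 5 are obtained by identifying disconnected liquid regions in the volume-fraction field. The following sketch illustrates the idea on a uniform-grid export of the field; the function names, the 0.5 threshold and the binning are illustrative assumptions, and the actual solver output lives on an octree rather than a uniform grid.

```python
import numpy as np
from scipy import ndimage

def droplet_diameters(vof, dx, threshold=0.5):
    """Equivalent diameters of disconnected liquid blobs in a VOF field.

    vof : 3D array of liquid volume fractions on a uniform grid of spacing dx (m).
    The largest connected region (pool + crown) is discarded so that only
    airborne droplets remain.
    """
    labels, n = ndimage.label(vof > threshold)
    if n == 0:
        return np.array([])
    volumes = ndimage.sum(vof, labels, index=np.arange(1, n + 1)) * dx**3
    volumes = np.delete(volumes, np.argmax(volumes))    # remove the main liquid body
    return (6.0 * volumes / np.pi) ** (1.0 / 3.0)       # equivalent spherical diameter

# Usage (illustrative): histogram the diameters found in the observation window
# diameters = droplet_diameters(vof_field, dx=7.8e-6)
# counts, edges = np.histogram(diameters, bins=np.arange(0.0, 1.0e-3, 5.0e-5))
```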
## 4 Overall dynamics of splashing
### Early-time dynamics
The very early dynamics that occur shortly after contact (\(<100\)\(\upmu\)s) in the process of drop impact onto a liquid surface have been widely discussed during the past thirty years (Weiss & Yarin, 1999; Deegan _et al._, 2007; Liang & Mudawar, 2016; Marcotte _et al._, 2019). Conventionally, for low impact velocities, an air disk is entrapped under the centre of the impacting drop by the lubrication pressure and later breaks up into chains of micro-bubbles (Thoroddsen _et al._, 2012; Tran _et al._, 2013) or contracts into several bubbles along the central line (Thoroddsen _et al._, 2003; Jian _et al._, 2020) due to instability. As the contact line expands outwards radially, a sheet-like liquid ejecta is sent out nearly axisymmetrically from the outer edge of the neck and possibly emits rings of tiny droplets from its rim for sufficiently large Reynolds numbers (Weiss & Yarin, 1999; Thoroddsen, 2002; Josserand & Zaleski, 2003; Howison _et al._, 2005; Deegan _et al._, 2007; Marcotte _et al._, 2019). However, with increased impact velocity (\(Re>7000\)), azimuthal undulations are found to grow at the base of the ejecta and the entrapment of bubble rings takes place near the neck region (Thoraval _et al._, 2013; Li _et al._, 2018), which thereby breaks the axisymmetry of the motions at fine scales. For even higher impact velocities, irregularly distributed splashing and incoherent liquid sheets were experimentally observed by Thoroddsen (2002). The axisymmetric simulations of Thoraval _et al._ (2012) have suggested that the base of the ejecta may become highly unstable under high-energy configurations, which in turn propels the ejecta base to swing and collide with
drop and pool, trapping alternating air volumes on both sides of the ejecta. The existence of the von Karman-type vortical structure was later confirmed experimentally by Castrejon-Pita _et al._ (2012) using shadowgraph imaging and laser-sheet visualizations. Nevertheless, the study of such complex flow structures is still very challenging for both experimental and numerical approaches, as they mainly occur at microscopic length scales within a time scale of several microseconds after contact.
Upon contact, most of the liquid remains unperturbed while the connection of the two liquid masses propagates outwards instantaneously. Theoretically, the spreading law \(R_{n}\sim\sqrt{2\tau}\) can be derived from the truncated sphere approximation based only on geometric considerations (Rioboo _et al._, 2002; Josserand & Zaleski, 2003), but it violates the continuity equation. Following Wagner's theory (Wagner, 1932), the radial motion of the neck for drop impact on a solid surface can be described using the form \(R_{n}\sim C\sqrt{3\tau}\)(Riboux & Gordillo, 2014; Philippi _et al._, 2016; Li _et al._, 2018). The dimensionless time is defined as \(\tau=(t-t_{0})U_{0}/R_{d}\), where \(R_{d}=d/2\). Figure 6(\(a\)) confirms the good match between the simulated neck motion and the analytical prediction. The coefficients \(C=1.22\) and \(t_{0}=-2.22\) are obtained by fitting the numerical measurements. At high impact \(Re\), the initial spreading speed of the neck may reach as much as \(\sim 17\) times the impact velocity (Li _et al._, 2018), which requires time resolutions of the order of \(10^{-7}\sim 10^{-6}\) s to observe the instantaneous motions along the contact line, possibly explaining the limited knowledge in this specific area.
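Because the spreading law becomes linear in \(\tau\) once squared, the fit of the constants can be done with a straight-line regression; the sketch below demonstrates the procedure on synthetic data (the sample values stand in for the measured neck positions and are not the data of figure 6_a_).

```python
import numpy as np

# Synthetic stand-in for the measured dimensionless neck positions (tau, R_n/R_d);
# in practice these pairs are extracted from the simulation, as in figure 6(a).
rng = np.random.default_rng(0)
tau = np.linspace(0.05, 0.6, 20)
Rn = 1.22 * np.sqrt(3.0 * (tau + 0.02)) + rng.normal(0.0, 0.01, tau.size)

# R_n^2 = 3 C^2 (tau - tau0) is linear in tau, so a straight-line fit recovers C and tau0
slope, intercept = np.polyfit(tau, Rn**2, 1)
C = np.sqrt(slope / 3.0)
tau0 = -intercept / slope
print(f"C = {C:.2f}, tau0 = {tau0:.3f}")   # recovers the synthetic values ~1.22 and ~-0.02
```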
Figure 7 shows the simulated early-time dynamics in the neck region between drop and pool from the bottom view. Here we focus on the stage before the outer edge of the neck overtakes the outline of the drop. In the first and last frames, the neck reaches 25% and 89% of the drop size, respectively. The air-water interface is coloured by the volume fraction of the passive tracer (see section 4.5) and the opacity of the interface is set to 0.5, which enables the visualization of the interfacial dynamics both inside and outside the neck.
Immediately after the contact, a central disc of air (black arrow) is entrapped and an outer liquid edge connecting the drop and pool is formed. The contact line then expands rapidly in the radial direction, entrapping bubble rings along the drop/target boundary. In the second
Figure 6: Early-time dynamic behaviours of the neck region. (\(a\)) Evolution of the neck radial position \(R_{n}\). The solid line shows the theoretical estimate using the form \(R_{n}\sim C\sqrt{3(t-t_{0})U_{0}/R_{d}}\), where \(C=1.22\) and \(t_{0}=-2.22\) are obtained by fitting the numerical measurements. (\(b\)) Evolution of the ejecta angle \(\theta\) measured from the vertical central slices as shown in figure 8 (two sides), using the definition sketch proposed by Thoraval _et al._ (2012). The sharp decrease of the ejecta angle indicates the “bumping” event, where the contact point of the neck suddenly changes due to the reconnection between the ejecta and the drop/pool.
Figure 7: Early-time dynamics near the contact region induced by high-speed drop impact onto deep liquid pool observed from the bottom view. The first three frames are shown 4, 6 and 8 \(\upmu\)s after the first contact, where the “nearly axisymmetric” bubble rings are entrapped from the neck of the connection. The black arrow points at the central air disk. The red arrows indicate the formation of a new bubble ring. The last four images show a smaller magnification 10, 15, 32 and 50 \(\upmu\)s after impact. The outer edge is the downward-moving drop, the inner edge is the contact line of the neck and the central irregular disc is the entrapped air sheet. Azimuthal instabilities and liquid ejecta are developed along the neck. The outer line of the neck has not reached the size of the impacting drop here. The scale bar is 500 \(\upmu\)m long.
panel of figure 7, it can be seen that a new ring of air (red arrow) is entrapped at the neck and is shortly pinched off into the bulk within a time scale of \(<0.4\)\(\upmu\)s. By \(t=8\)\(\upmu\)s, up to 10 bubble rings are present within the target volume and most of them are entrapped axisymmetrically as concentric circles. For the next \(\sim 20\)\(\upmu\)s, these intact air rings are stretched longitudinally while rotating, and eventually break up into necklaces of micro-bubbles due to the surface-tension Rayleigh instability (Chandrasekhar, 2013), as shown at \(t=32\)\(\upmu\)s and \(t=50\)\(\upmu\)s, which looks very similar to the experimental observations of figure 7 in Thoroddsen _et al._ (2003).
Note that the neck of connection between drop and pool remains rather smooth at this time (\(t<10\)\(\upmu\)s) and no clear liquid ejecta is observed around it, implying that the fundamental mechanism of bubble ring entrapment here differs from the jet-induced air encapsulation predicted by Weiss & Yarin (1999). Figure 8 shows the air-water interface overlaid by the vorticity field near the neck region on the vertical cross-section through the drop centre. An air void is rolled up along the entire circumference of the neck and later entrapped axisymmetrically inside the liquid phase. This is reminiscent of the entrapment of toroidal bubbles numerically captured by Oguz & Prosperetti (1989) for the collision of two drops, where the basic question "Does the contact line between two approaching surfaces move outwards fast enough to prevent further contact after the initial one?" was discussed. At the moment of contact, a very small liquid bridge of radius \(R_{n}\) with a high-curvature "meniscus" connects the two liquid masses, and the thin outer air sheet retracts rapidly outwards, driven by surface tension and the local pressure gradient. A self-similar repeated reconnection of this air gap was predicted at the very early time of contact, thus enclosing a number of tiny toroidal bubbles in the liquid phase. Focusing on the viscous regime \(R_{e}\ll 1\), with \(R_{e}=\sigma R_{n}/(\rho\nu^{2})\), the analytical analysis of Eggers _et al._ (1999) gives the scaling laws for the radius of the entrapped toroidal bubble, \(r_{b}\propto R_{n}^{3/2}\), and for the width of the thin air gap connecting the bubble, \(r_{g}\propto R_{n}^{2}\). The further investigation of Duchemin _et al._ (2003) for inviscid drop coalescence explained that the occurrence of each pinch-off event (\(i_{th}\)) depends only on the local dynamics of
Figure 8: Flow field and vorticity structure in the vicinity of the neck region on the vertical slice at \(z=0\) (see dashed line at \(t=10\)\(\upmu\)s in figure 7). The red and blue colours represent counterclockwise and clockwise rotation respectively. (_a_) Entrapment of air disk and bubble rings \(4\)\(\upmu\)s after the first contact. The scale bar is \(50\)\(\upmu\)m long. (_b_) Formation of liquid ejecta from the neck at \(t=12\)\(\upmu\)s. Secondary droplets are emitted from its tips. Vortex shedding of the alternate sign from the base of the ejecta generates a Von Kármán-type structure along the drop/pool boundary, with only occasional air bubbles/bubble arcs entrapment from the neck. The scale bar is \(100\)\(\upmu\)m long. (_c_) Collisions between the ejecta and the downward-moving drop, so-called the “bumping event”, which leads to the entrapment of air rings at \(t=73\)\(\upmu\)s. The scale bar is \(500\)\(\upmu\)m long.
the air gap width \(r_{g}^{i}\), where \(r_{g}^{i}=(R_{n}^{i})^{2}\). The distance between the initial tip of the meniscus and the reconnection point, as well as the corresponding time interval, can be estimated as \(r_{c}^{i}=10(R_{n}^{i})^{2}\) and \(t_{c}^{i}=7.6(R_{n}^{i})^{3}\), respectively. The authors emphasised that the reconnection ceases when \(R_{n}>0.05\), thus \(r_{c}^{i}\leq 0.025\) and \(t_{c}^{i}\leq 0.00095\), where the time and space coordinates are scaled by \(\sqrt{\rho R_{d}^{\ 3}/\sigma}\) and \(R_{d}\). The evolution of the bubble rings could not, however, be predicted by the analytical model because of their highly non-circular shapes and their three-dimensional rotation and stretching. Based on this theoretical model, the distances and time intervals between neighbouring bubble rings for the present case should be \(\leq 51.25\)\(\upmu\)m and \(\leq 10.41\)\(\upmu\)s. Measurements from our simulation show that the maximum distance and time interval between bubble rings are around \(\sim 31\)\(\upmu\)m and \(\sim 0.5\)\(\upmu\)s, of the same order as the analytical estimate. It should be noted that the existence of the entrapped central air disk as well as the variation of the drop bottom curvature at the moment of impact may alter the local dynamics, but the phenomena should be qualitatively similar.
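The dimensional bounds quoted above follow from rescaling the dimensionless estimates by the drop radius and the capillary time \(\sqrt{\rho R_{d}^{3}/\sigma}\); a minimal sketch, assuming the same nominal properties as before:

```python
import math

rho, sigma = 1025.0, 0.073      # assumed seawater properties
R_d = 0.5 * 4.1e-3              # drop radius, m

t_capillary = math.sqrt(rho * R_d**3 / sigma)   # time scale used for the rescaling

r_c_max = 0.025 * R_d               # reconnection distance bound, r_c <= 0.025 R_d
t_c_max = 0.00095 * t_capillary     # reconnection time bound, t_c <= 0.00095 sqrt(rho R_d^3 / sigma)

print(f"r_c <= {r_c_max*1e6:.1f} um, t_c <= {t_c_max*1e6:.1f} us")
# ~51 um and ~10 us, consistent with the bounds quoted in the text; the measured ring
# spacing (~31 um) and time interval (~0.5 us) indeed fall below these upper bounds.
```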
Starting from \(t\approx 10\)\(\upmu\)s, an azimuthal instability is observed along the neck. Here the diameter of the outer contact line reaches only 38% of the drop size. Figure 9 shows the closeup view of the azimuthal undulations at the neck of connection between drop and pool at
Figure 9: Irregular azimuthal undulations on the neck region between drop and pool 10 \(\upmu\)s after the first contact. The central irregular plate is the entrapped air disk. The white circle indicates the early-time breakups of the ejecta fingers. The scale bar is 100 \(\upmu\)m long.
\(t=10\)\(\upmu\)s, where the entire periphery of the neck is visible. Irregular undulations are captured on both sides of the neck, namely on the outer liquid ejecta and on the inner air sheet. Liquid fingers are initiated at arbitrary locations on the outer side and some of them break up immediately at their tips (white dashed circle), producing the very first generation of secondary droplets (see also figure 8_b_). At some locations where fingers are not found (or where their wavelength is long), smooth ejecta and air sheets are present. We have measured that the characteristic wavelength and amplitude of the initial fingers are around \(\sim 30\)\(\upmu\)m and \(\sim 21\)\(\upmu\)m, which are generally larger than the thickness of the ejecta base (\(\sim 7\)\(\upmu\)m). As the neck spreads radially, alternate vortices are shed from the base of the ejecta, thus forming a von Karman-type street along the drop/target contact, as shown in figure 8(_b_). Meanwhile, the oscillations of the base of the ejecta pull on the local air sheet at its corner, occasionally entrapping some isolated bubbles/bubble arcs on both/alternating sides of the base (green arrow in figure 7).
After the initial small amplitude oscillations combined with the "complete" disintegration of the ejecta jets, a coherent ejecta sheet eventually emerges (see \(t=32\)\(\upmu\)s in figure 7). This cylindrical ejecta sheet then rises radially along the contact line until it impacts the drop surface, entrapping a larger bubble ring along the neck as evidenced at \(t=50\)\(\upmu\)s in figure 7 and figure 8(_c_). Figure 6(_b_) plots the early-time evolution of the ejecta angle \(\theta\) measured based on the definition sketch proposed in Thoraval _et al._ (2012). It can be found that \(\theta\) increases almost linearly at the initial time and suddenly decreases at the moment of "bumping", which has been also observed by Thoroddsen _et al._ (2011) and Thoraval _et al._ (2012). Once the new neck of connection is established, the fast-moving rim will stretch the ejecta sheet and tear it immediately into multiple liquid "tori", which later break up and produce a large number of similar-sized microdroplets, as illustrated in figure 8(_c_) and discussed later in section 5.
### Formation of bubble canopy
Figure 10 shows the internal flows of the liquid phase at different stages, overlapped by the velocity vector and pressure fields on the cross-section. The grid-based arrows are oriented by the velocity field and their magnitude is represented by colour. Within the first millisecond, the drop has not yet fully spread out and most of the momentum is still concentrated in the
Figure 10: Internal flows of the high-speed drop impact overlapped by velocity and pressure fields. From top to bottom and left to right, the corresponding times are 1, 3, 11, 16, 24, 32, 38 and 48 ms after impact. The velocity magnitudes are scaled by the drop impact speed \(U_{0}\). The scale bar is 10 mm long.
impacting drop, thus generating a broad high-pressure area along the drop/pool interface. Meanwhile, an outward-expanding liquid sheet arises along the contact line as the result of the violent extrusion. As the drop moves downwards, more liquid is therefore pushed away from the pool and transported to the uprising crown. At high impact velocities, thin liquid ligaments are generated along the rim of the crown and break up at their tips by instability, producing moderate- and large-scale secondary droplets (see section 5). As discussed above in section 3.2, the crown reaches its maximum horizontal radial position soon after impact (\(t\approx 3\) ms) and then its rim thickens and bends towards the impact axis under the effect of surface tension, while it simultaneously rises in the vertical direction. Moreover, the outline of the cavity expands rapidly on the surface of the pool and overtakes the crown expansion at around \(t\approx 4\) ms, which also facilitates the generation of an inward-directed momentum at the crown top that encloses the upper part. During the period of crown expansion, the maximum velocity is reached at the top, where the liquid film is thinnest. Right before the closure (\(t\approx 12\) ms), it is visible that the velocity on the crown rim points almost horizontally inwards. Flows from all directions on the liquid wall meet and interact at the instant of closing, generating a sharp pressure rise around the point of closure, which later drives the upward- and downward-moving jets evidenced at \(t=16\) ms. Meanwhile, ligaments above the dome tangle and merge, possibly shedding several large-scale droplets on the top of the dome, as shown at \(t=16\) ms.
It can be found in the literature that large bubble entrapment owing to drop impact onto a liquid pool occurs mainly through two types of mechanisms. Firstly, at low impact energy, a vortex-induced roll jet may be formed and grow into a thick liquid tongue, which later collapses near the pool surface to entrap the air bubble (Pumphrey & Elmore, 1990). The shape of the drop at the time of impact is crucial for such a mechanism and it occurs almost exclusively for prolate-shaped drops (Zou _et al._, 2012; Wang _et al._, 2013; Thoraval _et al._, 2016; Deka _et al._, 2017). Secondly, with sufficiently high impact energy, the thin-walled liquid crown rises higher above the pool due to the violent collision and its rim bends towards the impact axis while rising vertically, enveloping the air bubble above the target pool. The formation of such complex flow structures for high-velocity drop-pool collision has been observed from time to time in experiments during the past century (Worthington, 1908; Engel, 1966; Bisighini _et al._, 2010; Murphy _et al._, 2015; Lherm _et al._, 2021), but is probably resolved in 3D for the first time by the high-resolution numerical simulations of the present work. Interestingly, although driven by different mechanisms, similarities are still observed between them, such as the generation of the central jets, the reconnection of the downward-moving jet as well as the bursting of the final "floating bubble". Compared with the low-energy counterparts, the rather intense interfacial deformation at high impact velocities surely introduces some new phenomena, which could accordingly influence subsequent physical processes such as the generation of underwater noise induced by air bubble entrainment (Prosperetti _et al._, 1989; Prosperetti & Oguz, 1993) and the additional source of airborne droplets caused by liquid film rupture (Resch _et al._, 1986; Afeti & Resch, 1990; Resch & Afeti, 1991). Further investigation and analysis could be initiated in the future to compare and contrast these seemingly similar dynamic patterns.
Figure 11 shows the formation of the central jet from the top of the dome. The jet forms off-centre, as the flows on the upper part of the crown are not perfectly axisymmetric and do not arrive at the merging point simultaneously, which is also consistent with the few available experimental observations in the literature (Bisighini _et al._, 2010; Murphy _et al._, 2015; Lherm _et al._, 2021). High-speed jet ejections out of a liquid interface are commonly observed in many other physical processes, such as bubble bursting (Boulton-Stone & Blake, 1993; Thoroddsen _et al._, 2009; Berny _et al._, 2022), Faraday waves (Hogrefe _et al._, 1998; Zeff _et al._, 2000), and cavity collapse induced by the impact of a solid/liquid object onto a fluid target (Worthington & Cole, 1897, 1900; Ray _et al._, 2015; Gekle & Gordillo, 2010; Jamali
_et al._, 2020; Kim _et al._, 2021). In general, all these jets are ejected as a consequence of a very large axial pressure gradient created at the jet base, which therefore can be further classified based on the way the large overpressure is created and the length scale at which pressure variations take place (Gekle & Gordillo, 2010). Surface tension plays an essential role in the jet formation process.
### Cavity contraction
Now we focus on the contraction of the cavity. For drop impact at low and moderate velocities, the liquid crown collapses and generates capillary waves that move radially towards the cavity bottom along the interior of the crater after it reaches its maximum expansion. The capillary waves then meet concentrically at the bottom of the cavity, forming a classic upward Worthington jet that may break up at its tip (Ray _et al._, 2015). At higher impact velocities, qualitatively different phenomena are observed. As shown in figure 10, during the cavity expansion stage, the direction of the velocity field is outward and upward around the enlarging cavity and the cavity continues to expand even after the closure of the upper part. The cavity depth reaches its maximum value at \(t\approx 24\) ms as the cavity radius extends continuously along its edge (see also figure 4_e_), which is very different from the measurements for an even more energetic case impacting at \(\sim 2U_{0}\) in Engel (1966), where the crown and cavity arrive at their maximum positions at about the same time. At the moment the penetration depth reaches its maximum value, the flow around the cavity bottom should have reached its minimum value of essentially zero, before being redistributed shortly afterwards by inertia, as seen at \(t=24\) ms in figure 10. At approximately the same time (\(t\approx 25\) ms), the downward spiral jet arrives at the cavity bottom and penetrates deeply into the bulk, creating a subcavity that also moves spirally, which may potentially accelerate the re-establishment of the velocity field. By \(t\approx 32\) ms, it can be observed that the velocity field has been completely reversed in the pool and a new circulation has been established. The flow around the cavity in the pool has transitioned from outward-upward expansion to inward-downward contraction. Such a change of the flow direction results in a pressure buildup around the lower part of the cavity, thus pushing the cavity to become shallower. Air bubbles are entrapped in the bulk in this process, as can be seen at \(t=32\), 38 and 48 ms in figure 10 and figure 12(_a_). In the last
Figure 11: Formation of the central spiral jet inside the bubble canopy. (_a_) Dynamics of the liquid jet at \(t=24\) ms. The scale bar is 4 mm long. (_b_) Jet motions observed from the bottom view, showing 18, 23, 24 and 38 ms after impact. The scale bar is 1 mm long. The interface is contoured by velocity field.
frame of figure 10, an upward jet eventually rises from the cavity floor and merges with the previous downward jet.
Figure 12(_a_) shows the successive shapes of the entrapped large bubble and (_b_) plots the temporal motion of the bubble centroid in the vertical direction, showing the shallowing steps of the cavity. The bubble expands to a maximum position and then contracts from its bottom, eventually generating a toroidal air bubble floating at the top of the pool.
As for the final collapsing stage, the central column thickens and merges with the outer bubble wall, creating a horseshoe-shaped bubble that later transforms into a hemisphere. Over the following long period of time (more than 300 ms), this toroidal bubble is stretched and thinned under the action of surface tension and eventually ruptures due to instabilities. Subsequently, the film cap recedes along the periphery of the "ruptured hole" from one side, scattering fine-scale liquid droplets from the "receding rim" into the air, as shown in the experiments by Murphy _et al._ (2015) (figure 3). The production of such tiny droplets from receding films has also been studied by Lhuissier & Villermaux (2012) and Dasouqi _et al._ (2021).
### Energy budget
As reviewed by Veron (2015), one of the primary motivations for studying liquid droplet ejection is to estimate the exchange of momentum, heat and, eventually, mass through the gas-liquid interface induced by interfacial processes such as large-scale breaking waves (Mostert _et al._, 2022), small-scale bubble bursting (Berny _et al._, 2022) and drop impact (Liang & Mudawar, 2016). In the present work, the kinetic energy \(E_{k}\), gravitational potential energy \(E_{g}\) and surface potential energy \(E_{s}\) of the liquid pool are calculated as follows:
\[E_{k}=\frac{1}{2}\int_{V_{p}}\rho\|\boldsymbol{U}\|^{2}\,dV_{p} \tag{4}\]
\[E_{g}=\int_{V_{p}}\rho gy\,dV_{p}-E_{g0} \tag{5}\]
Figure 12: Kinematic behaviours of the entrapped large bubble. (_a_) Successive positions of the vertical slices for the entrapped large bubble. The time interval between each curve is 4 ms. (_b_) Time evolution of the vertical centroid position (left) and the vertical speed (right) of the entrapped large bubble. The bubble sinks at the expansion stage and then starts to shallow from its bottom due to the concentric axial pressure, which eventually leads to a floating air bubble above the pool surface.
\[E_{s}=\int_{S_{p}}\sigma\,dS_{p}-E_{s0} \tag{6}\]
The integrals are computed over the volume (\(V_{p}\)) and surface (\(S_{p}\)) of the largest liquid continuum in the domain, which means that the small droplets detached from the pool and the gas phase are not included here; \(y\) denotes the vertical position of a fluid element. The time when the first contact occurs between drop and pool is defined as \(t=0\) ms. \(E_{g0}\) and \(E_{s0}\) are the gravitational and surface potential energies of the liquid phase at \(t=0\) ms. In addition, the kinetic, gravitational and surface potential energies of the ejected droplets are integrated over each droplet (\(V_{p}^{i}\) and \(S_{p}^{i}\), respectively) using the same definitions as in equations 4, 5 and 6, and the sum of the energy carried by the small droplets is denoted \(E_{d}\). The total energy of the liquid phase is therefore calculated as \(E_{T}=E_{k}+E_{g}+E_{s}+E_{d}\). \(E_{0}\) is the initial kinetic energy introduced by the impacting drop.
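For reference, the normalising energy \(E_{0}\) and discrete counterparts of equations 4-6 are straightforward to evaluate; the sketch below assumes the liquid cells of the largest continuum are available as flat arrays (the function name and arguments are illustrative, not the solver's API).

```python
import numpy as np

# Nominal properties and impact parameters (assumed values)
rho, sigma, g = 1025.0, 0.073, 9.81
d, U0 = 4.1e-3, 7.2

# Initial kinetic energy of the impacting drop: E0 = 1/2 * rho * (pi/6) d^3 * U0^2
E0 = 0.5 * rho * (np.pi / 6.0) * d**3 * U0**2
print(f"E0 ~ {E0*1e3:.2f} mJ")   # ~1 mJ

def pool_energies(cell_volume, speed, y, interface_area, y0=0.0):
    """Discrete counterparts of equations 4-6 for the largest liquid continuum.

    cell_volume : liquid volume per cell (m^3); speed : velocity magnitude per cell (m/s);
    y : vertical position of each cell (m); interface_area : total air/water area (m^2).
    The reference values at t = 0 (E_g0, E_s0) are subtracted by the caller.
    """
    E_k = 0.5 * rho * np.sum(cell_volume * speed**2)
    E_g = rho * g * np.sum(cell_volume * (y - y0))
    E_s = sigma * interface_area
    return E_k, E_g, E_s
```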
Figure 13: Energy budget for the process of high-speed drop impact onto a deep liquid pool. (_a_) Temporal evolution of the energy budget in the liquid phase. The energies are normalised by the initial kinetic energy of the impacting drop. (_b_) Temporal evolution of the momentum components in the pool (largest liquid continuum), normalised by the initial impact momentum. From left to right, the vertical dotted lines indicate the time of the closure of the upper rim of the crown (\(t\approx 12\) ms), of the connection of the downward spiral jet to the cavity bottom (\(t\approx 24\) ms) and of the formation of the central upward jet from the cavity floor (\(t\approx 40\) ms).
Figure 13(_a_) plots the temporal evolution of the energy budget throughout the simulation. The calculated energies in the liquid phase are normalized by the initial kinetic energy of the impacting drop \(E_{0}\). Initially, at the time the drop hits the pool, the system's energy is mainly composed of the kinetic energy of the impacting drop. As the drop moves downwards, a sharp decrease of the kinetic energy is observed at the very early time of impact, which can be associated with the immediate "prompt splash" of liquid fragmentation (\(E_{d}\)) and with the energy dissipation due to the strong vortical entanglement near the neck region. The gravitational potential energy remains insignificant at this stage since the splashing occurs mostly near the pool surface. Subsequently, a period of rapid expansion of the crown and cavity takes place (see also figures 3 and 10), which is reflected by the almost linear decrease of the kinetic energy and the noticeable increase of the gravitational and surface potential energies. The droplet energy \(E_{d}\) reaches its maximum value at approximately the same time as the droplet count reaches its peak during the sustained droplet shedding stage (see also section 5). Afterwards, the number of droplets decreases continuously and \(E_{d}\) becomes insignificant. Here, the maximum total droplet energy accounts for about 8% of the initial kinetic energy, which is quite close to the theoretical estimate by Engel (1967) that around 5% of the initial impact energy is carried away by secondary droplets. At the instant of closing (around \(t\approx 12\) ms), more than 60% of the initial energy is still present in the liquid phase. Once the crown necks in, changes of slope are observed in both \(E_{k}\) and \(E_{s}\): the decrease of \(E_{k}\) slows down gradually and \(E_{s}\) reaches a plateau. \(E_{g}\) reaches its maximum value at approximately the same time as the cavity expands to its maximum position (\(t\approx 24\) ms) and decreases thereafter due to the reversed flow around the bubble canopy (see figure 10). Meanwhile, the kinetic energy in the pool remains almost constant during the initial retracting stage (\(t\approx 24\sim 40\) ms) and increases slightly later due to the protrusion of the broad upward jet from the cavity floor. The total energy \(E_{T}\) decreases monotonically throughout the entire process and nearly half of the initial energy is eventually converted to other forms, including viscous dissipation, airflow/bubble energy as well as the energy carried by the removed droplets.
To better understand the kinematic behaviours of the liquid bulk, we also extract the momentum components of the largest liquid continuum (except small droplets) as shown in figure 13(_b_). The total vertical momentum \(P_{y}\) can be used to describe the overall vertical motion of the liquid bulk and the total momentum on the right half of the bulk along the \(x\)-axis \(P_{xx}\) can be used to describe its horizontal expansion, which are both normalised by the initial impact momentum of the drop \(P_{0}\). For \(t<25\) ms, a general outward-upward motion of the liquid bulk is shown in the graph, where the maximum horizontal and vertical expansion speeds are found at \(t\approx 6\) ms and \(t\approx 8\) ms respectively. Starting from \(t\approx 25\) ms, the bulk sinks downwards and the liquid starts to flow concentrically towards the impact axis, squeezing the large air bubble to float upwards as demonstrated in figure 12(_b_). A turning point of \(P_{y}\) is found at \(t\approx 40\) ms, which indicates the onset of the broad upward jet from the cavity floor.
### Transportation of drop liquid
In the simulation, a passive tracer field \(f_{p}\) is initially added to the impacting drop in order to track the drop liquid. Experimentally, this can be achieved by adding colours/dye to the liquids (Engel, 1966; Thoroddsen, 2002; Bisighini & Cossali, 2011). The tracer field is then advected by the following equation:
\[\frac{\partial f_{p}}{\partial t}+\mathbf{U_{f}}\mathbf{\cdot}\nabla f_{p}=0 \tag{10}\]
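For illustration, the snippet below advances such a tracer field by one explicit step of the advection equation using a first-order upwind scheme on a uniform 2-D grid. This is only a conceptual sketch: the actual solver transports the tracer with its own (VOF-consistent) advection scheme, and the periodic boundaries implied by `np.roll` are an assumption of this example.

```python
import numpy as np

def advect_tracer(fp, u, v, dx, dt):
    """One first-order upwind step of  d(fp)/dt + U . grad(fp) = 0  in 2-D.

    fp   : passive tracer field (1 inside the drop liquid, 0 elsewhere)
    u, v : cell-centred velocity components [m/s]
    dx   : grid spacing [m]
    dt   : time step [s]; must satisfy a CFL condition, e.g. max|U|*dt/dx < 1
    """
    # One-sided differences chosen according to the local flow direction
    dfdx = np.where(u > 0,
                    (fp - np.roll(fp, 1, axis=0)) / dx,
                    (np.roll(fp, -1, axis=0) - fp) / dx)
    dfdy = np.where(v > 0,
                    (fp - np.roll(fp, 1, axis=1)) / dx,
                    (np.roll(fp, -1, axis=1) - fp) / dx)
    return fp - dt * (u * dfdx + v * dfdy)
```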
Figure 14 shows the distribution of drop liquid at different stages of impact. Shortly after
contact, the main part of the drop sits on the "pool cavity" and the radially spreading thin film extends from its edges (\(t=0.5\) ms). For the next few milliseconds, the drop deforms and expands rapidly into a thin liquid layer coated along the interior surface of the pool. Meanwhile, liquid threads are emitted from the rim of the drop liquid and then break up into small droplets at their tips (\(t=3\) ms), which is comparable to the azimuthal destabilization of the retracting flattened drop edge at relatively low impact energy reported in Lhuissier _et al._ (2013). By the time of closure, it can be observed that these "drop threads" meet at the closure point and are later transported backwards to the bulk by the downward-moving jet. At the receding stage, the thin drop film starts to propagate towards the cavity bottom due to the surface-tension capillary waves and is later transported deep into the pool when the central spiral jet impinges on the cavity bottom, as seen at \(t=32\) ms and \(48\) ms in figure 14.
Unlike the gas/liquid interface, which can be recorded directly by the camera, the "virtual" drop/pool interface is usually invisible if the fluids in the drop and the receiving pool are the same, but estimating the kinematic behaviour of this layer is vital for theoretical studies of crater evolution (Bisighini & Cossali, 2011). As indicated in figure 15(\(a\)), the boundaries between different components can be easily differentiated here by the isosurface \(f_{p}=0.5\). Figure 15(\(b\)) plots the temporal variation of the positions of the upper point \(T_{a}\), the lower point \(T_{b}\) as well as the thickness of the drop tracer \(T_{\delta}\) along the impact axis. As expected, the upper point \(T_{a}\), adjoining the air, moves rapidly, while the penetration speed of the lower point \(T_{b}\) is greatly decelerated by the reacting flows from the target liquid, thus causing the decrease in drop thickness. It can be seen that the drop deforms significantly and its thickness decreases noticeably during an initial dimensionless time period \(tU_{0}/d\leq 2\). For the later stages, \(T_{\delta}\) reaches a plateau and changes little throughout
Figure 14: Transportation of the passive tracer for drop liquid. From left to right and top to bottom, the corresponding times are 0.5, 3, 14, 20, 32 and 48 ms after impact. The scale bar is 10 mm long. Azimuthal destabilization is captured at the edge of the drop liquid, which therefore produces secondary droplets from its tips. At the later stage of impact, the thin drop film is penetrated and mixed by the downward-moving jet, producing the drop liquid plume deep inside the target pool.
the expansion stage. Slight fluctuations of \(T_{\delta}\) can be anticipated while the drop recedes into the cavity bottom at the contraction stage.
Following previous investigations in the literature, a critical dimensionless time \(tU_{0}/d\approx 2\) is usually used to subdivide the evolution of drop impact into two phases (Fedorchenko & Wang 2004). During the first phase (\(tU_{0}/d\leq 2\)), the drop deforms and extends above the bulk cavity, and the air/drop and drop/pool interfaces are clear-cut. The penetration speed of the drop/target interface during this time can be approximated as half of the impact speed, \(U_{p}\approx U_{0}/2\), which is well known from penetration mechanics (Birkhoff _et al._ 1948; Yarin _et al._ 1995) and has previously been applied to the analytical study of drop impact (Fedorchenko & Wang 2004; Berberovic _et al._ 2009). At times \(tU_{0}/d>2\), the flow effects in the thin drop layer are negligibly small and the cavity expansion can thus be approximated by the shape of the drop/pool interface. Berberovic _et al._ (2009) developed a theoretical approach to estimate the penetration depth of drop impact \(T_{d}\) based only on the linear momentum balance of the liquid around the cavity and gave an asymptotic solution \(T_{d}=2^{-4/5}(5t-6)^{2/5}\) for \(tU_{0}/d>2\) at high \(Fr\), \(We\) and \(Re\) numbers. The predicted results using the above asymptotic formula are shown in figure 15(\(b\)) (dashed line) and a fairly good agreement is found with the simulated results. Since the effects of surface tension, viscosity and gravity are neglected in this theory, it cannot accurately predict the later stages of impact, where the deformation of the shape of the cavity becomes significant due to gravity and capillary waves. Bisighini _et al._ (2010) proposed a theoretical model based on potential flow theory that accounts for the effects of inertia, gravity, viscosity and surface tension at sufficiently high Reynolds and Weber numbers, using the combination of a sphere expanding and translating along the impact axis. A system of ordinary differential equations is obtained and solved numerically subject to appropriate initial conditions:
\[\ddot{\alpha}=-\frac{3}{2}\frac{\dot{\alpha}^{2}}{\alpha}-\frac{2}{\alpha^{2}We}-\frac{1}{Fr}\frac{\zeta}{\alpha}+\frac{7}{4}\frac{\dot{\zeta}^{2}}{\alpha}-\frac{4\dot{\alpha}}{\alpha^{2}Re} \tag{11}\]
Figure 15: (\(a\)) Sketch of the drop penetration. Boundaries between different fluid components are differentiated by the isosurface \(f_{p}=0.5\). The positions of the upper point \(T_{a}\), lower point \(T_{b}\) and the thickness of the drop tracer \(T_{\delta}\) along the vertical axis of symmetry are tracked. (\(b\)) Time variations of \(T_{a}\), \(T_{b}\) and \(T_{\delta}\) along the axial direction. The dashed line shows the asymptotic solution proposed by Berberović _et al._ (2009). The solid line shows the theoretical estimation of the penetration depth proposed by Bisighini _et al._ (2010). The dimensionless time \(tU_{0}/d=2\) is indicated by the vertical dotted line.
\[\ddot{\zeta}=-3\frac{\dot{\alpha}\dot{\zeta}}{\alpha}-\frac{9}{2}\frac{\dot{\zeta}^{2}}{\alpha}-\frac{2}{Fr}-\frac{12\dot{\zeta}}{\alpha^{2}Re} \tag{12}\]
where \(\alpha\) and \(\zeta\) denote the dimensionless crater radius and the axial coordinate of the centre of the sphere, and the dimensionless penetration depth is expressed as \(\alpha+\zeta\). As explained by Bisighini _et al._ (2010), the initial conditions can be obtained from the initial phase (\(tU_{0}/d\leq 2\)) using the forms: \(\dot{\alpha}\approx 0.17\), \(\alpha\approx\alpha_{0}+0.17\tau\), \(\dot{\zeta}\approx 0.27\), \(\zeta\approx-\alpha_{0}+0.27\tau\), and the dimensionless width of the cavity can be estimated using the geometrical condition \(W=2\sqrt{\alpha^{2}-\zeta^{2}}\approx 2\sqrt{(\alpha_{0}+0.17\tau)^{2}-(0.27\tau-\alpha_{0})^{2}}\). By fitting the simulated bulk cavity width using the least-mean-square method, the constant is obtained as \(\alpha_{0}=0.79\) in the present case; thus the initial conditions for equations 11 and 12 are \(\alpha(2)=1.13\), \(\dot{\alpha}(2)=0.17\), \(\zeta(2)=-0.25\) and \(\dot{\zeta}(2)=0.27\). As shown by the solid line in figure 15(\(b\)), the temporal variation of the predicted depth of the drop/pool interface agrees very well with our numerical results during the expansion stage of impact. As for the retraction stage, discrepancies between the theoretical prediction and the simulation/experiment become more pronounced, as demonstrated in figure 4(\(e\)); this can be explained by the fact that the shape of the cavity no longer follows a spherical expansion owing to the influence of the central spiral jet and the propagation of capillary waves, and the model is therefore not valid for the retraction phase.
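As a quick way to reproduce the solid line in figure 15(\(b\)), the crater model of equations 11 and 12 can be integrated numerically with an off-the-shelf ODE solver. The sketch below uses SciPy; the values of \(Re\), \(We\) and \(Fr\) are placeholders (assumptions for illustration only) and should be replaced by the dimensionless numbers of the actual impact case, while the initial conditions at \(\tau=2\) are those quoted above.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Placeholder dimensionless numbers -- replace with the values of the simulated case
Re, We, Fr = 3.0e4, 3.0e3, 1.3e3

def crater_model(tau, y):
    """Right-hand side of equations 11 and 12; y = [alpha, dalpha, zeta, dzeta]."""
    a, da, z, dz = y
    dda = (-1.5 * da**2 / a - 2.0 / (a**2 * We) - z / (a * Fr)
           + 1.75 * dz**2 / a - 4.0 * da / (a**2 * Re))
    ddz = (-3.0 * da * dz / a - 4.5 * dz**2 / a - 2.0 / Fr
           - 12.0 * dz / (a**2 * Re))
    return [da, dda, dz, ddz]

# Initial conditions at tau = t*U0/d = 2 (with the fitted constant alpha_0 = 0.79)
y0 = [1.13, 0.17, -0.25, 0.27]
sol = solve_ivp(crater_model, (2.0, 60.0), y0, dense_output=True, rtol=1e-8)

tau = np.linspace(2.0, 60.0, 300)
alpha, zeta = sol.sol(tau)[0], sol.sol(tau)[2]
depth_model = alpha + zeta                                  # Bisighini et al. (2010)
depth_asymptotic = 2.0**(-0.8) * (5.0 * tau - 6.0)**0.4     # Berberovic et al. (2009)
```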
## 5 Airborne Droplets
### Source of secondary droplet
According to the investigation of Deegan _et al._ (2007) in the parameter range \(Re\leq 5000\) and \(We\leq 1400\), at least three sources of secondary droplets can be identified during the impact: (i) prompt instability of the ejecta sheet occurring immediately after contact, which produces very small droplets, (ii) rim instability of the ejecta sheet that produces medium-sized droplets and (iii) rim instability of the crown that produces large droplets from liquid jets. These different mechanisms are typically interdependent and the earlier ones influence the later ones, which therefore further complicates the characterization of these tiny airborne droplets.
Figure 16 qualitatively illustrates some primary mechanisms of droplet production during the impact captured by the simulation. The left panel shows the stages and locations where tiny droplets are generated and the right panel demonstrates the mesh structures on the overlaid section. As shown in figure 16(\(a\)), a great number of microdroplets are generated near the neck region immediately after the first contact due to the very early breakups of the ejecta film, the so-called "prompt splash" (Deegan _et al._, 2007; Marcotte _et al._, 2019). We have estimated that around 85% of the droplets produced at this time step have an equivalent mean diameter of more than double the size of the smallest cell and 70% of them are in the size range of \(10\sim 30\) um, suggesting that the statistical analysis of the initial smallest droplets should be treated carefully. After the intricate early splashing, a "smooth" liquid sheet rises around the contact line to form the cylindrical crown, emitting thin liquid ligaments from its rim at fairly regular intervals. A sustained droplet ejection, the so-called "crown splash", is therefore observed. The size of droplets produced at this stage is greatly influenced by the thickness of the ligaments and increases with time. Figure 16(\(b\)) shows the fragmentation at the tips of the ligaments at \(t=650\) us. From the right panel, it can be clearly seen that the droplets produced at this time instant are considerably larger than the minimum cell size. Figure 16(\(c\)) shows the production of smaller droplets from the "secondary impact" caused by the previous generation. When the first-born droplets fall back and impinge onto the pool, they may produce further splashing or breakups, which are generally only partially resolved. The radii of
Figure 16: Different mechanisms of droplet production at different stages of impact. The images are shown under different magnifications. (\(a\)) \(t=30\) μs, the “prompt splash” that occurs at the very early time of the impact near the neck region due to irregular rupture and breakup of the ejecta film. (\(b\)) \(t=650\) μs, the sustained “crown splash” due to breakup of the thin liquid ligaments on the top of the crown rim. (\(c\)) Partially resolved tiny droplets near the pool surface produced by secondary impact/bubble bursting. The left panel shows the locations of splashing and the right panel demonstrates the mesh structures nearby.
these smallest droplets are represented approximately by the smallest size of the mesh (right panel). Besides the above sources, very small child-drops can also be ejected from bubble bursting; as in figure 16(\(c\)), these are usually not fully resolved. A few larger droplets are also observed later from the downward-moving central jet after the closure of the upper crown.
### Droplet statistics
We now discuss the statistics of droplets. Here we only analyze the droplet information captured with the higher resolutions (\(L_{max}\)=13 and 14) for the first 4 ms after impact. The time variation of the total number of droplets is plotted in figure 17(\(a\)). It should be noted that a short-time disturbance is present on the curve during \(t\approx 0.21\sim 0.27\) ms, which might be explained by the loss of the smallest droplets resolved at \(L_{max}\)=14 within the extra refinement layer near the surface of the pool (see section 2.3), as it happens at approximately the same time as many droplets reach this height and the extra refinement layer is removed. Neglecting these artefacts at the early time, a unimodal curve of droplet production is found. Shortly after the first contact, a large number of very fine droplets are scattered from the ruptured liquid ejecta (corresponding to figure 16\(a\)), which is reflected in the sharp initial increase of the droplet count in figure 17(\(a\)). Subsequently, larger droplets are emitted almost continuously from the tips of the ligaments. The maximum droplet count is reached 650 \(\upmu\)s after impact with a population of around 4340 droplets, and the dynamics around this time are qualitatively shown in figure 16(\(b\)). As the crown grows, ligaments merge along the rim and become shorter and thicker, producing larger droplets until the closure of the upper part. A change of the decreasing slope can be found at \(t\approx 1.7\) ms in figure 17(\(a\)), which indicates when many droplets start to exit the field of view and are removed from the computational domain.
Figure 17(\(b\)) plots the time evolution of the mass ratio of the total secondary droplets (\(M_{d}\)) to the impacting drop (\(M_{0}\)). An overall increasing trend of the mass transfer from pool to air is found. The number of droplets starts to decrease after the peak (\(t\approx 650\)\(\upmu\)s) but the total mass of droplets keeps increasing, which also reveals that the droplets produced at the later "crown splash" stage are of much larger scales than those of the early splashing.
Figure 18(\(a\)) plots the contour map of droplet size distribution per bin size \(\Delta r\) at various times. The equivalent mean diameter of each droplet \(S_{d}\) is calculated as that of a sphere of equal volume, using the
Figure 17: (\(a\)) Temporal evolution of the total number of secondary droplets. The vertical dotted line indicates the maximum droplet count at \(t\approx 650\)\(\upmu\)s. (\(b\)) Temporal evolution of the total mass of secondary droplets. The time point when the impacting drop touches the pool is chosen as the reference time. The total mass of secondary droplets \(M_{d}\) is scaled by the mass of the impacting drop \(M_{0}\).
integrated volume fraction of the liquid phase. Initially, a lot of relatively small droplets, with \(S_{d}\) ranging around \(8\sim 25\) um, are generated immediately upon impact due to the mechanism of "prompt splash" (see figure 16\(a\)). In the experiments, a ring of fine spray containing mostly \(6\sim 19\) um droplets was also recorded at the instant of impact, which agrees well with our numerical results, but the existence of smaller droplets could not be determined owing to the limited camera resolution. In the simulation, a sharp increase of much smaller droplets (\(S_{d}\leq 8\) um) is captured at the initial splashing stage (\(t<200\) us), which can be associated with the partially resolved very small droplets from the rupture and breakup of the early ejecta film and the "secondary impact". It is worth noting that the minimum cell scales are \(\Delta\approx 3.9\) um at stage \(S1\) and \(\Delta\approx 7.8\) um at stage \(S2\) in the present simulation, as introduced in section 2.3, and we assume that the geometries cannot be represented for those droplets whose radii are smaller than the minimum cell size (red dashed line in figure 18\(a\)), which could influence the breakup physics at smaller scales. However, grid convergence of the numerical atomization remains an open question, as discussed by Herrmann (2013) and Pairetti _et al._ (2020), although the latter authors suggested that 8 cells per drop diameter would probably be enough to preserve most of the physical behaviour. Referring to the recent works of Wang _et al._ (2016) and Mostert _et al._ (2022) concerning droplet spray in the action of breaking waves, droplets with radii greater than \(2\Delta\) are able to maintain a spherical shape and are approximately grid-converged with proper time-averaging procedures, suggesting that at least \(4\sim 8\) points per droplet diameter are essential to obtain numerical convergence.
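The equivalent diameters underlying these statistics can be extracted from the VOF field by labelling disconnected liquid regions and integrating the volume fraction over each one. The sketch below is a simplified NumPy/SciPy illustration of that bookkeeping (the production post-processing may differ, and the largest connected region, i.e. the pool/crown, must be excluded before histogramming):

```python
import numpy as np
from scipy import ndimage

def droplet_diameters(f, dx, threshold=0.5):
    """Equivalent spherical diameters of disconnected liquid regions.

    f   : 3-D liquid volume fraction field
    dx  : (uniform) cell size [m]
    """
    labels, n = ndimage.label(f > threshold)                   # connected liquid regions
    vol = ndimage.sum(f, labels, index=np.arange(1, n + 1)) * dx**3
    return (6.0 * vol / np.pi) ** (1.0 / 3.0)                  # diameters [m]

# Example usage: drop the largest region (the pool/crown), then bin the rest
# diam = droplet_diameters(f, dx)
# diam = np.delete(diam, np.argmax(diam))
# counts, edges = np.histogram(diam, bins=np.arange(0.0, 500e-6, 10e-6))
```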
After the initial splashing, an "intact" liquid sheet rises cylindrically from the contact line and disseminates secondary droplets from the thin liquid ligaments, as shown in figure 16\((b)\). The droplet size increases in time as the ligaments gradually thicken during this period. In figure 18\((a)\), three main size concentrations become pronounced 2 ms after impact, which can be briefly classified as the small-sized class \(S_{d}\leq 16\) um, the medium-sized class \(16<S_{d}<50\) um and the large-sized class \(S_{d}\geq 100\) um. The first concentration range is associated with the partially resolved smallest droplets and their count decreases noticeably
Figure 18: \((a)\) Temporal contour of droplet size distribution during \(0\sim 4\) ms of impact. The horizontal dashed lines (red) indicate the length scale of double the minimum cell size, \(\sim 2\Delta\). \((b)\) Droplet size distribution at different time slices in \((a)\). Shown are \(t=30\) μs and \(t=150\) μs, when the liquid ejecta ruptures and produces very tiny droplets, and \(t=650\) μs, when the droplet count reaches its maximum. Large droplets (\(S_{d}>100\) μm) are only generated from the ligament breakups at the “crown splash” stage. The time-averaged droplet size distribution is calculated using 9 time slices from the time window \(t=3\sim 4\) ms. A bimodal distribution of droplet size is found.
in time, indicating that they are mainly generated at the very beginning of the impact and that many of them fall back to the bulk very quickly after birth. The second concentration range lies in \(16\sim 50\) um (moderate size); these are the droplets produced most abundantly throughout the entire simulation. The last concentration range is composed of large droplets (larger than \(100\) um), increasing in size and narrowing in time, which follows the experimental observation (Murphy _et al._, 2015) that fewer but larger droplets detach from the gradually thickening ligaments with time.
Instantaneous time slices of the droplet size distribution are provided in figure 18(\(b\)). At \(t\approx 150\) us, the "bumping" events between the ejecta and the drop/pool have just stopped and the primary mechanism of droplet production starts to transition from "prompt splash" to "crown splash". Comparing the size distributions at \(t=150\) us and \(t=650\) us, very similar profiles can be found in the group of moderate droplets (\(16\sim 50\) um), which confirms that the medium-sized concentration in figure 18(\(a\)) is composed primarily of the droplets from mechanism (ii) summarized in Deegan _et al._ (2007): early rim instability of the very thin ejecta sheet. The discrepancy in small-sized droplets between these two time steps is probably caused by the change of mesh resolution, since the smallest cell size at \(t>200\) us has been doubled. Large-sized droplets of \(S_{d}>100\) um appear at \(t>150\) us, suggesting that large-scale droplets are only produced by the fragmentation of thin ligaments at the "crown splash" stage.
As the ligaments merge and thicken, the newly detached droplets increase gradually in size, and the richest droplet statistics occur around \(3\sim 4\) ms after impact (Murphy _et al._, 2015). The time-averaged droplet size distribution over the time window \(t=3\sim 4\) ms is therefore calculated. As shown in figure 18(\(b\)), a bimodal distribution of droplet size can be observed from the histogram, corroborating the separated size concentration ranges in figure 18(\(a\)). A bimodal size distribution of liquid sheet fragmentation has also been reported by other studies (Afeti & Resch, 1990; Villermaux & Bossa, 2009, 2011; Murphy _et al._, 2015). Two primary distribution ranges can be briefly distinguished. The first one is composed of small-sized and medium-sized droplets peaking around \(30\) um. The second primary range corresponds to the large-scale droplets ranging from \(100\) um to \(500\) um.
Lastly, we would like to focus on those droplets that tend to fall back to the liquid bulk, which is vital for the estimation of mass transfer through the air-sea interface. In practice,
Figure 19: Statistics of droplets that tend to re-merge with the liquid bulk. (\(a\)) Temporal contour of the droplet size distribution for the “re-merging” droplets. (\(b\)) Evolution of the “re-merging” droplet count with time.
those droplets that move along the impact direction (downwards) and whose centroids are located less than 100 \(\upmu\)m from the free surface of the liquid pool are selected and assumed to be the ones that will re-join the pool. Figure 19(_a_) shows the temporal contour of the size distribution for the "re-merging" droplets and (_b_) shows their count over time. It can be found that most of the initially produced small-sized droplets from the "prompt splash" vibrate near the free surface and tend to re-join the bulk shortly after generation. At the beginning of the "crown splash", very thin ligaments stretch out from the rims of the downward-bending ejecta sheet and send out tiny droplets towards the target liquid, as demonstrated in figure 3(_b_), which thereby contribute to a second primary peak of the medium-sized droplets in figure 19(_a_,_b_). As the impact advances, the direction of the liquid ligaments transitions quickly from horizontal-outward to vertical-upward and the droplets are accordingly pinched off upwards at larger angles. The number of "re-merging" droplets becomes insignificant 1 ms after impact. Finally, it is worth noting that the presence of wind may significantly advect the smallest droplets and delay or prevent them from rejoining the bulk.
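The selection criterion described above translates directly into a simple filter over the per-droplet data. The sketch below assumes arrays of droplet centroid heights and vertical velocities (the field names are illustrative, not those of the actual post-processing scripts):

```python
import numpy as np

def remerging_mask(y_centroid, v_y, pool_surface_y=0.0, height_tol=100e-6):
    """Flag droplets likely to re-join the pool: moving downwards and whose
    centroids lie within `height_tol` of the undisturbed free surface.

    y_centroid : (N,) centroid heights of the droplets [m]
    v_y        : (N,) vertical velocities of the droplets [m/s]
    """
    y = np.asarray(y_centroid)
    vy = np.asarray(v_y)
    return (vy < 0.0) & (y - pool_surface_y < height_tol)

# remerging_count = np.count_nonzero(remerging_mask(y, vy))
```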
## 6 Conclusion
In this work, the high-energy splash of a drop impacting onto a deep pool of the same liquid has been investigated with high-resolution direct numerical simulation (DNS) in 3D. The calculations have been conducted in exactly the same configuration studied previously by Murphy _et al._ (2015), in order to perform detailed comparisons with the experiments and prepare an in-depth analysis of the splashing dynamics. The BASILISK open-source solver, which combines a VOF description of the gas/liquid interfaces and an adaptive octree grid refinement, has been employed to simulate the process of drop impact. Qualitative and quantitative comparisons between numerical and experimental results have been conducted in terms of the morphological behaviours of splashing, the kinematics of the crown and cavity, as well as the distributions of secondary droplets through a particular field of view, which validate the present numerical strategies and thereby enable the subsequent discussion of the internal mechanisms of high-speed drop impact.
Following the experimental observations of Murphy _et al._ (2015), we performed a detailed investigation of the flow physics and splashing behaviours of high-speed drop impact, serving as a supplementary study to the experiments. Firstly, the very early instantaneous motions in the vicinity of the neck region were discussed under sufficient time resolution. We have confirmed the existence of two different mechanisms of air entrapment in the neck of connection, namely the entrapment of axisymmetric bubble rings driven by high localized pressure at \(R_{n}/R_{d}<35\%\) (Oguz & Prosperetti, 1989) and the entrapment of isolated bubbles/bubble rings due to the unstable oscillations of the ejecta base (Weiss & Yarin, 1999). Moreover, the calculation successfully captured the initiation of the irregular azimuthal undulations along the outer edge of the neck, which breaks the axisymmetry of the motions at microscopic scales. Thereafter, detailed information on the internal flows, such as velocity, pressure as well as the energy budget, was extracted from the calculation to further explain the corresponding physical phenomena observed in experiments. We showed that azimuthal destabilization occurs on the edges of the flattening drop, breaking up at its tips and participating in the production of child-droplets. These "liquid threads", emitted from the initial impacting drop, grow together with the uprising crown and meet upon closure, and they are eventually transported backwards to the bulk along with the penetration of the central spiral jet. Lastly, we presented the statistics of airborne droplets produced by the high-energy drop-pool collision. The results showed that a great number of microdroplets are produced immediately after contact by the irregular "prompt splash" within the time scale of \(t<200\)\(\upmu\)s, composing the most populated small and moderate sizes. The earliest
tiny droplets are sprayed right above the surface of the pool and most of them move towards the target liquid, suggesting that a large fraction of them may return to the liquid bulk shortly after birth. Large droplets (\(S_{d}>100\) um) are only observed from the fragmentation of the rim ligaments during the "crown splash" stage, which therefore forms a gradually increasing but narrowing distribution ribbon, reflecting the merging and thickening activities of the ligaments on the rim of the crown. Finally, a bimodal size distribution of the secondary droplets has been found.
We are aware that the splashing dynamics induced by the most energetic regime of drop impact are far more complicated than what has been described here. Further analysis of the multi-scale flow physics needs to be conducted under sufficient spatial and temporal resolutions in the future. Details of the bubble ring entrapment, as well as its motion and breakup, are still not well understood. The physical mechanism that is responsible for the early azimuthal instability remains unknown. Recent experimental observations (Thoraval _et al._, 2013; Li _et al._, 2018) have suggested that these early dynamics may be greatly affected by intricate vortical motions and three-dimensional instabilities, which certainly add more complexity to the analysis. For drop impact on a pool of the same liquid, a large parameter space of \(Re\) and \(We\) needs to be studied to determine the critical conditions of the large bubble entrapment (bubble canopy regime), as well as to explore its influence on the closure event (closure time, closure height, large bubble volume, floating bubble and its burst). The formation and motion of the central spiral jet also need to be analyzed in detail and compared with various types of liquid column jets. Statistics of droplets and bubbles need to be collected under various impacting conditions to form a more comprehensive database, which could directly quantify the mass/momentum transfer through gas-liquid interfaces as well as inform various applications where this process is involved (oil dispersants, spray cooling, metallurgy, etc.).
We appreciate beneficial discussions and help from the BASILISK community. Simulations were performed using computational resources on Advanced Research Computing (ARC) at Virginia Tech. This work is supported by the scholarship from China Scholarship Council (CSC) under the Grant CSC NO. 201908320462. The authors report no conflict of interest. Hui Wang, [https://orcid.org/0000-0001-9733-0150](https://orcid.org/0000-0001-9733-0150); Shuo Liu, [https://orcid.org/0000-0002-8530-8359](https://orcid.org/0000-0002-8530-8359).
## Appendix A Effects of spatial resolution
In this appendix, we discuss the effects of the minimum spatial resolution. Using the Adaptive Mesh Refinement (AMR) algorithm, preliminary tests have been carried out by varying the maximum refinement level \(L_{max}\) from 11 to 14, corresponding to minimum cell sizes of \(3.9\sim 31.25\) um in reality. Two distinct types of splashing phenomena are observed under different spatial resolutions. For simulations at lower levels (\(L_{max}\)=12 in figure 20\(a\)), a thin ejecta sheet rises smoothly from the contact line and later becomes the leading edge of the crown, and secondary droplets detach nearly axisymmetrically from the rim of the liquid sheet. Increasing the maximum mesh refinement to higher levels (\(L_{max}\)=14 in figure 20\(b\)), a disturbed, incoherent liquid sheet accompanied by more "randomly" distributed secondary droplets is captured (see sections 4.1 and 5 for details). Referring to the splashing classifications in Thoroddsen _et al._ (2011) and Thoraval _et al._ (2012) based on the dimensionless Ohnesorge number (\(Oh=\mu/\sqrt{\rho\sigma d}\)) and splash number (\(K=We\sqrt{Re}\)), irregular splashing always occurs for the most energetic cases. As the parameters in our present case are much higher than their
range of study, an even thinner liquid sheet and more complex interfacial deformations/breakups can be expected.
Figure 20(\(c\)) compares the shape of the air-water interface captured with \(L_{max}\)=12 (left), the high-speed camera (middle) and \(L_{max}\)=14 (right) 50 \(\upmu\)s after impact. The emerging ruptured liquid sheet together with a great number of irregularly distributed tiny droplets calculated at \(L_{max}\)=14 agrees well with the experimental observations, while a rather smooth ejecta is obtained with \(L_{max}\)=12 when the spatial resolution is insufficient. As a formal
Figure 20: Effects of the maximum mesh refinement level on splashing behaviours. (\(a\)) Evolution of early-time splashing obtained with \(L_{max}=12\), showing 10, 40, 70 and 100 \(\upmu\)s after impact. (\(b\)) Evolution of early-time splashing obtained with \(L_{max}=14\), showing 10, 40, 70 and 100 \(\upmu\)s after impact. (\(c\)) Comparison of air-water interfaces at \(t=50\)\(\upmu\)s captured by calculation with \(L_{max}=12\) (left), experiment (middle) and calculation with \(L_{max}=14\) (right). (\(d\)) Time evolution of cell numbers under different maximum mesh refinement levels.
study of mesh influence could not be performed, given the computational resources required, the successful reproduction of the primary features of the early splashing confirms that a maximum refinement level at \(L_{max}=14\) is essential for capturing the "correct" physical dynamics (see also section 3.1).
It should be noted that each time the maximum refinement level is increased, a refined cell is subdivided into eight "children" cells in the 3-D tree-based structure. Although the adaptive wavelet algorithm concentrates the smallest scales mostly near the interfaces of the impacting area, the computational requirements remain demanding at high levels. Figure 20(_d_) shows the time evolution of the cell number for the first 250 us of the simulations under different spatial resolutions. It can be observed that the total number of cells more than doubles with each additional increase of \(L_{max}\), and this number grows considerably later when the large crater and the numerous droplets and bubbles develop.
|
2303.02401 | Open-Vocabulary Affordance Detection in 3D Point Clouds | Affordance detection is a challenging problem with a wide variety of robotic
applications. Traditional affordance detection methods are limited to a
predefined set of affordance labels, hence potentially restricting the
adaptability of intelligent robots in complex and dynamic environments. In this
paper, we present the Open-Vocabulary Affordance Detection (OpenAD) method,
which is capable of detecting an unbounded number of affordances in 3D point
clouds. By simultaneously learning the affordance text and the point feature,
OpenAD successfully exploits the semantic relationships between affordances.
Therefore, our proposed method enables zero-shot detection and can be able to
detect previously unseen affordances without a single annotation example.
Intensive experimental results show that OpenAD works effectively on a wide
range of affordance detection setups and outperforms other baselines by a large
margin. Additionally, we demonstrate the practicality of the proposed OpenAD in
real-world robotic applications with a fast inference speed (~100ms). Our
project is available at https://openad2023.github.io. | Toan Nguyen, Minh Nhat Vu, An Vuong, Dzung Nguyen, Thieu Vo, Ngan Le, Anh Nguyen | 2023-03-04T12:26:47Z | http://arxiv.org/abs/2303.02401v5 | # Open-Vocabulary Affordance Detection in 3D Point Clouds
###### Abstract
Affordance detection is a challenging problem with a wide variety of robotic applications. Traditional affordance detection methods are limited to a predefined set of affordance labels, hence potentially restricting the adaptability of intelligent robots in complex and dynamic environments. In this paper, we present the Open-Vocabulary Affordance Detection (OpenAD) method, which is capable of detecting an unbounded number of affordances in 3D point clouds. By simultaneously learning the affordance text and the point feature, OpenAD successfully exploits the semantic relationships between affordances. Therefore, our proposed method enables zero-shot detection and can detect previously unseen affordances without a single annotation example. Intensive experimental results show that OpenAD works effectively on a wide range of affordance detection setups and outperforms other baselines by a large margin. Additionally, we demonstrate the practicality of the proposed OpenAD in real-world robotic applications with a fast inference speed (\(\approx 100\,\mathrm{ms}\)).
## I Introduction
The concept of affordance, proposed by the ecological psychologist James Gibson [1], plays an important role in various robotic applications, such as object recognition [2, 3, 4], action anticipation [5, 6, 7], agent's activity recognition [8, 9, 10], and object functionality understanding [11, 12, 13, 14]. In these applications, affordances are used to illustrate the potential interactions between the robot and its surrounding environment. For instance, with a general cutting task, the knife's affordances can guide the robot to use the knife's blade to achieve requirements such as mincing meat or carving wood. Detecting object affordances, however, is not a trivial task since the robots need to understand in real-time the arbitrary correlations between objects, actions, and effects in complex and dynamic environments [15].
Traditional methods for affordance detection utilize classical machine learning methods on images, such as Support Vector Machine (SVM) [16] based affordance prediction [17], texture-based and object-level monocular appearance cues [18], relational affordance models [19], and human-object interactions [20]. With the rise of deep learning, several works have employed Convolutional Neural Networks (CNN) [21, 22, 23] for different affordance-related tasks, such as affordance reasoning [24, 25], pixel-based affordance detection [26, 27, 28, 29, 30, 31], and functional scene understanding [32, 33, 34]. The key challenge in detecting object affordances from imagery data is that object affordances may differ in terms of visual information such as shape, size, or geometry while being similar in object functionality [35]. In practice, object affordance detection from images requires an additional step before the results can be applied to downstream robotic tasks, as the detected results must be transformed from 2D to 3D using depth information [36].
With the increasing availability of advanced depth cameras, 3D point clouds have become a popular modality for robotic applications [37]. Compared to conventional images, 3D point clouds directly provide the robot with 3D information about surrounding objects and the environment [38]. Consequently, several recent works directly utilize 3D point clouds for affordance detection [39, 40, 41, 42, 43]. For instance, Kim _et al._[39] detected affordances by dividing point clouds into segments and classifying them using logistic regression. The authors in [44] proposed a new grasp detection method in point cloud data. More recently, Mo _et al._[36] predicted affordance heat maps from human-object interaction via the scene point cloud. In this work, we address the task of affordance detection in 3D point clouds so that the results can be directly applied to robotic tasks. More specifically, we consider point-level object affordances and propose a new method to generalize affordance understanding to the open-vocabulary setting.
While several works have been proposed for affordance understanding using 2D images or 3D point clouds, they are mostly restricted to a _predefined affordance label set_. This limitation prevents robots from quickly adapting to a wide range of real-world scenarios or responding to changes in the operating environments. Recently, increasing the flexibility of affordance labels has been studied in [45, 46, 47] as one-shot
Fig. 1: The comparison between traditional affordance detection methods (a) and our method (b). Traditional methods are restricted to predefined affordance label sets, while our OpenAD enables open-set affordance labels.
or few-shot learning problems. However, the authors in [45, 46, 47] only consider 2D images as input and treat the problem as the classical pixel-wise affordance segmentation task. In this work, we overcome the limitation of the fixed affordance label set by addressing affordance detection in 3D point clouds under the _open-vocabulary_ setting. Our key idea is to collaboratively learn the mapping between the _language labels_ and the _visual features_ of the point cloud. In contrast to traditional methods that are limited to a predefined set of affordance labels, our approach allows the robot to utilize an unrestricted number of natural language texts as input and, therefore, can be used in a broader range of applications. Moreover, unlike [45, 46, 47], our method does not require annotation examples for unseen affordances and can also work directly with 3D data instead of 2D images. The main concept of our approach is illustrated in Figure 1.
In this paper, we present Open-Vocabulary Affordance Detection (OpenAD). Our main goal is to provide a framework that does not restrict the application to a fixed affordance label set. Our method takes advantage of the recent large-scale language models, i.e., CLIP [48], and enhances the generalizability of the affordance detection task in 3D point clouds. Particularly, we propose a simple, yet effective method for learning the affordance label and the visual feature together in a shared space. Our method enables zero-shot learning with the ability to process new open-language texts as query affordances.
Our contributions are summarized as follows:
* We present OpenAD, a simple but effective method to tackle the task of open-vocabulary affordance detection.
* We conduct intensive experiments to validate our method and demonstrate the usability of OpenAD in real-world robotic applications.
## II Related Work
**Pixel-Wise Affordance Detection.** A large number of works consider affordance detection as a pixel-wise labeling task, see, e.g., [49, 50, 51, 52, 53, 54, 26, 27, 28, 55, 29, 56, 57].
To this end, we propose a simple yet effective correlation metric to jointly learn the point cloud visual feature and the text embedding features. In this manner, our method leverages the similarities of text embeddings of unseen affordances that are semantically related to the ones seen in the training process. The overall framework of our method is depicted in Figure 2.
### _Open-Vocabulary Affordance Detection_
**Text encoder.** The text encoder \(f_{\mathrm{text}}\left(\cdot\right)\) embeds the set of potential affordance labels into an embedding space \(\mathbb{R}^{D}\). Similar to other works [59, 60], the text encoder can be an arbitrary network. In this work, we employ the state-of-the-art pre-trained ViT-B/32 text encoder from CLIP [48]. The text encoder produces \(m\) word embeddings \(\mathbf{T}_{1},\mathbf{T}_{2},...,\mathbf{T}_{m}\in\mathbb{R}^{D}\) as the representation of the input affordance labels. Note that we only use the CLIP text encoder to extract the text features and freeze \(f_{\mathrm{text}}\left(\cdot\right)\) during both training and testing.
**Point cloud network.** The second component of our method is the point cloud network. The \(n\) input points are fed into the point cloud network \(f_{\mathrm{pc}}\left(\cdot\right)\), producing an embedding vector for every input point. Similar to the text encoder, the architecture of the point cloud network can vary. In this work, we use a state-of-the-art point cloud model, i.e., PointNet++[66], as the underlying architecture. Furthermore, we append to the end of the backbone a convolutional layer with \(D\) output units shared across all points, followed by a batch norm layer. More specifically, the point cloud network \(f_{\mathrm{pc}}\left(\cdot\right)\) produces a set of \(n\) vectors \(\mathbf{P}_{1},\mathbf{P}_{2},...,\mathbf{P}_{n}\in\mathbb{R}^{D}\). In contrast to the text encoder, the weights of the point cloud network \(f_{\mathrm{pc}}\left(\cdot\right)\) are updated during the training.
**Learning text-point correlation.** To enable open-vocabulary affordance detection, the semantic relationships between the point cloud affordance and its potential labels have to be computed. Particularly, we correlate point-wise embeddings of the input point cloud \(\mathbf{P}_{i}\) and text embeddings of affordance labels \(\mathbf{T}_{j}\) using the cosine similarity function. The correlation value \(F_{i,j}\), which is an element of the \(i\)-th row and \(j\)-th column of the correlation matrix \(\mathbf{F}\in\mathbb{R}^{n\times m}\), is computed as
\[F_{i,j}=\frac{\mathbf{P}_{i}^{\top}\mathbf{T}_{j}}{\left\|\mathbf{P}_{i} \right\|\left\|\mathbf{T}_{j}\right\|}\enspace. \tag{1}\]
The point-wise softmax output of a single point \(i\) is then computed in the form
\[S_{i,j}=\frac{\exp\left(F_{i,j}/\tau\right)}{\sum_{k=1}^{m}\exp(F_{i,k}/\tau)} \enspace, \tag{2}\]
where \(\tau\) is a learnable temperature parameter [67]. This computation is applied for every point in the point cloud.
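For concreteness, a minimal PyTorch sketch of equations (1) and (2) is given below; the tensor shapes and names are illustrative rather than taken from the released implementation. Because the label set only enters through the text embeddings, the same forward pass can score an arbitrary number of affordance texts at inference time.

```python
import torch
import torch.nn.functional as F

def affordance_scores(point_feats, text_embeds, tau):
    """Per-point affordance scores from Eqs. (1)-(2).

    point_feats : (n, D) per-point embeddings from the point cloud network
    text_embeds : (m, D) affordance label embeddings from the frozen text encoder
    tau         : learnable temperature parameter
    """
    P = F.normalize(point_feats, dim=-1)       # unit-norm point embeddings
    T = F.normalize(text_embeds, dim=-1)       # unit-norm text embeddings
    corr = P @ T.t()                           # Eq. (1): (n, m) cosine similarities
    return F.softmax(corr / tau, dim=-1)       # Eq. (2): softmax over the m labels
```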
During the training, we encourage the point cloud network \(f_{\mathrm{pc}}\left(\cdot\right)\) to provide point embeddings that are close to the text embeddings. These text embeddings are produced by the text encoder \(f_{\mathrm{text}}\left(\cdot\right)\) of the corresponding ground-truth classes. Specifically, given the embedding \(\mathbf{P}_{i}\in\mathbb{R}^{D}\) of point \(i\), we aim to maximize the value of the entry \(F_{i,j}\) that is the similarity of \(\mathbf{P}_{i}\) and the text embedding \(\mathbf{T}_{j}\) corresponding to the ground-truth label \(j=y_{i}\). This can be accomplished by optimizing the weighted negative log-likelihood loss of the point-wise softmax output over the entire point cloud in the form
\[L=-\sum_{i=1}^{n}w_{y_{i}}\log S_{i,y_{i}}\enspace, \tag{3}\]
where \(w_{y_{i}}\) is the weighting parameter addressing the imbalance of the label classes during training. Inspired by [66], we define this weight as
\[w_{j}=\left(\frac{\max\left\{c_{1},c_{2},...c_{m}\right\}}{c_{j}}\right)^{1/3}, \tag{4}\]
where \(c_{j}\) is the number of points of class \(j\) in the training set.
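The class weights of equation (4) and the weighted negative log-likelihood of equation (3) can be written compactly as follows; this is again a sketch that assumes the softmax scores \(\mathbf{S}\) computed as above, not the exact training code.

```python
import torch

def class_weights(points_per_class):
    """Eq. (4): w_j = (max_k c_k / c_j)^(1/3), from per-class point counts c_j."""
    c = torch.as_tensor(points_per_class, dtype=torch.float)
    return (c.max() / c) ** (1.0 / 3.0)

def weighted_nll(S, labels, w):
    """Eq. (3): weighted negative log-likelihood summed over all points.

    S      : (n, m) per-point softmax scores from Eqs. (1)-(2)
    labels : (n,) ground-truth affordance indices y_i
    w      : (m,) class weights from Eq. (4)
    """
    log_p = torch.log(S.clamp_min(1e-12))
    picked = log_p[torch.arange(S.size(0)), labels]    # log S_{i, y_i}
    return -(w[labels] * picked).sum()
```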
### _Training and Inference_
During the training, we fix the text encoder and train the rest of the network end-to-end. Similar to [36], we fix the number of points in a point cloud to \(n=2048\). We set \(D\) to \(512\). We train our network using the Adam optimizer [68] with the learning rate \(\alpha=10^{-3}\) and the weight decay \(\gamma=10^{-4}\). The proposed framework is trained over \(200\) epochs on a 24GB-RAM NVIDIA Geforce RTX 3090 Ti with a
Fig. 2: The overview of the proposed Open-Vocabulary Affordance Detection (OpenAD) network. First, the input point cloud is fed into a point cloud network to extract per-point embeddings. Second, the affordance labels are passed into a text encoder to extract the text embeddings. Subsequently, the correlation between the point-wise features and the corresponding text embeddings is computed using the cosine similarity function. Finally, a softmax layer is employed to predict language-driven affordances.
batch size of \(16\). We initialize \(\tau\) to \(\ln{(1/0.07)}\). During inference, we feed the point cloud and any text as the input label to detect the desired affordance from the point cloud. The inference process takes approximately \(100\,\mathrm{ms}\) on average with our proposed network.
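As a small sketch of the training configuration described above (assuming a PyTorch implementation; `point_cloud_net` is a placeholder for the trainable backbone, and the frozen CLIP text encoder is excluded from the optimizer):

```python
import math
import torch

point_cloud_net = torch.nn.Linear(3, 512)   # placeholder standing in for the PointNet++ backbone

# Learnable temperature, initialized to ln(1/0.07) as stated above
tau = torch.nn.Parameter(torch.tensor(math.log(1.0 / 0.07)))

# Adam over the point cloud network parameters and tau only
optimizer = torch.optim.Adam(
    list(point_cloud_net.parameters()) + [tau],
    lr=1e-3,
    weight_decay=1e-4,
)
```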
## IV Experiments
In this section, we perform several experiments to validate the effectiveness of our OpenAD. We start with a zero-shot detection setting to verify the ability of OpenAD to generalize to previously unseen affordances. Secondly, we present OpenAD's notable qualitative results together with visualizations. Finally, we conduct additional ablation studies to further investigate other aspects of OpenAD.
### _Zero-Shot Open-Vocabulary Affordance Detection_
**Dataset.** We use the 3D AffordanceNet dataset [36] in our experiments. The 3D AffordanceNet dataset is currently the largest dataset for affordance understanding using 3D point cloud data, with \(22,949\) instances from \(23\) object categories and \(18\) affordance labels. As in other zero-shot setups [69, 70, 71], we need more label classes to verify the robustness of the methods; therefore, we re-label 3D AffordanceNet with \(18\) extra affordance classes and also consider the background as a class, bringing the total number of affordance labels to \(37\). Following [36], we benchmark on two tasks: the full-shape and partial-view tasks. The partial-view setup is more useful in robotics, as the robot can usually only observe a partial view of the object's point cloud.
**Baselines and Evaluation Metrics.** We compare our method with the following recent methods for zero-shot learning in 3D point clouds: ZSLPC [69], TZSLPC [70], and 3DGenZ [71]. Note that, since these baselines used GloVe [72] or Word2Vec [73] for word embedding, which are less powerful models than CLIP, we replace their original text encoders with CLIP for a fair comparison. For ZSLPC [69] and TZSLPC [70], we change their classification heads to the segmentation task. As in [74, 66, 75], we use three metrics to evaluate the results: mIoU (mean IoU over all classes), Acc (overall accuracy over all points), and mAcc (mean accuracy over all classes).
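A minimal NumPy sketch of how these three point-wise metrics can be computed from predicted and ground-truth affordance indices is shown below (classes absent from both prediction and ground truth are simply skipped; the official evaluation script may handle edge cases differently):

```python
import numpy as np

def segmentation_metrics(pred, gt, num_classes):
    """Return (mIoU, overall accuracy, mean per-class accuracy) over all points.

    pred, gt : (N,) integer arrays of predicted / ground-truth affordance indices
    """
    ious, accs = [], []
    for c in range(num_classes):
        tp = np.sum((pred == c) & (gt == c))
        union = np.sum((pred == c) | (gt == c))
        gt_c = np.sum(gt == c)
        if union > 0:
            ious.append(tp / union)
        if gt_c > 0:
            accs.append(tp / gt_c)
    overall_acc = np.mean(pred == gt)
    return float(np.mean(ious)), float(overall_acc), float(np.mean(accs))
```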
**Results.** Table I shows that our OpenAD achieves the best results on both tasks and all three metrics. In particular, on the full-shape task, OpenAD significantly surpasses the runner-up model (ZSLPC) by 4.40% on mIoU. OpenAD also outperforms others on Acc (0.84% over 3DGenZ) and on mAcc (0.81% over ZSLPC). Similarly, for the partial-view task, our method has the highest mIoU (3.98% higher than the second-best ZSLPC), and also the highest Acc and mAcc.
### _Qualitative Results_
We present several examples to demonstrate the generality and flexibility of OpenAD. Primarily, we use objects from the 3D AffordanceNet [36] for our visualizations. We also select objects from the ShapeNetCore dataset [76] to analyze the capability of OpenAD to generalize to unseen object categories and new affordance labels.
**Generalization to New Affordance Labels.** We illustrate several examples showing the ability of OpenAD to generalize to unseen affordance classes in Figure 3. In the upper row of Figure 3, we present the detection results of OpenAD for nine seen affordances on appropriate objects. As trained on these affordances, OpenAD produces good detection results. Next, for each object, we feed new affordance labels to the models while keeping the same corresponding object, and present the detection results in the two rows below. The visualization shows that OpenAD successfully detects the associated regions for the queried new affordance labels, even though the labels are not included in the training set.
**Generalization to Unseen Object Categories.** In this work, OpenAD is trained on 3D AffordanceNet dataset [36], which covers 23 object categories. To verify the generalization ability of our method on new unseen objects, we select new novel objects from the ShapeNetCore dataset [76] and test them in both cases: seen and open-vocabulary affordance labels. We use the farthest point sampling algorithm to uniformly sample \(2,048\) points from the surface of each object. All points are then centered and scaled before being fed into OpenAD. Figure 4 summarises the results. This figure shows that OpenAD is able to detect the affordance classes on new object categories. This confirms the generalization of our OpenAD for downstream robotic tasks.
**Multi-Affordance Detection.** In OpenAD, the number of affordances in the label set, \(m\), can vary. This flexible design allows OpenAD to detect multiple affordances at once. Figure 5 presents the detection results of two objects given label sets with different numbers of affordances. Moreover, we observe a notable ability of OpenAD that it does not fix the label for a particular point but finds the most suitable label in a specific label set. Concretely, given an object, OpenAD first detects affordances in a particular label set. By maintaining the earlier affordances, once a new affordance label is added to the label set, there are certain points will be labeled by new affordances over the previous affordance classes. For instance, in the left column of Figure 5, the points in the upper body of the bottle are labeled as contain in the first run, then they are re-labeled as wrap-grasp in the second run and finally as grasp in the last run. This OpenAD's ability can also be observed in the case of the knife object in the right column of Figure 5. It further demonstrates our OpenAD's flexibility.
### _Ablation Study_
**Will the language-driven architecture affect the detection results?** In this work, we mainly focus on jointly learning a vision-language model to improve the generalization in downstream robotic tasks, and do not aim at improving the accuracy of the traditional affordance detection task. This raises the question of whether our language-driven model affects the accuracy of the traditional affordance detection task. To verify this, we train our OpenAD on the original 3D AffordanceNet with its original label set and training split, and compare our result with other state-of-the-art methods, including PointNet++ [66], Dynamic Graph CNN (DGCNN) [74] and Point Transformer [75]. For PointNet++ and DGCNN, we follow similar designs in [36] and change the final classifier to a linear layer detecting the affordance classes. For Point Transformer, we apply the same architecture as in [75]. Table II presents the results of all methods on 3D AffordanceNet [36]. From this table, we find that OpenAD performs competitively when compared to other methods. Therefore, we can conclude that while our method is designed for a different purpose, it can still serve as a strong baseline for closed-set affordance detection.
**Backbones and Text Encoders.** In Table III, we conduct an ablation study on two different point cloud backbones, i.e., PointNet++ [66] and DGCNN [74], and two different pretrained text encoders, i.e., CLIP ViT-B/32 [48] and BERT [58]. Note that with BERT, the parameter \(D\) is set to \(768\). We observe that different combinations of backbones and text encoders perform equivalently on the closed-set tasks. Meanwhile, on the open-vocabulary tasks, PointNet++ performs better than DGCNN, and CLIP ViT-B/32 performs better than BERT. The performance gap between frameworks using the CLIP text encoder and those using BERT is significant, demonstrating the superiority of CLIP in semantic language-vision understanding.
### _Robotic Demonstration_
The experimental setup, shown in Fig. 6, comprises five main components, i.e., the KUKA LBR iiwa R820 robot, PC1 running the real-time automation software Beckhoff
Fig. 4: Results on unseen object categories. OpenAD gives reasonable outputs when detecting affordances on object categories that are not in the training set. _Upper row:_ Seen affordances. _Lower row:_ Unseen affordances.
Fig. 5: Multi-affordance detection using our method.
Fig. 3: The visualization of our OpenAD’s capability to detect seen and unseen affordances. _The upper row:_ Detection results of OpenAD on seen affordances. _The middle and lower rows:_ Detection results of OpenAD on unseen affordances that do not exist in the training set.
TwinCAT, the Intel RealSense D435i camera, the Robotiq 2F-85 gripper, and PC2 running Robot Operating System (ROS) Noetic on Ubuntu 20.04. PC1 communicates with the robot via a network interface card (NIC) using the EtherCAT protocol, marked by the blue region in Fig. 6. Note that the robot control is implemented in a C++ module on PC1. The sampling time is set to \(125\,\upmu\)s for the robot sensors and actuators. PC2 controls the gripper and the camera via the USB protocol in the ROS environment. Additionally, the two PCs communicate with each other via an Ethernet connection. After receiving point cloud data of the environment from the RealSense D435i camera, we utilize the state-of-the-art object localization method [77] to identify the object, then perform point sampling to get \(2048\) points. We then feed this point cloud, together with a natural language affordance command, to OpenAD. Note that, using our OpenAD, we can use a general input command and are not restricted to a predefined affordance label set. Our OpenAD returns the affordance region, which can be used by the grasp pose detection module [44], the analytical inverse kinematics module [78], and the trajectory optimization module [79]. Several demonstrations, such as holding and raising a bag, wrapping a bottle, and pushing an earphone, can be found in our supplementary material.
### _Discussion_
Despite achieving promising results, OpenAD has its limitations. Our method is still far from being able to detect completely unseen affordances. The upper row of Figure 7 shows cases where OpenAD fails to detect unseen affordances on seen objects. Moreover, we present false-positive predictions of OpenAD in the lower row of Figure 7. In these cases, OpenAD detects affordances that the objects do not provide, i.e., display for the bag, support for the microwave, and openable for the hat. From our extensive experiments, we see several directions for improvement in future work: _i)_ learning the visual-language correlation plays an important role in this task and can be further improved with more sophisticated techniques such as the cross-attention mechanism [80], _ii)_ applying a stronger point cloud backbone would likely improve the results, and _iii)_ a large-scale dataset with a larger set of affordance classes would be beneficial for benchmarking and real-world robotic applications. Finally, we will also release our source code to encourage further study.
## V Conclusions
We proposed OpenAD, a simple yet effective method for open-vocabulary affordance detection in 3D point clouds. Different from traditional approaches, OpenAD, with its capability of semantic understanding, can effectively detect unseen affordances without requiring annotated examples. Empirical results show that OpenAD outperforms other methods by a large margin. We further verified the capability of OpenAD to detect unseen affordances on both known and unseen objects. Additionally, we demonstrated OpenAD's usefulness in real-world robotic applications.
Fig. 6: Example of a robot demonstration. (a) Experimental setup. (b) Result from OpenAD.
Fig. 7: Failure cases of OpenAD. _Upper row:_ Cases when OpenAD fails to detect unseen affordances. _Lower row:_ OpenAD detects affordances that are not furnished by the objects. |
2309.01153 | Photon noise correlations in millimeter-wave telescopes | Many modern millimeter and submillimeter (``mm-wave'') telescopes for
astronomy are deploying more detectors by increasing detector pixel density,
and with the rise of lithographed detector architectures and high-throughput
readout techniques, it is becoming increasingly practical to overfill the focal
plane. However, when the pixel pitch $p_{\rm pix}$ is small compared to the
product of the wavelength $\lambda$ and the focal ratio $F$, or
$p_{\mathrm{pix}} \lesssim 1.2 F \lambda$, the Bose term of the photon noise
correlates between neighboring detector pixels due to the Hanbury Brown & Twiss
(HBT) effect. When this HBT effect is non-negligible, the array-averaged
sensitivity scales with detector count $N_{\mathrm{det}}$ less favorably than
the uncorrelated limit of $N_{\mathrm{det}}^{-1/2}$. In this paper, we present
a general prescription to calculate this HBT correlation based on a quantum
optics formalism and extend it to polarization-sensitive detectors. We then
estimate the impact of HBT correlations on the sensitivity of a model mm-wave
telescope and discuss the implications for focal-plane design. | Charles A. Hill, Akito Kusaka | 2023-09-03T12:13:33Z | http://arxiv.org/abs/2309.01153v1 | # Photon noise correlations in millimeter-wave telescopes
###### Abstract
Many modern millimeter and submillimeter ("mm-wave") telescopes for astronomy are deploying more detectors by increasing detector pixel density, and with the rise of lithographed detector architectures and high-throughput readout techniques, it is becoming increasingly practical to overfill the focal plane. However, when the pixel pitch \(p_{\rm pix}\) is small compared to the product of the wavelength \(\lambda\) and the focal ratio \(F\), or \(p_{\rm pix}\lesssim 1.2F\lambda\), the Bose term of the photon noise correlates between neighboring detector pixels due to the Hanbury Brown & Twiss (HBT) effect. When this HBT effect is non-negligible, the array-averaged sensitivity scales with detector count \(N_{\rm det}\) less favorably than the uncorrelated limit of \(N_{\rm det}^{-1/2}\). In this paper, we present a general prescription to calculate this HBT correlation based on a quantum optics formalism and extend it to polarization-sensitive detectors. We then estimate the impact of HBT correlations on the sensitivity of a model mm-wave telescope and discuss the implications for focal-plane design.
###### Contents
* 1 Introduction
* 2 Theoretical foundations
* 2.1 Photon correlations
* 2.2 Simple example without polarization
* 2.3 Simple example with polarization
* 2.4 Detector photon noise
* 3 Model optical system
* 3.1 Telescope
* 3.2 Focal plane
* 4 Correlation calculation
* 4.1 Stop radiation
* 4.2 Aperture radiation
* 4.3 Intensity correlation patterns
* 4.4 Polarized correlation patterns
* 4.5 VCZT's assumptions and applicability
* 5 Impact of correlations on sensitivity
* 5.1 Mapping speed
* 5.2 Pixel size optimization
* 6 Implications for experiment design
* 7 Conclusion
* A Thermal photon density matrix
* B Partial Coherence of Sources
* C Goodness of Flat-Illumination Approximation
* C.1 Aperture Radiation
* C.2 Stop Radiation
## 1 Introduction
Modern millimeter and submillimeter ("mm-wave") telescopes for astronomy are often limited by fluctuations in the background radiation. This is especially true for ground-based experiments where emission from the atmosphere and telescope are substantial. At high frequencies (e.g., optical wavelengths), the mode's mean occupation number \(\bar{n}\ll 1\), and photon fluctuations are dominated by uncorrelated shot noise such that \(\Delta\bar{n}\approx\sqrt{\bar{n}}\). At low frequencies (e.g., radio wavelengths), \(\bar{n}\gg 1\) and photon fluctuations are dominated by the Bose term of the photon noise ("wave noise") which correlates such that \(\Delta\bar{n}\approx\bar{n}\). Millimeter wavelengths lie in a cross-over regime where \(\bar{n}\sim 1\), making the calculation of array-averaged sensitivity in general nontrivial.
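As a quick numerical illustration of these regimes, the snippet below evaluates the mean occupation number \(\bar{n}=1/(e^{h\nu/k_{\rm B}T}-1)\) at a radio, a millimeter, and an optical frequency; the 20 K brightness temperature is an arbitrary example value chosen only for illustration.

```python
import numpy as np

h, kB = 6.626e-34, 1.381e-23     # Planck and Boltzmann constants [SI]
T = 20.0                          # example effective brightness temperature [K]

def n_occ(nu, T):
    """Mean photon occupation number of a thermal mode (Bose-Einstein)."""
    x = h * nu / (kB * T)
    return np.exp(-x) / (1.0 - np.exp(-x))   # = 1 / (exp(x) - 1), overflow-safe

for name, nu in [("radio, 1 GHz", 1e9),
                 ("mm-wave, 150 GHz", 150e9),
                 ("optical, 500 THz", 5e14)]:
    print(f"{name:18s}  n_bar = {n_occ(nu, T):.3e}")
# radio: n_bar >> 1 (wave noise dominates); optical: n_bar << 1 (shot noise
# dominates); 150 GHz sits near the n_bar ~ 1 cross-over discussed above.
```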
In addition, many modern mm-wave telescopes, particularly those equipped with cryogenic bolometric detector arrays, are field-of-view-limited and therefore aim to increase detector count by increasing pixel density, which is typically cheaper than building more telescopes. In this high-pixel-density paradigm, it is possible to overfill the focal plane such that neighboring detectors sample the same spatial mode. As we will show, this mode sharing introduces photon-noise correlations when the pixel spacing \(p_{\rm pix}<1.2F\lambda\), where \(F\) is the effective focal ratio at the focal plane and \(\lambda\) is the operational wavelength of interest. In this oversampled regime, photon noise correlations can have substantial impacts on the array-averaged sensitivity.
The theory of intensity correlations from incoherent sources has been studied extensively [1; 2; 3; 4; 5; 6], and the phenomenon was experimentally demonstrated by Hanbury Brown and Twiss (HBT) via measurements of the angular diameter of distant astronomical sources [7; 8; 9]. The impact of HBT correlations on mm-wave telescopes is discussed by Padin [10], where an empirical factor is introduced in an attempt to account for the corresponding sensitivity degradation.
In this paper, we present a prescription to estimate HBT correlations among detectors in millimeter- or submillimeter-wave telescopes based on a quantum optics formalism adopted from a circuit-based formalism for thermal photon correlations in quantum detectors developed by Zmuidzinas [11]. We then extend this formalism to polarization-sensitive detectors and use it to calculate the impact of HBT correlations on the sensitivity of a model mm-wave telescope.
This paper is organized as follows. In Sec. 2, we review the theoretical foundations of our formalism based on Ref. [11] and show how they relate to the HBT effect [7; 8; 9] and the van Cittert-Zernike theorem (VCZT) [12; 13]. We also show the formalism's relation to the standard single-detector sensitivity calculations for a bolometer (e.g., see Ref. [14] and references therein). Section 3 defines a model optical system for estimating the array-averaged sensitivity impact of HBT correlations. In Sec. 4, we derive an expression for the intensity correlation using this model optical system. Section 5 discusses the impact of HBT correlations on the sensitivity of a telescope system with close-packed detectors on the focal plane. In Sec. 6, we discuss the implications of the presented sensitivity optimization for the design of mm-wave detector arrays. Finally, Sec. 7 presents our conclusions.
## 2 Theoretical foundations
In this section, we review the theory of photon-count statistics and reformulate it in the context of astronomical telescope systems. We first adopt the treatment of thermal photon correlations derived by Zmuidzinas [11], which uses the machinery of transmission lines and scattering matrices to calculate the propagation of quantum modes in a linear optical system. We then apply this treatment to optical systems, where the optical equivalence theorem [15] allows us to equate the scattering matrix for quantum modes with the mode-mode coupling of classical waves (e.g., those obtained via physical optics calculations). We then show a few simple examples that relate this formalism to the HBT effect, VCZT, and photon-noise calculations.
### Photon correlations
We first consider a linear, lossy network of \(k=1,2,\cdots\) input ports detected at an output port \(i\).1 Input modes enter the network along semi-infinite transmission lines via the photon creation operator \(a_{k}^{\dagger}(\nu)\) and are mapped onto the outputs via the scattering matrix \(S_{ik}\). Loss in the system is modeled by an orthogonal scattering matrix \(S_{ik}^{\prime}\), which governs the noise added between input mode \(k\) and output mode \(i\). Given this structure (Fig. 1), the creation operator \(b_{i}^{\dagger}(\nu)\) at output \(i\) and mode frequency \(\nu\) is
Footnote 1: There is no fundamental distinction between the inputs and outputs, and every port has both incoming and outgoing photons, even though we will relate the input ports to optical input and the output ports to detectors.
\[b_{i}^{\dagger}(\nu)=\sum_{k}S_{ik}(\nu)a_{k}^{\dagger}(\nu)+\sum_{k^{\prime} }S_{ik^{\prime}}^{\prime}(\nu)a_{k^{\prime}}^{\dagger}(\nu)\,. \tag{1}\]
As demonstrated in Eq. (1), there is no fundamental distinction between the input/output ports and the lossy ports. Therefore, for simplicity, we hereafter absorb the scattering matrix for the lossy ports \(S_{ik}^{\prime}\) into \(S_{ik}\) and treat both mechanisms via a single unified scattering matrix.
The two-photon expectation value at detector outputs \(i\) and \(j\) is given by
\[\left\langle b_{i}^{\dagger}(\nu)b_{j}(\nu^{\prime})\right\rangle=\sum_{k} \sum_{m}S_{ik}^{*}(\nu)S_{jm}(\nu^{\prime})\left\langle a_{k}^{\dagger}(\nu)a _{m}(\nu^{\prime})\right\rangle. \tag{2}\]
Here, the expectation values \(\left\langle\cdots\right\rangle\) are taken over quantum-statistical mixed states, governed by the density matrix, and represent the quantum coherence of the photon modes at frequencies \(\nu\) and \(\nu^{\prime}\). See Appendix A for further discussion regarding the thermal photon density matrix.
Figure 1: A schematic of the scattering matrix quantum circuit formalism. The creation operator for incoming modes is \(a^{\dagger}\), while that of the outgoing modes is \(b^{\dagger}\). The scattering matrix \(S\) maps the input modes onto the output modes, while the noise matrix \(S^{\prime}\) calculates noise and loss within the system. \(S\) and \(S^{\prime}\) are assumed to be orthogonal.

When the mixed states are in thermal equilibrium, which is a good approximation for the photon sources in the calculations that follow [11]2,
Footnote 2: The Kronecker delta \(\delta_{km}\) in Eq. (3) indicates complete incoherence between the input source elements \(k\) and \(m\). This is a good approximation for the applications discussed in this paper. Further discussion on partial coherence of sources can be found in Appendix B.
\[\langle a_{k}^{\dagger}(\nu)a_{m}(\nu^{\prime})\rangle=n(T_{k},\nu)\,\delta_{km }\,\delta(\nu-\nu^{\prime})\,, \tag{3}\]
where \(T_{k}\) is the temperature of port \(k\) and
\[n(T,\nu)\equiv\frac{1}{e^{h\nu/k_{\rm B}T}-1} \tag{4}\]
is the mean occupation number at frequency \(\nu\) of a blackbody at temperature \(T\). We can write the two-photon output expectation value as
\[\left\langle b_{i}^{\dagger}(\nu)b_{j}(\nu^{\prime})\right\rangle\equiv B_{ij} (\nu)\,\delta(\nu-\nu^{\prime})\,, \tag{5}\]
where \(B_{ij}(\nu)\) is the quantum mutual intensity and satisfies
\[B_{ij}(\nu)=\sum_{k}S_{ik}^{*}(\nu)\,S_{jk}(\nu)\,n(T_{k},\nu)\,. \tag{6}\]
When calculated for a single detector \(i\), \(B_{ii}(\nu)\) represents the mean occupation number of the incoming photons at that detector.
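The following short sketch evaluates Eq. (6) numerically for a made-up two-detector, three-port system at a single frequency; the scattering amplitudes and port temperatures are arbitrary example values, chosen only so that each row satisfies \(\sum_{k}|S_{ik}|^{2}=1\).

```python
import numpy as np

h, kB = 6.626e-34, 1.381e-23

def n_occ(T, nu):
    """Bose-Einstein occupation number, Eq. (4)."""
    x = h * nu / (kB * T)
    return np.exp(-x) / (1.0 - np.exp(-x))

nu = 150e9                                    # a single frequency [Hz]
T_k = np.array([270.0, 100.0, 4.0])           # example port temperatures [K]
a = np.sqrt(1.0 - 0.6**2 - 0.5**2)
S = np.array([[0.6, 0.5, a],                  # rows satisfy sum_k |S_ik|^2 = 1
              [0.5, 0.6, a]], dtype=complex)

n_k = n_occ(T_k, nu)
B = np.einsum("ik,jk,k->ij", S.conj(), S, n_k)      # Eq. (6)
gamma_12 = B[0, 1] / np.sqrt(B[0, 0].real * B[1, 1].real)
print("B_ii (occupation at each detector):", np.diag(B).real)
print("normalized coherence |B_12| / sqrt(B_11 B_22) =", abs(gamma_12))
```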
Thermal detectors, which are commonly used in mm-wave applications, integrate photon power over time \(\tau\) and sense mean intensity
\[\langle d_{i}\rangle=\frac{1}{\tau}\int_{0}^{\tau}{\rm d}t\,\langle b_{i}^{ \dagger}(t)b_{i}(t)\rangle\simeq\int_{\nu_{1}}^{\nu_{2}}{\rm d}\nu\,h\nu B_{ ii}(\nu)\,, \tag{7}\]
where we define the time-dependent operators as
\[\begin{split} b_{i}(t)&\equiv\int_{\nu_{1}}^{\nu_{2 }}{\rm d}\nu\,\exp\left[2\pi i\nu t\right]\,b_{i}(\nu)\sqrt{h\nu}\,,\\ b_{i}^{\dagger}(t)&\equiv\int_{\nu_{1}}^{\nu_{2}}{ \rm d}\nu\,\exp\left[-2\pi i\nu t\right]\,b_{i}^{\dagger}(\nu)\sqrt{h\nu}\,. \end{split} \tag{8}\]
Here, the integration limits are set by the detection bandwidth \(\Delta\nu=\nu_{2}-\nu_{1}\), and the factors of \(\sqrt{h\nu}\) arise due to power detection as opposed to photon counting. In the context of free-space propagating modes, the operators \(b_{i}(t)\) and \(b_{i}^{\dagger}(t)\), with the factor \(\sqrt{h\nu}\) inserted, can be regarded as electric field operators. In typical detector readout configurations, the integration time \(\tau\) can be regarded as the inverse of the detector sampling rate. The second equality in Eq. (7) is a good approximation when \(\tau\gg 1/\Delta\nu\), which is often true in mm-wave experiments where \(\tau\sim\mathcal{O}(10^{-2}\)-\(10^{-3}\) s) and \(1/\Delta\nu\sim\mathcal{O}(10^{-10}\) s). Since the operators \(b_{i}(t)\) and \(b_{i}^{\dagger}(t)\) represent electric fields, a generalized form of Eq. (7) corresponds to an expression of first-order coherence:
\[\Gamma_{ij}^{(1)}=\frac{1}{\tau}\int_{0}^{\tau}{\rm d}t\,\langle b_{i}^{ \dagger}(t)b_{j}(t)\rangle\simeq\int_{\nu_{1}}^{\nu_{2}}{\rm d}\nu\,h\nu B_{ ij}(\nu)\,. \tag{9}\]
The normalized amplitude coherence \(\gamma_{ij}\) can be written as
\[\gamma_{ij}\equiv\frac{\Gamma_{ij}^{(1)}}{\sqrt{\Gamma_{ii}^{(1)}\Gamma_{jj }^{(1)}}}\simeq\frac{B_{ij}(\bar{\nu})}{\sqrt{B_{ii}(\bar{\nu})B_{jj}(\bar{ \nu})}}\equiv\gamma_{ij}(\bar{\nu})\,, \tag{10}\]
where \(\bar{\nu}\equiv(\nu_{1}+\nu_{2})/2\) is the mean frequency and the second equality is a good approximation when the variation of \(B_{ij}(\nu)\) is small within the detection band of \(\nu_{1}<\nu<\nu_{2}\).
Finally, the covariance for quantum thermal detectors \(\sigma_{ij}^{2}\equiv\langle\Delta d_{i}\Delta d_{j}\rangle=\langle d_{i}d_{j }\rangle-\langle d_{i}\rangle\langle d_{j}\rangle\) can be written as
\[\sigma_{ij}^{2}\simeq\frac{1}{\tau}\int_{\nu_{1}}^{\nu_{2}}{\rm d}\nu\,(h\nu)^ {2}\left[B_{ij}(\nu)\delta_{ij}+|B_{ij}(\nu)|^{2}\right] \tag{11}\]
as shown in Ref. [11]. The first term in the integrand of Eq. (11) represents uncorrelated shot noise, while the second term represents wave noise, which can correlate between output ports. This second \(|B_{ij}(\nu)|^{2}\) term is often referred to as the "bunching term," as it quantifies the degree to which photon arrival times are correlated. For convenience, we define the shot-noise and wave-noise parts of the covariance as
\[\begin{split}\sigma_{ij,\rm shot}^{2}&\equiv\frac{1}{ \tau}\int_{\nu_{1}}^{\nu_{2}}{\rm d}\nu\,(h\nu)^{2}B_{ij}(\nu)\delta_{ij}\,, \\ \sigma_{ij,\rm wave}^{2}&\equiv\frac{1}{\tau}\int_{ \nu_{1}}^{\nu_{2}}{\rm d}\nu\,(h\nu)^{2}|B_{ij}(\nu)|^{2}\,.\end{split} \tag{12}\]
Given the thermal detector covariance \(\sigma_{ij}^{2}\) in Eq. (11), the (quantum) second-order coherence \(\Gamma_{ij}^{(2)}\) can be defined as
\[\Gamma_{ij}^{(2)}\equiv\tau\,\langle\Delta d_{i}\Delta d_{j}\rangle=\tau\sigma _{ij}^{2}\,, \tag{13}\]
where the factor \(\tau\) comes from the fact that \(\langle\Delta d_{i}\Delta d_{j}\rangle\) depends on the integration time \(\tau\) (or the detector sampling rate \(1/\tau\)). Therefore, the second-order coherence \(\Gamma_{ij}^{(2)}\) represents the system's intrinsic degree of intensity coherence in \({\rm W}^{2}\cdot{\rm s}\) and is independent of integration time \(\tau\). These sampling-rate-independent fluctuations of detected photon power \(\sqrt{\tau}\sigma_{ii}\) are equivalent to the detector's photon noise noise-equivalent power (NEP), as in Ref. [14].
The normalized intensity coherence can be defined as
\[\gamma_{ij}^{(2)}=\frac{\tau\,\langle\Delta d_{i}\Delta d_{j}\rangle}{\langle d _{i}\rangle\langle d_{j}\rangle/\sqrt{\Delta\bar{\nu}_{i}\Delta\bar{\nu}_{j}}} \simeq\frac{|B_{ij}(\bar{\nu})|^{2}}{B_{ii}(\bar{\nu})B_{jj}(\bar{\nu})}\equiv \gamma_{ij}^{(2)}(\bar{\nu})\,, \tag{14}\]
with the detector bandwidth defined as
\[\Delta\bar{\nu}_{i}\equiv\frac{\int_{\nu_{1}}^{\nu_{2}}{\rm d}\nu\,(h\nu)^{2}B_ {ii}^{2}(\nu)}{\left[\int_{\nu_{1}}^{\nu_{2}}{\rm d}\nu\,h\nu B_{ii}(\nu)\right] ^{2}}\simeq\frac{\tau\,\sigma_{ii,\rm wave}^{2}}{\langle d_{i}\rangle^{2}}\,. \tag{15}\]
The second equality in Eq. (14) is a good approximation when the variation of the integrand \(|B_{ij}(\nu)|\) is small within the detection band \(\nu_{1}<\nu<\nu_{2}\). The normalized intensity coherence corresponds to the correlation coefficient of the wave-noise covariance, \(\gamma^{(2)}_{ij}\simeq\sigma^{2}_{ij,\text{wave}}/\sigma_{ii,\text{wave}}\sigma_{jj,\text{wave}}\). In other words, in the limit of a large occupation number \(B_{ii}(\nu)\gg 1\), the mean intensity \(\langle d_{i}\rangle\) and its fluctuations are related via the radiometer equation:
\[\sqrt{\langle\Delta d_{i}^{2}\rangle}=\frac{\langle d_{i}\rangle}{\sqrt{\tau \,\Delta\bar{\nu}_{i}}}\,. \tag{16}\]
The normalized intensity and amplitude coherences can be related as
\[\gamma^{(2)}_{ij}\simeq\gamma^{(2)}_{ij}(\bar{\nu})=|\gamma_{ij}(\bar{\nu})|^ {2}. \tag{17}\]
This can also be derived for generic classical fields with complex Gaussian-random fluctuations (see, e.g., Ref. [16] and references therein). For reasons described in Sec. 2.2, we hereafter call \(\gamma_{ij}(\nu)\) and \(\gamma^{(2)}_{ij}(\nu)\) the VCZT and HBT coefficients, respectively.
The intensity coherence \(\gamma^{(2)}_{ij}\) is not affected by decoherence. Decoherence occurs, i.e., the complex phase of \(\gamma_{ij}(\nu)\) rotates over the detection band such that \(|\gamma_{ij}|<|\gamma_{ij}(\bar{\nu})|\), when the path-length difference between the light source and detectors \((i,j)\) is larger than the inverse of the detection bandwidth, \(|R_{i}-R_{j}|\gtrsim c/\Delta\nu\). However, \(\gamma^{(2)}_{ij}\) is a real-valued positive quantity, and thus such an effect is nonexistent.3 The intensity signal would still decorrelate if \(|R_{i}-R_{j}|\) were larger than \(c\tau\), but for typical mm-wave experiments, \(\tau\sim\mathcal{O}(10^{-2}\text{--}10^{-3}\text{ s})\) and detectors are arranged such that \(|R_{i}-R_{j}|\lesssim 10^{-3}\,\text{m}\), leading to \(|R_{i}-R_{j}|\ll c\tau\). Therefore, decorrelation can be safely ignored.
Footnote 3: In other words, while the approximation in Eq. (10) neglects decoherence, those in Eqs. (14) and (17) do not, and the second equality in Eq. (17) is exact.
### Simple example without polarization
We now apply the formalism in Sec. 2.1 to mm-wave optical systems. As noted in the Appendix A of Ref. [11], the quantum circuit treatment is readily applicable to free-space propagating waves, as the optical equivalence theorem [15] allows us to equate the scattering matrix to mode-mode coupling coefficients of classical electromagnetic wave amplitudes.
First, we consider the simplest case shown in Fig. 2 with two identical planar detectors at \(z=z_{\text{pix}}\) and a far-field planar source at \(z=z_{s}\). We assume that \(|z_{s}-z_{\text{pix}}|\equiv L\gg 2D_{\text{pix}}^{2}/\lambda\), where \(D_{\text{pix}}\) is each detector's aperture diameter and \(\lambda\equiv c/\nu\) is the free-space electromagnetic wavelength, and we assume that the source is thermal with 100% emissivity. Given the classical-wave amplitude of the electric field \(E_{i}\) detected by detector \(i\), the partial field amplitude \(\Delta E_{i}\) from an infinitesimal area of the planar source \(\Delta s_{k}\) can be written as
\[\Delta E_{i}=\mathcal{C}\,G(\theta_{i,\text{pix}},\phi_{i,\text{pix}})\, \sqrt{\cos\theta_{i,\text{pix}}}\,\frac{e^{2\pi i\nu R_{i}/c}}{R_{i}}\,\Delta s _{k}\,, \tag{18}\]
where \((\theta_{i,\text{pix}},\phi_{i,\text{pix}})\) is the polar coordinate of the line between the detector and the infinitesimal source area, \(R_{i}\) is the distance between the detector and the infinitesimal area, \(G(\theta_{i,\text{pix}},\phi_{i,\text{pix}})\) is the detector's angular response function, \(\mathcal{C}\) is a constant, and \(\sqrt{\cos\theta_{i,\text{pix}}}\) is a Lambertian factor. While not explicitly written, \(\mathcal{C}\) and \(G(\cdots)\) may depend on frequency \(\nu\).
As noted previously, the optical equivalence theorem allows us to relate the right-hand side of Eq. (18) with the scattering matrix \(S_{ik}(\nu)\). Thus, following Eq. (6), the mutual intensity can be calculated as
\[B_{ij}^{\rm np}(\nu)=|\mathcal{C}|^{2}\iint_{\rm s}{\rm d}s\;G^{*}(\theta_{i,\text{pix}},\phi_{i,\text{pix}})\,G(\theta_{j,\text{pix}},\phi_{j,\text{pix}})\,\sqrt{\cos\theta_{i,\text{pix}}\cos\theta_{j,\text{pix}}}\,\frac{e^{2\pi i\nu(R_{j}-R_{i})/c}}{R_{i}R_{j}}\,n(T_{\rm s},\nu)\,, \tag{19}\]
where \(T_{\rm s}\) is the source temperature and the integral runs over the source surface. Equation (19) is a statement of the VCZT: the mutual intensity produced by a spatially incoherent thermal source is an integral of the source intensity weighted by the geometric phase factors \(e^{2\pi i\nu(R_{j}-R_{i})/c}\). The corresponding normalized amplitude coherence follows from Eq. (10),
\[\gamma_{ij}^{\rm np}(\nu)=\frac{B_{ij}^{\rm np}(\nu)}{\sqrt{B_{ii}^{\rm np}(\nu)\,B_{jj}^{\rm np}(\nu)}}\,. \tag{21}\]
### Simple example with polarization
Building on Sec. 2.2, we now introduce the polarization degree of freedom. We adopt the same geometry as Fig. 2 and assume that each pixel is an ideal dual-polarization detector equipped with two orthogonal linear polarimeters. When decomposing the propagating field into two polarization degrees of freedom, it is convenient to adopt the Ludwig-3 basis set [17]
\[\hat{e}_{1}^{L3}(\theta,\phi) \equiv\hat{\theta}\cos\phi-\hat{\phi}\sin\phi\:, \tag{22}\] \[\hat{e}_{2}^{L3}(\theta,\phi) \equiv\hat{\theta}\sin\phi+\hat{\phi}\cos\phi\:,\]
where \((\theta,\phi)\) defines the wave propagation direction in polar coordinates, and \(\hat{\theta}\) and \(\hat{\phi}\) are unit vectors in the direction of \((\theta,\phi)\).
Each detector pixel \(i\) comprises two detectors \(i_{1}\) and \(i_{2}\) with polarization angles \(\psi_{i}\) and \(\psi_{i}+\pi/2\), respectively. The angle \(\psi_{i}\) is defined in the \(x\)-\(y\) plane such that \(\psi_{i}=0\) when the polarization angle of detector \(i_{1}\) is along the \(x\)-axis. Assuming ideal polarimetry,4 detector \(i_{1}\) only responds to the propagating electric field with polarization direction
Footnote 4: Ideal polarimetry is often characterized as having low cross polarization. See, e.g., Refs. [18; 19; 20; 21] for further discussion.
\[\hat{e}_{i1}=\hat{e}_{1}^{L3}\cos\psi_{i}+\hat{e}_{2}^{L3}\sin\psi_{i}\:. \tag{23}\]
In other words, in the reverse-time sense, the electric field emitted from detector \(i_{1}\) has polarization direction \(\hat{e}_{i1}\) as a function of \((\theta,\phi)\) in the far field. Similarly, detector \(i_{2}\) only responds to polarization direction
\[\hat{e}_{i2}=-\hat{e}_{1}^{L3}\sin\psi_{i}+\hat{e}_{2}^{L3}\cos\psi_{i}\:. \tag{24}\]
The surface source can also be decomposed into two polarization degrees of freedom, denoted by \(k_{1}\) and \(k_{2}\). This decomposition is arbitrary as long as \(k_{1}\) and \(k_{2}\) are independent and orthogonal Gaussian-random emitters, which is the case for unpolarized thermal sources. We therefore assume for simplicity that sources \(k_{1}\) and \(k_{2}\) emit with polarization \(\hat{e}_{1}^{L3}\) and \(\hat{e}_{2}^{L3}\), respectively.
The coupling between detector \(i_{1}\) and source \(k_{1}\) is similar to Eq. (18), except for an additional polarization overlap factor \((\hat{e}_{i1}\cdot\hat{e}_{1}^{L3})\):
\[\Delta E_{i1}=(\hat{e}_{i1}\cdot\hat{e}_{1}^{L3}) \,\mathcal{C} \,G(\theta_{i,\text{pix}},\phi_{i,\text{pix}}) \tag{25}\] \[\cdot \sqrt{\cos\theta_{i,\text{pix}}}\,\frac{e^{2\pi i\nu R_{i}/c}}{ R_{i}}\,\Delta s_{k1}\:.\]
Taking the coefficient on the right-hand side as scattering matrix element \(S_{ik}(\nu)\) and following Eq. (6), the mutual intensity between detectors \(i_{1}\) and \(j_{1}\) can be expressed as
\[B_{i1,j1}(\nu) =\sum_{p=1,2}(\hat{e}_{i1}\cdot\hat{e}_{p}^{L3})(\hat{e}_{j1} \cdot\hat{e}_{p}^{L3})B_{ij}^{\text{np}}(\nu) \tag{26}\] \[=\cos(\psi_{i}-\psi_{j})B_{ij}^{\text{np}}(\nu)\:,\]
where \(B_{ij}^{\text{np}}(\nu)\) is the "no polarization" mutual intensity in Eq. (19). The mutual intensity for other detector and polarization combinations can be calculated similarly, and the resulting amplitude coherences are
\[\gamma_{i1,j1}(\nu) =\gamma_{i2,j2}(\nu)=\cos(\psi_{i}-\psi_{j})\,\gamma_{ij}^{\text{ np}}(\nu)\:, \tag{27}\] \[\gamma_{i1,j2}(\nu) =-\gamma_{i2,j1}(\nu)=\sin(\psi_{i}-\psi_{j})\,\gamma_{ij}^{\text{ np}}(\nu)\:,\] \[\gamma_{i1,i2}(\nu) =\gamma_{j1,j2}(\nu)=0\:,\]
where \(\gamma_{ij}^{\text{np}}(\nu)\) is defined in Eq. (21).
It is convenient to calculate the covariance of the Stokes parameters, which are defined as the difference in power measured by two orthogonal polarimeters in a single detector pixel, \(Q_{i}\equiv(d_{i1}-d_{i2})/2\). The Stokes-parameter covariance between two pixels can be calculated as
\[\left\langle\Delta Q_{i}\Delta Q_{j}\right\rangle=\frac{1}{2}\cos[2(\psi_{i}- \psi_{j})]\left(\sigma_{ij}^{\text{np}}\right)^{2}\:, \tag{28}\]
where \(\sigma_{ij}^{\text{np}}\) is defined by substituting \(B_{ij}^{\text{np}}(\nu)\) into Eq. (11). We can then define the Stokes \(Q\) normalized coherence analogously to the intensity coherence in Eq. (14)
\[\gamma_{ij}^{Q,(2)}\equiv\frac{\tau\left\langle\Delta Q_{i}\Delta Q_{j}\right\rangle}{\left\langle I_{i}\right\rangle\left\langle I_{j}\right\rangle/\sqrt{\Delta\bar{\nu}_{i}\Delta\bar{\nu}_{j}}}=\cos[2(\psi_{i}-\psi_{j})]\,\gamma_{ij}^{\text{np},(2)}\:, \tag{29}\]
where \(I_{i}\equiv(d_{i1}+d_{i2})/2\) is the Stokes intensity, and \(\gamma_{ij}^{\text{np},(2)}\) is the "no polarization" intensity coherence defined via Eq. (14). As expected, \(\gamma_{ij}^{Q,(2)}\) corresponds to the correlation coefficient of the wave noise components of \(Q_{i}\) and \(Q_{j}\).
As previously mentioned and further discussed in Sec. 3, the form in Eq. (29) that only introduces a \(\cos[2(\psi_{i}-\psi_{j})]\) factor to the "no polarization" case is general as long as the telescope and detector conform to the assumption of ideal polarimetry. It is worth noting that, according to Eq. (29), the focal plane can be designed to minimize Stokes \(Q\) correlation by assigning \(\pm 45^{\circ}\) polarization angles to neighboring pixels.
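A quick numerical check of the \(\cos[2(\psi_{i}-\psi_{j})]\) scaling in Eq. (29) is given below; the value used for the unpolarized intensity coherence is an arbitrary stand-in.

```python
import numpy as np

gamma_np2 = 0.3                                # stand-in value for |gamma_ij^np|^2
for dpsi_deg in [0.0, 22.5, 45.0, 90.0]:
    gamma_Q = np.cos(2.0 * np.deg2rad(dpsi_deg)) * gamma_np2
    print(f"psi_i - psi_j = {dpsi_deg:5.1f} deg  ->  gamma_Q^(2) = {gamma_Q:+.3f}")
# At +/-45 deg relative orientation the Stokes-Q wave-noise correlation vanishes.
```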
Figure 2: A simple example where two detectors (\(i\) and \(j\)) on the \(z=z_{\text{pix}}\) plane observe light from a surface source on the \(z=z_{\text{s}}\) plane. The direction of the rays are defined by spherical coordinate \((\theta_{\text{pix}},\phi_{\text{pix}})\).
### Detector photon noise
We now relate the formalism in Sec. 2.1 to the forms of photon noise for bolometric detectors often seen in the literature.
It is convenient to decompose the scattering matrix \(S_{ik}\) between photon source \(k\) and detector \(i\) into that of the detector optics \(S_{ik}^{\mathrm{d}i}\) and that of the telescope optics that couple to detector \(i\), \(S_{k}^{(i)}\). The former includes the detector's quantum efficiency (see below), and the latter includes coupling to the atmosphere, lossy optical elements, and any other detectable photon sources. The detector-only scattering matrix can be written as a simple three-port system as shown in Fig. 3, where \(i\), \(\mathrm{L}i\), and \(\mathrm{C}i\) represent the detection, loss, and optical-coupling ports, respectively. The three corresponding scattering matrix elements, \(S_{i,i}^{\mathrm{d}i}\), \(S_{i,\mathrm{L}i}^{\mathrm{d}i}\), and \(S_{i,\mathrm{C}i}^{\mathrm{d}i}\), can be related to the reflection, loss, and transmission while satisfying the normalization
\[|S_{i,i}^{\mathrm{d}i}(\nu)|^{2}+|S_{i,\mathrm{L}i}^{\mathrm{d}i}(\nu)|^{2}+| S_{i,\mathrm{C}i}^{\mathrm{d}i}(\nu)|^{2}=1\,. \tag{30}\]
We define the quantum efficiency of detector \(i\) as
\[\eta_{i}(\nu)\equiv|S_{i,\mathrm{C}i}^{\mathrm{d}i}(\nu)|^{2}=1-|S_{i,i}^{ \mathrm{d}i}(\nu)|^{2}-|S_{i,\mathrm{L}i}^{\mathrm{d}i}(\nu)|^{2}\,, \tag{31}\]
which includes both the detector's efficiency and its spectral response.
For bolometric focal planes, the temperature of detection port \(i\) and lossy port \(\mathrm{L}i\) is typically \(<0.5\) K. Therefore, when evaluating Eq. (6), the occupation number \(n(\nu,T)\) due to thermal emission from these detection ports are small at (sub)millimeter frequencies and can be neglected in further calculations. We can then write Eq. (6) as
\[B_{ij}(\nu)=\sqrt{\eta_{i}(\nu)\,\eta_{j}(\nu)}\sum_{k}S_{k}^{(i)*}(\nu)\,S_{k} ^{(j)}(\nu)\,n(T_{k},\nu) \tag{32}\]
since
\[S_{ik}(\nu)=S_{i,\mathrm{C}i}^{\mathrm{d}i}(\nu)\,S_{k}^{(i)}(\nu)=\sqrt{\eta _{i}(\nu)}\,S_{k}^{(i)}(\nu)\;. \tag{33}\]
Note that we can ignore the overall complex phase of \(S_{i,\mathrm{C}i}^{\mathrm{d}i}(\nu)=\sqrt{\eta_{i}(\nu)}\) without loss of generality.
We can define the occupation number at the detector input as an average of the occupation numbers of all photon sources
\[n(T_{(i)},\nu)\equiv\sum_{k}|S_{k}^{(i)}(\nu)|^{2}\;n(T_{k},\nu)\;, \tag{34}\]
since \(S_{k}^{(i)}(\nu)\) satisfies the normalization
\[\sum_{k}|S_{k}^{(i)}(\nu)|^{2}=1\,. \tag{35}\]
In Eq. (34), we define \(T_{(i)}\) as the effective brightness temperature of the photons impinging on detector \(i\). This quantity \(T_{(i)}\) is generally frequency dependent, and thus \(n(T_{(i)},\nu)\) does not follow a blackbody spectrum. However, the photon statistics at each frequency \(\nu\) do follow the mixed-state thermal density matrix of temperature \(T_{(i)}\) (see Appendix A), providing the physical foundation for the outcome presented below.
Given the detector quantum efficiency and the input effective brightness temperature, the covariance between detectors \(i\) and \(j\) in Eq. (12) can now be written as
\[\begin{split}\sigma_{ij,\mathrm{shot}}^{2}&=\frac{1 }{\tau}\int_{\nu_{1}}^{\nu_{2}}\mathrm{d}\nu\,(h\nu)^{2}\,\eta_{i}(\nu)\,n(T_ {(i)},\nu)\,\delta_{ij}\;,\\ \sigma_{ij,\mathrm{wave}}^{2}&=\frac{1}{\tau}\int_{ \nu_{1}}^{\nu_{2}}\mathrm{d}\nu\,(h\nu)^{2}\,\gamma_{ij}^{(2)}(\nu)\,\eta_{i} (\nu)\,\eta_{j}(\nu)\,n(T_{(i)},\nu)\,n(T_{(j)},\nu)\,.\end{split} \tag{36}\]
Figure 3: A schematic representing the detector scattering matrix \(S^{\mathrm{d}i}(\nu)\) and optics scattering matrix \(S_{k}^{(i)}(\nu)\) for detector \(i\). The detector scattering matrix consists of three ports: the detection port \(i\), the lossy port \(\mathrm{L}i\), and the optical coupling port \(\mathrm{C}i\).

As shown in Eq. (36), the problem of calculating wave noise correlations is reduced to finding the HBT coefficients \(\gamma_{ij}^{(2)}(\nu)=|\gamma_{ij}(\nu)|^{2}\), the input mode's effective brightness temperature \(T_{(i)}\), and the detector's quantum efficiency \(\eta_{i}(\nu)\). As shown in Eqs. (14) and (32), the HBT coefficient \(\gamma_{ij}^{(2)}(\nu)\) is solely determined by \(S_{k}^{(i)}\) and \(S_{k}^{(j)}\), which are in turn defined by the telescope's optical configuration and the detector's angular response function.
When \(i=j\), Eq. (36) is consistent with the standard bolometer noise model (see, e.g., Ref. [14] and references therein). Each optical element can be simply expressed by its transmission, emission, and scattering, as shown in Fig. 4. The scattering matrix can then be written as
\[S_{k}^{(i)}(\nu)=\begin{cases}f_{k}\,\sqrt{\epsilon_{\rho}(\nu)}\sqrt{\mathcal{ H}_{\rho}(\nu)}&(k\in\mathcal{M}_{\rho})\\ f_{k}\,\sqrt{\delta_{\rho}(\nu)}\sqrt{\mathcal{H}_{\rho}(\nu)}&(k\in \mathcal{M}_{\delta;\rho})\end{cases} \tag{37}\]
with
\[\mathcal{H}_{\rho}(\nu)\equiv\prod_{l=\rho+1,\cdots,N}\eta_{l}(\nu)\:, \tag{38}\]
where \(\eta_{\rho}(\nu)\), \(\epsilon_{\rho}(\nu)\), and \(\delta_{\rho}(\nu)\) are the fraction of transmission, emission, and scattering, respectively, of each optical element \(\rho\) and satisfy \(\eta_{\rho}(\nu)+\epsilon_{\rho}(\nu)+\delta_{\rho}(\nu)=1\); \(\mathcal{M}_{\rho}\) and \(\mathcal{M}_{\delta;\rho}\) represent the modes due to thermal emission from element \(\rho\) and scattering from element \(\rho\) (denoted as \(\delta\);\(\rho\)), respectively; and \(f_{k}\) is the fractional contribution of each mode given the following normalization
\[\sum_{k\in\mathcal{M}_{\rho}}|f_{k}|^{2}=\sum_{k\in\mathcal{M}_{\delta;\rho}} |f_{k}|^{2}=1\:. \tag{39}\]
By plugging Eqs. (37)-(39) into Eq. (34), we obtain5
Footnote 5: While the right-hand side of Eq. (40) does not have an explicit dependence on the pixel index \(i\), an implicit dependence enters into \(\eta_{\rho}(\nu)\), \(\epsilon_{\rho}(\nu)\), and \(\delta_{\rho}(\nu)\), since different pixels have slightly different viewing angles of, and path-length differences to, each optical element. These differences are minor, however, and can be ignored in most practical cases.
\[n(T_{(i)},\nu)=\] \[\sum_{\rho=1,\cdots,N}\mathcal{H}_{\rho}(\nu)\,\left\{\epsilon_{ \rho}(\nu)\,n(T_{\rho},\nu)+\delta_{\rho}(\nu)\,n(T_{\delta;\rho},\nu)\right\}. \tag{40}\]
Equations (36) and (40) lead to a formalism consistent with that presented in the literature (e.g., Ref. [14]). In our convention defined by Eq. (36), \(\sqrt{\tau}\sigma_{ii}\) is the photon-noise NEP with an S.I. unit of \(\mathrm{W}\cdot\sqrt{\mathrm{s}}\) and may differ by a factor of \(\sqrt{2}\) when compared to the literature, where the NEP is often presented in \(\mathrm{W}/\sqrt{\mathrm{Hz}}\).
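The sketch below evaluates this single-detector prescription numerically, combining Eq. (40) (with scattering terms omitted) and the \(i=j\) limit of Eq. (36) for a toy optical chain and a top-hat 130-170 GHz band; every temperature, emissivity, efficiency, and the band itself are illustrative assumptions rather than values from this paper.

```python
import numpy as np

h, kB = 6.626e-34, 1.381e-23

def n_occ(T, nu):
    x = h * nu / (kB * T)
    return np.exp(-x) / (1.0 - np.exp(-x))

nu = np.linspace(130e9, 170e9, 2001)          # detection band [Hz]
dnu = nu[1] - nu[0]
# optical chain ordered from sky to detector: (temperature [K], emissivity, transmission)
chain = [(270.0, 0.05, 0.95),                 # e.g., a warm, slightly lossy element
         (4.0,   0.30, 0.70)]                 # e.g., cold optics / aperture stop
eta_det = 0.7                                  # detector quantum efficiency

# Eq. (40) without scattering terms: occupation number at the detector input
n_in = np.zeros_like(nu)
H = 1.0                                        # cumulative transmission of later elements
for T, eps, eta in reversed(chain):
    n_in += H * eps * n_occ(T, nu)
    H *= eta

# Eq. (36) with i = j: shot + wave terms of NEP^2, in the paper's W^2*s convention
integrand = (h * nu) ** 2 * (eta_det * n_in + (eta_det * n_in) ** 2)
NEP = np.sqrt(np.sum(integrand) * dnu)
print(f"photon NEP ~ {NEP * 1e18:.1f} aW*sqrt(s)")
```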
## III Model optical system
Using the photon noise formulation in Eq. (36), we now move to quantify the impact of HBT correlations on the sensitivity of telescopes for mm-wave astronomy. The topic of intensity correlations from astronomical sources with various states of coherence is discussed extensively in the literature [5; 22; 23; 24; 25; 26], and in the sections that follow, we apply these findings to mm-wave telescope design. Modern mm-wave telescopes employ a wide variety of lens and mirror systems, infrared filter stacks, anti-reflection coatings, and sensing architectures. Despite this variety in real experiments, we can distill a few key instrument characteristics to create a simple yet representative optical system in which to study the sensitivity impact of HBT correlations. More specifically, this simple system, characterized by its focal ratio and the effective brightness temperatures inside and outside of its pupil stop, emulates the radiation environment at the focal plane of general, modern millimeter and sub-millimeter telescopes with sufficient accuracy.
Our goal is to calculate intensity correlations within a practical telescope system with \(\sim 10\,\%\) accuracy. As shown in the following sections, the HBT correlation coefficient has a maximum sensitivity impact of \(\mathcal{O}(10\,\%)\); therefore our HBT accuracy goal maps to a sensitivity accuracy of \(\sim 1\,\%\), which is typically sufficient for the purposes of telescope design. While our idealized optics and detector focal planes may not exactly reproduce those of real telescopes, they do capture the characteristics needed to calculate the HBT coefficient of real systems with \(\sim 10\,\%\) accuracy.
### Telescope
The assumed example telescope model is depicted in Fig. 5. It consists of an objective lens with focal length \(f_{\rm obj}\) within a blackened enclosure at \(T_{\rm stop}=4\) K. The cold box has a circular aperture of diameter \(D_{\rm ap}\) at \(z=z_{\rm ap}\) that truncates incoming radiation from external field-filling sources. The objective lens, which is both cold and transparent, focuses the aperture-truncated radiation onto a circular focal plane at \(z=z_{\rm fp}\) with a size determined by the telescope's plate scale. The focal plane houses an array of close-packed detector pixels with diameter \(D_{\rm pix}\) operating at \(T_{\rm fp}=0.1\) K. The detector + telescope system is assumed to be diffraction limited such that the optical throughput per mode is \(A\Omega=\lambda^{2}\).

Figure 4: A schematic showing how optical elements contribute to the impinging photons on a detector. Each optical element \(\rho=1,2,3,\cdots\) has a thermal emissivity \(\epsilon_{\rho}(\nu)\) and temperature \(T_{\rho}\). At the same time, each optical element \(\rho=1,2,3,\cdots\) scatters a fraction \(\delta_{\rho}(\nu)\) of photons from sources \(\rho=\delta;1,\ \delta;2,\ \delta;3,\cdots\) into the line of sight. The transmission efficiency of the optical element \(\rho\) is then given as \(\eta_{\rho}(\nu)=1-\epsilon_{\rho}(\nu)-\delta_{\rho}(\nu)\).
This model optical system does not include many common features of real telescopes--such as fore-optics, thermal filters, or additional lenses--which are needed to form high-fidelity images over a moderate field of view (FOV). Such details are experiment-dependent and are therefore beyond the scope of this paper, but we can capture their effects by imposing several assumptions onto our simple system. These assumptions are not strictly necessary to calculate photon-noise correlations, as the scattering matrix formalism in Sec. 2.1 is completely general, but they simplify the correlation calculation significantly while encapsulating the salient features of practical instruments.
Firstly, we assume that all sources--both external and internal to the telescope--are isothermal blackbody emitters large enough to uniformly illuminate the aperture across the telescope's FOV. The assumption of blackbodies allows us to readily evaluate each mode's occupation number using the Bose-Einstein distribution \(n(T_{\rm b},\nu)\) in Eq. (4) given each source's effective brightness temperature \(T_{\rm b}\). The assumption that each source is FOV-filling6 and isothermal generalizes the correlation integrals that follow and is a good approximation for experiments whose incoming photon power is mainly from extended sources. In practice, thermal gradients develop across optical elements and atmospheric brightness varies with elevation and cloud structure; however, these variations are typically small and experiment-dependent and are therefore beyond the scope of this paper.
Footnote 6: An obvious exception is the aperture stop, which we treat separately from aperture-filling radiation.
Secondly, we assume a diffraction-limited, single-moded optical system that converts stop-truncated plane waves into spherical waves converging onto a telecentric focal plane. In other words, pixel rays with angle \((\theta_{\rm pix},\phi_{\rm pix})\) are mapped via the objective lens onto parallel rays with aperture-plane location \(\vec{r}_{\rm ap}\) (see Fig. 5). In this configuration, the optical path length between any given detector pixel and a spot on the aperture stop is identical regardless of \((\theta_{\rm pix},\phi_{\rm pix})\) or \(\vec{r}_{\rm ap}\), which simplifies the calculations that follow. We note that when re-imaging optics and a pupil stop are employed, modes which map onto spherical waves at the focal plane do not in general correspond to plane waves passing through the pupil stop. However, such reimaging optics can always be modeled as a simplified equivalent system with an aperture stop provided that each optic's clear-aperture diameter is large enough to pass all pupil-permitted modes.
Thirdly, we assume an ideal aperture stop, such that all detector pixels have the same mapping between ray angle \((\theta_{\rm pix},\phi_{\rm pix})\) and aperture plane location \(\vec{r}_{\rm ap}\) (see Fig. 5). In other words, the aperture illumination is identical regardless of the detector pixel location.7 Strictly speaking, this condition is not generally satisfied for a system with a large FOV, as telecentricity and aperture truncation may differ significantly between the central and peripheral regions of the focal plane. However, as we discuss later, photon-noise correlations arise predominantly between neighboring pixels where such non-idealities are negligible.
Footnote 7: In theory, the necessary condition for an ideal aperture is for the mapping to be identical only on the stop circumference, but in practice, when this condition is met, the mapping becomes identical within the aperture stop as well.
Fourthly, we assume that the telescope optics achieve polarization fidelity across the focal plane. As shown in Fig. 6, incident linearly-polarized plane waves with propagation direction \((\theta_{\rm in},\phi_{\rm in})\) and orthogonal polarization vectors \(\hat{e}_{1}^{L3}(\theta_{\rm in},\phi_{\rm in})\) and \(\hat{e}_{2}^{L3}(\theta_{\rm in},\phi_{\rm in})\) are focused onto detector pixels as spherical waves with Ludwig-3 polarization distributions (Eq. 22[17]):
\[\begin{split}\hat{e}_{1}^{L3}(\theta_{\rm in},\phi_{\rm in})& \rightarrow\hat{e}_{1}^{L3}(\theta_{\rm pix},\phi_{\rm pix})\:,\\ \hat{e}_{2}^{L3}(\theta_{\rm in},\phi_{\rm in})& \rightarrow\hat{e}_{2}^{L3}(\theta_{\rm pix},\phi_{\rm pix})\:.\end{split} \tag{41}\]
For an on-axis incident plane-wave, this relation simplifies to
\[\hat{x}\rightarrow\hat{e}_{1}^{L3}(\theta_{\rm pix},\phi_{\rm pix})\:,\quad \hat{y}\rightarrow\hat{e}_{2}^{L3}(\theta_{\rm pix},\phi_{\rm pix})\:. \tag{42}\]
Figure 5: The assumed optical model for all calculations and simulations in this paper. The model includes an objective lens, aperture stop, and focal plane filled with an array of sensing antennas coupled to planar detectors. In the reverse-time sense, each pixel emits a collection of rays, defined by spherical coordinate \((\theta_{\rm pix},\phi_{\rm pix})\), which the objective uniquely maps onto aperture-plane coordinate \(\vec{r}_{\rm ap}\) with idealized polarization fidelity and telecentricity. The system is enclosed in a black box of temperature \(T_{\rm stop}\), and the focal plane is cooled to \(T_{\rm fp}\).
It follows from this assumption that HBT correlations cannot develop between orthogonal polarimeters. In practice, some cross polarization does exist within real telescopes, and the degree of polarization leakage can vary across the focal plane. However, modern polarimetry experiments are specifically designed to suppress cross polarization [27, 28, 29], especially over localized areas on the focal plane where intensity correlations are important.
Finally, we assume that the objective lens is cold and transparent such that its emission and scattering terms are negligible compared to those of other internal and external thermal sources.
### Focal plane
The assumed focal plane model is shown in Fig. 7. We assume single-moded, dual-polarization detector pixels with circular apertures and diffraction-limited Gaussian beams. Each pixel's angular response function is determined solely by its beam waist \(w_{0}\) and takes the far-field form
\[E(\theta)\approx E_{0}\,\exp\left[-\frac{\theta^{2}}{(\lambda/\pi w_{0})^{2}} \right]\,. \tag{43}\]
Each polarimeter has a diffraction-limited throughput of \(A\Omega=\lambda^{2}\), regardless of the pixel's aperture size, and we assume that the beam pattern in Eq. (43) is symmetric between the antenna's \(E\) and \(H\) planes and follows the Ludwig-3 polarization response (Eqs. 23 and 24).8 A larger/smaller pixel results in a narrower/wider far-field response, and we linearly relate the pixel diameter \(D_{\mathrm{pix}}\) to the beam waist via a scaling constant \(w_{f}\)
Footnote 8: A Ludwig-3 polarization beam pattern often results from an angular response function with \(E\)-plane/\(H\)-plane symmetry. See also the footnote in Sec. 2.3.
\[w_{0}=\frac{D_{\mathrm{pix}}}{w_{f}}\,. \tag{44}\]
Typical mm-wave detector pixels, such as corrugated feedhorns, spline-profiled feedhorns, and lenslet-coupled planar antennas, achieve \(w_{f}\approx 3\), which we assume for the calculations that follow.
Plugging Eq. (44) into Eq. (43) yields a simple relationship between \(D_{\mathrm{pix}}\) and aperture stop spillover efficiency
\[\eta_{\mathrm{ap}}=\frac{\int_{0}^{\theta_{\mathrm{stop}}}E^{2}(\theta)\, \mathrm{d}\theta}{\int_{0}^{\pi/2}E^{2}(\theta)\,\mathrm{d}\theta}=1-\exp \left[-\frac{\pi^{2}}{2}\left(\frac{D_{\mathrm{pix}}}{F\lambda w_{\mathrm{f} }}\right)^{2}\right]\,, \tag{45}\]
where \(F\equiv D_{\mathrm{ap}}/f_{\mathrm{obj}}\) is the F-number at the focal plane and \(\theta_{\mathrm{stop}}=\arctan\left[1/(2F)\right]\). This assumption of Gaussian spillover efficiency does not necessarily hold when \(D_{\mathrm{pix}}<\lambda\), as diffraction at the pixel edges will create substantial ringing in the far-field beam pattern. However, in an effort to remain agnostic to the specifics of the detector coupling architecture, we assume that Eqs. (43) and (45) remain valid for all values of \(D_{\mathrm{pix}}\) in the calculations that follow.
While detector pixels can be arranged in a variety of ways, we must select a specific focal plane arrangement to find \(\left|\gamma_{ij}(\nu)\right|^{2}\) explicitly. In the calculations that follow, we assume hex-packed circular pixels, and we assume that pixel pitch is equal to pixel diameter \(p_{\rm pix}=D_{\rm pix}\). This assumption allows us to relate pixel packing density to pixel size as \(n_{\rm pix}\propto D_{\rm pix}^{-2}\), where \(n_{\rm pix}\) is the number of pixels per unit focal plane area. In practice, a small amount of dead space typically exists between pixels that does not scale with pixel size, and in this case, a more complex relationship between \(n_{\rm pix}\) and \(D_{\rm pix}\) is needed [10]. However, these details are experiment specific and are therefore beyond the scope of the following discussions.

Figure 6: A schematic of the ideal optics that focus incident plane waves onto detectors as spherical waves with Ludwig-3 polarization distributions. In this example, the incident wave has a propagation direction \((\theta_{\mathrm{in}},\phi_{\mathrm{in}})\) and polarization direction \(\hat{\varepsilon}=\hat{e}_{1}^{L3}(\theta_{\mathrm{in}},\phi_{\mathrm{in}})\), and it is converted to a spherical wave with polarization direction \(\hat{\varepsilon}=\hat{e}_{1}^{L3}(\theta_{\mathrm{pix}},\phi_{\mathrm{pix}})\) for all \((\theta_{\mathrm{pix}},\phi_{\mathrm{pix}})\) mapped within the aperture stop.

Figure 7: The assumed layout of detector pixels on the focal plane at \(D_{\mathrm{pix}}=1.2F\lambda\) spacing. Each pixel's angular response is assumed to be a Gaussian whose width scales with pixel diameter, as described in Eq. (43). Additionally, each pixel has two polarimeters (dotted lines) that sense orthogonal polarizations and whose noise outputs do not correlate, given the idealized optical system described in Sec. 3. Neighboring pixels are rotated by \(\pm\) 45 deg to minimize Stokes Q HBT correlations.
We note that the correlation calculation in Sec. 4 does not rely on most of the assumptions presented in this subsection. Specifically, only the detector beam's assumed polarization properties are relevant to the HBT coefficient estimation. All the other assumptions related to the shape of the detector beam, the beam waist, and pixel packing are used only to calculate an explicit instrument sensitivity in Sec. 5 and Sec. 6.
## 4 Correlation Calculation
Given the optical and detector models presented in Sec. 3, we now find the correlation patterns at the focal plane due to thermal radiation within the aperture and from the stop. A schematic of the radiation model for the central detector pixel is shown in Fig. 8.
### Stop radiation
The stop is located in the far field of the detectors and is effectively a black, annular source with temperature \(T_{\rm stop}\). We therefore model the stop as a collection of infinitesimal, Gaussian-random, uncorrelated thermal emitters that generate Lambertian spherical wavelets, as shown in Fig. 8. These point sources represent atomic thermal motion within the stop's absorbing material, and their wavelets superpose to form incoherent waves that the objective lens focuses onto the focal plane.
The above stop radiation treatment relies on two assumptions, which we justify here. First, while we consider the stop as a collection of uncorrelated thermal sources, it is known that blackbody radiators have non-zero correlation over the distance of a wavelength [30; 31; 5; 22]. However, as we will show in Sec. 4.2, Sec. 4.3, and Appendix B, this discrepancy leads to negligible errors for the calculations in this paper. Second, we assume for simplicity that all stop radiation reaches the detectors by propagating through all optics between the aperture plane and the focal plane, which impractically requires infinite optical throughput. That said, the cold box's radiation environment has temperature \(T_{\rm stop}\), and the detectors sense that radiation when \(\theta_{\rm pix}>\theta_{\rm stop}\), regardless of the optical configuration. Therefore, our simplifying assumption of infinite optical throughput accurately accounts for stop-generated photons at each detector's input.
### Aperture radiation
Figure 8: A schematic of the aperture, stop, and pixel radiation models. Radiation within the aperture is decomposed into a basis set of plane waves, and the normally incident mode \((k_{x},k_{y},k_{z})=(0,0,2\pi/\lambda)\) is depicted here. The plane waves are assumed to uniformly illuminate the aperture plane.

Radiation incident on the sky side of the aperture (right side of Fig. 8) can be regarded as blackbody emission9 with an effective brightness temperature \(T_{\rm(ap)}\). Here, \(T_{\rm(ap)}\) is defined such that \(n(T_{\rm(ap)},\nu)\) is the mean occupation number of the blackbody radiation within the aperture. This mean occupation number can be written by adopting the definitions in Sec. 2.4 (Fig. 4 and Eqs. 37-39) and using \(\mathcal{H}_{\rho}(\nu)\), \(\epsilon_{\rho}(\nu)\), \(\delta_{\rho}(\nu)\) as
\[n(T_{\rm(ap)},\nu)=\frac{\sum_{\rho<\rho_{\rm ap}}\left\{\epsilon_{\rho}(\nu)\,{ \cal H}_{\rho}(\nu)\,n(T_{\rho},\nu)+\delta_{\rho}(\nu)\,{\cal H}_{\rho}(\nu)\,n (T_{\delta;\rho},\nu)\right\}}{\sum_{\rho<\rho_{\rm ap}}\left\{\epsilon_{\rho}( \nu)\,{\cal H}_{\rho}(\nu)+\delta_{\rho}(\nu)\,{\cal H}_{\rho}(\nu)\right\}}\:, \tag{46}\]
where \(\rho=\rho_{\rm ap}\) labels the aperture stop element. This blackbody radiation can be decomposed into plane-wave modes, and the FOV-filling nature of the sources (Sec. 3.1) ensures the above stated radiation property for all plane-wave modes whose propagation direction is within the FOV.10
Footnote 10: While plane-wave modes with propagation vector outside the FOV may have different radiation properties, they are irrelevant in our context of evaluating photon correlation among detectors.
We then consider a virtual, infinitely large sheet at \(z=z_{\rm ap}\) that consists of Gaussian-random, infinitesimal, uncorrelated thermal emitters of temperature \(T_{\rm(ap)}\) that generate Lambertian spherical wavelets. Radiation from these virtual emitters has equivalent statistical properties to the sky-side radiation in Fig. 8 so long as the distances from the sheet are significantly larger than the wavelength \(\lambda\). To calculate this virtual sheet's aperture truncation, we can simply remove elements outside of the aperture area \(\sqrt{x^{2}+y^{2}}\geq D_{\rm ap}/2\). In Sec. 4.3 and Fig. 9, we compare aperture-truncated plane waves with this sheet of emitters and demonstrate their equivalence.
There is a small error that arises from the presented aperture emitter treatment. As noted above, the virtual sheet's statistics become that of blackbody radiation at distances sufficiently larger than \(\lambda\), and as noted in Sec. 4.1, non-zero correlations arise between emitters separated by \(\lesssim\lambda\). Thus, the emitter-sheet model deviates from that of aperture truncation in the region of \(z\simeq z_{\rm ap}\) and \(D_{\rm ap}/2-\lambda\lesssim\sqrt{x^{2}+y^{2}}\lesssim D_{\rm ap}/2+\lambda\). Therefore when \(\lambda\gtrsim D_{\rm ap}\), aperture truncation produces additional non-trivial correlations that include polarization [33], but when \(\lambda\ll D_{\rm ap}\), as is true for any reasonable telescope design, the correction can be safely neglected.
In summary, the VCZT coefficient \(\gamma_{ij}(\nu)\) can be calculated using the radiation field from a sheet of infinitesimal thermal emitters at \(z=z_{\rm ap}\) with temperature \(T_{\rm stop}\) at \(\sqrt{x^{2}+y^{2}}\geq D_{\rm ap}/2\) and \(T_{\rm(ap)}\) at \(\sqrt{x^{2}+y^{2}}<D_{\rm ap}/2\).
### Intensity correlation patterns
Here we calculate spatial correlation patterns for the geometry in Fig. 5 while neglecting the polarization degree of freedom, similarly to Sec. 2.2. We then consider polarization in the next subsection. Consider the classical-wave electric field amplitude \(E_{i}\) detected by detector \(i\) at location \((x,y,z)=(x_{i},y_{i},z_{\rm fp})\). The detected partial amplitude \(\Delta E_{i}\) due to an infinitesimal source area at the aperture plane \((x,y,z)=(u,v,z_{\rm ap})\) can then be written as
\[\begin{split}\Delta E_{i}&={\cal C}\,G_{i}(u,v)\, \sqrt{\cos\theta_{i}}\\ &\quad\cdot\,e^{[2\pi i\nu(u\sin\theta_{i}\cos\phi_{i}+v\sin \theta_{i}\sin\phi_{i})/c]}\,\Delta u\Delta v\:.\end{split} \tag{47}\]
Here, \(G_{i}(u,v)\) is the aperture illumination function for detector \(i\), and \((\theta_{i},\phi_{i})\) denotes the propagation direction of the incident plane wave focused by the objective lens onto detector \(i\), which can be related to the detector's focal plane position \((x_{i},y_{i})\) via the objective's focal length \(f_{\rm obj}\) as
\[\begin{split}\sin\theta_{i}\cos\phi_{i}&\simeq \frac{x_{i}}{f_{\rm obj}}=\frac{x_{i}}{FD_{\rm ap}}\:,\\ \sin\theta_{i}\sin\phi_{i}&\simeq\frac{y_{i}}{f_{\rm obj }}=\frac{y_{i}}{FD_{\rm ap}}\:.\end{split} \tag{48}\]
The illumination function \(G_{i}(u,v)\) is mapped from the detector's angular-response function (Eq. (43)) via the telescope's optics. We adopt the following normalization
\[\iint\mathrm{d}u\,\mathrm{d}v\,|G_{i}(u,v)|^{2}=1\:, \tag{49}\]
which leads to
\[\begin{split} B^{\rm np}_{{\rm ap},ij}(\nu)&=n(T_{({\rm ap})},\nu)\iint\limits_{\sqrt{u^{2}+v^{2}}<D_{\rm ap}/2}{\rm d}u\,{\rm d}v\;G_{i}^{*}(u,v)\,G_{j}(u,v)\,e^{2\pi i(u\,x_{ji}+v\,y_{ji})/(\lambda FD_{\rm ap})}\\ &\simeq n(T_{({\rm ap})},\nu)\,2\pi\int_{0}^{D_{\rm ap}/2}\rho\,{\rm d}\rho\,|G_{i}(\rho)|^{2}\,J_{0}\!\left(\frac{2\pi\rho\,p_{ij}}{\lambda FD_{\rm ap}}\right), \end{split} \tag{50}\]
where \(x_{ji}\equiv x_{j}-x_{i}\), \(y_{ji}\equiv y_{j}-y_{i}\), \(\lambda\equiv c/\nu\), and \(p_{ij}\equiv\sqrt{x_{ji}^{2}+y_{ji}^{2}}\), and where the second equality assumes that \(G_{i}(u,v)\simeq G_{j}(u,v)\) and that \(G_{i}(u,v)\) is approximately circularly symmetric. The VCZT coefficient (Eq. (10)) for the aperture radiation is
\[\begin{split}\gamma^{\rm np}_{{\rm ap},ij}(\nu)&=\frac{B^{\rm np}_{{\rm ap},ij}(\nu)}{\sqrt{B^{\rm np}_{{\rm ap},ii}(\nu)\,B^{\rm np}_{{\rm ap},jj}(\nu)}}\\ &=\frac{1}{\eta_{\rm ap}}\iint\limits_{\sqrt{u^{2}+v^{2}}<D_{\rm ap}/2}\!\!\mathrm{d}u\,\mathrm{d}v\;|G_{i}(u,v)|^{2}\,e^{2\pi i(u\,x_{ji}+v\,y_{ji})/(FD_{\rm ap}\lambda)}\:,\end{split} \tag{51}\]

where \(\eta_{\rm ap}\) is the aperture spillover efficiency of Eq. (45). Combining the aperture and stop contributions with their respective occupation numbers, the full VCZT coefficient becomes
\[\begin{split}\gamma^{\rm np}_{ij}(\nu)&=\frac{n(T_{(\rm ap)},\nu)}{n(T_{(i)},\nu)}\iint\limits_{\sqrt{u^{2}+v^{2}}<D_{\rm ap}/2}\!\!\mathrm{d}u\,\mathrm{d}v\;|G_{i}(u,v)|^{2}\,e^{2\pi i(u\,x_{ji}+v\,y_{ji})/(FD_{\rm ap}\lambda)}\\ &\quad+\frac{n(T_{\rm stop},\nu)}{n(T_{(i)},\nu)}\iint\limits_{\sqrt{u^{2}+v^{2}}\geq D_{\rm ap}/2}\!\!\mathrm{d}u\,\mathrm{d}v\;|G_{i}(u,v)|^{2}\,e^{2\pi i(u\,x_{ji}+v\,y_{ji})/(FD_{\rm ap}\lambda)}\:,\end{split} \tag{52}\]

where \(n(T_{(i)},\nu)=\eta_{\rm ap}\,n(T_{(\rm ap)},\nu)+(1-\eta_{\rm ap})\,n(T_{\rm stop},\nu)\) is the occupation number detected at output \(i\). Strictly speaking, the van Cittert-Zernike theorem is usually stated for quasi-monochromatic radiation, but the mutual intensity and its normalization were already defined per frequency \(\nu\)
in Sec. 2.1 and therefore no such bandwidth limit exists for calculations in this paper.
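As a concrete check of the flat-illumination limit of Eq. (51), the following minimal Python sketch (not part of the original derivation; the F-number, wavelength, and grid resolution are arbitrary assumptions) integrates the phase factor over a uniformly weighted aperture and compares the result with the analytic Airy form \(2J_{1}(\pi p_{ij}/F\lambda)/(\pi p_{ij}/F\lambda)\), whose first null near \(p_{ij}\simeq 1.22F\lambda\) underlies the \(D_{\rm pix}\lesssim 1.2F\lambda\) criterion used in the next section.

```python
# Minimal sketch: flat-illumination aperture VCZT coefficient vs. detector separation.
# All numerical values below (F-number, wavelength, grid size) are arbitrary assumptions.
import numpy as np
from scipy.special import j1

F, lam, D_ap = 2.0, 2.0e-3, 1.0   # assumed F-number, wavelength [m], aperture diameter [m]

def gamma_ap_numeric(p, n_grid=401):
    """Brute-force integral of exp[2*pi*i*u*x_ji/(F*D_ap*lam)] over a uniformly weighted aperture."""
    u = np.linspace(-D_ap / 2, D_ap / 2, n_grid)
    uu, vv = np.meshgrid(u, u)
    inside = uu**2 + vv**2 < (D_ap / 2)**2
    phase = np.exp(2j * np.pi * uu * p / (F * D_ap * lam))   # separation taken along x, so y_ji = 0
    return np.abs(phase[inside].sum() / inside.sum())

for p_flam in (0.25, 0.5, 1.0, 1.22, 2.0):                   # separations in units of F*lambda
    p = p_flam * F * lam
    x = np.pi * p / (F * lam)
    print(f"p = {p_flam:4.2f} F*lam:  numeric |gamma| = {gamma_ap_numeric(p):.3f},  "
          f"analytic |2 J1(x)/x| = {abs(2 * j1(x) / x):.3f}")
```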
## 5 Impact of correlations on sensitivity
Using Eqs. (59), (17), and (62), we now investigate the impact of detector-to-detector correlations on instrument sensitivity, which is the primary goal of this paper. As shown in Fig. 9, detector outputs can correlate if their pixel pitch is \(D_{\rm pix}<1.2F\lambda\), and these correlations will slow noise averaging during coaddition and therefore degrade array sensitivity.13 In this section, we introduce a formalism for mapping speed, which measures the total sensitivity of the detector array, and we inspect the impact of HBT correlations on mapping speed vs. pixel size, which is a key metric used for focal plane design.14
Footnote 13: Strictly speaking, a positive correlation between neighboring pixels can technically improve sensitivity when the angular diameter of interest approaches the pixel spacing projected on the sky (e.g., for Sunyaev Zel’dovich galaxy cluster surveys).
Footnote 14: The code used to generate plots in this section can be found at [https://github.com/chil90/HBT-Correlations](https://github.com/chil90/HBT-Correlations).
### Mapping speed
We assume that each detector in the imaging array has three noise components: photon shot noise, photon wave noise, and internal noise. The covariance between detection output ports \(i\) and \(j\) is
\[\sigma_{ij}^{2}=\sigma_{ij,\rm shot}^{2}+\sigma_{ij,\rm wave}^{2}+\sigma_{ij, \rm int}^{2}\:, \tag{63}\]
and the total variance of the detector array is
\[\sigma_{\rm arr}^{2}=\frac{1}{N_{\rm det}^{2}}\sum_{i,j}\sigma_{ij}^{2}\:, \tag{64}\]
which effectively quantifies the instrument's array-averaged sensitivity.
To both simplify and clarify the calculations that follow, we assume that all detectors in the array have the same noise properties. Internal detector noise and photon shot noise cannot correlate between outputs, which allows the covariance to be written as
\[\sigma_{ij}^{2}=\left(\sigma_{\rm shot}^{2}+\sigma_{\rm int}^{2}\right)\delta _{ij}+\gamma_{ij}^{(2)}\sigma_{\rm wave}^{2}\:, \tag{65}\]
where \(\gamma_{ij}^{(2)}\) is the HBT coefficient and where \(\sigma_{\rm shot}^{2}\), \(\sigma_{\rm wave}^{2}\), and \(\sigma_{\rm int}^{2}\) are the variances of the shot, wave, and internal noise components, respectively, for every detector in the array. While array uniformity is not in general true for real experiments, it is common practice to use the median noise expectation when forecasting instrument performance, making a uniform treatment useful for instrument designers. Additionally, the details of detector-to-detector variation are experiment-dependent and are therefore beyond the scope of this paper.
Given the simplification in Eq. (65) and noting that \(\gamma_{ii}^{(2)}=1\), the total variance of the detector array can be written as
\[\sigma_{\rm arr}^{2}=\frac{\sigma_{\rm shot}^{2}+\sigma_{\rm wave}^{2}+\sigma_{\rm int}^{2}}{N_{\rm det}}+\frac{\sigma_{\rm wave}^{2}}{N_{\rm det}^{2}}\sum_{i}\sum_{j\neq i}\gamma_{ij}^{(2)}\:. \tag{66}\]
Figure 9: Simulated correlation patterns at the focal plane due to radiation within the aperture (left) and radiation from the stop (right). The points represent the Monte Carlo simulations described in Sec. 4.3, and the lines show the expectation of the van Cittert-Zernike theorem for an aperture/stop with uniform illumination.
The first term represents uncorrelated array noise while the second term quantifies noise augmentation due to HBT correlations. Let us further define an array-averaged correlation coefficient across the detector array as
\[\gamma^{(2)}\equiv\frac{1}{N_{\mathrm{det}}}\sum_{i}\sum_{j\neq i} \gamma^{(2)}_{ij}\,, \tag{67}\]
which allows us to write the array sensitivity more compactly as
\[\sigma^{2}_{\mathrm{arr}}=\frac{\sigma^{2}_{\mathrm{shot}}+(1+ \gamma^{(2)})\sigma^{2}_{\mathrm{wave}}+\sigma^{2}_{\mathrm{int}}}{N_{\mathrm{ det}}}\,. \tag{68}\]
In this form, the impact of intensity correlations on the detector array noise is reduced to calculating the array-averaged HBT coefficient \(\gamma^{(2)}\). In the limit of \(\gamma^{(2)}_{ij}\to 1\), \(\gamma^{(2)}\to(N_{\mathrm{det}}-1)\) and the array-averaged wave noise is not at all suppressed by detector coaddition. This fact drives the use of interferometers at low frequencies where \(n(\nu,T)\gg 1\), where \(\sigma_{\mathrm{wave}}\gg\sigma_{\mathrm{shot}}\), and where correlation lengths are long for astronomical sources. In the other limit of \(\gamma^{(2)}_{ij}\to 0\), wave noise averages in the familiar way for uncorrelated measurements \(\sigma^{2}_{\mathrm{wave}}/N_{\mathrm{det}}\). It is worth emphasizing that the augmentation of \(\sigma^{2}_{\mathrm{arr}}\) by \(\gamma^{(2)}\) not only depends on the HBT coefficient \(\gamma^{(2)}_{ij}\) but also on the relative contribution of wave noise to that of the other noise terms. As shown in Eq. (36), \(\sigma^{2}_{\mathrm{wave}}\propto n^{2}(\nu,T)\) while \(\sigma^{2}_{\mathrm{shot}}\propto n(\nu,T)\), and therefore \(\sigma^{2}_{\mathrm{wave}}\) becomes more important at lower frequencies and higher brightness temperatures.
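As a sanity check of the bookkeeping in Eqs. (64)-(68), the short sketch below uses a toy Gaussian correlation profile and arbitrary noise levels (stand-ins, not the VCZT result derived above) to build the covariance matrix of Eq. (65) for a one-dimensional array and to confirm that the direct average of Eq. (64) reproduces the compact form of Eq. (68).

```python
# Toy verification of Eqs. (64)-(68); the correlation profile and noise levels are assumed.
import numpy as np

n_det = 64
sig_shot2, sig_wave2, sig_int2 = 1.0, 0.8, 0.1      # assumed variances (arbitrary units)
pitch_in_flam = 0.5                                 # assumed pixel pitch in units of F*lambda

# Stand-in HBT coefficient: Gaussian falloff with separation (diagonal is exactly 1)
idx = np.arange(n_det)
sep = np.abs(idx[:, None] - idx[None, :]) * pitch_in_flam
gamma2 = np.exp(-(sep / 1.2)**2)

cov = (sig_shot2 + sig_int2) * np.eye(n_det) + gamma2 * sig_wave2       # Eq. (65)
sigma_arr2_direct = cov.sum() / n_det**2                                # Eq. (64)

gamma2_avg = (gamma2.sum() - n_det) / n_det                             # Eq. (67)
sigma_arr2_compact = (sig_shot2 + (1 + gamma2_avg) * sig_wave2 + sig_int2) / n_det   # Eq. (68)

print(sigma_arr2_direct, sigma_arr2_compact)   # the two numbers agree
```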
Internal detector noise \(\sigma_{\mathrm{int}}\) depends on many factors, including the detector's architecture, thermal noise properties, amplifier noise properties, linearity, and dynamic range, among other things. To remain agnostic to these experiment-specific characteristics, we set \(\sigma^{2}_{\mathrm{int}}/\sigma^{2}_{\mathrm{ph}}=0.1\) hereafter, noting that modern mm-wave observatories aim to be photon-noise dominated.15
Footnote 15: For the ubiquitous transition-edge sensor (TES) bolometric detectors [36, 37], internal thermal noise is proportional to the detector’s saturation power \(P_{\rm sat}=P_{\rm opt}+P_{\rm bias}\), where \(P_{\rm opt}\) is the detected optical power and \(P_{\rm bias}\) is the detector bias power [38]. Therefore, scaling \(\sigma_{\rm int}\) and \(\sigma_{\rm ph}\) together during experiment design and optimization, while not an exact metric, is well motivated.
Finally, we quantify the experiment's signal-to-noise using its mapping speed, which is defined as the square ratio of the input signal \(S\) to the array-averaged noise \(\sigma_{\mathrm{arr}}\)
\[MS=\frac{S^{2}}{\sigma^{2}_{\mathrm{arr}}}\propto\frac{N_{\mathrm{det}}\,\eta ^{2}}{\sigma^{2}_{\mathrm{shot}}+(1+\gamma^{(2)})\,\sigma^{2}_{\mathrm{wave} }+\sigma^{2}_{\mathrm{int}}}\,. \tag{69}\]
Here, \(\eta\) is the optical efficiency of the entire system, which is a product of the detector's quantum efficiency (Eq. (31)), the aperture spillover efficiency (Eq. (45)), and all the other transmission efficiencies, including those of the telescope's optical elements and the atmosphere. Mapping speed is a powerful measure of an experiment's efficacy, as it is \(\propto N_{\mathrm{det}}\) and is therefore analogous to detector yield and observation efficiency.
### Pixel size optimization
Using the optics and detector assumptions in Sec. 3 and the VCZT and HBT coefficients in Eqs. (59), (17), and (62), we now calculate mapping speed vs. pixel size. Provided a fixed FOV, or equivalently a fixed focal plane size, decreasing pixel diameter \(D_{\mathrm{pix}}\) increases the number of detectors as \(N_{\mathrm{det}}\propto D_{\mathrm{pix}}^{-2}\) but decreases aperture spillover efficiency as \(\eta_{\mathrm{ap}}=1-\exp[-\left(\pi D_{\mathrm{pix}}/(F\lambda w_{\mathrm{f}})\right)^{2}/2]\). These competing effects combine to form a peak in mapping speed vs. pixel size, which reveals the optimal packing density.
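The following sketch illustrates this competition with a deliberately simplified loading model (assumed sky and stop occupation numbers at 90 GHz, \(w_{\rm f}=3\), and a 10% internal-noise fraction; these are illustrative stand-ins rather than the band-integrated inputs used for the figures): photon noise is scaled as \(\sigma^{2}_{\rm shot}\propto n\) and \(\sigma^{2}_{\rm wave}\propto n^{2}\), and the classic (\(\gamma^{(2)}\equiv 0\)) mapping speed of Eq. (69) is maximized over pixel size.

```python
# Illustrative "classic" mapping-speed sketch (gamma^(2) = 0); all inputs are assumed values.
import numpy as np

h_over_k = 4.799e-11                   # h/k_B in K*s
nu = 90e9                              # observation frequency [Hz]

def occ(T):
    return 1.0 / np.expm1(h_over_k * nu / T)

n_sky, n_stop = occ(15.0), occ(4.0)    # assumed effective sky and 4 K stop temperatures
eta_det, w_f, int_frac = 0.8, 3.0, 0.1

D = np.linspace(0.2, 3.0, 281)                          # pixel size in units of F*lambda
eta_ap = 1.0 - np.exp(-0.5 * (np.pi * D / w_f)**2)      # aperture spillover efficiency (see text)

n_load = eta_det * (eta_ap * n_sky + (1.0 - eta_ap) * n_stop)   # detected occupation number
shot2, wave2 = n_load, n_load**2                                # sigma^2_shot ~ n, sigma^2_wave ~ n^2
int2 = int_frac * (shot2 + wave2)

ms = (eta_det * eta_ap)**2 / D**2 / (shot2 + wave2 + int2)      # Eq. (69) with N_det ~ D_pix^-2
print("classic optimum near D_pix =", round(float(D[np.argmax(ms)]), 2), "F*lambda")
```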
An example "classic" mapping speed vs. pixel size curve [39, 40, 41, 42, 43]--one for which \(\gamma^{(2)}\equiv 0\)--of a model 90 GHz instrument with a 4 K stop is shown in Fig. 10. Historically, ground-based CMB experiments have observed at 95, 150, and 220 GHz with \(D_{\mathrm{pix}}=1\sim 2F\lambda\)[44, 45, 46, 47, 48, 49, 50, 51] where the HBT correlation coefficient is small. However, as new readout architectures become available [52, 53] and as CMB experiments push to lower frequencies for improved synchrotron characterization [54, 55, 56], focal planes with \(D_{\mathrm{pix}}\lesssim 1.2F\lambda\) are becoming increasingly practical.
Figure 10: An example of a “classic” mapping speed calculation—which ignores the impact of HBT correlations—for a 90 GHz telescope with a 4 K aperture stop. The \(MS\) peak arises from the opposing effects of more detectors to average vs. less aperture efficiency with decreasing pixel size. The optimum when ignoring correlations is \(\sim 0.7\,F\lambda\), but as shown below, the addition of the HBT coefficient modifies this classic curve.
Therefore, the impact of HBT correlations on mapping speed is of interest to upcoming mm-wave experiments, such as CMB-S4 [57]. In this section, we calculate HBT-modified mapping speed vs. pixel size curves for our model telescope at several observation frequencies, and we discuss the results.
We assume that our telescope is ground-based with cryogenically cooled optics, infrared filters, and sub-Kelvin detectors. As shown in Eq. (46), radiation at the aperture plane can be summed over all sky-side sources and represented by a single effective brightness temperature \(T_{\rm(ap)}\). To simplify and generalize the following analysis, we assume only three sources viewed through the aperture: the CMB with \(T_{\rm CMB}\), the atmosphere with \(T_{\rm atm}\), and telescope optics with \(T_{\rm tel}\). The telescope's effective temperature \(T_{\rm tel}\) can vary considerably depending on the specifics of the mirrors, ground shield, cryostat window, and anti-reflection coatings, but as an example, we assert a default configuration where \(T_{\rm tel}=10\) K. We also assert the stop's physical temperature to be \(T_{\rm stop}=4\) K and that each detector's quantum efficiency is \(\eta_{\rm det}=0.8\). To model power due to atmospheric emission, we assume that the telescope observes from the Chajnantor Plateau in the Atacama Desert of Chile, and we use the AM model [58] to generate the atmosphere's effective brightness temperature and transmittance at 1 mm precipitable water vapor (PWV) and 50 deg elevation above the horizon.
Given this telescope + sky model, we consider four top-hat observation bands centered at (35, 95, 150, 220) GHz with bandwidths of (17, 33, 39, 44) GHz. The chosen bands, along with \(T_{\rm CMB}\), \(T_{\rm atm}\), and \(T_{\rm tel}\), are shown in Fig. 11. The channels are chosen to fit within atmospheric windows and are similar to those of existing Atacama instruments. Additionally, Fig. 11 shows the bunching fraction for each frequency band vs. sky temperature at \(D_{\rm pix}=3F\lambda\), where \(\eta_{\rm ap}>0.95\). As discussed in Sec. 5.1, while lower-frequency modes tend to have larger occupation numbers, the sky is brighter at higher frequencies, and therefore correlations substantially impact sensitivity in all four bands.
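For orientation, the few lines below (with assumed Rayleigh-Jeans-like brightness temperatures and an assumed 80% detector efficiency, purely illustrative) evaluate the detected occupation number \(n(T,\nu)\) at each band center and the corresponding wave-noise fraction \(n/(1+n)\), the kind of quantity plotted in the right panel of Fig. 11.

```python
# Detected occupation number and wave-noise fraction per band; temperatures are assumed.
import numpy as np

h, k = 6.626e-34, 1.381e-23
bands_ghz = (35, 95, 150, 220)

def occupation(T, nu):
    return 1.0 / np.expm1(h * nu / (k * T))

for nu_ghz in bands_ghz:
    nu = nu_ghz * 1e9
    for T_sky in (10.0, 20.0, 40.0):        # assumed effective sky brightness temperatures [K]
        n = 0.8 * occupation(T_sky, nu)     # assumed 80% detector efficiency
        print(f"{nu_ghz:3d} GHz, T_sky = {T_sky:4.1f} K:  n = {n:5.2f},  "
              f"wave fraction = {n / (1 + n):.2f}")
```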
Fig. 12 shows an example of four pixel pitch scenarios \(D_{\rm pix}=(0.2,0.5,0.7,1.0)\,F\lambda\) given the hex packing described in Sec. 3.2, and the contour shows the Stokes \(Q\) HBT coefficient \(\gamma_{ij}^{Q,(2)}\) due to both aperture and stop radiation. As expected, the degree of correlation is a two-dimensional version of the VCZT curves in Fig. 9. The effect of pushing \(D_{\rm pix}\lesssim F\lambda\) is for the detected modes (the pixel apertures) to oversample the input modes (the \(\gamma_{ij}^{Q,(2)}\) contour), giving rise to correlated noise between nearby detectors.
When calculating Stokes \(Q\) mapping speed, we sum the HBT coefficients in Fig. 12 over a 4 \(F\lambda\) radius to find
\[\gamma_{i}^{Q,(2)}\equiv\sum_{j\neq i}\gamma_{ij}^{Q,(2)}\;, \tag{70}\]
noting that because each pixel has two orthogonal polarimeters, \(\left|\gamma_{ij}\right|^{2}\) vanishes for half of all \((i,j)\) output pairs. The impact of HBT correlations roughly doubles when considering mapping speed for measurements of intensity or Stokes \(I\). We then assume that this sum applies to all detectors on the focal plane such that \(\gamma^{Q,(2)}\simeq\gamma_{i}^{Q,(2)}\). This treatment ignores the fact that edge pixels have fewer neighbors than internal ones, which is a reasonable approximation for focal planes of large area. For small or moderately-sized detector arrays, the fraction of edge pixels may become important, but such details are experiment-dependent and are therefore beyond the scope of this paper. Fig. 13 shows HBT-impacted mapping speed vs. pixel size curves for each observation band, normalized to the peak of the "classic" curve for which \(\gamma_{ij}^{Q,(2)}\equiv 0\). Three additional mapping speed curves with in-aperture loading from only the CMB, only the atmosphere, and only the telescope are also plotted to demonstrate the dependence of HBT correlations on various source temperatures.
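For reference, the neighbor sum of Eq. (70) can be sketched in a few lines (hypothetical pixel pitches; the flat-illumination Airy \(|\gamma_{ij}|^{2}\) is used as a stand-in for the full aperture-plus-stop coefficient, and the orthogonal-polarimeter pairs are simply dropped, as noted above):

```python
# Minimal sketch of the neighbor sum in Eq. (70): hexagonal pixel grid, stand-in Airy
# |gamma_ij|^2, summation radius of 4 F*lambda.  Only the co-aligned polarimeter in each
# neighboring pixel is counted, since the orthogonal pairs drop out as noted above.
import numpy as np
from scipy.special import j1

def hbt_airy(p_over_flam):
    """Stand-in |gamma|^2 for a uniformly illuminated aperture."""
    x = np.pi * np.asarray(p_over_flam, dtype=float)
    out = np.ones_like(x)
    nz = x > 0
    out[nz] = (2 * j1(x[nz]) / x[nz])**2
    return out

def neighbor_sum(d_pix, radius=4.0):
    nmax = int(radius / d_pix) + 2
    seps = []
    for a in range(-nmax, nmax + 1):
        for b in range(-nmax, nmax + 1):
            x, y = d_pix * (a + 0.5 * b), d_pix * (np.sqrt(3) / 2) * b   # hex lattice, F*lambda units
            if 0 < np.hypot(x, y) <= radius:
                seps.append(np.hypot(x, y))
    return hbt_airy(np.array(seps)).sum()

for d_pix in (0.3, 0.5, 0.7, 1.0, 1.5):
    print(f"D_pix = {d_pix:.1f} F*lambda  ->  sum_j |gamma_ij|^2 ~ {neighbor_sum(d_pix):.2f}")
```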
There are several features in Fig. 13 that are worth noting explicitly. Firstly, the impact of HBT correlations depends on source temperature and is most pronounced in the presence of a brightly illuminated aperture. This effect is most clearly seen when contrasting the CMB and atmosphere, especially at 220 GHz where the CMB's photon occupation number is falling while that of the atmosphere is rising. Secondly, while HBT correlations impact curve shape most prominently at low frequencies, the atmosphere is brighter at higher frequencies, inducing a similar HBT suppression across all bands. Thirdly, the mapping speed peak is located at a slightly larger \(D_{\rm pix}\) than that of the classic curves at 35 and 95 GHz but resides at a similar location to that of the classic curves at 150 and 220 GHz. This effect arises because the stop is significantly fainter than the sky at 150 and 220 GHz, and therefore as \(D_{\rm pix}\) falls below \(1.2F\lambda\), \((1+\gamma^{(2)})\sigma_{\rm wave}^{2}\) decreases less rapidly than \(N_{\rm det}\eta_{\rm ap}^{2}\). Lastly, the impact of correlations on mapping speed starts to become most important when \(D_{\rm pix}<1.2F\lambda\), but there are also percent-level impacts at larger spacings, which correspond to local maxima in the aperture/stop VCZT patterns, as shown in Fig. 9. This effect gets smoothed out when averaging the HBT coefficients across each channel's finite bandwidth. Regardless of the input assumptions in this section, the mapping-speed gain by undersized pixels when \(\gamma^{(2)}\neq 0\) is suppressed compared to the classic \(\gamma^{(2)}\equiv 0\) case, especially for ground-based telescopes.
## 6 Implications for experiment design
As shown in Fig. 13, HBT correlations both modify the optimal pixel packing density and suppress the achievable mapping speed with respect to the "classic" \(\gamma^{(2)}\equiv 0\) calculation. However, the degree of modification depends on a plethora of instrument details, including internal detector noise, observation site and conditions, stop temperature, detector efficiency, telescope optical throughput, and extraneous
noise sources, such as electromagnetic interference, vibrational pickup, and detector nonidealities. A more comprehensive handling of correlations within a more general experiment is available via the BoloCalc sensitivity calculator [14], but in this section, we sweep a few parameters in our model telescope to serve as a quick reference for focal plane designers. The results of these calculations are shown in Fig. 14.
The first column of Fig. 14 shows mapping speed vs. pixel size for various stop temperatures \(T_{\rm stop}=(2,3,4,5)\) K. As stop temperature decreases, so does photon loading due to stop spillover, which in turn favors smaller pixels. A colder stop also suppresses the relative contribution of \(\sigma_{\rm wave}\), especially at higher frequencies, modulating the slope of the mapping speed curve below \(D_{\rm pix}=1.2F\lambda\). The second column of Fig. 14 shows mapping speed vs. pixel size for various telescope temperatures \(T_{\rm tel}=(10,20,30,40)\) K. As telescope temperature increases, so too does the photon load within the aperture, which in turn favors smaller pixels. In addition, brighter aperture radiation increases \(\sigma_{\rm wave}\) and hence also increases the HBT suppression. The third column of Fig. 14 shows mapping speed vs. pixel size in the presence of a constant internal detector noise \(\sigma_{\rm int}^{2}=(0.1,0.5,0.7,1.0)\times\sigma_{\rm ph}^{2}\) at \(D_{\rm pix}=1.2F\lambda\), where \(\sigma_{\rm ph}^{2}\equiv(\sigma_{\rm shot}^{2}+\sigma_{\rm wave}^{2})\). As \(\sigma_{\rm int}\) increases with respect to \(\sigma_{\rm ph}\), larger pixel sizes are favored to improve signal strength via an increased \(\eta_{\rm ap}\). Simultaneously, the impact of HBT correlations is reduced due to a smaller relative contribution of \(\sigma_{\rm wave}\) and due to an optimum \(D_{\rm pix}\gtrsim 1.2F\lambda\) where \(\gamma^{(2)}\) is small.
These few examples only graze the rich topic of focal plane optimization, and we leave a more comprehensive discussion of experiment-specific applications to other publications. Nonetheless, regardless of the context, HBT correlations should be considered when designing dense focal planes, especially for ground experiments where the sky and telescope brightness temperatures are substantially larger than that of the CMB.
## 7 Conclusion
We have presented a theoretical formalism for photon noise correlations by extending the quantum optical circuit-based model in Zmuidzinas [11] to a free-space classical model using the optical equivalence theorem of Glauber and Sudarshan [15]. We have used this formalism to estimate the Hanbury Brown-Twiss (HBT) coefficient [7; 8; 9] in a simplified telescope optical system, and we have shown that these simulations match the expectation of the van Cittert-Zernike theorem (VCZT) [12; 13]. This equivalence allows the HBT coefficient to be calculated with only a knowledge of the radiation intensity profile at the aperture plane.
We then uniformly illuminated our model telescope with blackbody sources representative of radiation from the CMB, atmosphere, and telescope, and we have calculated the impact of HBT correlations on experiment mapping speed vs. pixel size within observation bands centered at 35, 95, 150, and 220 GHz. Acknowledging that sensitivity calculations have many inputs and assumptions, we have further discussed three useful variations to the simplified instrument--stop temperature, telescope temperature, and internal detector noise--and showed how each parameter modulates the HBT-modified mapping speed curves. This work builds on an initial discussion by Padin [10] and formalizes the calculation of photon noise correlations between detector pixels in millimeter
Figure 11: The assumed observation bands plotted over the assumed CMB, atmosphere, and telescope temperatures (left), and the wave-noise fraction vs. sky temperature for each band at \(D_{\rm pix}=3F\lambda\), where \(\eta_{\rm ap}>0.95\) (right).
Figure 12: The Stokes Q HBT coefficient \(\gamma_{ij}^{\text{Q},(2)}=\cos 2(\psi_{i}-\psi_{j})\,\gamma_{ij}^{(2)}\) given pixel pitches \(D_{\text{pix}}=\) (0.2, 0.5, 0.7, 1.0) \(F\lambda\) from top to bottom for radiation from within the aperture (left), from the stop (right), and their cross term (middle). When the pixel pitch is \(\lesssim F\lambda\), the detectors oversample the spatial modes on the focal plane, giving rise to intensity correlations between nearby pixels.
and sub-millimeter telescopes for astronomy. The presented formalism and results are useful to GHz focal plane designers, especially as emerging readout technologies enable the deployment of dense detector arrays.
## Acknowledgement
Work at LBNL is supported by the U.S. Department of Energy, Office of Science, Office of High Energy Physics under contract No. DE-AC02-05CH11231. We acknowledge support by JSPS Grant Number JP19K21873. We thank our Simons Array and Simons Observatory colleagues for fruitful discussions on CMB telescope designs, and we thank Masahito Ueda for teaching us some of the basics of quantum statistical mechanics.
## Appendix A Thermal photon density matrix
The statistical state, or the mixed state, of photons can be described using the density matrix \(\hat{\rho}\). We consider a single-mode photon state, where the state is single-moded in both spatial and frequency domains as well as in polarization state. Using creation and annihilation operators \(a^{\dagger}\) and \(a\), respectively, the density matrix can
Figure 13: Stokes \(Q\) mapping speed vs. pixel size in each frequency band for four sets of external sources illuminating the aperture: the CMB only with \(T_{\rm CMB}=2.725\) K (top left), the atmosphere only assuming 1 mm PWV and 50 deg elevation at the Chajnantor Plateau (top right), the telescope emission only assuming \(T_{\rm tel}=10\) K (bottom left), and all three sources combined (bottom right). We assume that each detector’s quantum efficiency is \(\eta_{\rm det}=0.8\) and that \(\sigma^{2}_{\rm int}=0.1(\sigma^{2}_{\rm shot}+\sigma^{2}_{\rm wave})\). Each band is normalized to the peak of its classic curve, which is shown as faded lines. Therefore, the opaque curves represent the achievable mapping speed when \(\gamma^{(2)}\neq 0\) with respect to the maximum of the \(\gamma^{(2)}\equiv 0\) case. Pixel size is plotted in units of \(F\lambda\), where \(\lambda\) is the mean wavelength in each band.
Figure 14: The impact of stop temperature \(T_{\rm stop}\) (left column), telescope temperature \(T_{\rm tel}\) (middle column), and detector internal noise \(\sigma_{\rm int}^{2}\) (right column) on Stokes \(Q\) mapping speed vs. pixel size in the presence of HBT correlations for each observation band (rows). These parameters are among many that vary between experiments, and we include them here as a reference for focal plane designers. The default parameters from Fig. 13 of \(T_{\rm stop}=4\) K, \(T_{\rm tel}=10\) K, and \(\sigma_{\rm int}^{2}/\sigma_{\rm ph}^{2}=0.1\) are assumed when not being swept.
be written as that of a Bose-Einstein distribution
\[\hat{\rho}=\frac{e^{-\gamma a^{\dagger}a}}{\text{Tr}\left(e^{-\gamma a^{\dagger}a }\right)}=\sum_{n}p(n;\bar{n})\left|n\right>\left<n\right|\,, \tag{115}\]
with
\[\gamma\equiv\frac{h\nu}{k_{\text{B}}T}\quad\text{and}\quad p(n;\bar{n})\equiv \frac{1}{1+\bar{n}}\left(\frac{\bar{n}}{1+\bar{n}}\right)^{n}\;. \tag{116}\]
Here, \(\{\left|n\right>\}\) are the Fock states and \(\bar{n}\) is the mean occupation number. We consider a detection process whose integration time \(\tau\) (the inverse of sampling rate) is significantly longer than the coherence time \(\tau_{c}\equiv 1/\Delta\nu\), where \(\Delta\nu\) is the detection bandwidth. This is usually the case for a CMB instrument, where \(\Delta\nu\sim\mathcal{O}(10\,\text{GHz})\) and \(\tau\sim\mathcal{O}(10\,\text{msec})\). In this situation where \(\tau\gg\tau_{c}\), the mean occupation number \(\bar{n}\) can be written as
\[\bar{n}=n(T,\nu)=\frac{1}{e^{\gamma}-1}\;. \tag{117}\]
We can now rewrite the density matrix in terms of Glauber's coherent state \(\left|\alpha\right>\). The coherent state is written as
\[\left|\alpha\right>\equiv e^{\alpha a^{\dagger}-\alpha^{*}a}\left|0\right> \tag{118}\]
and satisfies
\[a\left|\alpha\right>=\alpha\left|\alpha\right>\;. \tag{119}\]
The density matrix can be rewritten in the Glauber-Sudarshan \(P\) representation [15; 2] as
\[\hat{\rho}=\int\!d^{2}\alpha\,p_{g}(\alpha;\bar{n})\left|\alpha\right>\left< \alpha\right| \tag{120}\]
with
\[p_{g}(\alpha;\bar{n})\equiv\frac{1}{\pi\bar{n}}\exp\left(-\frac{|\alpha|^{2}}{ \bar{n}}\right)\,, \tag{121}\]
where the integral is over the entire complex plane. Here, the complex amplitude \(\alpha\) follows a Gaussian distribution \(p_{g}(\alpha;\bar{n})\), in agreement with the expectation in the classical limit. The photon counting of a coherent state follows a Poisson distribution as
\[\left|\left<n\middle|\alpha\right>\right|^{2}=\exp\left(-|\alpha|^{2}\right) \frac{|\alpha|^{2n}}{n!}\;. \tag{122}\]
Eqs. (120) and (122) immediately lead to a special case of Mandel's formula [59; 60]
\[\left<n\right|\hat{\rho}\left|n\right>=\int_{0}^{\infty}\!dW\;e^{-W}\frac{W^{ n}}{n!}\;\;\frac{e^{-W/\bar{n}}}{\bar{n}}=p(n;\bar{n})\;. \tag{123}\]
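As a quick numerical cross-check (not from the paper; \(\bar{n}=1.7\) is an arbitrary choice), the integral in Eq. (123) can be evaluated directly and compared with the Bose-Einstein probabilities of Eq. (116):

```python
# Numerical check of Eq. (123): the P-representation integral reproduces p(n; nbar).
import numpy as np
from scipy.integrate import quad
from math import factorial, exp

nbar = 1.7   # assumed mean occupation number

def p_bose(n, nbar):
    return (nbar / (1 + nbar))**n / (1 + nbar)

def p_mandel(n, nbar):
    integrand = lambda W: exp(-W) * W**n / factorial(n) * exp(-W / nbar) / nbar
    val, _ = quad(integrand, 0, np.inf)
    return val

for n in range(5):
    print(n, round(p_bose(n, nbar), 6), round(p_mandel(n, nbar), 6))
```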
## Appendix B Partial Coherence of Sources
In this paper, we assume complete incoherence between two source elements that are physically apart from one another. However, it is known that blackbody sources have finite correlation at the length scale of a wavelength [5; 22; 30; 31]. The effect of source coherence on the applicability of VCZT for quasihomogeneous sources, whose spatial intensity variations are slow compared to their coherence length, is extensively discussed in the literature [61; 23; 24; 6]. Thus, it is worthwhile to clarify the assumptions in this paper regarding source coherence. In the end, we find that the assumption of completely incoherent sources is a good approximation for telescope systems relevant to our discussion. We first consider source coherence for the simple case presented in Sec. 2.2, which is readily comparable to examples in the literature. We then discuss source coherence in the general formalism of Sec. 2.1.
Equation (20) in combination with (10) constitutes VCZT of a completely incoherent source for the simple geometry in Fig. 2. For sources with partial coherence (e.g., see Ref. [35]), the first-order coherence is
\[\Gamma_{ij}^{(1)}\propto\iint_{\sigma}\!\mathrm{d}^{2}\vec{r}\iint_{\sigma}\!\mathrm{d}^{2}\vec{r}^{\prime}\,\gamma_{c}(|\vec{r}\!-\!\vec{r}^{\prime}|)\,e^{2\pi i\nu(R_{j}^{\prime}-R_{i})/c}\,, \tag{124}\]
with
\[R_{i}\equiv|\vec{r}_{i}-\vec{r}|\qquad\quad R_{j}^{\prime}=|\vec{r}_{j}-\vec{ r}^{\prime}|\;, \tag{125}\]
where \(\vec{r}_{i}\) and \(\vec{r}_{j}\) are the positions of detectors \(i\) and \(j\), respectively, and where \(\gamma_{c}(|\vec{r}-\vec{r}^{\prime}|)\) is the coherence of the field between locations \(\vec{r}\) and \(\vec{r}^{\prime}\) on the source surface \(\sigma\). Here, we assume that the source's coherence is statistically isotropic and thus that the source coherence function can be written as \(\gamma_{c}(\vec{r},\vec{r}^{\prime})=\gamma_{c}(|\vec{r}-\vec{r}^{\prime}|)\). Complete incoherence of the source corresponds to the limit of \(\gamma_{c}(|\vec{r}-\vec{r}^{\prime}|)\to\delta(\vec{r}-\vec{r}^{\prime})\) and we immediately find that Eq. (124) reduces to Eq. (20) in such a limit.
We define a coherence length \(R_{c}\) such that \(\gamma_{c}(|\vec{r}-\vec{r}^{\prime}|)\simeq 0\) for \(|\vec{r}-\vec{r}^{\prime}|>R_{c}\). For blackbody radiators, \(R_{c}\sim\lambda\) since we assume detectors with a limited detection band.16 Focusing on cases where \(R_{c}\) is significantly smaller than the source size, Eq. (124) can be approximated as
Footnote 16: See, e.g., Ref. [30] for consideration without a band limit.
\[\begin{split}\Gamma_{ij}^{(1)}&\propto\iint_{\sigma}\!\mathrm{d}^{2}\vec{r}\iint_{|\vec{\Delta}|\leq R_{c}}\!\mathrm{d}^{2}\vec{\Delta}\;\gamma_{c}(|\vec{\Delta}|)\,e^{2\pi i\nu(R_{j}^{\prime}-R_{i})/c}\\ &\simeq\iint_{\sigma}\!\mathrm{d}^{2}\vec{r}\;e^{2\pi i\nu(R_{j}-R_{i})/c}\;\mathcal{J}(\theta_{r})\:,\end{split} \tag{48}\]

with \(\vec{\Delta}\equiv\vec{r}^{\prime}-\vec{r}\), where \(\mathcal{J}(\theta_{r})\) denotes the remaining integral of \(\gamma_{c}(|\vec{\Delta}|)\) over \(|\vec{\Delta}|\leq R_{c}\), whose phase factor depends on \(\vec{\Delta}\) only through the combination \(\sin\theta_{r}\,|\vec{\Delta}|/\lambda\), and
where \(\theta_{r}\) is the angle between \(\vec{r}-\vec{r}_{j}\) and the source plane's normal vector. The expression in Eq. (48) becomes equivalent to Eq. (20) when \(\mathcal{J}(\theta_{r})\) can be regarded as a constant function of \(\vec{r}\).
To evaluate \(\mathcal{J}(\theta_{r})\), we consider a blackbody surface source that may not be centered at \(x=y=0\), and we assume that the solid angle of the source is small compared to \(\pi\). We consider two regions of the parameter space depending on the source's position. Region 1 is when \(\theta_{r}\ll 1\). In this case,
\[\left|\sin\theta_{r}\frac{|\vec{\Delta}|}{\lambda}\right|\leq|\sin\theta_{r}| \frac{R_{c}}{\lambda}\ll 1\:, \tag{49}\]
and thus \(\mathcal{J}(\theta_{r})\simeq\mathcal{J}(0)\) and Eq. (48) becomes equivalent to Eq. (20). Region 2 is when \(\theta_{r}\gtrsim 1\). In this case, \(\theta_{r}\) is approximately constant across the source \(\sigma\), and thus \(\mathcal{J}(\theta_{r})\simeq\mathcal{J}(\bar{\theta}_{r})\) where \(\bar{\theta}_{r}\) is the mean of \(\theta_{r}\) for the source \(\sigma\). The factor \(\mathcal{J}(\bar{\theta}_{r})/\mathcal{J}(0)\) maps to the radiance reduction due to source coherence, which shows up in \(\Gamma^{(1)}_{ii}\) and \(\Gamma^{(1)}_{jj}\) as well. Thus, the normalized amplitude coherence \(\gamma_{ij}\) remains identical to the case of a completely incoherent source, which is consistent with results presented in the literature [6]. When the source comprises Lambertian blackbody emitters, the radiance satisfies \(\mathcal{J}(\bar{\theta}_{r})/\mathcal{J}(0)\simeq 1\) by construction.
In summary, for a source with the geometry assumed in Sec. 2.2, partial coherence between spatially independent blackbody radiators can be neglected and Eq. (20) is a good approximation.
We now look to Sec. 2.1, which considers a more general, VCZT-free formalism. For quasihomogeneous sources with partial coherence, in contrast to completely incoherent sources, the Kronecker delta \(\delta_{km}\) in Eq. (3) is replaced by a source coherence function \(\gamma_{c,km}\). The mutual intensity is then expressed as
\[B_{ij}(\nu)=\sum_{\sigma}\sum_{k,m\in\sigma}S^{*}_{ik}(\nu)S_{jm}(\nu)\,\gamma ^{\sigma}_{c,km}\,n(T_{\sigma},\nu)\:, \tag{50}\]
where the index \(\sigma\) denotes each quasihomogeneous source with a temperature \(T_{\sigma}\), and \(\gamma^{\sigma}_{c,km}\) is the coherence function of the source. Equation (50) is the generalized version of Eq. (47). Similarly to Eq. (48), we can decompose Eq. (50) as
\[B_{ij}(\nu)\simeq\sum_{\sigma}\sum_{k\in\sigma}S^{*}_{ik}(\nu)S_{jk}(\nu)\,n( T_{\sigma},\nu)\:\mathcal{J}_{k}\:, \tag{51}\]
with
\[\mathcal{J}_{k}\equiv\sum_{m\in\rho(k)}\frac{S_{jm}(\nu)}{S_{jk}(\nu)}\, \gamma^{\sigma}_{c,km}\:, \tag{52}\]
where \(\rho(k)\) is a collection of modes close enough to mode \(k\) such that \(\gamma^{\sigma}_{c,km}\) is non-zero. The expressions for normalized coherence, both \(\gamma_{ij}\) and \(\gamma^{(2)}_{ij}\), become identical to the case with completely incoherent sources if \(\mathcal{J}_{k}\) is constant across \(k\in\sigma\).17 Whether \(\mathcal{J}_{k}\) can be regarded as constant should be evaluated on a case-by-case basis.
Footnote 17: Strictly speaking, reducing \(\mathcal{J}_{k}\) modifies the normalized coherence from the case with complete incoherence. However, these changes in \(\mathcal{J}_{k}\) lead to changes in apparent brightness temperature, which can be absorbed into \(T_{\sigma}\), and therefore the formal equivalence for the normalized coherence still holds.
For the cases discussed in this paper, we can regard \(\mathcal{J}_{k}\) as constant given the following arguments. For aperture radiation, \(k\) and \(m\) correspond to modes emitted by infinitesimal sources at \((u_{k},v_{k})\) and \((u_{m},v_{m})\), respectively. Using Eqs. (47) and (48), \(\mathcal{J}_{k}\) can be written as
\[\begin{split}\mathcal{J}_{k}=\iint_{m\in\rho(k)}\mathrm{d}u_{m} \,\mathrm{d}v_{m}\,\frac{G_{j}(u_{m},v_{m})}{G_{j}(u_{k},v_{k})}\\ \cdot e^{2\pi i[(u_{m}-u_{k})x_{j}+(v_{m}-v_{k})y_{j}]/D_{\text{ap }}F\lambda}\gamma^{\sigma}_{c,km}\:,\end{split} \tag{53}\]
with the integrated region \(m\in\rho(k)\) being
\[s\equiv\sqrt{(u_{m}-u_{k})^{2}+(v_{m}-v_{k})^{2}}\leq R_{c}\:.\]
Since \(R_{c}\sim\lambda\ll D_{\text{ap}}\), the aperture illumination function varies minimally within the integrated range and thus \(G_{j}(u_{m},v_{m})\simeq G_{j}(u_{k},v_{k})\). The source coherence function \(\gamma^{\sigma}_{c,km}\) is isotropic and thus depends only on \(s\) as \(\gamma^{\sigma}_{c,km}=\gamma^{\sigma}_{c}(s)\), leading to
\[\mathcal{J}_{k}\simeq\int_{0}^{2\pi}\!\!\mathrm{d}\phi\int_{0}^{R_{c}}\!\!\mathrm{d}s\;s\,\gamma^{\sigma}_{c}(s)\,e^{2\pi i\,s\,d_{j}\cos\phi/(D_{\rm ap}F\lambda)}\:, \tag{54}\]
with
\[d_{j}\equiv\sqrt{x_{j}^{2}+y_{j}^{2}}\:.\]
Thus, \(\mathcal{J}_{k}\) is constant as a function of \(k\) and source coherence can therefore be ignored.
For stop radiation, the physical geometry may be more complicated than for the aperture radiation, and therefore \(\mathcal{J}_{k}\) may sometimes vary across the source. However, because \(\mathcal{J}_{k}\) typically varies slowly compared to \(\lambda\), we can divide the source into sections that are significantly larger than \(\lambda\) and in which \(\mathcal{J}_{k}\) is effectively constant. Given this setup, we can ignore source coherence by regarding each of these sections as independent sources indexed by \(\sigma\) in Eq. (50).
In summary, for the model optical system presented in Sec. 3, the assumption of completely incoherent sources is a good approximation for the context of this paper.
## Appendix C Goodness of Flat-Illumination Approximation
### Aperture Radiation
In this section, we show that Eq. (55) is a good approximation of Eq. (52) even for a general Gaussian illumination function \(G_{i}(u,v)\). The key assumption is that
the beam can be approximated by Eqs. (43) and (44) and that the pixel spacing satisfies
\[p_{ij}\geq D_{\rm pix}\,. \tag{120}\]
As described in Sec. 4.3 and Eq. (54), we approximate the illumination function as
\[G_{i}(u,v)=\frac{1}{\sqrt{\pi\sigma_{\rm ap}^{2}}}\exp\left(-\frac{u^{2}+v^{2}} {2\sigma_{\rm ap}^{2}}\right)\,, \tag{121}\]
and as described in Eq. (43), the circumference of the aperture corresponds to an angle of \(\theta\simeq 1/2F\). It thus follows that
\[\sigma_{\rm ap}=D_{\rm ap}\frac{w_{f}}{\sqrt{2}\pi}\frac{F\lambda}{D_{\rm pix }}\geq D_{\rm ap}\frac{w_{f}}{\sqrt{2}\pi}\frac{F\lambda}{p_{ij}}\,, \tag{122}\]
assuming a linear mapping from detector-beam angle \((\theta,\phi)\) to aperture position \((u,v)\).
To evaluate the approximation in Eq. (55), which corresponds to the limit of \(\sigma_{\rm ap}\to\infty\) or \(D_{\rm pix}\to 0\), we evaluate the opposite limit of \(p_{ij}=D_{\rm pix}\) or
\[\sigma_{\rm ap,min}(p_{ij})=D_{\rm ap}\frac{w_{f}}{\sqrt{2}\pi}\frac{F\lambda }{p_{ij}}\,,\]
where the approximation is its worst. In Fig. 15, we show the HBT correlation coefficient \(|\gamma(p_{ij})|^{2}\) calculated using \(\sigma_{\rm ap}=\infty\) and \(\sigma_{\rm ap}=\sigma_{\rm ap,min}(p_{ij})\) for two typical \(w_{f}\) values, and all curves match well.
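A rough numerical version of this comparison is sketched below (assuming \(w_{f}=3\), separations along a single axis, and an arbitrary grid resolution); it evaluates the aperture coefficient of Eq. (51) with the Gaussian weight of Eq. (121) at \(\sigma_{\rm ap}=\sigma_{\rm ap,min}(p_{ij})\) and at \(\sigma_{\rm ap}\to\infty\).

```python
# Sketch comparing the aperture HBT coefficient for Gaussian vs. flat illumination.
# The beam-waist factor w_f and grid resolution are assumed values.
import numpy as np

def gamma2_aperture(p_over_flam, sigma_over_dap, n_grid=301):
    """|gamma|^2 from Eq. (51), with |G|^2 ~ exp[-(u^2+v^2)/sigma_ap^2] truncated at the aperture."""
    u = np.linspace(-0.5, 0.5, n_grid)                  # aperture-plane coordinates in units of D_ap
    uu, vv = np.meshgrid(u, u)
    inside = uu**2 + vv**2 < 0.25
    if np.isinf(sigma_over_dap):
        weight = np.where(inside, 1.0, 0.0)
    else:
        weight = np.where(inside, np.exp(-(uu**2 + vv**2) / sigma_over_dap**2), 0.0)
    phase = np.exp(2j * np.pi * uu * p_over_flam)       # (u x_ji)/(F D_ap lambda) in these units
    return np.abs((weight * phase).sum() / weight.sum())**2

w_f = 3.0
for p in (0.5, 1.0, 1.5, 2.0):                          # separations p_ij in units of F*lambda
    sigma_min = w_f / (np.sqrt(2.0) * np.pi * p)        # sigma_ap,min(p_ij) in units of D_ap
    print(f"p = {p:.1f} F*lam:  flat {gamma2_aperture(p, np.inf):.4f}   "
          f"Gaussian(sigma_min) {gamma2_aperture(p, sigma_min):.4f}")
```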
### Stop Radiation
As Fig. 12 suggests, stop radiation makes only a minor contribution to the total correlation among pixels. To demonstrate this contribution explicitly, Fig. 16 shows \((1-\eta_{\rm ap})\,\gamma^{\rm np}_{{\rm stop},ij}\) for various \(\sigma_{\rm ap}\) of the Gaussian illumination in Eq. (121). When \(\sigma_{\rm ap}\gg D_{\rm ap}\), the VCZT coherence \(\gamma^{\rm np}_{{\rm stop},ij}\) asymptotes to a Dirac delta function and thus contributes negligibly to the intensity correlation. In the other limit of \(\sigma_{\rm ap}\ll D_{\rm ap}\), \((1-\eta_{\rm ap})=\exp(-D_{\rm ap}^{2}/4\sigma_{\rm ap}^{2})\) asymptotes to zero and the stop becomes irrelevant.
The contribution of stop radiation may become non-negligible, though still small, when \(\sigma_{\rm ap}\) is in neither of these limits, or when \(\sigma_{\rm ap}\sim D_{\rm ap}/2\). In this parameter region, an approximation can be obtained by calculating the coherence of a flat-illuminated annulus with width \(\sigma_{\rm ap}\)
\[\begin{split}\gamma^{\rm np}_{{\rm stop},ij}(\nu)&=\frac{1}{(1-\eta_{\rm ap})}\iint\limits_{D_{\rm ap}/2\leq\sqrt{u^{2}+v^{2}}}\!\!\mathrm{d}u\,\mathrm{d}v\;|G_{i}(u,v)|^{2}\,e^{2\pi i(u\,x_{ji}+v\,y_{ji})/(FD_{\rm ap}\lambda)}\\ &\approx\frac{1}{\pi\,\sigma_{\rm ap}(D_{\rm ap}+\sigma_{\rm ap})}\iint\limits_{D_{\rm ap}/2\leq\sqrt{u^{2}+v^{2}}\leq D_{\rm ap}/2+\sigma_{\rm ap}}\!\!\mathrm{d}u\,\mathrm{d}v\;e^{2\pi i(u\,x_{ji}+v\,y_{ji})/(FD_{\rm ap}\lambda)}\:.\end{split}\]
The dashed lines in Fig. 16 show this approximation for a few relevant examples.
|
2307.00939 | Solitonic symmetry as non-invertible symmetry: cohomology theories with
TQFT coefficients | Originating from the topology of the path-integral target space $Y$,
solitonic symmetry describes the conservation law of topological solitons and
the selection rule of defect operators. As Ref.~\cite{Chen:2022cyw}
exemplifies, the conventional treatment of solitonic symmetry as an invertible
symmetry based on homotopy groups is inappropriate. In this paper, we develop a
systematic framework to treat solitonic symmetries as non-invertible
generalized symmetries. We propose that the non-invertible solitonic symmetries
are generated by the partition functions of auxiliary topological quantum field
theories (TQFTs) coupled with the target space $Y$. We then understand
solitonic symmetries as non-invertible cohomology theories on $Y$ with TQFT
coefficients. This perspective enables us to identify the invertible solitonic
subsymmetries and also clarifies the topological origin of the
non-invertibility in solitonic symmetry. We finally discuss how solitonic
symmetry relies on and goes beyond the conventional wisdom of homotopy groups.
This paper is aimed at a tentative general framework for solitonic symmetry,
serving as a starting point for future developments. | Shi Chen, Yuya Tanizaki | 2023-07-03T11:27:49Z | http://arxiv.org/abs/2307.00939v2 | # Solitonic symmetry as non-invertible symmetry:
###### Abstract
Originating from the topology of the path-integral target space \(Y\), solitonic symmetry describes the conservation law of topological solitons and the selection rule of defect operators. As Ref. [1] exemplifies, the conventional treatment of solitonic symmetry as an invertible symmetry based on homotopy groups is inappropriate. In this paper, we develop a systematic framework to treat solitonic symmetries as non-invertible generalized symmetries. We propose that the non-invertible solitonic symmetries are generated by the partition functions of auxiliary topological quantum field theories (TQFTs) coupled with the target space \(Y\). We then understand solitonic symmetries as non-invertible cohomology theories on \(Y\) with TQFT coefficients. This perspective enables us to identify the invertible solitonic subsymmetries and also clarifies the topological origin of the non-invertibility in solitonic symmetry. We finally discuss how solitonic symmetry relies on and goes beyond the conventional wisdom of homotopy groups. This paper is aimed at a tentative general framework for solitonic symmetry, serving as a starting point for future developments.
###### Contents

* 1 Introduction
* 2 Basic concepts
* 2.1 Target space of the path integral
* 2.2 Solitonic symmetry
* 2.2.1 Elementary properties
* 2.2.2 Physical significance
* 2.2.3 Conventional wisdom and homotopy groups
* 3 Topological functional
* 3.1 The identity problem
* 3.2 The coherence problem
* 3.3 Our ansatz
* 4 Algebraic structure of solitonic symmetry
* 4.1 Preliminaries on higher-categories
* 4.2 Fully-extended TQFT
* 4.2.1 Bordism domain \((\infty,n)\)-category
* 4.2.2 Physical codomain \((\infty,n)\)-category
* 4.2.3 Summary of the formulation
* 4.3 Cohomology with TQFT coefficients
* 4.3.1 Non-invertible: (Super) solitonic cohomology
* 4.3.2 Invertible: (Super) unitary cohomology
* 5 Non-invertible structure beyond homotopy groups
* 5.1 Rectification vs. condensation
* 5.2 Examples of rectification
* 5.2.1 Spherical rectification
* 5.2.2 Non-spherical rectification
* 5.3 Examples of condensation
* 6 Discussion
* 6.1 Remarks
* 6.2 Outlooks
## 1 Introduction
Symmetry provides one of the guiding principles when studying strongly-coupled physics in quantum field theories (QFTs), and astonishingly, the notion of symmetry itself has been vastly generalized in these recent years. The generalization is achieved under the motto that the topologicalness of operators should always represent conservation laws, and we identify the algebraic structure of topological operators in QFTs with the generalized symmetries. Such generalizations mainly include two directions: One is the higher-form symmetry [2] and higher-group symmetry [3; 4; 5; 6; 7; 8; 9], where the symmetry operators are defined on various codimensions, and the more recent one is the non-invertible symmetry, where the fusion rule obeys a suitable algebraic structure beyond the usual group multiplications. The non-invertible symmetries in \((1+1)\)-dim are now well-understood, and the fusion category captures their algebraic structure (when finitely generated) [10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22; 23; 24]. The non-invertible symmetries in higher dimensions start to be realized in various QFTs [1; 25; 26; 27; 28; 29; 30; 31; 32; 33; 34; 35; 36; 37; 38; 39; 40; 41; 42; 43; 44; 45], and there is also a massive endeavor to identify the precise mathematical structures behind generalized symmetries [46; 47; 48; 49; 50].
In this paper, we shall investigate non-invertible symmetries organizing conservation laws of topological solitons, which we call (non-invertible) solitonic symmetries [1]. Topological solitons are nonperturbative objects in QFTs that appear due to the nontrivial topology of the path-integral target space, such as kinks, vortices, monopoles, etc. They are created/annihilated by defect operators. It has been common wisdom that their topological stability, an intriguing aspect of topological solitons, is captured by the homotopy group of the target space [61; 62; 63]. In the previous study [1], the present authors revealed that solitonic symmetry also becomes a non-invertible symmetry in general, and the solitonic symmetry generators are given by the partition functions of auxiliary lower-dim topological QFTs (TQFTs) coupled with the original system. This finding provides us an opportunity to reconsider the foundations of solitonic symmetries, and, surprisingly, deep mathematics turns out to be there behind the solitonic symmetries, which echoes those behind classifying gapped phases [64; 65; 66; 67; 68; 69; 70].
The fact that solitonic symmetries become non-invertible implies that the usual homotopy group of the target space \(Y\) does not always correctly characterize the topological conservation laws of solitons, which raises the following questions:
* What is the new foundation for solitonic symmetry? (Sec. 2)
* What are the topological operators for solitonic symmetry? (Secs. 3)
* What is the algebraic structure of solitonic symmetry? (Sec. 4)
* What makes solitonic symmetry go beyond homotopy groups? (Sec. 5)
We give proposals and/or solutions to these questions in the indicated sections, and we summarize the results as follows.
When discussing symmetries, we need to specify the symmetry generators and their action on charged objects. We showed in Sec. 2 that, in the case of the topological conservation
law of solitons, the charged objects are given by the defect operators that create/annihilate solitons, and then the symmetry generators should be given by topological functionals using the fundamental fields in the path-integral formulation. Then in Sec. 3, we would like to find the most general form of topological functionals to define the non-invertible solitonic symmetries, and we clarify the physical requirements to be satisfied by the topological functionals. In particular, it turns out that the essential requirement comes from locality. As a natural ansatz satisfying such requirements, we propose that the solitonic symmetries are generated by partition functions of auxiliary fully-extended TQFT coupled to the fundamental fields (Ansatz 3.2).
The rigorous treatment of Ansatz 3.2 requires us to employ the knowledge of fully-extended TQFTs. We concisely review the relevant mathematical treatments in Sec. 4. Also in that section, we clarify the algebraic structure of solitonic symmetry, which is described by symmetric fusion higher-categories \(\mathsf{Rep}^{\bullet}(Y)\) for the bosonic case and \(\mathsf{sRep}^{\bullet}(Y)\) for the fermionic case. These observations are consistent with the latest progress on generalized symmetry in the literature, such as Refs. [52; 53; 54; 55]. We see that solitonic symmetry serves as non-invertible generalizations of cohomology theories with TQFT coefficients. We also discuss the invertible solitonic subsymmetry given by orthodox cohomology theories.
Armed with this mathematical guidance, in Sec. 5, we can systematically study the origin of non-invertibility in the solitonic symmetry and how the conventional wisdom of homotopy groups is surpassed. We shall see that \((\mathsf{s})\mathsf{Rep}^{\bullet}(Y)\) can be decomposed into two parts. The first part comes from \((\mathsf{s})\mathsf{Rep}^{\bullet-1}(Y)\) via formulating condensations and contains topological functionals that are trivial on spheres. The second part comes from the homotopy group \(\pi_{\bullet}(Y)\) and contains topological functionals that are nontrivial on spheres. But according to the topological data present in the theory, these spherically-nontrivial topological functionals have to take a non-invertible fusion rule. This decomposition unpacks the structure of solitonic symmetry inductively and provides us with insight into the connection between the generalized solitonic symmetry from the contemporary perspective and the conventional wisdom since Coleman and others.
This work was initiated during S. C.'s visit to YITP with the Atom-type visiting program, and the authors appreciate the hospitality of Yukawa Institute. The authors are grateful to Mayuko Yamashita for bringing Ref. [68] to their scope in a seminar. They also thank Kantaro Ohmori for valuable comments on the manuscript. S. C. thanks Yuji Tachikawa for cheerful discussions and useful comments on the idea developed in this paper. This work was supported by JSPS KAKENHI Grant No. 21J20877 (S. C.), 22H01218 and 20K22350 (Y. T.), and also by the Center for Gravitational Physics and Quantum Information (CGPQI).
## 2 Basic concepts
Solitonic symmetry is generated by topological functionals. It (i) describes the conservation law in the solitonic sector, and (ii) prescribes the selection rule in correlation functions between solitonic defects, and (iii) determines the possible topological couplings with a background gauge field. We explain these basic notions in this section. The spacetime dimension is denoted by \(d\) throughout.
### 2.1 Target space of the path integral
Let us consider a general situation where the quantum field theory (QFT) is defined by the path integral, and its partition function is given by
\[\mathcal{Z}=\int\mathcal{D}\sigma\exp(-S[\sigma]), \tag{1}\]
and \(\sigma\) is some field on the \(d\)-dim spacetime. We focus on the non-Grassmann sector of this path integral, and thus \(\sigma\) might be a scalar field, a gauge field, a higher-form gauge field, or even a combination of them coupled. We note that, except in the simplest case of pure scalar fields, all other fields are not maps to a fixed space.
In this paper, we are interested in a series of nonperturbative phenomena insensitive to continuous deformations of field configurations. Namely, we only care about the deformation classes of field configurations on some closed (smooth) manifold \(M\). The actual choice of \(M\) depends on specific problems; it might be the spacetime itself, a submanifold like a space slice or a world line, or a virtual submanifold like the normal sphere bundle of a defect operator (see Sec. 2.2.2). Conveniently, we can always find a topological space \(Y\) such that there exists a one-to-one correspondence,
\[\begin{split}\text{Deformation classes of field configurations on $M$}\\ \|\\ \text{Deformation classes of maps from $M$ to $Y$}\,.\end{split} \tag{2}\]
We shall refer to this topological space \(Y\) as the _(homotopy) target space_ of the path integral and abuse the symbol \(\sigma\) to denote also the auxiliary maps to the target space, i.e.,
\[\sigma|_{M}:M\mapsto Y. \tag{3}\]
Formally, the set of deformation classes of field configurations are now expressed by the set of homotopy classes of maps to the target space \(Y\),
\[[M,Y]\equiv\{f:M\mapsto Y\}/\text{homotopies}. \tag{4}\]
Because the dimension of closed manifold \(M\) cannot exceed spacetime dimension \(d\), the set \([M,Y]\) depends merely on the homotopy \(d\)-type of \(Y\).
**Proposition 2.1**: _In a \(d\)-dim QFT defined by a path integral, the deformation classes of field configurations on any submanifold depend only on the homotopy \(d\)-type of the target space \(Y\)._
Thus, \(Y\) is understood up to \((d{+}1)\)-connected maps in this paper. Without loss of generality, we can always require \(Y\) to be a \(d\)-aspherical space.
**Definition 2.2**: _Topological space \(X\) is \(n\)-aspherical if \(\pi_{\bullet}(X,x)\simeq 0\) for all \(\bullet>n\) and \(x\in X\)._
Namely, we can always replace a general \(Y\) with its \(d\)-th Postnikov truncation. Nevertheless, in actual practice, despite Proposition 2.1, we often retain more topological and even geometrical structures on \(Y\) to keep in touch with other physics that is sensitive to continuous deformations of field configurations.
We now construct several common \(Y\)'s to illustrate the above abstract concepts.
1. \(Y\simeq X\) for a \(X\)-valued scalar field, where \(X\) is a topological space.
2. \(Y\simeq BG\), the delooping of \(G\) (or the classifying space of \(G\)), for a gauge field with gauge group \(G\), where \(G\) is a topological group (can be discrete).
3. If we couple the two fields above via a continuous \(G\)-action on \(X\), then \(Y\) fits into a fibration \(X\to Y\to BG\) determined by the \(G\)-action1. Footnote 1: If \(G\) acts on \(X\) freely (i.e., the action has no fixed point), we just have \(Y\simeq X/G\). If \(X\) is contractible, we just have \(Y\simeq BG\). Otherwise, \(Y\) has a quite complicated structure, such as many orbifolding examples.
4. \(Y\simeq B^{p}G\), the \(p\)-th delooping of \(G\), for a \(p\)-form gauge field with Abelian \(G\).
5. \(Y\simeq B\mathsf{G}\) for a gauge field of a local higher-group symmetry \(\mathsf{G}\) (see the discussions around Philosophies 4.1 and 4.2, and also the remark in Sec. 6.1).
Concrete examples will appear later.
### 2.2 Solitonic symmetry
From a contemporary viewpoint of QFTs, the notion of symmetry is vastly generalized and can be summarized as
\[\text{Generalized symmetry}\equiv\text{Algebra of topological operators.} \tag{2.5}\]
In the path-integral formalism, there are basically two constructions of operators2, defects and functionals. Both types of operators have the chance to be topological. Topological defects are the most orthodox topological operators and all symmetries "on the electric side" are generated by them. Topological functionals are more or less atypical and symmetries "on the magnetic side" are generated by them. In this paper, we shall refer to the symmetry generated by topological functionals as _solitonic symmetry_, i.e.,
Footnote 2: We note that these two constructions can be interchanged under the duality operation, and thus the notion of the solitonic symmetry depends on the explicit path-integral realization of a given QFT.
\[\text{Solitonic symmetry}\equiv\text{Symmetry generated by topological functionals}\,. \tag{2.6}\]
The core purpose of this paper is to reveal the structure of solitonic symmetry by studying the behavior of topological functionals.
#### 2.2.1 Elementary properties
For a functional on an \(n\)-dim closed manifold \(M\) to be topological, it must factor through the deformation classes of field configurations. Namely, it factors through \([M,Y]\) for the target space \(Y\). Given that \([M,Y]\) depends only on the homotopy \(n\)-type of \(Y\), we see the following property of topological functionals.
**Proposition 2.3**: _The \(n\)-dim topological functionals depend only on the homotopy \(n\)-type of the target space._
This is consistent with Proposition 2.1. As a prototypical example of topological functionals, let us pick up a cohomology class \(\omega\in\mathbb{E}^{\bullet}(Y)\) for some multiplicative cohomology \(\mathbb{E}\). It can be an ordinary cohomology \(\mathbb{H}R\) for some ring \(R\) or an extraordinary cohomology such as \(\mathbb{K}\) and \(\mathbb{K}\mathbb{O}\). Then, on any closed \(n\)-dim \(\mathbb{E}\)-orientable manifold \(M\) that acquires a chosen \(\mathbb{E}\)-orientation, we can define a topological functional as
\[U_{g}(M)\equiv g\left(\int_{M}\sigma^{*}\omega\right)\,,\qquad\forall g\in \operatorname{Hom}\bigl{(}\mathbb{E}_{n-\bullet},U(1)\bigr{)}\,. \tag{7}\]
We can readily see the fusion rule \(U_{g_{1}}U_{g_{2}}=U_{g_{1}g_{2}}\) and thus we obtain a \(p\)-form symmetry with \(p=d\!-\!n\!-\!1\). The operator dimension \(n\) can range from \(d\) to \(0\), corresponding to the symmetry form \(p\) ranging from \(-1\) to \(d\!-\!1\). Note that \(d\)-dim topological functionals are exactly \(\theta\)-angles, i.e. topological terms in the action that depend on \(Y\) only4. We shall see concrete examples of operator (7) as soon as in Sec. 3.1.
Footnote 4: In contrast, topological terms that also depend on the geometry beyond the mere topology of \(Y\) are not \(n\)-dim topological functionals. CS terms and WZW terms are such counter-examples.
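As a quick illustration of operator (7) (a standard example stated here for orientation, not taken from the later sections), let \(\mathbb{E}\) be ordinary cohomology with integer coefficients and \(Y\simeq S^{1}\), with \(\omega\) the generator of the first integral cohomology of \(S^{1}\). On a closed oriented \(1\)-dim \(M\), the pullback integral \(\int_{M}\sigma^{*}\omega\in\mathbb{Z}\) is the winding number \(w\) of \(\sigma|_{M}\), and \(\operatorname{Hom}(\mathbb{Z},U(1))\simeq U(1)\), so operator (7) reduces to

\[U_{\theta}(M)=\mathrm{e}^{\mathrm{i}\theta\int_{M}\sigma^{*}\omega}=\mathrm{e}^{\mathrm{i}\theta w}\,,\qquad U_{\theta_{1}}U_{\theta_{2}}=U_{\theta_{1}+\theta_{2}}\,,\]

which is the familiar invertible \(U(1)\) solitonic symmetry counting winding (vortex) charge.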
The solitonic symmetry defined above has an invertible fusion rule, which is captured by an Abelian group, \(\operatorname{Hom}(\mathbb{E}_{n-\bullet},U(1))\) or one of its quotients. Actually, operator (2.7) gives the universal construction of invertible solitonic symmetry. In this paper, we shall discuss the most generalized connotation of topological functional and solitonic symmetry with more complicated non-invertible fusion rules. In particular, non-invertible \(\theta\)-angles mean couplings to topological orders, as explained in Refs. [1, 71]. Nevertheless, in any case, the fusion rule of topological functionals must still be commutative, because on each supporting manifold, the fusion is just the multiplication of complex numbers:
**Proposition 2.4**: _Solitonic symmetry is commutative._
Namely, topological functionals never care about their order. Solitonic symmetry is also insensitive to the theory details, including the action and ambient spacetime, because topological functionals and their fusions do not care about these theory details. Therefore, when a \(d\)-dim and a \((d\!+\!1)\)-dim theory share the same target space \(Y\) (i.e., the homotopy \(d\)-types of the target spaces are identical), we have for \(0\leq n\leq d\),
\[\begin{split} n\text{-dim topological functional in $d$-dim QFT}\\ \parallel\\ n\text{-dim topological functional in $(d\!+\!1)$-dim QFT}\,,\end{split} \tag{2.8}\]
and accordingly, for \(d\!-\!1\geq p\geq-\!1\), we have the following equivalence,
\[\begin{split} p\text{-form solitonic symmetry in $d$-dim QFT}\\ \parallel\\ (p\!+\!1)\text{-form solitonic symmetry in $(d\!+\!1)$-dim QFT}\,.\end{split} \tag{2.9}\]
From this point of view, the algebraic structure of solitonic symmetry is supposed to be a sort of cohomology theory on the target space \(Y\); we shall justify this in Sec. 4.3. Thus we shall neglect to mention spacetime when discussing topological functionals. The spacetime dimension implicitly enters as an upper limit for the possible dimension of topological functionals, according to Prop. 2.1.
Before continuing the journey to more details about topological functionals, let us pause here to acquaint readers with solitonic symmetry's physical consequences.
#### 2.2.2 Physical significance
First, a symmetry shows the presence of certain conserved charges. To find charged objects of solitonic symmetry, let us consider a correlation function involving a topological functional. Then this topological functional can produce a nontrivial number as long as the field configuration on its supporting manifold cannot be continuously deformed to a trivial configuration. This can be achieved only if the correlation function includes proper _solitonic defects_ or the spacetime has a special topology.
Let us now introduce the notion of solitonic defects. For some \(0\leq p\leq d\!-\!1\), we take a \(p\)-dim submanifold \(N\) of the spacetime and excise its infinitesimal neighborhood. Then a \(p\)-dim Dirichlet defect operator on \(N\), which we call a solitonic defect operator, is defined by putting the Dirichlet boundary condition on the boundary of the excised region. More formally, it is a Dirichlet boundary condition on the normal sphere bundle \(\mathcal{S}N\) of \(N\) (Locally, \(\mathcal{S}N\simeq N\times S^{d-p-1}\)). It can be virtually viewed as a \((d\!-\!1)\)-dim submanifold in the spacetime via a tubular neighborhood. According to Sec. 2.1, the deformation classes of such Dirichlet boundary conditions can be expressed by deformation classes of maps
\[\sigma|_{\mathcal{S}N}:\mathcal{S}N\to Y\,. \tag{2.10}\]
Namely, the deformation classes of solitonic defects on \(N\) are classified by
\[\left[\mathcal{S}N,Y\right]\,. \tag{2.11}\]
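For instance, for a point-like defect (\(p=0\)) in \(d=3\) with target \(Y\simeq S^{2}\), the normal sphere bundle is \(\mathcal{S}N\simeq S^{2}\), so the deformation classes of such solitonic defects are classified by
\[\left[S^{2},S^{2}\right]\simeq\pi_{2}S^{2}\simeq\mathbb{Z}\,,\]
i.e., by the familiar hedgehog winding number.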
Solitonic defects can be either topological or non-topological operators, depending on the details of the theory. If topological, they also generate symmetry, but "on the electric side" and thus non-solitonic.
Solitonic defects couple to a nonperturbative sector called the _solitonic sector_ which appears because of the nontrivial topology of the target space5. When we put solitonic
defects in the spacetime and evaluate the correlation function via the path integral, the configurations to be integrated are topological solitons bounded by those defects (see Sec. 2.2.3 for a few examples). In particular, the tree-level contribution to the correlation functions comes from solitonic solutions of the classical equation of motion. Therefore, the solitonic sector in quantum theory can be viewed as the quantization of classical solitons, and the solitonic defects are the creation/annihilation operators for quantum solitons. Aside from putting solitonic defects, arranging a nontrivial spacetime topology is also a common method to visualize the solitonic sector.
It is now clear that the charged objects of solitonic symmetry are exactly those solitonic objects introduced above. Solitonic symmetry puts conserved charges on these solitonic objects, which we call _topological charges_. The conservation law of topological charges constrains the correlation functions among solitonic defects and prescribes the selection rule in the physical processes in which topological solitons are involved.
Second, a symmetry prescribes the possible couplings to background gauge fields. This coupling is topological for solitonic symmetry, namely a topological term in the action. Typically, the coupling is directly related to the solitonic sector. However, some couplings are not related to any authentic solitonic sector. For example, a U(1) Chern-Simons path integral
\[\int{\cal D}a\,\exp\left\{\frac{{\rm i}k}{4\pi}\int a{\rm d}a\right\} \tag{2.12}\]
has no physical solitonic sector since monopoles are not gauge invariant as point-like local operators. However, we can still couple a \(U(1)\) background gauge field \(A\) topologically via
\[\int{\cal D}a\,\exp\left\{\frac{{\rm i}k}{4\pi}\int a{\rm d}a+\frac{{\rm i}} {2\pi}\int a{\rm d}A\right\}\,. \tag{2.13}\]
The information of the existence of such possible topological couplings with a background \(U(1)\) gauge field is also encoded in a 0-form \(U(1)\) solitonic symmetry from \(Y\simeq BU(1)\), although the corresponding solitonic objects are unphysical.
Another general case is that, due to dimensional reasons, no solitonic defect carries a (\(-1\))-form topological charge. Instantons, the charged objects under (\(-1\))-form solitonic symmetry, are just classical objects. Thus (\(-1\))-form solitonic symmetry, generated by \(\theta\)-angles, never really rules a quantum solitonic sector but instead prescribes the topological couplings to a background "0-form gauge field", i.e., a background axion field.
#### 2.2.3 Conventional wisdom and homotopy groups
In the conventional discussion on topological solitons, their stability and the selection rules are usually discussed using the homotopy group of the target space. To be concrete, let us consider two simple examples:
1. An \(S^{1}\) sigma model has a \((d{-}2)\)-dim solitonic defect, on which a kink can end. Their \((d{-}2)\)-form topological charge is related to \(\pi_{1}S^{1}\simeq\mathbb{Z}\). This gives rise to a \((d{-}2)\)-form \(U(1)\) solitonic symmetry.
2. A pure U(1) gauge theory or a U(1) Higgs model has a \((d\!-\!3)\)-dim 't Hooft defect, on which a magnetic flux or a gauge vortex can end. Their \((d\!-\!3)\)-form topological charge is related to \(\pi_{2}BU(1)\simeq\mathbb{Z}\). This gives rise to a \((d\!-\!3)\)-form \(U(1)\) solitonic symmetry.
In these examples, the solitons and their conservation laws are controlled by \(\pi_{\bullet}Y\), the homotopy groups of the target space \(Y\). This conventional treatment suggested a conventional wisdom that the \(p\)-form solitonic symmetry is described by
\[\text{Hom}\Big{(}\pi_{d-p-1}Y,\,U(1)\Big{)}\,. \tag{2.14}\]
Indeed, in the two examples above, the relevant topological functionals can be constructed via the universal invertible form (2.7). Thus the solitonic symmetries in these two examples are indeed invertible and described by Eq. (2.14).
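For concreteness, writing the \(S^{1}\)-valued scalar as \(\mathrm{e}^{\mathrm{i}\phi}\) and the \(U(1)\) gauge field as \(a\) (a choice of notation made only for this illustration), the corresponding functionals take the familiar forms
\[U_{\theta}(C)=\exp\left\{\mathrm{i}\theta\oint_{C}\frac{\mathrm{d}\phi}{2\pi}\right\}\,,\qquad U_{\theta}(S)=\exp\left\{\mathrm{i}\theta\int_{S}\frac{\mathrm{d}a}{2\pi}\right\}\,,\qquad\theta\in\mathbb{R}/2\pi\mathbb{Z}\,,\]
on a closed 1-dim manifold \(C\) and a closed 2-dim manifold \(S\), measuring the winding number and the magnetic flux, respectively.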
We would like to note that the above examples are actually selected rare cases. In general, most of the topological charges prescribed by homotopy groups cannot be detected by the universal invertible construction (2.7), and the solitonic symmetry is not given by Eq. (2.14). The key problem is the following.
* The notion of the dimension of a solitonic configuration is, in general, ambiguous. A generic soliton may be a mixture of solitons of definite dimensions.
This is especially often the case for the solitonic configuration bounded by solitonic defects, which determines the correlation function of these solitonic defects.
* A \(p\)-dim solitonic defect may be able to create/annihilate not only \((p+1)\)-dim solitons but also solitons of all dimensions \(<p+1\).
* A \(p\)-dim solitonic defect can carry \(q\)-form topological charges for all \(0\leq q\leq p\).
This is not surprising: Any conserved \(q\)-form charge can be carried by operators of dimension \(\geq q\), which has already been noticed since the beginning of generalized symmetry [2]. This phenomenon is the starting point that led us to topological charges beyond homotopy groups in the \(\mathbb{C}P^{1}\) model [1], which provides an example of non-invertible solitonic symmetry. The entire Sec. 5 will be devoted to discussing how general solitonic symmetry goes beyond homotopy groups and becomes non-invertible.
## 3 Topological functional
From the contemporary perspective, understanding solitonic symmetry exactly means understanding topological functionals. We shall carefully inspect the notion of topological functionals in this section. At first glance, functional operators look more orthodox and simpler than defect operators. However, we will find that they are much subtler than defects, and several physical requirements vastly constrain a well-behaved notion of topological functionals. Pursuing a well-behaved notion of topological functionals, we will eventually bring ourselves to Ansatz 2 which claims that topological functionals are best understood as the partition functions of auxiliary fully-extended TQFTs.
### The identity problem
As we have mentioned at the beginning of Sec. 2.2.1, a topological functional on a manifold \(M\) has to factor through \([M,Y]\). However, not every \([M,Y]\mapsto\mathbb{C}\) gives rise to topological functionals. A priori, only the topology of \(M\) matters for a topological functional. We cannot distinguish the original \(M\) and a new \(M\) transformed by a self-diffeomorphism \(M\stackrel{{ f}}{{\to}}M\). Accordingly, let us consider two different configurations \(M\stackrel{{ a}}{{\to}}Y\) and \(M\stackrel{{ b}}{{\to}}Y\) such that \(f\) transforms one to the other, say \(b=f\circ a\). Then, a topological functional we can construct with our bare hands must be blind to the difference between \(a\) and \(b\). More formally, let us consider the mapping class group \(\pi_{0}\mathrm{Diff}(M)\), i.e., the group of the isotopy classes of self-diffeomorphisms on \(M\). This group acts on \([M,Y]\) as described above. Then, a topological functional should actually factor through the equivalence classes,
\[[M,Y]\Big{/}\pi_{0}\mathrm{Diff}(M)\,. \tag{3.1}\]
Unfortunately, in most interesting cases, the \(\pi_{0}\mathrm{Diff}(M)\)-action on \([M,Y]\) results in vast degeneracies. Many elements in \([M,Y]\) must be considered identical. We thus call this the identity problem. This problem terribly prevents us from having sufficiently many meaningful topological functionals.
There is one way out, to add some extra structure to \(M\). The self-diffeomorphism \(f\) may transform this structure into a different structure of the same type. If this happens, we can distinguish \(a\) and \(b\) by their relative relationship to the extra structure. More formally, now we should consider the mapping class group \(\pi_{0}\mathrm{Diff}(M,\gamma)\subseteq\pi_{0}\mathrm{Diff}(M)\) of isotopy classes of self-diffeomorphisms that preserve the extra structure \(\gamma\). Then a topological functional that relies on \(\gamma\) should factor through the equivalence classes,
\[[M,Y]\Big{/}\pi_{0}\mathrm{Diff}(M,\gamma)\,. \tag{3.2}\]
In the consistent practice of physicists, we almost always unconsciously assume some extra structure; recall "\(\mathbb{E}\)-orientation" for Eq. (2.7). It is precisely the identity problem that motivates our subconscious to do so. Such \(\mathbb{E}\)-orientations are primary examples of the structure \(\gamma\). Thus we shall just call \(\gamma\) a generalized orientation. Consequently, a topological functional that can detect sufficiently many deformation classes of field configurations needs to rely on a generalized orientation \(\gamma\).
#### Example: orientation
For example, let us consider \(Y\simeq S^{2}\) and \(M\simeq S^{2}\). Then we have \([M,Y]\simeq\pi_{2}(S^{2})\simeq\mathbb{Z}\). Let us consider the reflection self-diffeomorphism \(f\) defined by \(f(\vec{n})=-\vec{n}\) where we view \(S^{2}\subseteq\mathbb{R}^{3}\). This self-diffeomorphism generates
\[\pi_{0}\mathrm{Diff}(S^{2})\simeq\mathbb{Z}_{2}\,. \tag{3.3}\]
Clearly, \(f\) acts on \([M,Y]\) via \(\mathbb{Z}\to-\mathbb{Z}\). Thus with bare hands, we cannot distinguish two configurations labeled by opposite integers. To break the ice, we note that \(M\) is orientable.
Recall that orientations form a \(H^{0}(-;\mathbb{Z}_{2})\)-torsor. Thus there are two orientations on \(M\), \(\xi\) and \(\xi^{\prime}\), which are exchanged under the transformation of \(f\). That is,
\[\pi_{0}\text{Diff}(S^{2},\xi)\simeq 0\,. \tag{3.4}\]
Combining configurations with orientations, we see that for an \(n\in\mathbb{Z}\simeq[M,Y]\), \((n,\xi)\stackrel{{ f}}{{\longleftrightarrow}}(-n,\xi^{\prime})\) and \((-n,\xi)\stackrel{{ f}}{{\longleftrightarrow}}(n,\xi^{\prime})\) are distinct from each other and cannot be mixed by \(f\). Based on an orientation, we can construct the following topological functional,
\[U_{\theta}(M)\ \equiv\ \exp\left\{\mathrm{i}\theta\int_{M}\sigma^{*}b\right\}\,, \qquad\forall\theta\in\mathbb{R}/2\pi\mathbb{Z}\,, \tag{3.5}\]
where \(b\) denotes the canonical 2-form on \(S^{2}\) that integrates to 1. This operator can distinguish any two elements in \([M,Y]\). We can also recast operator (3.5) into the universal form (2.7) by choosing \(\mathbb{E}\simeq\mathbb{H}\mathbb{Z}\). Concretely, in Eq. (2.7), we take \(\omega\) as a generator of \(H^{2}(Y;\mathbb{Z})\simeq\mathbb{Z}\), and \(g\in\text{Hom}(\mathbb{Z},U(1))\simeq\mathbb{R}/2\pi\mathbb{Z}\).
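As a quick evaluation, on a configuration of degree \(n\in\mathbb{Z}\simeq[M,Y]\) with the chosen orientation \(\xi\), the operator (3.5) gives
\[U_{\theta}(M)=\mathrm{e}^{\mathrm{i}n\theta}\,,\]
while evaluating it with the opposite orientation \(\xi^{\prime}\) gives \(\mathrm{e}^{-\mathrm{i}n\theta}\), consistent with the exchange \((n,\xi)\leftrightarrow(-n,\xi^{\prime})\) described above.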
#### Example: spin structure
Besides orientations, other structures may also be needed, such as spin structures. An interesting example was presented in our earlier work [1] (also implicitly in Ref. [71]). We consider \(Y\simeq S^{2}\) and \(M\simeq S^{2}\times S^{1}\). With some effort we can compute
\[[S^{2}\!\times\!S^{1},S^{2}]\,\simeq\,\left\{(m,\ell)\,\big{|}\,m\in\mathbb{Z },\,\ell\in\mathbb{Z}_{2|m|}\right\}, \tag{3.6}\]
where \(\mathbb{Z}_{0}\) means \(\mathbb{Z}\). For a configuration \(\sigma\), the \(m\) labels \(\sigma|_{S^{2}\!\times\!\{p\}}\) in \(\pi_{2}(S^{2})\simeq\mathbb{Z}\) for an arbitrary \(p\in S^{1}\). Now consider the twist diffeomorphism \(f\) defined by \(f(\vec{n},t)\equiv(\mathrm{e}^{t\vec{i}\times}\vec{n},t)\), where we view \(S^{2}\subseteq\mathbb{R}^{3}\) and \(S^{1}\simeq\mathbb{R}/2\pi\mathbb{Z}\), as well as another diffeomorphism \(g\) defined by \(g(\vec{n},t)\equiv(-\vec{n},-t)\). Both \(f\) and \(g\) preserve an orientation. Actually, for an orientation \(\xi\), they generate
\[\pi_{0}\text{Diff}(S^{2}\!\times\!S^{1},\xi)\simeq\mathbb{Z}_{2}\times\mathbb{ Z}_{2}\,. \tag{3.7}\]
Both \(f\) and \(g\) induce an almost double degeneracy on \([M,Y]\): \((m,\ell)\stackrel{{ g}}{{\longleftrightarrow}}(-m,\ell)\) while \(f\) exchanges \((m,\ell_{1})\) and \((m,\ell_{2})\) if \(2\ell_{1}=2\ell_{2}\). For example, \(f\) transforms \((\vec{n},t)\mapsto\vec{n}\) into \((\vec{n},t)\mapsto\mathrm{e}^{t\vec{i}\times}\vec{n}\), and these two maps belong to different classes \((1,0)\) and \((1,1)\) in \([M,Y]\), respectively.
We would like to lift the \(f\)-degeneracy. We note that \(M\) is spinnable. Recall that, on top of a given orientation, spin structures form a \(H^{1}(-;\mathbb{Z}_{2})\)-torsor. Since \(H^{1}(S^{2}\!\times\!S^{1};\mathbb{Z}_{2})\simeq\mathbb{Z}_{2}\), \(M\) has two different spin structures \(\rho\) and \(\rho^{\prime}\) on top of an orientation \(\xi\). They are exchanged by \(f\). That is,
\[\pi_{0}\text{Diff}(S^{2}\!\times\!S^{1},\xi,\rho)\simeq\mathbb{Z}_{2}\,, \tag{3.8}\]
where only the \(g\)-degeneracy remains. Based on a spin structure, we can construct the following topological functional as a spin Chern-Simons integral,
\[U_{k}(M)\ \equiv\ \exp\left\{\mathrm{i}k\int_{M}\frac{\sigma^{*}\!a\, \mathrm{d}\sigma^{*}\!a}{4\pi}\right\}\,, \tag{3.9}\]
where \(k\in\mathbb{Z}\), \(k\sim k+2\), and \(a\) denotes the \(U(1)\) gauge field on \(S^{2}\) associated with the Hopf fibration \(S^{1}\to S^{3}\to S^{2}\), which is related to \(b\) in the former example (3.5) via \(b=\mathrm{d}a/2\pi\). We can also recast operator (3.9) into the universal form (2.7) by choosing \(\mathbb{E}\simeq\mathbb{K}\mathbb{O}\) since a \(\mathbb{K}\mathbb{O}\)-orientation is exactly a spin structure. Concretely, in Eq. (2.7), we take \(\omega\) as a generator of \(KO^{2}(Y)\simeq\mathbb{Z}\), and \(g\in\mathrm{Hom}(KO_{1},U(1))\simeq\mathrm{Hom}(\mathbb{Z}_{2},U(1))\). The operator (3.9) can lift the almost double degeneracy in \([M,Y]\) caused by \(f\). Nevertheless, distinguishing other elements in \([M,Y]\) (except the \(g\)-degeneracy) requires non-invertible topological functionals based on a spin structure as the present authors showed in Ref. [1], which will also be discussed in Sec. 5.2.2 in this paper.
### The coherence problem
The identity problem concerns a topological functional on a single supporting manifold only. An even more severe problem appears if we move from one supporting manifold to another. Namely, for manifolds \(M_{1}\not\simeq M_{2}\), given a function on \([M_{1},Y]\) and another function on \([M_{2},Y]\), how can we tell whether they are just different topological functionals or different realizations of the same topological functional? An incorrect assignment would lead to wrong solitonic physics. We call this the coherence problem.
We can learn hints from the universal construction (2.7) for invertible solitonic symmetry. It is naturally defined on any closed \(\mathbb{E}\)-orientable manifold with a chosen \(\mathbb{E}\)-orientation. Also, its concrete incarnations in operators (3.5) and (3.9) are automatically defined on any closed oriented manifold and any closed spin manifold, respectively. It is natural to regard the operators on different manifolds but defined by the same expression Eq. (2.7) as the different realizations of the same topological functional.
The above observation suggests that the solution to the coherence problem is to require locality. Recall that defect operators are defined by infinitesimal boundary conditions around the operator (see Sec. 2.2.2). This definition concerns field configurations in the vicinity of each point on the supporting manifold and thus satisfies a good sense of locality. However, a naively defined functional operator may behave quite non-locally and might not yield a physically sensible operator. We propose that a functional operator satisfies locality if it is the multiplication of piecewise data localized around each point. The universal invertible construction (2.7) provides the special cases where the multiplication is given by the "exponentiation" of a summation, represented by morphisms to \(U(1)\). Furthermore, gluing these local data may require a generalized orientation on supporting manifolds. In summary,
\[\text{Locality}\equiv\text{Multiplying local data with respect to generalized orientation}\,. \tag{3.10}\]
A functional satisfying locality automatically renders coherence on a class of manifolds with the prescribed generalized orientation.
A natural subsequent question is whether there is a universal construction of multiplying local data, which goes beyond the exponentiation (8) and can capture non-invertible cases. We propose a positive answer: The universal way of multiplying local data is another path integral, and a most general functional that satisfies locality is the partition function
of another fully-extended QFT. Namely, we can consider some auxiliary fields that inhabit the operator manifold only, couple them to the dynamical quantum fields, and perform a path integral of the auxiliary fields. The output of this auxiliary path integral is the partition function of an auxiliary fully-extended QFT that inhabits the operator manifold only. We optimistically assume that all functional operators satisfying locality can be produced by the partition functions of some auxiliary fully-extended QFT. In particular, we require all topological functionals to satisfy locality, i.e., all topological functionals \(\mathcal{T}[M,\sigma]\) are assumed to be produced by the partition functions of auxiliary topological fully-extended QFTs (TQFTs) coupled to the target space \(Y\):
\[\mathcal{T}[M,\sigma]=\text{TQFT partition function that couples with }\sigma|_{M}:M\to Y. \tag{3.11}\]
In particular, this construction includes Eq. (2.7) as its special cases for invertible topological functionals. Also, this construction is consistent with Propositions 2.3 and 2.1. We now flesh out the precise connotation of "TQFT" in our proposal.
### Our ansatz
In a specific theory, which generalized orientation the topological functionals rely on should be determined by the theory itself. Therefore, instead of studying topological functionals for an arbitrary generalized orientation, we focus on the most common generalized orientations in this paper. The most fundamental property of a theory that determines the generalized orientation is particle statistics. Though it might be surprising at first glance, a non-Grassmann path integral can be used to define not only a bosonic QFT that inhabits any oriented spacetime but also a fermionic QFT that inhabits any spin spacetime. These fermionic theories are not bosonic in disguise, i.e., the \(\mathbb{Z}_{2}\)-grading \((-)^{F}\) on states is nontrivial, as long as the action includes proper spin topological terms.
In a bosonic (resp. fermionic) theory, topological functionals are supposed to inhabit oriented (resp. spin) manifolds. This is natural since the theory itself inhabits oriented (resp. spin) spacetime. A more persuasive rationale is that all the solitonic defects (see Sec. 2.2.2), the charged operators of topological functionals, are defined by field configurations on oriented (resp. spin) manifolds. To see this, one notes that the normal sphere bundle \(\mathcal{S}N\) of any closed submanifold \(N\) in the spacetime naturally inherits an orientation (resp. a spin structure) from the spacetime, even if \(N\) itself is not orientable or spinnable. Consequently, the example in Eq. (3.9) cannot exist in bosonic theories. We focus on the elementary cases of bosonic and fermionic theories in the paper while leaving the theories where other interesting generalized orientations6 are involved to future work.
Footnote 6: Such theories appear typically when one wants to take into account (1) unorthodox statistics, (2) discrete spacetime symmetry, (3) mixing between spacetime and internal (non-solitonic) symmetry, and (4) conditions on higher objects than particles.
We now have all the ingredients to formulate an Ansatz for topological functionals, albeit, unfortunately, only under a finiteness condition on \(Y\).
**Definition 3.1**: _Topological space \(Y\) is \(n\)-finite if \(\pi_{0}Y\) is finite and \(\pi_{q}(Y,y)\) is finite for all \(0\leq q\leq n\) and \(y\in Y\)._
If \(M\) is a closed manifold of dimension \(\leq n\), \([M,Y]\) is finite when \(Y\) is \(n\)-finite. \([M,Y]\) may have infinitely many elements if \(Y\) is not \(n\)-finite. Infinitely many topological charges are a sign of continuous symmetry. The existence of infinitely many elements in \([M,Y]\) and of infinitesimal symmetry transformations causes tricky technical troubles. We shall present a systematic treatment of discrete solitonic symmetry but only an approximate treatment of continuous solitonic symmetry.
Let us describe our ansatz for topological functionals responsible for discrete solitonic symmetry. To produce bosonic (resp. fermionic) topological functionals, TQFTs themselves must be bosonic (resp. fermionic). Their partition functions inhabit closed manifolds equipped with a map to the target space \(Y\), \(\sigma|_{M}:M\to Y\). To justify "multiplication of local data", these TQFTs must be maximally local, i.e., they should be fully-extended TQFTs, and we reach the following ansatz7:
Footnote 7: When the theory itself is topological, there have been attempts to classify defect operators [72], which is consistent with our Ansatz.
**Ansatz 3.2**: _For an \(n\)-finite space \(Y\), an \(n\)-dim bosonic (resp. fermionic) topological functional to \(Y\) is the partition function of an \(n\)-dim bosonic (resp. fermionic) \(Y\)-enriched fully-extended TQFT._
We regard this ansatz as a complete characterization of topological functionals. Based on the physical principle that a QFT is completely determined by its partition functions (in the presence of various kinds of background fields), we make the following conjecture.
**Conjecture 3.3**: _For an \(n\)-finite space \(Y\), inequivalent \(n\)-dim bosonic (resp. fermionic) \(Y\)-enriched fully-extended TQFTs produce different \(n\)-dim bosonic (resp. fermionic) topological functionals to \(Y\)._
Ansatz 3.2 and Conjecture 3.3 will be the foundation for our analysis of solitonic symmetry in this paper. We note that similar TQFTs often appear in a thriving contemporary theme of physics, the classification of gapped phases. In particular, they are tightly related to the notion of symmetry-protected topological phases and symmetry-enriched topological orders.
To conclude this section, we briefly discuss continuous solitonic symmetry. There are basically two problems. First, although infinitesimal symmetry generators can be unbounded (self-adjoint operators in the invertible case), finite symmetry operators must be bounded (unitary operators in the invertible case), which means we have to require a topological functional \([M,Y]\mapsto\mathbb{C}\) to be bounded. Second, Conjecture 3.3 fails due to the existence of non-semisimple TQFTs, which superfluously reproduce the same partition functions as semisimple TQFTs; recall that for group representations \(R\not\simeq A\oplus R/A\), we still have \(\mathrm{tr}_{R}=\mathrm{tr}_{A}+\mathrm{tr}_{R/A}=\mathrm{tr}_{A\oplus R/A}\). This paper does not attempt a systematic treatment of continuous solitonic symmetry. Instead, we shall be satisfied with the discrete subsymmetries of continuous solitonic symmetry. This is realized by considering \(n\)-finite homotopy quotients of \(Y\).
**Definition 3.4**: \(Z\) _is a homotopy quotient of \(Y\) if there is a map \(f:Y\mapsto Z\) such that \(f_{*}:\pi_{0}Y\to\pi_{0}Z\) is surjective and \(f_{*}:\pi_{q}(Y,y)\to\pi_{q}(Z,f(y))\) is an epimorphism for all \(q>0\) and \(y\in Y\)._
Picking up an \(n\)-finite homotopy quotient \(Z\) of \(Y\) and considering topological functionals that factor through \([-,Z]\), we obtain a discrete solitonic subsymmetry. We believe that the colimit of all discrete subsymmetries (see Sec. 4.3.1) leads to an almost faithful approximation to a continuous symmetry, just like approximating \(U(1)\) by \(\mathbb{Q}/\mathbb{Z}\simeq\bigcup_{n\in\mathbb{N}}\mathbb{Z}_{n}\). Besides, we shall find it easy to describe the continuous solitonic symmetry directly in some concrete examples.
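As a simple illustration of this approximation strategy (taking \(Y\simeq S^{1}\simeq B\mathbb{Z}\) purely for concreteness), the reduction-mod-\(k\) maps \(B\mathbb{Z}\to B\mathbb{Z}_{k}\) are \(n\)-finite homotopy quotients, and on a circle
\[[S^{1},B\mathbb{Z}_{k}]\simeq\mathbb{Z}_{k}\,,\]
so the resulting discrete solitonic subsymmetries are the subgroups \(\mathbb{Z}_{k}\subset U(1)\) of the winding symmetry, whose union indeed recovers \(\mathbb{Q}/\mathbb{Z}\subset U(1)\).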
## 4 Algebraic structure of solitonic symmetry
We are going to reveal the universal algebraic structure of solitonic symmetry based on Ansatz 3.2 and Conjecture 3.3. First in Sec. 4.1, we present a short mathematical preliminary on _higher-categories_. Then in Sec. 4.2, we shall formulate the mathematical notion of fully-extended TQFTs to clarify the accurate connotation of Ansatz 3.2. Finally in Sec. 4.3, we shall discuss the mathematical structure that describes the algebraic structure of solitonic symmetry.
This section aims to establish a more or less rigorous mathematical ground for solitonic symmetry. Thus our expositions might inevitably look more or less abstract. However, readers acquainted with the issue of classifying gapped phases by fully-extended TQFTs will find the expositions familiar and recognize tremendous echoes.
### Preliminaries on higher-categories
The most efficient way to formulate fully-extended TQFTs needs the package of higher-categories. The goal here is to overview this nice mathematical package briefly. We are not attempting a self-contained exposition, but instead, we present an oversimplified introduction following Sec. 1.3 of Ref. [64]. Unfamiliar readers could find good entrances in the literature like Refs. [64, 73, 74] to get started into this evolving contemporary discipline.
#### \(n\)-category
To start with, let us recollect the definition of categories. A category comprises (i) objects, (ii) a set \(\operatorname{Hom}(x,y)\) between any two objects \(x\) and \(y\), and (iii) an associative unital composition map \(\operatorname{Hom}(x,y)\times\operatorname{Hom}(y,z)\to\operatorname{Hom}(x,z)\) for any three objects \(x\), \(y\), and \(z\). In particular, the composition map makes \(\operatorname{Hom}(x,x)\) a monoid. Elements of \(\operatorname{Hom}(x,y)\) are called morphisms. One of the main reasons why categories are useful is that they are regarded as soft instead of rigid. Namely, we regard two categories as "the same" as long as they are equivalent rather than isomorphic. This is similar to algebraic topology, which cares about (weak) homotopy equivalences instead of homeomorphisms.
Generalizing the above definition, we can sketch the notion of \(n\)-categories by an induction. At the root of this induction, a \(0\)-category is a set, and its elements are called objects or \(0\)-morphisms. The induction then goes as follows. An \(n\)-category comprises (i) objects,
also called \(0\)-morphisms, (ii) a small \((n\!-\!1)\)-category \(\operatorname{Hom}(x,y)\) between any two objects \(x\) and \(y\), and (iii) an associative unital composition functor \(\operatorname{Hom}(x,y)\times\operatorname{Hom}(y,z)\to\operatorname{Hom}(x,z)\) for any three objects \(x\), \(y\), and \(z\). In particular, the composition functor makes \(\operatorname{Hom}(x,x)\) a monoidal \((n\!-\!1)\)-category. For all \(0\leq p<n\), \(p\)-morphisms of \(\operatorname{Hom}(x,y)\) are called \((p\!+\!1)\)-morphisms of this \(n\)-category. Clearly, if we take \(n=1\), we just recover the definition for an ordinary category. We can conceive an \(\infty\)-category as a proper limit of \(n\)-categories as \(n\) approaches \(\infty\). Its \(\operatorname{Hom}(x,y)\)'s are also \(\infty\)-categories.
The above definition sketch looks promising but hides the subtlety in treating the associativity and the unitality for compositions. We do not need the too rigid notion of strict \(n\)-categories, where these conditions are satisfied literally. Instead, we need weak \(n\)-categories, where these conditions are satisfied up to specified equivalences. For example, in the case of \(n=2\), we want \(\operatorname{Hom}(X,X)\) to be a (weak) monoidal category rather than a strict monoidal category. However, as \(n\) increases, accurately characterizing all the axioms rapidly becomes a formidable task. Different models for organizing this have been proposed, and the equivalence between them, though widely believed, is a matter of ongoing research. We shall drop the prefix "weak" henceforth.
We now introduce two convenient notations. Let us consider an \(n\)-category \(\mathsf{C}\) with a distinguished object \(1_{\mathsf{C}}\). For example, \(\mathsf{C}\) might be a monoidal \(n\)-category, which is an \(n\)-category with the "tensor product" \(\otimes\) and the unit object \(1_{\mathsf{C}}\). We conceive a monoidal \((n\!-\!1)\)-category \(\Omega\mathsf{C}\) via
\[\Omega\mathsf{C}\equiv\operatorname{Hom}(1_{\mathsf{C}},1_{\mathsf{C}})\,. \tag{4.1}\]
When \(\mathsf{C}\) is a monoidal \(n\)-category, we conceive a one-object \((n\!+\!1)\)-category \(B\mathsf{C}\) such that
\[\Omega B\mathsf{C}=\mathsf{C}\,. \tag{4.2}\]
Remarkably, if \(\mathsf{C}\) is further symmetric, \(B\mathsf{C}\) has a canonical symmetric monoidal structure. In this particular case, we can further define iterated \(B^{n}\mathsf{C}\). The two operations \(\Omega\) and \(B\) are apparently borrowed from _looping_ and _delooping_ in algebraic topology. The reason why they are adopted will be clear shortly.
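For a familiar example, a group \(G\) can be regarded as a monoidal \(0\)-category (its underlying set equipped with the group multiplication); its delooping \(BG\) is then the one-object category with
\[\operatorname{Hom}(1_{BG},1_{BG})=G\,,\qquad\text{so that}\qquad\Omega BG=G\,,\]
and when \(G\) is Abelian (hence symmetric as a monoidal \(0\)-category), the iterated deloopings \(B^{n}G\) are defined, matching the familiar Eilenberg-MacLane deloopings on the topological side.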
#### Space and \(n\)-groupoid
We can extract an \(n\)-category \(\pi_{\leq n}X\) from each topological space \(X\). Objects thereof are points in \(X\), \(1\)-morphisms are homotopies of objects (paths), \(2\)-morphisms are homotopies of \(1\)-morphisms (based homotopies of paths), \(3\)-morphisms are homotopies of \(2\)-morphisms (based homotopies of based homotopies of paths), and so on inductively, until the \(n\)-morphisms, which are the homotopy classes of homotopies of \((n\!-\!1)\)-morphisms. Note that we can allow \(n=\infty\) by never terminating the induction and dropping the final step of taking homotopy classes in the above construction.
The \(n\)-category \(\pi_{\leq n}X\) has a special property that all of its morphisms are invertible up to equivalence. In general, an \(n\)-category with such a property is called an \(n\)-groupoid. Thus \(\pi_{\leq n}X\) is called the fundamental \(n\)-groupoid of \(X\). This \(\pi_{\leq n}X\) knows a great deal about the topology of \(X\). First, the equivalence classes of objects in \(\pi_{\leq n}X\) exactly constitute \(\pi_{0}X\). Second, for any base point \(x\in X\), the fusion monoid of the equivalence classes of objects
in \(\Omega^{q}\pi_{\leq n}X\) is exactly the homotopy group \(\pi_{q}(X,x)\) for \(q\leq n\) and the trivial group for \(q>n\), accordingly. Furthermore, the first \(n\) stages of the Postnikov tower of each path component of \(X\) are entirely encoded in \(\pi_{\leq n}X\). In other words, the fundamental \(n\)-groupoid completely determines the homotopy \(n\)-type of the space. Namely, \(\pi_{\leq n}X\simeq\pi_{\leq n}Y\) as long as there is an \((n\!+\!1)\)-connected map between \(X\) and \(Y\).
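As a basic illustration, take \(X\simeq S^{1}\). Its fundamental \(1\)-groupoid is equivalent to the one-object groupoid with
\[\operatorname{Hom}(\ast,\ast)\simeq\pi_{1}S^{1}\simeq\mathbb{Z}\,,\]
i.e., \(\pi_{\leq 1}S^{1}\simeq B\mathbb{Z}\); since all higher homotopy groups of \(S^{1}\) vanish, this \(1\)-groupoid already captures the full homotopy type of \(S^{1}\).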
It is a classical theorem that every groupoid is equivalent to the fundamental groupoid of some space. Since Quillen [75] and Grothendieck [76], it has also been generally accepted that every \(n\)-groupoid is equivalent to the fundamental \(n\)-groupoid of some space. Therefore, based on the homotopical property discussed above, we have the following equivalence.
**Philosophy 4.1**: _An \(n\)-groupoid is equivalent to a homotopy \(n\)-type. That is, an \(n\)-groupoid is an \(n\)-aspherical space, and the equivalence between \(n\)-groupoids is the weak homotopy equivalence between \(n\)-aspherical spaces._
One can interpret this equivalence as saying that \(n\)-categories are non-invertible generalizations of homotopy \(n\)-types. Let us look at the particular case of \(n=\infty\). Then \(\pi_{\leq\infty}X\) encodes the entire Postnikov tower of \(X\) and completely determines the homotopy type of \(X\). The above equivalence then suggests that we can model \(\infty\)-groupoids by spaces.
**Philosophy 4.2**: _An \(\infty\)-groupoid is equivalent to a homotopy type. That is, an \(\infty\)-groupoid is a space, and the equivalence between \(\infty\)-groupoids is the weak homotopy equivalence between spaces._
An invertible monoidal \((n\!-\!1)\)-groupoid is called an \(n\)-group. An \(\infty\)-group is equivalent to a loop space. \(\Omega\) and \(B\) establish a one-to-one correspondence between \(n\)-groups and one-object \((n\!+\!1)\)-groupoids.
We also unify the symbols for spaces and higher-groupoids. We shall write \(\pi_{\leq\infty}X\) just as \(X\) and abandon the now tautological symbol \(\pi_{\leq\infty}X\). We shall also write the homotopy \(n\)-type of \(X\), incarnated by the \(n\)-th Postnikov truncation of \(X\), just as \(\pi_{\leq n}X\). It is reasonable to regard the higher-category theory as the non-invertible generalization of algebraic topology, in the sense that any correct model for higher-categories should produce philosophies 4.1 and 4.2 as theorems.
#### \((n,r)\)-category
We have now learned that the invertibility of morphisms is a distinguished property for higher-categories. Thus people introduced the notion of \((n\!+\!r,n)\)-categories to specify the information about invertibilities. From one perspective, an \((n\!+\!r,n)\)-category is just an \((n\!+\!r)\)-category whose \(p\)-morphisms are invertible up to equivalence for all \(p>n\). For example, an \((r,0)\)-category just means an \(r\)-groupoid and an \((n,n)\)-category just means an \(n\)-category. For \(n_{1}\leq n_{2}\leq n\), an \((n,n_{1})\)-category is also an \((n,n_{2})\)-category.
From another perspective, an \((n\!+\!r,n)\)-category is an \(n\)-category enriched over \(r\)-groupoids. Namely, in the previous definition sketch of \(n\)-categories, we now choose to start the induction from \(r\)-groupoids instead of the mere sets. These \(r\)-groupoids are directly modeled by topological spaces according to philosophies 4.1 (and 4.2). This second perspective has bonus advantages in some aspects and has thus drawn vast attention. In particular, research on \((\infty,n)\)-categories is thriving.
### Fully-extended TQFT
A general \(n\)-dim fully-extended TQFT is formulated as a symmetric monoidal functor between two symmetric monoidal \((\infty,n)\)-categories, which axiomatizes the "results" of the path integral. The domain is a bordism \((\infty,n)\)-category; the codomain is a fully-dualizable \((\infty,n)\)-category. The specific physics context determines the choice of domains and codomains.
#### 4.2.1 Bordism domain \((\infty,n)\)-category
An \(n\)-tangential structure is a map \(X\stackrel{{\Gamma}}{{\rightarrow}}BO(n)\). The simplest examples include
\[n\text{-framing:}\qquad\{\ast\}\xrightarrow{fr}BO(n)\,, \tag{4.3a}\] \[n\text{-orientation:}\qquad\text{canonical }BSO(n)\xrightarrow{SO}BO(n)\,, \tag{4.3b}\] \[n\text{-spin structure:}\qquad\text{canonical }BSpin(n)\xrightarrow{Spin}BO(n)\,. \tag{4.3c}\]
For an \(n\)-tangential structure \(\Gamma\) and for \(q\leq n\), a \(q\)-dim \(\Gamma\) manifold means a \(q\)-dim manifold \(M\) equipped with a map \(M\to X\), such that the composition \(M\to X\stackrel{{\Gamma}}{{\rightarrow}}BO(n)\) classifies the \(n\)-stabilized tangent bundle of \(M\) (i.e. \(TM\oplus\mathbbm{R}^{n-q}\)).
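To unpack the simplest cases in Eq. (4.3): since orientations and spin structures are insensitive to adding trivial summands, a \(q\)-dim \(SO\) (resp. \(Spin\)) manifold is just a \(q\)-dim manifold equipped with an ordinary orientation (resp. spin structure), while a \(q\)-dim \(fr\) manifold is a \(q\)-dim manifold equipped with a trivialization
\[TM\oplus\mathbbm{R}^{n-q}\simeq\mathbbm{R}^{n}\,,\]
i.e., an \(n\)-framing.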
Given an \(n\)-tangential structure \(\Gamma\), Lurie [64, Sec. 2.2] conceives a symmetric monoidal \((\infty,n)\)-category \(\mathsf{Bord}_{n}^{\Gamma}\) as follows. In the non-invertible-morphism region of \(q\leq n\), a \(q\)-morphism is a \(q\)-dim \(\Gamma\) bordism, i.e., a \(q\)-dim \(\Gamma\) manifold with corners. In the invertible-morphism region, the hom \(\infty\)-groupoids between \(n\)-morphisms are the spaces of boundary-fixed diffeomorphisms, following Philosophy 4.2. The symmetric monoidal structure on \(\mathsf{Bord}_{n}^{\Gamma}\) is prescribed by the disjoint union. Such \(\mathsf{Bord}_{n}^{\Gamma}\)'s give the domains of a fully-extended TQFT. However, given that the codomains we will be considering are essentially \(n\)-categories, the spaces of diffeomorphisms will not contribute to our results.
We want to equip all manifolds with a map to the target space \(Y\). The simplest way to implement this is to adopt a special \(n\)-tangential structure \(e_{Y}\!\times\!\Gamma:Y\!\times\!X\to BO(n)\), the product of the collapse map \(Y\stackrel{{ e_{Y}}}{{\rightarrow}}\{\ast\}\) and another \(n\)-tangential structure \(X\stackrel{{\Gamma}}{{\rightarrow}}BO(n)\), such as those listed in Eq. (4.3). In this case, we shall particularly call \(e_{Y}\!\times\!\Gamma\) manifolds \(Y\)-enriched \(\Gamma\) manifolds, and particularly write
\[\mathsf{Bord}_{n}^{\Gamma}(Y)\equiv\mathsf{Bord}_{n}^{e_{Y}\times\Gamma}\,. \tag{4.4}\]
Such \(\mathsf{Bord}_{n}^{\Gamma}(Y)\)'s give the domains for \(Y\)-enriched fully-extended TQFTs. This notion of enriched TQFT is tightly related to symmetric TQFT (or say equivariant TQFT). Due to the dimensional reason, a map from a manifold of dimension \(\leq n\) to \(Y\) factors into \(Y\)'s homotopy \(n\)-type, \(\pi_{\leq n}Y\). That is, we have the following equivalence,
\[\mathsf{Bord}_{n}^{\Gamma}(Y)\simeq\mathsf{Bord}_{n}^{\Gamma}(\pi_{\leq n}Y)\,. \tag{4.5}\]
If \(Y\) is path connected, an \(n\)-dim \(Y\)-enriched TQFT is exactly an \(n\)-dim TQFT that acquires an action by the higher-group \(\Omega\pi_{\leq n}Y\), i.e. an \(\Omega\pi_{\leq n}Y\)-symmetric TQFT. If \(Y\) has multiple path components, we obtain a TQFT consisting of a \(\pi_{0}Y\) worth of universes, each of which is a higher-group-symmetric TQFT.
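As a familiar special case (with the finite group chosen only for illustration), take \(Y\simeq BG\) for a finite group \(G\); then, for \(n\geq 1\),
\[\Omega\pi_{\leq n}BG\simeq G\,,\]
so an \(n\)-dim \(BG\)-enriched TQFT is just an ordinary \(G\)-symmetric (equivariant) TQFT, whose partition functions are evaluated on manifolds equipped with a map \(M\to BG\), i.e., with a background \(G\)-bundle.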
Physically, a priori, any TQFT suffers from a framing anomaly and requires a framing dependence. Therefore, \(\mathsf{Bord}_{n}^{fr}(Y)\) would become a suitable choice of the domain for all TQFTs. A universal \(Y\)-enriched fully-extended TQFT with framing is a symmetric monoidal functor
\[\mathcal{Z}:\mathsf{Bord}_{n}^{fr}(Y)\to\mathsf{D}_{n}\,, \tag{4.6}\]
where the target category is a fully-dualizable symmetric monoidal \((\infty,n)\)-category \(\mathsf{D}_{n}\) as we shall see later. The maximal \(\infty\)-groupoid inside \(\mathsf{D}_{n}\) acquires a natural homotopy \(O(n)\)-action according to the cobordism hypothesis [64, Theorem 2.4.6 and Corollary 2.4.10]. This homotopy \(O(n)\)-action can be lifted to a homotopy \(\Omega X\)-action by an \(n\)-tangential structure \(X\stackrel{{\Gamma}}{{\to}}BO(n)\). If this \(\Omega X\)-action is (canonically) trivializable, the a priori framing anomaly turns out to be merely a \(\Gamma\) anomaly, and then the TQFT \(\mathcal{Z}\) does not really depend on an \(n\)-framing but an \(n\)-tangential structure \(\Gamma\) instead. Consequently, in such a situation, we can (canonically) extend the domain of \(\mathcal{Z}\) to any \(Y\)-enriched \(\Gamma\) manifolds so that
\[\mathcal{Z}^{\Gamma}:\mathsf{Bord}_{n}^{\Gamma}(Y)\to\mathsf{D}_{n}\,. \tag{4.7}\]
Following this philosophy of treatment, which manifolds a TQFT can inhabit is determined by the properties of its codomain8.
Footnote 8: This treatment may be phrased as codomain-dominated. There is also a domain-dominated treatment, where we take a universal codomain \(\mathsf{D}_{n}\) (for each \(n\)) and ask \(\Gamma\) to vary. Then the logic will be reversed, i.e., the choice of \(\Gamma=SO\) (resp. \(\Gamma=Spin\)) implies that bosonic (resp. fermionic) state spaces will be picked up in the universal codomain. Since a universal codomain is challenging to construct, we take the codomain-dominated approach.
#### 4.2.2 Physical codomain \((\infty,n)\)-category
Let \(\mathsf{D}_{n}\) denote a fully-dualizable symmetric monoidal \((\infty,n)\)-category that we want to use as the codomain for TQFTs (see Sec. 2.3 of Ref. [64] for the definition of full dualizability). For a physical TQFT from domain \(\mathsf{Bord}_{n}^{\Gamma}(Y)\), we want to assign a complex number to each closed \(n\)-dim \(Y\)-enriched \(\Gamma\) manifold as its partition function, and assign a vector space to each closed \((n{-}1)\)-dim \(Y\)-enriched \(\Gamma\) manifold as its state space. In other words, we want \(\Omega^{n-1}\mathsf{D}_{n}\) to be an \((\infty,1)\)-completion of a symmetric monoidal category of proper vector spaces. There are two inequivalent natural \((\infty,1)\)-completions of a category of vector spaces.
* Only identity higher morphisms are added. \(\mathrm{Hom}(-,-)\) has the discrete topology.
* Iterated isotopies are added. \(\mathrm{Hom}(-,-)\) has the subspace topology from some \(\mathbb{C}^{m}\).
The second completion is appropriate for classifying gapped phases because it identifies TQFT deformation classes9. However, the first completion is appropriate for our purpose because we want to speak of the partition function of each individual TQFT. Thus in this paper, we regard the first completion as the canonical \((\infty,1)\)-completion and abuse the same symbol of the category to denote also its canonical \((\infty,1)\)-completion.
The choice of \(\Omega^{n-1}\mathsf{D}_{n}\) is determined by what state spaces we want. In the bosonic case, we consider \(\Omega^{n-1}\mathsf{D}_{n}\simeq\mathsf{Vect}^{fd}\) whose objects are finite-dim \(\mathbb{C}\)-linear spaces and morphisms are \(\mathbb{C}\)-linear maps. The tensor product gives rise to a monoidal structure on \(\mathsf{Vect}^{fd}\). A swap map comes from the index exchange,
\[x\otimes y\,\mapsto\,y\otimes x\,, \tag{4.8}\]
which makes \(\mathsf{Vect}^{fd}\) a symmetric monoidal category. All objects have duals due to the finite-dim condition. It also has finite biproducts given by the direct sum, is semisimple with respect to it, and has a unique class of simple objects. It is further \(\mathbb{C}\)-linear. All these structures make \(\mathsf{Vect}^{fd}\) a symmetric fusion category.
As for the fermionic case, the state space \(V\) has a \(\mathbb{C}\)-linear involution \(\varepsilon\) called fermionic parity, which makes the state space \(\mathbb{Z}_{2}\)-graded. Such a pair \((V,\varepsilon)\) is called a super vector space. The \((+1)\)-eigenspace of \(\varepsilon\) is the bosonic sector and the \((-1)\)-eigenspace is the fermionic sector. Let \(\mathsf{sVect}^{fd}\) denote the category whose objects are finite-dim super \(\mathbb{C}\)-linear spaces and morphisms are \(\mathbb{C}\)-linear maps that commute with fermionic parities. Its monoidal structure also comes from the tensor product,
\[\big{(}X,\varepsilon_{X}\big{)}\otimes\big{(}Y,\varepsilon_{Y}\big{)}\simeq \big{(}X\otimes Y,\,\varepsilon_{X}\otimes\varepsilon_{Y}\big{)}\,. \tag{4.9}\]
\(\mathsf{sVect}^{fd}\) is also \(\mathbb{C}\)-linear, has finite biproducts, and has duals for all objects, making it a fusion category. As fusion categories, \(\mathsf{sVect}^{fd}\) is equivalent to \(\mathsf{Rep}(\mathbb{Z}_{2})\), the representation category of \(\mathbb{Z}_{2}\). It is the swap map that distinguishes them as inequivalent symmetric fusion categories. The swap map in \(\mathsf{sVect}^{fd}\) encodes the fermionic statistics following the Koszul sign rule. Namely, for fermionic parity eigenstates \(x\in X\) and \(y\in Y\), the swap map sends
\[x\otimes y\,\mapsto(-)^{|x||y|}\,y\otimes x\,, \tag{4.10}\]
where \(|\bullet|\!=\!0\) if \(\bullet\) is bosonic and \(|\bullet|\!=\!1\) if \(\bullet\) is fermionic. We take \(\Omega^{n-1}\mathsf{D}_{n}\simeq\mathsf{sVect}^{fd}\) for the fermionic case.
The final step is to determine the entire \(\mathsf{D}_{n}\). Since we do not have any requirement on higher objects than particles, our guiding principle is that no superfluous data on lower-dim manifolds, except when necessary, should be introduced. Gaiotto and Johnson-Freyd found an elegant solution in Ref. [68] (see also Ref. [77]). They noticed the Karoubi-completeness among various other features and clarified that the minimal necessary complexity is exactly the \(n\)-categorical generalization of being Karoubi-complete. Roughly speaking, being Karoubi-complete means that every idempotent comes from a splitting, among all \(p\)-morphisms. Given a Karoubi-complete monoidal \(n\)-category \(\mathsf{C}\), they defined its "stable suspension" \(\Sigma\mathsf{C}\) as the Karoubi completion of its delooping \(B\mathsf{C}\). Symbolically,
\[\Sigma\mathsf{C}\equiv\operatorname{Kar}(B\mathsf{C})\,. \tag{4.11}\]
When the above \(n\)-category \(\mathsf{C}\) is further symmetric, \(\Sigma\mathsf{C}\) turns out to be a Karoubi-complete symmetric monoidal \((n+1)\)-category [68, Theorem 4.1.1]. Given that both \(\mathsf{Vect}^{fd}\) and
\(\mathsf{sVect}^{fd}\) are Karoubi-complete, the codomain \(\mathsf{D}_{n}\) we are looking for can be given by
\[(\text{bosonic}) \quad\Sigma^{n-1}\mathsf{Vect}^{fd}\,, \tag{4.12a}\] \[(\text{fermionic}) \quad\Sigma^{n-1}\mathsf{sVect}^{fd}\,. \tag{4.12b}\]
We shall flexibly regard them as either \(n\)-categories or \((\infty,n)\)-categories (with identity higher morphisms) according to the context. Gaiotto and Johnson-Freyd also conjecture [68, Conjecture 1.4.6] that the canonical homotopy \(SO(n)\)-action [resp. \(Spin(n)\)-action] on the maximal \(\infty\)-groupoid inside \(\Sigma^{n-1}\mathsf{Vect}^{fd}\) (resp. \(\Sigma^{n-1}\mathsf{sVect}^{fd}\)) is canonically trivializable. If these conjectured properties are true, according to the discussion around Eq. (4.7), bosonic (resp. fermionic) topological functionals indeed inhabit any closed oriented (resp. spin) manifold.
#### 4.2.3 Summary of the formulation
We now summarize all the ingredients found above to present accurate formulations of the relevant TQFTs to clarify the connotation of Ansatz 3.2. A priori, any TQFT suffers from a framing anomaly and requires a framing dependence.
**Definition 4.3**: _An \(n\)-dim bosonic (resp. fermionic) \(Y\)-enriched fully-extended TQFT is a symmetric monoidal functor between symmetric monoidal \((\infty,n)\)-categories,_
\[\mathcal{B}:\mathsf{Bord}_{n}^{fr}(Y)\to\Sigma^{n-1}\mathsf{Vect}^{fd}\,, \qquad\Big{[}\text{resp.}\ \ \mathcal{F}:\mathsf{Bord}_{n}^{fr}(Y)\to\Sigma^{n-1}\mathsf{sVect}^{fd}\Big{]}\,. \tag{4.13}\]
It is reasonable to anticipate that having bosonic (resp. fermionic) state spaces logically implies inhabiting any oriented (resp. spin) spacetime, i.e., the framing anomaly should merely be an orientation (resp. spin) anomaly. This anticipation is realized by the following conjectured property of \(\Sigma^{n-1}\mathsf{Vect}^{fd}\) and \(\Sigma^{n-1}\mathsf{sVect}^{fd}\)[68]:
**Conjecture 4.4**: _The canonical homotopy \(SO(n)\)-action [resp. \(Spin(n)\)-action] on the maximal \(\infty\)-groupoid inside \(\Sigma^{n-1}\mathsf{Vect}^{fd}\) (resp. \(\Sigma^{n-1}\mathsf{sVect}^{fd}\)) is canonically trivializable._
By the cobordism hypothesis combined with this conjecture, every \(\mathcal{B}\) (resp. \(\mathcal{F}\)) has no genuine framing dependence but just an orientation (resp. spin) dependence, so every \(\mathcal{B}\) or \(\mathcal{F}\) can be canonically extended to
\[\mathcal{B}^{SO} :\mathsf{Bord}_{n}^{SO}(Y)\to\Sigma^{n-1}\mathsf{Vect}^{fd}\,, \tag{4.14a}\] \[\mathcal{F}^{Spin} :\mathsf{Bord}_{n}^{Spin}(Y)\to\Sigma^{n-1}\mathsf{sVect}^{fd}\,. \tag{4.14b}\]
Therefore, we will talk about the partition functions of \(\mathcal{B}\) (resp. \(\mathcal{F}\)) on any closed oriented (resp. spin) \(n\)-manifold \(M\), which really mean those of \(\mathcal{B}^{SO}\) (resp. \(\mathcal{F}^{Spin}\)).
When \(Y\) is path-connected, the evaluation of \(\mathcal{B}\) (resp. \(\mathcal{F}\)) on a point gives a linear \(n\)-group action on objects in \(\Sigma^{n-1}\mathsf{Vect}^{fd}\) (resp. \(\Sigma^{n-1}\mathsf{sVect}^{fd}\)) imposed by the \(n\)-group \(\Omega\pi_{\leq n}Y\). Namely, an \(n\)-dim \(Y\)-enriched TQFT is the same as an \(n\)-dim \(\Omega\pi_{\leq n}Y\)-symmetric TQFT. When \(Y\) has multiple path components, we have a \(\pi_{0}(Y)\) worth of universes of higher-group symmetric TQFTs. The cobordism hypothesis [64] asserts that such a characterization by evaluation on a point is always faithful and complete. In general, \(\mathsf{Bord}_{n}^{fr}(Y)\) is the free fully-dualizable symmetric monoidal \((\infty,n)\)-category generated by \(\infty\)-groupoid \(Y\).
**Definition 4.5**: _An \(n\)-representation (resp. super \(n\)-representation) of a space \(Y\) is a functor between \((\infty,n)\)-categories,_
\[\underline{\mathcal{B}}:Y\to\Sigma^{n-1}\mathsf{Vect}^{fd}\,,\qquad\left(\text{ resp. }\ \underline{\mathcal{F}}:Y\to\Sigma^{n-1}\mathsf{sVect}^{fd}\right). \tag{4.15}\]
Note that since \(\Sigma^{n-1}\mathsf{Vect}^{fd}\) (resp. \(\Sigma^{n-1}\mathsf{sVect}^{fd}\)) is essentially an \(n\)-category, \(\underline{\mathcal{B}}\) (resp. \(\underline{\mathcal{F}}\)) actually factors through the homotopy \(n\)-type of \(Y\). Namely, \(\underline{\mathcal{B}}\) (resp. \(\underline{\mathcal{F}}\)) is in essence a functor between \(n\)-categories, from \(\pi_{\leq n}Y\) to \(\Sigma^{n-1}\mathsf{Vect}^{fd}\) (resp. \(\Sigma^{n-1}\mathsf{sVect}^{fd}\)). The cobordism hypothesis specialized for our purpose asserts the following equivalence (see [64, Theorem 2.4.18]):
**Proposition 4.6**: _Via point evaluation, an \(n\)-dim bosonic (resp. fermionic) \(Y\)-enriched fully-extended TQFT (Definition 4.3) is equivalent to an \(n\)-representation (resp. super \(n\)-representation) of \(Y\) (Definition 4.5)._
Higher-representations at lower dimensions in the context of generalized symmetries are extensively discussed by, e.g., Refs. [57, 59].
As a preliminary example of topological functionals from these TQFTs, let us consider 1D topological functionals to path-connected 1-finite \(Y\). Namely, \(\pi_{1}Y\) is finite. We start with the bosonic case. A 1-representation \(\underline{\mathcal{B}}\) of \(Y\) is just a representation \(R\) of \(\pi_{1}Y\). An \(S^{1}\) bosonic topological functional \(\mathcal{B}(g)\) for \(g\in[S^{1},Y]\) is a \(\mathbb{C}\)-valued function on
\[[S^{1},Y]\,\simeq\,[S^{1},B\pi_{1}Y]\,\simeq\,\{\text{conjugacy classes of }\pi_{1}Y\}\,. \tag{4.16}\]
Note that \([S^{1},Y]\not\simeq\pi_{1}Y\) as long as \(\pi_{1}Y\) is not commutative10. Therefore, an \(S^{1}\) topological functional is equivalent to a class function of \(\pi_{1}Y\), such as a character. Note that \(g\in[S^{1},Y]\) prescribes a \(\pi_{1}Y\) gauge field on \(S^{1}\) whose holonomy conjugacy class is \(g\). Therefore, the partition function is indeed given by a character, i.e.,
Footnote 10: This discrepancy between \([S^{1},Y]\) and \(\pi_{1}Y\) comes from the difference between free homotopy and based homotopy. As we shall see in Sec. 5.2.1, this difference accounts for a large class of non-invertible fusion rules in solitonic symmetry.
\[\mathcal{B}(g)=\operatorname{tr}_{R}(g)\,. \tag{4.17}\]
One can generalize this relation to \(T^{n}\) bosonic topological functionals and define the notion of \(n\)-characters11. The traces of representations amount to all characters, and the space of characters is \(\mathbb{N}\)-linearly spanned by simple characters. The characters from 1-dim representations factor through \(H_{1}(Y;\mathbb{Z})\), the Abelianization of \(\pi_{1}Y\), and thus can be constructed via the universal invertible form (2.7). The characters from higher-dim representations, especially the irreducible ones, go beyond Eq. (2.7).
Footnote 11: Ideas of \(n\)-characters originate from Ref. [78]. 2-characters are defined in Ref. [79] and are developed in, e.g., Refs. [80, 81]. Unlike ordinary characters, the input of an \(n\)-character is a set of \(n\) mutually commutative elements in \(G\), and the \(n\)-character is invariant under simultaneous conjugations. One can immediately recognize that this input describes the isomorphism classes of \(G\)-bundles on \(T^{n}\), i.e., \([T^{n},BG]\). It is then natural to connect these \(n\)-characters with \(T^{n}\) bosonic topological functionals to \(BG\).
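As a minimal illustration of a character beyond the invertible form (2.7), suppose \(\pi_{1}Y\simeq S_{3}\) (chosen purely for illustration) and write \(\mathcal{B}_{R}\) for the functional (4.17) built from a representation \(R\). The 2-dim irreducible representation \(E\) then gives
\[\mathcal{B}_{E}(e)=2\,,\qquad\mathcal{B}_{E}(\text{transposition})=0\,,\qquad\mathcal{B}_{E}(\text{3-cycle})=-1\,,\]
on the three conjugacy classes, and the fusion rule \(\mathcal{B}_{E}\times\mathcal{B}_{E}=\mathcal{B}_{1}+\mathcal{B}_{\mathrm{sgn}}+\mathcal{B}_{E}\), inherited from \(E\otimes E\simeq 1\oplus\mathrm{sgn}\oplus E\), is manifestly non-invertible.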
For the fermionic case, a super 1-representation \(\underline{\mathcal{F}}\) of \(Y\) is just a super representation of \(\pi_{1}(Y)\), which is further a pair of ordinary representations \((B,F)\) on the bosonic and the
fermionic sectors, respectively. There are two spin structures on \(S^{1}\), \(0\) and \(1\), corresponding to the spin bordism group \(\Omega^{Spin}_{1}=\mathbb{Z}_{2}\). Then a similar analysis to the bosonic case shows that the \(S^{1}\) fermionic topological functional for \((g,\varepsilon)\in[S^{1},Y]\times\Omega^{Spin}_{1}\) is given by [recall that \(g\) represents a conjugacy class in \(\pi_{1}Y\)]
\[\begin{split}\mathcal{F}(g,0)&=\operatorname{tr}_{ B}(g)+\operatorname{tr}_{F}(g)\,,\\ \mathcal{F}(g,1)&=\operatorname{tr}_{B}(g)- \operatorname{tr}_{F}(g)\,.\end{split} \tag{4.18}\]
We thus learned that \(S^{1}\) fermionic topological functionals are virtual characters of \(\pi_{1}Y\). They are \(\mathbb{Z}\)-linearly spanned by simple characters rather than \(\mathbb{N}\)-linearly. The virtual characters from \(1\)-dim representations factor through the super integral homology \(\mathbb{SZ}_{1}(Y)\) (see Sec. 4.3.2) and thus can also be constructed via the universal invertible form (2.7). Tremendous examples of higher-dim topological functionals will be discussed in Sec. 5.
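As the simplest instance of Eq. (4.18), take \(B=0\) and \(F\) the trivial 1-dim representation; then
\[\mathcal{F}(g,0)=1\,,\qquad\mathcal{F}(g,1)=-1\,,\]
for every \(g\), so this topological functional is purely gravitational: it merely detects the spin structure of the supporting circle and corresponds to the generator of \(\operatorname{Hom}(\Omega^{Spin}_{1},U(1))\simeq\mathbb{Z}_{2}\).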
### Cohomology with TQFT coefficients
As we mentioned in Sec. 2.2.1, a crucial feature of solitonic symmetry is its independence of system details such as the action and the ambient spacetime. It is just the algebra of topological functionals and is determined by the target space \(Y\) only. Formally, it gives homotopy-invariant contravariant functors on the topological space \(Y\). Thus the algebraic structure of solitonic symmetry can be interpreted as a cohomology theory on \(Y\), in a vastly generalized sense.
#### 4.3.1 Non-invertible: (Super) solitonic cohomology
Given two TQFTs \(\mathcal{Z}_{1}\) and \(\mathcal{Z}_{2}\), \(\mathcal{Z}_{1}(-)\otimes\mathcal{Z}_{2}(-)\) also defines a TQFT denoted by \(\mathcal{Z}_{1}\otimes\mathcal{Z}_{2}\). We can transfer the fusion of two topological functionals to the fusion of the two TQFTs beneath. To see the tensor product between TQFTs, instead of looking at each individual TQFT, we should consider the functor \((\infty,n)\)-category containing all \(n\)-representations12
Footnote 12: One may first come up with
\[\operatorname{\sf Fun}^{\otimes}\Big{(}\operatorname{\sf Bord}^{fr}_{n}(Y)\,, \,\Sigma^{n-1}\mathsf{Vect}^{fd}\Big{)}\quad\text{ and }\quad\operatorname{\sf Fun}^{\otimes}\Big{(} \operatorname{\sf Bord}^{fr}_{n}(Y)\,,\,\Sigma^{n-1}\mathsf{sVect}^{fd}\Big{)}\,. \tag{4.19}\]
According to the cobordism hypothesis, they are the maximal \(\infty\)-groupoids (in essence \(n\)-groupoids) inside \(\operatorname{\sf Rep}^{n}(Y)\) and \(\mathsf{sRep}^{n}(Y)\), respectively. They have too few morphisms to support the rich structures we shall discuss shortly.
**Definition 4.7**: _The \(n\)-th solitonic cohomology [resp. super solitonic cohomology] of an \(n\)-finite space \(Y\) is a symmetric multi-fusion \(n\)-category,_
\[\operatorname{\sf Rep}^{n}(Y)\,\equiv\,\operatorname{\sf Fun}\big{(}\,Y,\, \Sigma^{n-1}\mathsf{Vect}^{fd}\,\big{)}\,,\quad\Big{[}\text{resp. }\mathsf{sRep}^{n}(Y)\,\equiv\,\operatorname{\sf Fun}\big{(}\,Y,\,\Sigma^{n-1} \mathsf{sVect}^{fd}\,\big{)}\Big{]}\,. \tag{4.20}\]
They are symmetric fusion \(n\)-categories when \(Y\) is further path-connected. In this case, they can be understood as higher-representation higher-categories of the \(\infty\)-group \(\Omega Y\). In particular, the \(n\)-category nature of the codomains suggests the following.
* When \(Y\) is path-connected, \(\operatorname{\sf Rep}^{1}(Y)\) [resp. \(\mathsf{sRep}^{1}(Y)\)] is equivalent to the representation (resp. super-representation) category of the group \(\pi_{1}Y\).
* When \(Y\) is path-connected, \(\mathsf{Rep}^{n}(Y)\) [resp. \(\mathsf{sRep}^{n}(Y)\)] is equivalent to the representation (resp. super-representation) \(n\)-category of the \(n\)-group \(\Omega\pi_{\leq n}Y\).
The 1-dim case reproduces the discussion on 1-dim topological functionals at the end of Sec. 4.1. The higher-group representation higher-category has caught vast attention in recent literature on generalized symmetry, see, e.g., Refs. [52; 53; 54; 55; 57; 59]. When \(Y\) is not path-connected, the solitonic cohomologies are just the Cartesian product of the solitonic cohomologies of each path component.
The fusion monoid of bosonic (resp. fermionic) topological functionals can be equated with the fusion monoid of the equivalence classes of objects in \(\mathsf{Rep}^{n}(Y)\) [resp. \(\mathsf{sRep}^{n}(Y)\)], which gives what we have been pursuing. However, we should not quickly throw away the far richer structures contained in \(\mathsf{Rep}^{n}(Y)\) and \(\mathsf{sRep}^{n}(Y)\) than their mere fusion monoids. To see their significance, let us analyze the physical meaning of morphisms in \(\mathsf{Rep}^{n}(Y)\) and \(\mathsf{sRep}^{n}(Y)\). Here 1-morphisms are natural transformations between \(n\)-representations. We can readily note that the natural endo-transformations of the trivial \(n\)-representation are simply functors from \(Y\) to \(\Sigma^{n-2}\mathsf{Vect}^{fd}\) and \(\Sigma^{n-2}\mathsf{sVect}^{fd}\), respectively. Namely, we arrive at the following relations between different \(n\).
**Proposition 4.8**: \(\Omega\mathsf{Rep}^{n}(Y)\simeq\mathsf{Rep}^{n-1}(Y)\) _and \(\Omega\mathsf{sRep}^{n}(Y)\simeq\mathsf{sRep}^{n-1}(Y)\)._
This result hints at the physical meaning of other morphisms in \(\mathsf{Rep}^{n}(Y)\) and \(\mathsf{sRep}^{n}(Y)\):
* 1-morphisms are \((n\!-\!1)\)-dim topological interfaces between \(n\)-dim TQFTs.
* \((p\!+\!1)\)-morphisms are \((n\!-\!p\!-\!1)\)-dim topological interfaces between \((n\!-\!p)\)-dim topological interfaces.
Topological functionals, which are auxiliary-TQFT partition functions defined on proper closed manifolds, cannot capture these richer data, which concern auxiliary TQFTs themselves and inhabit networks made by non-closed manifolds via connections and junctions. In recent literature on generalized symmetry, people have recognized the significance of such networks of non-closed topological operators as a formulation of background gauge fields. It starts to become customary to recognize the algebraic structure of such networks as the total generalized symmetry of a theory, given that it contains literally all information about a symmetry, not just the fusion rule. We have no reason not to follow this custom.
**Proposition 4.9**: _Consider a \(d\)-dim bosonic (resp. fermionic) theory defined by a path integral with \(d\)-finite target space \(Y\)._
* _The total solitonic symmetry is described by_ \(\mathsf{Rep}^{d}(Y)\) _[resp._ \(\mathsf{sRep}^{d}(Y)\)_], a symmetric fusion_ \(d\)_-category._
* _For_ \(-1\leq p\leq d\!-\!1\)_, the_ \((\geq\!p)\)_-form solitonic symmetry is described by_ \(\mathsf{Rep}^{d-p-1}(Y)\) _[resp._ \(\mathsf{sRep}^{d-p-1}(Y)\)_], a symmetric fusion_ \((d\!-\!p\!-\!1)\)_-category._
* _For_ \(-1\leq p\leq d\!-\!1\)_, the_ \(p\)_-form solitonic symmetry is described by the fusion monoid of_ \(\mathsf{Rep}^{d-p-1}(Y)\) _[resp._ \(\mathsf{sRep}^{d-p-1}(Y)\)_], a commutative rig._
Note that we also include \((-1)\)-form symmetry in the total solitonic symmetry. The result here echoes recent progress on categorical generalized symmetry.
We may understand \(\mathsf{Rep}^{\bullet}(-)\) and \(\mathsf{sRep}^{\bullet}(-)\) as non-invertible generalizations of cohomology theories. We may view the collections,
\[\left\{\Sigma^{n-1}\mathsf{Vect}^{fd}\right\}_{n\in\mathbb{N}}\ \text{ and }\ \left\{\Sigma^{n-1}\mathsf{sVect}^{fd}\right\}_{n\in\mathbb{N}}, \tag{4.21}\]
as non-invertible generalizations of spectra. We shall shortly see in Sec. 4.3.2 which orthodox spectra they generalize. The coefficients of these non-invertible cohomologies are non-enriched fully-extended TQFTs:
\[\mathsf{Rep}^{\bullet} \equiv\,\mathsf{Rep}^{\bullet}\big{(}\{*\}\big{)}\,\simeq\, \Sigma^{\bullet-1}\mathsf{Vect}^{fd}\,, \tag{4.22}\] \[\mathsf{sRep}^{\bullet} \equiv\,\mathsf{sRep}^{\bullet}\big{(}\{*\}\big{)}\,\simeq\, \Sigma^{\bullet-1}\mathsf{sVect}^{fd}\,.\]
\(\mathsf{Rep}^{\bullet}(-)\) and \(\mathsf{sRep}^{\bullet}(-)\) are indeed homotopy-invariant contravariant functors from topological spaces because a map \(Y\to Z\) induces pullbacks \(\mathsf{Rep}^{\bullet}(Z)\to\mathsf{Rep}^{\bullet}(Y)\) and \(\mathsf{sRep}^{\bullet}(Z)\to\mathsf{sRep}^{\bullet}(Y)\) simply by their definitions.
When \(Y\) is not \(n\)-finite, Def. 4.7 contains too much superfluous data to capture the authentic algebraic structure of solitonic symmetry. For example, recall the discussion about 1-dim topological functionals at the end of Sec. 4.2.3. Recall that \(\mathsf{Rep}^{1}(Y)\) and \(\mathsf{sRep}^{1}(Y)\) are simply representation categories of \(\pi_{1}Y\). If we allowed \(\pi_{1}Y\) to be infinite like \(\mathbb{Z}\) or \(SL(2,\mathbb{Z})\), non-semisimple representations would appear since the Maschke theorem does not apply. They are not unitarizable and produce superfluous duplicated topological functionals. There are also semisimple non-unitarizable representations whose \(S^{1}\) partition functions are not bounded. It is natural to expect that only unitarizable representations capture the solitonic symmetry. Nevertheless, for convenience, we still formally use \(\mathsf{Rep}^{\bullet}(Y)\) or \(\mathsf{sRep}^{\bullet}(Y)\) to indicate the algebraic structure of solitonic symmetry even when \(Y\) is not \(\bullet\)-finite.
We do not attempt to generalize the above unitarizability condition to higher-dim general cases in this paper. Instead, as we discussed at the end of Sec. 3.3, we shall just focus on \(n\)-finite homotopy quotients \(Z\) of \(Y\) and discuss the solitonic subsymmetries that can be faithfully described by \(\mathsf{Rep}^{\bullet}(Z)\) or \(\mathsf{sRep}^{\bullet}(Z)\). The collection of them for all different choices of \(Z\), together with natural functors between them, form a diagram in the \((n+1)\)-category of symmetric multi-fusion \(n\)-categories. We expect that the colimit of this diagram exists and believe that this colimit gives an almost complete approximation of the continuous solitonic symmetry for \(Y\). Furthermore, we expect that the continuous solitonic symmetry may be constructed as a generalized Cauchy completion of this colimit, provided we can cook up a well-behaved notion of topologized/uniformized monoidal \(n\)-categories13.
Footnote 13: The prototype of our anticipation is just the Cauchy completion of \(\mathbb{Q}/\mathbb{Z}\) to \(U(1)\). It is possible to prescribe a topology on \(\mathbb{Q}/\mathbb{Z}\) through the colimit construction \(\mathbb{Q}/\mathbb{Z}\simeq\bigcup_{n\in\mathbb{N}}\mathbb{Z}_{n}\). Topological groups are naturally uniformizable; thus, we can take the Cauchy completion of \(\mathbb{Q}/\mathbb{Z}\) to obtain \(U(1)\).
#### 4.3.2 Invertible: (Super) unitary cohomology
As the first application, let us find out the invertible subsymmetry of the generically non-invertible solitonic symmetry. Invertible topological functionals are the partition functions
of invertible fully-extended TQFTs. A fully-extended TQFT \(\mathcal{Z}:\mathsf{Bord}_{n}^{\Gamma}\to\mathsf{D}_{n}\) is said to be invertible if we can find another fully-extended TQFT \(\mathcal{Z}^{-1}:\mathsf{Bord}_{n}^{\Gamma}\to\mathsf{D}_{n}\) such that \(\mathcal{Z}\otimes\mathcal{Z}^{-1}\simeq 1\), the trivial TQFT. Therefore, \(\mathcal{Z}\) must factor through the following commutative diagram (see the discussion around Sec. 6.2 of Ref. [67]):
\[\begin{CD}\mathsf{Bord}_{n}^{\Gamma}@>{\mathcal{Z}}>{}>\mathsf{D}_{n}\\ @VVV @AAA\\ \big{\|}\mathsf{Bord}_{n}^{\Gamma}\big{\|}@>{\mathcal{Z}^{\times}}>{}>\mathsf{D}_{n}^{\times}\end{CD} \tag{4.23}\]
Here, \(\mathsf{D}_{n}^{\times}\) is the maximal Picard \(\infty\)-groupoid inside \(\mathsf{D}_{n}\), where a Picard \(\infty\)-groupoid means an invertible symmetric monoidal \(\infty\)-groupoid (see Appendix A.4 of Ref. [67]). And \(\|\mathsf{Bord}_{n}^{\Gamma}\|\) is the \(\infty\)-groupoid completion of \(\mathsf{Bord}_{n}^{\Gamma}\), which turns out to have invertible objects only and becomes a Picard \(\infty\)-groupoid. Thus \(\mathcal{Z}^{\times}\) is a symmetric monoidal functor between two Picard \(\infty\)-groupoids.
According to Philosophy 4.2, a Picard \(\infty\)-groupoid is an infinite loop space, i.e., the \(0\)-space of a spectrum. Also, \(\mathcal{Z}^{\times}\) is an infinite loop map and the fusion between \(\mathcal{Z}_{1}^{\times}\) and \(\mathcal{Z}_{2}^{\times}\) is induced by loop concatenation. Therefore, classifying invertible TQFTs reduces to classifying infinite loop maps between two infinite loop spaces, which further reduces to classifying spectrum maps between two spectra. Therefore, the treatment of invertible TQFTs belongs to the realm of stable homotopy theory.
The Galatius-Madsen-Tillmann-Weiss theorem [82], a theorem derived out of the cobordism hypothesis [64], asserts that \(\|\mathsf{Bord}_{n}^{\Gamma}\|\) is weakly homotopy equivalent to the \(0\)-space of the Madsen-Tillmann spectrum \(\Sigma^{n}\mathbb{M}\mathbb{T}\Gamma\) (see also [67, Theorem 6.67]). In the case of our interest, \(\|\mathsf{Bord}_{n}^{fr}(Y)\|\) is independent of \(n\) and is the free infinite loop space generated by \(Y_{+}\equiv Y\sqcup\{*\}\), i.e. \(Y\) with an extra base point. Namely,
\[\|\mathsf{Bord}_{n}^{fr}(Y)\|\ \simeq\ \underset{q\to\infty}{\mathrm{colim}}\ \Omega^{q}\Sigma^{q}Y_{+}\,. \tag{4.24}\]
It is the \(0\)-space of the suspension spectrum of \(Y_{+}\). The codomain is more complicated due to the complexity of the higher-categorical Karoubi completion. We thus leave the full analysis of the maximal invertible subsymmetry to future works. Here we just focus on a sufficiently interesting part we can obtain immediately by noticing
\[\Omega^{n-1}\left(\Sigma^{n-1}\mathsf{D}\right)^{\times}\ \simeq\ \left(\Omega^{n-1}\Sigma^{n-1}\mathsf{D}\right)^{\times}\ \simeq\ \mathsf{D}^{\times}\,. \tag{4.25}\]
In general, the spectrum \(\{\big{(}\Sigma^{n-1}\mathsf{D}\big{)}^{\times}\}_{n\in\mathbb{N}}\) may have lousy connectivity, i.e., the Karoubi-completion procedure may add new invertible objects and make many \(\left(\Sigma^{n-1}\mathsf{D}\right)^{\times}\) not path-connected. However, here we shall neglect such contributions from the Karoubi-completion procedure. In other words, we shall consider spectrum
\[\{B^{n-1}\mathsf{D}^{\times}\}_{n\in\mathbb{N}}\,, \tag{4.26}\]
which is the \((-1)\)-connective cover of spectrum \(\{\big{(}\Sigma^{n-1}\mathsf{D}\big{)}^{\times}\}_{n\in\mathbb{N}}\). We thus obtain at least part of the maximal invertible subsymmetry14.
Footnote 14: We cannot help speculating a relationship between \(\{\big{(}\Sigma^{n-1}\mathsf{Vect}^{fd}\big{)}^{\times}\}_{n\in\mathbb{N}}\) [resp. \(\{\big{(}\Sigma^{n-1}\mathsf{sVect}^{fd}\big{)}^{\times}\}_{n\in\mathbb{N}}\)] and the \(\mathbb{C}^{\times}\)-dual of \(\mathbb{M}SO\) (resp. \(\mathbb{M}Spin\)). We leave the verification of our speculations to future works.
Let us start with the bosonic case. Only the 1-dim vector space is invertible in \(\mathsf{Vect}^{fd}\), and its invertible endomorphisms constitute the group \(\mathbb{C}^{\times}\). Therefore, we have
\[B^{n-1}\left(\mathsf{Vect}^{fd}\right)^{\times}\,\simeq\,B^{n}\mathbb{C}^{\times}_{\delta}\;. \tag{4.27}\]
When ambiguity may arise, we use the subscript \(\delta\) to indicate the discrete topology. These infinite loop spaces assemble to form the Eilenberg-Maclane spectrum \(\mathbb{H}\mathbb{C}^{\times}\). We then arrive at the following proposition:
**Proposition 4.10** (bosonic): _The fusion monoid of solitonic cohomology \(\mathsf{Rep}^{\bullet}(Y)\) contains a group \((\mathbb{H}\mathbb{C}^{\times})^{\bullet}(Y)\simeq H^{\bullet}\big{(}Y; \mathbb{C}^{\times}\big{)}\)._
We now turn to the more interesting fermionic case. \(\mathsf{sVect}^{fd}\) has two classes of invertible objects, the bosonic and the fermionic 1-dim vector spaces. Their fusion monoid is \(\mathbb{Z}_{2}\). The invertible endomorphisms of each class constitute \(\mathbb{C}^{\times}\). The symmetric monoidal structure on \(\mathsf{sVect}^{fd}\), especially the Koszul sign rule (102), positions the infinite loop space \(\big{(}\mathsf{sVect}^{fd}\big{)}^{\times}\) into a Puppe sequence,
\[\cdots\longrightarrow B^{n}\mathbb{C}^{\times}_{\delta}\longrightarrow B^{n-1}\left(\mathsf{sVect}^{fd}\right)^{\times}\longrightarrow B^{n-1}\mathbb{Z}_{2}\stackrel{{\rho\circ\mathrm{Sq}^{2}}}{{\longrightarrow}}B^{n+1}\mathbb{C}^{\times}_{\delta}\longrightarrow\cdots\,, \tag{4.28}\]
classified by the stable cohomology operation \(\rho\circ\mathrm{Sq}^{2}\).15 Here \(\mathrm{Sq}^{2}\) is the second Steenrod square, and \(\rho\) is the change-of-coefficient for the canonical inclusion \(\mathbb{Z}_{2}\to\mathbb{C}^{\times}\). These infinite loop spaces assemble to form a spectrum and let us denote it as \(\mathbb{SC}\). We have thus proved the following proposition:
Footnote 15: The direct determination of \(\rho\circ\mathrm{Sq}^{2}\) requires to evaluate the \(E_{\infty}\)-structure of \(\big{(}\mathsf{sVect}^{fd}\big{)}^{\times}\). The Koszul sign rule should prescribe an \(E_{\infty}\)-structure that leads to a nontrivial stable cohomology operation. We note however that \(\rho\circ\mathrm{Sq}^{2}\) is the only existing nontrivial stable cohomology operation from \(\mathbb{H}\mathbb{Z}_{2}\) to \(\Sigma^{2}\mathbb{H}\mathbb{C}^{\times}\).
**Proposition 4.11** (fermionic): _The fusion monoid of super solitonic cohomology \(\mathsf{sRep}^{\bullet}(Y)\) contains a group \(\mathbb{SC}^{\bullet}(Y)\), where \(\mathbb{SC}\) is defined via Eq. (4.28)._
For a \(d\)-finite target space \(Y\), these theorems tell us that there are invertible \((d{-}n{-}1)\)-form solitonic symmetry given by \(H^{n}(Y;\mathbb{C}^{\times})\) in a \(d\)-dim bosonic theory and \(\mathbb{SC}^{n}(Y)\) in a \(d\)-dim fermionic theory. We can further ask what higher-group they make up, which requires us to consider the entire map spectrum \(\mathrm{Map}(\mathbb{Y}_{+},\mathbb{H}\mathbb{C}^{\times})\) or \(\mathrm{Map}(\mathbb{Y}_{+},\mathbb{SC})\), where \(\mathbb{Y}_{+}\) denotes the suspension spectrum of \(Y_{+}\), rather than merely its \(\pi_{0}\). We again leave such analysis to future works.
When we focus on \(\bullet\)-finite spaces only, many other cohomology theories can also provide the same results as above. For example, for \(\bullet\)-finite space \(Y\) we have
\[H^{\bullet}(Y;\mathbb{C}^{\times}) \,\simeq\,H^{\bullet}\big{(}Y;U(1)\big{)}\,, \tag{4.29a}\] \[\mathbb{SC}^{\bullet}(Y) \,\simeq\,\mathbb{SU}^{\bullet}(Y)\,, \tag{4.29b}\]
but they are different for general \(Y\). Here \(\mathbb{SU}\) is defined as the spectrum obtained by substituting \(\mathbb{C}^{\times}\) with \(U(1)\) in Eq. (4.28) and changing \(\rho\) into the change-of-coefficient for \(\mathbb{Z}_{2}\to U(1)\). We shall call \(\mathbb{H}U(1)\) the _unitary_ spectrum and call \(\mathbb{SU}\) the _super unitary_ spectrum. We conjecture that these two spectra correctly capture invertible solitonic symmetry even when we remove the finiteness condition on \(Y\).
**Conjecture 4.12**: _Consider a \(d\)-dim bosonic (resp. fermionic) theory defined by a path integral with target space \(Y\)._
* _For_ \(-1\leq p\leq d\!-\!1\)_, the_ \(p\)_-form solitonic symmetry contains the unitary cohomology_ _[resp. super unitary cohomology] group_ \(H^{d-p-1}\big{(}Y;U(1)\big{)}\) _[resp._ \(\mathbb{SU}^{d-p-1}(Y)\)_]._
_(Note: This is a theorem mentioned above when \(Y\) is \(d\)-finite.)_
This conjecture is motivated by our discrete approximation discussed at the end of Sec. 3.3 and Sec. 4.3.1. They are also consistent with physicists' long experience with invertible \(\theta\)-angles and the universal invertible construction (2.7).
Before concluding this section, let us unpack these two cohomology theories. \(\mathbb{H}U(1)\) is the U(1)-dual of \(\mathbb{H}\mathbb{Z}\) and its universal coefficient theorem takes the naive form,
\[H^{\bullet}\big{(}-;U(1)\big{)}\,\simeq\,\mathrm{Hom}\big{(}H_{\bullet}(-, \mathbb{Z}),U(1)\big{)}\,. \tag{4.30}\]
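As a quick worked example of Eq. (4.30) (added here for concreteness), take \(Y\simeq\mathbb{R}P^{2}\), which reappears in Sec. 5.2.1. From \(H_{0}(\mathbb{R}P^{2};\mathbb{Z})\simeq\mathbb{Z}\), \(H_{1}(\mathbb{R}P^{2};\mathbb{Z})\simeq\mathbb{Z}_{2}\), and \(H_{2}(\mathbb{R}P^{2};\mathbb{Z})\simeq 0\), we obtain

\[H^{0}\big{(}\mathbb{R}P^{2};U(1)\big{)}\,\simeq\,U(1)\,,\qquad H^{1}\big{(}\mathbb{R}P^{2};U(1)\big{)}\,\simeq\,\mathbb{Z}_{2}\,,\qquad H^{2}\big{(}\mathbb{R}P^{2};U(1)\big{)}\,\simeq\,0\,,\]

so a bosonic theory with this target has no nontrivial invertible 2-dim topological functional, in accord with the complete rectification found in Sec. 5.2.1.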
As for the fermionic case, the modified version of the Puppe sequence (4.28) leads to a long exact sequence16,
Footnote 16: This is also the Atiyah–Hirzebruch spectral sequence for \(\mathbb{SU}^{\bullet}(-)\). Namely, the differential \(d_{2}:E_{2}^{\bullet,1}\to E_{2}^{\bullet+2,0}\) is given by \(\rho\circ\mathrm{Sq}^{2}\).
\[\cdots\longrightarrow H^{\bullet}\big{(}-;U(1)\big{)}\longrightarrow\mathbb{ SU}^{\bullet}(-)\longrightarrow H^{\bullet-1}\big{(}-;\mathbb{Z}_{2}\big{)} \stackrel{{\rho\circ\mathrm{Sq}^{2}}}{{\longrightarrow}}H^{ \bullet+1}\big{(}-;U(1)\big{)}\longrightarrow\cdots\,. \tag{4.31}\]
\(\mathbb{SU}\) is the U(1)-dual of \(\mathbb{S}\mathbb{Z}\), which we call the _super integral_ spectrum. Namely,
\[\mathbb{SU}^{\bullet}(-)\,\simeq\,\mathrm{Hom}\big{(}\mathbb{S}\mathbb{Z}_{ \bullet}(-),U(1)\big{)}\,. \tag{4.32}\]
\(\mathbb{S}\mathbb{Z}\) can be defined as the first Postnikov truncation of the sphere spectrum \(\mathbb{S}\) or the Thom spectrum \(\mathbb{M}Spin\). Namely, when \(Y\) is \((n\!-\!1)\)-connected, we have
\[\widetilde{\mathbb{SZ}}_{\bullet}(Y)\,\simeq\,\pi_{\bullet}^{s}(Y)\,\simeq\, \widetilde{\Omega}_{\bullet}^{Spin}(Y)\,,\qquad\text{for $\bullet\leq n\!+\!1$}\,. \tag{4.33}\]
Nonzero homotopy groups of the spectra mentioned here are summarized in a table in the source manuscript, which could not be reproduced here.
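As a minimal worked check of the long exact sequence (4.31) (added here for illustration), evaluate it on a point. For \(\bullet=0\) it gives \(\mathbb{SU}^{0}(\mathrm{pt})\simeq H^{0}(\mathrm{pt};U(1))\simeq U(1)\), while for \(\bullet=1\),

\[0=H^{1}\big{(}\mathrm{pt};U(1)\big{)}\,\longrightarrow\,\mathbb{SU}^{1}(\mathrm{pt})\,\longrightarrow\,H^{0}\big{(}\mathrm{pt};\mathbb{Z}_{2}\big{)}\simeq\mathbb{Z}_{2}\,\longrightarrow\,H^{2}\big{(}\mathrm{pt};U(1)\big{)}=0\,,\]

so \(\mathbb{SU}^{1}(\mathrm{pt})\simeq\mathbb{Z}_{2}\). The nontrivial class is the invertible 1-dim fermionic topological functional that assigns \(-1\) to the circle with the nonbounding (periodic) spin structure, matching Eq. (4.18) and the \(\mathbb{Z}_{2}\) factor in Eq. (5.11).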
## 5 Non-invertible structure beyond homotopy groups
Following Ansatz 3.2 and Conjecture 3.3, we have studied the algebraic structure of solitonic symmetry in Sec. 4. These results are pretty formal, and we would like to make some down-to-earth illustrations. For this purpose, we adopt the following angle of looking at solitonic symmetry: What role does the conventional wisdom \(\operatorname{Hom}(\pi_{\bullet}-,U(1))\) (see Sec. 2.2.3) play in the general solitonic symmetry? In this section, we focus on path-connected target space \(Y\) since different path components correspond just to different universes17.
Footnote 17: When we are interested in the relationship between different universes, it is also helpful to render multiple path components. In such a case, we can consider a point-like topological functional that specifies one of the path-connected components, i.e., a local operator \(1_{i}(x)\) that takes \(1\) if the field at \(x\) is valued in \(i\in\pi_{0}Y\) and takes \(0\) otherwise. Due to the continuity of the field, this operator is indeed topological. It generates the \((d-1)\)-form solitonic symmetry,
\[\mathsf{Rep}^{0}(Y)\,\simeq\,\mathsf{sRep}^{0}(Y)\,\simeq\,\left\{\pi_{0}Y \mapsto\mathbb{C}\right\}, \tag{5.1}\]
which is non-invertible. The state space decomposes into different sectors, i.e., universes, even at finite volumes [85, 86, 87, 88, 89, 90, 6, 19]. Interfaces between different universes, i.e. \((d-1)\)-dim solitonic defects, are the charged objects of the \((d-1)\)-form solitonic symmetry.
The conventional wisdom classifies topological charges by homotopy groups \(\pi_{\bullet}-\). From the perspective of topological functionals, conventional wisdom concerns topological functionals only on spheres, and solitonic symmetry has the structure \(\operatorname{Hom}(\pi_{\bullet}-,U(1))\). Due to Alexander's trick, a sphere necessarily has
\[\pi_{0}\mathrm{Diff}(S^{\bullet},\xi)\simeq 0 \tag{5.2}\]
for an orientation \(\xi\). Thus spherical topological functionals do not suffer from the identity problem discussed in Sec. 3.1. Therefore, \(\operatorname{Hom}(\pi_{\bullet}-,U(1))\) should not be regarded as just wrong. Instead, it is the precursor of the non-invertible structure.
### Rectification vs. condensation
Let us speculate on the structure of \(\mathsf{Rep}^{n}(Y)\) and \(\mathsf{sRep}^{n}(Y)\) inductively [We use \(\mathsf{Rep}^{n}(Y)\) to illustrate the idea]. We first suppose that we have already understood \(\mathsf{Rep}^{n-1}(Y)\simeq\mathsf{Rep}^{n-1}(\pi_{\leq n-1}Y)\), and we now would like to understand \(\mathsf{Rep}^{n}(Y)\simeq\mathsf{Rep}^{n}(\pi_{\leq n}Y)\). Recall that \(\pi_{\leq\bullet}Y\) denotes \(Y\)'s homotopy \(\bullet\)-type, i.e., the \(\bullet\)-th Postnikov truncation of \(Y\). The \(n\)-th floor of \(Y\)'s Postnikov tower is a fibration
\[B^{n}\pi_{n}Y\to\pi_{\leq n}Y\to\pi_{\leq n-1}Y\,. \tag{5.3}\]
This fibration allows us to complete our mission in two steps:
\[\mathsf{Rep}^{n-1}(Y)\ \stackrel{{ 1}}{{\implies}}\ \mathsf{Rep}^{n}(\pi_{\leq n -1}Y)\ \stackrel{{ 2}}{{\implies}}\ \mathsf{Rep}^{n}(Y)\,. \tag{5.4}\]
In the first step, we need to construct \(n\)-dim topological functionals for \(\pi_{\leq n-1}Y\) from lower-dim topological functionals. Because \(\pi_{\leq n-1}Y\) is \((n-1)\)-aspherical, i.e., it has no \(n\)-dim homotopical data at all, the \(n\)-dim topological functionals can be obtained via _condensation_.
Physically, following Ref. [30], condensation can be formulated via higher-gauging the sub-symmetries of solitonic symmetry on the \(n\)-dim operator manifold only. Mathematically, following Ref. [68], condensation corresponds to formulating the Karoubi completion. Thus we expect
\[\mathsf{Rep}^{n}(\pi_{\leq n-1}Y)\ \simeq\ \Sigma\mathsf{Rep}^{n-1}(Y)\,. \tag{5.5}\]
The higher-gauging procedure shows that condensation relies on nontrivial lower-dim cycles on the operator manifold. Therefore, the \(n\)-dim topological functionals obtained by condensation must take trivial values on \(S^{n}\).
In the second step, the Postnikov fibration (5.3) induces an injective functor
\[\Sigma\mathsf{Rep}^{n-1}(Y)\ \to\ \mathsf{Rep}^{n}(Y)\,. \tag{5.6}\]
What new objects are added in this step? The homotopy fiber of the fibration (5.3) points out the answer: the topological functionals that are nontrivial on \(S^{n}\). Namely, we expect that the Postnikov fibration induces a "short exact sequence",
\[0\ \to\ \Sigma\mathsf{Rep}^{n-1}(Y)\ \to\ \mathsf{Rep}^{n}(Y)\ \to\ \widetilde{\mathsf{Rep}}^{n}(B^{n}\pi_{n}Y)\ \to\ 0\,. \tag{5.7}\]
Here the "reduced" solitonic cohomology \(\widetilde{\mathsf{Rep}}^{n}(B^{n}\pi_{n}Y)\) is the \(n\)-category completion of the commutative rig, the \(\mathbb{N}\)-span of \(\mathrm{Hom}\big{(}\pi_{n}Y,U(1)\big{)}\), by contractible morphisms. This fits our experience with \(n\)-form gauge fields of gauge group \(\pi_{n}Y\) and can be directly verified when \(\pi_{n}Y\) is finite. When the Postnikov fibration (5.3) is a trivial product, the "short exact sequence" (5.7) canonically splits, and the conventional wisdom makes the correct prediction. But when the Postnikov fibration (5.3) is nontrivial, the map \(\mathsf{Rep}^{n}(Y)\to\widetilde{\mathsf{Rep}}^{n}(B^{n}\pi_{n}Y)\) has a non-invertible kernel, and the fusion rule of spherical topological functionals becomes non-invertible in general. This non-invertible _rectification_ of \(\mathrm{Hom}\big{(}\pi_{n}Y,U(1)\big{)}\) is of our primary interest.
Using the invertible solitonic subsymmetry we discussed in Sec. 4.3.2, we can measure the rectification of \(\mathrm{Hom}(\pi_{\bullet}\!-,U(1))\) by (generalized) Hurewicz maps,
\[\begin{split}\mathrm{(bosonic)}&\pi_{\bullet}-\ \xrightarrow{h_{b}}\ H_{\bullet}(-;\mathbb{Z})\,,\\ \mathrm{(fermionic)}&\pi_{\bullet}-\ \xrightarrow{h_{f}}\ \mathbb{SZ}_{\bullet}(-)\,.\end{split} \tag{5.8}\]
Since these homology groups describe the invertible topological charges, the rectification of conventional wisdom is measured by the non-injectivity of Hurewicz maps.
* Two elements in \(\pi_{\bullet}\!-\) differed by an element in \(\ker h_{b}\) or \(\ker h_{f}\) must share the same invertible topological charge.
* The image of the dual Hurewicz map \(H^{\bullet}\big{(}-;U(1)\big{)}\to\mathrm{Hom}\big{(}\pi_{\bullet}\!-,U(1) \big{)}\) or \(\mathbb{SU}^{\bullet}(-)\to\mathrm{Hom}\big{(}\pi_{\bullet}\!-,U(1)\big{)}\) characterizes the survivors in the conventional wisdom as invertible operators.
Recall that the invertible solitonic symmetry we found in Sec. 4.3.2 is probably not maximal. Thus we may overestimate the rectification and regard some invertible operators as non-invertible. Nevertheless, when \(Y\) is \((n\!-\!1)\)-connected, such overestimation does not happen for dimension \(\leq n+1\) because our spectra are the \((-\!1)\)-connective covers of the authentic spectra. Besides, the non-rectified operators we find should be truly invertible.
### Examples of rectification
The failure of the Hurewicz maps to be injective originates from the non-Abelianness in the homotopy structure of the target space \(Y\). Namely, we say \(Y\) is Abelian if
all homotopy groups are Abelian and
\[Y\simeq\prod_{n=1}^{\infty}B^{n}\pi_{n}Y\,.\]
Hurewicz maps on Abelian spaces are always injective, and thus the conventional wisdom \(\operatorname{Hom}\bigl{(}\pi_{\bullet}\!-,U(1)\bigr{)}\) is not rectified. However, a general topological space is far from Abelian, and its Postnikov tower describes its non-Abelianness.
#### 5.2.1 Spherical rectification
First of all, the homotopy classes of field configurations on a sphere are, in general, not the homotopy group, i.e., \(\pi_{\bullet}Y\not\simeq[S^{\bullet},Y]\). This comes from a distinction between homotopies and based homotopies. Let us write based homotopy classes of based maps between two based spaces as \([-,-]_{\ast}\). Homotopy groups are precisely \(\pi_{\bullet}Y\equiv[S^{\bullet},Y]_{\ast}\). Through the homotopy extension property, \(\pi_{1}Y\) always naturally acts on \([-,Y]_{\ast}\). Then \([-,Y]\) is precisely the orbit space of this \(\pi_{1}Y\)-action, i.e. (see Prop. 4A.2 of Ref. [91])
\[[-,Y]\ \simeq\ \frac{[-,Y]_{\ast}}{\pi_{1}Y\text{-action}}\,. \tag{5.9}\]
On homotopy groups \(\pi_{n}Y\), the \(\pi_{1}Y\)-action comprises automorphisms of \(\pi_{n}Y\). As long as \([S^{\bullet},Y]\not\simeq\pi_{\bullet}Y\), the conventional wisdom \(\operatorname{Hom}\bigl{(}\pi_{\bullet}\!-,U(1)\bigr{)}\) is rectified.
In the simplest case of \(\bullet=1\), \(\pi_{1}Y\) acts on itself as inner automorphisms and thus
\[[S^{1},Y]\,\simeq\,\{\text{conjugacy classes of }\pi_{1}Y\}\,, \tag{5.10}\]
We have already seen this around the end of Sec. 4.2.3. The Hurewicz map \(\pi_{1}Y\to H_{1}(Y;\mathbb{Z})\simeq\widetilde{\mathbb{SZ}}_{1}(Y)\) is just the Abelianization. Therefore, we have
\[H^{1}\bigl{(}Y;U(1)\bigr{)}\simeq\operatorname{Hom}\bigl{(}\pi_{1}Y,U(1) \bigr{)}\,,\qquad\mathbb{SU}^{1}(Y)\simeq\operatorname{Hom}\bigl{(}\pi_{1}Y,U( 1)\bigr{)}\times\mathbb{Z}_{2}\,, \tag{5.11}\]
which comprises 1-dim (super-)representations of \(\pi_{1}Y\). The higher-dim (super-)representations of \(\pi_{1}Y\) constitute the non-invertible objects in \(\mathsf{Rep}^{1}(Y)\) and \(\mathsf{sRep}^{1}(Y)\), the \([\geq\!(d\!-\!2)]\)-form solitonic symmetry.
The \(\pi_{1}Y\)-action on higher homotopy groups18 has no chance to be inner since higher \(\pi_{\bullet}Y\) is Abelian. To describe topological functionals on
Footnote 18: There is an illuminating way to understand these actions. \(Y\)’s universal cover \(\bar{Y}\) naturally carries a free \(\pi_{1}Y\)-action. Each \(\pi_{1}Y\) element as a homeomorphism of \(\bar{Y}\) induces a group automorphism on \(\pi_{n}(\bar{Y})\). These automorphisms assemble to a \(\pi_{1}Y\)-action on \(\pi_{n}(\bar{Y})\), which is then turned into a \(\pi_{1}Y\)-action on \(\pi_{n}Y\) by the natural isomorphism \(\pi_{n}(\bar{Y})\simeq\pi_{n}Y\) for \(n>1\).
\[[S^{\bullet},Y]\ \simeq\ \pi_{\bullet}Y\Big{/}\pi_{1}Y\text{-action}\,, \tag{5.12}\]
we can prescribe a semidirect product from the \(\pi_{1}Y\)-action,
\[\pi_{\bullet}Y\rtimes\pi_{1}Y\,. \tag{5.13}\]
Then a \(S^{\bullet}\) topological functional is a character of \(\pi_{\bullet}Y\rtimes\pi_{1}Y\) which vanishes for all group elements outside \(\pi_{\bullet}Y\). A nontrivial \(\pi_{1}Y\)-action implies the existence of such representations of dimension \(>1\) and accordingly, the \(S^{\bullet}\) topological functionals have a non-invertible fusion rule19. The simplest example for this effect is perhaps the \(S^{2}\) topological functionals for20
Footnote 19: The fact that the \(\pi_{1}\)-action can lead to a nontrivial interplay between solitons of different dimensions and an unconventional topological conservation law was also noticed by, e.g., Refs. [92, 93].
Footnote 20: The latter example \(Y\simeq BO(2)\) was discussed as semi-Abelian gauge theories in Refs. [25, 26, 94]. The non-invertible symmetries for the electric 1-form symmetries have been discussed there, while our discussion here is devoted to the magnetic one. In both cases, the mechanism for constructing non-invertible symmetries is essentially the same, and they can be exchanged under a duality transformation. We note that there is a mixed 't Hooft anomaly between the electric and the magnetic symmetry, so we cannot find a path-integral description where both symmetries are solitonic.
\[Y\,\simeq\,\mathbb{R}P^{2}\,\simeq\,S^{2}/\mathbb{Z}_{2}\quad\text{or}\quad Y \,\simeq\,BO(2)\,\simeq\,BU(1)/\mathbb{Z}_{2}\,, \tag{5.14}\]
or any other \(Y\) whose \(\pi_{\leq 2}Y\) is given by the unique nontrivial split fibration of the following form,
\[B^{2}\mathbb{Z}\to\pi_{\leq 2}Y\to B\mathbb{Z}_{2}\,. \tag{5.15}\]
In these examples, \(\pi_{1}Y\simeq\mathbb{Z}_{2}\) acts on \(\pi_{2}Y\simeq\mathbb{Z}\) through \(\mathbb{Z}\to-\mathbb{Z}\), which means
\[[S^{2},Y]\,\simeq\,\mathbb{N}\,. \tag{5.16}\]
We can find its homology to be
\[H_{2}(Y;\mathbb{Z})\,\simeq\,0\,,\qquad\mathbb{S}\mathbb{Z}_{2}(Y)\,\simeq\,0\,, \tag{5.17}\]
which implies that the conventional wisdom \(\text{Hom}\big{(}\pi_{\bullet}\!-,U(1)\big{)}\) is completely rectified into non-invertible symmetry. \(S^{2}\) topological functionals are given by characters of \(D_{\infty}\simeq\mathbb{Z}\rtimes\mathbb{Z}_{2}\) which vanish outside \(\mathbb{Z}\). Since there are infinitely many topological charges, according to the discrete approximation, we should consider the representations that are arbitrarily close to representations of quotient \(D_{n}\simeq\mathbb{Z}_{n}\rtimes\mathbb{Z}_{2}\) for any finite \(n\). Therefore, \(S^{2}\) topological functionals are \(\mathbb{N}\)-linearly spanned by the characters of 2-dim irreducible representations of \(D_{\infty}\), namely \(2\cos(\phi\bullet)\) with \(\bullet\in\pi_{2}Y\simeq\mathbb{Z}\) for all \(\phi\in\mathbb{R}/2\pi\mathbb{Z}\). The non-invertible fusion rule follows evidently,
\[2\cos(\alpha\bullet)\,\times\,2\cos(\beta\bullet)\ \ =\ \ 2\cos[(\alpha\!+\! \beta)\bullet]\,+\,2\cos[(\alpha\!-\!\beta)\bullet]\,. \tag{5.18}\]
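A short numerical check (illustrative code added here; the angles are arbitrary sample values) confirms this fusion rule on the charge set \([S^{2},Y]\simeq\mathbb{N}\) and makes non-invertibility explicit: every such functional evaluates to \(2\) on the trivial configuration, so no fusion product of them can reproduce the trivial functional.

```python
import numpy as np

def functional(phi, charges):
    """S^2 topological functional 2*cos(phi * charge): a class function of
    D_infinity = Z x| Z_2 supported on the normal subgroup Z of pi_2 charges."""
    return 2.0 * np.cos(phi * charges)

charges = np.arange(0, 50)      # representatives of [S^2, Y] ~ N
alpha, beta = 0.7, 1.9          # two generic angles in R/2piZ

lhs = functional(alpha, charges) * functional(beta, charges)
rhs = functional(alpha + beta, charges) + functional(alpha - beta, charges)
assert np.allclose(lhs, rhs)    # 2cos(a n) 2cos(b n) = 2cos((a+b)n) + 2cos((a-b)n)

print(functional(alpha, np.array([0]))[0])  # 2.0: "quantum dimension" 2, hence no inverse
```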
In summary, the \(\pi_{1}\)-action \(\pi_{1}\times\pi_{\bullet}\to\pi_{\bullet}\) rectifies the conventional wisdom by directly assigning a non-invertible fusion rule to spherical topological functionals.
#### 5.2.2 Non-spherical rectification
Let us now discuss subtler effects that cause non-injectivity beyond \(\pi_{1}\)-actions. When \(Y\) is \((k\!-\!1)\)-connected with \(k>1\), the Hurewicz map, \(h_{b}:\pi_{\bullet}Y\to H_{\bullet}(Y;\mathbb{Z})\), is an isomorphism for \(\bullet\leq k\) (and the same is true for \(h_{f}\)), which shows that the solitonic symmetry is invertible at least up to \(k\)-dim topological functionals. Non-invertibility can appear for larger dimensions, \(\bullet\geq k+1\). Since there is no \(\pi_{1}\)-action here, topological functionals must be invertible
as long as they inhabit spheres. Thus rectification happens because a spherical topological functional can also inhabit non-spheres.
To see how the non-invertible symmetry appears in this situation, we need to take into account the inter-dimensional effects between solitons. As we have discussed in Sec. 2.2.3, for some \(k\leq p<n\), a codimension-\((p+1)\) solitonic defect may carry not only the \(p\)-dim solitonic charge but also an \(n\)-dim solitonic charge, and such an \(n\)-dim solitonic charge needs to be measured by non-spherical topological functionals. As the Hurewicz map of degree \(n\) is not injective, this may violate the selection rule given by \(\mathrm{Hom}(\pi_{n}Y,U(1))\). In other words, the conventional selection rule \(\mathrm{Hom}(\pi_{n}Y,U(1))\) is valid only if solitons of higher dimensions are absent, and the selection rules are modified otherwise. Accordingly, the \(n\)-dim solitonic symmetry has to be non-invertible to capture this intriguing selection rule.
As the present authors pointed out in the previous paper Ref. [1], this effect actually happens in the 4-dim \(\mathbb{C}P^{1}\) sigma model. Let us discuss it here as an example. Homotopy groups of the simply-connected \(\mathbb{C}P^{1}\simeq S^{2}\) at low degrees are well-known to be
\[\pi_{2}(\mathbb{C}P^{1})\simeq\mathbb{Z}\,,\qquad\pi_{3}(\mathbb{C}P^{1}) \simeq\mathbb{Z}\,. \tag{5.19}\]
The third Postnikov truncation of \(\mathbb{C}P^{1}\) then sits in a principal fibration
\[B^{3}\mathbb{Z}\ \longrightarrow\ \pi_{\leq 3}\mathbb{C}P^{1}\ \longrightarrow\ B^{2}\mathbb{Z}\,. \tag{5.20}\]
It is well-known that this fibration is classified by a generator of
\[H^{4}(B^{2}\mathbb{Z};\mathbb{Z})\,\simeq\,\mathbb{Z}\,. \tag{5.21}\]
This structure of the homotopy 3-type of \(\mathbb{C}P^{1}\) prescribes the following homologies,
\[H_{2}(\mathbb{C}P^{1};\mathbb{Z})\,\simeq\,\mathbb{Z}\,, \tag{5.22a}\] \[H_{3}(\mathbb{C}P^{1};\mathbb{Z})\,\simeq\,0\,, \tag{5.22b}\]
and requires all the relevant Hurewicz maps to be epimorphic. That the last map \(\pi_{3}(\mathbb{C}P^{1})\to\mathbb{S}\mathbb{Z}_{3}(\mathbb{C}P^{1})\) is epimorphic can be understood via the Freudenthal suspension theorem, owing to the natural isomorphism \(\widetilde{\mathbb{SZ}}_{3}(\mathbb{C}P^{1})\simeq\pi_{3}^{s}(\mathbb{C}P^{1})\).
On the one hand, the conventional wisdom is not rectified at dimension 2, which gives a \(U(1)\) 1-form symmetry acting on the line solitonic defects. The concrete topological functionals were constructed by Eq. (109) in Sec. 3.1. On the other hand, the conventional wisdom is indeed rectified at dimension 3, which gives non-invertible 0-form symmetry acting on point solitonic defects. The extent of rectification depends on the bosonic/fermionic nature of the theory. The conventional wisdom is completely rectified and non-invertible in the bosonic theory, while a \(\mathbb{Z}_{2}\) part survives in the fermionic theory. The concrete topological functional for this \(\mathbb{Z}_{2}\) part was constructed by Eq. (110) in Sec. 3.1.
We have also constructed non-invertible 3-dim topological functionals in Ref. [1] according to the discrete approximation. Concretely, we construct the rectified operators for \(\mathbb{Z}_{2N}\subseteq\mathrm{Hom}(\pi_{3}(\mathbb{C}P^{1}),U(1))\) for each \(N\in\mathbb{N}\). In particular, a generator of \(\mathbb{Z}_{2N}\) is rectified into the following non-invertible form:
\[\mathcal{H}_{\pi/N}(M)\equiv\int\mathcal{D}b\,\exp\left(-\mathrm{i}\int_{M} \frac{N}{4\pi}b\,\mathrm{d}b+\mathrm{i}\int_{M}\frac{1}{2\pi}b\,\mathrm{d} \sigma^{*}a\right)\,, \tag{5.23}\]
where \(\sigma^{*}a\) is the same as that in Eq. (3.9), i.e., \(\sigma|_{M}:M\mapsto\mathbb{C}P^{1}\) denotes the field map and \(a\) denotes the \(U(1)\) gauge field on \(S^{2}\) associated with the Hopf fibration \(S^{1}\to S^{3}\to S^{2}\). In the fermionic theory, these operators (5.23) are the building blocks of \(\mathsf{sRep}^{3}(\mathbb{C}P^{1})\) beyond condensation \(\mathsf{sRep}^{3}(B^{2}\mathbb{Z})\simeq\Sigma\mathsf{sRep}^{2}(\mathbb{C}P^{1})\). In the bosonic theory, where no spin structure is available, \(N\) needs to be restricted to even integers. These even-\(N\) operators are also building blocks of \(\mathsf{Rep}^{3}(\mathbb{C}P^{1})\).
When \(M_{3}=S^{3}\), these topological functionals recover a naive integral expression for the Hopf invariant,
\[\mathcal{H}_{\pi/N}(S^{3})=\frac{1}{\sqrt{N}}\exp\left(\mathrm{i}\frac{\pi}{N }\int_{S^{3}}\frac{\sigma^{*}a\,\mathrm{d}\sigma^{*}a}{4\pi^{2}}\right)=\frac {1}{\sqrt{N}}\exp\left(\mathrm{i}\frac{\pi}{N}\,\bullet\right)\,, \tag{5.24}\]
for \(\bullet\in\mathbb{Z}\simeq\pi_{3}(\mathbb{C}P^{1})\), which indeed echoes the conventional wisdom. The non-invertible feature of Eq. (5.23) becomes transparent by evaluating it around the line solitonic defect, which is also the charged object of the \(U(1)\) 1-form solitonic symmetry. The line solitonic defect is defined by setting the boundary condition for the \(\mathbb{C}P^{1}\) field on the normal sphere bundle around \(S^{1}\), which is nothing but \(S^{2}\!\times\!S^{1}\). Based on the structure of \([S^{2}\!\times\!S^{1},S^{2}]\) described by Eq. (3.6), for \((m,\ell)\in[S^{2}\!\times\!S^{1},S^{2}]\), we can evaluate to obtain
\[\mathcal{H}_{\pi/N}(S^{2}\!\times\!S^{1})=\begin{cases}\exp\left(\mathrm{i} \frac{\pi}{N}\ell\right)\,,\quad m=0\mod N\\ 0\,,\quad m\neq 0\mod N\end{cases}\,. \tag{5.25}\]
It does not lift the \(g\)-degeneracy discussed below Eq. (3.7) as we expected in Sec. 3.1. We see that the topological functional becomes nonzero only if the level \(N\) divides the 1-form topological charge \(m\), which clarifies that the presence of line solitonic defects causes the non-invertibility of the 0-form solitonic symmetry. Also, we see that different manifolds can support coherent topological charges, e.g., \(\ell\) here is correlated with \(\pi_{3}(\mathbb{C}P^{1})\).
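The evaluation rules (5.24) and (5.25) can be tabulated directly. The sketch below (illustrative code added here, not part of the original analysis) shows that fusing \(\mathcal{H}_{\pi/N}\) with its complex conjugate on \(S^{2}\times S^{1}\) produces a projector onto configurations with \(N\mid m\), rather than the identity, which is precisely the non-invertibility described above.

```python
import cmath

def H_on_S3(N, n):
    """Eq. (5.24): value on S^3 for the Hopf charge n in pi_3(CP^1) ~ Z."""
    return cmath.exp(1j * cmath.pi * n / N) / N ** 0.5

def H_on_S2xS1(N, m, l):
    """Eq. (5.25): value on S^2 x S^1 for (m, l) in [S^2 x S^1, S^2]."""
    return cmath.exp(1j * cmath.pi * l / N) if m % N == 0 else 0.0

N = 3
for m, l in [(0, 1), (3, 2), (4, 2)]:
    v = H_on_S2xS1(N, m, l)
    # |v|^2 is the fusion with the conjugate operator: 1 when N divides m, 0 otherwise,
    # i.e., a projector on the 1-form charge m rather than an invertible phase.
    print((m, l), v, abs(v) ** 2)
```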
### Examples of condensation
Finally, we describe some examples of topological functionals that are trivial on spheres, i.e., those that lie in the image of Eq. (5.6). In fact, Sec. 5.2.2 already implies such examples when we try to fuse some operators there. However, here we shall present several cleaner examples where the conventional wisdom totally vanishes. We focus on the simplest 2-dim case, so suppose \(M\) is a closed 2-dim manifold.
Prototypical bosonic examples are the 2-dim topological functionals for two \(\mathbb{Z}_{n}\) gauge fields \(s_{1,2}\in H^{1}(M;\mathbb{Z}_{n})\), or for two \(S^{1}\)-valued scalars \(\phi_{1,2}:M\mapsto\mathbb{R}/2\pi\mathbb{Z}\). The former case has \(Y\simeq B\mathbb{Z}_{n}^{2}\) while the latter has \(Y\simeq T^{2}\). In both cases \(\pi_{2}Y\simeq 0\), but
\[H^{2}\big{(}B\mathbb{Z}_{n}^{2};U(1)\big{)}\,\simeq\,\mathbb{Z}_{n}\quad\text{ and }\quad H^{2}\big{(}T^{2};U(1)\big{)}\,\simeq\,U(1)\,. \tag{5.26}\]
The corresponding 2-dim bosonic topological functionals inhabit Riemann surfaces \(M\). They are trivial on \(M\simeq S^{2}\) but non-trivial on higher-genus surfaces such as \(M\simeq T^{2}\). These topological functionals can be explicitly constructed via the universal invertible form (2.7). For \(Y\simeq B\mathbb{Z}_{n}^{2}\), we have
\[U_{k}(M)\ \equiv\ \exp\left\{\mathrm{i}k\frac{2\pi}{n}\int_{M}s_{1}\cup s_{2} \right\}\,,\qquad k\in\mathbb{Z}\,,\ k\sim k+n\,. \tag{5.27}\]
For \(Y\simeq T^{2}\), we have
\[U_{\theta}(M)\ \equiv\ \exp\left\{{\rm i}\theta\int_{M}\frac{{\rm d}\phi_{1}}{2 \pi}\wedge\frac{{\rm d}\phi_{2}}{2\pi}\right\}\,,\qquad\theta\sim\theta+2\pi\,. \tag{5.28}\]
They are manifestly trivial on \(S^{2}\). They can be defined just via field configurations on several properly selected loops inside \(M\).
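A minimal evaluation sketch (added here; it assumes the standard symplectic basis of one-cycles on \(T^{2}\), on which the pairings \(\int s_{1}\cup s_{2}\) and \(\int\frac{\mathrm{d}\phi_{1}}{2\pi}\wedge\frac{\mathrm{d}\phi_{2}}{2\pi}\) reduce to the antisymmetric product of holonomies or winding numbers) makes the condensation nature of Eqs. (5.27) and (5.28) visible: the functionals are sensitive only to winding along one-cycles and hence trivialize on \(S^{2}\).

```python
import cmath

def U_theta_T2(theta, w1, w2):
    """Eq. (5.28) on M = T^2: w_i = (a_i, b_i) are the winding numbers of phi_i
    along the two one-cycles; the integral of dphi1/2pi ^ dphi2/2pi is a1*b2 - a2*b1."""
    (a1, b1), (a2, b2) = w1, w2
    return cmath.exp(1j * theta * (a1 * b2 - a2 * b1))

def U_k_T2(k, n, h1, h2):
    """Eq. (5.27) on M = T^2 for Y = B(Z_n x Z_n): h_i = (a_i, b_i) are the Z_n
    holonomies of s_i; the cup-product pairing is again a1*b2 - a2*b1 (mod n)."""
    (a1, b1), (a2, b2) = h1, h2
    return cmath.exp(2j * cmath.pi * k * ((a1 * b2 - a2 * b1) % n) / n)

print(U_theta_T2(0.5, (1, 0), (0, 1)))  # nontrivial on T^2: exp(0.5i)
print(U_theta_T2(0.5, (0, 0), (0, 0)))  # no winding along one-cycles -> 1, i.e. trivial as on S^2
print(U_k_T2(1, 4, (1, 0), (0, 2)))     # Z_4 example: exp(2*pi*i*2/4) = -1
```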
Prototypical fermionic examples are the 2-dim topological functionals for a \(\mathbb{Z}_{2n}\) gauge field \(s\in H^{1}(M;\mathbb{Z}_{2n})\), or for an \(S^{1}\)-valued scalar \(\phi:M\mapsto\mathbb{R}/2\pi\mathbb{Z}\). The former has \(Y\simeq B\mathbb{Z}_{2n}\) while the latter has \(Y\simeq S^{1}\). We can readily see \(\pi_{2}Y\simeq H_{2}(Y;\mathbb{Z})\simeq 0\) and find via the long exact sequence (106) that
\[\mathbb{SZ}_{2}(Y)\,\simeq\,H_{1}(Y;\mathbb{Z}_{2})\,\simeq\,\mathbb{Z}_{2}\,. \tag{5.29}\]
We thus see that the corresponding topological functionals are fermionic, i.e., they inhabit spin Riemann surfaces \(M\). Also, they are trivial on \(S^{2}\) but non-trivial on other spin Riemann surfaces. Via the universal invertible form (2.7), we can construct these fermionic topological functionals through the \({\rm Arf}\) invariant of spin structures. Namely, we pick up a spin structure \(\rho\) on \(M\) such that \({\rm Arf}(\rho)=0\), i.e., \([\rho]=0\in\Omega_{2}^{Spin}\). Recall that, on top of an orientation, spin structures form an \(H^{1}(M;\mathbb{Z}_{2})\)-torsor. Then for \(Y\simeq B\mathbb{Z}_{2n}\), we have
\[U_{k}(M)\ \equiv\ (-)^{k\,{\rm Arf}\left(\rho+\bar{s}\right)}\,,\qquad k \in\mathbb{Z}\,,\ k\sim k+2\,, \tag{5.30}\]
where \(\bar{s}\) denotes the mod-2 reduction of \(s\). For \(Y\simeq S^{1}\), we just substitute \(\bar{s}\) above with the mod-2 reduction of \([\phi]\in H^{1}(M;\mathbb{Z})\). It is evident that this topological functional is trivial on \(S^{2}\).
We now carefully analyze the identity problem (see Sec. 3.1) of the above example. Concretely, we consider torus topological functionals for a \(S^{1}\)-valued scalar or a \(\mathbb{Z}_{n}\)-valued gauge field. Recall that the target space takes the form \(Y\simeq BR\) for \(R\simeq\mathbb{Z}\) in the former case and for \(R\simeq\mathbb{Z}_{n}\) in the latter case. We readily recognize
\[[T^{2},BR]\simeq H^{1}(T^{2};R)\simeq R^{2}\,. \tag{5.31}\]
The mapping class groups of \(T^{2}\) are well-known, e.g.,
\[\pi_{0}{\rm Diff}(T^{2}) \simeq GL(2,\mathbb{Z})\,, \tag{5.32a}\] \[\pi_{0}{\rm Diff}(T^{2},\xi) \simeq SL(2,\mathbb{Z})\,,\] (5.32b) \[\pi_{0}{\rm Diff}(T^{2},\xi,\rho) \simeq p^{-1}\Big{(}\mathbb{Z}_{2}\Big{)}\,,\quad\text{for $\mathbb{Z}_{2}\subseteq SL(2,\mathbb{Z}_{2})$ and $SL(2,\mathbb{Z})\overset{p}{\to}SL(2,\mathbb{Z}_{2})$}\,. \tag{5.32c}\]
where \(\xi\) is an orientation and \(\rho\) is still a spin structure such that \({\rm Arf}(\rho)=0\). Without loss of generality, \(\rho\) can be taken as antiperiodic-periodic. Since \([T^{2},BR]\) is just a 2-dim lattice, these mapping class groups act on \([T^{2},BR]\) just as 2-by-2 \(\mathbb{Z}\)-valued matrices. These actions are far from trivial, and concretely, the criteria for classifying \((a,b)\in[T^{2},BR]\) into its action orbit are as follows:
\[\pi_{0}{\rm Diff}(T^{2})\text{:}\quad\text{gcd}(a,b)\,, \tag{5.33a}\] \[\pi_{0}{\rm Diff}(T^{2},\xi)\text{:}\quad\text{gcd}(a,b)\,,\] (5.33b) \[\pi_{0}{\rm Diff}(T^{2},\xi,\rho)\text{:}\quad\text{gcd}(a,b)\,, \ a\ {\rm mod}\ 2\,, \tag{5.33c}\]
where we stipulate \(\gcd(0,0)\equiv 0\). We see that a spin structure slightly lifts the degeneracy when \(R\) has an even characteristic. This tiny degeneracy lifting is exactly realized by the invertible topological functional (5.30). Despite the huge degeneracy, equation (5.31) still gives tremendously many inequivalent topological charges, and they have to be distinguished by non-invertible topological functionals. They are examples of non-invertible condensations.
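To see the degeneracy lifting concretely, here is a small illustration (added here; which cycle the reference structure \(\rho\) is antiperiodic along is an arbitrary labeling choice) of the functional (5.30) on \(T^{2}\): encoding a spin structure by its boundary-condition bits (1 = periodic), the Arf invariant on \(T^{2}\) is the product of the two bits, and the torsor shift by \(\bar{s}\) flips them. The configurations \((a,b)=(1,0)\) and \((0,1)\) have the same \(\gcd\) and are identified by \(SL(2,\mathbb{Z})\), yet the spin functional distinguishes them.

```python
def arf_T2(eps1, eps2):
    """Arf invariant of a spin structure on T^2 from its boundary conditions
    (1 = periodic/Ramond, 0 = antiperiodic/Neveu-Schwarz) along the two cycles."""
    return (eps1 * eps2) % 2

def U_k(k, rho, s_bar):
    """Eq. (5.30): (-1)^(k Arf(rho + s_bar)); adding s_bar in H^1(T^2; Z_2)
    flips the boundary condition along each cycle."""
    e1 = (rho[0] + s_bar[0]) % 2
    e2 = (rho[1] + s_bar[1]) % 2
    return (-1) ** (k * arf_T2(e1, e2))

rho = (0, 1)  # reference antiperiodic-periodic structure, Arf(rho) = 0

for a, b in [(1, 0), (0, 1)]:
    print((a, b), U_k(1, rho, (a % 2, b % 2)))   # -> (1, 0): -1,  (0, 1): 1
```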
## 6 Discussion
In this paper, we discussed the general structure of the non-invertible solitonic symmetry by attempting a precise mathematical formulation (Sec. 4) after a pursuit of the proper physical constraints (Sec. 3). It allows us to discuss how the non-invertible structure in general solitonic symmetry goes beyond the conventional wisdom of homotopy groups (Sec. 5) in terms of concrete examples. Nevertheless, our analysis in this paper is still far from complete and should be complemented by future studies. In this section, we are going to make some remarks and discuss the outlooks for future studies.
### Remarks
First, we point out an interesting connection between the solitonic symmetry and the higher-group symmetry, which was actually already mentioned around the end of Sec. 2.1. Assume we have a QFT \(\mathcal{T}\) with an anomaly-free discrete higher-group \(\mathsf{G}\). Then we can consider another QFT \(\mathcal{T}/\mathsf{G}\) by dynamically gauging the higher-group \(\mathsf{G}\). This gauging procedure can be realized by adding a path integral over \(\mathsf{G}\) gauge fields. The homotopy target space of this path integral is precisely the classifying space \(B\mathsf{G}\). Thus the theory \(\mathcal{T}/\mathsf{G}\) acquires the solitonic symmetry \((\mathsf{s})\mathsf{Rep}^{\bullet}(B\mathsf{G})\). Therefore, gauging a non-anomalous higher-group symmetry produces a QFT with non-invertible solitonic symmetry. In particular, this picture suggests the last property of solitonic symmetry described in this paper.
**Conjecture 6.1**: _Solitonic symmetry is free of 't Hooft anomaly._
It comes from the belief that the dynamical gauging procedure is reversible. Namely, if we could develop the proper gauging procedures of the solitonic symmetry, then it would be natural to expect that gauging of \((\mathsf{s})\mathsf{Rep}^{\bullet}(B\mathsf{G})\) in \(\mathcal{T}/\mathsf{G}\) produces the original theory \(\mathcal{T}\). This picture echoes the ideas developed in Refs. [52; 53; 54; 55]. One may consider a Tannaka duality between higher-group symmetry and solitonic symmetry. This also suggests a wide applicability of solitonic symmetry: A broad class of non-solitonic symmetry can also be described by solitonic symmetry via a properly selected virtual target space \(Y\).
Second, we point out that QFTs with discrete higher-group symmetry also possess solitonic sectors. Their solitonic sectors are the higher-domain-walls of the spontaneously broken higher-group symmetry and inhabit only non-closed manifolds (recall Footnote 5). However, via the dynamical gauging discussed above, these non-closed solitonic sectors are the counterparts of our closed solitonic sectors addressed in this paper. For example, when a discrete symmetry is spontaneously broken, there are domain walls connecting different vacua, and they are well-defined on the infinite space. However, when we consider the closed space with the periodic boundary condition, a single domain wall is inconsistent with
the boundary condition. Even in this case, the single domain wall becomes well-defined on closed manifolds by considering the gauging of the broken symmetry or the symmetry-twisted sector. Therefore, solitons in different theories can behave locally similarly but have different global behaviors.
### Outlooks
We present a list of outlooks based on the subtle points in the analysis of fully-extended TQFTs in the present paper. They should also be of interest to those concerned about classifying gapped phases.
* Does the Karoubi completeness, as well as the operation \(\Sigma\), provide a fully satisfactory solution to the codomains for fully-extended TQFTs? Does \(\Sigma\) correctly capture the physical condensation?
* The validity of Conjecture 4.4 on the homotopy actions should be confirmed.
* The maximal invertible solitonic symmetry, i.e., the structure of the infinite loop spaces \(\left(\Sigma^{n-1}\mathsf{Vect}^{fd}\right)^{\times}\) and \(\left(\Sigma^{n-1}\mathsf{sVect}^{fd}\right)^{\times}\) for \(n>1\), should be clarified. The higher-group structure of invertible solitonic symmetry should also be clarified.
As for the solitonic symmetry itself, we also have many prospects, such as exploring the relation between solitonic symmetry and other generalized symmetry, constructing explicitly more examples of topological functionals, and presenting a comprehensive classification of low-dim topological functionals. A list of intriguing general questions is as follows.
* The validity of Conjecture 3.3 on the correspondence between TQFTs and their partition functions should be confirmed.
* A systematic treatment of continuous solitonic symmetry and infinitely many topological charges should be developed. A proposal is to consider things like \(\Sigma^{n-1}\mathsf{Hilb}^{fd}\) (c.f. Ref. [81]). Another is to make the discrete approximation rigorous.
* We should rigorously formulate the inductive decomposition into non-spherical condensation and spherical rectification discussed in Sec. 5.1. Namely, we should prove our expected "short exact sequence" (5.7) and determine its structure by the Postnikov fibration.
* Can we tackle solitonic symmetry from the side of topological charges? This requires a complicated analysis of the homotopy classes of maps on manifolds, including their relations via bordisms (see Ref. [1]) to resolve the coherence problem (see Sec. 3.2), as well as a direct treatment of the identity problem (see Sec. 3.1).
If these problems can be solved, we will further strengthen our confidence in Ansatz 3.2 or know how to modify it. This work thus serves as an outset in the pursuit of a perfect description of solitonic symmetry. |
2310.04148 | Self-Supervised Neuron Segmentation with Multi-Agent Reinforcement
Learning | The performance of existing supervised neuron segmentation methods is highly
dependent on the number of accurate annotations, especially when applied to
large scale electron microscopy (EM) data. By extracting semantic information
from unlabeled data, self-supervised methods can improve the performance of
downstream tasks, among which the mask image model (MIM) has been widely used
due to its simplicity and effectiveness in recovering original information from
masked images. However, due to the high degree of structural locality in EM
images, as well as the existence of considerable noise, many voxels contain
little discriminative information, making MIM pretraining inefficient on the
neuron segmentation task. To overcome this challenge, we propose a
decision-based MIM that utilizes reinforcement learning (RL) to automatically
search for optimal image masking ratio and masking strategy. Due to the vast
exploration space, using single-agent RL for voxel prediction is impractical.
Therefore, we treat each input patch as an agent with a shared behavior policy,
allowing for multi-agent collaboration. Furthermore, this multi-agent model can
capture dependencies between voxels, which is beneficial for the downstream
segmentation task. Experiments conducted on representative EM datasets
demonstrate that our approach has a significant advantage over alternative
self-supervised methods on the task of neuron segmentation. Code is available
at \url{https://github.com/ydchen0806/dbMiM}. | Yinda Chen, Wei Huang, Shenglong Zhou, Qi Chen, Zhiwei Xiong | 2023-10-06T10:40:46Z | http://arxiv.org/abs/2310.04148v1 | # Self-Supervised Neuron Segmentation with Multi-Agent Reinforcement Learning
###### Abstract
The performance of existing supervised neuron segmentation methods is highly dependent on the number of accurate annotations, especially when applied to large scale electron microscopy (EM) data. By extracting semantic information from unlabeled data, self-supervised methods can improve the performance of downstream tasks, among which the mask image model (MIM) has been widely used due to its simplicity and effectiveness in recovering original information from masked images. However, due to the high degree of structural locality in EM images, as well as the existence of considerable noise, many voxels contain little discriminative information, making MIM pretraining inefficient on the neuron segmentation task. To overcome this challenge, we propose a decision-based MIM that utilizes reinforcement learning (RL) to automatically search for optimal image masking ratio and masking strategy. Due to the vast exploration space, using single-agent RL for voxel prediction is impractical. Therefore, we treat each input patch as an agent with a shared behavior policy, allowing for multi-agent collaboration. Furthermore, this multi-agent model can capture dependencies between voxels, which is beneficial for the downstream segmentation task. Experiments conducted on representative EM datasets demonstrate that our approach has a significant advantage over alternative self-supervised methods on the task of neuron segmentation. Code is available at [https://github.com/ydchen0806/dbMiM](https://github.com/ydchen0806/dbMiM).
## 1 Introduction
Neuron segmentation is a crucial task for neuroscientists that allows for the analysis of the distribution and morphology of neurons, providing valuable insights into the connectomics research [2, 16]. Electron microscopy (EM) is the mainstream method for accurately identifying neural structures, but the dense nature of neurons and the presence of artifacts and deformations in EM images make the labeling process costly and decrease the credibility of existing annotation data [1, 13, 14]. Therefore, fully supervised neuron segmentation methods meet great challenges, especially when applied to large scale EM data.
Self-supervised methods have emerged as a solution to the limitations of fully supervised methods, which can be roughly divided into two categories: contrastive learning-based approach and mask image model (MIM)-based approach. The former requires a large number of positive and negative samples [1, 15, 16, 17] and relies heavily on data augmentation [1], making it a high-cost option for 3D biomedical images. The latter aims to learn useful structural information in images by masking and recovering certain voxels, which has been recently applied to pretraining biomedical images, showing improvements in downstream tasks [15, 14, 13]. However, the highly localized and
Figure 1: A comparison of the reconstruction effectiveness of our proposed method with MAE. The first column shows the original EM image, the second column shows the image after masking 85% of the voxels, the third column shows the reconstruction using the MAE method, and the fourth column shows the reconstruction using our method.
structured nature of EM data, as well as the existence of considerable noise, make it inefficient to directly employ the existing MIM in extracting useful information for neuron segmentation. It has also been observed that the masking ratio and masking strategy of MIM are highly sensitive and the optimal ones vary greatly across different datasets. Adjusting these configurations to train large models requires significant manual efforts and resources.
In this paper, targeting the neuron segmentation task, we propose a novel decision-based MIM relying on multi-agent reinforcement learning (MARL) [11] for automatically selecting the appropriate masking ratio and masking strategy, which consists of a target network and a policy network. Our approach partitions the input EM volume into patches and treats each patch as a basic control unit. The overall multi-agent task is modeled as a search process for patch masking strategies, where the action space for each patch is to either keep the original voxels or mask them. The feedback of the target network, in the form of the reconstruction loss, serves as the team reward signal for guiding the policy network to adaptively learn masking strategies that are beneficial for the pretraining task [14]. To improve the stability of training and achieve optimal joint decision-making for the entire volume, all agent networks share parameters and are trained in parallel. Furthermore, we introduce the HOG feature as an additional reconstruction loss to enable the target network to learn more structure information. Finally, following a UNETR decoder design [1], we add a segmentation head to the pretrained target network in the finetuning stage. Experimental results in Figure 1 show that our decision-based MIM achieves clearer reconstruction results than the original MAE [10] in the pretraining phase.
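The following PyTorch sketch illustrates how such a shared-policy, per-patch masking decision could be wired; it is only a schematic illustration under our own simplified naming (e.g., `SharedMaskPolicy`, `pretrain_step`, a toy target network), not the released implementation, and the plain REINFORCE update with the negative reconstruction loss as team reward is an assumption made for brevity. In this scheme, the realized masking ratio is not fixed in advance but emerges from the learned per-patch probabilities.

```python
import torch
import torch.nn as nn

class SharedMaskPolicy(nn.Module):
    """Shared behavior policy: maps every patch embedding to a masking probability."""
    def __init__(self, dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, dim), nn.GELU(), nn.Linear(dim, 1))

    def forward(self, patches):                        # patches: (B, L, D)
        return torch.sigmoid(self.net(patches)).squeeze(-1)   # (B, L) mask probabilities

def pretrain_step(patches, policy, target_net, optimizer):
    """One decision-based MIM step: each patch (agent) samples a keep/mask action from
    the shared policy, the target network reconstructs, and the negative reconstruction
    loss is used as a team reward in a REINFORCE-style policy update."""
    probs = policy(patches)                             # (B, L)
    dist = torch.distributions.Bernoulli(probs=probs)
    mask = dist.sample()                                # 1 = mask this patch
    visible = patches * (1.0 - mask).unsqueeze(-1)      # simplistic zero-out masking
    recon = target_net(visible)                         # predict all patches back
    rec_loss = ((recon - patches) ** 2 * mask.unsqueeze(-1)).mean()
    reward = -rec_loss.detach()                         # shared by all patch agents
    policy_loss = -(dist.log_prob(mask).mean() * reward)
    loss = rec_loss + policy_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return rec_loss.item(), mask.mean().item()          # loss and realized masking ratio

# Toy usage with random embeddings standing in for tokenized EM sub-volumes.
B, L, D = 2, 64, 32
patches = torch.randn(B, L, D)
policy = SharedMaskPolicy(D)
target_net = nn.Sequential(nn.Linear(D, D), nn.GELU(), nn.Linear(D, D))
opt = torch.optim.Adam(list(policy.parameters()) + list(target_net.parameters()), lr=1e-3)
print(pretrain_step(patches, policy, target_net, opt))
```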
Overall, our main contribution lies in the following aspects:
1) We propose an efficient self-supervised method, named decision-based MIM, for EM neuron segmentation using unlabeled EM data. To the best of our knowledge, it is the first effort that large-scale transformer pretraining is conducted on this task.
2) We propose a MARL-based approach for searching the optimal masking ratio and masking strategy by treating each patch as an agent with a shared policy, effectively exploring the search space and capturing dependencies between voxels.
3) We introduce the HOG feature as an additional reconstruction loss of our decision-based MIM, improving the convergence speed of network training and the performance of the downstream segmentation task.
4) We comprehensively demonstrate the effectiveness of our proposed method on two representative EM datasets, especially against alternative self-supervised methods on the task of neuron segmentation.
## 2 Related Work
### Neuron Instance Segmentation
In the field of EM image processing, neuron instance segmentation is an important task. [15] first proposed a convolutional neural network based on affinity generation, followed by clustering voxels in the affinity graph into instances based on post-processing methods such as watershed and LMC [1]. In recent years, there have been more advanced networks for the affinity-based approach. Funke et al.[14] introduced the MALIS loss[11] during the training process to encourage the network to generate correct topological segmentation. [10] introduced an embedding pyramid module to simulate affinity at different scales. [18] incorporated both embedding and affinity information and combined it with the graph neural network to further improve the distinguishability of adjacent objects in the feature space. However, due to the anisotropic resolution of 3D EM images in lateral and axial directions, the usage of transformer structures remains unexplored in the field of neuron segmentation. In this paper, we use an affinity-based setup and upgrade the decoder of UNETR [1] to adapt to the anisotropic EM features.
### Mask Image Model (MIM)
The MIM is an important branch of self-supervised learning. Masked autoencoders (MAE) [10] used an asymmetric encoding-decoding structure, encoding only the unmasked patches and using a lightweight decoder to recover the masked patches, which greatly reduced the resources required for computation and quickly became a mainstream structure for MIM. [11] and [12] separately validated the effectiveness of MAE on video datasets and proved that higher image masking ratios can be used in 3D datasets. [1] was the first to introduce MAE to the medical image field. [1] introduced a multi-modal multi-task adaptation to MAE, resulting in better performance than the original MAE. [1] proposed a multi-scale mixed convolution to encode images and achieved improved results on fine-grained downstream tasks such as image segmentation. [15] focused on the reconstruction target of the decoder and found that reconstructing artificial features such as HOG and SIFT can facilitate the network to classify and localize objects.
In summary, existing MIM-based methods mainly focus on the design of the encoder architecture, prediction head, and prediction target. Although many works have demonstrated the impact of masking strategies on downstream tasks, there has been little research along this line, and the latest method also requires a search-based masking strategy with a fixed masking ratio [1]. This paper proposes a novel MARL-based approach for adaptively learning the optimal masking ratio and masking strategy, making the pre-trained model more robust and achieving better performance on downstream tasks.
### Multi-Agent Reinforcement Learning (MARL)
In the field of deep RL, multiple agents working together can improve the efficiency and robustness of the model due to the limited observation and action space of a single agent. Given the complexity of computer vision tasks, MARL is often used to interact with the common environment to make decisions. [10] proposed a method for iteratively refining medical image segmentation using interactive MARL, where
rough image segmentation is initially provided and the network is iteratively refined through user feedback in the form of rewards until the segmentation is sufficient. [11] proposed a method for augmenting images through the use of blocks, with each block acting as an agent and working together to produce optimal data augmentation. Still, the application of MARL in computer vision suffers from the large decision space and difficulty in obtaining rewards. In this paper, to reduce the complexity of the state and the difficulty of searching for RL policies, we first segment the image into patches through a transformer-encoder and treat each patch as an agent. The action space for each patch is limited to only two options, masking or keeping the original voxels, and the rewards are obtained through the reconstruction loss of MAE. Therefore, the masking decision can be naturally modeled as a Markov process and optimized through MARL.
## 3 Proposed Method
Our decision-based MIM consists of two stages of training: pretraining and finetuning. Figure 2 illustrates the overall flow of the network training. In this section, we will first introduce some basic theories of Vision Transformer (ViT) and MARL, and then explain in detail the encoders, decoders, and loss functions of the two stages, as well as specific MARL modeling methods.
### Encoder-Decoder Design
We use ViT [13] as the backbone architecture for decision-based MIM pretraining and downstream segmentation tasks. To represent high-dimensional data in a ViT, we must transform it into a sequence of patches. Given an input 3D volume \(\mathbf{x}\in\mathbb{R}^{H\times W\times D\times C}\), where \(C\) is the number of channels and \((H,W,D)\) is the resolution, we reshape it into a sequence of flattened 3D patches \(\mathbf{x}_{p}\in\mathbb{R}^{N\times(\frac{P^{3}}{4}\cdot C)}\). The patch resolution is given by \((P/4,P,P)\), and the number of patches is calculated as \(N=\frac{4HWD}{P^{3}}\). These patches are then transformed into patch embeddings via a trainable linear projection.
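For concreteness, the patchification above can be realized with a strided 3D convolution, as in the following PyTorch-style sketch; the channel-first tensor layout, the patch size of \((4,16,16)\), and the embedding dimension are illustrative assumptions rather than the exact settings of our implementation.

```python
import torch
import torch.nn as nn


class PatchEmbed3D(nn.Module):
    """Anisotropic 3D patch embedding: a strided Conv3d is equivalent to
    splitting the volume into non-overlapping (P/4, P, P) patches and
    applying a shared linear projection to each flattened patch."""

    def __init__(self, patch=(4, 16, 16), in_chans=1, embed_dim=768):
        super().__init__()
        self.proj = nn.Conv3d(in_chans, embed_dim, kernel_size=patch, stride=patch)

    def forward(self, x):                    # x: (B, C, D, H, W)
        x = self.proj(x)                     # (B, embed_dim, D/4, H/16, W/16)
        return x.flatten(2).transpose(1, 2)  # (B, N, embed_dim) token sequence


# Example: a 32x256x256 EM sub-volume yields N = 8 * 16 * 16 = 2048 tokens.
tokens = PatchEmbed3D()(torch.randn(1, 1, 32, 256, 256))
print(tokens.shape)  # torch.Size([1, 2048, 768])
```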
Consistent with the MAE setup, we divide the image patches into visible and masked groups. The encoder in the MAE and ViT architectures processes only the visible patches. To enhance the performance of our decoder, we incorporate a histogram-of-oriented-gradients (HOG) feature [1], which has been shown to improve pretraining. This is achieved by providing the decoder with a full set of tokens, including the patch representations from the encoder and learnable position embeddings. By incorporating positional embeddings in all input tokens, we enable the decoder to simultaneously recover both the HOG feature and the original voxels, resulting in superior performance compared to the MAE method.
In our pretraining process, we utilize a loss function that combines both the mean squared error (MSE) loss for reconstructing the original voxels and the HOG loss for recovering the HOG features. The HOG feature can be calculated using the following equation
\[\text{HOG}_{i,j}=\frac{\sum_{x\in S_{i,j}}w(x)g(x)}{\sum_{x\in S_{i,j}}w(x)}, \tag{1}\]
where \(\text{HOG}_{i,j}\) is the histogram of oriented gradients for the cell located at position \((i,j)\), \(S_{i,j}\) is the set of voxels in the cell \((i,j)\), \(w(x)\) is a weighting function that assigns a weight to each voxel \(x\) in the cell, and \(g(x)\) is the gradient orientation of the voxel \(x\).
Our overall loss function can be expressed as
\[\mathcal{L}_{pretrain}=\lambda_{1}\mathcal{L}_{MSE}+\lambda_{2}\mathcal{L}_{ HOG}, \tag{2}\]
Figure 2: Our proposed network architecture is divided into two main components: (a) the decision-based Mask Image Model (MIM) pretraining process, which employs our proposed decision module to select appropriate patches for masking and then utilizes a 3D Vision Transformer (ViT) encoder to encode the visible patches. The resulting tokens are then passed through a lightweight decoder to reconstruct the original voxels and histograms of oriented gradient (HOG) features. (b) The fine-tuning process for the downstream segmentation task utilizes the encoder weights from the pretraining process and adds a UNETR segmentation head to output the affinity map. The final segmentation results are obtained through post-processing methods such as waterz.
where \(\lambda_{1}\) and \(\lambda_{2}\) denote the weights assigned to the MSE and HOG losses, respectively. We set \(\lambda_{1}\) to 0.1 and \(\lambda_{2}\) to 1.
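A minimal sketch of this multi-task objective is given below; restricting the loss to masked tokens and using an L1 distance for the HOG branch are assumptions that follow common MAE practice, not necessarily the exact released implementation.

```python
import torch.nn.functional as F


def pretrain_loss(pred_vox, target_vox, pred_hog, target_hog, mask,
                  lam_mse=0.1, lam_hog=1.0):
    """Multi-task reconstruction loss: voxel MSE plus HOG reconstruction,
    averaged over masked tokens only (mask: (B, N) float, 1 = masked)."""
    mse = (F.mse_loss(pred_vox, target_vox, reduction="none").mean(-1) * mask).sum() / mask.sum()
    hog = (F.l1_loss(pred_hog, target_hog, reduction="none").mean(-1) * mask).sum() / mask.sum()
    return lam_mse * mse + lam_hog * hog
```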
### Decision Module
In our proposed decision module, we model the masking strategy of image patches in MAE as a multi-agent cooperative decision-making problem and adopt a multi-agent reinforcement learning method to solve it. Here, we will introduce in detail the modeling methods of model states, observations, actions, the design of multi-agent team rewards, and the learning method used to update the policy network.
As shown in Figure 3, given the original input batch of image \(\mathbf{x}\), according to the ViT setup, we divide the image into equal-sized and non-overlapping patches. Our MARL policy aims to determine the overall joint masking policy based on the current input state and the observation of each agent. Our basic setup is as follows.
**State.**
The enhancement policy for each patch is closely related to the information of the context, so the decision-making process of MARL requires perceiving the semantic information of the image rather than directly inputting the patches of the raw image. In order to ensure the consistency and convergence of training, we use the target network, i.e. ViT, as the backbone to extract deep semantic features of the image. The global state at time step \(t\) is denoted as \(S^{t}\) and is visible to all agents.
**Observation.**
In addition to capturing global information \(S^{t}\), each agent needs to make a masking policy decision based on its own local observation. In general, the observation in MARL tasks is often a part of the global state. Considering these factors, we take the deep feature ViT\((P_{i})\) of patch \(P_{i}\) as the current observation value \(O_{i}^{t}\) for the i-th agent. The feature extractor for local features is the same as the one for global features, both using the ViT backbone.
**Action.**
The action of the i-th agent aims to output whether patch \(P_{i}\) needs to be masked. We define the action of the i-th agent as a vector \(A_{i}\). The joint action space can be represented as \(A=\{A_{1},A_{2},...,A_{N}\}\), where \(N\) represents the total number of patches. Since the action space only has two possibilities, masking or keeping the original voxels, the dimension of \(A_{i}\) is 2. Given the current state \(S^{t}\) and observation \(O_{i}^{t}\), each agent \(i\) will determine an action \(a_{i}(S^{t},O_{i}^{t})\in A_{i}\) based on the current policy. The final output is the global joint action \(\mathbf{a}^{t}=\{a_{1}^{t},a_{2}^{t},\ldots,a_{N}^{t}\}\). After all patches have taken their corresponding actions through the decision policy, the time step is updated to \(t=t+1\) and we obtain the enhanced volume through the decision module.
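One natural realization of this joint action is to sample a keep-or-mask decision per patch from the shared policy, as in the following sketch; the two-logit output format is an assumption for illustration.

```python
import torch


def sample_joint_action(policy_logits):
    """policy_logits: (B, N, 2) per-patch logits for {keep, mask}.
    Returns the sampled binary actions and their log-probabilities,
    which are reused later by the Actor loss."""
    dist = torch.distributions.Categorical(logits=policy_logits)
    actions = dist.sample()             # (B, N), 1 = mask this patch
    log_probs = dist.log_prob(actions)  # (B, N)
    return actions, log_probs
```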
**Rewards.**
Rewards are introduced in our MARL decision-making process in order to guide the agents to learn expected behaviors that will improve the main task's performance through more reasonable masking ratios and masking strategies, allowing the target net to better learn semantic information in the volume. Previous works [11, 22] attempted to increase the training loss of the target network based on rewards in order to generate deeper, more difficult-to-learn features. Inspired by these works, we refine the reward design based on the MAE pretraining paradigm. By comparing the loss difference between the data obtained from the masked data \(\mathbf{x}\cdot\mathbf{a^{t-1}}\) generated by the previous time step's decision module and the data obtained from the current decision module \(\mathbf{x}\cdot\mathbf{a^{t}}\), we compute the reward for the MARL policy. This encourages higher training loss during the MARL decision-making process, as shown by the equation
\[r^{t}=\mathcal{L}_{pretrain}(\phi(\mathbf{x}\cdot\mathbf{a^{t}}))-\mathcal{L}_{ pretrain}(\phi(\mathbf{x}\cdot\mathbf{a^{t-1}})). \tag{3}\]
In the above equation, \(\mathcal{L}_{pretrain}\) represents the reconstruction loss generated by the target network, and \(\phi\) denotes the target network. The accumulated reward of one sequence is
\[R^{t}=\sum_{i=t-T+1}^{t}\gamma^{i-1}\bar{r}^{i}, \tag{4}\]
where \(T\) represents the length of the time-step window over which the return is computed, the discount factor \(\gamma\) takes a value in (0, 1], and \(\bar{r}^{i}\) is the mean reward over all agents at time step \(i\).
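The reward bookkeeping of Eqs. (3) and (4) can be sketched as follows; the indexing of the discount factor within the window is an assumption.

```python
def step_reward(loss_curr, loss_prev):
    """Eq. (3): the reward is the increase in reconstruction loss caused by
    the new masking decision relative to the previous one."""
    return loss_curr - loss_prev


def accumulated_reward(mean_rewards, gamma=0.99):
    """Eq. (4): discounted sum of the mean per-step rewards over a window."""
    return sum((gamma ** i) * r for i, r in enumerate(mean_rewards))
```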
**Policy Learning.**
Considering that the action space for MARL decisions is discrete, we utilize the widely used Advantage Actor-Critic (A2C) algorithm [11] to perform MARL policy learning. Since the search space for actions is not large, we use simple convolution and pooling modules to adjust the Actor and Critic networks to the structure shown in Figure 3. The policy network is divided into an Actor and a Critic, which are adapted to the RL training algorithm. The Actor network learns a discrete control policy \(\pi(a_{i}^{t}|S^{t},O_{i}^{t})\), while the Critic network aims to estimate the value of the state \(V_{\pi}(S^{t})\). We model the centralized action value function \(Q\), which takes in the state information \(S\) and the actions of all agents, and outputs a \(Q\) value for the team, given by
\[Q^{\mathbf{\pi}}(S^{t},\mathbf{a^{t}})=E_{\mathbf{\pi}}\left[R_{t}\mid S^{t},a_{1}^{t}, \cdots,a_{N}^{t}\right], \tag{5}\]
where \(\mathbf{a^{t}}\) represents the joint action of all agents, defined as \(\mathbf{a}=\{a_{1},\cdots,a_{N}\}\), and \(R_{t}\) is the long-term discounted reward, given by equation 4. The advantage function on the policy is then given by
\[A^{\pi}(S^{t},\mathbf{a^{t}})=Q^{\pi}(S^{t},\mathbf{a^{t}})-V^{\pi}(S^{t}), \tag{6}\]
where \(A^{\pi}(S^{t},\mathbf{a^{t}})\) is the advantage of taking action \(\mathbf{a^{t}}\) given state \(S^{t}\) at time step \(t\), and \(V^{\pi}(S^{t})\) is the current state-value estimate output by the Critic. Subtracting this baseline removes the part of the accumulated reward that does not depend on the action and reduces the variance of the gradient. We use \(\theta_{p}\) and \(\theta_{v}\) to denote the parameters of the Actor and Critic, respectively. The squared value of the advantage function \(A^{\pi}\) is taken as the loss function to update \(\theta_{v}\) as
\[\mathcal{L}(\theta_{v})=A^{\pi}(S^{t},\mathbf{a^{t}})^{2}. \tag{7}\]
To further achieve cooperative capability, the loss function of the updated Actor \(\theta_{p}\) is defined as
\[\mathcal{L}(\theta_{p})=-\log\pi_{\theta}(\mathbf{a^{t}}\mid S^{t})A^{\pi}(S^{t}, \mathbf{a^{t}}), \tag{8}\]
where \(\pi_{\theta}(\mathbf{a^{t}}\mid S^{t})\) is the Actor output, that is, the probability of taking each action \(a_{i}^{t}\). The Actor and Critic are jointly trained in an end-to-end manner.
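A minimal sketch of the resulting update is shown below, assuming the accumulated reward \(R_{t}\) is used as a sample estimate of \(Q^{\pi}\); tensor shapes are illustrative.

```python
import torch


def a2c_losses(log_probs, values, returns):
    """log_probs: summed log pi(a_i | S, O_i) over agents (patches);
    values: Critic estimates V(S); returns: accumulated rewards R_t (Eq. (4))."""
    advantage = returns - values                            # A = R - V
    critic_loss = advantage.pow(2).mean()                   # Eq. (7)
    actor_loss = -(log_probs * advantage.detach()).mean()   # Eq. (8)
    return actor_loss, critic_loss
```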
### Neuron Instance Segmentation Method
The UNETR model is specifically designed for 3D segmentation tasks, such as organ segmentation. It is built upon the pretrained ViT encoder from decision-based MIM and a randomly-initialized convolutional decoder. The UNETR architecture is inspired by the U-Net model [15], with the inclusion of skip connections between the encoder and decoder at multiple resolutions. The input to the UNETR decoder is a sequence of representations, denoted as \(z_{3},z_{6},z_{9},z_{12}\), which are reshaped to restore the spatial dimension of \(\frac{H}{P}\times\frac{W}{P}\times\frac{D}{P}\times K\), where \(K\) is the feature channel.
Starting from the deepest feature, \(z_{12}\), each representation is processed through a varying number of deconvolutional blocks to increase its resolution by a specific factor. For example, \(z_{12}\) and \(z_{9}\) are upsampled by a factor of \(\times 2\), \(z_{6}\) is upsampled by a factor of \(\times 4\), and \(z_{3}\) is upsampled by a factor of \(\times 8\). Then, representations at the same spatial resolution, such as \(z_{12}\) and \(z_{9}\), are concatenated and further upsampled to match the shape of a shallower feature. This process of concatenation and upsampling is repeated until the full resolution of the input is restored. Finally, the output layer combines the upsampled feature and the original full-resolution input to predict the segmentation map.
In this paper, for the segmentation task of EM neurons, we use a combination of affinity map-based and post-processing approaches. We first create an affinity graph based on the voxel affinity. This graph serves as the foundation for our post-processing techniques, which utilize both waterz [17] and LMC [1] to cluster the affinity map and produce the final neuron segmentation results. Affinity-based methods have proven to be highly effective in accurately segmenting and analyzing these complex structures.
## 4 Experiments
### Training Strategy
Our strategy consists of two phases: pretraining and fine-tuning. In the pretraining phase, to improve the training efficiency of the framework, we first pretrain decision-based MIM for 100k iterations, synchronously updating the parameters of MAE and the policy network in the decision module, and then fix the parameters of policy network and only update the parameters of MAE for another 100k iterations. In the fine-tuning phase, we load the pretrained ViT weights into the model for the downstream task and train for 200k iterations.
We use the Adam optimizer in both the pretraining and fine-tuning phases, with \(\beta_{1}=0.9,\beta_{2}=0.999\). The only difference lies in the pretraining process, where we set the learning rate to 0.0001 and perform batch size 16 pretraining on 8 RTX 3090s. In the fine-tuning phase, we adopt a Layer-wise Learning Rate Decay (LLRD) training method, which adjusts the learning rate layer by layer during training. We set the learning rate of the last layer's parameters to 0.001 and the learning rate of the previous layer's parameters to 0.95 times the learning rate of the next layer's parameters. We conduct batch size 8 fine-tuning on 2 RTX 3090s.
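The layer-wise learning rate decay can be implemented with per-layer parameter groups, as in the sketch below; the attribute names (e.g., `encoder.blocks`, `seg_head`) are illustrative and not taken from our released code.

```python
def llrd_param_groups(encoder_blocks, head_params, base_lr=1e-3, decay=0.95):
    """Layer-wise learning rate decay: the last layer gets base_lr, and each
    earlier transformer block gets `decay` times the rate of the block after it."""
    groups = [{"params": list(head_params), "lr": base_lr}]
    lr = base_lr
    for block in reversed(list(encoder_blocks)):   # deepest -> shallowest
        lr *= decay
        groups.append({"params": list(block.parameters()), "lr": lr})
    return groups


# optimizer = torch.optim.Adam(
#     llrd_param_groups(model.encoder.blocks, model.seg_head.parameters()),
#     betas=(0.9, 0.999))
```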
### Datasets and Evaluation Metrics
**FAFB.** The Full Adult Fly Brain (FAFB) dataset [11] is a highly valuable resource for neuroinformatics research, offering a comprehensive and detailed view of the neural architecture of the _Drosophila melanogaster_ (fruit fly) brain. With a size of approximately 40 terabytes, this dataset features high-resolution 3D images with a resolution of approximately 4 nanometers per pixel, as well as manually annotated segmentation data identifying various brain structures such as neurons, glial cells, blood vessels, synapses, and other neuropil regions. In our work, we utilize the FAFB dataset as a key component in our pretraining process. To optimize the efficiency of our model, we first downsample the original dataset by a factor of 4, carefully curating a selection of 60 GB of images that exhibit exceptional imaging quality from the entire dataset. This strategic selection ensures that our model is trained on the most accurate and reliable data possible, setting the foundation for its future performance.
**CREMI.** The CREMI dataset is derived from the FAFB dataset and consists of three manually labeled sub-volumes from the _Drosophila_ brain, of which CREMI A has more regular forms, CREMI C has higher size disparities, and CREMI B has in-between segmentation difficulty. Each sub-volume has 125 slices of 1250\(\times\)1250 images, and we choose the first 60 slices for training, 15 slices for validation, and the remaining 50 slices for testing, which are utilized to validate our method's performance on segmentation tasks of varying difficulty.
Figure 3: The framework of our proposed decision module. The first step in the process is to extract features from the encoder of the target net. In the second step, the policy network is used to decide whether or not the current patch needs to be masked. The output of the joint action constitutes the final decision.
**AC3/AC4.** AC3/AC4 [14] are mouse somatosensory cortex datasets with 256 and 100 successive EM images (1024\(\times\)1024), respectively. The first 80 slices of AC3 are used as the training set, the following 20 slices as the validation set, and the first 50 slices of AC4 are used as the testing set.
**Evaluation Metrics.** For self-supervised training, we are primarily interested in the model's performance on the downstream task. To assess the quality of EM neuron segmentation, we mainly use the Variation of Information (VOI) [15] and Adapted Rand Error (ARAND) [1] metrics. Smaller VOI and ARAND values represent better segmentation results.
### Experimental Results
**Decision Making Process.** Decision-based MIM is pretrained using the FAFB dataset. During pretraining, the policy network in the decision module is also updated, and the real-time change in masking ratio is recorded as in Figure 4. The decision-making process starts with random decisions, resulting in an overall masking ratio of around 0.5. Then, using the reconstruction loss as a reward, the decision-making principles of the agents are updated. After 50k iterations, the decision module converges and produces better reconstruction results with less lost information. During pretraining, it is found that starting with a lower masking ratio is more beneficial for the EM dataset, and gradually increasing the masking ratio helps the network's learning process progress from easy to difficult datasets, which is effective for both upstream reconstruction and downstream segmentation tasks.
**Results on CREMI.** We compare our method to the original MAE pretraining method [1] and the Dino pretraining method [1] based on contrastive learning, as well as two affinity-based Unet structures for instance segmentation, superhuman [10] and MALA [12]. The outcomes of the trials are provided in Table 1 after applying both waterz and LMC post-processing procedures. Figure 5 shows the visualization results. Compared to existing self-supervised methods, our decision-based MIM approach shows a significant improvement for the downstream task and it also performs better
\begin{table}
\begin{tabular}{l l c c c|c c c c|c} \hline \hline \multirow{2}{*}{Dataset} & \multirow{2}{*}{Method} & \multicolumn{4}{c|}{waterz [12]} & \multicolumn{4}{c}{LMC [1]} \\ \cline{3-10} & & VOI\({}_{S}\) \(\downarrow\) & VOI\({}_{M}\) \(\downarrow\) & VOI \(\downarrow\) & ARAND \(\downarrow\) & VOI\({}_{S}\) \(\downarrow\) & VOI\({}_{M}\) \(\downarrow\) & VOI \(\downarrow\) & ARAND \(\downarrow\) \\ \hline \multirow{8}{*}{CREMI 1} & superhuman [10] & 0.443 & **0.320** & 0.763 & 0.132 & 0.578 & 0.272 & 0.850 & **0.131** \\ & MALA [12] & 0.478 & 0.627 & 1.105 & 0.459 & 0.574 & 0.303 & 0.878 & 0.146 \\ & UNETR [11] & 0.495 & 0.359 & 0.854 & 0.153 & 0.630 & 0.278 & 0.908 & 0.141 \\ & MAE [10]+UNETR & 0.463 & 0.324 & 0.787 & 0.144 & 0.563 & 0.268 & 0.831 & 0.139 \\ & Dino [1]+UNETR & 0.482 & 0.351 & 0.833 & 0.150 & 0.580 & 0.287 & 0.867 & 0.147 \\ & ours+UNETR & **0.411** & 0.331 & **0.743** & **0.131** & **0.537** & **0.260** & **0.797** & 0.134 \\ \hline \multirow{8}{*}{CREMI 2} & superhuman [10] & 0.668 & 0.409 & 1.076 & **0.082** & 0.959 & 0.232 & 1.191 & **0.060** \\ & MALA [12] & 0.797 & 0.512 & 1.309 & 0.147 & 1.060 & 0.264 & 1.324 & 0.084 \\ & UNETR [11] & 0.937 & 0.397 & 1.333 & 0.103 & 1.194 & 0.230 & 1.424 & 0.116 \\ & MAE [10]+UNETR & 0.776 & 0.391 & 1.167 & 0.099 & 0.994 & 0.224 & 1.218 & 0.102 \\ & Dino [1]+UNETR & 0.916 & 0.396 & 1.312 & 0.104 & 1.167 & 0.229 & 1.396 & 0.111 \\ & ours+UNETR & **0.642** & **0.381** & **1.023** & 0.092 & **0.893** & **0.220** & **1.113** & 0.097 \\ \hline \multirow{8}{*}{CREMI 3} & superhuman [10] & 0.943 & 0.385 & 1.328 & 0.134 & 1.176 & 0.260 & 1.436 & 0.125 \\ & MALA [12] & **0.901** & 0.621 & 1.522 & 0.169 & **1.137** & 0.289 & 1.426 & 0.127 \\ & UNETR [11] & 0.298 & 1.419 & 0.158 & 1.417 & 0.236 & 1.653 & 0.148 \\ & MAE [10]+UNETR & 1.001 & 0.298 & 1.299 & 0.120 & 1.272 & 0.214 & 1.486 & 0.113 \\ & Dino [1]+UNETR & 1.011 & 0.412 & 1.423 & 0.156 & 1.364 & 0.234 & 1.598 & 0.146 \\ & ours+UNETR & 0.925 & **0.276** & **1.201** & **0.107** & 1.194 & **0.204** & **1.398** & **0.112** \\ \hline \multirow{8}{*}{AC4} & superhuman [10] & 0.721 & 0.295 & 1.016 & **0.187** & **0.770** & 0.343 & 1.113 & 0.110 \\ & MALA [12] & 0.734 & 0.385 & 1.119 & 0.305 & 0.832 & 0.357 & 1.189 & **0.108** \\ \cline{1-1} & UNETR [11] & 0.736 & 0.337 & 1.245 & 0.316 & 1.007 & 0.340 & 1.347 & 0.134 \\ \cline{1-1} & MAE [10]+UNETR & 0.791 & 0.306 & 1.097 & 0.254 & 0.888 & 0.298 & 1.186 & 0.120 \\ \cline{1-1} & Dino [1]+UNETR & 0.889 & 0.329 & 1.218 & 0.298 & 1.001 & 0.314 & 1.315 & 0.129 \\ \cline{1-1} & ours+UNETR & **0.647** & **0.285** & **0.931** & 0.243 & 0.795 & **0.284** & **1.079** & 0.113 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Results on the CREMI dataset, VOI\({}_{S}\) represents split error, VOI\({}_{M}\) represents merge error, and VOI is the sum of the two. The final segmentation results are generated by using two classic post-processing methods, waterz and LMC.
Figure 4: The variation of the masking ratio in the MARL decision process, where the final convergence result shows that the optimal masking ratio fluctuates around 0.83.
than commonly used CNN-based methods in neuron segmentation, as can be seen in Table 1.
**Results on AC3/AC4.** In order to verify the robustness and generalizability of our pretraining method, we further conduct experiments on mouse cortical neural cells. The results in Table 1 indicate that our approach leads to a clear improvement in the downstream segmentation task, even when applied across different species.
### Ablation Study
We use UNETR to perform a comprehensive ablation study on the AC4 dataset.
**Effectiveness of the multi-task reconstruction.** We conduct ablation experiments on the model's reconstruction targets. As shown in Table 2, it demonstrates that utilizing HOG and MSE losses together for reconstruction in the MIM outperforms using MSE or HOG alone.
**Effectiveness of the decision module.** We compare our proposed method to a straightforward solution that manually adjusts the masking ratio. The ablation results, as shown in Table 3, demonstrate that our approach not only eliminates the need for manual adjustment of masking ratios but also outperforms the best results achieved through manual adjustment.
## 5 Conclusion
In this paper, we propose a decision-based MIM approach for neuron segmentation. Our method eliminates the need for manual adjustment of masking ratios and masking strategies, using multi-agent cooperation to search for the optimal solution. Additionally, during the pretraining process, we incorporate a multi-task reconstruction and utilize HOG features to enhance the model's learning ability. Our method is validated on a variety of EM neuron datasets to demonstrate its generalizability.
## Acknowledgements
This work was supported in part by the National Natural Science Foundation of China under Grant 62021001.
\begin{table}
\begin{tabular}{c c|c|c|c|c} \hline \hline \multirow{2}{*}{MSE} & \multirow{2}{*}{HOG} & \multicolumn{2}{c|}{waterz} & \multicolumn{2}{c}{LMC} \\ \cline{3-6} & & VOI \(\downarrow\) & ARAND \(\downarrow\) & VOI \(\downarrow\) & ARAND \(\downarrow\) \\ \hline \(\surd\) & & 1.097 & 0.254 & 1.186 & 0.120 \\ & \(\surd\) & 0.987 & 0.251 & 1.103 & 0.127 \\ \(\surd\) & \(\surd\) & **0.957** & **0.245** & **1.089** & **0.117** \\ \hline \hline \end{tabular}
\end{table}
Table 2: Ablation results of the multi-task reconstruction.
Figure 5: The visualization results in the CREMI C dataset reveal the areas of over-segmentation and under-segmentation produced by methods such as Superhuman, MALA, and UNETR.
\begin{table}
\begin{tabular}{c c|c|c|c|c} \hline \hline \multirow{2}{*}{Rate} & \multirow{2}{*}{Decision} & \multicolumn{2}{c|}{waterz} & \multicolumn{2}{c}{LMC} \\ \cline{3-6} & & VOI \(\downarrow\) & ARAND \(\downarrow\) & VOI \(\downarrow\) & ARAND \(\downarrow\) \\ \hline
0.65 & & 1.065 & 0.264 & 1.196 & 0.131 \\
0.75 & & 0.997 & 0.258 & 1.132 & 0.129 \\
0.85 & & 0.957 & 0.245 & 1.097 & 0.121 \\
0.95 & & 1.005 & 0.260 & 1.121 & 0.127 \\ / & \(\surd\) & **0.931** & **0.243** & **1.079** & **0.113** \\ \hline \hline \end{tabular}
\end{table}
Table 3: Ablation results of the decision module. |
2304.06818 | Soundini: Sound-Guided Diffusion for Natural Video Editing | We propose a method for adding sound-guided visual effects to specific
regions of videos with a zero-shot setting. Animating the appearance of the
visual effect is challenging because each frame of the edited video should have
visual changes while maintaining temporal consistency. Moreover, existing video
editing solutions focus on temporal consistency across frames, ignoring the
visual style variations over time, e.g., thunderstorm, wave, fire crackling. To
overcome this limitation, we utilize temporal sound features for the dynamic
style. Specifically, we guide denoising diffusion probabilistic models with an
audio latent representation in the audio-visual latent space. To the best of
our knowledge, our work is the first to explore sound-guided natural video
editing from various sound sources with sound-specialized properties, such as
intensity, timbre, and volume. Additionally, we design optical flow-based
guidance to generate temporally consistent video frames, capturing the
pixel-wise relationship between adjacent frames. Experimental results show that
our method outperforms existing video editing techniques, producing more
realistic visual effects that reflect the properties of sound. Please visit our
page: https://kuai-lab.github.io/soundini-gallery/. | Seung Hyun Lee, Sieun Kim, Innfarn Yoo, Feng Yang, Donghyeon Cho, Youngseo Kim, Huiwen Chang, Jinkyu Kim, Sangpil Kim | 2023-04-13T20:56:53Z | http://arxiv.org/abs/2304.06818v1 | # Soundini: Sound-Guided Diffusion for Natural Video Editing
###### Abstract
We propose a method for adding sound-guided visual effects to specific regions of videos with a zero-shot setting. Animating the appearance of the visual effect is challenging because each frame of the edited video should have visual changes while maintaining temporal consistency. Moreover, existing video editing solutions focus on temporal consistency across frames, ignoring the visual style variations over time, e.g., thunderstorm, wave, fire crackling. To overcome this limitation, we utilize temporal sound features for the dynamic style. Specifically, we guide denoising diffusion probabilistic models with an audio latent representation in the audio-visual latent space. To the best of our knowledge, our work is the first to explore sound-guided natural video editing from various sound sources with sound-specialized properties, such as intensity, timbre, and volume. Additionally, we design optical flow-based guidance to generate temporally consistent video frames, capturing the pixel-wise relationship between adjacent frames. Experimental results show that our method outperforms existing video editing techniques, producing more realistic visual effects that reflect the properties of sound. Please visit our page: [https://kusi-lab.github.io/soundini-gallery/](https://kusi-lab.github.io/soundini-gallery/).
## 1 Introduction
Video editing is essential in computer vision fields, with the remarkable development of practical applications in movie-making and social media content creation. Recent user-aided visual editing tools [9, 11, 49] have made it possible to add artistic or realistic visual effects, such as drizzling or explosions, to images and videos by rendering
repetitive patterns. However, editing each frame manually requires significant effort to produce a temporally coherent video.
There are two main approaches for automatic video editing: video decomposition-based and generative model-based. The former decomposes the representation of a specific object and the background, edits the appearance of the object, and re-renders the video [7, 29]. Despite their outstanding temporal consistency, these methods can edit only a specific object, not an arbitrary area of the video. The other approach utilizes GAN models pre-trained with large-scale, high-quality images to edit the semantic attributes of each frame over the video [4, 57]. However, both approaches are limited to producing only static styles and struggle to add dynamic visual effects.
We utilize the ability of sound to represent the dynamic scene context through factors specialized in audio, such as timbre, intensity, and volume. Existing works [33, 34, 35] show the usefulness of sound in generating and manipulating visual content by leveraging Generative Adversarial Networks (GANs) [8]. However, when editing real videos, each frame of the video must be projected to the latent space of the GANs [1, 2, 3, 5]. This process involves a trade-off between the quality of image reconstruction and the editability [51]. Therefore, we explore the potential power of sound for natural video editing using Denoising Diffusion Probabilistic Models (DDPMs) [15]. DDPMs have emerged as a powerful architecture for generating high-resolution images from Gaussian noise and can capture fine-detailed knowledge [38, 43]. However, adding visual effects to video with DDPMs remains challenging due to the absence of prior motion knowledge between adjacent frames.
To overcome these problems, we propose a novel framework that takes advantage of the acoustic characteristics of the sound input. Our framework utilizes a guidance-based diffusion sampling strategy incorporating a pre-trained audio latent representation to produce sound-guided motion and appearance editing. Furthermore, rather than denoising each video frame independently, optical flow-based guidance allows sampled adjacent frames to keep temporal consistency by matching warped frames to each other using estimated optical flow.
Experimental results show that our method can produce realistic video editing from various sound sources. For example, a video of the ocean is edited into the ocean with a thunderstorm-like exterior appearance using the sound of a thunderstorm (see Fig. 1). We also demonstrate that our procedure produces a temporally consistent video without further training on the video dataset. Our main contributions are listed as follows:
* We propose a novel local sound-guided diffusion for video editing with a sound and user-provided mask. In particular, our method can synthesize a visual effect that reflects properties of sound, such as temporal context, intensity, and volume.
* We develop an optical flow guidance leveraging an image-based diffusion model. We demonstrate that this procedure is useful for obtaining temporal consistency.
* In terms of style animating, we achieve state-of-the-art
Figure 2: **Overview of Soundini Soundini consists of two gradient-based guidance for diffusion: (a) _Local Sound guidance_ and (b) _Optical flow guidance_. In (a), we match the appearance of the sampled frame with the sound in the mask region using the loss \(\mathcal{L}_{\text{SG}}\) and \(\mathcal{L}_{\text{DSG}}\). First, we minimize the cosine distance loss \(\mathcal{L}_{\text{SG}}\) between the denoised frame and the sound segments in the audio-visual latent space. Additionally, the loss term \(\mathcal{L}_{\text{DSG}}\) aligns the variation of frame latent representation and that of sound, capturing the transition of sound semantics and reaction. In (b), our optical flow guidance \(\mathcal{L}_{\text{flow}}\) allows the sampled video to maintain temporal consistency by measuring the mean squared error between warped and sampled frames using optical flow. In particular, gradually increasing the influence on the loss \(\mathcal{L}_{\text{flow}}\) helps to sample the temporally consistent video stably. Background reconstruction loss \(\mathcal{L}_{\text{back}}\) allows the background to be close to the source video.**
performance compared to existing video editing methods, improving perceptual realism.
## 2 Related Work
**Video Editing** For conventional video editing methods, achieving temporal consistency is crucial to ensure naturalness and perceptual realism by preventing temporal jittering and loss of content. Previous GAN-based works [12, 23] have attempted to enforce constraints with optical flow between adjacent frames without further training on the video dataset. Additionally, recent works [4, 24, 47, 53, 57, 58, 59] edit each frame by leveraging StyleGAN [27, 28] with semantic disentanglement. However, these works focus on editing face, and the results are limited to the domain of the GAN latent space.
For open-domain video editing, several works [7] edit the video with a user-provided text prompt using pre-trained image-text joint latent representation. Despite their remarkable performance in video editing, they cannot produce movable visual effects because these works focus only on the contents of consistent objects. On the other hand, our framework can generate more realistic video editing results with movable stylization. Additionally, recent autoregressive approaches [37, 54] propose video manipulation according to given text inputs. However, those works fail to generate dynamic motion because their works highly depend on the order of visual patches.
Recently, text-driven video diffusion models [39, 56] show successful motions and appearance editing for videos. On the other hand, our method has the ability to control continuous visual changes with audio manipulation.
**Diffusion Models** Given text descriptions, recent diffusion models [6, 13, 17, 21, 30, 32, 40, 44, 61] have shown remarkable success in visual editing tasks. From randomly sampled gaussian noise, DDPMs [15], a class of probabilistic generative models, has the ability to denoise the image from noise iteratively [43], which exceeds the image generation quality of state-of-the-art GAN [15]. In particular, blended diffusion models [6] proposes an architecture for region-based image editing with text guidance. Furthermore, gradient-based diffusion guidance techniques [22, 40] effectively control the appearance and style of the image without further training during image sampling. However, their sampling process is inappropriate for generating temporally consistent videos because they do not consider any relationship between frames. To solve the issue mentioned above, we propose an optical flow guidance diffusion to achieve temporal consistency.
**Sound-Driven Visual Synthesis** Recent works [34, 36] have shown the effectiveness of sound in changing the fine-detailed style and appearance of objects in an image or video. Li _et al._[36, 60] proposes to learn visual styles solely based on audio information, leading to anticipated changes in the image style according to volume or mixed audio. In addition, the recent success of learning audio latent representation [18, 26, 48, 55] shows that the joint audio-visual latent space is helpful for sound-guided visual synthesis.
Among them, Lee _et al._[35] also takes advantage of sound properties, such as intensity, as well as semantic cues for sound-guided semantic image manipulation. Furthermore, the work shows high-resolution image synthesis with audio using CLIP [42] and StyleGAN latent space [28]. However, these works only focus on sound semantic cues and are not able to generate audio-relevant motion. In contrast, we match the motion of visual effects with the variation of the audio signal during the diffusion frame sampling process. MM-Diffusion [45] generates audio-video simultaneously, but it requires audio-visual pairs during training.
## 3 Method
We introduce a novel framework for sound-guided video local editing (see Fig. 2). First, we take a one-dimensional audio waveform as input and transform the audio waveform into a mel spectrogram. Given the preprocessed mel spectrogram sound input \(s\in\mathbb{R}^{M\times L}\) (2D), binary mask \(m\in\mathbb{R}^{N\times 1\times W\times H}\), and source video \(x\in\mathbb{R}^{N\times C\times W\times H}\), we predict the edited video \(\hat{x}\in\mathbb{R}^{N\times C\times W\times H}\), where \(N\) is the number of frames, \(W,H,C\) denote the width, height, and channel, respectively, and \(M\) and \(L\) are the number of mel spectrogram filters and the time duration.
Our goal is to make visual changes consistent with the sound in the region of interest. To achieve this goal, our model presents two main guidance for diffusion sampling: (i) Local Sound Guidance and (ii) Optical Flow Guidance. Section 3.1 explains how local sound guidance diffusion matches each frame with semantic cues and temporal variation of sound chunks. In Section 3.2, we propose bidirectional optical flow guidance for achieving temporal consistency across sampled frames.
Figure 3: **Illustration of optical flow guidance for diffusion** The frozen optical flow estimator produces bi-directional optical flows between the sampled frame \(i\) and frame \(i+1\). Then, we apply forward and backward warp to match each pixel and minimize the mean squared error between the adjacent and the corresponding warped frames.
### Local Sound Guidance
**DDPMs for Video Editing** Our proposed method for sound-guided natural video editing utilizes DDPMs [15]. To generate a forward Markov chain \(x_{1},...,x_{T}\), DDPMs obtain \(x_{t}\) by adding noise to the clean image \(x_{0}\) at the time step \(t\). Note that the last image \(x_{T}\) approaches a Gaussian distribution, particularly in the case of large \(T\).
To extend this approach to video, the forward Markov chain \(x_{i}^{1},...,x_{i}^{T}\) is operated independently for each frame where \(i\) denotes the frame index for \(i\in\{1,2,...,N\}\) and \(N\) denotes the frame number of the source video. Given a data distribution \(x_{i}^{0}\sim q(x_{i}^{0})\) of the frame \(i\), we define the Markov transition \(q(x_{i}^{t}|x_{i}^{t-1})\) from the normal distribution using the variance schedule \(\beta_{t}\in(0,1)\) as follows:
\[q(x_{i}^{t}|x_{i}^{t-1})=\mathcal{N}(x_{i}^{t};\sqrt{1-\beta_{t}}x_{i}^{t-1}, \beta_{t}\mathbf{I}),\quad t=1,...,T. \tag{1}\]
Without an intermediate step, the forward noising process produces \(x_{i}^{t}\) from \(x_{i}^{0}\) by adding noise as follows:
\[q(x_{i}^{t}|x_{i}^{0})=\mathcal{N}(x_{i}^{t};\sqrt{\bar{\alpha}_{t}}x_{i}^{0},(1-\bar{\alpha}_{t})\mathbf{I}) \tag{2}\] \[x_{i}^{t}=\sqrt{\bar{\alpha}_{t}}x_{i}^{0}+\sqrt{1-\bar{\alpha}_{t}}\epsilon_{i},\]
where \(\epsilon_{i}\sim\mathcal{N}(0,\mathbf{I})\) and \(\alpha_{t}=1-\beta_{t}\), and \(\bar{\alpha}_{t}=\prod_{s=1}^{t}\alpha_{s}\). In order to denoise and generate the Markov chain, we utilize the reverse process with a prior distribution of \(p(x_{i}^{T})=\mathcal{N}(x_{i}^{T};0,\mathbf{I})\). Estimating the parameters, \(\theta\) ensures that the generated reverse process matches closely the fixed forward process as follows:
\[p_{\theta}(x_{i}^{t-1}|x_{i}^{t})=\mathcal{N}(x_{i}^{t-1};\mu_{\theta}(x_{i}^ {t},t),\Sigma_{\theta}(x_{i}^{t},t)), \tag{3}\]
where \(t=T,...,1\) denotes the time step of the reverse process. Following Ho _et al._[21], instead of directly estimating \(\mu_{\theta}(x_{i}^{t},t)\), we train a denoising autoencoder \(\epsilon_{\theta}(x_{i}^{t},t)\) to predict the noise added to \(x_{i}^{0}\); the mean then follows from Bayes' theorem as:
\[\mu_{\theta}(x_{i}^{t},t)=\frac{1}{\sqrt{\alpha_{t}}}(x_{i}^{t}-\frac{\beta_{ t}}{\sqrt{1-\bar{\alpha}_{t}}}\epsilon_{\theta}(x_{i}^{t},t)). \tag{4}\]
Therefore, the denoising autoencoder \(\epsilon_{\theta}(x_{i}^{t},t)\) predicts the denoised frame \(x_{i}^{0}\) from each noisy latent diffusion \(x_{i}^{t}\). Given \(x_{i}^{t}\) and time step \(t\), the predicted \(i\)-th frame \(\hat{x}_{i}^{0}\) is directly obtained from \(\epsilon_{\theta}(x_{i}^{t},t)\) at each time step as follows:
\[\hat{x}_{i}^{0}=\frac{x_{i}^{t}-\sqrt{1-\bar{\alpha}_{t}}\epsilon_{\theta}(x_ {i}^{t},t)}{\sqrt{\bar{\alpha}_{t}}}. \tag{5}\]
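Eq. (5) corresponds to the following one-line estimate, where `alpha_bar_t` stands for \(\bar{\alpha}_{t}\); this is a sketch for clarity, not the exact implementation.

```python
def predict_x0(x_t, eps_pred, alpha_bar_t):
    """Recover the denoised frame estimate from the noisy frame x_t and the
    predicted noise eps_theta(x_t, t) at time step t (Eq. (5))."""
    return (x_t - (1.0 - alpha_bar_t) ** 0.5 * eps_pred) / alpha_bar_t ** 0.5
```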
**Local Sound Guidance** Blended Diffusion [6] leverages an image-text multi-modal embedding space pre-trained on large-scale image and text pairs to guide diffusion towards a target prompt. We extend this approach to video by averaging the cosine distance loss between each frame and its corresponding audio fragment \(s_{i}\). To achieve this, we obtain per-frame sound chunks \(s_{i}\) by sliding a window over the mel spectrogram along the time axis and extract a \(d\)-dimensional sound latent representation from each chunk. Note that the audio encoder \(f_{s}(\cdot)\) makes our sound latent representation semantically consistent with the latent image representation. We measure the cosine distance between the normalized latent vectors as follows:
\[\mathcal{L}_{\text{SG}}=\frac{1}{N}\sum_{i=1}^{N}d_{\text{cos}}(f_{v}(\hat{x} _{i}^{0}\odot m_{i}),f_{s}(s_{i})), \tag{6}\]
where \(d_{\text{cos}}(\cdot,\cdot)\) denotes the cosine distance and \(f_{v}(\cdot)\) denotes the CLIP image encoder. Furthermore, \(\odot\) denotes pixel-wise multiplication, and \(m_{i}\) denotes the binary mask according to each frame.
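A hedged sketch of Eq. (6) is given below; the encoder interfaces stand in for the CLIP image encoder \(f_{v}\) and the audio encoder \(f_{s}\) trained into the shared audio-visual space, and their exact signatures are assumptions.

```python
import torch.nn.functional as F


def sound_guidance_loss(frames_hat, masks, sound_chunks, f_v, f_s):
    """frames_hat: (N, C, H, W) denoised frame estimates; masks: (N, 1, H, W);
    sound_chunks: per-frame mel-spectrogram segments. Returns the mean cosine
    distance between masked-frame and audio embeddings (Eq. (6))."""
    v = F.normalize(f_v(frames_hat * masks), dim=-1)   # (N, d)
    a = F.normalize(f_s(sound_chunks), dim=-1)         # (N, d)
    return (1.0 - (v * a).sum(dim=-1)).mean()
```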
Figure 4: **Effect of sound** (a) Sounds with semantic visual transitions. Given the video and user-provided mask, we use the sound input with two distinctive meanings, _Crackling fire_ and _Explosion_. We observe that visual changes are smooth corresponding to sound semantics. (b) Visual changes with sound volume. As we increase the magnitude of sound _Explosion_, the style of the image also becomes drastic. There are five intervals per column.
We not only focus on matching the appearance and style of objects with the meaning of sound but also on visual changes corresponding to sound transitions. To generate sound-related motion for each frame, we minimize the cosine distance between the direction of audio embedding \(z_{s}\) and the direction of frame embedding between adjacent frames as follows:
\[\mathcal{L}_{\text{DSG}}=\frac{1}{N-1}\sum_{i=1}^{N-1}d_{\text{cos}}(\Delta_{i} ^{i+1}\textbf{v},\Delta z_{s}), \tag{7}\]
where \(d_{\text{cos}}\) is the cosine distance between the latent representation of the frame and the audio. The direction of the visual latent representation \(\Delta_{i}^{i+1}\textbf{v}\) is calculated as \(f_{v}(\hat{x}_{i+1}^{0}\odot m_{i+1})-f_{v}(\hat{x}_{i}^{0}\odot m_{i})\). In this way, we effectively produce movable visual effects corresponding to sound signals by measuring the variation of image embedding.
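The directional term of Eq. (7) can be sketched analogously, reusing the per-frame embeddings; shapes follow the same assumptions as above.

```python
import torch.nn.functional as F


def directional_sound_loss(v_embeds, a_embeds):
    """v_embeds, a_embeds: (N, d) per-frame visual and audio embeddings.
    Aligns the frame-to-frame change of the visual embedding with the
    change of the audio embedding (Eq. (7))."""
    dv = F.normalize(v_embeds[1:] - v_embeds[:-1], dim=-1)
    da = F.normalize(a_embeds[1:] - a_embeds[:-1], dim=-1)
    return (1.0 - (dv * da).sum(dim=-1)).mean()
```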
### Optical Flow Guidance
**Optical Flow Guidance** To achieve temporal consistency, we additionally provide optical flow-based guidance during the diffusion process (see Fig. 3). First, we obtain a bidirectional optical flow from the sampled frame \(i\) and the sampled frame \(i+1\) with RAFT [50], the pre-trained optical flow estimation network. Next, we warp each frame \(\hat{x}_{i}^{0}\), \(\hat{x}_{i+1}^{0}\) into the new warped frames \(\bar{x}_{i}^{0}\), \(\bar{x}_{i+1}^{0}\) with estimated optical flow. Given the source video of \(N\) frames during the diffusion process, we compute the mean squared loss between the frames as follows:
\[\mathcal{L}_{\text{flow}}=\frac{1}{N-1}\sum_{i=1}^{N-1}(\mathcal{L}_{2}(\hat{x }_{i}^{0},\bar{x}_{i}^{0})+\mathcal{L}_{2}(\hat{x}_{i+1}^{0},\bar{x}_{i+1}^{0 })). \tag{8}\]
This term enforces pixel-level consistency between the frames sampled from noisy versions of the source frames. We apply optical flow guidance to the global area rather than only the local area, because temporally coherent visual effects can be created only by considering the global context.
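A sketch of Eq. (8) under standard backward warping is given below; the flow estimator is treated as a black box that returns dense forward and backward flows, and the flow direction conventions are assumptions.

```python
import torch
import torch.nn.functional as F


def warp(frame, flow):
    """Backward-warp frame (B, C, H, W) with a dense flow field (B, 2, H, W)."""
    _, _, h, w = frame.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    base = torch.stack((xs, ys), dim=0).float().to(frame.device)   # (2, H, W), x first
    coords = base.unsqueeze(0) + flow                               # displaced pixel coordinates
    gx = 2.0 * coords[:, 0] / (w - 1) - 1.0                         # normalise to [-1, 1]
    gy = 2.0 * coords[:, 1] / (h - 1) - 1.0
    grid = torch.stack((gx, gy), dim=-1)                            # (B, H, W, 2)
    return F.grid_sample(frame, grid, align_corners=True)


def flow_guidance_loss(x_hat, flow_fwd, flow_bwd):
    """x_hat: (N, C, H, W) denoised frames; flow_fwd[i] / flow_bwd[i] are the
    flows between frames i and i+1. Each frame is compared with its warped
    neighbour as in Eq. (8)."""
    loss = 0.0
    for i in range(len(x_hat) - 1):
        loss = loss + F.mse_loss(x_hat[i:i + 1], warp(x_hat[i + 1:i + 2], flow_fwd[i:i + 1]))
        loss = loss + F.mse_loss(x_hat[i + 1:i + 2], warp(x_hat[i:i + 1], flow_bwd[i:i + 1]))
    return loss / (len(x_hat) - 1)
```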
**Background Reconstruction Loss** Background reconstruction is also an important issue in terms of producing seamless results. To maintain the background of source frames, we apply the background preserve guidance as follows:
\[\begin{split}\mathcal{L}_{\text{back}}=\frac{1}{N}\sum_{i=1}^{N} \mathcal{L}_{1}(\hat{x}_{i}^{0}\odot(1-m_{i}),x_{i}^{0}\odot(1-m_{i}))+\\ \mathcal{L}_{\text{LPIPS}}(\hat{x}_{i}^{0}\odot(1-m_{i}),x_{i}^{0 }\odot(1-m_{i})),\end{split} \tag{9}\]
where \(\mathcal{L}_{1}\) and \(\mathcal{L}_{\text{LPIPS}}\) denote the pixel-wise L1 loss and the perceptual loss [25], which captures fine-detailed features of the background of the source frame. \(\odot\) denotes pixel-wise multiplication. After sampling, we substitute the mask area with the matching area from the input frame, thereby maintaining the background.
**Total Guidance Loss** To obtain temporal consistency in the edited video, the total guidance loss is defined as
\[\begin{split}\mathcal{L}_{\text{total}}=\lambda_{\text{SG}} \mathcal{L}_{\text{SG}}+\lambda_{\text{DSG}}\mathcal{L}_{\text{DSG}}+\frac{T- t}{T}\lambda_{\text{flow}}\mathcal{L}_{\text{flow}}\\ +\lambda_{\text{back}}\mathcal{L}_{\text{back}},\end{split} \tag{10}\]
where \(\{\lambda_{\text{SG}},\lambda_{\text{DSG}},\lambda_{\text{flow}},\lambda_{\text{back}}\}\) is a set of hyperparameters that determine the weight of each term. In particular, we adjust \(\lambda_{\text{flow}}\) to control the trade-off between temporal
Figure 5: **Comparison with local video editing** Given the (a) source video, we visualize the edited video with _crackling fire_ sound and the (b) binary mask, which is annotated manually. We compare (d) our method with (c) Neural Layered Atlases (NLA) [29] (five frame intervals for each column). We warp the same localized style for the first frame.
Figure 6: **Comparison with the extension of image editing works** Given _Wave_ sound and manually annotated binary mask, we edit the (a) input video with other baselines, (b) Blended Diffusion [6], (c) VQGAN+CLIP [14], and (d) ours respectively (five frame intervals per column).
consistency and dynamic motions. We gradually increase the effective weight of \(\mathcal{L}_{\text{flow}}\) as the time step \(t\) decreases, because the sampled frames are almost unrecognizable when \(t\) is near \(T\). Then, we sample the noisy latent diffusion \(x_{i}^{t-1}\) at time step \(t-1\) as follows:
\[x_{i}^{t-1}\sim\mathcal{N}(\mu+\Sigma\nabla_{\tilde{x}_{i}^{0}}\mathcal{L}_{ \text{total}},\Sigma), \tag{11}\]
where \(\mu\) and \(\Sigma\) indicate \(\mu_{\theta}(x_{i}^{t},t)\) and \(\Sigma_{\theta}(x_{i}^{t},t)\) respectively.
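Putting the pieces together, one guided reverse step (Eq. (11)) can be sketched as follows; taking the gradient with respect to the denoised estimates \(\hat{x}^{0}\) and broadcasting \(\Sigma\) over pixels are assumptions for illustration.

```python
import torch


def guided_reverse_step(mu, sigma, total_loss, x0_hat):
    """mu, sigma: mean and variance tensors predicted by the diffusion model
    for x^{t-1}; total_loss: L_total evaluated on the denoised estimates
    x0_hat. The mean is shifted by sigma * grad, as written in Eq. (11)."""
    grad = torch.autograd.grad(total_loss, x0_hat)[0]
    return mu + sigma * grad + torch.sqrt(sigma) * torch.randn_like(mu)
```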
## 4 Result
In this section, we discuss the details of our implementation and experiments.
### Implementation Details
**Evaluation Metrics** We quantitatively compare our method to baselines on the DAVIS [41] dataset, which provides high-resolution videos and ground-truth binary masks manually annotated per frame. For fairness, we resize the videos to 256\(\times\)256 resolution and sample 10 frames. To evaluate the temporal consistency of the video editing results, we adopt the conventional temporal metric, T-diff [12], which calculates the pixel-wise difference of warped frames. For evaluating video quality, we adopt three widely accepted quantitative metrics: Frechet Video Distance (FVD) [52], Frechet Image Distance (FID) [20], and Inception Score (IS) [46]. In particular, we measure the average of activations across frames because FID and IS are image-based metrics. Then, we leverage the Inception3D [10] network pre-trained with the Kinetics-400 dataset [31] for FVD. To measure semantic consistency, we use the CLIP score [19], where a higher score indicates a stronger semantic alignment between the edited video and the given semantics.
**Hyperparameter Setting** For the optical flow estimation network, we use the pre-trained RAFT [50] with a maximum long edge of 900. The guidance weights \(\lambda_{\text{SG}},\lambda_{\text{DSG}},\lambda_{\text{flow}},\lambda_{\text{back}}\) are set to 1000, 1000, 500, and 1000, respectively. When \(\lambda_{\text{SG}}\) and \(\lambda_{\text{DSG}}\) are set too low, our model fails to match the visual changes with the sound. Additionally, with too low \(\lambda_{\text{flow}}\) and \(\lambda_{\text{back}}\), our method cannot produce seamless results. The total time step \(T\) trades off against the sampling cost: enlarging the total time step can lead to more realistic results. We set the total time step \(T\) to 100 because we empirically confirm that the improvement in video editing performance saturates when the total time step is larger than 100 (see supplementary document).
### Qualitative Analysis
We qualitatively demonstrate the effectiveness of sound in video editing, specifically for semantic transition and volume control. Then, we compare our method to several prominent video editing methods. To the best of our knowledge, we propose the first work that performs local natural video editing with given natural sound inputs. Since there are no perfectly identical settings, we compare two types of baseline: local video editing and image editing extension. We emphasize that our comparison experiments are conducted solely based on sound. Additionally, we provide the video in supplemental materials.
#### 4.2.1 Effectiveness of Sound in Video Editing
**Semantic Transition using Sound** Since sound encodes semantic changes in acoustic features over time, we can produce an edited video that follows the corresponding sound variation. As Fig. 4 (a) shows, our scheme can guide diffusion models in this challenging setting, sound transition. Changing from the _Crackling fire_ sound to the _Explosion_ sound, we observe that our framework can capture the semantic transition solely based on the acoustic features of the input sound, and the semantics of the video remain consistent with the sound. This is because Eq. 7 is capable of providing guidance regarding the variation of the visual latent representation without any latent interpolation.
**Visual Changes with Sound Volume** Sound has one of the
Figure 8: **Sound-guided video global editing** Our model can edit global areas in the video. Given source image, we change the appearance of the video according to to _underwater bubbling_ sound. We apply widely used image cloning technique [16] for natural blending.
Figure 7: **Context adaptive video editing** The first row is the source video, the second row is the mask, and the third row is the edited results of ours. Given the fire crackling sound, we add reasonable visual effects within the local area, considering the global context of the original video.
unique properties, volume, which is beneficial in deciding the magnitude of visual changes. By increasing the waveform scale, Fig. 4 (b) qualitatively shows that our framework is capable of producing visual effects consistent with the sound volume. Given the _Explosion_ sound, the style gradually becomes noticeable as the volume increases.
#### 4.2.2 Comparison with Baselines
**Localized Video Editing** We compare our method with an existing local video editing method, Neural Layered Atlases (NLA) [29]. As shown in Fig. 5, our framework produces more realistic and natural results compared to NLA. The added visual effect in NLA is static and cannot move, resulting in unnaturally edited videos. Furthermore, the quality of NLA-supported video editing is highly dependent on the quality of video decomposition. In contrast, our approach allows for more flexibility and control over the edited video, as we are able to stylize desired parts of the video with audio input while maintaining temporal consistency. For fairness, we fix the resolution of the frames at \(256\times 256\) and set the number of frames to ten. The same visual style guided by sound is set in the first frame of NLA.
**Leveraging Image Generative Models for Video Editing** Our framework can be applied to a similar setting in that we leverage the pre-trained image generative models. Because those works aim to perform text-guided image editing, we use sound latent representation rather than text latent representation for fairness. As Fig. 6 shows, our method is more temporally consistent than blended diffusion and VQGAN\(+\)CLIP [14]. The visual changes produced by blended diffusion fail to maintain geometric properties between frames. Since VQGAN\(+\)CLIP regularizes the latent vector and uses a discrete latent space for image generation, each frame shares similar fine details of style. However, the motion between frames is not related to the meaning of the text or audio. On the other hand, our framework delivers explicit pixel-level motion, while visual style changes according to sound transition or volume.
#### 4.2.3 Context Adaptive Video Editing
We achieve robust editing performance within the local area by understanding the global context of the video. We let the global context affect the style in the local area. Given _fire crackling_ sound, our results show that the visual effect of the fire is swept away by the waves (see Fig. 7). We consider the entire context of the video, not just the edited area, so we can get a more natural video editing result.
#### 4.2.4 Sound Guided Video Global Editing
We support sound-guided video global editing, where the entire frame is regarded as the foreground. After sampling, we blend the sampled frames and the source video using the seamless image cloning technique [16] while preserving the texture and lighting of the target video. Our framework can produce the video of the object slowly sinking in water according to _underwater bubbling_ sound (see Fig. 8).
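For readers unfamiliar with seamless (Poisson) image cloning, the sketch below shows how a single edited frame could be blended back into its source frame with OpenCV. It is only a minimal stand-in for the blending step described above; the helper name `blend_frame` and the mask handling are illustrative assumptions, not the authors' implementation.

```
import cv2
import numpy as np

def blend_frame(edited, source, mask):
    """Poisson-blend an edited frame into the source frame inside `mask`.

    edited, source: uint8 BGR images of the same size.
    mask: uint8 single-channel mask (255 inside the edited region).
    """
    ys, xs = np.nonzero(mask)
    if len(xs) == 0:
        return source
    center = (int(xs.mean()), int(ys.mean()))   # seamlessClone expects an (x, y) center
    # For global editing the mask may cover most of the frame; eroding masks that
    # touch the image border avoids OpenCV region-of-interest errors.
    return cv2.seamlessClone(edited, source, mask, center, cv2.NORMAL_CLONE)

# Usage: blend every sampled frame back into the corresponding source frame.
# frames_edited, frames_src: lists of HxWx3 uint8 arrays; masks: list of HxW uint8 arrays.
# blended = [blend_frame(e, s, m) for e, s, m in zip(frames_edited, frames_src, masks)]
```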
### Quantitative Evaluation
We quantitatively compare our method with three baselines: Neural Layered Atlases (NLA) [29], Blended Diffusion [6], and VQGAN\(+\)CLIP [14] on the DAVIS [41] dataset in a zero-shot setting. We use ten types of audio samples, including _fire crackling, explosion, thunderstorm, raining, underwater bubbling_, etc (see Table 1).
**Temporal Consistency** We observe that optical flow guidance leads to higher temporal consistency than other baselines, except NLA. This is because the NLA applies geometric transformations of the visual style without any changes in fine-grained details.
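For intuition, the following sketch estimates a warped-frame difference between two adjacent frames using OpenCV's Farnebäck optical flow. It only approximates the spirit of the T-diff metric [12] reported in Table 1; the exact flow estimator and normalization used in that metric may differ.

```
import cv2
import numpy as np

def warped_frame_difference(frame_t, frame_t1):
    """Mean absolute difference between frame t and frame t+1 warped back to t."""
    g_t = cv2.cvtColor(frame_t, cv2.COLOR_BGR2GRAY)
    g_t1 = cv2.cvtColor(frame_t1, cv2.COLOR_BGR2GRAY)
    flow = cv2.calcOpticalFlowFarneback(g_t, g_t1, None, 0.5, 3, 15, 3, 5, 1.2, 0)
    h, w = g_t.shape
    grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
    map_x = (grid_x + flow[..., 0]).astype(np.float32)
    map_y = (grid_y + flow[..., 1]).astype(np.float32)
    warped = cv2.remap(frame_t1, map_x, map_y, cv2.INTER_LINEAR)   # estimate of frame t
    return float(np.mean(np.abs(warped.astype(np.float32) - frame_t.astype(np.float32))))

# Average over all adjacent pairs of an edited clip (frames: list of HxWx3 uint8 arrays).
# t_diff = np.mean([warped_frame_difference(a, b) for a, b in zip(frames[:-1], frames[1:])])
```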
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline \multirow{2}{*}{Model} & Temporal Consistency & \multicolumn{3}{c}{Video Quality Metric} & \multicolumn{1}{c}{Semantic Consistency} \\ \cline{2-6} & T-diff (\(\downarrow\)) & FVD (\(\downarrow\)) & FID (\(\downarrow\)) & IS (\(\uparrow\)) & CLIP score (\(\uparrow\)) \\ \hline Neural Layered Atlases (NLA) [29] & **2.2057** & 3649.4945 & 379.3251 & 5.8814 & 0.5592 \\ \hline Blended Diffusion [6] & 35.5234 & 3127.8477 & 391.1936 & 5.050 & 0.6035 \\ VQGAN + CLIP [14] & 43.8372 & 4939.3658 & 351.4702 & 4.3484 & 0.5308 \\ \hline Ours w/o \(\mathcal{L}_{\text{flow}}\) & 7.6357 & 3065.4342 & 349.7132 & 6.3487 & 0.6080 \\ Ours & 5.9172 & **2718.2134** & **332.2388** & **7.1999** & **0.6138** \\ \hline \hline \end{tabular}
\end{table}
Table 1: Quantitative comparison between ours and alternatives on the DAVIS [41] dataset. We report T-diff [12] for temporal consistency and FVD [52], FID [20], IS [46] for video quality, and CLIP score for semantic consistency. The best score and second-best score are shown in **bold** and underlined.
Figure 9: **User study between ours and baselines**
**Video Quality** Although Blended Diffusion and VQGAN\(+\)CLIP generate high-quality frames individually, our method obtains higher-quality video than the baselines. Furthermore, we find that our method with \(\mathcal{L}_{\text{flow}}\) produces better results than the method without it, indicating that \(\mathcal{L}_{\text{flow}}\) is effective in ensuring video quality. This is because image generative models sample each frame independently, whereas ours regulates the variation of styles with optical flow-based guidance.
**CLIP score** We measure the cosine similarity score with CLIP ViT-B/32 [42] between the edited video and the text labels. The CLIP image encoder extracts a 512-dimensional embedding for each frame, and these frame embeddings are averaged into a video embedding. In addition, a 512-dimensional text embedding is extracted from the CLIP text encoder. Our method achieves the best semantic consistency among all baselines, which means that the edited videos are more visually correlated with the sound.
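To make the metric concrete, a minimal Python sketch of this CLIP-score computation is given below using the Hugging Face `transformers` CLIP implementation. Whether the frame embeddings are normalized before or after averaging is not specified above and is an assumption here, as is the exact checkpoint name.

```
import torch
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

@torch.no_grad()
def clip_score(frames, text):
    """Cosine similarity between the mean CLIP frame embedding and a text label.

    frames: list of PIL.Image frames from the edited video; text: e.g. "explosion".
    """
    inputs = processor(text=[text], images=frames, return_tensors="pt", padding=True)
    img = model.get_image_features(pixel_values=inputs["pixel_values"])        # (T, 512)
    txt = model.get_text_features(input_ids=inputs["input_ids"],
                                  attention_mask=inputs["attention_mask"])     # (1, 512)
    img = torch.nn.functional.normalize(img, dim=-1).mean(dim=0, keepdim=True)  # video embedding
    txt = torch.nn.functional.normalize(txt, dim=-1)
    return torch.nn.functional.cosine_similarity(img, txt).item()
```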
### User Study
We conduct a human evaluation study by assembling 100 participants from Amazon Mechanical Turk (AMT). Participants are asked to choose between the edited results of Neural Layered Atlases (NLA) [29], Blended Diffusion [6], VQGAN\(+\)CLIP [14], and ours across 20 questions. We investigate two factors: Semantic Consistency (i) _Which video editing results are more consistent with the target attribute?_ and Realness (ii) _Please evaluate how realistic the video is._ We evaluate perceptual realism on a Likert scale from 1 (not realistic) to 5 (highly realistic). Fig. 9 shows that Soundini outperforms the other methods regarding semantic consistency and realness.
### Ablation Study
Fig. 10 shows the effect of optical flow guidance on temporal consistency. We compare our method with a variant in which the loss \(\mathcal{L}_{\text{flow}}\) in Eq. 8 is removed and a variant in which the scheduling term \(1-\frac{t}{T}\) in Eq. 11 is removed. In addition, we visualize the denoised frame of each time step between adjacent frames. We observe that the use of optical flow-based guidance results in higher temporal consistency compared to the version without it. In contrast, independent sampling, which ignores the relationship between adjacent frames, fails to maintain temporal consistency. This demonstrates the effectiveness of the proposed optical flow-based guidance in improving the temporal consistency of our method.
## 5 Discussion
**Limitation** Soundini successfully drives changes in visual style to be consistent with changes in sound. However, the performance of our framework degrades when the mask shape is not consistent with the content of the sound within the mask area. This is because our framework deals mainly with the pixel-wise relationship between the two adjacent frames (see Fig. 11).
**Societal Impact** Soundini has the potential to raise ethical concerns about the generation of immoral content. As the user has complete control over the video editing and can apply styles that may contain inappropriate visual content, it is important to consider the social impact of such applications. In addition, inappropriate use of editing to create fake videos could have significant consequences for public figures, leading to the erosion of trust in the media.
## 6 Conclusion
Given sound and localization masks, we present a novel method to add realistic visual effects to videos using local sound guidance for diffusion. By leveraging the audio-visual embedding space, our framework performs realistic video editing with various sound sources. Moreover, the optical flow-guided diffusion sampling mechanism helps achieve temporal consistency and produce context-adaptive results. We believe that our framework is the first work to address sound-guided video editing.
Figure 11: **Limitation of Soundini** Given bird song sound and heart-shape mask, Soundini fails to obtain natural videos associated with the beating of the wings of a bird.
Figure 10: **Ablation study of optical flow guidance** The first row shows the sampling results without \(\mathcal{L}_{\text{flow}}\). The second row shows the sampling results without adjusting \(\mathcal{L}_{\text{flow}}\). The third row shows the sampling results with increasing \(\mathcal{L}_{\text{flow}}\) after applying optical flow-based guidance. |
2310.13434 | Random Matrix Analysis to Balance between Supervised and Unsupervised
Learning under the Low Density Separation Assumption | We propose a theoretical framework to analyze semi-supervised classification
under the low density separation assumption in a high-dimensional regime. In
particular, we introduce QLDS, a linear classification model, where the low
density separation assumption is implemented via quadratic margin maximization.
The algorithm has an explicit solution with rich theoretical properties, and we
show that particular cases of our algorithm are the least-square support vector
machine in the supervised case, the spectral clustering in the fully
unsupervised regime, and a class of semi-supervised graph-based approaches. As
such, QLDS establishes a smooth bridge between these supervised and
unsupervised learning methods. Using recent advances in the random matrix
theory, we formally derive a theoretical evaluation of the classification error
in the asymptotic regime. As an application, we derive a hyperparameter
selection policy that finds the best balance between the supervised and the
unsupervised terms of our learning criterion. Finally, we provide extensive
illustrations of our framework, as well as an experimental study on several
benchmarks to demonstrate that QLDS, while being computationally more
efficient, improves over cross-validation for hyperparameter selection,
indicating a high promise of the usage of random matrix theory for
semi-supervised model selection. | Vasilii Feofanov, Malik Tiomoko, Aladin Virmaux | 2023-10-20T11:46:12Z | http://arxiv.org/abs/2310.13434v1 | # Random Matrix Analysis to Balance between Supervised and Unsupervised Learning under the Low Density Separation Assumption
###### Abstract
We propose a theoretical framework to analyze semi-supervised classification under the low density separation assumption in a high-dimensional regime. In particular, we introduce QLDS, a linear classification model, where the low density separation assumption is implemented via quadratic margin maximization. The algorithm has an explicit solution with rich theoretical properties, and we show that particular cases of our algorithm are the least-square support vector machine in the supervised case, the spectral clustering in the fully unsupervised regime, and a class of semi-supervised graph-based approaches. As such, QLDS establishes a smooth bridge between these supervised and unsupervised learning methods. Using recent advances in random matrix theory, we formally derive a theoretical evaluation of the classification error in the asymptotic regime. As an application, we derive a hyperparameter selection policy that finds the best balance between the supervised and the unsupervised terms of our learning criterion. Finally, we provide extensive illustrations of our framework, as well as an experimental study on several benchmarks to demonstrate that QLDS, while being computationally more efficient, improves over cross-validation for hyperparameter selection, indicating a high promise of the usage of random matrix theory for semi-supervised model selection.
Machine Learning, ICML
## 1 Introduction
Semi-supervised learning (SSL, Chapelle et al., 2010; van Engelen and Hoos, 2019) aims to learn using both labeled and unlabeled data at once. This machine learning approach received a lot of attention over the past decade due to its relevance to many real-world applications, where the annotation of data is costly and performed manually (Imran et al., 2020), while the data acquisition is cheap and may result in an abundance of unlabeled data (Fergus et al., 2009). As such, semi-supervised learning could be seen as a learning framework that lies in between the supervised and the unsupervised settings, where the former occurs when all the data is labeled, and the latter is restored when only unlabeled data is available. Generally, a semi-supervised algorithm is expected to outperform its supervised counterpart trained only on labeled data by efficiently extracting the information valuable to the prediction task from unlabeled examples.
In practice, integration of unlabeled observations into the learning process does not always affect the performance (Singh et al., 2008), since, for them to be useful, the marginal data distribution \(p(\mathbf{x})\) must contain information on the prediction task \(p(y|\mathbf{x})\). Consequently, most semi-supervised approaches rely on specific assumptions about how \(p(\mathbf{x})\) and \(p(y|\mathbf{x})\) are linked with each other. It is principally assumed that examples _similar_ to each other tend to share the same class labels (van Engelen and Hoos, 2019), and implementation of this assumption results in different families of semi-supervised learning models. The first approaches aim to capture the intrinsic geometry of the data using a graph Laplacian (Chong et al., 2020; Song et al., 2022) and suppose that high-dimensional data points with the same label lie on the same low-dimensional _manifold_ (Belkin and Niyogi, 2004). Another family of semi-supervised algorithms suggests that examples from a dense region belong to the same class. While some methods explicitly look for such regions by relying on a clustering algorithm (Rigollet, 2007; Peikari et al., 2018), another idea is to directly restrict the classification model to have a decision boundary that only passes through low density regions. This latter approach is said to rely on the _Low Density Separation_ (LDS) assumption (Chapelle and Zien, 2005; van Engelen and Hoos, 2019), and it has been widely used in practice in recent decades, combined with the support vector machine (Bennett and Demiriz, 1998; Joachims, 1999), ensemble methods (d'Alche-Buc et al., 2001; Feofanov et al., 2019) and deep learning methods (Sajjadi et al., 2016; Berthelot et al., 2019). |
2306.04876 | Comprehensive Stepwise Selection for Logistic Regression | Automated variable selection is widely applied in statistical model
development. Algorithms like forward, backward or stepwise selection are
available in statistical software packages like R and SAS. Many researchers
have criticized the use of these algorithms because the models resulting from
automated selection algorithms are not based on theory and tend to be unstable.
Furthermore, simulation studies have shown that they often select incorrect
variables due to random effects which makes these model building strategies
unreliable. In this article, a comprehensive stepwise selection algorithm
tailored to logistic regression is proposed. It uses multiple criteria in
variable selection instead of relying on one single measure only, like a
$p$-value or Akaike's information criterion, which ensures robustness and
soundness of the final outcome. The result of the selection process might not
be unambiguous. It might select multiple models that could be considered as
statistically equivalent. A simulation study demonstrates the superiority of
the proposed variable selection method over available alternatives. | Bernd Engelmann | 2023-06-08T02:13:02Z | http://arxiv.org/abs/2306.04876v2 | # Comprehensive Stepwise Selection for Logistic Regression
###### Abstract
Automated variable selection is widely applied in statistical model development. Algorithms like forward, backward or stepwise selection are available in statistical software packages like R and SAS. Many researchers have criticized the use of these algorithms because the models resulting from automated selection algorithms are not based on theory and tend to be unstable. Furthermore, simulation studies have shown that they often select incorrect variables due to random effects which makes these model building strategies unreliable. In this article, a comprehensive stepwise selection algorithm tailored to logistic regression is proposed. It uses multiple criteria in variable selection instead of relying on one single measure only, like a \(p\)-value or Akaike's information criterion, which ensures robustness and soundness of the final outcome. The result of the selection process might not be unambiguous. It might select multiple models that could be considered as statistically equivalent. A simulation study demonstrates the superiority of the proposed variable selection method over available alternatives.
## 1 Introduction
Automated variable selection algorithms for regression models have been widely applied in various areas of research (Harrell Jr 2015, Heinze, Wallisch & Dunkler 2018). The most commonly used methods are forward, backward and stepwise variable selection. The idea of forward selection is to start with a constant model and add variables one-by-one. The criterion for adding variables could be the statistical significance of model coefficients, i.e., adding the variable with the lowest \(p\)-value, or an information measure like Akaike's Information Criterion (AIC) or the Bayesian Information Criterion (BIC). Backward selection works the other way round. Here, the starting point is a model containing all variables and the least important variables are removed one-by-one until a stopping criterion is reached. Stepwise selection is an extension of forward selection which is more sophisticated as it allows for the removal of variables in later selection steps.
The application of automated selection algorithms has been widely criticized. Smith (2018) demonstrated by means of a simulation study that stepwise selection algorithms have fundamental problems that cannot be attributed to lack of data but even occur with big data. Although Smith (2018) is relatively recent research, concerns about misleading outcomes of automated selection algorithms are not new and have been raised in multiple studies over the past decades (Austin & Tu 2004, Flom & Cassell 2007, Whittingham, Stephens, Bradbury & Freckleton 2006). Proposals to overcome the shortcomings of automated model selection have been made. One possibility is combining automated selection with cross validation to improve the control of potential model instabilities (Harrell Jr 2015, Heinze et al. 2018). A popular alternative is penalizing the size of model coefficients during model estimation, effectively reducing the number of variables included in a model. Depending on the shape of the penalization term, these methods are known as Ridge regression (Schaefer, Roi & Wolfe 1984, Le Cessie & Van Houwelingen 1992), the Lasso (Meier, Van De Geer & Bühlmann 2008), or the elastic net which is essentially a combination of both (Zou & Hastie 2005). These methods are useful in preventing overfitting. However, they cannot solve the fundamental problems outlined in Smith (2018) like the lack of theoretical consideration in the model building process. Furthermore, as will be demonstrated in this article, they only mitigate shortcomings of simple forward, backward and stepwise selection but do not eliminate them.
Despite the known problems with existing automated model selection algorithms, there are practical applications where a large number of regression models has to be estimated, which requires automation. An example could be the estimation of credit risk models for multiple countries and asset classes for investors in loan portfolios. Having an automated selection algorithm that is able to identify sensible, well-functioning models would be a valuable support for a data analyst facing this problem. In this article, an automated selection algorithm is proposed for logistic regression, one of the most popular statistical modeling techniques that is applied in many scientific areas. Contrary to the aforementioned selection algorithms, which are quite generic and could be applied to various families of regression models, the framework developed in this article will be tailored to logistic regression and cannot be easily transferred to other model classes, like linear or multinomial regression.
Logistic regression is a binary classification model. Its quality can be measured in two dimensions, discrimination and calibration. By discrimination, the ability of a logistic regression model to separate good from bad observations is measured. Popular measures for this purpose are the Accuracy Ratio (Engelmann, Hayden & Tasche 2003) and the area below the Receiver Operating Characteristic (ROC) curve (Swets 2014). The second dimension, calibration, refers to the accuracy of probability estimates for the bad event. Calibration could be measured by the mean squared error, in the context of logistic regression also known as Brier score (Brier 1950). The stepwise selection algorithm proposed in this article will heavily rely on these two notions. The aim is selecting variables that lead to an overall improvement of discriminative power and calibration. In addition, there will be controls for statistical significance of model coefficients, multi-collinearity, model overfitting, and theoretical soundness.
A key difference of the selection algorithm in this article and simple forward and backward selection is in the outcome. Forward and backward selection will by construction always return one model which is interpreted as the best model according to the criterion that is applied in the selection process. The selection algorithm in this article might deliver multiple solutions. In this case, these solutions could be considered as equivalent in a statistical sense, i.e., when tests on difference in either discrimination or calibration are applied, these tests are unable to distinguish between these models.
In the next section, the Comprehensive Stepwise Selection for Logistic Regression (CSSLR) algorithm is introduced and explained in detail. The main motivation of the algorithm is combining multiple criteria for evaluating the quality of a logistic regression model in a structured way to ensure robustness of the final outcome. Section 3 will illustrate the performance of the algorithm on simulated data. The final section concludes. In the appendix, it is briefly explained how to install and run an implementation of CSSLR in R.
## 2 The CSSLR Algorithm
The CSSLR algorithm is a stepwise selection algorithm that starts with a constant model and adds variables one-by-one in every step. It allows for the removal of variables in later
steps should they turn out to be irrelevant as the model grows. The algorithm stops when it is no longer possible to improve the set of selected models by adding more variables. On a high level, one selection step of the algorithm is described below.
```
Let \(\textbf{M}=(\mathbb{M}_{1},\ldots,\mathbb{M}_{n})\) be the models selected in the previous steps
Let \(\textbf{V}=(V_{1},\ldots,V_{m})\) be the set of variables contained in the data set
Part I: Identification of Improved Models
for \(i=1,\ldots,n\) do  \(\triangleright\) Loop over all previously selected models
    for \(V_{j},\ V_{j}\notin\mathbb{M}_{i}\) do  \(\triangleright\) Loop over all variables not contained in \(\mathbb{M}_{i}\)
        Estimate model \(\mathbb{M}_{c}\) containing the variables in \(\mathbb{M}_{i}\) and the new variable \(V_{j}\)
        if \(\mathbb{M}_{c}\) is an improved model then
            Trim model \(\mathbb{M}_{c}\) if possible and required
            Add model \(\mathbb{M}_{i}\) to the set of models to be deleted \(\textbf{ID}\)
            Add model \(\mathbb{M}_{c}\) to the set of improved models \(\textbf{I}\)
        else
            Discard model \(\mathbb{M}_{c}\)
        end if
    end for
end for
Part II: Identification of Equivalent Models
if \(\textbf{I}\) is empty then
    Stop selection algorithm and return the solution \(\textbf{M}\)
else
    Remove all models in \(\textbf{ID}\) from \(\textbf{M}\) and add the models in \(\textbf{I}\)
    Find the leading models in \(\textbf{M}\), \(\mathbb{M}_{1}\) and \(\mathbb{M}_{2}\)  \(\triangleright\) Leading model could be unique
    for \(\mathbb{M}\in\textbf{M}\) and \(\mathbb{M}\neq\mathbb{M}_{1},\mathbb{M}_{2}\) do
        if \(\mathbb{M}\) is equivalent to \(\mathbb{M}_{1}\) and \(\mathbb{M}_{2}\) then
            Keep \(\mathbb{M}\) in \(\textbf{M}\)
        else
            Remove \(\mathbb{M}\) from \(\textbf{M}\)
        end if
    end for
end if
```
**Algorithm 1** High-Level Algorithm of a Selection Step
Algorithm 1 essentially consists of two parts. In the first part variables are added one-by-one to already existing models. A set of models \(\textbf{I}\) is constructed which contains
all models that have shown an improvement over the existing models. In the second part of a selection step the improved models are compared among each other. Models that are inferior are discarded and only a smaller set of models is carried forward to the next selection step. All models in the smaller set are considered as equivalent.
Algorithm 1 is entirely descriptive. To understand how it works on a data set, the notion of improved model, trimmed model, leading model and equivalent model has to be defined in statistical terms. In all these steps, two models are compared and various statistical quantities are computed. From the outcome it can be decided if a model is improved compared to a second model, should be trimmed, is leading among a set of models, or is equivalent to another model.
To introduce some notation, let \(I\) be an indicator variable which is 0 when an observation in a data set is good and 1 when it is bad. Suppose, a logistic regression model \(\mathbb{M}\in\mbox{\sf M}\) contains the variables \(V_{1},\ldots,V_{m}\). The model equation is
\[-\log\left(\frac{1-P\left(I=1|V_{1},\ldots,V_{m}\right)}{P\left(I=1|V_{1}, \ldots,V_{m}\right)}\right)=\beta_{0}+\sum_{i=1}^{m}\beta_{i}\cdot V_{i}. \tag{1}\]
In the next selection step, a candidate variable \(V_{c}\) with \(V_{c}\notin(V_{1},\ldots,V_{m})\) is added to model \(\mathbb{M}\) resulting in model \(\mathbb{M}_{c}\)
\[-\log\left(\frac{1-P\left(I=1|V_{1},\ldots,V_{m},V_{c}\right)}{P\left(I=1|V_{1 },\ldots,V_{m},V_{c}\right)}\right)=\tilde{\beta}_{0}+\sum_{i=1}^{m}\tilde{ \beta}_{i}\cdot V_{i}+\tilde{\beta}_{c}\cdot V_{c}. \tag{2}\]
To evaluate whether model \(\mathbb{M}_{c}\) is an improvement over model \(\mathbb{M}\), model \(\mathbb{M}_{c}\) has to fulfill some minimum requirements like the statistical significance of \(\tilde{\beta}_{c}\). In addition, it has to be better than model \(\mathbb{M}\). Therefore, performance measures have to be analyzed which allow to decide whether \(\mathbb{M}_{c}\) is an improvement over \(\mathbb{M}\). This is the step that is tailored to logistic regression.
As outlined above, the quality of a logistic regression model can be measured in terms of discrimination and calibration. Discrimination can be measured by the area under the ROC curve (AUC). An overview of different approaches for its calculation can be found in Faraggi & Reiser (2002). A requirement for model improvement should be \(AUC\left(\mathbb{M}_{c}\right)>AUC(\mathbb{M})\). To make sure that this effect is not just due to data noise, a statistical test on the difference of two models' AUC should be applied (DeLong, DeLong & Clarke-Pearson 1988). As a decision criterion, one could define a critical \(p\)-value for the AUC-test, \(p_{AUC,I}\), and require that the \(p\)-value of the test comparing \(AUC\left(\mathbb{M}_{c}\right)\) with
\(AUC(\mathbb{M})\) is less than \(p_{AUC,I}\).
As a measure for calibration, the mean squared error is widely used. It is computed from estimated probabilities \(\pi=P(I=1)\) for being bad and the realization of the binary variable \(I\):
\[MSE=\frac{1}{N}\sum_{i=1}^{N}\left(\pi_{i}-I_{i}\right)^{2}, \tag{3}\]
where \(N\) is the sample size of a data set. For an improvement in calibration, the requirement is \(MSE(\mathbb{M}_{c})<MSE(\mathbb{M})\). To make this decision statistically sound, two tests should be applied. The first test, Spiegelhalter (1986), checks whether each model individually is well calibrated. Here, the null hypothesis is that MSE is equal to its expected value \(E\left[MSE\right]\) and it should not be possible to reject it. Therefore, one defines a critical \(p\)-value for this test \(p_{calib}\) and requires that this test's \(p\)-value is greater than \(p_{calib}\). Only if both models \(\mathbb{M}\) and \(\mathbb{M}_{c}\) pass the test of Spiegelhalter (1986), a test for comparing \(MSE(\mathbb{M}_{c})\) and \(MSE(\mathbb{M})\) can be performed (Redelmeier, Bloch & Hickam 1991). Analogously to the \(AUC\) test, a critical \(p\)-value \(p_{MSE,I}\) is defined, and to ensure that \(MSE(\mathbb{M}_{c})\) is below \(MSE(\mathbb{M})\) with statistical significance, the \(p\)-value of the Redelmeier et al. (1991) test has to be below \(p_{MSE,I}\).
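The Spiegelhalter (1986) statistic has a simple closed form, illustrated below in Python together with a paired test on per-observation squared errors. The latter is only a rough stand-in for the Redelmeier et al. (1991) comparison, whose exact construction differs; both helper names are illustrative.

```
import numpy as np
from scipy import stats

def spiegelhalter_test(y, p):
    """Spiegelhalter (1986) z-test of calibration; H0: the model is well calibrated."""
    num = np.sum((y - p) * (1.0 - 2.0 * p))
    var = np.sum((1.0 - 2.0 * p) ** 2 * p * (1.0 - p))
    z = num / np.sqrt(var)
    return z, 2.0 * stats.norm.sf(abs(z))            # two-sided p-value

def mse_difference_test(y, p_new, p_old):
    """Paired t-test on per-observation squared errors, a simple stand-in for the
    Redelmeier et al. (1991) comparison of Brier scores used by CSSLR."""
    d = (p_new - y) ** 2 - (p_old - y) ** 2
    t, pval = stats.ttest_1samp(d, 0.0)
    return d.mean(), pval
```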
Finally, to control for overfitting, the Akaike information criterion (AIC) and variance inflation factors (VIF) could be used to control for the number of variables included and for multi-collinearity, respectively. A variable is added to a previously selected model only if the resulting AIC value is reduced and if all variance inflation factors are below a threshold \(v_{crit}\) that has to be defined by the user.
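For illustration, these two overfitting controls could be checked with statsmodels as sketched below; the helper `overfitting_checks` and its interface are assumptions for demonstration and are not part of the CSSLR package.

```
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

def overfitting_checks(y, X_old, X_new, v_crit=5.0):
    """Check that adding a variable lowers the AIC and keeps all VIFs below v_crit.

    X_old / X_new: design matrices without intercept; X_new has one extra column.
    """
    aic_old = sm.Logit(y, sm.add_constant(X_old)).fit(disp=0).aic
    aic_new = sm.Logit(y, sm.add_constant(X_new)).fit(disp=0).aic
    exog = sm.add_constant(X_new)
    vifs = [variance_inflation_factor(exog, i) for i in range(1, exog.shape[1])]
    return (aic_new < aic_old) and (max(vifs) < v_crit)
```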
The tests described above are used to define the notion of an improved model. The criteria to be fulfilled are listed in Table 1 below.
\begin{table}
\begin{tabular}{l l} \hline Description & Quantification \\ \hline \(\tilde{\beta}_{c}\) statistically significant & \(p\)-value of likelihood ratio test \(<p_{lr,I}\) \\ \(\tilde{\beta}_{c}\) within theoretical expectation & Sign of \(\tilde{\beta}_{c}\) matches expectation of statistician \\ No multi-collinearity in \(\mathbb{M}_{c}\) & Variance inflation factors \(<v_{crit}\) \\ \(\mathbb{M}_{c}\) is well calibrated & \(p\)-value of Spiegelhalter (1986) test \(>p_{calib}\) \\ \(\mathbb{M}_{c}\) does not show overfitting & Akaike information criterion: \(\text{AIC}(\mathbb{M}_{c})<\text{AIC}(\mathbb{M})\) \\ \(\mathbb{M}_{c}\) discriminates better than \(\mathbb{M}\) & \(p\)-value of DeLong et al. (1988) test \(<p_{AUC,I}\) \\ \(\mathbb{M}_{c}\) is better calibrated than \(\mathbb{M}\) & \(p\)-value of Redelmeier et al. (1991) test \(<p_{MSE,I}\) \\ \hline \end{tabular}
\end{table}
Table 1: List of criteria the model \(\mathbb{M}_{c}\) has to fulfill to be considered as improved over \(\mathbb{M}\)
To make this part of the CSSLR algorithm applicable, a statistician has to define a table with expected signs of model coefficients. To give an example, when building a model for the creditworthiness of corporations, an analyst would expect that high profitability improves the creditworthiness (negative sign) and high debt reduces creditworthiness (positive sign). The expected impact of a variable on \(P(I=1)\) should be clarified before the start of model building and verified whenever a new variable is added to a model. In situations where no expectation on the sign of a variable's coefficient could be formed, the sign check will be omitted. Besides that, values for the parameters \(p_{lr,I}\), \(p_{calib}\), \(p_{AUC,I}\), \(p_{MSE,I}\), and \(v_{crit}\) have to be defined. Some fine-tuning of these parameters during a model selection process might be required to ensure that the algorithm does not select a too large number of models. This depends mostly on the data set and the number of bad observations.
Some of the criteria in Table 1 are debatable. While most statisticians should agree on the first five criteria, some might prefer a weaker notion of improved. One could consider a model as improved if it either shows a significantly higher \(AUC\) or a significantly lower \(MSE\) and is not significantly weaker in the other measure. This would allow the selection of a wider range of models. This consideration illustrates that the CSSLR algorithm offers some flexibility and might be more difficult to parameterize compared to the simple forward and backward selection algorithms. However, its big advantage is that selected models will fulfill a much broader range of quality criteria.
Once a model \(\mathbb{M}_{c}\) is identified as an improved model, it should be validated to ensure that the variables that have been included before \(V_{c}\) still show the desired behavior. If one or more variables no longer show a positive contribution to model performance, one might consider removing them from the model, i.e., trimming model \(\mathbb{M}_{c}\). For this purpose an incremental analysis will be performed as outlined in Algorithm 2.
```
for \(i=1,\ldots,m\) do  \(\triangleright\) Loop over all variables previously included in \(\mathbb{M}_{c}\)
    Check sign \(sgn\) of \(\tilde{\beta}_{i}\) and value of \(p_{lr}\)
    Remove \(V_{i}\) from \(\mathbb{M}_{c}\), run the difference tests to compute \(p_{AUC}\) and \(p_{MSE}\)
    if \(sgn\) is wrong then
        Remove \(V_{i}\) from \(\mathbb{M}_{c}\)
    else if \(p_{lr}>p_{lr,T}\) AND \(p_{AUC}>p_{AUC,T}\) AND \(p_{MSE}>p_{MSE,T}\) then
        Remove \(V_{i}\) from \(\mathbb{M}_{c}\)
    end if
    if \(V_{i}\) is removed then
        Stop the for-loop
        Rerun for-loop on the reduced model until no more trimming is needed
    end if
end for
```
**Algorithm 2** Incremental Analysis to Trim a Model
The idea of Algorithm 2 is to provide an additional validation of model \(\mathbb{M}_{c}\) before it is accepted as an improved model. Most importantly, it has to be ensured that the signs of model coefficients are still within expectations after including variable \(V_{c}\). Furthermore, each variable \(V_{i}\) should still have some positive contribution to the model, either by having a significant model coefficient, improving \(AUC\), or improving \(MSE\). Only if no positive contribution of \(V_{i}\) to model \(\mathbb{M}_{c}\) is visible, it should be removed. This process is controlled by three additional parameters \(p_{lr,T}\), \(p_{AUC,T}\), and \(p_{MSE,T}\) that have to be defined when running the CSSLR algorithm.
After the search for improved models and their trimming, the outcome is not necessarily unique but there might be a set of candidate models. In the second part of the CSSLR algorithm, the candidate models in this set are compared in order to identifying a smaller subset of models that is superior in statistical terms. Only the smaller subset of superior models is kept and used as input in the next selection step. A starting point in this comparison is identifying the leading models. This is done by analyzing the two key dimensions of logistic regression models, discrimination and calibration. If there is a single model dominating in both dimensions, the leading model can be unique. If this is not the case, the number of leading models is two. The identification of leading models is described in Algorithm 3.
```
Determine model \(\mathbb{M}_{1}\) with the largest \(AUC\) value
Determine model \(\mathbb{M}_{2}\) with the smallest \(MSE\) value
if \(\mathbb{M}_{1}=\mathbb{M}_{2}\) then
    Leading model is unique
else
    Run the AUC and MSE difference tests and compute \(p_{AUC}\) and \(p_{MSE}\)
    if \(p_{AUC}<p_{AUC,E}\) AND \(p_{MSE}>p_{MSE,E}\) then
        Leading model is \(\mathbb{M}_{1}\)
    else if \(p_{AUC}>p_{AUC,E}\) AND \(p_{MSE}<p_{MSE,E}\) then
        Leading model is \(\mathbb{M}_{2}\)
    else
        Leading models are \(\mathbb{M}_{1}\) and \(\mathbb{M}_{2}\)
    end if
end if
```
**Algorithm 3** Determination of Leading Models
When comparing models \(\mathbb{M}_{1}\) and \(\mathbb{M}_{2}\) in Algorithm 3, the tests on difference in \(AUC\) and \(MSE\) are run. When there is a statistically significant difference in \(AUC\) but not in \(MSE\), model \(\mathbb{M}_{1}\) is considered as superior and defined as the leading model. If it is the other way round, model \(\mathbb{M}_{2}\) is dominating model \(\mathbb{M}_{1}\). If both tests or none of the tests results in statistical significant outcomes, both models are considered as statistically equivalent and both models are kept in the list of candidate models for the next selection step. To decide on the statistical equivalence of two models, critical \(p\)-values \(p_{AUC,E}\) and \(p_{MSE,E}\) have to be defined before running the CSSLR algorithm.
When determining the leading models in Algorithm 3, the notion of equivalent models was introduced. These are models that cannot be rank-ordered in terms of discrimination and calibration, either because they are indistinguishable in both dimensions, or because one model has the higher discriminative power and the other the lower calibration error. This explains the final part of a selection step in Algorithm 1, where a model is compared with the leading models and kept if it is equivalent and discarded otherwise.
The stepwise selection is starting from a constant model. It adds variables one-by-one until either including additional variables does not lead to further improvements of selected models or a pre-defined maximum of selection steps is reached. In the remainder of this article, the performance of the CSSLR algorithm will be illustrated.
## 3 Performance of CSSLR
The CSSLR algorithm is evaluated on multiple data sets generated by simulation. The starting point is a vector containing the good/bad indicator variables \(I\). It contains \(K\) good events coded by "0" and \(K\) bad events represented by "1". On this data set, strong, weak and non-discriminating variables are created. The conditional distributions of strong variables \(S_{i}\), weak variables \(W_{i}\) and non-discriminating variables \(R_{i}\) are given as
\[S_{i}(I=0) \sim N\left(\mu=\mu_{1},\sigma=1\right)\] \[S_{i}(I=1) \sim N\left(\mu=-\mu_{1},\sigma=1\right)\] \[W_{i}(I=0) \sim N\left(\mu=\mu_{2},\sigma=1\right)\] \[W_{i}(I=1) \sim N\left(\mu=-\mu_{2},\sigma=1\right)\] \[R_{i}(I=0) \sim N\left(\mu=0,\sigma=1\right)\]
\[R_{i}(I=1)\sim N\left(\mu=0,\sigma=1\right)\]
where \(N(\mu,\sigma)\) denotes a normal distribution with expectation \(\mu\) and standard deviation \(\sigma\). Different values of \(\mu_{1}>\mu_{2}\) will be chosen to evaluate the performance of the selection algorithm. The generation of data is done with \(K=500\) and the number of random data sets generated for model selection is 1000 in each simulation run.
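A minimal Python sketch of this data-generating process is given below for reference; the function name and its defaults mirror the description above but are otherwise illustrative (the article's own simulations were run in R).

```
import numpy as np

def simulate_dataset(K=500, mu_strong=1.0, mu_weak=0.5,
                     n_strong=3, n_weak=3, n_noise=14, seed=0):
    """Generate one simulated data set: K good (I=0) and K bad (I=1) observations
    with strong, weak and non-discriminating normal variables."""
    rng = np.random.default_rng(seed)
    I = np.repeat([0, 1], K)
    sign = np.where(I == 0, 1.0, -1.0)          # good events shifted to +mu, bad to -mu
    cols = []
    for _ in range(n_strong):
        cols.append(rng.normal(sign * mu_strong, 1.0))
    for _ in range(n_weak):
        cols.append(rng.normal(sign * mu_weak, 1.0))
    for _ in range(n_noise):
        cols.append(rng.normal(0.0, 1.0, size=2 * K))
    return I, np.column_stack(cols)

I, X = simulate_dataset()
print(I.shape, X.shape)   # (1000,) (1000, 20)
```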
To illustrate the sensitivity of the CSSLR algorithm, four different sets of parameters are chosen to control the selection algorithm. They are displayed in Table 2 below. The final row of this table deserves more explanation. Model improvement is evaluated in the CSSLR algorithm mainly by using the AUC-test and the MSE-test. Two versions are analyzed: First, a model is considered as improved over a reference model if one of these two tests indicates improvement and the second one indicates equivalence. In this case, it is sufficient to see improvement in one quantity while no deterioration is visible in the other. Second, in a more conservative version, a model is considered as improved if both AUC and MSE are improved significantly. In this case, the algorithm is expected to terminate earlier as it applies stricter criteria.
To see how CSSLR compares with existing methods, four alternative selection algorithms are applied.
\begin{table}
\begin{tabular}{c c c c c} \hline \hline Parameter & CSSLR1a & CSSLR1b & CSSLR2a & CSSLR2b \\ \hline \(p_{lr,I}\) & 5.0 & 5.0 & 5.0 & 5.0 \\ \(p_{calib}\) & 50.0 & 50.0 & 10.0 & 10.0 \\ \(v_{crit}\) & 5.0 & 5.0 & 5.0 & 5.0 \\ \(p_{AUC,I}\) & 5.0 & 5.0 & 10.0 & 10.0 \\ \(p_{MSE,I}\) & 5.0 & 5.0 & 10.0 & 10.0 \\ \(p_{AUC,T}\) & 2.5 & 2.5 & 2.5 & 2.5 \\ \(p_{MSE,T}\) & 2.5 & 2.5 & 2.5 & 2.5 \\ \(p_{AUC,E}\) & 5.0 & 5.0 & 10.0 & 10.0 \\ \(p_{MSE,E}\) & 5.0 & 5.0 & 10.0 & 10.0 \\ Decision \(I\) & AUC or MSE & AUC and MSE & AUC or MSE & AUC and MSE \\ \hline \hline \end{tabular}
\end{table}
Table 2: Different sets of CSSLR parameters used to control the selection algorithm: \(p_{lr,I}\) is the \(p\)-value of the model coefficient significance test in %, \(p_{calib}\) the \(p\)-value for the Spiegelhalter calibration test in %, \(v_{crit}\) the maximum acceptable variance inflation factor, \(p_{AUC,I}\) the \(p\)-value of the AUC-test used to decide about model improvement in %, and \(p_{MSE,I}\) the \(p\)-value of the MSE-test used to decide about model improvement in %. The \(p\)-values \(p_{AUC,T}\) and \(p_{MSE,T}\) are used to decide about model trimming and \(p_{AUC,E}\) and \(p_{MSE,E}\) to determine model equivalence. The row ”Decision \(I\)” specifies the criteria used for the model improvement decision.
The first alternative is stepwise selection based on AIC which is implemented in the function stepAIC of the R package MASS (Ripley, Venables, Bates, Hornik, Gebhardt & Firth, 2013). As a second alternative, code was extracted from the R package My.stepwise (International Harvard Statistical Consulting Company, 2017) to create a routine that selects variables based on \(p\)-values of model coefficient significance tests. The critical \(p\)-value was set to 5.0% to be consistent with the parameterization of CSSLR in Table 2. The final alternatives are two versions of the LASSO taken from the R package glmnet (Hastie, Qian & Tay, 2021). The LASSO depends on a penalty parameter \(\lambda\) in the estimation of the model equation. In glmnet, cross validation is used to suggest sensible choices for \(\lambda\) based on the distribution of estimation errors. Lasso1 uses the optimal value of \(\lambda\), \(\lambda_{o}\), which minimizes the cross validation error, while Lasso2 uses \(\lambda_{1}>\lambda_{o}\), which leads to a cross validation error one standard deviation higher than the minimum value. This results in a more regularized version of the LASSO. Lasso2 will, therefore, in general lead to more parsimonious models than Lasso1.
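As a rough Python analogue of these two glmnet choices (not the R code used in the article), the sketch below picks the cross-validated optimum and the most regularized value within one standard error from a regularization path; it reuses the simulated `X` and `I` from the sketch above, and the grid of `C` values is an assumption.

```
import numpy as np
from sklearn.linear_model import LogisticRegressionCV

# Grid of inverse regularization strengths (small C = strong L1 penalty).
Cs = np.logspace(-3, 2, 30)
cv_model = LogisticRegressionCV(Cs=Cs, cv=10, penalty="l1", solver="saga",
                                scoring="neg_log_loss", max_iter=5000).fit(X, I)

scores = cv_model.scores_[1]              # shape (n_folds, n_Cs), higher is better
mean = scores.mean(axis=0)
se = scores.std(axis=0) / np.sqrt(scores.shape[0])

i_best = int(np.argmax(mean))                           # analogue of glmnet's lambda.min
threshold = mean[i_best] - se[i_best]
i_1se = int(np.min(np.where(mean >= threshold)[0]))     # most regularized C within 1 SE
print("Lasso1 C:", Cs[i_best], " Lasso2 C:", Cs[i_1se])
```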
The first test uses three strong variables with \(\mu_{1}=\pm 1.0\), three weak variables with \(\mu_{2}=\pm 0.5\) and 14 nuisance variables resulting in a data set of 20 variables besides the response variable. To get an impression on the strength of these variables, note that \(\mu=\pm 1.0\) leads to variables with an \(AUC\) of about 90% while \(\mu=\pm 0.5\) creates variables with an \(AUC\) of about 75%. The results of the eight selection algorithms are displayed
\begin{table}
\begin{tabular}{r r r r r r r} \hline Method & \(P_{s}\) & \(A_{s}\) & \(P_{w}\) & \(A_{w}\) & \(P_{nd}\) & \(A_{nd}\) \\ \hline CSSLR1a & 100.00 & 3.00 & 99.70 & 2.39 & 1.50 & 1.00 \\ CSSLR1b & 100.00 & 3.00 & 92.70 & 1.71 & 0.00 & NaN \\ CSSLR2a & 100.00 & 3.00 & 100.00 & 2.72 & 4.90 & 1.02 \\ CSSLR2b & 100.00 & 3.00 & 98.10 & 2.16 & 0.30 & 1.00 \\ AIC & 100.00 & 3.00 & 100.00 & 3.00 & 93.60 & 2.85 \\ Coeff & 100.00 & 3.00 & 100.00 & 2.98 & 57.10 & 1.51 \\ Lasso1 & 100.00 & 3.00 & 100.00 & 3.00 & 99.90 & 7.31 \\ Lasso2 & 100.00 & 3.00 & 100.00 & 3.00 & 58.60 & 2.04 \\ \hline \end{tabular}
\end{table}
Table 3: Results of automated model selection from a data set of 3 strong (\(\mu=\pm 1\)), 3 weak (\(\mu=\pm 0.5\)) and 14 nuisance variables. The methods evaluated are four versions of CSSLR, an AIC-based forward selection method, a forward selection based on coefficient \(p\)-values, and two versions of the LASSO. \(P_{s}\) / \(P_{w}\) / \(P_{nd}\) is the percentage of simulation runs where at least one strong / weak / non-discriminating variable was selected and \(A_{s}\) / \(A_{w}\) / \(A_{nd}\) is the average number of strong / weak / non-discriminating variables selected conditional on the number of selected strong / weak / non-discriminating variables being at least one.
in Table 3. For each method, the percentage of simulations is reported where at least one strong / weak / nuisance variable is selected. In addition, the average number of strong / weak / nuisance variables selected is computed conditional on being greater than zero. All methods select all strong variables in all scenarios. The differences are in selecting weak and nuisance variables. Overall, CSSLR is selecting more parsimonious models compared to the alternatives. There are multiple scenarios where CSSLR does not select weak variables, especially when "AUC and MSE" is used for deciding about model improvement. When "AUC or MSE" is used, only in 0.3% of all scenarios CSSLR1a does not find weak variables while CSSLR2a always includes weak variables. Both CSSLR1a and CSSLR2a have a higher tendency of selecting non-discriminating variables where CSSLR2a performs worst with including non-discriminating variables in 4.9% of all simulation runs.
Compared to CSSLR, the four reference methods select more variables on average. The two best performing methods Coeff and Lasso2 include nuisance variables in more than 50% of all simulation runs which is substantially worse than all versions of CSSLR. Furthermore, the number of nuisance variables included is larger. While CSSLR when it selects nuisance variables mostly includes one variable only, the alternative selection methods in many cases select two or more. This makes a variable selection based on CSSLR more reliable since it does a better job in rejecting nuisance variables and includes variables only that have power in explaining the response variable.
In a second experiment, the strength of both strong and weak variables are reduced. The motivation is bringing these variables in terms of AUC closer to the nuisance variables and see whether CSSLR is still able to separate them. Here, \(\mu_{1}=\pm 0.30\) and \(\mu_{2}=\pm 0.15\) are used. These numbers roughly correspond to \(AUC=65\%\) and \(AUC=58\%\), respectively. The results are shown in Table 4. The results are comparable to Table 3. Still all strong variables are selected while the percentage of CSSLR runs selecting weak variables is slightly decreased. However, the ability to identify nuisance variables is still strong and the percentage of scenarios where CSSLR selects nuisance variables is well below 5% for all four parameterizations.
In the third test, the strong variables are removed from the data sets of the second experiment and replaced by nuisance variables, resulting in a data set where three variables have weak and 17 variables have no discriminatory power. The results are presented in Table 5. In this case, the percentages for selecting weak variables are increased for CSSLR compared to Table 4 and the ability to reject nuisance variables remains strong. The four alternatives still select non-discriminating variables in too many scenarios.
The best performing method among them is Lasso2, which selects nuisance variables in 32.9% of all simulation runs, still substantially higher than the numbers for CSSLR.
Finally, the test is run on data sets containing 20 non-discriminating variables. The
\begin{table}
\begin{tabular}{r r r r r r r} \hline Method & \(P_{s}\) & \(A_{s}\) & \(P_{w}\) & \(A_{w}\) & \(P_{nd}\) & \(A_{nd}\) \\ \hline CSSLR1a & 100.00 & 3.00 & 92.60 & 1.85 & 0.70 & 1.00 \\ CSSLR1b & 100.00 & 3.00 & 87.00 & 1.72 & 0.30 & 1.00 \\ CSSLR2a & 100.00 & 3.00 & 99.20 & 2.34 & 3.50 & 1.09 \\ CSSLR2b & 100.00 & 3.00 & 97.60 & 2.18 & 2.20 & 1.05 \\ AIC & 100.00 & 3.00 & 100.00 & 2.99 & 91.90 & 2.46 \\ Coeff & 100.00 & 3.00 & 100.00 & 2.95 & 52.80 & 1.36 \\ Lasso1 & 100.00 & 3.00 & 100.00 & 3.00 & 99.90 & 6.40 \\ Lasso2 & 100.00 & 3.00 & 100.00 & 2.92 & 32.90 & 1.44 \\ \hline \end{tabular}
\end{table}
Table 4: Results of automated model selection from a data set of three strong (\(\mu=\pm 0.3\)), three weak (\(\mu=\pm 0.15\)) and 14 nuisance variables. The methods evaluated are four versions of CSSLR, an AIC-based forward selection method, a forward selection based on coefficient \(p\)-values, and two versions of the LASSO. \(P_{s}\) / \(P_{w}\) / \(P_{nd}\) is the percentage of simulation runs where at least one strong / weak / non-discriminating variable was selected and \(A_{s}\) / \(A_{w}\) / \(A_{nd}\) is the average number of strong / weak / non-discriminating variables selected conditional on the number of selected strong / weak / non-discriminating variables being at least one.
\begin{table}
\begin{tabular}{r r r r r} \hline Method & \(P_{w}\) & \(A_{w}\) & \(P_{nd}\) & \(A_{nd}\) \\ \hline CSSLR1a & 100.00 & 2.42 & 0.90 & 1.00 \\ CSSLR1b & 98.80 & 2.13 & 0.00 & NaN \\ CSSLR2a & 100.00 & 2.76 & 5.10 & 1.02 \\ CSSLR2b & 100.00 & 2.54 & 1.10 & 1.00 \\ AIC & 100.00 & 3.00 & 95.00 & 2.89 \\ Coeff & 100.00 & 2.99 & 61.00 & 1.46 \\ Lasso1 & 100.00 & 3.00 & 95.50 & 5.09 \\ Lasso2 & 100.00 & 2.97 & 32.30 & 1.64 \\ \hline \end{tabular}
\end{table}
Table 5: Results of automated model selection from a data set of 3 weak (\(\mu=\pm 0.15\)) and 17 nuisance variables. The methods evaluated are four versions of CSSLR, an AIC-based forward selection method, a forward selection based on coefficient \(p\)-values, and two versions of the LASSO. \(P_{w}\) / \(P_{nd}\) is the percentage of simulation runs where at least one weak / non-discriminating variable was selected and \(A_{w}\) / \(A_{nd}\) is the average number of weak / non-discriminating variables selected conditional on the number of selected weak / non-discriminating variables being at least one.
correct behavior of a selection algorithm would be to reject all variables and propose a model with only the constant as independent variable. From Table 6 it can be seen that CSSLR performs considerably better than three of the reference methods, while Lasso2 is comparable to CSSLR in terms of the number of scenarios where a nuisance variable is selected. When this happens, however, Lasso2 tends to select on average two nuisance variables while CSSLR selects one variable only. The best performing method is the most restrictive version of CSSLR, CSSLR1b, for which in all scenarios the correct model with only the constant is selected.
## 4 Conclusions
In this article, a comprehensive stepwise model selection algorithm for logistic regression, CSSLR, was proposed. In contrast to existing model selection methods, CSSLR is less generic and tailored to logistic regression by focusing on its two key dimensions, discriminatory power and calibration. A model's discriminatory power is measured by AUC while calibration is measured by MSE. Starting from a model with the constant only, new variables are added one-by-one if they fulfill basic requirements like significance tests for model coefficients and low variance inflation factors and, in addition, pass tests on improvement of AUC and MSE. The outcome of the selection process may not be a single model but multiple models that could be considered as equivalent in terms of AUC
\begin{table}
\begin{tabular}{r c c} \hline \hline Method & \(P_{nd}\) & \(A_{nd}\) \\ \hline CSSLR1a & 15.70 & 1.00 \\ CSSLR1b & 0.00 & NaN \\ CSSLR2a & 15.70 & 1.00 \\ CSSLR2b & 2.00 & 1.00 \\ AIC & 97.10 & 3.36 \\ Coeff & 64.70 & 1.53 \\ Lasso1 & 35.60 & 3.75 \\ Lasso2 & 11.60 & 1.95 \\ \hline \hline \end{tabular}
\end{table}
Table 6: Results of automated model selection from a data set of 20 nuisance variables. The methods evaluated are four versions of CSSLR, an AIC-based forward selection method, a forward selection based on coefficient \(p\)-values, and two versions of the LASSO. \(P_{nd}\) is the percentage of simulation runs where at least one non-discriminating variable was selected and \(A_{nd}\) is the average number of non-discriminating variables selected conditional on the number of selected non-discriminating variables being at least one.
and MSE.
In a simulation study CSSLR was compared with model selection based on AIC, \(p\)-values of significance tests for model coefficients and two versions of the LASSO. It was demonstrated that CSSLR is superior to these methods in terms of selecting meaningful variables while at the same time rejecting nuisance variables. In all experiments the percentages of simulations where CSSLR selected nuisance variables was substantially lower than for the tested alternatives. This gives some confidence that in practical applications, CSSLR will lead to parsimonious models selecting the most important variables only while variables representing data noise will most likely be filtered out.
The superior performance of CSSLR comes at a price. Compared to the alternatives analyzed in this article, the parameterization of CSSLR is more complex since thresholds for multiple \(p\)-values have to be defined. Furthermore, for the variable selection on a rich data set it might be necessary to perform thousands of regression model estimations and statistical tests, which results in substantially higher computational times. While the routines in the R packages MASS and glmnet are computationally efficient and deliver solutions within seconds, CSSLR might take minutes or, for large datasets, even hours until the selection process is completed. Despite this, CSSLR should still save a data analyst a lot of time in analyzing a model estimation problem because it gives transparent results of every step in the selection process. It shows why certain models have been rejected or selected and documents the full process of arriving at the final models.
Finally, it should be noted that the high-level variable selection method outlined in Algorithm 1 is generic and did not use any properties of logistic regression. This means that it should be possible to improve model selection algorithms utilizing Algorithm 1 for other classes of statistical models by tailoring the notion of improved and equivalent to their characteristics. Exploring variable selection for other model families is beyond the scope of this article and left for future research.
## 5 Appendix
The code of CSSLR is written in R. To replicate the results of this article and perform own tests of the selection algorithm, the code can be accessed on
[https://github.com/berndengelmann/CSSLR](https://github.com/berndengelmann/CSSLR). The easiest way of installing the package is using the command devtools::install_github("berndengelmann/CSSLR"). After installing the package, test scripts could be found in a subfolder Tests in the folder where
the package is installed. The script ModelSelectionSimulation_Article.R allows the replication of the tables presented in this article.
|
2308.12560 | NOVA: NOvel View Augmentation for Neural Composition of Dynamic Objects | We propose a novel-view augmentation (NOVA) strategy to train NeRFs for
photo-realistic 3D composition of dynamic objects in a static scene. Compared
to prior work, our framework significantly reduces blending artifacts when
inserting multiple dynamic objects into a 3D scene at novel views and times;
achieves comparable PSNR without the need for additional ground truth
modalities like optical flow; and overall provides ease, flexibility, and
scalability in neural composition. Our codebase is on GitHub. | Dakshit Agrawal, Jiajie Xu, Siva Karthik Mustikovela, Ioannis Gkioulekas, Ashish Shrivastava, Yuning Chai | 2023-08-24T05:00:07Z | http://arxiv.org/abs/2308.12560v1 | # NOVA: NOvel View Augmentation for Neural Composition of Dynamic Objects
###### Abstract
We propose a novel-view augmentation (NOVA) strategy to train NeRFs for photo-realistic 3D composition of dynamic objects in a static scene. Compared to prior work, our framework significantly reduces blending artifacts when inserting multiple dynamic objects into a 3D scene at novel views and times; achieves comparable PSNR without the need for additional ground truth modalities like optical flow; and overall provides ease, flexibility, and scalability in neural composition. Our codebase is on GitHub.
## 1 Introduction
Photo-realistic composition of objects in a 3D scene has significant applications, one of which is creating realistic content and experiences inside the Metaverse. Despite recent advances in neural radiance fields (NeRFs) [9], photo-realistic composition from dynamic monocular videos remains a challenging problem. This is primarily due to the ill-posed nature of this task--multiple scene configurations can lead to identical observed image sequences, a problem we refer to as the 3D structure ambiguity.
Current approaches for this task [2, 7] build implicit representations of the static scene and dynamic objects separately by predicting a per-point blending factor along with color and density. To deal with structure ambiguity, these methods also predict modalities such as 3D scene flow and depth to regularize the prediction within each frame and between neighboring frames. This requires ground truth data for these modalities, thus limiting applicability. These approaches also suffer from blending mask prediction errors when rendering a novel view, causing blending artifacts at the boundaries of the image that are not present in the reference frustum. This effect is amplified when inserting multiple objects into the scene and dramatically degrades the rendering quality (see Fig. 1).
We introduce a framework, NOVA, that helps mitigate these issues. NOVA reduces blending artifacts by augmenting NeRF with losses for different views during training and requiring the network to predict consistent masks and colors across novel views. NOVA additionally extends prior works to facilitate learning different dynamic objects of the scene using separate implicit representations and controlling their movement by manipulating these representations. NOVA does not require 3D scene flow regularization, thus removing the need for a scene flow predictor during data preparation and reducing training time without impacting PSNR. In summary, our contributions are three-fold:
1. a flexible NeRF composition framework to add an arbitrary number of dynamic objects into a static 3D scene;
2. a novel-view augmentation strategy for learning better per-point blending factors;
3. corresponding novel-view losses for high rendered image fidelity.
## 2 Related Work
Object composition via inverse-rendering.Inserting objects into a scene requires properties like lighting, depth, geometry, and material. [3, 14, 8, 24] estimate these properties for an indoor scene from a single image. For outdoor scenes, a high dynamic range light field is necessary to rep
Figure 1: Prior works (left column) have blending artifacts that are amplified when multiple objects are inserted at different points in the same scene. Our method (right column) reduces these blending artifacts significantly.
resent sun and sky [5, 20], and adversarial methods are commonly used to train photo-realistic results [20, 6, 19, 10].
Composing dynamic objects using NeRFs. NeRFs [9] achieve impressive novel-view synthesis results with a simple formulation for static scenes, encouraging research to compose multiple NeRFs. Guo [4] proposed training per-object scattering functions for proper lighting effects during composition. Yang [22] separated the scene into background and object branches, using 2D segmentation as supervision. To allow for 3D pose control, Ost [11] proposed a learnable scene graph to decompose dynamic objects into nodes encoding transformation and radiance. Tancik [15] proposed a framework to tune and compose individually trained NeRFs into city-scale scenes.
Novel-view synthesis for dynamic videos.Current works either learn a static canonical radiance field, with a second per-time-step field to apply deformation [17, 12, 13], or learn a dynamic radiance field directly conditioned on time [7, 21, 1, 2, 16]. For the latter direction, it is common to learn a scene flow field [18] concurrently and constrain adjacent frames for pixel consistency. Besides scene flow, Li [7] also applied geometric consistency and depth as prior; Gao [2] introduced additional auxiliary losses. Tian [16] propose a flow-based feature aggregation module to incorporate spatial and temporal features.
## 3 Method
Our framework is inspired by Gao [2], which jointly trains two NeRFs that separately handle the time-invariant static and time-varying dynamic parts of a monocular video. The static NeRF predicts the per-point color and density \((c,\sigma)\) given the point's position and viewing direction \((x,y,z,\theta,\phi)\). The dynamic NeRF predicts the per-point color, density, forward and backward scene flow, and blending factor \((c,\sigma,s_{f},s_{b},\beta)\) given the point's position, viewing direction, and time \((x,y,z,\theta,\phi,t)\). Ground-truth optical flow is used to learn the scene flow, and several regularizing losses are applied to scene flow and depth to resolve the 3D structure ambiguity when learning from a monocular view. The NeRF composition is done in an unsupervised manner using the per-point blending factors \(\beta\).
This approach works well for scene reconstruction but produces blending artifacts when manipulating the scene (see Fig. 1). We introduce a framework with three novel modules to alleviate these issues, described in detail in the subsequent sections. We also remove the losses based on ground-truth optical flow in Gao [2] from our framework to reduce the amount of supervision.
### Multiple NeRFs
Our framework uses separate NeRFs to learn different parts of the scene. Each NeRF is provided a segmentation mask of the scene and is either static or dynamic based on the dynamicity of the scene parts it models (see Fig. 2). The static and dynamic NeRF architectures are similar to that of Gao [2]. The final RGB image is produced from a novel viewpoint by combining the outputs of all the NeRFs as follows:
\[\mathbf{C}_{P}^{full}(\mathbf{r})=\sum_{k=1}^{K}T_{k}^{full}\left(\sum_{n=1}^{num\_NeRFs}\alpha_{k}^{n}\beta_{k}^{n}\mathbf{c}_{k}^{n}\right) \tag{1}\]
where \(K\) is the number of samples along the ray \(\mathbf{r}\), \(T_{k}^{full}\) is the transmittance at the \(k^{th}\) sample along the ray after accounting for rays from all the NeRFs, and \(\alpha_{k}^{n}\), \(\beta_{k}^{n}\), and \(\mathbf{c}_{k}^{n}\)
Figure 2: Overview of our training framework. Based on the 2D segmentation masks, separate NeRFs are initialized. These NeRFs predict per-point RGB color and blending factors, which are passed through a differentiable volume renderer to generate the final composed image from a novel viewpoint.
are the alpha, blending factor, and color respectively predicted by the \(n^{th}\) NeRF for the \(k^{th}\) sample along the ray.
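As a concrete reading of Eq. (1), the following PyTorch-style sketch composes per-NeRF outputs along a single ray. The tensor layout and the way the full transmittance \(T_{k}^{full}\) is accumulated from the blending-weighted alphas are our assumptions for illustration, not the released implementation.

```python
import torch

def compose_ray(alphas, betas, colors):
    """Compose per-NeRF samples along one ray into a single RGB value (Eq. 1).

    alphas, betas: (num_NeRFs, K) per-sample alpha and blending factor.
    colors:        (num_NeRFs, K, 3) per-sample RGB predicted by each NeRF.
    Assumption: T_k^full is the exclusive cumulative product of the
    blending-weighted alphas summed over all NeRFs.
    """
    # Combined opacity of sample k over all NeRFs.
    alpha_full = (alphas * betas).sum(dim=0).clamp(max=1.0)                    # (K,)
    # T_k^full = prod_{j<k} (1 - alpha_j^full).
    ones = torch.ones(1, device=alpha_full.device)
    trans_full = torch.cumprod(torch.cat([ones, 1.0 - alpha_full + 1e-10]), dim=0)[:-1]
    # Inner sum over NeRFs: sum_n alpha^n_k * beta^n_k * c^n_k.
    blended = (alphas.unsqueeze(-1) * betas.unsqueeze(-1) * colors).sum(dim=0)  # (K, 3)
    # Outer sum over samples along the ray.
    return (trans_full.unsqueeze(-1) * blended).sum(dim=0)                      # (3,)
```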
### Novel-View Augmentation
Our novel-view augmentation training strategy reduces blending artifacts when manipulating multiple dynamic objects and composing them into the scene. During training, we shift the camera responsible for the dynamic object to a novel view (see Fig. 3). Given the camera's relative transformation, we calculate the ground truth segmentation mask at the novel view using stereo geometry. Points are sampled along the rays of the camera at the novel viewpoint \(C_{2}\) and passed through the corresponding NeRF. We render the predicted segmentation mask \(\mathbf{M}_{P}^{n}\) for the \(n^{th}\) NeRF as follows:
\[\mathbf{M}_{P}^{n}(\mathbf{r})=\sum_{k=1}^{K}T_{k}^{full}\alpha_{k}^{full} \beta_{k}^{n} \tag{2}\]
where \(T_{k}^{full}\) is the transmittance and \(\alpha_{k}^{full}\) is the alpha at the \(k^{th}\) sample along the ray after accounting for rays from all the NeRFs, and \(\beta_{k}^{n}\) is the blending factor predicted by the \(n^{th}\) NeRF at the \(k^{th}\) sample along the ray. This augmentation strategy can be applied to other ground truths available for training like RGB images.
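Continuing the sketch above, the per-NeRF mask of Eq. (2) is a single weighted sum along the ray (variable names are ours; `trans_full` and `alpha_full` are computed as in the previous sketch):

```python
def render_mask(trans_full, alpha_full, beta_n):
    """Predicted segmentation mask value of the n-th NeRF for one ray (Eq. 2).

    trans_full, alpha_full: (K,) full transmittance / alpha per sample.
    beta_n:                 (K,) blending factors predicted by the n-th NeRF.
    """
    return (trans_full * alpha_full * beta_n).sum()
```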
### Novel-View Losses
We introduce a few losses to ensure high image fidelity when placing objects at novel points in the scene.
**Novel-View Mask Loss.** We take the squared error loss between the predicted and ground-truth masks for the novel viewpoint:
\[\mathcal{L}_{nvm}=\sum_{n=1}^{num\_NeRFs}\sum_{ij}\|\mathbf{M}_{GT}^{n}( \mathbf{r}_{ij})-\mathbf{M}_{P}^{n}(\mathbf{r}_{ij})\|_{2} \tag{3}\]
**Per-Camera Novel-View RGB Loss.** We render the RGB image of each NeRF as follows:
\[\mathbf{C}_{P}^{n}(\mathbf{r})=\sum_{k=1}^{K}T_{k}^{n}\alpha_{k}^{n}\beta_{k}^ {n}\mathbf{c}_{k}^{n} \tag{4}\]
We take the squared error loss between the predicted and the ground-truth RGB image from the novel viewpoint, restricted to the pixels for which the NeRF is responsible:
\[\mathcal{L}_{nvcn}=\sum_{n=1}^{num\_NeRFs}\sum_{ij}\mathbf{M}_{GT}^{n}( \mathbf{r}_{ij})\|\mathbf{C}_{GT}(\mathbf{r}_{ij})-\mathbf{C}_{P}^{n}( \mathbf{r}_{ij})\|_{2} \tag{5}\]
**Full Novel-View RGB Loss.** After rendering the final RGB image using Eq. 1, we take the squared error loss with the ground truth full RGB image as follows:
\[\mathcal{L}_{nvcf}=\sum_{ij}\|\mathbf{C}_{GT}(\mathbf{r}_{ij})-\mathbf{C}_{P }^{full}(\mathbf{r}_{ij})\|_{2} \tag{6}\]
**Blending Loss.** To ensure the contributions of all the NeRFs for a particular point sum to one, we introduce a blending loss:
\[\mathcal{L}_{nvb}=\sum_{ijk}\left|\left(\sum_{n=1}^{num\_NeRFs}\beta_{ijk}^{n} \right)-1\right| \tag{7}\]
**Alpha Loss.** We force the NeRFs to not predict anything outside the masks they are responsible for by explicitly adding a loss for alphas to be 0 outside the camera mask:
\[\mathcal{L}_{nva}=\sum_{n=1}^{num\_NeRFs}\sum_{ij}\left(1-\mathbf{M}_{GT}^{n} (\mathbf{r}_{ij})\right)\cdot\left(\sum_{k}\left|\alpha_{ijk}^{n}\right|\right) \tag{8}\]
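The five losses above can be sketched jointly as follows; the tensor shapes and names are our assumptions, and the \(\|\cdot\|_{2}\) terms are implemented as squared errors, as described in the text (inputs are assumed to be PyTorch tensors).

```python
def novel_view_losses(masks_gt, masks_pred, rgb_gt,
                      rgb_pred_per_nerf, rgb_pred_full, betas, alphas):
    """Novel-view losses of Eqs. (3)-(8) for one novel viewpoint.

    masks_gt, masks_pred: (N, H, W)     per-NeRF masks at the novel view
    rgb_gt:               (H, W, 3)     ground-truth image at the novel view
    rgb_pred_per_nerf:    (N, H, W, 3)  per-NeRF renderings from Eq. (4)
    rgb_pred_full:        (H, W, 3)     composed rendering from Eq. (1)
    betas, alphas:        (N, H, W, K)  per-point blending factors / alphas
    """
    l_nvm = ((masks_gt - masks_pred) ** 2).sum()                        # Eq. (3)
    l_nvcn = (masks_gt.unsqueeze(-1)
              * (rgb_gt.unsqueeze(0) - rgb_pred_per_nerf) ** 2).sum()   # Eq. (5)
    l_nvcf = ((rgb_gt - rgb_pred_full) ** 2).sum()                      # Eq. (6)
    l_nvb = (betas.sum(dim=0) - 1.0).abs().sum()                        # Eq. (7)
    l_nva = ((1.0 - masks_gt).unsqueeze(-1) * alphas.abs()).sum()       # Eq. (8)
    return l_nvm, l_nvcn, l_nvcf, l_nvb, l_nva
```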
## 4 Experimental Results
### Dataset
We use the preprocessed Dynamic Scene Dataset [23] provided by Gao _et al_. [2], which contains video sequences for seven scenes, each consisting of a static background and moving objects. Each sequence has 12 images captured at different time steps and camera poses, which makes the sequences effectively monocular.
### Evaluation
#### 4.2.1 Quantitative Evaluation
We evaluate the image fidelity quantitatively by assessing the PSNR between the synthesized image and the corresponding ground truth image at a fixed viewpoint but changing time. Our framework performs comparably to other methods without the need for additional modalities of ground truth data like optical flow (see Tab. 1).
#### 4.2.2 Qualitative Evaluation
We compare our novel-view renderings with Gao _et al_. [2] in Fig. 4. Our framework reduces blending artifacts, as is clearly visible from our predicted object masks and generated final images, with the improvement being significant when composing multiple dynamic objects.
Figure 3: Novel-view augmentation training strategy
#### 4.2.3 Ablation Study
We study the impact of each of our losses on the quality of the final image. As seen in Fig. 5, using just \(\mathcal{L}_{nvm}\) can remove the blending artifacts, but RGB losses are necessary to ensure the inserted objects have proper color.
## 5 Conclusion
We have introduced a framework, NOVA, for the neural composition of dynamic scenes using NeRFs. Our major contributions are three modules: multiple NeRFs, novel-view augmentation, and novel-view losses. Using only monocular dynamic video, object segmentation masks, and depth information, our framework reliably, flexibly, and scalably inserts multiple dynamic objects into a scene photo-realistically, as our results demonstrate.
\begin{table}
\begin{tabular}{l|c|c|c|c|c|c|c|c} Method & Balloon1 & Balloon2 & Jumping & Playground & Skating & Truck & Umbrella & Average \\ \hline NeRF + time & 17.32 & 19.66 & 16.72 & 13.79 & 19.23 & 15.46 & 17.17 & 17.05 \\ Yoon _et al_. [23] & 18.74 & 19.88 & 20.15 & 15.08 & 21.75 & 21.53 & 20.35 & 19.64 \\ Li _et al_. [7] & 21.35 & 24.02 & 24.10 & 20.85 & 28.88 & 23.33 & 22.56 & 23.58 \\ Gao _et al_. [2] & 21.43 & 26.59 & 23.57 & 23.74 & 31.92 & 25.50 & 22.68 & 25.06 \\ \hline Ours & 21.52 & 25.08 & 20.27 & 22.31 & 27.73 & 23.31 & 23.08 & 23.33 \\ \end{tabular}
\end{table}
Table 1: We compare PSNR of our method against other methods that report their PSNR on Dynamic Scene Dataset [23]. The best results are highlighted in red while the second best are in blue. Our model performs comparably to other methods despite not using ground-truth optical flow supervision.
Figure 4: Qualitative results of our model on Umbrella, Balloon1, and Jumping scenes. Compared with Gao _et al_. [2], our novel-view augmentation training significantly reduces artifacts in the novel-view mask prediction, and produces images with higher fidelity, especially when composing multiple objects in a scene.
Figure 5: Ablation study on \(\mathcal{L}_{nvm}\) and novel-view RGB losses. |
2310.08772 | Investigating the Robustness and Properties of Detection Transformers
(DETR) Toward Difficult Images | Transformer-based object detectors (DETR) have shown significant performance
across machine vision tasks, ultimately in object detection. This detector is
based on a self-attention mechanism along with the transformer encoder-decoder
architecture to capture the global context in the image. The critical issue to
be addressed is how this model architecture can handle different image
nuisances, such as occlusion and adversarial perturbations. We studied this
issue by measuring the performance of DETR with different experiments and
benchmarking the network with convolutional neural network (CNN) based
detectors like YOLO and Faster-RCNN. We found that DETR performs well when it
comes to resistance to interference from information loss in occlusion images.
Despite that, we found that the adversarial stickers put on the image require
the network to produce a new unnecessary set of keys, queries, and values,
which in most cases, results in a misdirection of the network. DETR also
performed worse than YOLOv5 in the image corruption benchmark. Furthermore, we
found that DETR depends heavily on the main query when making a prediction,
which leads to imbalanced contributions between queries since the main query
receives most of the gradient flow. | Zhao Ning Zou, Yuhang Zhang, Robert Wijaya | 2023-10-12T23:38:52Z | http://arxiv.org/abs/2310.08772v1 | Investigating the Robustness and Properties of Detection Transformers (DETR) Toward Difficult Images
###### Abstract
Transformer-based object detectors (DETR) have shown significant performance across machine vision tasks, ultimately in object detection. This detector is based on a self-attention mechanism along with the transformer encoder-decoder architecture to capture the global context in the image. The critical issue to be addressed is how this model architecture can handle different image nuisances, such as occlusion and adversarial perturbations. We studied this issue by measuring the performance of DETR with different experiments and benchmarking the network with convolutional neural network (CNN) based detectors like YOLO and Faster-RCNN. We found that DETR performs well when it comes to resistance to interference from information loss in occlusion images. Despite that, we found that the adversarial stickers put on the image require the network to produce a new unnecessary set of keys, queries, and values, which in most cases results in a misdirection of the network. DETR also performed worse than YOLOv5 in the image corruption benchmark. Furthermore, we found that DETR depends heavily on the main query when making a prediction, which leads to imbalanced contributions between queries since the main query receives most of the gradient flow.
## 1 Introduction
With the continuous development of deep learning, computer vision has reached a new stage, and object detection, as one of its vital core directions, has received broad attention, with many applications built on object detection algorithms.
Before the concept of deep learning was introduced, object detection was mostly based on manual feature extraction. However, because hand-crafted features often failed to capture the variety of appearances in the targets, traditional object detection algorithms could not meet practical needs. After the rise of deep learning, neural networks could automatically learn powerful feature extraction and fitting capabilities from large amounts of data, and many DL-based object detectors with excellent performance have emerged. These detectors can be broadly classified into three categories: two-stage object detection, one-stage object detection, and transformer-based object detection.
Faster R-CNN [1] is the most popular two-stage detector nowadays. It first generates proposals for object bounding boxes in the image through a network, then extracts features from each candidate box and uses them for object classification and bounding box regression to obtain the final bounding boxes. On the other hand, the YOLO [2] series is well-known as a one-stage detector, which discards the anchor-box setting of two-stage methods and extracts prediction boxes directly from the image. Furthermore, with the increasing popularity of transformers in computer vision tasks, new transformer-based object detectors, such as DETR [9], have also emerged. Instead of using anchor boxes and NMS, DETR uses an encoder-decoder structure to classify each object in the image.
DETR is an end-to-end target detection network proposed by Facebook in 2020. Compared to traditional RNNs, DETR uses multiple self-attention structures, and the parallel computation they allow lets DETR efficiently extract contextual relationships. It is also one of the best
performing target detection methods available.
This paper investigated the properties of DETR and compared the robustness of the network using different interferences to the image data, such as stickers and occlusion. We report four main findings from our experiments: (1) DETR can handle a small amount of occlusion well compared to Faster R-CNN and YOLOv5, but when there is too much information loss, the attention mechanism of DETR struggles to be useful. (2) The adversarial patch manages to produce a new set of unnecessary keys, queries, and values in the network, which in most cases results in the misdirection of the network. (3) The image corruption benchmark performance of DETR is lower than that of the YOLO model. (4) We observed a main-query phenomenon in the DETR model and showed that it causes the slow convergence problem of the model.
## 2 Related Works
Recent successes of Transformer-based models in computer vision tasks have inspired several works [14, 15, 11, 5, 12, 16, 17, 18, 19] that study their robustness against corrupted images and adversarial attacks. Some works [11, 5, 12, 18] claimed that transformers are more robust than CNNs in different evaluation settings, including adversarial attacks, while others [13, 16, 17, 19] hypothesized that transformers are as vulnerable as CNNs. In this study, we aim to understand the robustness of the transformer-based mechanism in the object detection setting. We evaluated its robustness against patch masking, adversarial attacks, and common natural corruption images.
Occlusion is one of the factors affecting target detector performance [6]. There are two general types of occlusion: intra-class occlusion and inter-class occlusion. Many problems, such as pedestrian detection and stereo imaging, face difficulties caused by occlusion, whether for outdoor or indoor scene tasks. Because occlusion removes part of an object's information and the remaining information may be difficult to recognise, studying the occlusion problem is an inevitable step for target detection algorithms. [5] evaluated the performance of Vision Transformer on occlusion. Therefore, we also analyzed object detectors' performance on occluded images.
Regarding a more realistic robustness benchmark, [7] proposed the ImageNet-C dataset, which consists of 15 diverse corruption types, covering noise, blur, weather, and digital categories. This dataset was first proposed for image classification tasks. In [8], Claudio M. _et al._ evaluated object detectors' performance in bad weather. They proposed several robustness benchmarks for object detection named Pascal-C, Coco-C, and Cityscapes-C. In our work, we utilized their image corruption generator and focused on the reasons behind the performance drop of object detectors.
## 3 Methodology
### Occlusion
Object occlusion is a major problem in target detection applications. Whether it is pedestrian detection, object tracking, or autonomous driving, the object to be detected may be occluded within or between classes. Such partial occlusion reduces the features extracted from the objects and affects the object detector's precision.
Object occlusion has different implications for different tasks. In this paper, we set the value of the image patches that need to be occluded to 0 to simulate information loss in that region. Moreover, DETR is a network based on a self-attention model, so it is essential to study the impact of regions containing more information on DETR. In this paper, we use two different occlusion methods to test Faster R-CNN, YOLO, and DETR: (a) random occlusion, (b) salient occlusion.
(a) Random occlusion: We use the COCO128 dataset to test the anti-interference capability of DETR. Since image sizes vary in COCO128, we split each image into a 10 x 10 grid of patches and randomly occlude a subset of them by setting their pixel values to 0 to simulate the loss of information (a sketch of this masking procedure appears after item (b) below). The occlusion ratio is defined as the number of occluded patches divided by the total number of patches. In our experiments, we test images with masking ratios of 0.2, 0.4, 0.6, and 0.8, respectively.
(b) Salient occlusion: The self-attention mechanism in the transformer contributes to the excellent performance of DETR, allowing it to selectively extract information from the feature map for later classification and box prediction. In a realistic study of how resistant DETR is to interference, the focus should therefore be on the effect of salient regions. In this experiment, we occluded the target at different rates to find out why salient regions affect robustness. We considered the area containing the top 20% of the salient information to be significant. We then occlude the salient patches of the object at different rates and feed the processed images into the three networks for comparison.
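A sketch of the random patch-occlusion procedure from item (a) is given below; the helper is our own illustration, and the grid size and zero-masking follow the description above.

```python
import numpy as np

def random_occlude(image, ratio, grid=10, rng=None):
    """Zero out a random fraction of a grid x grid patch decomposition.

    image: (H, W, C) array; ratio: fraction of patches whose pixels are set to 0.
    """
    rng = np.random.default_rng() if rng is None else rng
    h, w = image.shape[:2]
    ph, pw = h // grid, w // grid
    out = image.copy()
    n_masked = int(round(ratio * grid * grid))
    for cell in rng.choice(grid * grid, size=n_masked, replace=False):
        r, c = divmod(int(cell), grid)
        out[r * ph:(r + 1) * ph, c * pw:(c + 1) * pw] = 0   # simulate information loss
    return out
```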
### Adversarial Stickers
The adversarial attack is a popular method to disturb the output precision of machine learning models. By adding a small portion of adversarial perturbation to the image, the detectors can be led to a misleading judgment, resulting in a significant decrease in performance. Some research utilized a similar approach to evaluate networks. In [3], Sharif _et al._ demonstrated that adversarial glasses can fool facial recognition systems. These glasses were designed to fit any face, allowing the wearer to impersonate another person. Another work [4] demonstrated different methods for constructing fake stop signs that cause models to misclassify, either by creating a poster that looks like a stop sign or by modifying real stop signs to make them hard to recognize. This indicated that an adversarial patch is a reliable way to evaluate the performance and robustness of a particular network.
We utilized the attack by completely replacing a portion of the image with a sticker (patch). The stickers are masked with some constraints to fit different image sizes, applying random translation and scaling to place the stickers at different locations with different sizes. In particular, given a three-dimensional image \(x\), patch \(p\), location \(l\), and transformation \(t\) (in this case translation and scaling), we define a patch operator \(S(p,x,l,t)\) which first applies the transformation \(t\) to the patch \(p\), and then applies the transformed patch at location \(l\) in the image \(x\).
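A minimal sketch of the patch operator \(S(p,x,l,t)\) is shown below; the resizing routine, the top-left coordinate convention, and the clipping to the image bounds are our assumptions.

```python
import numpy as np
from PIL import Image

def apply_patch(image, patch, location, scale):
    """S(p, x, l, t): scale the sticker, then paste it at location l of image x.

    image: (H, W, 3) uint8 array; patch: (h, w, 3) uint8 array;
    location: (row, col) of the sticker's top-left corner; scale: float factor.
    """
    h, w = patch.shape[:2]
    new_size = (max(1, int(w * scale)), max(1, int(h * scale)))      # (width, height)
    patch_t = np.asarray(Image.fromarray(patch).resize(new_size))
    r, c = location
    H, W = image.shape[:2]
    ph = min(patch_t.shape[0], H - r)
    pw = min(patch_t.shape[1], W - c)
    out = image.copy()
    out[r:r + ph, c:c + pw] = patch_t[:ph, :pw]                      # replace pixels with the sticker
    return out
```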
This attack aims to disrupt the way object detectors make their predictions and draw the bounding boxes. In the case where an image contains several objects, the network must be able to decide on the most salient object in the image. Thus, when part of a targeted object is overlapped by a sticker, the detector should still be able to classify the object without significantly reducing the accuracy of the targeted object and of the other objects in the image.
DETR uses transformers to capture the global context of the image. Since the adversarial sticker attack only changes the pixel values of a certain region in the image, it is essential to see whether the global attention of DETR is affected by such a local change. Moreover, we want to investigate whether DETR is more robust to this attack than CNN-based detectors such as YOLO and Faster-RCNN.
### Benchmark
In a real-world scenario, the camera's photo sometimes suffers from common corruptions such as defocus blur, motion blur, or bad weather. Therefore, we wanted to evaluate the performance of YOLOv5m, DETR R50, and DETR R101 on corrupted images. The evaluations were done using 15 image corruptions with five levels of severity in each corruption category, as shown in Figure 1.
Apart from model comparison, we also investigated the reasons behind the performance drop of DETR models. We analyzed the attention maps from the transformer encoder and decoder to see how they changed as the image corruption became more severe.
### Query Properties
One of the critical innovations in DETR is the introduction of object queries in the transformer decoder. It is implemented as learnable positional encodings that are added to the decoder input at each layer. These queries help the model decode a set of box coordinates and their class labels. The number of the object query is usually smaller than 100 to reduce the number of bounding boxes generated at the early stage and the computation cost. In [9], queries have shown unique preferences on objects with a particular size and position. Therefore it is believed that each query has several modes of operation focusing on different areas and box sizes. In particular, the transformer architectures allow the object query to utilize the context in the whole image to detect large objects more precisely. In this paper, we further analyzed the properties of queries and their relations with class labels. We also presented the observation of main queries and their significant contributions to the DETR's model. Finally, we fine-tuned a DETR R50 model on pascal-voc dataset to investigate the model's transfer learning ability.
## 4 Evaluation
We record the experiment results with the primary COCO challenge metric, the mean Average Precision (mAP). This metric averages precision-recall scores at different Intersection-over-Union (IoU) thresholds. Moreover, to mitigate bias toward the local patch, we ignore predicted boxes with less than 50% IoU with the target box. We did the same for original images for a fair comparison between clean and attacked images.
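The IoU filter described above can be sketched as follows (our own helper functions; boxes are assumed to be in \([x_{1},y_{1},x_{2},y_{2}]\) format):

```python
def iou(box_a, box_b):
    """Intersection-over-Union of two boxes given as [x1, y1, x2, y2]."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def keep_predictions(pred_boxes, target_boxes, thresh=0.5):
    """Drop predicted boxes that have less than `thresh` IoU with every target box."""
    return [p for p in pred_boxes
            if any(iou(p, t) >= thresh for t in target_boxes)]
```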
### Evaluation on Occlusion
We tested the robustness of Faster R-CNN, YOLOv5, and DETR on the COCO128 dataset with occlusion ratios (number of patches masked/total patches) of 0.2, 0.4, 0.6, and 0.8, and the results are shown in Figure 2. To prevent random perturbations in random occlusion, four separate experiments were conducted and the results were averaged over the runs. DETR outperformed Faster R-CNN and YOLOv5 in several experiments; the relative decreases in accuracy of DETR between successive occlusion ratios from 0.2 to 0.8 were \([52.92\%,73.91\%,69.04\%]\), while the figures for Faster R-CNN and YOLOv5 were \([55.85\%,79.54\%,55.56\%]\) and
Figure 1: The image corruptions used in Benchmark evaluation.
\([54.93\%,71.09\%,65.40\%]\). It can be seen that DETR performs well in terms of resistance to interference from information loss.
In the experiments with salient areas occluded, we also processed the salient areas of the images with occlusion rates of \(0.2,0.4,0.6\), and \(0.8\), respectively.
At a significant region occluded rate of 0.2, DETR performed the best, followed by YOLOv5 and Faster R-CNN. For example, DETR detected two classes, potted plant and vase, and both had an accuracy of over 90%. The other two networks detected only one class of targets with accuracies below 80%. When the occlusion rate of the salient regions was raised to 0.4 and above, all three networks showed false detections and failed to detect targets. From the results, we can see that DETR can still perform the task of target detection well when a small portion of important information is missing. We illustrate why DETR has better robustness against occlusion by analyzing DETR's attention map.
As can be seen in Figure 4, when the detected object is partially occluded, DETR's attention mainly focuses on the non-occluded part, because the transformer can combine global information to infer the missing part from surrounding pixels. However, when too much information is missing due to too many occluded patches, DETR's attention gradually starts to diverge, and it is unable to locate the detected object accurately.
### Evaluation on Adversarial Stickers
We evaluate the three object detectors on images with adversarial stickers, in which the detectors utilize ResNet101 as the backbone. For the dataset, we select 128 images from the MS COCO 2017 validation set [10]. The adversarial patch was placed in the dataset images with random size and position. We report the resulting mAPs in Table 2. In terms of the adversarial sticker evaluation, we find that Faster-RCNN gives the lowest performance among all three detectors, followed by DETR and YOLOv5. The resulting bounding boxes of DETR are also shown in Figure 5. As shown in the upper row of Figure 5, the network misclassified the attacked image, interpreting the sticker to be a person with 96% confidence instead of entirely ignoring it. When the sticker does not overlap any item in the image, it does not affect the accuracy of the other detected objects in the image. However, in cases where the sticker overlaps other objects in the image, it can affect the attention process, resulting
\begin{table}
\begin{tabular}{l|c|c|c|c}
**Occlusion rate** & **0.2** & **0.4** & **0.6** & **0.8** \\ \hline DETR & 0.342 & 0.161 & 0.042 & 0.013 \\ YOLOv5 & 0.304 & 0.137 & 0.0396 & 0.0137 \\ Faster-RCNN & 0.299 & 0.132 & 0.027 & 0.012 \\ \end{tabular}
\end{table}
Table 1: The Mean Average Precision (mAP) result comparison of DETR, YOLOv5 and Faster-RCNN on images with different occlusion ratio.
Figure 4: DETR detection results on different occlusion rate 0.2 (left top), 0.4 (right top), 0.6 (left bottom), 0.8 (right bottom).
Figure 3: The detection results of different occlusion rate on DETR, YOLO and Faster R-CNN
Figure 2: The mAP of different occlusion rate on DETR, YOLO and Faster R-CNN
in lower accuracy of other detected items in the image, as shown in the bottom row of Figure 5.
This experiment implies that the sticker is capable of producing new unnecessary input features to be processed in the self-attention mechanism, which misdirects the network's attention. The impact of the sticker on the self-attention mechanism of DETR is illustrated in Figure 6.
As can be seen in Figure 6, the sticker patch manages to produce a new set of keys, queries, and values in the network, which in most cases results in the misdirection of the network. When the sticker does not overlap other items in the image, the dot-product attention only matches each query \(q_{n}\) with its corresponding key \(k_{n}\). However, in most cases where the sticker overlaps other items in the image, the queries of those particular items (i.e., \(q_{1},\dots,q_{n}\)) are misdirected to the key token that represents the adversarial sticker (in this case \(k_{2}\)). This results in an increase of the attention weight assigned to the adversarial patch key token. In other words, the network's attention is misguided from the actual image content to the adversarial sticker. To gain intuition on what the detector sees in a particular attacked image, we visualize the self-attention weight on the adversarial patch. As shown in Figure 7, the network considers the entire sticker when making predictions, treating it as an important object to predict.
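The misdirection can be illustrated with a toy scaled dot-product attention computation; the dimensions and the extra "sticker" key token below are our illustration, not DETR's actual implementation.

```python
import torch
import torch.nn.functional as F

def attention_weights(queries, keys):
    """Softmax-normalized scaled dot-product attention weights."""
    d = queries.shape[-1]
    scores = queries @ keys.transpose(-2, -1) / d ** 0.5
    return F.softmax(scores, dim=-1)

torch.manual_seed(0)
content_q = torch.randn(4, 64)     # queries q_1 ... q_4 of the clean image content
content_k = torch.randn(4, 64)     # keys    k_1 ... k_4 of the clean image content
clean_w = attention_weights(content_q, content_k)

# The sticker introduces an extra, out-of-distribution key token.
sticker_k = 3.0 * torch.randn(1, 64)
attacked_w = attention_weights(content_q, torch.cat([content_k, sticker_k], dim=0))
# The last column of `attacked_w` is the attention weight diverted to the sticker
# token; the larger it is, the more the content queries attend to the patch.
```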
### Evaluation on Image Corruption Benchmark
We evaluated the average precision of YOLOv5m, DETR R50, and DETR R101 on 15 image corruption categories and five levels of severity each, as shown in Figure 8. Although DETR models have better performance on original images, their precision on corrupted images is generally lower than YOLOv5m. DETR models performed much worse than YOLOv5m on images with impulse noise. Since DETR only used ReLU activation, it weakened the model's ability to counter these extreme values. We visualized the attention maps at pixel (300, 450) throughout the impulse noise test, shown in Figure 9. At the start, pixels corresponding to the "cat" had a strong correlation with our center point. However, as the impulse noise increases, the attention map gradually shrinks to the pixels near the center points. The encoder failed to correlate with "cat" pixels. This finally leads to a decrease in precision.
\begin{table}
\begin{tabular}{l|c|c} & **mAP** & **mAP\({}_{50}\)** \\ \hline DETR & 0.512 & 0.726 \\ YOLOv5 & 0.548 & 0.743 \\ Faster-RCNN & 0.495 & 0.674 \\ \end{tabular}
\end{table}
Table 2: The Mean Average Precision (mAP) and mAP\({}_{50}\) result comparison of DETR, YOLOv5 and Faster-RCNN on images with adversarial stickers.
Figure 5: DETR detection result on original image (left) compare with the image with the adversarial sticker (right). The upper row shows the case where the sticker not overlapping other items in the image, while the lower row illustrated the opposite case.
Figure 6: Self-attention mechanism for adversarial stickers patch settings. In this case, \(q\), \(k\), and \(v\) represent the projected queries, keys, and value tokens of the input features. The mechanism involving dot-product attention computing the dot-product between queries with all corresponding keys before normalizing with softmax function to get the token attention weights. The adversarial patch introduces a new input feature at \(X_{2}\) which misdirects the network’s attention to the adversarial patch.
Figure 7: Visualization of self-attention weight on the adversarial patch.
### Query Properties
First, we evaluated the contribution of each query in object detection. Both the DETR R50 and DETR R101 models were tested on the MSCOCO validation dataset. We collected all predictions with confidence larger than 0.8 and the corresponding query ID to compute the query frequency, as shown in the middle plots in Figure 10. We found an interesting phenomenon: both the DETR R50 and DETR R101 models have one main query (71 in DETR R50 and 68 in DETR R101) that detects a large number of objects. Each main query accounts for about 7.5% of the total predictions. An intuitive guess was that the main queries are responsible for detecting "person", which is the most common object in the dataset. Therefore, we investigated the relations between main queries and classes. We divided the main queries' frequency by the total frequency in each class to visualize their contribution, as shown in the right plots. Both main queries show a very similar distribution over object categories. Neither of them has a high contribution (\(<\) 40%) in the "person" category, while both of them have a high contribution (\(>\) 40%) in the "airplane", "train", "cat" and "bear" categories. This indicates that the main query has particular preferences on object classes, but it is not detecting the most common object, "person".
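The query statistics described above can be gathered as sketched below; the prediction format (one `(query_id, class_id, confidence)` triple per prediction) is our assumption.

```python
from collections import Counter, defaultdict

def query_statistics(predictions, conf_thresh=0.8):
    """Frequency of each object query and each query's share of every class.

    predictions: iterable of (query_id, class_id, confidence) triples.
    Returns (query_freq, per_class_share), where per_class_share[c][q] is the
    fraction of class-c detections contributed by query q.
    """
    query_freq, class_totals = Counter(), Counter()
    per_class = defaultdict(Counter)
    for query_id, class_id, conf in predictions:
        if conf < conf_thresh:
            continue
        query_freq[query_id] += 1
        class_totals[class_id] += 1
        per_class[class_id][query_id] += 1
    per_class_share = {c: {q: n / class_totals[c] for q, n in qs.items()}
                       for c, qs in per_class.items()}
    return query_freq, per_class_share
```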
Apart from class relations, we also analyzed the location and size of the bounding boxes predicted by the main queries, shown in Figure 11. Both queries prefer to detect medium to large objects in the center region. However, there is also a large number of small to medium-sized boxes around the center.
To analyze the importance of the main query, we masked its outputs and evaluated the precision change, as shown in Table 3. The main query indeed has a massive influence on the model's performance. Without predictions from the main query, the average precision dropped by around 7 points. This shows that DETR depends heavily on the predictions from the main query, which makes the model more vulnerable to attacks.
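Masking the main query for this ablation amounts to discarding its outputs before evaluation, e.g. as below (a sketch; the DETR-style output dictionary layout is assumed):

```python
def drop_query(outputs, query_id):
    """Remove one query's predictions from a DETR-style output dictionary.

    outputs: {'pred_logits': (B, Q, C), 'pred_boxes': (B, Q, 4)} tensors.
    """
    keep = [q for q in range(outputs['pred_logits'].shape[1]) if q != query_id]
    return {k: v[:, keep] for k, v in outputs.items()}
```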
Figure 11: The predicted bounding boxes center location and size of main queries. Yellow colored points refers to boxes with a large size.
Figure 8: The benchmark results for three object detectors.
Figure 10: Left columns: Class distribution from model’s prediction. Middle columns: Frequency of each query in predictions. Right columns: Main queries’ contribution in detecting each object category.
Figure 9: The detection result from DETR on original image and and encoder attention at (300, 450) as the impulse noise became more severe.
#### 4.4.1 Cause of The Main Query
In the following experiment, we investigated the cause of the main query and the imbalanced query contribution problem in the DETR model. We fine-tuned a DETR R50 model on the pascal-voc dataset. We first trained the model for 50 epochs to let it stabilize and then collected the gradient information applied to each query in the following ten epochs. In Figure 12, we plot the mean gradient magnitude applied to each query and the query frequency. Query 71 remained the main query on the pascal-voc dataset. It received the most gradient flow, and its magnitude is significantly larger than that of any other query. We believe the imbalanced gradient flow is caused by the transformer structure. DETR keeps its transformer module simple, so every query is allowed to compute cross-attention with the whole encoder context. In the early stage of training, the main query converged fast and made the best predictions among all queries, though they might not be precise. This was enough for the main query to receive most of the gradient flow, while the others had much fewer chances to update and compete with it. This not only forced a single query to take on most of the detection tasks, but also hindered the other queries' convergence. Therefore, the imbalanced query contribution problem causes the slow convergence and low performance of the DETR model.
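The per-query gradient statistics can be read off the learnable query embeddings after each backward pass, e.g. as below (a sketch; `query_embed` is assumed to be the `nn.Embedding` holding the object queries, as in the reference DETR code).

```python
import torch

@torch.no_grad()
def query_gradient_magnitudes(model):
    """Mean gradient magnitude applied to each object-query embedding.

    Call after loss.backward(); assumes model.query_embed.weight has shape
    (num_queries, hidden_dim). Accumulate the returned vector over iterations
    to obtain per-query statistics like those plotted in Figure 12.
    """
    grad = model.query_embed.weight.grad        # (num_queries, hidden_dim)
    return grad.abs().mean(dim=1)               # one magnitude per query
```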
#### 4.4.2 Solutions to The Main Query
Our ideas against this problem are simple and direct: suppress positive predictions from the main query and encourage those from the remaining queries. This can be achieved by two methods: random query drop and cross-attention within a smaller window.
To evaluate the performance of random query drop, we fine-tuned another DETR R50 model on the pascal-voc dataset for 30 epochs and compared the loss curve with the previously fine-tuned model, as shown in Figure 13. We observed that the model with random query drop has a lower testing loss and shows faster convergence. Random query drop is only a prototype idea proposed by us, and we hope it can be further evaluated on other transformers that use independent queries like DETR. The second idea is similar to the one proposed in Swin Transformer [20], where attention is computed inside a small window. This not only reduces the computation cost but also prevents any single query from being able to take on all detection tasks.
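Random query drop can be prototyped by zeroing a random subset of decoder queries during training; the drop probability and the point where the drop is applied are our assumptions rather than a fixed recipe.

```python
import torch

def random_query_drop(decoder_out, drop_prob=0.1, training=True):
    """Randomly suppress whole object queries in the decoder output.

    decoder_out: (B, num_queries, hidden_dim). Dropping a query removes its
    chance to produce a confident prediction for this batch, so the gradient
    flow is spread over the remaining queries.
    """
    if not training or drop_prob == 0.0:
        return decoder_out
    keep = (torch.rand(decoder_out.shape[1], device=decoder_out.device) > drop_prob).float()
    return decoder_out * keep.view(1, -1, 1)
```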
## 5 Conclusion
In this work, we investigate the robustness and properties of the transformer-based detector DETR by measuring its performance on different image nuisances and comparing it with other detectors. Two widely adopted CNN-based detectors, YOLOv5 and Faster-RCNN, are used to compare their performance with DETR. These detectors are evaluated with two main test cases, occlusion and adversarial stickers. We implement random and salient occlusion to analyze how the network handles the specific regions impacted by occlusion, specifically the regions that contain essential information in the image. In terms of adversarial stickers, this attack is utilized to change the pixel values in some portion of the image to mislead the network. Moreover, we benchmark the robustness of DETR on a set of corrupted images and analyze the query properties.
Our experiment implies that DETR performs well when
Figure 12: The average gradient flow to each query during transfer learning and the query frequency in pascal-voc-dataset.
Figure 13: The loss graph comparison between two fine-tuned model with and without random query drop.
\begin{table}
\begin{tabular}{l|c|c} & **mAP** & **mAP without main query** \\ \hline DETR R50 & 0.420 & 0.353 \\ DETR R101 & 0.435 & 0.367 \\ \end{tabular}
\end{table}
Table 3: The mAP of two models with and without predictions from the main query.
it comes to resistance to interference from information loss in occlusion images. Nevertheless, in the case of the sticker patch, it manages to produce a new set of keys, queries, and values in the network, which in most cases results in the misdirection of the network. This increases the attention weight assigned to the adversarial patch key token, misguiding the network's attention toward the adversarial sticker. For the benchmark, the experiment indicates that DETR's precision on corrupted images is generally lower than that of YOLOv5. We also found that DETR depends heavily on the predictions from the main query. This causes the main query to receive most of the gradient flow, which leads to imbalanced contributions among all queries.
|
2303.10005 | Projections onto $L^p$-Bergman spaces of Reinhardt Domains | For $1<p<\infty$, we emulate the Bergman projection on Reinhardt domains by
using a Banach-space basis of $L^p$-Bergman space. The construction gives an
integral kernel generalizing the ($L^2$) Bergman kernel. The operator defined
by the kernel is shown to be absolutely bounded projection on the $L^p$-Bergman
space on a class of domains where the $L^p$-boundedness of the Bergman
projection fails for certain $p \neq 2$. As an application, we identify the
duals of these $L^p$-Bergman spaces with weighted Bergman spaces. | Debraj Chakrabarti, Luke D. Edholm | 2023-03-17T14:26:18Z | http://arxiv.org/abs/2303.10005v3 | # Projections onto \(L^{p}\)-Bergman spaces of Reinhardt domains
###### Abstract.
For \(1<p<\infty\), we emulate the Bergman projection on Reinhardt domains by using a Banach-space basis of \(L^{p}\)-Bergman space. The construction gives an integral kernel generalizing the \((L^{2})\) Bergman kernel. The operator defined by the kernel is shown to be absolutely bounded projection on the \(L^{p}\)-Bergman space on a class of domains where the \(L^{p}\)-boundedness of the Bergman projection fails for certain \(p\neq 2\). As an application, we identify the duals of these \(L^{p}\)-Bergman spaces with weighted Bergman spaces.
2020 Mathematics Subject Classification: 32A36, 46B15, 32A70, 32A25 The first author was supported in part by US National Science Foundation grant number DMS-2153907, and by a gift from the Simons Foundation (number 706445). The second author was supported in part by Austrian Science Fund (FWF): AI0455721.
sometimes for all) \(p\neq 2\); see [1, 10, 11, 12] and the survey [13]. Recent studies of the Bergman projection in certain classes of Reinhardt domains ([1, 1, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25] and the survey [11]) shed more light on this phenomenon, revealing that the \(L^{p}\)-behavior of the Bergman projection that one sees on, e.g., smooth bounded strongly pseudoconvex domains breaks down on bounded Reinhardt domains whose boundary passes through the center of rotational symmetry, a simple example being the Hartogs triangle \(\{|z_{1}|<|z_{2}|<1\}\subset\mathbb{C}^{2}\). In such a domain it is possible that there are indices \(1<p_{1}<p_{2}<\infty\) such that the linear subspace \(A^{p_{2}}(\Omega)\) is not dense in the Bergman space \(A^{p_{1}}(\Omega)\). This phenomenon can never occur on smoothly bounded pseudoconvex domains (see [10]), and may constitute a glimpse of an \(L^{p}\)-function theory where the Banach geometry of \(L^{p}\) replaces the Hilbert space idea of orthogonality. In the Reinhardt domains studied in this paper, Laurent representations are used to clarify some of these phenomena. For example, the fact that \(A^{p_{2}}(\Omega)\) is not necessarily dense in \(A^{p_{1}}(\Omega)\) is a manifestation of the fact that there may be monomials whose \(p_{1}\)-th power is integrable but not the \(p_{2}\)-th power.
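To make the last point concrete (a standard computation, included here only for illustration): on the Hartogs triangle \(\mathbb{H}=\{|z_{1}|<|z_{2}|<1\}\), the monomial \(z_{2}^{-1}\) satisfies

\[\int_{\mathbb{H}}|z_{2}|^{-p}\,dV=\int_{|z_{2}|<1}\pi|z_{2}|^{2}\,|z_{2}|^{-p}\,dA(z_{2})=2\pi^{2}\int_{0}^{1}r^{3-p}\,dr,\]

which is finite precisely when \(p<4\). Thus \(z_{2}^{-1}\) belongs to \(A^{p}(\mathbb{H})\) for every \(p<4\) but to no \(A^{p}(\mathbb{H})\) with \(p\geq 4\), so a monomial can lie in \(A^{p_{1}}\) without lying in \(A^{p_{2}}\) when \(p_{1}<4\leq p_{2}\).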
### Projection operators associated to bases
Let \(L\) be a separable Hilbert space, \(A\) a closed subspace of \(L\) and \(\{e_{j}\}\) a complete orthogonal set in \(A\). The orthogonal projection \(\boldsymbol{P}\) from \(L\) to \(A\) may be represented by the following series (convergent in the norm of \(L\)):
\[\boldsymbol{P}f=\sum_{j}\frac{\langle f,e_{j}\rangle}{\|e_{j}\|^{2}}e_{j}, \qquad f\in L. \tag{1.2}\]
Since \(\boldsymbol{P}f\) is defined geometrically as the point in \(A\) nearest to \(f\), this representation is independent of the choice of complete orthogonal set \(\{e_{j}\}\). When \(L=L^{2}(\Omega)\), \(A=A^{2}(\Omega)\), (1.2) coincides with the Bergman projection formula given by (1.1).
In a general Banach space, the analog of a complete orthogonal set is a _Schauder basis_: a sequence \(\{e_{j}\}_{j=1}^{\infty}\) in a complex Banach space \(A\) is a Schauder basis if for each \(f\in A\), there is a unique sequence \(\{c_{j}\}_{j=1}^{\infty}\) of complex numbers such that \(f=\sum_{j=1}^{\infty}c_{j}e_{j}\), where the series converges in the norm-topology of \(A\) (see [12]). In this case, there exist bounded linear functionals \(a_{j}:A\to\mathbb{C}\) such that \(c_{j}=a_{j}(f)\), generalizing the Fourier coefficients \(a_{j}(f)=\frac{\langle f,e_{j}\rangle}{\|e_{j}\|^{2}}\) seen in the Hilbert setting.
When \(L\) is a Banach space, \(A\) a closed subspace, and \(\{e_{j}\}_{j=1}^{\infty}\) a Schauder basis of \(A\), one might attempt to define a projection operator from \(L\) onto \(A\) by emulating (1.2):
\[\boldsymbol{P}f=\sum_{j}\,\widetilde{a}_{j}(f)e_{j},\qquad f\in L, \tag{1.3}\]
where \(\widetilde{a}_{j}:L\to\mathbb{C}\) is a Hahn-Banach (norm-preserving) extension of \(a_{j}:A\to\mathbb{C}\). When it exists, an operator of type (1.3) will be called a _basis projection_ determined by the Schauder basis; this notion encapsulates the orthogonal projection (1.2) when \(L\) is Hilbert. A less obvious example of a basis projection is seen by considering the unit circle \(\mathbb{T}\) with the Haar measure and \(1<p<\infty\). The classical Szego projection from \(L^{p}(\mathbb{T})\) onto the Hardy space \(H^{p}(\mathbb{D})\) is a basis projection; see Proposition 2.7. In contrast, we show in Proposition 3.15 that for \(p\neq 2\), the attempt to extend the Bergman projection to \(L^{p}\) by continuity - even if successful - is _never_ a basis projection. This is an underlying reason for the deficiencies of the Bergman projection in \(L^{p}\) spaces, and our goal in this paper is to construct basis projections from \(L^{p}(\Omega)\) to \(A^{p}(\Omega)\).
### The Monomial Basis Projection
Formula (1.3) is purely formal, as there is no guarantee that a basis projection onto the subspace determined by a given basis exists. Several technical points must first be addressed:
(1) A basis projection depends on both the range subspace \(A\) and on the choice of Schauder basis - or the slightly more general notion of a _Banach-space basis_ (see Section 2.1) - determining the projection. A Banach space need not have such a basis, but in the Bergman space \(A^{p}(\Omega)\) of a Reinhardt domain \(\Omega\subset\mathbb{C}^{n}\), there is a distinguished basis tied to geometry and function theory. This is the collection of Laurent monomials in \(A^{p}(\Omega)\), functions \(z\mapsto z_{1}^{\alpha_{1}}z_{2}^{\alpha_{2}}\dots z_{n}^{\alpha_{n}}\) where \(\alpha_{j}\in\mathbb{Z}\), \(1\leq j\leq n\). The fact that these monomials under an appropriate partial ordering give a Banach-space basis of \(A^{p}(\Omega)\) was first proved in [1], and is recalled in a slightly more general form in Theorem 2.12 below. The projection operator from \(L^{p}(\Omega)\) to \(A^{p}(\Omega)\) defined in terms of this monomial basis by formula (1.3) is the main topic of this paper: the _Monomial Basis Projection (MBP)_.
(2) A Hahn-Banach extension of a linear functional in general is far from unique, but in our application, where we extend coefficient functionals defined on \(A^{p}(\Omega)\) to \(L^{p}(\Omega)\), we do have uniqueness; see Propositions 2.3 and 2.4 below. This means the MBP can be unambiguously defined by (1.3), since the summation procedure is specified by the partial ordering of our Banach-space basis mentioned in item (1).
(3) None of the above guarantees that the formal series (1.3) converges for \(f\in L\). Showing that (1.3) defines a bounded operator on \(L\) requires direct estimation to show that the partial summation operators are uniformly bounded in the operator norm of \(L\). In our application to Bergman spaces \(A^{p}(\Omega)\), the problem is simplified because of the availability of an integral kernel representation of the MBP.
### Notation, definitions and conventions
1. Unless otherwise indicated, \(\Omega\) will denote a bounded Reinhardt domain in \(\mathbb{C}^{n}\) with center of symmetry at \(0\), i.e., whenever \(z\in\Omega\), for every tuple \((\theta_{1},\dots,\theta_{n})\in\mathbb{R}^{n}\), we have \((e^{i\theta_{1}}z_{1},\dots,e^{i\theta_{n}}z_{n})\in\Omega\). Let \(|\Omega|\subset\mathbb{R}^{n}\) denote its _Reinhardt Shadow_, i.e., \[|\Omega|=\{(|z_{1}|\,,\dots,|z_{n}|)\in\mathbb{R}^{n}:z\in\Omega\}.\]
2. The index \(p\) satisfies \(1<p<\infty\), and denote by \(q\) the index Holder-conjugate to \(p\), i.e., \(\frac{1}{p}+\frac{1}{q}=1\).
3. For a domain \(U\subset\mathbb{C}^{n}\) and a measurable function \(\lambda:U\to[0,\infty]\) which is positive a.e. (the _weight_), we set for a measurable function \(f\), \[\|f\|_{L^{p}(U,\lambda)}^{p}=\|f\|_{p,\lambda}^{p}=\int_{U}|f|^{p}\,\lambda\,dV,\] (1.4) where \(dV\) denotes Lebesgue measure, and functions equal a.e. are identified. We let \(L^{p}(U,\lambda)\) be the space of functions \(f\) for which \(\left\|f\right\|_{p,\lambda}<\infty\), which is a Banach space.
Let \(A^{p}(U,\lambda)\) be the subspace of \(L^{p}(U,\lambda)\) consisting of holomorphic functions:
\[A^{p}(U,\lambda)=L^{p}(U,\lambda)\cap\mathcal{O}(U).\]
We will only consider weights \(\lambda:U\to[0,\infty]\) which are _admissible_ in the sense that _Bergman's inequality_ holds in \(A^{p}(U,\lambda)\), i.e., for each compact set \(K\subset U\), there is a constant \(C_{K}>0\) such that for each \(f\in A^{p}(U,\lambda)\) we have
\[\sup_{K}|f|\leq C_{K}\left\|f\right\|_{L^{p}(U,\lambda)}. \tag{1.5}\]
It is easy to see that if \(\lambda\) is a positive continuous function on \(U\) then it is an admissible weight on \(U\). We treat a class of more general admissible weights in Section 3.2.
If \(\lambda\) is an admissible weight on \(U\), a standard argument shows that \(A^{p}(U,\lambda)\) is a closed subspace of \(L^{p}(U,\lambda)\), and therefore a Banach space. It is called a _weighted Bergman space_.
(4) We are interested in Reinhardt domains \(\Omega\) and phenomena which are invariant under rotational symmetry. Therefore, we consider only weights \(\lambda\) on \(\Omega\) which are both admissible
and _multi-radial_, in the sense that there is a function \(\ell\) on the Reinhardt shadow \(|\Omega|\) such that \(\lambda(z_{1},\dots,z_{n})=\ell(\left|z_{1}\right|,\dots,\left|z_{n}\right|)\).
5. For \(\alpha\in\mathbb{Z}^{n}\), we denote by \(e_{\alpha}\) the Laurent monomial of exponent \(\alpha\): \[e_{\alpha}(z)=z_{1}^{\alpha_{1}}\dots z_{n}^{\alpha_{n}}.\] (1.6)
6. We define the set of \(p\)_-allowable indices_ to be the collection \[\mathcal{S}_{p}(\Omega,\lambda)=\left\{\alpha\in\mathbb{Z}^{n}:e_{\alpha}\in A^{p}(\Omega,\lambda)\right\}.\] (1.7) If \(\lambda\equiv 1\), we abbreviate \(\mathcal{S}_{p}(\Omega,1)\) by \(\mathcal{S}_{p}(\Omega)\).
7. The map \(\chi_{p}:\mathbb{C}^{n}\to\mathbb{C}^{n}\) defined by \[\chi_{p}(\zeta)=\left(\zeta_{1}\left|\zeta_{1}\right|^{p-2},\cdots,\zeta_{n}\left|\zeta_{n}\right|^{p-2}\right)\] (1.8) will be referred to as the _twisting map_. It appears in the definition of the Monomial Basis Kernel in (1.10), and arises also in the duality pairing (7.5). Given a function \(f\) we denote by \(\chi_{p}^{*}f\) its pullback under \(\chi_{p}\): \[\chi_{p}^{*}f=f\circ\chi_{p}.\] (1.9)
### The Monomial Basis Kernel
When it exists, the MBP of \(A^{p}(\Omega,\lambda)\) is (by construction) a bounded surjective projection, which we write \(\mathbf{P}^{\Omega}_{p,\lambda}:L^{p}(\Omega,\lambda)\to A^{p}(\Omega,\lambda)\). To obtain an integral formula analogous to (1.1), we define the _Monomial Basis Kernel_ of \(A^{p}(\Omega,\lambda)\) (abbreviated _MBK_), as the formal series on \(\Omega\times\Omega\) given by
\[K^{\Omega}_{p,\lambda}(z,w)=\sum_{\alpha\in\mathcal{S}_{p}(\Omega,\lambda)} \frac{e_{\alpha}(z)\overline{\chi_{p}^{*}e_{\alpha}(w)}}{\left\|e_{\alpha} \right\|_{p,\lambda}^{p}}. \tag{1.10}\]
When \(p=2\), the MBK coincides with the Bergman kernel of \(A^{2}(\Omega,\lambda)\), in which case the above series is known to converge locally normally on \(\Omega\times\Omega\). For a general \(1<p<\infty\), we show in Theorem 3.3 that when \(\Omega\) is pseudoconvex, the series (1.10) also converges locally normally on \(\Omega\times\Omega\). In Theorem 3.13 we prove that the MBP admits the representation
\[\mathbf{P}^{\Omega}_{p,\lambda}(f)(z)=\int_{\Omega}K^{\Omega}_{p,\lambda}(z,w)f(w )\lambda(w)dV(w),\qquad f\in L^{p}(\Omega,\lambda). \tag{1.11}\]
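As a quick sanity check (our computation, not part of the paper's exposition), take \(\Omega=\mathbb{D}\) the unit disc and \(\lambda\equiv 1\). Then \(\mathcal{S}_{p}(\mathbb{D})=\{\alpha\geq 0\}\) and \(\left\|e_{\alpha}\right\|_{p,1}^{p}=\int_{\mathbb{D}}|z|^{p\alpha}\,dV=\frac{2\pi}{p\alpha+2}\), so (1.10) becomes

\[K_{p,1}^{\mathbb{D}}(z,w)=\sum_{\alpha=0}^{\infty}\frac{p\alpha+2}{2\pi}\,z^{\alpha}\,\bar{w}^{\alpha}|w|^{(p-2)\alpha}.\]

For \(p=2\) this reduces to \(\frac{1}{\pi}\sum_{\alpha\geq 0}(\alpha+1)(z\bar{w})^{\alpha}=\frac{1}{\pi(1-z\bar{w})^{2}}\), the familiar Bergman kernel of the disc, as expected.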
### Improved \(L^{p}\)-mapping behavior
The main theme of this paper is that the Monomial Basis Projection can have better mapping properties in \(L^{p}\) spaces than the Bergman projection. In Section 6 we illustrate this on nonsmooth pseudoconvex Reinhardt domains called _monomial polyhedra_ (see [10, 2]). A bounded domain \(\mathscr{U}\subset\mathbb{C}^{n}\) is a monomial polyhedron in our sense, if there are exactly \(n\) monomials \(e_{\alpha^{1}},\dots,e_{\alpha^{n}}\) such that
\[\mathscr{U}=\left\{z\in\mathbb{C}^{n}:\left|e_{\alpha^{1}}(z)\right|<1,\dots, \left|e_{\alpha^{n}}(z)\right|<1\right\}.\]
We recall the \(L^{p}\)-mapping behavior of the Bergman projection on \(\mathscr{U}\):
**Proposition 1.12** ([2]).: _There is a positive integer \(\kappa(\mathscr{U})\) such that the Bergman projection on \(\mathscr{U}\) is bounded in the \(L^{p}\)-norm if and only if_
\[\frac{2\kappa(\mathscr{U})}{\kappa(\mathscr{U})+1}<p<\frac{2\kappa(\mathscr{U })}{\kappa(\mathscr{U})-1}. \tag{1.13}\]
Examples of monomial polyhedra in \(\mathbb{C}^{2}\) are the (rational) generalized Hartogs triangles studied in [1, 1]. Define \(\mathbb{H}_{\gamma}=\{\left|z_{1}\right|^{\gamma}<\left|z_{2}\right|<1\}\), \(\gamma>0\). If \(\gamma=\frac{m}{n}\) is rational, \(\gcd(m,n)=1\), this domain is a monomial polyhedron with \(\alpha^{1}=(m,-n),\alpha^{2}=(0,1)\). In this case it can be shown that \(\kappa(\mathbb{H}_{m/n})=m+n\), yielding the interval \(p\in\left(\frac{2m+2n}{m+n+1},\frac{2m+2n}{m+n-1}\right)\) from (1.13) on which the Bergman projection is \(L^{p}\)-bounded. We also note the case of \(\mathbb{H}_{\gamma}\)
\(\gamma\) irrational - which is _not a monomial polyhedron_ by our definition. On these domains, it is shown in [1] that the Bergman projection is \(L^{p}\)-bounded if and only if \(p=2\).
This limited \(L^{p}\)-regularity is one of several deficiencies that can arise when the Bergman projection acts on \(L^{p}\) spaces of nonsmooth domains; other possible defects such as a lack of surjectivity onto \(A^{p}\) are discussed in Section 8. The Monomial Basis Projection avoids these defects and is shown to have far more favorable mapping behavior. Define for \(1<p<\infty\) the "absolute" operator of \(A^{p}(\mathscr{U})\) by
\[(\boldsymbol{P}_{p,1}^{\mathscr{U}})^{+}(f)(z)=\int_{\mathscr{U}}\left|K_{p,1 }^{\mathscr{U}}(z,w)\right|f(w)\,dV(w). \tag{1.14}\]
**Theorem 1.15**.: _Let \(1<p<\infty\) and let \(\mathscr{U}\subset\mathbb{C}^{n}\) be a monomial polyhedron. Then the operator \((\boldsymbol{P}_{p,1}^{\mathscr{U}})^{+}\) is bounded from \(L^{p}(\mathscr{U})\) to itself._
After setting the stage in Sections 4 and 5, the proof of Theorem 1.15 is finally carried out in Section 6. An application of this result is given in Section 7, where we represent the dual space \(A^{p}(\mathscr{U})^{\prime}\) as a _weighted_ Bergman space on \(\mathscr{U}\); see Theorem 7.17.
**Corollary 1.16**.: _The Monomial Basis Projection is a bounded surjective projection operator \(\boldsymbol{P}_{p,1}^{\mathscr{U}}:L^{p}(\mathscr{U})\to A^{p}(\mathscr{U})\)._
Proof.: It is clear that the boundedness of the operator \((\boldsymbol{P}_{p,1}^{\mathscr{U}})^{+}\) on \(L^{p}(\mathscr{U})\) implies the boundedness on \(L^{p}(\mathscr{U})\) of the integral operator in (1.11). However, in Proposition 3.22, we will show that whenever this integral operator satisfies \(L^{p}\) estimates, it coincides with the Monomial Basis Projection \(\boldsymbol{P}_{p,1}^{\mathscr{U}}:L^{p}(\mathscr{U})\to A^{p}(\mathscr{U})\). The MBP is a surjective projection operator whenever its defining series (2.15) converges.
### Acknowledgements
The authors thank Zeljko Cuckovic, Bernhard Lamel, Laszlo Lempert, Jeff McNeal and Brett Wick for their comments and suggestions, which led to mathematical and organizational improvements in this paper.
## 2. Basis Projections
### Bases in Banach spaces
Since our application uses bases indexed by multi-indices, we need a slightly more general notion of a basis in a Banach space than that of a Schauder basis described in Section 1.2. For a multi-index \(\alpha\in\mathbb{Z}^{n}\), let \(\left|\alpha\right|_{\infty}=\max_{1\leq j\leq n}\left|\alpha_{j}\right|\).
**Definition 2.1**.: Let \(A\) be a Banach space, \(n\) a positive integer and \(\mathfrak{A}\subset\mathbb{Z}^{n}\) a set of multi-indices. A collection \(\{e_{\alpha}:\alpha\in\mathfrak{A}\}\) of elements of \(A\) is said to form a _Banach-space basis_ of \(A\) if for each \(f\in A\), there are unique complex numbers \(\{c_{\alpha}:\alpha\in\mathfrak{A}\}\) such that
\[f=\lim_{N\to\infty}\sum_{\begin{subarray}{c}\left|\alpha\right|_{\infty}\leq N \\ \alpha\in\mathfrak{A}\end{subarray}}c_{\alpha}e_{\alpha}, \tag{2.2}\]
where the sequence of partial sums converges to \(f\) in the norm-topology of \(A\). The sums on the right hand side of (2.2) whose limit is taken are called _square partial sums_.
Schauder bases are special cases of this definition corresponding to taking \(n=1\) and \(\mathfrak{A}\) the set of positive integers. A related notion is that of a finite dimensional _Schauder decomposition_ (see [11]). A Banach-space basis in our sense determines a Schauder decomposition of the Banach space \(A\) into the finite-dimensional subspaces \(A_{n}=\operatorname{span}\{e_{\alpha}:\left|\alpha\right|_{\infty}=n\}\), \(n\geq 0\).
Adapting a classical proof ([11, Proposition 1.a.2]), is not difficult to see that for each \(\alpha\in\mathfrak{A}\), the map \(a_{\alpha}:A\to\mathbb{C}\) assigning to an element \(x\in A\) the coefficient \(c_{\alpha}\) of the series (2.2) is a bounded linear functional on \(A\). The collection of functionals \(\{a_{\alpha}:\alpha\in\mathfrak{A}\}\) is called the set of _coefficient functionals dual to the basis_\(\{e_{\alpha}:\alpha\in\mathfrak{A}\}\).
### Unique Hahn-Banach extension
Recall that a normed linear space is said to be _strictly convex_, if for distinct vectors \(f,g\) of unit norm, we have \(\left\|f+g\right\|<2\).
**Proposition 2.3** ([14]).: _If \(L\) is a Banach space such that its normed dual \(L^{\prime}\) is strictly convex, and \(f:A\to\mathbb{C}\) is a bounded linear functional on a subspace \(A\subset L\), then \(f\) admits a unique norm-preserving extension as a linear functional on \(L\)._
Proof.: That at least one functional extending \(f\) and having the same norm exists is the content of the Hahn-Banach theorem. Without loss of generality, the norm of \(f\) as an element of \(A^{\prime}\) is \(1\). Suppose that \(f\) admits two distinct extensions \(f_{1},f_{2}\in L^{\prime}\) such that \(\left\|f_{1}\right\|_{L^{\prime}}=\left\|f_{2}\right\|_{L^{\prime}}=1\). Then \(g=\frac{1}{2}(f_{1}+f_{2})\) is yet another extension of \(f\) to an element of \(L^{\prime}\), so \(\left\|g\right\|_{L^{\prime}}\geq\left\|f\right\|_{A^{\prime}}=1\). On the other hand, thanks to the strict convexity of \(L^{\prime}\), we have \(\left\|g\right\|_{L^{\prime}}<\frac{1}{2}\cdot 2=1\). This contradiction shows that \(f_{1}=f_{2}\).
The examples of unique Hahn-Banach extensions in this paper arise from the following:
**Proposition 2.4**.: _Let \((X,\mathcal{F},\mu)\) be a measure space, and \(1<p<\infty\). The dual of \(L^{p}(\mu)\) is strictly convex._
Proof.: Since the dual of \(L^{p}(\mu)\) can be isometrically identified with \(L^{q}(\mu)\) where \(q\) is the exponent conjugate to \(p\), it suffices to check that \(L^{q}(\mu)\) is strictly convex. Let \(f,g\) be distinct elements of \(L^{q}(\mu)\) such that \(\left\|f\right\|_{q}=\left\|g\right\|_{q}=1\). Suppose we have \(\left\|f+g\right\|_{q}=2=\left\|f\right\|_{q}+\left\|g\right\|_{q}\), so that we have equality in the Minkowski triangle inequality for \(L^{q}(\mu)\). It is well-known that equality occurs in the Minkowski triangle inequality only if \(f=cg\) for some \(c>0\). But since \(\left\|f\right\|_{q}=\left\|g\right\|_{q}=1\) this gives that \(c=1\), which is a contradiction since \(f\neq g\). Therefore \(\left\|f+g\right\|_{q}<2\) showing that \(L^{q}(\mu)\) is strictly convex.
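For orientation, we note (a standard example, included here only as an illustration) that the hypothesis \(1<p<\infty\) cannot be dropped: for Lebesgue measure on \([0,1]\), the dual of \(L^{1}\) is \(L^{\infty}\), which is not strictly convex, since \(f\equiv 1\) and \(g=\chi_{[0,1/2]}\) are distinct elements of unit norm with \(\left\|f+g\right\|_{\infty}=2\).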
### Basis projections
Let \(L\) be a Banach space such that its dual is strictly convex, \(A\) be a closed subspace, the collection \(\{e_{\alpha}:\alpha\in\mathfrak{A}\}\) a Banach-space basis of \(A\) in the sense of Definition 2.1, and let \(\{a_{\alpha}:\alpha\in\mathfrak{A}\}\) be the coefficient functionals dual to this basis. Let \(\widetilde{a}_{\alpha}:L\to\mathbb{C}\) be the unique Hahn-Banach extension of the functional \(a_{\alpha}:A\to\mathbb{C}\), where uniqueness follows by Proposition 2.3.
**Definition 2.5**.: A bounded linear projection operator \(\boldsymbol{P}\) from \(L\) onto \(A\) is called the _basis projection_ determined by \(\{e_{\alpha}:\alpha\in\mathfrak{A}\}\), if for each \(f\in L\), we have a series representation convergent in the norm of \(L\) given by
\[\boldsymbol{P}f=\lim_{N\to\infty}\sum_{\begin{subarray}{c}|\alpha|_{\infty} \leq N\\ \alpha\in\mathfrak{A}\end{subarray}}\widetilde{a}_{\alpha}(f)e_{\alpha}. \tag{2.6}\]
### The Szego projection
Let \(1<p<\infty\), \(L=L^{p}(\mathbb{T})\), the \(L^{p}\)-space of the circle with the normalized Haar measure \(\frac{1}{2\pi}d\theta\), and \(A=H^{p}(\mathbb{D})\), the Hardy space of the unit disc, the subspace of \(L^{p}(\mathbb{T})\) consisting of those elements of \(L^{p}(\mathbb{T})\) which are boundary values of holomorphic functions in the disc. Let \(\tau_{\alpha}(e^{i\theta})=e^{i\alpha\theta}\), \(\alpha\in\mathbb{Z}\), denote the \(\alpha\)-th trigonometric monomial on \(\mathbb{T}\). It is well-known that \(\{\tau_{\alpha}:\alpha\geq 0\}\) is a (normalized) Schauder basis of \(H^{p}(\mathbb{D})\), i.e., the partial sums of the Fourier series of a function in \(H^{p}(\mathbb{D})\) converge in the norm of \(L^{p}(\mathbb{T})\). Notice that Schauder bases are simply Banach-space bases in the sense of Definition 2.1 where \(\mathfrak{A}\) is the set of positive integers.
**Proposition 2.7**.: _For \(1<p<\infty\), the basis projection from \(L^{p}(\mathbb{T})\) onto \(H^{p}(\mathbb{D})\) determined by the Schauder basis \(\{\tau_{\alpha}\}_{\alpha=0}^{\infty}\) exists, and coincides with the Szego projection._
Proof.: The coefficient functionals on \(H^{p}(\mathbb{D})\) dual to the Schauder basis \(\{\tau_{\alpha}:\alpha\geq 0\}\) are precisely the Fourier coefficient functionals \(\{a_{\alpha}\}_{\alpha=0}^{\infty}\):
\[a_{\alpha}(f)=\int_{0}^{2\pi}f(e^{i\theta})e^{-i\alpha\theta}\frac{d\theta}{2 \pi},\qquad f\in H^{p}(\mathbb{D}). \tag{2.8}\]
Notice that for \(f\in H^{p}(\mathbb{D})\), we have
\[|a_{\alpha}(f)|\leq\int_{0}^{2\pi}\left|f(e^{i\theta})\right|\frac{d\theta}{2\pi }\leq\left\|f\right\|_{L^{p}(\mathbb{T})}\left\|1\right\|_{L^{q}(\mathbb{T})}= \left\|f\right\|_{L^{p}(\mathbb{T})}, \tag{2.9}\]
where \(q\) is the Holder conjugate of \(p\), and we use Holder's inequality along with the fact that the measure is a probability measure. Therefore \(\left\|a_{\alpha}\right\|\leq 1\). But since \(\left\|\tau_{\alpha}\right\|_{L^{p}(\mathbb{T})}=1\), and \(a_{\alpha}(\tau_{\alpha})=1\), it follows that \(\left\|a_{\alpha}\right\|=1\). We now claim that the Hahn-Banach extension \(\widetilde{a}_{\alpha}:L^{p}(\mathbb{T})\to\mathbb{C}\) of the coefficient functional \(a_{\alpha}:H^{p}(\mathbb{D})\to\mathbb{C}\) is still the Fourier coefficient functional:
\[\widetilde{a}_{\alpha}(f)=\int_{0}^{2\pi}f(e^{i\theta})e^{-i\alpha\theta} \frac{d\theta}{2\pi},\qquad f\in L^{p}(\mathbb{T}).\]
Indeed, \(\widetilde{a}_{\alpha}\) is an extension of \(a_{\alpha}\), and repeating the argument of (2.9) shows \(\left\|\widetilde{a}_{\alpha}\right\|\leq 1\); since \(\widetilde{a}_{\alpha}\) extends \(a_{\alpha}\), which has norm \(1\), in fact \(\left\|\widetilde{a}_{\alpha}\right\|=1\), and thus it is a norm-preserving (Hahn-Banach) extension. Uniqueness follows from Propositions 2.3 and 2.4.
Let \(\boldsymbol{S}\) denote the basis projection from \(L^{p}(\mathbb{T})\) onto \(H^{p}(\mathbb{D})\) and let \(f\in L^{p}(\mathbb{T})\) be a trigonometric polynomial. Then formula (2.6) in this case becomes:
\[\boldsymbol{S}f(e^{i\phi})=\sum_{\alpha=0}^{\infty}\left(\int_{0}^{2\pi}f(e^{ i\theta})e^{-i\alpha\theta}\frac{d\theta}{2\pi}\right)e^{i\alpha\phi}=\int_{0}^{2 \pi}\frac{f(e^{i\theta})}{1-e^{i(\phi-\theta)}}\cdot\frac{d\theta}{2\pi}.\]
This shows that on the trigonometric polynomials, the basis projection coincides with the Szego projection, which is known to be represented by the singular integral at the end of the above chain of equalities. But as the Szego projection is bounded from \(L^{p}(\mathbb{T})\) onto \(H^{p}(\mathbb{D})\), it follows that the basis projection exists and equals the Szego projection on \(L^{p}(\mathbb{T})\).
### The Monomial Basis Projection
On a Reinhardt domain \(\Omega\subset\mathbb{C}^{n}\) each holomorphic function \(f\in\mathcal{O}(\Omega)\) has a unique _Laurent expansion_
\[f=\sum_{\alpha\in\mathbb{Z}^{n}}c_{\alpha}e_{\alpha}, \tag{2.10}\]
where \(c_{\alpha}\in\mathbb{C}\) and the series converges locally normally, i.e., for each compact \(K\subset\Omega\), the sum \(\sum_{\alpha}\left\|c_{\alpha}e_{\alpha}\right\|_{K}<\infty\), where \(\left\|\cdot\right\|_{K}=\sup_{K}\left|\cdot\right|\) is the sup norm (see e.g. [10]). It follows that (2.10) converges uniformly on compact subsets of \(\Omega\). Define
\[a_{\alpha}:\mathcal{O}(\Omega)\to\mathbb{C},\qquad a_{\alpha}(f)=c_{\alpha} \tag{2.11}\]
where \(c_{\alpha}\) is as above in (2.10). The functional \(a_{\alpha}\) is called the \(\alpha\)-th _Laurent coefficient functional_ of the domain \(\Omega\).
The following result shows that the Laurent monomials (under an appropriate ordering) form a basis of the Bergman space \(A^{p}(\Omega,\lambda)\), where \(\lambda\) is an admissible multi-radial weight. The unweighted version of this result (the case \(\lambda\equiv 1\)) was proved in [10], inspired by the case of the disc considered in [11]. The more general Theorem 2.12 is proved in exactly the same way, by replacing the implicit weight \(\lambda\equiv 1\) in [10, Theorem 3.11] with a general multi-radial weight \(\lambda\). A key ingredient of the proof, the density of Laurent polynomials in \(A^{p}(\Omega,\lambda)\), can also be proved using Cesaro summability of power series (see [10, Theorem 2.5]). Recall that the notation and conventions established in Section 1.4 are in force throughout the paper.
**Theorem 2.12**.: _The collection of Laurent monomials \(\{e_{\alpha}:\alpha\in\mathcal{S}_{p}(\Omega,\lambda)\}\) forms a Banach-space basis of \(A^{p}(\Omega,\lambda)\). The functionals dual to this basis are the coefficient functionals \(\{a_{\alpha}:\alpha\in\mathcal{S}_{p}(\Omega,\lambda)\}\), and the norm of \(a_{\alpha}:A^{p}(\Omega,\lambda)\to\mathbb{C}\) is given by_
\[\left\|a_{\alpha}\right\|_{A^{p}(\Omega,\lambda)^{\prime}}=\frac{1}{\left\|e_{ \alpha}\right\|_{p,\lambda}}. \tag{2.13}\]
Thus, if \(f\in A^{p}(\Omega,\lambda)\), the Laurent series of \(f\) written as \(\sum_{\alpha\in\mathbb{Z}^{n}}a_{\alpha}(f)e_{\alpha}\) consists only of terms corresponding to monomials \(e_{\alpha}\in A^{p}(\Omega,\lambda)\), i.e., if \(\alpha\not\in\mathcal{S}_{p}(\Omega,\lambda)\), then \(a_{\alpha}(f)=0\).
We are ready to formally define the main object of this paper:
**Definition 2.14**.: A bounded linear projection \(\boldsymbol{P}^{\Omega}_{p,\lambda}\) from \(L^{p}(\Omega,\lambda)\) onto \(A^{p}(\Omega,\lambda)\) is called the _Monomial Basis Projection_ of \(A^{p}(\Omega,\lambda)\), if for \(f\in L^{p}(\Omega,\lambda)\) it admits the series representation convergent in the norm of \(L^{p}(\Omega,\lambda)\) given by
\[\boldsymbol{P}^{\Omega}_{p,\lambda}(f)=\lim_{N\to\infty}\sum_{ \begin{subarray}{c}|\alpha|_{\infty}\leq N\\ \alpha\in\mathcal{S}_{p}(\Omega,\lambda)\end{subarray}}\widetilde{a}_{\alpha} (f)e_{\alpha}, \tag{2.15}\]
where \(\widetilde{a}_{\alpha}:L^{p}(\Omega,\lambda)\to\mathbb{C}\) is the unique Hahn-Banach extension of the coefficient functional \(a_{\alpha}:A^{p}(\Omega,\lambda)\to\mathbb{C}\).
_Remark 2.16_.: The surjectivity onto the space \(A^{p}(\Omega,\lambda)\) is built into the definition of the Monomial Basis Projection, since it acts as the identity operator there. Notice that the MBP is a basis projection in the sense of Definition 2.5, when \(L=L^{p}(\Omega,\lambda)\), \(A=A^{p}(\Omega,\lambda)\) and \(\{e_{\alpha}\}\) is the monomial basis of \(A^{p}(\Omega,\lambda)\). \(\Diamond\)
## 3. The monomial basis kernel
### Existence of the kernel function
The Monomial Basis Kernel of \(A^{p}(\Omega,\lambda)\) was introduced as a formal series in (1.10). Using (1.8) and (1.9), we can write
\[\chi_{p}^{*}e_{\alpha}(w)=e_{\alpha}(w)\left|e_{\alpha}(w)\right|^{p-2}, \tag{3.1}\]
which allows for the re-expression of the MBK as
\[K_{p,\lambda}^{\Omega}(z,w)=\sum_{\alpha\in\mathcal{S}_{p}(\Omega,\lambda)} \frac{e_{\alpha}(z)\overline{e_{\alpha}(w)}\left|e_{\alpha}(w)\right|^{p-2}}{ \left\|e_{\alpha}\right\|_{p,\lambda}^{p}}. \tag{3.2}\]
A sufficient condition for the convergence of this series is now given.
**Theorem 3.3**.: _Let \(\Omega\) be a pseudoconvex Reinhardt domain in \(\mathbb{C}^{n}\) and \(\lambda\) be an admissible multi-radial weight function on \(\Omega\). The series (3.2) defining \(K_{p,\lambda}^{\Omega}(z,w)\) converges locally normally on \(\Omega\times\Omega\)._
We need two lemmas for the proof of this result. The first is an analog for Laurent series of Abel's lemma on the domain of convergence of a Taylor series ([12, p. 14]):
**Lemma 3.4**.: _Let \(\Omega\subset\mathbb{C}^{n}\) be a Reinhardt domain, define \(\mathcal{S}(\Omega)=\{\alpha\in\mathbb{Z}^{n}:e_{\alpha}\in\mathcal{O}(\Omega)\}\), and for coefficients \(a_{\alpha}\in\mathbb{C},\,\alpha\in\mathcal{S}(\Omega)\), let_
\[\sum_{\alpha\in\mathcal{S}(\Omega)}a_{\alpha}e_{\alpha} \tag{3.5}\]
_be a formal Laurent series on \(\Omega\). Suppose that for each \(z\in\Omega\) there is a \(C>0\) such that for each \(\alpha\in\mathcal{S}(\Omega)\) we have \(|a_{\alpha}e_{\alpha}(z)|\leq C.\) Then (3.5) converges locally normally on \(\Omega\)._
Proof.: See Lemma 1.6.3 and Proposition 1.6.5 of [11, Section 1.6].
Given a Reinhardt domain \(\Omega\subset\mathbb{C}^{n}\) and a number \(m>0\), define the \(m\)_-th Reinhardt power_ of \(\Omega\) to be the Reinhardt domain
\[\Omega^{(m)}=\left\{z\in\mathbb{C}^{n}:\left(|z_{1}|^{\frac{1}{m}},\ldots,|z_{ n}|^{\frac{1}{m}}\right)\in\Omega\right\}. \tag{3.6}\]
If \(\Omega\) is pseudoconvex, then for each \(m>0\) the domain \(\Omega^{(m)}\) is pseudoconvex. Indeed, recall the logarithmic shadow of \(\Omega\), the subset \(\log(\Omega)\) of \(\mathbb{R}^{n}\) given by
\[\log(\Omega)=\{(\log\left|z_{1}\right|,\ldots,\log\left|z_{n}\right|):z\in \Omega\}. \tag{3.7}\]
Recall also that \(\Omega\) is pseudoconvex if and only if the set \(\log(\Omega)\) is convex, and \(\Omega\) is "weakly relatively complete" ([10, Theorem 1.11.13 and Proposition 1.11.6]). It is easily seen that the condition of weak relative completeness is preserved by the construction of Reinhardt powers, and
\[\log\left(\Omega^{(m)}\right)=\{(m\log\left|z_{1}\right|,\ldots,m\log\left|z_{ n}\right|):z\in\Omega\}=m\log(\Omega)\]
is itself convex, if \(\log(\Omega)\) is convex. So \(\Omega^{(m)}\) is pseudoconvex if and only if \(\Omega\) is pseudoconvex.
The second result needed in the proof of Theorem 3.3 is the following:
**Lemma 3.8**.: _Let \(A\) be a Banach space of holomorphic functions on \(\Omega\) and suppose that for each \(z\in\Omega\) the evaluation functional \(\phi_{z}:A\to\mathbb{C}\) given by \(\phi_{z}(f)=f(z)\) for \(f\in A\) is continuous. Then for \(m>0\), the following series converges locally normally on \(\Omega^{(m)}\):_
\[\sum_{\begin{subarray}{c}\alpha\in\mathbb{Z}^{n}\\ e_{\alpha}\in A\end{subarray}}\frac{e_{\alpha}}{\left\|e_{\alpha}\right\|_{A} ^{m}}.\]
Proof.: Let \(z\in\Omega^{(m)}\) so that there is \(\zeta\in\Omega\) such that \(\left|z_{j}\right|=\left|\zeta_{j}\right|^{m}\) for each \(j\). If \(\phi_{\zeta}:A\to\mathbb{C}\) is the evaluation functional, there is a constant \(C>0\) such that \(\left|\phi_{\zeta}(f)\right|\leq C\left\|f\right\|_{A}\) for each \(f\in A\). Then for each \(\alpha\in\mathbb{Z}^{n}\) such that \(e_{\alpha}\in A\) we have
\[\frac{\left|e_{\alpha}(z)\right|}{\left\|e_{\alpha}\right\|_{A}^{m}}=\left(\frac{\left|e_{\alpha}(\zeta)\right|}{\left\|e_{\alpha}\right\|_{A}}\right)^{m}=\left(\frac{\left|\phi_{\zeta}(e_{\alpha})\right|}{\left\|e_{\alpha}\right\|_{A}}\right)^{m}\leq C^{m}.\]
The result now follows by Lemma 3.4.
Proof of Theorem 3.3.: Let \(t_{j}=z_{j}\overline{w_{j}}\left|w_{j}\right|^{p-2}\), \(1\leq j\leq n\), and \(t=(t_{1},\ldots,t_{n})\). Then the series for the MBK given in (3.2) assumes the form
\[K_{p,\lambda}^{\Omega}(z,w)=\sum_{\alpha\in\mathcal{S}_{p}(\Omega,\lambda)} \frac{t^{\alpha}}{\left\|e_{\alpha}\right\|_{p,\lambda}^{p}}. \tag{3.9}\]
Since Bergman's inequality (1.5) holds for admissible weights by definition, point evaluations are bounded on \(A^{p}(\Omega,\lambda)\). Lemma 3.8 therefore guarantees the series in (3.9) above converges locally normally on \(\Omega^{(p)}\) defined in (3.6). It thus suffices to show that the image of the map \(\Omega\times\Omega\to\mathbb{C}^{n}\) given by
\[(z,w)\longmapsto(t_{1},\ldots,t_{n})\]
coincides with \(\Omega^{(p)}\), since then the image of a compact set \(K\subset\Omega\times\Omega\) is a compact subset of \(\Omega^{(p)}\), on which the series (3.9) is known to converge normally.
Now consider the logarithmic shadow \(\log(\Omega\times\Omega)=\log(\Omega)\times\log(\Omega)\) defined in (3.7). Due to the log-convexity of pseudoconvex Reinhardt domains, what we want to prove is equivalent to saying that the map from \(\log(\Omega)\times\log(\Omega)\to\mathbb{R}^{n}\) given by
\[(\xi,\eta)\longmapsto\xi+(p-1)\eta \tag{3.10}\]
has image exactly \(p\log(\Omega)=\{p\theta:\theta\in\log(\Omega)\}=\log\left(\Omega^{(p)}\right)\). But since \(\log(\Omega)\) is convex, the map on \(\log(\Omega)\times\log(\Omega)\) given by
\[(\xi,\eta)\longmapsto\tfrac{1}{p}\xi+\left(1-\tfrac{1}{p}\right)\eta\]
has image contained in \(\log(\Omega)\). Taking \(\xi=\eta\) we see that the image is exactly \(\log(\Omega)\). Therefore the image of (3.10) is precisely \(p\log(\Omega)\), and we have proved that the series (3.2) converges locally normally on \(\Omega\times\Omega\).
### More general admissible weights
Continuous positive functions \(\lambda\) are always admissible weights in the sense of Section 1.4, item (3). In Sections 4, 5, 6 and 7 below, we encounter more general multi-radial weights which vanish or blow up along the axes. Let \(Z\subset\mathbb{C}^{n}\) denote the union of the coordinate hyperplanes
\[Z=\{z\in\mathbb{C}^{n}:z_{j}=0\text{ for some }1\leq j\leq n\}.\]
**Proposition 3.11**.: _Let \(U\) be a domain in \(\mathbb{C}^{n}\) and let \(U^{*}=U\setminus Z\). Suppose that \(\lambda:U\to[0,\infty]\) is a measurable function on \(U\) such that the restriction \(\lambda|_{U^{*}}\) is an admissible weight on \(U^{*}\). Then \(\lambda\) is an admissible weight on \(U\)._
Proof.: Assume that \(U\cap Z\neq\varnothing\), since otherwise there is nothing to show, and set \(\lambda^{*}=\lambda|_{U^{*}}\). If \(f\in A^{p}(U,\lambda)\) and a compact \(K\) is contained in \(U^{*}\), then, since \(\lambda^{*}\) is admissible on \(U^{*}\), there exists a \(C_{K}>0\) such that
\[\sup_{K}|f|\leq C_{K}\left\|f\right\|_{A^{p}(U^{*},\lambda^{*})}=C_{K}\left\| f\right\|_{A^{p}(U,\lambda)}.\]
To complete the proof, we need to show that for each \(\zeta\in U\cap Z\), there is a compact neighborhood \(K\) of \(\zeta\) in \(U\) such that (1.5) holds for each \(f\in A^{p}(U,\lambda)\). Now, there is a polydisc \(P\) centered at \(\zeta\) given by \(P=\{z\in\mathbb{C}^{n}:|z_{j}-\zeta_{j}|<r,\,1\leq j\leq n\}\) such that the closure \(\overline{P}\) is contained in \(U\). We can assume further that the radius \(r>0\) is chosen so that it is distinct from each of the nonnegative numbers \(\left|\zeta_{j}\right|,\,1\leq j\leq n\). Then the "distinguished boundary"
\[T=\{z\in\mathbb{C}^{n}:|z_{j}-\zeta_{j}|=r,\,1\leq j\leq n\}\]
of this polydisc satisfies the condition that \(T\subset U^{*}\). Therefore for each \(f\in\mathcal{O}(U)\) and each \(w\in P\), we have the Cauchy representation:
\[f(w)=\frac{1}{(2\pi i)^{n}}\int_{T}\frac{f(z_{1},\ldots,z_{n})}{(z_{1}-w_{1}) \ldots(z_{n}-w_{n})}\,dz_{1}\ldots dz_{n} \tag{3.12}\]
where the integral is an \(n\)-times repeated contour integral on \(T\). Now suppose that \(K\) is a compact subset of \(P\) containing the center \(\zeta\), and let \(\rho>0\) be such that \(|z_{j}-w_{j}|\geq\rho\) for each \(z\in T\) and \(w\in K\). Then for \(w\in K\), a sup-norm estimate on (3.12) gives
\[|f(w)|\leq\frac{1}{(2\pi)^{n}}\cdot\frac{\sup_{T}|f|}{\rho^{n}}(2\pi r)^{n}\leq C_{T}\left(\frac{r}{\rho}\right)^{n}\cdot\left\|f\right\|_{A^{p}(U^{*},\lambda^{*})}=C_{T}\left(\frac{r}{\rho}\right)^{n}\cdot\left\|f\right\|_{A^{p}(U,\lambda)}\]
where \(C_{T}>0\) is the constant in (1.5) for the compact set \(T\subset U^{*}\), using the fact that \(\lambda^{*}\) is admissible on \(U^{*}\). The result follows.
### Integral representation of the Monomial Basis Projection
**Theorem 3.13**.: _If the Monomial Basis Projection \(\boldsymbol{P}^{\Omega}_{p,\lambda}:L^{p}(\Omega,\lambda)\to A^{p}(\Omega,\lambda)\) exists, then_
\[\boldsymbol{P}^{\Omega}_{p,\lambda}(f)(z)=\int_{\Omega}K^{\Omega}_{p,\lambda} (z,w)f(w)\lambda(w)dV(w),\qquad f\in L^{p}(\Omega,\lambda), \tag{3.14}\]
_and for each \(z\in\Omega\), we have \(K^{\Omega}_{p,\lambda}(z,\cdot)\in L^{q}(\Omega,\lambda)\)._
When \(p=2\), this is simply the representation of the Bergman projection \(\boldsymbol{B}^{\Omega}_{\lambda}\) of \(A^{2}(\Omega,\lambda)\) by its Bergman kernel. But the existence of the MBP of \(A^{p}(\Omega,\lambda)\) for \(p\neq 2\) is not guaranteed by abstract Hilbert-space theory. We note a related consequence of Theorem 3.13, which should be contrasted with Proposition 2.7:
**Corollary 3.15**.: _Suppose the Bergman projection \(\boldsymbol{B}^{\Omega}_{\lambda}:L^{2}(\Omega,\lambda)\to A^{2}(\Omega,\lambda)\) extends by continuity to a bounded operator \(\boldsymbol{B}^{\Omega}_{\lambda}:L^{p}(\Omega,\lambda)\to A^{p}(\Omega,\lambda)\), \(p\neq 2\). Then the extension is not the basis projection determined by the monomial basis \(\{e_{\alpha}:\alpha\in\mathcal{S}_{p}(\Omega,\lambda)\}\)._
Proof.: This is immediate, since the Bergman kernel is distinct from the MBK for \(p\neq 2\).
By Proposition 2.4, the dual space of \(L^{p}(\Omega,\lambda)\) is strictly convex. Proposition 2.3 thus guarantees that each coefficient functional in the set \(\{a_{\alpha}:\alpha\in\mathcal{S}_{p}(\Omega,\lambda)\}\) dual to the monomial basis \(\{e_{\alpha}:\alpha\in\mathcal{S}_{p}(\Omega,\lambda)\}\) has a unique Hahn-Banach extension to a functional \(\widetilde{a}_{\alpha}:L^{p}(\Omega,\lambda)\to\mathbb{C}\). We now identify this extension:
**Proposition 3.16**.: _For \(\alpha\in\mathcal{S}_{p}(\Omega,\lambda)\), let \(g_{\alpha}\) be the function defined on \(\Omega\) by_
\[g_{\alpha}=\frac{\chi_{p}^{*}e_{\alpha}}{\|e_{\alpha}\|_{p,\lambda}^{p}}=\frac{e_{\alpha}\left|e_{\alpha}\right|^{p-2}}{\|e_{\alpha}\|_{p,\lambda}^{p}}. \tag{3.17}\]
_Then the unique Hahn-Banach extension \(\widetilde{a}_{\alpha}:L^{p}(\Omega,\lambda)\to\mathbb{C}\) of the coefficient functional \(a_{\alpha}:A^{p}(\Omega,\lambda)\to\mathbb{C}\) is given by_
\[\widetilde{a}_{\alpha}(f)=\int_{\Omega}f\cdot\overline{g_{\alpha}}\lambda dV, \qquad f\in L^{p}(\Omega,\lambda). \tag{3.18}\]
Proof.: First we compute the norm of \(g_{\alpha}\) in \(L^{q}(\Omega,\lambda)\):
\[\|g_{\alpha}\|_{q,\lambda}^{q}=\frac{1}{\|e_{\alpha}\|_{p,\lambda}^{pq}}\int _{\Omega}|e_{\alpha}|^{(p-1)q}\,\lambda\,dV=\frac{1}{\|e_{\alpha}\|_{p,\lambda }^{pq}}\,\|e_{\alpha}\|_{p,\lambda}^{p}=\frac{1}{\|e_{\alpha}\|_{p,\lambda}^{ pq-p}}=\frac{1}{\|e_{\alpha}\|_{p,\lambda}^{q}}.\]
It follows that \(g_{\alpha}\in L^{q}(\Omega,\lambda)\) and the linear functional in (3.18) satisfies \(\widetilde{a}_{\alpha}\in L^{p}(\Omega,\lambda)^{\prime}\) with norm given by
\[\|\widetilde{a}_{\alpha}\|_{L^{p}(\Omega,\lambda)^{\prime}}=\|g_{\alpha}\|_{q, \lambda}=\frac{1}{\|e_{\alpha}\|_{p,\lambda}}. \tag{3.19}\]
By (2.13), we have \(\|a_{\alpha}\|_{A^{p}(\Omega,\lambda)^{\prime}}=\|\widetilde{a}_{\alpha}\|_{ L^{p}(\Omega,\lambda)^{\prime}}\). To complete the proof it remains to show that \(\widetilde{a}_{\alpha}\) is an extension of \(a_{\alpha}\).
By Theorem 2.12, the linear span of \(\{e_{\beta}:\beta\in\mathcal{S}_{p}(\Omega,\lambda)\}\) is dense in \(A^{p}(\Omega,\lambda)\). Therefore we only need to show that for each \(\beta\in\mathcal{S}_{p}(\Omega,\lambda)\), we have \(\widetilde{a}_{\alpha}(e_{\beta})=a_{\alpha}(e_{\beta})\). Since \(\lambda\) is multi-radial, there is a function \(\ell\) on the Reinhardt shadow \(|\Omega|\) such that \(\lambda(z)=\ell(|z_{1}|,\ldots,|z_{n}|)\). And since \(g_{\alpha}\in L^{q}(\Omega,\lambda)\) and \(e_{\beta}\in L^{p}(\Omega,\lambda)\), the product \(e_{\beta}\overline{g_{\alpha}}\in L^{1}(\Omega,\lambda)\). Fubini's theorem therefore implies
\[\int_{\Omega}e_{\beta}\overline{g_{\alpha}}\lambda dV=\frac{1}{\|e_{\alpha}\| _{p,\lambda}^{p}}\int_{|\Omega|}r^{\beta}(r^{\alpha})^{p-1}\left(\int_{\mathbb{ T}^{n}}e^{i(\beta-\alpha,\theta)}d\theta\right)r_{1}r_{2}\ldots r_{n}\ell dr _{1}\ldots dr_{n}, \tag{3.20}\]
where \(d\theta=d\theta_{1}\ldots d\theta_{n}\) is the natural volume element of the unit torus \(\mathbb{T}^{n}\). First suppose that \(\beta\neq\alpha\), so that the integral over \(\mathbb{T}^{n}\) on the right hand side of (3.20) vanishes. Then we have \(\int_{\Omega}e_{\beta}\overline{g_{\alpha}}\lambda dV=0=a_{\alpha}(e_{\beta})\). If \(\beta=\alpha\), (3.20) gives
\[\int_{\Omega}e_{\alpha}\overline{g_{\alpha}}\lambda dV=\frac{(2\pi)^{n}}{\|e_{ \alpha}\|_{p,\lambda}^{p}}\cdot\int_{|\Omega|}(r^{\alpha})^{p}r_{1}r_{2}\ldots r _{n}\ell dr_{1}\ldots dr_{n}=\frac{1}{\|e_{\alpha}\|_{p,\lambda}^{p}}\cdot \|e_{\alpha}\|_{p,\lambda}^{p}=1=a_{\alpha}(e_{\alpha}).\]
It follows that \(\widetilde{a}_{\alpha}\) is a norm preserving extension of \(a_{\alpha}\). Since this extension is unique, the result follows.
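By way of illustration (a special case we spell out here; it anticipates the computation (4.4) below with \(\gamma=0\)), take \(\Omega=\mathbb{D}\) and \(\lambda\equiv 1\). Then \(\left\|e_{\alpha}\right\|_{p,\lambda}^{p}=2\pi/(p\alpha+2)\) for \(\alpha\geq 0\), so (3.17) and (3.18) become

\[g_{\alpha}(w)=\frac{p\alpha+2}{2\pi}\,w^{\alpha}\left|w\right|^{\alpha(p-2)},\qquad\widetilde{a}_{\alpha}(f)=\frac{p\alpha+2}{2\pi}\int_{\mathbb{D}}f(w)\,\overline{w}^{\alpha}\left|w\right|^{\alpha(p-2)}\,dV(w),\]

and for \(p=2\) the extension reduces to the familiar Bergman coefficient functional \(\widetilde{a}_{\alpha}(f)=\frac{\alpha+1}{\pi}\int_{\mathbb{D}}f\,\overline{w}^{\alpha}\,dV\).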
Observe that by combining (3.2) and (3.17), the MBK of \(A^{p}(\Omega,\lambda)\) can be written as
\[K_{p,\lambda}^{\Omega}(z,w)=\sum_{\alpha\in\mathcal{S}_{p}(\Omega,\lambda)}e_{ \alpha}(z)\overline{g_{\alpha}(w)}. \tag{3.21}\]
We now establish our necessary and sufficient condition for the existence of the MBP:
**Proposition 3.22**.: _Define an integral operator on \(C_{c}(\Omega)\) by_
\[\mathbf{Q}f(z)=\int_{\Omega}K_{p,\lambda}^{\Omega}(z,w)f(w)\lambda(w)dV(w),\qquad f \in C_{c}(\Omega). \tag{3.23}\]
_The MBP of \(A^{p}(\Omega,\lambda)\) exists if and only if \(\mathbf{Q}\) satisfies \(L^{p}\)-estimates, i.e., there is a constant \(C>0\) such that for each \(f\in C_{c}(\Omega)\) we have the inequality_
\[\left\|\mathbf{Q}f\right\|_{p,\lambda}\leq C\left\|f\right\|_{p,\lambda}. \tag{3.24}\]
Proof.: Recall that \(\Omega\subset\mathbb{C}^{n}\) is a pseudoconvex Reinhardt domain and \(\lambda\) is an admissible multi-radial weight. The function \(K^{\Omega}_{p,\lambda}\) is continuous on \(\Omega\times\Omega\) by Theorem 3.3, so the integral in (3.23) exists for each \(z\in\Omega\). Since the function \(z\mapsto K^{\Omega}_{p,\lambda}(z,w)\) is holomorphic for each \(w\in\Omega\), \(\mathbf{Q}f\) is holomorphic for \(f\in C_{c}(\Omega)\), for instance, by applying Morera's theorem in each variable, or equivalently, by applying \(\bar{\partial}\) to both sides.
Let \(f\in C_{c}(\Omega)\). Since the series for \(K^{\Omega}_{p,\lambda}\) converges absolutely and uniformly on the compact subset \(\{z\}\times\operatorname{supp}(f)\subset\Omega\times\Omega\), equation (3.21) gives
\[\mathbf{Q}f(z) =\int_{\Omega}\bigg{(}\sum_{\alpha\in\mathcal{S}_{p}(\Omega, \lambda)}e_{\alpha}(z)\overline{g_{\alpha}(w)}\bigg{)}f(w)\lambda(w)\,dV(w)\] \[=\sum_{\alpha\in\mathcal{S}_{p}(\Omega,\lambda)}\left(\int_{ \Omega}f(w)\overline{g_{\alpha}(w)}\lambda(w)\,dV(w)\right)e_{\alpha}(z)=\sum _{\alpha\in\mathcal{S}_{p}(\Omega,\lambda)}\widetilde{a}_{\alpha}(f)e_{ \alpha}(z). \tag{3.25}\]
The series (3.25) converges unconditionally and is the Laurent series of the holomorphic function \(\mathbf{Q}f\). It is therefore uniformly convergent for \(z\) in compact subsets of \(\Omega\).
Suppose now that the MBP \(\mathbf{P}^{\Omega}_{p,\lambda}:L^{p}(\Omega,\lambda)\to A^{p}(\Omega,\lambda)\) exists, which by Definition 2.14 is a bounded, surjective, linear projection given by the following limit of partial sums, convergent in \(A^{p}(\Omega,\lambda)\):
\[\mathbf{P}^{\Omega}_{p,\lambda}f=\lim_{N\to\infty}\sum_{\begin{subarray}{c}| \alpha|_{\infty}\leq N\\ \alpha\in\mathcal{S}_{p}(\Omega,\lambda)\end{subarray}}\widetilde{a}_{\alpha }(f)e_{\alpha},\qquad f\in L^{p}(\Omega,\lambda). \tag{3.26}\]
Since convergence in \(A^{p}(\Omega,\lambda)\) implies uniform convergence on compact subsets, it follows that for \(f\in C_{c}(\Omega)\), \(\mathbf{Q}f=\mathbf{P}^{\Omega}_{p,\lambda}f\). Therefore \(\mathbf{Q}\) satisfies \(L^{p}\)-estimates, i.e. (3.24) holds.
Conversely, suppose that (3.24) holds. Then \(\mathbf{Q}\) can be extended by continuity to an operator \(\widetilde{\mathbf{Q}}\) on \(L^{p}(\Omega,\lambda)\) with the same norm. We claim that \(\widetilde{\mathbf{Q}}\) is the MBP.
If \(f\in L^{p}(\Omega,\lambda)\), we can find a sequence \(\{f_{j}\}\subset C_{c}(\Omega)\) such that \(f_{j}\to f\) in \(L^{p}(\Omega,\lambda)\). Each \(\mathbf{Q}f_{j}\in A^{p}(\Omega,\lambda)\) and (by definition) \(\mathbf{Q}f_{j}\to\widetilde{\mathbf{Q}}f\) in \(L^{p}(\Omega,\lambda)\). But this implies \(\mathbf{Q}f_{j}\to\widetilde{\mathbf{Q}}f\) uniformly on compact subsets, so the limit \(\widetilde{\mathbf{Q}}f\) is holomorphic, and thus the range of \(\widetilde{\mathbf{Q}}\) is contained in \(A^{p}(\Omega,\lambda)\). A direct computation now shows \(\widetilde{\mathbf{Q}}e_{\alpha}=e_{\alpha}\) for \(\alpha\in\mathcal{S}_{p}(\Omega,\lambda)\), and it follows that \(\widetilde{\mathbf{Q}}\) is a surjective projection from \(L^{p}(\Omega,\lambda)\) to \(A^{p}(\Omega,\lambda)\).
If \(f\in C_{c}(\Omega)\), then \(\mathbf{Q}f=\widetilde{\mathbf{Q}}f\in A^{p}(\Omega,\lambda)\) and by Theorem 2.12 the Laurent series expansion of \(\widetilde{\mathbf{Q}}f\) given by (3.25) converges (as a sequence of square partial sums) in \(A^{p}(\Omega,\lambda)\):
\[\widetilde{\mathbf{Q}}f=\lim_{N\to\infty}\sum_{\begin{subarray}{c}|\alpha|_{\infty }\leq N\\ \alpha\in\mathcal{S}_{p}(\Omega,\lambda)\end{subarray}}\widetilde{a}_{\alpha} (f)e_{\alpha}. \tag{3.27}\]
For a general \(g\in L^{p}(\Omega,\lambda)\), \(\widetilde{\mathbf{Q}}g\in A^{p}(\Omega,\lambda)\) and so again by Theorem 2.12,
\[\widetilde{\mathbf{Q}}g=\lim_{N\to\infty}\sum_{\begin{subarray}{c}|\alpha|_{\infty }\leq N\\ \alpha\in\mathcal{S}_{p}(\Omega,\lambda)\end{subarray}}a_{\alpha}(\widetilde{ \mathbf{Q}}g)e_{\alpha}. \tag{3.28}\]
It follows that on \(C_{c}(\Omega)\) we have the identity \(a_{\alpha}\circ\mathbf{Q}=\widetilde{a}_{\alpha}\). This relation extends by continuity to give \(a_{\alpha}\circ\widetilde{\mathbf{Q}}=\widetilde{a}_{\alpha}\) as functionals on \(L^{p}(\Omega,\lambda)\). Then (3.28) becomes
\[\widetilde{\mathbf{Q}}g=\lim_{N\to\infty}\sum_{\begin{subarray}{c}|\alpha|_{\infty }\leq N\\ \alpha\in\mathcal{S}_{p}(\Omega,\lambda)\end{subarray}}\widetilde{a}_{\alpha}( g)e_{\alpha}.\]
In other words, \(\widetilde{\mathbf{Q}}\) is the MBP, as we wanted to show.
Proof of Theorem 3.13.: Since the MBP exists, by Proposition 3.22 the operator \(\mathbf{Q}\) of (3.23) satisfies \(L^{p}\)-estimates. Then, by the continuity of point-evaluation in \(A^{p}(\Omega,\lambda)\), for each \(z\in\Omega\) the map \(g\mapsto\mathbf{Q}g(z)\) is a bounded linear functional on \(L^{p}(\Omega,\lambda)\). Formula (3.23) representing this functional now shows that \(K^{\Omega}_{p,\lambda}(z,\cdot)\in L^{q}(\Omega,\lambda)\). Given \(f\in L^{p}(\Omega,\lambda)\), standard techniques of real analysis (cutting off and mollification) give us a sequence \(\{f_{j}\}\subset C_{c}(\Omega)\) such that \(f_{j}\to f\) in \(L^{p}(\Omega,\lambda)\). Therefore for each \(z\in\Omega\), the sequence \(\{K^{\Omega}_{p,\lambda}(z,\cdot)f_{j}(\cdot)\}\subset C_{c}(\Omega)\) converges in \(L^{1}(\Omega,\lambda)\) to the limit \(K^{\Omega}_{p,\lambda}(z,\cdot)f(\cdot)\). Since integration against the weight \(\lambda\) is a bounded linear functional on \(L^{1}(\Omega,\lambda)\), we obtain (3.14) in the limit.
## 4. The one dimensional case
In this section we compute Monomial Basis Kernels on the unit disc \(\mathbb{D}\) and punctured unit disc \(\mathbb{D}^{*}\) - specifically, the MBKs of the spaces \(A^{p}(\mathbb{D},\mu_{\gamma})\) and \(A^{p}(\mathbb{D}^{*},\mu_{\gamma})\) where \(\mu_{\gamma}(z)=|z|^{\gamma}\). From these formulas it is shown that the corresponding Monomial Basis Projections are absolutely bounded integral operators. We begin with a more general computation of certain subkernels that are needed in Section 6.
### Arithmetic progression subkernels on \(\mathbb{D}\) and \(\mathbb{D}^{*}\)
Let \(a,b\in\mathbb{Z}\) with \(b\) positive, \(U=\mathbb{D}\) or \(\mathbb{D}^{*}\), \(1<p<\infty\) and \(\mu_{\gamma}(z)=|z|^{\gamma}\), \(\gamma\in\mathbb{R}\). Consider the set of integers
\[\mathcal{A}(U,p,\gamma,a,b)=\{\alpha\in\mathbb{Z}:\alpha\equiv a\mod b\} \cap\mathcal{S}_{p}(U,\mu_{\gamma}), \tag{4.1}\]
where as usual, \(\mathcal{S}_{p}(U,\mu_{\gamma})\subset\mathbb{Z}\) is the set of \(\alpha\) such that \(e_{\alpha}\in A^{p}(U,\mu_{\gamma})\). Notice that \(a\) is determined only modulo \(b\), so we can always assume that \(0\leq a\leq b-1\). Notice also that if \(b=1\) and \(a=0\) we have \(\mathcal{A}(U,p,\gamma,0,1)=\mathcal{S}_{p}(U,\mu_{\gamma})\). We now identify \(\mathcal{A}(U,p,\gamma,a,b)\) with an arithmetic progression:
**Proposition 4.2**.: _Let \(U,p,\gamma,a,b\) be as above. There is an integer \(\theta\) such that_
\[\mathcal{A}(U,p,\gamma,a,b)=\{\theta+\nu b:\nu\geq 0,\nu\in\mathbb{Z}\}. \tag{4.3}\]
Proof.: Let \(U=\mathbb{D}^{*}\). We claim that \(\alpha\in\mathcal{S}_{p}(\mathbb{D}^{*},\mu_{\gamma})\) if and only if \(p\alpha+\gamma+2>0\). Indeed,
\[\|e_{\alpha}\|^{p}_{p,\mu_{\gamma}}=\int_{\mathbb{D}^{*}}|z|^{p\alpha+\gamma} \,dV=2\pi\int_{0}^{1}r^{p\alpha+\gamma+1}\,dr=\frac{2\pi}{p\alpha+\gamma+2}, \tag{4.4}\]
as long as \(p\alpha+\gamma+2>0\), otherwise the integral diverges. Now let \(\theta\) be the smallest integer such that (i) \(\theta\equiv a\mod b\), and (ii) \(p\theta+\gamma+2>0\). Clearly (4.3) holds.
The case \(U=\mathbb{D}\) is nearly identical, but the condition that \(e_{\alpha}\) belongs to \(A^{p}(\mathbb{D},\mu_{\gamma})\) means that \(\alpha\) must be nonnegative. If \(\theta\) is the smallest integer in the set \(\mathcal{S}_{p}(\mathbb{D},\mu_{\gamma})\), it is determined now by three conditions: (i) \(\theta\equiv a\mod b\), (ii) \(p\theta+\gamma+2>0\), and (iii) \(\theta\geq 0\).
_Remark 4.5_.: For \(U,p,\gamma,a,b\) as above (with \(0\leq a\leq b-1\)), we can determine \(\theta\) explicitly:
\[\theta=\begin{cases}a+b\ell,&U=\mathbb{D}^{*}\\ \max\{a+b\ell,a\},&U=\mathbb{D},\end{cases}\quad\text{where}\quad\ell=\left\lfloor-\frac{\gamma+2}{pb}-\frac{a}{b}+1\right\rfloor.\]
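A quick numerical sanity check of this formula (an illustration we add; the helper names and parameter values below are chosen only for the example, and the bracket is read as the floor function, as in Corollary 4.10) compares it with a direct search for the smallest admissible exponent:

```python
# Compare the closed formula of Remark 4.5 with a brute-force search for theta,
# the smallest integer congruent to a mod b with p*theta + gamma + 2 > 0
# (and additionally theta >= 0 when U = D rather than D*).  Illustration only.
import math

def theta_bruteforce(p, gamma, a, b, punctured=True):
    t = a - 1000 * b  # far below the answer, and congruent to a mod b
    while p * t + gamma + 2 <= 0 or (not punctured and t < 0):
        t += b
    return t

def theta_formula(p, gamma, a, b, punctured=True):
    ell = math.floor(-(gamma + 2) / (p * b) - a / b + 1)
    return a + b * ell if punctured else max(a + b * ell, a)

for p in (1.5, 2, 4):
    for gamma in (-3.5, -1, 0, 2.2):
        for b in (1, 2, 3):
            for a in range(b):
                for punctured in (True, False):
                    assert theta_bruteforce(p, gamma, a, b, punctured) == \
                           theta_formula(p, gamma, a, b, punctured)
```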
Now define for \(z,w\in U\) the _arithmetic progression subkernel_
\[k^{U}_{p,\gamma,a,b}(z,w)=\sum_{\alpha\in\mathcal{A}(U,p,\gamma,a,b)}\frac{e_{ \alpha}(z)\overline{\chi_{p}^{*}e_{\alpha}(w)}}{\|e_{\alpha}\|_{p,\mu_{\gamma}} ^{p}}=\sum_{\alpha\in\mathcal{A}(U,p,\gamma,a,b)}\frac{t^{\alpha}}{\|e_{\alpha} \|_{p,\mu_{\gamma}}^{p}}, \tag{4.6}\]
where \(\chi_{p}^{*}\) is defined by (1.9) and \(t=z\overline{w}\,|w|^{p-2}\). Notice that \(k^{U}_{p,\gamma,0,1}\) is the MBK of \(A^{p}(U,\mu_{\gamma})\).
**Proposition 4.7**.: _For \(z,w\in U\) and other notation as specified above, we have_
\[k^{U}_{p,\gamma,a,b}(z,w)=\frac{t^{\theta}}{2\pi}\cdot\frac{(p\theta+\gamma+2 )-(\gamma+2+p(\theta-b))t^{b}}{(1-t^{b})^{2}}. \tag{4.8}\]
Proof.: The calculation in (4.4) shows that if \(\alpha\in\mathcal{S}_{p}(U,\mu_{\gamma})\), then
\[\|e_{\alpha}\|_{p,\mu_{\gamma}}^{p}=\frac{2\pi}{p\alpha+\gamma+2}.\]
Now combining (4.6) with Proposition 4.2, we see that
\[k^{U}_{p,\gamma,a,b}(z,w)=\sum_{\alpha\in\mathcal{A}(U,p,\gamma, a,b)}\frac{t^{\alpha}}{\|e_{\alpha}\|_{p,\mu_{\gamma}}^{p}} =\frac{t^{\theta}}{2\pi}\sum_{\nu=0}^{\infty}(p(\theta+b\nu)+\gamma +2)t^{b\nu}\] \[=\frac{t^{\theta}}{2\pi}\left(p\sum_{\nu=0}^{\infty}(b\nu+1)t^{b \nu}+(p\theta+\gamma+2-p)\sum_{\nu=0}^{\infty}t^{b\nu}\right).\]
Writing this in closed form yields
\[k^{U}_{p,\gamma,a,b}(z,w)=\frac{t^{\theta}}{2\pi}\cdot\frac{(p\theta+\gamma+2 )-(\gamma+2+p(\theta-b))t^{b}}{(1-t^{b})^{2}}.\]
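As an illustration (a numerical check we add, with arbitrarily chosen parameter values; it plays no role in the argument), the closed form just obtained can be compared with a truncation of the defining series (4.6):

```python
# Compare the series (4.6) with the closed form (4.8) for sample parameters.
# Here U = D*, p = 4, gamma = -1, a = 1, b = 3, so theta = 1 is the smallest
# integer congruent to 1 mod 3 with p*theta + gamma + 2 > 0.
import cmath

p, gamma, b, theta = 4.0, -1.0, 3, 1
z, w = 0.4 + 0.2j, -0.3 + 0.5j
t = z * w.conjugate() * abs(w) ** (p - 2)

series = sum((p * (theta + b * nu) + gamma + 2) * t ** (theta + b * nu)
             for nu in range(500)) / (2 * cmath.pi)
closed = (t ** theta / (2 * cmath.pi)
          * ((p * theta + gamma + 2) - (gamma + 2 + p * (theta - b)) * t ** b)
          / (1 - t ** b) ** 2)
assert abs(series - closed) < 1e-12
```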
**Corollary 4.9**.: _The arithmetic progression kernel \(k^{U}_{p,\gamma,a,b}\) admits the bound_
\[\left|k^{U}_{p,\gamma,a,b}(z,w)\right|\leq C\frac{(|z||w|^{p-1})^{\theta}}{|1 -z^{b}\overline{w}^{b}|w|^{(p-2)b}|^{2}},\]
_where \(C>0\) is independent of \(z,w\in U\)._
Proof.: This follows from (4.8), on noting that \((p\theta+\gamma+2)\) is necessarily positive.
Setting \(a=0\), \(b=1\) in Proposition 4.7 yields the MBKs of \(A^{p}(\mathbb{D}^{*},\mu_{\gamma})\) and \(A^{p}(\mathbb{D},\mu_{\gamma})\):
**Corollary 4.10**.: _Let \(\gamma\in\mathbb{R}\), \(\mu_{\gamma}(z)=|z|^{\gamma}\) and \(t=z\overline{w}\,|w|^{p-2}\). The Monomial Basis Kernels of \(A^{p}(\mathbb{D}^{*},\mu_{\gamma})\) and \(A^{p}(\mathbb{D},\mu_{\gamma})\) are given, respectively, by_
1. \(K^{\mathbb{D}^{*}}_{p,\mu_{\gamma}}(z,w)=\frac{1}{2\pi}\cdot\frac{(p\ell+ \gamma+2)t^{\ell}-(\gamma+2+p(\ell-1))t^{\ell+1}}{(1-t)^{2}}\)_, where_ \(\ell=\left\lfloor-\frac{\gamma+2}{p}+1\right\rfloor\)_._
2. \(K^{\mathbb{D}}_{p,\mu_{\gamma}}(z,w)=\frac{1}{2\pi}\cdot\frac{(pL+\gamma+2)t^ {L}-(\gamma+2+p(L-1))t^{L+1}}{(1-t)^{2}}\)_, where_ \(L=\max\{\ell,0\}\)_._
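As a consistency check (an observation we record here), take \(p=2\) and \(\gamma=0\): then \(\ell=L=0\), \(t=z\overline{w}\), and both formulas reduce to

\[K^{\mathbb{D}}_{2,\mu_{0}}(z,w)=K^{\mathbb{D}^{*}}_{2,\mu_{0}}(z,w)=\frac{1}{2\pi}\cdot\frac{2}{(1-z\overline{w})^{2}}=\frac{1}{\pi(1-z\overline{w})^{2}},\]

the classical Bergman kernel of the unit disc. This is expected, since for \(p=2\) the MBK is the Bergman kernel (see the discussion following Theorem 3.13) and \(A^{2}(\mathbb{D}^{*})=A^{2}(\mathbb{D})\).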
### Two tools
We now recall two important results.
**Proposition 4.11**.: _For \(1\leq j\leq N\), let \(D_{j}\) be a domain in \(\mathbb{R}^{n_{j}}\), let \(K_{j}:D_{j}\times D_{j}\to[0,\infty)\) be a positive kernel on \(D_{j}\), and let \(\lambda^{j}\) be an a.e. positive weight on \(D_{j}\). Suppose that for each \(j\), there exist a.e. positive measurable functions \(\phi_{j},\psi_{j}\) on \(D_{j}\) and constants \(C_{1}^{j},C_{2}^{j}>0\) such that the following two estimates hold:_
1. _For every_ \(z\in D_{j}\)_,_ \(\int_{D_{j}}K_{j}(z,w)\psi_{j}(w)^{q}\lambda^{j}(w)\,dV(w)\leq C_{1}^{j}\phi_{ j}(z)^{q}\)_._
2. _For every_ \(w\in D_{j}\)_,_ \(\int_{D_{j}}\phi_{j}(z)^{p}K_{j}(z,w)\lambda^{j}(z)\,dV(z)\leq C_{2}^{j}\psi_{j}(w )^{p}\)_._
_Now let \(D=D_{1}\times\cdots\times D_{N}\) be the product of the domains, let \(K(z,w)=\prod_{j=1}^{N}K_{j}(z_{j},w_{j})\), where \(z_{j},w_{j}\in D_{j}\), \(z=(z_{1},\ldots,z_{N})\in D\), \(w=(w_{1},\ldots,w_{N})\in D\), and let \(\lambda(w)=\prod_{j=1}^{N}\lambda^{j}(w_{j})\). Then the following operator is bounded on \(L^{p}(D,\lambda)\):_
\[\boldsymbol{T}f(z)=\int_{D}K(z,w)f(w)\lambda(w)dV(w).\]
Proof.: When \(N=1\), this is the classical Schur's test for boundedness of integral operators on \(L^{p}\)-spaces (see [22, Theorem 3.6]). The case \(N\geq 2\) reduces to the case \(N=1\), if we let \(\phi(z)=\prod_{j=1}^{N}\phi_{j}(z_{j})\) and \(\psi(z)=\prod_{j=1}^{N}\psi_{j}(z_{j})\) and use the Tonelli-Fubini theorem to represent integrals over \(D\) as repeated integrals over the factors \(D_{j}\).
**Proposition 4.12** (Lemma 3.4 of [1]; also see [10] for \(\beta=0\)).: _Let \(U=\mathbb{D}\) or \(\mathbb{D}^{*}\), \(0<\epsilon<1\) and \(\beta>-2\). There exists \(C>0\) such that_
\[\int_{U}\frac{(1-|w|^{2})^{-\epsilon}}{|1-z\overline{w}|^{2}}|w|^{\beta}\,dV(w )\leq C(1-|z|^{2})^{-\epsilon}. \tag{4.13}\]
### \(L^{p}\)-boundedness of operators
We now prove that arithmetic progression subkernels represent absolutely bounded operators. In particular, the existence and absolute boundedness of the Monomial Basis Projections of \(A^{p}(\mathbb{D}^{*},\mu_{\gamma})\) and \(A^{p}(\mathbb{D},\mu_{\gamma})\) are established.
**Proposition 4.14**.: _Define the following auxiliary functions on \(U\):_
\[\phi(z)=|z|^{\frac{\theta}{q}}(1-|z|^{2b})^{-\frac{1}{pq}},\qquad\psi(w)=|w|^{ \frac{\theta}{q}}(1-|w|^{2b(p-1)})^{-\frac{1}{pq}}.\]
_There exist constants \(C_{1},C_{2}>0\), such that the following estimates hold:_
1. _For_ \(z\in U\)_,_ \(\int_{U}|k^{U}_{p,\gamma,a,b}(z,w)\big{|}\,\psi(w)^{q}\mu_{\gamma}(w)\,dV(w) \leq C_{1}\phi(z)^{q}\)_._
2. _For_ \(w\in U\)_,_ \(\int_{U}\phi(z)^{p}\,\big{|}k^{U}_{p,\gamma,a,b}(z,w)\big{|}\,\mu_{ \gamma}(z)\,dV(z)\leq C_{2}\psi(w)^{p}\)_._
Proof.: Throughout this proof, \(C\) will denote a positive number depending on \(p,\gamma,a,b\) but independent of \(z,w\in U\). Its value will change from step to step.
From the kernel bound in Corollary 4.9, we obtain
\[\int_{U}|k^{U}_{p,\gamma,a,b}(z,w)|\psi(w)^{q}\mu_{\gamma}(w)\,dV(w) \leq C\int_{U}\frac{(|z||w|^{p-1})^{\theta}}{|1-z^{b}\overline{w}^{b}|w|^{(p-2)b}|^{2}}\psi(w)^{q}\mu_{\gamma}(w)\,dV(w)\] \[=C|z|^{\theta}\int_{U}\frac{\left(1-|w|^{2b(p-1)}\right)^{-\frac{1}{p}}}{|1-z^{b}\overline{w}^{b}|w|^{(p-2)b}|^{2}}|w|^{p\theta+\gamma}\,dV(w). \tag{4.15}\]
Set \(\zeta=w^{b}|w|^{(p-2)b}\), so \(|\zeta|=|w|^{(p-1)b}\), \(|w|=|\zeta|^{\frac{q-1}{b}}\) and \(dV(w)=\left(\frac{q-1}{b^{2}}\right)|\zeta|^{\frac{2(q-1)}{b}-2}dV(\zeta)\). This change of variable shows
\[(4.15)\leq C|z|^{\theta}\int_{U}\frac{(1-|\zeta|^{2})^{-\frac{1}{p}}}{|1-z^{b}\overline{\zeta}|^{2}}|\zeta|^{\frac{q\theta}{b}+\frac{(\gamma+2)(q-1)}{b}-2}\,dV(\zeta). \tag{4.16}\]
This integral converges if and only if \(q\theta+(\gamma+2)(q-1)>0\). Multiplying by the positive number \(\frac{p}{q}\), we see this condition is equivalent to requiring that \(p\theta+\gamma+2>0\), which is guaranteed to hold. Indeed, in the proof of Proposition 4.2, \(\theta\) was shown to be the smallest integer such that (i) \(\theta\equiv a\mod b\), and (ii) \(p\theta+\gamma+2>0\). Now apply Proposition 4.12:
\[(4.15)\leq C|z|^{\theta}(1-|z|^{2b})^{-\frac{1}{p}}=C\left(|z|^{\frac{\theta}{q}}(1-|z|^{2b})^{-\frac{1}{pq}}\right)^{q}=C\phi(z)^{q},\]
giving us estimate (1) upon taking the final constant \(C\) to be \(C_{1}\). Now consider
\[\int_{U}\big{|}k_{p,\gamma,a,b}^{U}(z,w)\big{|}\phi(z)^{p}\mu_{ \gamma}(z)\,dV(z)\leq C\int_{U}\frac{(|z||w|^{p-1})^{\theta}}{|1-z^{b}\overline{ w}^{b}|w|^{(p-2)b}|^{2}}\phi(z)^{p}\mu_{\gamma}(z)\,dV(z)\\ =C|w|^{(p-1)\theta}\int_{U}\frac{(1-|z|^{2b})^{-\frac{1}{q}}}{|1- w^{b}|w|^{(p-2)b}\overline{z}^{b}|^{2}}|z|^{(1+\frac{p}{q})\theta+\gamma}\,dV(z). \tag{4.17}\]
Set \(\xi=z^{b}\), which says that \(|z|=|\xi|^{\frac{1}{b}}\) and \(dV(z)=b^{-2}|\xi|^{\frac{2}{b}-2}dV(\xi)\). This shows that
\[(4.17)\leq C|w|^{(p-1)\theta}\int_{U}\frac{(1-|\xi|^{2})^{-\frac{1}{q}}}{|1-w^{b}|w|^{(p-2)b}\overline{\xi}|^{2}}\,|\xi|^{\frac{p\theta}{b}+\frac{\gamma+2}{b}-2}\,dV(\xi). \tag{4.18}\]
This integral converges since \(p\theta+\gamma+2>0\) (this is the same condition as before). Now apply Proposition 4.12 again to see
\[(4.17)\leq C|w|^{(p-1)\theta}\big{(}1-|w|^{2b(p-1)}\big{)}^{-\frac{1}{q}}=C\left(|w|^{\frac{\theta}{q}}(1-|w|^{2b(p-1)})^{-\frac{1}{pq}}\right)^{p}=C\psi(w)^{p},\]
giving estimate (2) upon taking the final constant \(C\) to be \(C_{2}\).
**Corollary 4.19**.: _The following operator is bounded on \(L^{p}(U,\mu_{\gamma})\):_
\[\boldsymbol{T}_{p,\gamma,a,b}^{U}(f)(z)=\int_{U}\big{|}k_{p,\gamma,a,b}^{U}(z, w)\big{|}\,f(w)\mu_{\gamma}(w)dV(w). \tag{4.20}\]
Proof.: Estimates (1) and (2) in Proposition 4.14 allow for immediate application of Proposition 4.11 with \(N=1\), proving the result.
**Corollary 4.21**.: _The Monomial Basis Projections of the spaces \(A^{p}(\mathbb{D},\mu_{\gamma})\) and \(A^{p}(\mathbb{D}^{*},\mu_{\gamma})\) exist and are absolutely bounded._
Proof.: Absolute boundedness (which by Proposition 3.22 implies existence) follows from Corollary 4.19 on noting that the MBK of \(A^{p}(U,\mu_{\gamma})\) coincides with the subkernel \(k_{p,\gamma,0,1}^{U}\).
## 5. Transformation formula
### The canonical-bundle pullback
If \(\phi:\Omega_{1}\to\Omega_{2}\) is a finite-sheeted holomorphic map of domains in \(\mathbb{C}^{n}\), and \(f\) is a function on \(\Omega_{2}\), we define a function on \(\Omega_{1}\) by setting
\[\phi^{\sharp}(f)=f\circ\phi\cdot\det\phi^{\prime}, \tag{5.1}\]
where \(\phi^{\prime}(z):\mathbb{C}^{n}\to\mathbb{C}^{n}\) is the complex derivative of the map \(\phi\) at \(z\in\Omega_{1}\). If we think of \(\Omega_{1},\Omega_{2}\) as subsets of \(\mathbb{R}^{2n}\) and \(\phi\) as a smooth mapping, we can also consider the \(2n\times 2n\) real Jacobian \(D\phi\) of \(\phi\). Using the well-known relation \(\det D\phi=|\det\phi^{\prime}|^{2}\) between the two Jacobians, we see that \(\phi^{\sharp}\) is a continuous linear mapping of Hilbert spaces \(\phi^{\sharp}:L^{2}(\Omega_{2})\to L^{2}(\Omega_{1})\), and restricts to a map \(A^{2}(\Omega_{2})\to A^{2}(\Omega_{1})\). We will refer to \(\phi^{\sharp}\) as the _canonical-bundle pullback_ induced by \(\phi\), or informally as the \(\sharp\)_-pullback_, in order to distinguish it from a second pullback to be introduced in Section 5.3. If \(\phi\) is a biholomorphism, then \(\phi^{\sharp}\) is an isometric isomorphism of Hilbert spaces \(L^{2}(\Omega_{2})\cong L^{2}(\Omega_{1})\) that restricts to an isometric isomorphism \(A^{2}(\Omega_{2})\cong A^{2}(\Omega_{1})\).
### Proper maps of quotient type
In the classical theory of holomorphic mappings, one considers proper holomorphic mappings, and extends the biholomorphic invariance of Bergman spaces to such mappings via Bell's transformation formula (see [1, 1, 10, 11, 12]). In our applications, we are concerned with a specific class of proper holomorphic mappings. We begin with the following definition (see [1]):
**Definition 5.2**.: Let \(\Omega_{1},\Omega_{2}\subset\mathbb{C}^{n}\) be domains, let \(\Phi:\Omega_{1}\to\Omega_{2}\) be a proper holomorphic mapping and \(\Gamma\subset\operatorname{Aut}(\Omega_{1})\) a finite group of biholomorphic automorphisms of \(\Omega_{1}\). We say \(\Phi\) is of _quotient type with respect to \(\Gamma\)_ if
1. there exist closed lower-dimensional complex-analytic subvarieties \(Z_{j}\subset\Omega_{j}\), \(j=1,2\), such that \(\Phi\) restricts to a covering map \(\Phi:\Omega_{1}\setminus Z_{1}\to\Omega_{2}\setminus Z_{2}\), and
2. for each \(z\in\Omega_{2}\setminus Z_{2}\), the action of \(\Gamma\) on \(\Omega_{1}\) restricts to a transitive action on the fiber \(\Phi^{-1}(z)\).
The group \(\Gamma\) is called _the group of deck transformations of \(\Phi\)_.
The restricted map \(\Phi:\Omega_{1}\setminus Z_{1}\to\Omega_{2}\setminus Z_{2}\) is a _regular_ covering map (see [10, page 135 ff.]); i.e., it gives rise to a biholomorphism between \(\Omega_{2}\setminus Z_{2}\) and the quotient \((\Omega_{1}\setminus Z_{1})/\Gamma\), where it can be shown that \(\Gamma\) acts properly and discontinuously on \(\Omega_{1}\setminus Z_{1}\). It follows that \(\Gamma\) is the full group of deck transformations of the covering map \(\Phi:\Omega_{1}\setminus Z_{1}\to\Omega_{2}\setminus Z_{2}\), and that this covering map has exactly \(|\Gamma|\) sheets, where \(|\Gamma|\) is the size of the group \(\Gamma\). By analytic continuation, the relation \(\Phi\circ\sigma=\Phi\) holds for each \(\sigma\) in \(\Gamma\) on all of \(\Omega_{1}\).
**Definition 5.3**.: Given a domain \(\Omega\subset\mathbb{C}^{n}\), a group \(\Gamma\subset\operatorname{Aut}(\Omega)\) and a space \(\mathfrak{F}\) of functions on \(\Omega\), we define
\[[\mathfrak{F}]^{\Gamma}=\{f\in\mathfrak{F}:f=\sigma^{\sharp}(f)\text{ for all }\sigma\in\Gamma\}, \tag{5.4}\]
where \(\sigma^{\sharp}\) is the canonical-bundle pullback induced by \(\sigma\) as in (5.1). Functions in this space are said to be \(\Gamma\)_-invariant in the \(\sharp\) sense_, or simply \(\sharp\)-invariant.
If \(L,M\) are Banach spaces, by a _homothetic isomorphism_\(\boldsymbol{T}:L\to M\) we mean a bijection such that there is a \(C>0\) satisfying
\[\left\|\boldsymbol{T}f\right\|_{M}=C\left\|f\right\|_{L},\qquad\text{ for every }f\in L. \tag{5.5}\]
Fix \(1<p<\infty\) and consider a proper holomorphic mapping \(\Phi:\Omega_{1}\to\Omega_{2}\) of quotient type with respect to group \(\Gamma\). Define the function
\[\lambda_{p}=|\det\Phi^{\prime}|^{2-p}. \tag{5.6}\]
This function arises as a weight in naturally occurring \(L^{p}\)-spaces. Indeed, in Proposition 4.5 of [1] it was shown that the map
\[\Phi^{\sharp}:L^{p}(\Omega_{2})\to[L^{p}(\Omega_{1},\lambda_{p})]^{\Gamma} \tag{5.7}\]
is a homothetic isomorphism with
\[\left\|\Phi^{\sharp}(f)\right\|_{L^{p}(\Omega_{1},\lambda_{p})}^{p}=|\Gamma| \cdot\left\|f\right\|_{L^{p}(\Omega_{2})}^{p}, \tag{5.8}\]
which restricts to a homothetic isomorphism of the holomorphic subspaces
\[\Phi^{\sharp}:A^{p}(\Omega_{2})\to[A^{p}(\Omega_{1},\lambda_{p})]^{\Gamma}. \tag{5.9}\]
### Density-bundle pullbacks
Let \(\Omega_{1},\Omega_{2}\) be open sets in \(\mathbb{R}^{d}\), and \(\phi:\Omega_{1}\to\Omega_{2}\) a smooth map. Given a function \(f\) on \(\Omega_{2}\), define the _density-bundle pullback_, or \(\flat\)_-pullback_, of \(f\) to be the function on \(\Omega_{1}\) given by
\[\phi_{\flat}f=f\circ\phi\cdot\left|\det D\phi\right|^{\frac{1}{2}}, \tag{5.10}\]
where as before, \(D\phi\) denotes the \(d\times d\) Jacobian matrix of \(\phi\). From the change of variables formula, it follows that if \(\phi:\Omega_{1}\to\Omega_{2}\) is a diffeomorphism, then the induced map \(\phi_{\flat}:L^{2}(\Omega_{2})\to L^{2}(\Omega_{1})\) is an isometric isomorphism of Hilbert spaces. When \(\Omega_{1},\Omega_{2}\) are domains in a complex Euclidean space \(\mathbb{C}^{n}\) and the map \(\phi:\Omega_{1}\to\Omega_{2}\) is holomorphic, then
\[\phi_{\flat}f=f\circ\phi\cdot\left|\det\phi^{\prime}\right|, \tag{5.11}\]
where as before, \(\phi^{\prime}\) denotes the complex derivative.
**Definition 5.12**.: Given a domain \(\Omega\subset\mathbb{C}^{n}\), group \(\Gamma\subset\operatorname{Aut}(\Omega)\) and function space \(\mathfrak{F}\) consisting of functions on \(\Omega\), define the subspace
\[[\mathfrak{F}]_{\Gamma}=\{f\in\mathfrak{F}:f=\sigma_{\flat}(f)\text{ for all }\sigma\in\Gamma\}, \tag{5.13}\]
where \(\sigma_{\flat}\) is the density-bundle pullback in (5.11). Functions in \([\mathfrak{F}]_{\Gamma}\) are said to be _\(\Gamma\)-invariant in the \(\flat\) sense_, or simply _\(\flat\)-invariant_ when \(\Gamma\) is clear from context.
The behavior of the \(\flat\)-pullback regarding \(L^{p}\)-spaces and \(\flat\)-invariant functions is analogous to the \(\sharp\)-pullback regarding \(L^{p}\)-spaces and \(\sharp\)-invariant functions:
**Proposition 5.14**.: _Let \(1<p<\infty\), \(\Omega_{1},\Omega_{2}\) be domains in \(\mathbb{C}^{n}\) and \(\Phi:\Omega_{1}\to\Omega_{2}\) be a proper holomorphic map of quotient type with respect to the group \(\Gamma\subset\operatorname{Aut}(\Omega_{1})\). Then_
\[\Phi_{\flat}:L^{p}(\Omega_{2})\to\left[L^{p}(\Omega_{1},\lambda_{p})\right]_{\Gamma} \tag{5.15}\]
_is a homothetic isomorphism._
Proof.: Let \(f\in L^{p}(\Omega_{2})\). By Definition 5.2, there exist varieties \(Z_{1}\subset\Omega_{1}\), \(Z_{2}\subset\Omega_{2}\) such that \(\Phi:\Omega_{1}\backslash Z_{1}\to\Omega_{2}\backslash Z_{2}\) is a regular covering map of order \(|\Gamma|\). Using the change of variables formula (accounting for the fact that \(\Phi\) is a \(|\Gamma|\)-to-one mapping), we see
\[|\Gamma|\,\|f\|_{L^{p}(\Omega_{2})}^{p}=|\Gamma|\int_{\Omega_{2}\backslash Z_ {2}}|f|^{p}\,dV=\int_{\Omega_{1}\backslash Z_{1}}|f\circ\Phi|^{p}|\det\Phi^{ \prime}|^{2}\,dV=\|\Phi_{\flat}(f)\|_{L^{p}(\Omega_{1},\lambda_{p})}^{p}\,, \tag{5.16}\]
which shows \(\Phi_{\flat}(f)\in L^{p}(\Omega_{1},\lambda_{p})\). Observe also that for any \(\sigma\in\Gamma\),
\[\sigma_{\flat}(f\circ\Phi\cdot|\det\Phi^{\prime}|)=f\circ(\Phi\circ\sigma) \cdot|\det(\Phi\circ\sigma)^{\prime}|=f\circ\Phi\cdot|\det\Phi^{\prime}|,\]
showing that \(\Phi_{\flat}(f)\in\left[L^{p}(\Omega_{1},\lambda_{p})\right]_{\Gamma}\). This shows \(\Phi_{\flat}\) is a homothetic isomorphism of \(L^{p}(\Omega_{2})\) onto a subspace of \(\left[L^{p}(\Omega_{1},\lambda_{p})\right]_{\Gamma}\).
It remains to show that this image is the full space. By a partition of unity argument, it is sufficient to show that a function \(g\in\left[L^{p}(\Omega_{1},\lambda_{p})\right]_{\Gamma}\) is in the range of \(\Phi_{\flat}\), provided the support of \(g\) is contained in a set of the form \(\Phi^{-1}(U)\), where \(U\) is a connected open subset of \(\Omega_{2}\setminus Z_{2}\) evenly covered by the covering map \(\Phi\). Notice that \(\Phi^{-1}(U)\) is a disjoint collection of connected open components each biholomorphic to \(U\), and if \(U_{0}\) is one of them, \(\Phi^{-1}(U)\) is the disjoint union \(\bigcup_{\sigma\in\Gamma}\sigma(U_{0})\). Let \(\Psi:U\to U_{0}\) be the local inverse of \(\Phi\) onto \(U_{0}\). Define \(f_{0}\) on \(U\) by \(f_{0}=\Psi_{\flat}\left(g|_{U_{0}}\right)\). We claim that \(f_{0}\) is defined independently of the choice of the component \(U_{0}\) of \(\Phi^{-1}(U)\). Indeed, any other choice is of the form \(\sigma(U_{0})\) for some \(\sigma\in\Gamma\) and the corresponding local inverse is \(\sigma\circ\Psi\). But we have
\[(\sigma\circ\Psi)_{\flat}\left(g|_{\sigma(U_{0})}\right)=\Psi_{\flat}\circ \sigma_{\flat}\left(g|_{\sigma(U_{0})}\right)=\Psi_{\flat}\left(g|_{U_{0}} \right)=f_{0},\]
where we have used the fact that \(\sigma_{\flat}g=g\) since \(g\in\left[L^{p}(\Omega_{1},\lambda_{p})\right]_{\Gamma}\). A partition of unity argument completes the proof.
### Monomial maps
Consider an \(n\times n\) integer matrix \(A\) whose element in the \(j\)-th row and \(k\)-th column of \(A\) is \(a_{k}^{j}\). Let \(a^{j}\) denote the \(j\)-th row of \(A\), and \(a_{k}\) the \(k\)-th column. Letting the rows of \(A\) correspond to monomials \(e_{a^{j}}\), define for \(z\in\mathbb{C}^{n}\) the _matrix power_
\[z^{A}=\begin{pmatrix}e_{a^{1}}(z)\\ \vdots\\ e_{a^{n}}(z)\end{pmatrix}=\begin{pmatrix}z_{1}^{a_{1}^{1}}z_{2}^{a_{2}^{1}}\cdots z_{n}^{a_{n}^{1}}\\ \vdots\\ z_{1}^{a_{1}^{n}}z_{2}^{a_{2}^{n}}\cdots z_{n}^{a_{n}^{n}}\end{pmatrix}, \tag{5.17}\]
provided each component is defined. Define the _monomial map_\(\Phi_{A}\) to be the rational map on \(\mathbb{C}^{n}\) given by
\[\Phi_{A}(z)=z^{A}. \tag{5.18}\]
The following properties of monomial maps are known in the literature and references to their proofs are given at the end of the list. Three pieces of notation must first be explained: The element-wise exponential map \(\exp:\mathbb{C}^{n}\to(\mathbb{C}^{*})^{n}\) is given by \(\exp(z)=(e^{z_{1}},\ldots,e^{z_{n}})\); if \(z=(z_{1},\ldots,z_{n}),\ w=(w_{1},\ldots,w_{n})\) are points in \(\mathbb{C}^{n}\), define their component-wise product to be \(z\odot w=(z_{1}w_{1},z_{2}w_{2},\ldots,z_{n}w_{n})\); \(\mathbb{1}\in\mathbb{Z}^{1\times n}\) is a row vector with \(1\) in each component.
1. The following formula generalizes the familiar power-rule: \[\det\Phi^{\prime}_{A}=\det A\cdot e_{\mathbb{1}A-\mathbb{1}}.\] (5.19a)
2. If \(A\) is an invertible \(n\times n\) matrix of nonnegative integers, then \(\Phi_{A}:\mathbb{C}^{n}\to\mathbb{C}^{n}\) is a proper holomorphic map of quotient type with respect to the group \[\Gamma_{A}=\{\sigma_{\nu}:\sigma_{\nu}(z)=\exp\left(2\pi iA^{-1}\nu\right) \odot z,\ \nu\in\mathbb{Z}^{n\times 1}\}.\] (5.19b)
3. The group \(\Gamma_{A}\) has exactly \(|\det A|\) elements.
4. The canonical-bundle pullback of the monomial \(e_{\alpha}\) via the element \(\sigma_{\nu}\in\Gamma_{A}\) is \[\sigma_{\nu}^{\sharp}(e_{\alpha})=e^{2\pi i(\alpha+1)A^{-1}\nu}\cdot e_{\alpha}.\] (5.19c)
5. The set of monomials that are \(\Gamma_{A}\)-invariant in the \(\sharp\) sense as defined by (5.4) is \[\{e_{\alpha}:\alpha=\beta A-\mathbb{1},\ \beta\in\mathbb{Z}^{1\times n}\}.\] (5.19d)
Proof.: Property (1) is proved in both [20, Lemma 4.2] and [1, Lemma 3.8]. Properties (2) and (3) can be found in [1, Theorem 3.12]. See also [20, 21] for related results. Properties (4) and (5) are found in [1, Proposition 6.12].
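To see property (1) in a concrete case, let \(n=2\) and let \(A\) have rows \((2,1)\) and \((0,3)\), so that \(\Phi_{A}(z)=(z_{1}^{2}z_{2},z_{2}^{3})\), \(\det A=6\) and \(\mathbb{1}A-\mathbb{1}=(1,3)\); both sides of (5.19a) then equal \(6z_{1}z_{2}^{3}\). The following small symbolic check (an illustration we add; it is not needed for anything that follows) verifies this:

```python
# Symbolic verification of (5.19a) for the sample matrix A = [[2, 1], [0, 3]].
import sympy as sp

z1, z2 = sp.symbols('z1 z2')
A = sp.Matrix([[2, 1], [0, 3]])
Phi = sp.Matrix([z1**A[0, 0] * z2**A[0, 1],   # row 1 of A gives the first component
                 z1**A[1, 0] * z2**A[1, 1]])  # row 2 of A gives the second component
lhs = Phi.jacobian([z1, z2]).det()            # det Phi_A'
ones_A = sp.Matrix([[1, 1]]) * A              # the row vector 1A = (2, 4)
rhs = A.det() * z1**(ones_A[0] - 1) * z2**(ones_A[1] - 1)
assert sp.simplify(lhs - rhs) == 0            # both sides equal 6*z1*z2**3
```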
### Conditions for the transformation formula
For the remainder of Section 5, we assume the following conditions in the statements of our results:
_The domain \(\Omega_{2}\subset\mathbb{C}^{n}\) is pseudoconvex and Reinhardt, \(A\) is an \(n\times n\) matrix of nonnegative integers such that \(\det A\neq 0\), and \(\Omega_{1}=\Phi_{A}^{-1}(\Omega_{2})\), the inverse image of \(\Omega_{2}\) under the monomial map \(\Phi_{A}:\mathbb{C}^{n}\to\mathbb{C}^{n}\) defined in (5.18)._
This set-up has several immediate consequences:
1. We obtain by restriction a proper holomorphic map \[\Phi_{A}:\Omega_{1}\to\Omega_{2},\] which is of quotient type with respect to the group \(\Gamma_{A}\) defined in (5.19b).
2. The domain \(\Omega_{1}\) is pseudoconvex and Reinhardt.
3. The weight \(\lambda_{p}\) from (5.6) is given by \[\lambda_{p}(\zeta)=|\det\Phi^{\prime}_{A}(\zeta)|^{2-p}=|\det A|^{2-p}\prod_{k =1}^{n}|\zeta_{k}|^{(1\cdot a_{k}-1)(2-p)},\] (5.20) where as before \(\mathbb{1}\in\mathbb{Z}^{1\times n}\) has \(1\) in each component and \(a_{k}\) is the \(k\)-th column of \(A\).
4. By Proposition 3.11, the weight \(\lambda_{p}\) is admissible in the sense of Section 1.4.
5. By (5.7) the canonical-bundle pullback gives a homothetic isomorphism \[\Phi_{A}^{\sharp}:L^{p}(\Omega_{2})\to[L^{p}(\Omega_{1},\lambda_{p})]^{\Gamma_ {A}},\] which by (5.9) restricts to a homothetic isomorphism of the holomorphic subspaces \[\Phi_{A}^{\sharp}:A^{p}(\Omega_{2})\to[A^{p}(\Omega_{1},\lambda_{p})]^{\Gamma_ {A}}.\]
### \(\Gamma\)-invariant subkernel
Assuming the conditions and set-up established in Section 5.5, define the following subset of \(p\)-allowable indices which are \(\Gamma\)-invariant in the \(\sharp\) sense. (We often suppress reference to the matrix \(A\) in our notation, writing \(\Phi_{A}=\Phi\), \(\Gamma_{A}=\Gamma\), etc.)
\[\mathcal{S}_{p}^{\Gamma}(\Omega_{1},\lambda_{p})=\{\alpha\in\mathcal{S}_{p}( \Omega_{1},\lambda_{p}):\sigma^{\sharp}(e_{\alpha})=e_{\alpha}\text{ for all }\sigma\in\Gamma\}. \tag{5.21}\]
We use this to define the "\(\Gamma\)-invariant subkernel" of the Monomial Basis Kernel:
\[K_{p,\lambda_{p},\Gamma}^{\Omega_{1}}(z,w)=\sum_{\alpha\in\mathcal{S}_{p}^{ \Gamma}(\Omega_{1},\lambda_{p})}\frac{e_{\alpha}(z)\overline{\chi_{p}^{*}e_{ \alpha}(w)}}{\|e_{\alpha}\|_{p,\lambda_{p}}^{p}}. \tag{5.22}\]
**Proposition 5.23**.: _The following sets are equal_
\[\big{\{}e_{\beta}:\beta\in\mathcal{S}_{p}^{\Gamma}(\Omega_{1},\lambda_{p}) \big{\}}=\big{\{}\tfrac{1}{\det A}\Phi^{\sharp}(e_{\alpha}):\alpha\in \mathcal{S}_{p}(\Omega_{2})\big{\}}.\]
Proof.: Thinking of \(\alpha\) as an element of \(\mathbb{Z}^{1\times n}\), a computation shows that \(e_{\alpha}\circ\Phi_{A}=e_{\alpha A}\). Thus \(\Phi^{\sharp}(e_{\alpha})=(\det A)e_{(\alpha+1)A-1}\), so we have
\[\big{\{}\tfrac{1}{\det A}\Phi^{\sharp}(e_{\alpha}):\alpha\in\mathcal{S}_{p}( \Omega_{2})\big{\}}=\{e_{(\alpha+1)A-1}:\alpha\in\mathcal{S}_{p}(\Omega_{2})\}. \tag{5.24}\]
Since the image of \(A^{p}(\Omega_{2})\) under \(\Phi^{\sharp}\) is the space \([A^{p}(\Omega_{1},\lambda_{p})]^{\Gamma}\), we see
\[\{e_{(\alpha+1)A-1}:\alpha\in\mathcal{S}_{p}(\Omega_{2})\}\subset\{e_{\beta}: \beta\in\mathcal{S}_{p}(\Omega_{1},\lambda_{p}),\ \sigma^{\sharp}(e_{\beta})=e_{\beta}\text{ for all }\sigma\in\Gamma\}.\]
But the map \(\Phi^{\sharp}:A^{p}(\Omega_{2})\to[A^{p}(\Omega_{1},\lambda_{p})]^{\Gamma}\) is linear and carries distinct monomials to nonzero scalar multiples of distinct monomials (since \(A\) is invertible), so \(\Phi^{\sharp}(f)\) must have more than one term in its Laurent expansion whenever \(f\) has more than one term in its Laurent expansion. Thus
\[\{e_{(\alpha+1)A-1}:\alpha\in\mathcal{S}_{p}(\Omega_{2})\} =\{e_{\beta}:\beta\in\mathcal{S}_{p}(\Omega_{1},\lambda_{p}),\ \sigma^{\sharp}(e_{\beta})=e_{\beta}\text{ for all }\sigma\in\Gamma\}\] \[=\big{\{}e_{\beta}:\beta\in\mathcal{S}_{p}^{\Gamma}(\Omega_{1}, \lambda_{p})\big{\}}\,,\]
completing the proof.
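The identity \(e_{\alpha}\circ\Phi_{A}=e_{\alpha A}\) used at the start of the proof is easy to confirm symbolically; in the sketch below the matrix and multi-index are arbitrary examples, and the componentwise form chosen for \(\Phi_{A}\) (row exponents of \(A\)) is an assumption made so that this identity is the one being tested.

```python
import sympy as sp

z = sp.symbols('z1 z2')
A = sp.Matrix([[2, 1], [0, 3]])           # example matrix of nonnegative integers
alpha = sp.Matrix([[1, 2]])               # example multi-index, as a row vector

def Phi(A, z):
    # assumed componentwise form: the j-th component is the monomial z^{a^j}
    # built from the j-th row of A, which makes e_alpha o Phi_A = e_{alpha*A}
    return [sp.prod(zk**A[j, k] for k, zk in enumerate(z)) for j in range(A.rows)]

def e(beta, z):
    return sp.prod(zk**b for zk, b in zip(z, beta))

lhs = e(list(alpha), Phi(A, z))           # e_alpha(Phi_A(z))
rhs = e(list(alpha * A), z)               # e_{alpha A}(z)
print(sp.simplify(lhs - rhs))             # 0
```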
### Transforming operators with positive kernels
We prove here a transformation law for the "absolute" operator involving the MBK:
\[(\boldsymbol{P}_{p,1}^{\Omega_{2}})^{+}f(z)=\int_{\Omega_{2}}\Big{|}K_{p,1}^{ \Omega_{2}}(z,w)\Big{|}\,f(w)\,dV(w),\qquad f\in C_{c}(\Omega_{2}). \tag{5.25}\]
This operator is defined on \(C_{c}(\Omega_{2})\), but can be extended to \(L^{p}(\Omega_{2})\) when \(L^{p}\)-estimates are shown to hold. Define a related operator using the \(\Gamma\)-invariant subkernel from (5.22):
\[(\boldsymbol{P}_{p,\lambda_{p},\Gamma}^{\Omega_{1}})^{+}f(z)=\int_{\Omega_{1} }\Big{|}K_{p,\lambda_{p},\Gamma}^{\Omega_{1}}(z,w)\Big{|}\,f(w)\lambda_{p}(w) dV(w),\qquad f\in C_{c}(\Omega_{1}). \tag{5.26}\]
These operators are closely related via the \(\flat\)-pullback of Section 5.3:
**Theorem 5.27**.: _The following statements are equivalent:_
1. \((\boldsymbol{P}_{p,1}^{\Omega_{2}})^{+}\) _extends to a bounded operator_ \((\boldsymbol{P}_{p,1}^{\Omega_{2}})^{+}:L^{p}(\Omega_{2})\to L^{p}(\Omega_{2})\)_._
2. \((\boldsymbol{P}_{p,\lambda_{p},\Gamma}^{\Omega_{1}})^{+}\) _extends to a bounded operator_ \((\boldsymbol{P}_{p,\lambda_{p},\Gamma}^{\Omega_{1}})^{+}:[L^{p}(\Omega_{1}, \lambda_{p})]_{\Gamma}\to[L^{p}(\Omega_{1},\lambda_{p})]_{\Gamma}\)_._
_When these equivalent statements hold,_
\[\Phi_{\flat}\circ(\boldsymbol{P}_{p,1}^{\Omega_{2}})^{+}=(\boldsymbol{P}_{p, \lambda_{p},\Gamma}^{\Omega_{1}})^{+}\circ\Phi_{\flat} \tag{5.28}\]
_as operators on \(L^{p}(\Omega_{2})\), which is to say, that the following diagram commutes_
\[\begin{CD}L^{p}(\Omega_{2})@>{\Phi_{\flat}}>{}>L^{p}(\Omega_{1},\lambda_{p}) _{\Gamma}\\ @V{}V{(\boldsymbol{P}_{p,1}^{\Omega_{2}})^{+}}V@V{}V{(\boldsymbol{P}_{p, \lambda_{p},\Gamma}^{\Omega_{1}})^{+}}V\\ L^{p}(\Omega_{2})@>{\Phi_{\flat}}>{}>L^{p}(\Omega_{1},\lambda_{p})_{\Gamma}. \end{CD} \tag{5.29}\]
The following kernel transformation formula can be thought of as a generalization of the classical biholomorphic transformation formula for the Bergman kernel.
**Proposition 5.30**.: _The Monomial Basis Kernel admits the transformation law_
\[K^{\Omega_{1}}_{p,\lambda_{p},\Gamma}(z,w)=\frac{1}{|\Gamma|}\det\Phi^{\prime}(z )\cdot K^{\Omega_{2}}_{p,1}(\Phi(z),\Phi(w))\cdot\frac{|\det\Phi^{\prime}(w)|^{ p}}{\det\Phi^{\prime}(w)}. \tag{5.31}\]
Proof.: Starting from the series representation for \(K^{\Omega_{2}}_{p,1}(z,w)\) in (3.2), we have
\[K^{\Omega_{2}}_{p,1}(\Phi(z),\Phi(w)) =\sum_{\alpha\in\mathcal{S}_{p}(\Omega_{2})}\frac{e_{\alpha}(\Phi (z))\overline{e_{\alpha}(\Phi(w))}|e_{\alpha}(\Phi(w))|^{p-2}}{\left\|e_{ \alpha}\right\|_{L^{p}(\Omega_{2})}^{p}}\] \[=|\Gamma|\sum_{\alpha\in\mathcal{S}_{p}(\Omega_{2})}\frac{e_{ \alpha}(\Phi(z))\overline{e_{\alpha}(\Phi(w))}|e_{\alpha}(\Phi(w))|^{p-2}}{ \left\|\Phi^{\sharp}(e_{\alpha})\right\|_{L^{p}(\Omega_{1},\lambda_{p})}^{p}}, \tag{5.32}\]
since by (5.8), the homothetic isomorphism \(\Phi^{\sharp}\) scales norms uniformly: \(|\Gamma|\cdot\left\|f\right\|_{L^{p}(\Omega_{2})}^{p}=\left\|\Phi^{\sharp}(f)\right\|_{L^{p}(\Omega_{1},\lambda_{p})}^{p}\) for each \(f\in L^{p}(\Omega_{2})\). Now use the definition of \(\Phi^{\sharp}\) to write
\[(5.32) =|\Gamma|\frac{\det\Phi^{\prime}(w)}{\det\Phi^{\prime}(z)|\det\Phi^{\prime}(w)|^{p}}\sum_{\alpha\in\mathcal{S}_{p}(\Omega_{2})}\frac{\Phi^{\sharp}(e_{\alpha})(z)\overline{\Phi^{\sharp}(e_{\alpha})(w)}|\Phi^{\sharp}(e_{\alpha})(w)|^{p-2}}{\left\|\Phi^{\sharp}(e_{\alpha})\right\|_{L^{p}(\Omega_{1},\lambda_{p})}^{p}}\] \[=|\Gamma|\frac{\det\Phi^{\prime}(w)}{\det\Phi^{\prime}(z)|\det\Phi^{\prime}(w)|^{p}}\sum_{\beta\in\mathcal{S}_{p}^{\Gamma}(\Omega_{1},\lambda_{p})}\frac{e_{\beta}(z)\overline{e_{\beta}(w)}|e_{\beta}(w)|^{p-2}}{\left\|e_{\beta}\right\|_{L^{p}(\Omega_{1},\lambda_{p})}^{p}} \tag{5.33}\] \[=|\Gamma|\frac{\det\Phi^{\prime}(w)}{\det\Phi^{\prime}(z)|\det\Phi^{\prime}(w)|^{p}}\cdot K^{\Omega_{1}}_{p,\lambda_{p},\Gamma}(z,w). \tag{5.34}\]
Equation (5.33) follows from Proposition 5.23, and (5.34) follows from the definition of the \(\Gamma\)-invariant MBK given in (5.22). This completes the proof.
Proof of Theorem 5.27.: Proposition 5.14 and (5.16) show that \(\Phi_{\flat}:L^{p}(\Omega_{2})\to L^{p}(\Omega_{1},\lambda_{p})_{\Gamma}\) is a homothetic isomorphism with \(\left\|\Phi_{\flat}f\right\|_{L^{p}(\Omega_{1},\lambda_{p})}^{p}=|\Gamma| \left\|f\right\|_{L^{p}(\Omega_{2})}^{p}\). Now for \(f\in C_{c}(\Omega_{2})\),
\[\Phi_{\flat}\circ(\boldsymbol{P}_{p,1}^{\Omega_{2}})^{+}f(z) =|\det\Phi^{\prime}(z)|\int_{\Omega_{2}}\left|K^{\Omega_{2}}_{p,1}( \Phi(z),w)\right|f(w)\,dV(w)\] \[=\frac{|\det\Phi^{\prime}(z)|}{|\Gamma|}\int_{\Omega_{1}}\left|K^ {\Omega_{2}}_{p,1}(\Phi(z),\Phi(w))\right|f(\Phi(w))\cdot|\det\Phi^{\prime}(w) |^{2}\,dV(w)\] \[=\int_{\Omega_{1}}\left|K^{\Omega_{1}}_{p,\lambda_{p},\Gamma}(z,w )\right|\Phi_{\flat}f(w)\lambda_{p}(w)\,dV(w) \tag{5.35}\] \[=(\boldsymbol{P}_{p,\lambda_{p},\Gamma}^{\Omega_{1}})^{+}\circ \Phi_{\flat}f(z).\]
Equality in (5.35) uses the kernel transformation law (5.31), and the final line makes sense since the properness of \(\Phi\) guarantees \(\Phi_{\flat}f\in\left[C_{c}(\Omega_{1})\right]_{\Gamma}\). The fact that \(C_{c}(\Omega_{2})\) is dense in \(L^{p}(\Omega_{2})\), along with the fact that its image \(\Phi_{\flat}\left(C_{c}(\Omega_{2})\right)=\left[C_{c}(\Omega_{1})\right]_{\Gamma}\) is dense in \(\left[L^{p}(\Omega_{1},\lambda_{p})\right]_{\Gamma}\) shows that statements (1) and (2) are equivalent. When these statements hold, equation (5.28) and Diagram (5.29) follow immediately.
## 6. Monomial Polyhedra
In this section we prove Theorem 1.15, which says that if \(\mathscr{U}\) is a monomial polyhedron and \(1<p<\infty\), the Monomial Basis Projection of \(A^{p}(\mathscr{U})\) is absolutely bounded. As discussed in Section 1.6, this stands in contrast with the limited \(L^{p}\)-regularity of the Bergman projection.
### Matrix representation
We denote the spaces of row and column vectors with integer entries by \(\mathbb{Z}^{1\times n}\) and \(\mathbb{Z}^{n\times 1}\), respectively. Suppose \(B=(b_{k}^{j})\in M_{n\times n}(\mathbb{Z})\) is a matrix of integers with \(\det B\neq 0\), with rows written as \(b^{j}=(b_{1}^{j},\dots,b_{n}^{j})\in\mathbb{Z}^{1\times n}\). Define
\[\mathscr{U}_{B}=\{z\in\mathbb{C}^{n}:|e_{b^{j}}(z)|<1,\quad 1\leq j\leq n\}\,, \tag{6.1}\]
and call it the monomial polyhedron associated to the matrix \(B\), provided it is bounded. This gives a compact notation for the domains defined in Section 1.6.
The matrix \(B\) in (6.1) is far from unique. If \(B^{\prime}\) is obtained from \(B\) by permuting rows or by multiplying any row by a positive integer, then \(\mathscr{U}_{B}=\mathscr{U}_{B^{\prime}}\). We recall the following observation, originally proved in [1, Proposition 3.2]:
**Proposition 6.2**.: _Suppose that \(\mathscr{U}_{B}\) is a bounded monomial polyhedron as in (6.1), where \(\det B\neq 0\). Without loss of generality we may assume_
1. \(\det B>0\)_._
2. _each entry in the inverse matrix_ \(B^{-1}\) _is nonnegative._
Given the monomial polyhedron \(\mathscr{U}_{B}\), we will assume for the rest of the paper that \(B\) satisfies both properties (1) and (2) of Proposition 6.2. Observe that Cramer's rule combined with property (2) says that the adjugate \(A=(\det B)B^{-1}\) is a matrix of nonnegative integers.
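As a concrete illustration, membership in \(\mathscr{U}_{B}\) and the nonnegativity of the adjugate can be checked numerically; a minimal Python sketch, using the Hartogs triangle as one familiar example of a bounded \(\mathscr{U}_{B}\):

```python
import numpy as np

def in_monomial_polyhedron(z, B):
    """Test membership in U_B = {z : |e_{b^j}(z)| < 1 for each row b^j of B}.

    A coordinate z_k = 0 is only admissible when no constraint raises it to a
    negative power (otherwise the monomial e_{b^j} is undefined at z).
    """
    z = np.asarray(z, dtype=complex)
    for b in np.asarray(B, dtype=int):
        modulus = 1.0
        for zk, bk in zip(z, b):
            if zk == 0:
                if bk < 0:
                    return False
                if bk > 0:
                    modulus = 0.0
            else:
                modulus *= abs(zk) ** bk
        if modulus >= 1.0:
            return False
    return True

# Hartogs triangle |z1| < |z2| < 1: rows b^1 = (1, -1) and b^2 = (0, 1).
B = np.array([[1, -1], [0, 1]])
print(in_monomial_polyhedron([0.2, 0.5], B), in_monomial_polyhedron([0.6, 0.5], B))

# The adjugate A = (det B) * B^{-1} is a matrix of nonnegative integers.
A = np.rint(np.linalg.det(B) * np.linalg.inv(B)).astype(int)
print(A)    # [[1 1]
            #  [0 1]]
```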
The following representation of monomial polyhedra as quotients was first proved in [1, Theorem 3.12].
**Proposition 6.3**.: _Let \(A=(\det B)B^{-1}\in M_{n\times n}(\mathbb{Z})\). There exists a product domain_
\[\Omega=U_{1}\times\dots\times U_{n}\subset\mathbb{C}^{n}, \tag{6.4}\]
_each factor \(U_{j}\) either a unit disc \(\mathbb{D}\) or a unit punctured disc \(\mathbb{D}^{*}\), such that the monomial map \(\Phi_{A}:\mathbb{C}^{n}\to\mathbb{C}^{n}\) of (5.18) restricts to a proper holomorphic map \(\Phi_{A}:\Omega\to\mathscr{U}_{B}\). This map is of quotient type with respect to group \(\Gamma_{A}\), which is given in (5.19b)._
The conditions of Section 5.5 are satisfied, if we take \(\Omega_{1}=\Omega\), \(\Omega_{2}=\mathscr{U}_{B}\), and \(A,\Phi_{A},\Gamma_{A}\) as above in Proposition 6.3. In the present situation, the source domain \(\Omega_{1}=\Omega\) is a product and the weight \(\lambda_{p}=\left|\det\Phi_{A}^{\prime}\right|^{2-p}\) of (5.20) admits a tensor product structure:
\[\lambda_{p}(\zeta)=\left|\det\Phi_{A}^{\prime}(\zeta)\right|^{2-p}=\left(\det A \right)^{2-p}\prod_{j=1}^{n}\mu_{\gamma_{j}}(\zeta_{j}), \tag{6.5}\]
where \(\mu_{\gamma_{j}}\) is the weight on \(U_{j}\) given by
\[\mu_{\gamma_{j}}(z)=|z|^{\gamma_{j}},\qquad\text{where}\qquad\gamma_{j}=( \mathbb{1}\cdot a_{j}-1)(2-p), \tag{6.6}\]
\(\mathbb{1}\in\mathbb{Z}^{1\times n}\) is the row vector with \(1\) in each component and \(a_{j}\in\mathbb{Z}^{n\times 1}\) the \(j\)-th column of \(A\). We can remove the absolute value from \(\det A\) since \(\det A=(\det B)^{n}\cdot\frac{1}{\det B}=(\det B)^{n-1}>0\).
### Absolute boundedness of the Monomial Basis Projection
We now give a decomposition of the \(\Gamma\)-invariant subkernel defined in (5.22).
**Proposition 6.7**.: _Let \(d=\det A\) (a positive integer). The \(\Gamma\)-invariant subkernel defined in (5.22) admits the decomposition_
\[K_{p,\lambda_{p},\Gamma}^{\Omega}(z,w)=\sum_{i=1}^{d^{n-1}}K_{i}(z,w), \tag{6.8}\]
_where each \(K_{i}\) is a tensor product of \(n\) arithmetic progression subkernels defined in (4.6):_
\[K_{i}(z,w)=d^{p-2}\prod_{j=1}^{n}k_{p,\gamma_{j},\alpha_{i,j},d}^{U_{j}}(z_{j},w_{j}), \tag{6.9}\]
_where \(\gamma_{j}\) is determined by (6.6) and \(\alpha_{i,j}\in\mathbb{Z}/d\mathbb{Z}\) is determined by the group \(\Gamma\)._
Proof.: Following (5.22), the \(\Gamma\)-invariant subkernel \(K^{\Omega}_{p,\lambda_{p},\Gamma}(z,w)\) is found by summing over the \(p\)-allowable indices, \(\Gamma\)-invariant in the \(\sharp\) sense. From (5.21), this set can be written as
\[\mathcal{S}^{\Gamma}_{p}(\Omega,\lambda_{p})=\{\alpha\in\mathcal{S}_{p}(\Omega,\lambda_{p}):\sigma^{\sharp}(e_{\alpha})=e_{\alpha}\text{ for all }\sigma\in\Gamma\}= \mathcal{S}_{p}(\Omega,\lambda_{p})\cap[\mathbb{Z}^{n}]^{\Gamma}, \tag{6.10}\]
where \([\mathbb{Z}^{n}]^{\Gamma}\) is defined to be the subset of \(\mathbb{Z}^{1\times n}\) consisting of exactly those indices for which the corresponding monomials are \(\Gamma\)-invariant, i.e.,
\[[\mathbb{Z}^{n}]^{\Gamma}=\{\alpha\in\mathbb{Z}^{1\times n}:\sigma^{\sharp}(e _{\alpha})=e_{\alpha}\text{ for all }\sigma\in\Gamma\}.\]
By (5.19d), we see that \([\mathbb{Z}^{n}]^{\Gamma}=\{\alpha\in\mathbb{Z}^{1\times n}:\alpha=\beta A- \mathbb{1},\,\beta\in\mathbb{Z}^{1\times n}\}\), so after translating by \(\mathbb{1}\), we have
\[[\mathbb{Z}^{n}]^{\Gamma}+\mathbb{1}=\mathbb{Z}^{1\times n}A=\{\beta A:\beta \in\mathbb{Z}^{1\times n}\}\subset\mathbb{Z}^{1\times n}.\]
We make two observations: first, it is known (see Lemma 3.3 of [21]) that \(\mathbb{Z}^{1\times n}A\) is a sublattice of \(\mathbb{Z}^{1\times n}\) with index
\[\left|\mathbb{Z}^{1\times n}/(\mathbb{Z}^{1\times n}A)\right|=\det A=d.\]
Second, we claim that \(\mathbb{Z}^{1\times n}A\) contains \(d\,\mathbb{Z}^{1\times n}=\{d\beta:\beta\in\mathbb{Z}^{1\times n}\}\) as a sublattice. Consider a vector \(v=dy\), for some \(y\in\mathbb{Z}^{1\times n}\) and check that \(v\in\mathbb{Z}^{1\times n}A\). Since \(A\) is invertible, there is a solution \(x\in\mathbb{Q}^{1\times n}\) with \(v=dy=xA\). Write \(A\) in terms of its rows \(a^{1},\cdots,a^{n}\in\mathbb{Z}^{1\times n}\) as \(A=[a^{1},\cdots,a^{n}]^{T}\). Cramer's rule shows the \(j\)-th component of \(x\) is
\[x_{j}=\frac{\det\left([a^{1},\cdots,a^{j-1},dy,a^{j+1},\cdots,a^{n}]^{T}\right) }{\det A}=\det\left([a^{1},\cdots,a^{j-1},y,a^{j+1},\cdots,a^{n}]^{T}\right) \in\mathbb{Z},\]
confirming that \(x\in\mathbb{Z}^{1\times n}\), and therefore that \(d\,\mathbb{Z}^{1\times n}\) is a sublattice of \(\mathbb{Z}^{1\times n}A\).
Since the index \(\left|\mathbb{Z}^{1\times n}/d\,\mathbb{Z}^{1\times n}\right|=d^{n}\), the Third Isomorphism Theorem for groups says
\[\left|\mathbb{Z}^{1\times n}A/d\,\mathbb{Z}^{1\times n}\right|=\frac{\left| \mathbb{Z}^{1\times n}/d\,\mathbb{Z}^{1\times n}\right|}{\left|\mathbb{Z}^{1 \times n}/\mathbb{Z}^{1\times n}A\right|}=d^{n-1}.\]
It now follows that we have a representation of the group \(\mathbb{Z}^{1\times n}A\) as a disjoint union of \(d^{n-1}\) cosets of the subgroup \(d\,\mathbb{Z}^{1\times n}\), i.e., there are \(\ell^{i}\in\mathbb{Z}^{1\times n}A\), such that we have
\[\mathbb{Z}^{1\times n}A=[\mathbb{Z}^{n}]^{\Gamma}+\mathbb{1}=\bigsqcup_{i=1}^ {d^{n-1}}(d\,\mathbb{Z}^{1\times n}+\ell^{i}),\]
where \(\bigsqcup\) denotes disjoint union. Therefore, we have
\[[\mathbb{Z}^{n}]^{\Gamma}=\left(\bigsqcup_{i=1}^{d^{n-1}}(d\,\mathbb{Z}^{1 \times n}+\ell^{i})\right)-\mathbb{1}=\bigsqcup_{i=1}^{d^{n-1}}\left(d\, \mathbb{Z}^{1\times n}+(\ell^{i}-\mathbb{1})\right).\]
Fix an \(i,1\leq i\leq d^{n-1}\) and write \(\ell^{i}=(\ell^{i}_{1},\ldots,\ell^{i}_{n})\) with \(\ell^{i}_{j}\in\mathbb{Z}\). Then we have
\[d\,\mathbb{Z}^{1\times n}+(\ell^{i}-\mathbb{1}) =\{(d\cdot\nu_{1}+\ell^{i}_{1}-1,\ldots,d\cdot\nu_{n}+\ell^{i}_{n }-1):\nu_{1},\ldots,\nu_{n}\in\mathbb{Z}\}\] \[=\prod_{j=1}^{n}\{\alpha\in\mathbb{Z}:\alpha\equiv\ell^{i}_{j}-1 \mod d\}, \tag{6.11}\]
where in the last line we have the Cartesian product of \(n\) sets of integers.
We now analyze the other intersecting set \(\mathcal{S}_{p}(\Omega,\lambda_{p})\) in (6.10). Let \(\alpha\in\mathbb{Z}^{n}\). Combining the representation of \(\lambda_{p}\) from (6.5) with the fact that \(e_{\alpha}(z)=\prod_{j=1}^{n}e_{\alpha_{j}}(z_{j})\), we can write the norm of \(e_{\alpha}\) on \(\Omega\) in terms of the norms of the \(e_{\alpha_{j}}\) on the factors \(U_{j}\):
\[\|e_{\alpha}\|_{L^{p}(\Omega,\lambda_{p})}^{p}=d^{2-p}\prod_{j=1}^{n}\left\|e_{\alpha_{j}}\right\|_{L^{p}(U_{j},\mu_{\gamma_{j}})}^{p}. \tag{6.12}\]
The left-hand side is finite, i.e., \(\alpha\in\mathcal{S}_{p}(\Omega,\lambda_{p})\), if and only if each factor on the right-hand side is finite, i.e., for each \(1\leq j\leq n\) we have \(\alpha_{j}\in\mathcal{S}_{p}(U_{j},\mu_{\gamma_{j}})\). Consequently we obtain a Cartesian product representation of the set
\[\mathcal{S}_{p}(\Omega,\lambda_{p})=\prod_{j=1}^{n}\mathcal{S}_{p}(U_{j},\mu_ {\gamma_{j}}). \tag{6.13}\]
Therefore by (6.10), we have
\[\mathcal{S}_{p}^{\Gamma}(\Omega,\lambda_{p})=\mathcal{S}_{p}(\Omega,\lambda_ {p})\cap\left(\bigsqcup_{i=1}^{d^{n-1}}\big{(}(d\,\mathbb{Z}^{1\times n}+\ell_ {i})-\mathbb{1}\big{)}\right)=\bigsqcup_{i=1}^{d^{n-1}}\mathscr{L}_{i},\]
where
\[\mathscr{L}_{i} =\mathcal{S}_{p}(\Omega,\lambda_{p})\cap\big{(}(d\,\mathbb{Z}^{1 \times n}+\ell_{i})-\mathbb{1}\big{)}\] by definition \[=\left(\prod_{j=1}^{n}\mathcal{S}_{p}(U_{j},\mu_{\gamma_{j}}) \right)\bigcap\left(\prod_{j=1}^{n}\{\alpha\in\mathbb{Z}:\alpha\equiv\ell_{j}^ {i}-1\mod d\}\right)\quad\text{ by \eqref{eq:11} and \eqref{eq:12}}\] \[=\prod_{j=1}^{n}\big{(}\mathcal{S}_{p}(U_{j},\mu_{\gamma_{j}}) \cap\{\alpha\in\mathbb{Z}:\alpha\equiv\ell_{j}^{i}-1\mod d\}\big{)}\] \[=\prod_{j=1}^{n}\mathcal{A}(U_{j},p,\gamma_{j},\ell_{j}^{i}-1,d), \tag{6.14}\]
and the last equality follows from the definition (4.1). We now define
\[K_{i}(z,w)=\sum_{\alpha\in\mathscr{L}_{i}}\frac{e_{\alpha}(z)\overline{\chi_{ p}^{*}e_{\alpha}(w)}}{\|e_{\alpha}\|_{p,\lambda_{p}}^{p}}, \tag{6.15}\]
which immediately gives (6.8), since absolute convergence permits rearrangement of the series defining \(K_{p,\lambda_{p},\Gamma}^{\Omega_{1}}\). Now from (6.12), we see that for \(\alpha\in\mathscr{L}_{i}\) we have
\[\frac{e_{\alpha}(z)\overline{\chi_{p}^{*}e_{\alpha}(w)}}{\|e_{\alpha}\|_{p, \lambda_{p}}^{p}}=d^{p-2}\prod_{j=1}^{n}\frac{e_{\alpha_{j}}(z_{j})\overline{ \chi_{p}^{*}e_{\alpha_{j}}(w_{j})}}{\left\|e_{\alpha_{j}}\right\|_{p,\mu_{ \gamma_{j}}}^{p}}, \tag{6.16}\]
where for each \(j\), we have \(\alpha_{j}\in\mathcal{A}(U_{j},p,\gamma_{j},\ell_{j}^{i}-1,d)\), and on the right hand side \(\chi_{p}:\mathbb{C}\to\mathbb{C}\) is the one-dimensional version of the map (1.8). Using (6.14) and (6.16), we can rearrange the sum (6.15) as
\[K_{i}(z,w) =d^{p-2}\prod_{j=1}^{n}\left(\sum_{\alpha_{j}\in\mathcal{A}(U_{j },p,\gamma_{j},\ell_{j}^{i}-1,d)}\frac{e_{\alpha_{j}}(z_{j})\overline{\chi_{p}^ {*}e_{\alpha_{j}}(w_{j})}}{\left\|e_{\alpha_{j}}\right\|_{p,\mu_{\gamma_{j}}}^{ p}}\right) \tag{6.17}\] \[=d^{p-2}\prod_{j=1}^{n}k_{p,\gamma_{j},\ell_{j}^{i}-1,d}^{U_{j}} (z_{j},w_{j})\]
where the rearrangement in (6.17) is justified since each of the \(n\) factor series on the right hand side is absolutely convergent. The final line is just the definition given in (4.6).
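The lattice bookkeeping in the proof, namely that \(\mathbb{Z}^{1\times n}A\) decomposes into exactly \(d^{n-1}\) cosets of \(d\,\mathbb{Z}^{1\times n}\), can also be verified numerically for any particular matrix; a minimal sketch (with an arbitrary example \(A\)):

```python
import itertools
import numpy as np

A = np.array([[2, 1], [0, 3]])                    # example matrix, det A = 6
n = A.shape[0]
d = int(round(np.linalg.det(A.astype(float))))

# Representatives beta*A of Z^{1xn}A modulo d*Z^{1xn}: since (beta + d*gamma)*A
# is congruent to beta*A mod d, it suffices to let beta range over {0,...,d-1}^n.
cosets = {tuple((np.array(beta) @ A) % d)
          for beta in itertools.product(range(d), repeat=n)}
print(len(cosets), d ** (n - 1))                  # both equal 6
```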
Proof of Theorem 1.15.: Theorem 5.27 says \((\boldsymbol{P}_{p,1}^{\mathscr{U}})^{+}:L^{p}(\mathscr{U})\to L^{p}(\mathscr{U})\) is a bounded operator if and only if \((\boldsymbol{P}_{p,\lambda_{p},\Gamma}^{\Omega})^{+}:[L^{p}(\Omega,\lambda_{p})]_{\Gamma}\to[L^{p}(\Omega,\lambda_{p})]_{\Gamma}\) is bounded. From (6.8), we see that
\[\left|K_{p,\lambda_{p},\Gamma}^{\Omega}(z,w)\right|\leq\sum_{i=1}^{d^{n-1}}|K_{ i}(z,w)|\,. \tag{6.18}\]
From formula (5.26) defining the operator \((\boldsymbol{P}_{p,\lambda_{p},\Gamma}^{\Omega})^{+}\), it is sufficient to prove that for each \(1\leq i\leq d^{n-1}\), the operator
\[f\mapsto\int_{\Omega}\left|K_{i}(\cdot,w)\right|f(w)\lambda_{p}(w)dV(w)\]
is bounded on (the full space) \(L^{p}(\Omega,\lambda_{p})\). Formula (6.9) now gives
\[\left|K_{i}(z,w)\right|=d^{p-2}\prod_{j=1}^{n}\left|k_{p,\gamma_{j},\alpha_{i, j},d}^{U_{j}}(z_{j},w_{j})\right|.\]
Proposition 4.14 now says that for each \(1\leq j\leq n\), there exist functions \(\phi_{j},\psi_{j}\) and constants \(C_{1}^{j},C_{2}^{j}\) such that
\[\int_{U_{j}}\left|k_{p,\gamma_{j},\alpha_{i,j},d}^{U_{j}}(z,w) \right|\psi_{j}(w)^{q}\mu_{\gamma_{j}}(w)\,dV(w)\leq C_{1}^{j}\phi_{j}(z)^{q},\] \[\int_{U_{j}}\phi_{j}(z)^{p}\left|k_{p,\gamma_{j},\alpha_{i,j},d}^ {U_{j}}(z,w)\right|\mu_{\gamma_{j}}(z)\,dV(z)\leq C_{2}^{j}\psi_{j}(w)^{p}.\]
Proposition 4.11 now finishes the proof.
## 7. Duality theory of Bergman spaces
### Properties of the twisting map
In this section, \(\Omega\) will denote an arbitrary Reinhardt domain in \(\mathbb{C}^{n}\). We return now to the twisting map \(\chi_{p}\) introduced in (1.8), and use it to present a duality theory for Bergman spaces on Reinhardt domains. This leads to a concrete description for all \(1<p<\infty\) of the duals of the \(A^{p}\)-Bergman spaces when the Monomial Basis Projection is absolutely bounded; this is new on all monomial polyhedral domains (including the Hartogs triangle), and even new in the case of the punctured disc.
**Proposition 7.1**.: _The twisting map \(\chi_{p}:\mathbb{C}^{n}\to\mathbb{C}^{n}\) has the following properties._
1. _It is a homeomorphism of_ \(\mathbb{C}^{n}\) _with itself, and its inverse is the map_ \(\chi_{q}\)_._
2. _It is a diffeomorphism away from the set_ \(\bigcup_{j=1}^{n}\{z_{j}=0\}\) _and its Jacobian determinant (as a mapping of the real vector space_ \(\mathbb{C}^{n}\)_) is given by_ \[\eta_{p}(\zeta)=\det(D\chi_{p})=(p-1)^{n}\left|\zeta_{1}\cdots\zeta_{n}\right|^{2p-4}.\] (7.2)
3. _It restricts to a homeomorphism_ \(\chi_{p}:\Omega\to\Omega^{(p-1)}\) _with inverse_ \(\chi_{q}:\Omega^{(p-1)}\to\Omega\)_, where_ \(\Omega^{(p-1)}\) _is a Reinhardt power of_ \(\Omega\) _as in (_3.6_)._
Proof.: For item (1), notice that if \(w=\chi_{p}(z)\), then for each \(j\) we have
\[w_{j}\left|w_{j}\right|^{q-2}=z_{j}\left|z_{j}\right|^{p-2}\cdot\big{|}z_{j}\left|z_{j}\right|^{p-2}\big{|}^{q-2}=z_{j}\left|z_{j}\right|^{p-2+(p-1)(q-2)}=z_{j},\]
since \(p-2+(p-1)(q-2)=pq-p-q=0\). So \(\chi_{q}\circ\chi_{p}\) is the identity, and similarly \(\chi_{p}\circ\chi_{q}\) is also the identity. Item (2) follows from direct computation. Item (3) follows upon noting that in each coordinate, the map \(z\mapsto z\left|z\right|^{p-2}\) is represented in polar coordinates as \(re^{i\theta}\mapsto r^{p-1}e^{i\theta}\). The claim follows from the definition of \(\Omega^{(p-1)}\).
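Properties (1) and (3) are also easy to confirm numerically; a minimal sketch (the exponent \(p\) and the sample point below are arbitrary choices):

```python
import numpy as np

def chi(p, z):
    """Componentwise twisting map: chi_p(z)_j = z_j * |z_j|**(p - 2)."""
    z = np.asarray(z, dtype=complex)
    return z * np.abs(z) ** (p - 2)

p = 3.0
q = p / (p - 1)                                      # Holder conjugate: 1/p + 1/q = 1
z = np.array([0.3 + 0.4j, -0.25j])

w = chi(p, z)
print(np.allclose(chi(q, w), z))                     # chi_q inverts chi_p: True
print(np.allclose(np.abs(w), np.abs(z) ** (p - 1)))  # polar form r -> r^(p-1): True
```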
**Proposition 7.3**.: _The Monomial Basis Kernels of \(A^{p}(\Omega)\) and \(A^{q}\big{(}\Omega^{(p-1)},\eta_{q}\big{)}\) are related via the twisting map in the following way:_
\[K^{\Omega}_{p,1}\left(\chi_{q}(z),w\right)=\overline{K^{\Omega(p-1)}_{q,\eta_{q }}(\chi_{p}(w),z)},\qquad z\in\Omega^{(p-1)},\,w\in\Omega. \tag{7.4}\]
_This "twisted" symmetry generalizes the conjugate symmetry of the Bergman kernel on \(\Omega\)._
Proof.: Recalling equation (3.1) above, observe that
\[\left|\chi_{q}^{*}e_{\alpha}(\zeta)\right|^{p}=|e_{\alpha}(\chi_{q}(\zeta))|^{ p}=|e_{\alpha}(\zeta)|^{(q-1)p}=|e_{\alpha}(\zeta)|^{q}\,.\]
Now using \(\chi_{q}\) to change variables, we have
\[\|e_{\alpha}\|_{L^{p}(\Omega)}^{p}=\int_{\Omega^{(p-1)}}|e_{\alpha}(\chi_{q}( \zeta))|^{p}\,\eta_{q}(\zeta)\,dV(\zeta)=\|e_{\alpha}\|_{L^{q}(\Omega^{(p-1)},\eta_{q})}^{q}\,,\]
which in particular shows the equality of the sets \(\mathcal{S}_{p}(\Omega)=\mathcal{S}_{q}\big{(}\Omega^{(p-1)},\eta_{q}\big{)}\) of allowable indices. Thus, for \(z\in\Omega^{(p-1)}\) and \(w\in\Omega\), we have
\[K^{\Omega}_{p,1}\left(\chi_{q}(z),w\right) =\sum_{\alpha\in\mathcal{S}_{p}(\Omega)}\frac{e_{\alpha}(\chi_{q} (z))\overline{\chi_{p}^{*}e_{\alpha}(w)}}{\|e_{\alpha}\|_{L^{p}(\Omega)}^{p}}\] \[=\sum_{\alpha\in\mathcal{S}_{q}(\Omega^{(p-1)},\eta_{q})}\frac{ \overline{e_{\alpha}(\chi_{p}(w))\overline{\chi_{q}^{*}e_{\alpha}(z)}}}{\|e_ {\alpha}\|_{L^{q}(\Omega^{(p-1)},\eta_{q})}^{q}}=\overline{K^{\Omega^{(p-1)}}_ {q,\eta_{q}}(\chi_{p}(w),z)}.\]
By setting \(p=2\), (7.4) recaptures the conjugate symmetry of the Bergman kernel.
### Adjoints and Duality
We now use the map \(\chi_{p}\) to give a "twisted" \(L^{2}\)-style pairing of the spaces \(L^{p}(\Omega)\) and \(L^{q}(\Omega^{(p-1)},\eta_{q})\):
\[\{f,g\}_{p}=\int_{\Omega}f\cdot\overline{\chi_{p}^{*}(g)}\,dV,\qquad f\in L^{p }(\Omega),\quad g\in L^{q}(\Omega^{(p-1)},\eta_{q}). \tag{7.5}\]
**Proposition 7.6**.: _The map \((f,g)\mapsto\{f,g\}_{p}\), is an isometric duality pairing of \(L^{p}(\Omega)\) and \(L^{q}\big{(}\Omega^{(p-1)},\eta_{q}\big{)}\). In other words, through \(\{\cdot,\cdot\}_{p}\) we obtain the dual space identification_
\[L^{p}(\Omega)^{\prime}\simeq L^{q}\big{(}\Omega^{(p-1)},\eta_{q}\big{)},\]
_where the operator norm of the functional \(\{\cdot,g\}_{p}\in L^{p}(\Omega)^{\prime}\) is equal to the norm of its representative function \(g\in L^{q}\big{(}\Omega^{(p-1)},\eta_{q}\big{)}\)._
Proof.: It is a classical fact that the ordinary \(L^{2}\)-style pairing of \(L^{p}(\Omega)\) with \(L^{q}(\Omega)\) given by
\[(f,h)\mapsto\int_{\Omega}f\cdot\overline{h}\,dV,\qquad f\in L^{p}(\Omega),\quad h\in L^{q}(\Omega)\]
is an isometric duality pairing. Proposition 7.1 says that \(\chi_{q}:\Omega^{(p-1)}\to\Omega\) is a diffeomorphism outside a set of measure zero, with inverse \(\chi_{p}:\Omega\to\Omega^{(p-1)}\), itself a diffeomorphism outside a set of measure zero. It therefore suffices to show that
\[\chi_{q}^{*}:L^{q}(\Omega)\to L^{q}(\Omega^{(p-1)},\eta_{q}) \tag{7.7}\]
is an isometric isomorphism of Banach spaces. Calculation shows
\[\|h\|_{L^{q}(\Omega)}^{q}=\int_{\Omega^{(p-1)}}|h\circ\chi_{q}(w)|^{q}\ \eta_{q}(w)dV(w)=\big{\|}\chi_{q}^{*}(h)\big{\|}_{L^{q}(\Omega^{(p-1)},\eta_{q} )}^{q}\,. \tag{7.8}\]
Since the inverse map \(\chi_{p}^{*}\) of \(\chi_{q}^{*}\) exists, it is surjective and the result follows by the closed-graph theorem.
**Proposition 7.9**.: _Suppose the Monomial Basis Projection of \(A^{p}(\Omega)\) is absolutely bounded on \(L^{p}(\Omega)\). Then under the pairing \(\{\cdot,\cdot\}_{p}\) defined in (7.5), its adjoint is the Monomial Basis Projection of \(A^{q}(\Omega^{(p-1)},\eta_{q})\), which is itself absolutely bounded in \(L^{q}(\Omega^{(p-1)},\eta_{q})\); i.e.,_
\[\big{\{}\boldsymbol{P}^{\Omega}_{p,1}f,g\big{\}}_{p}=\Big{\{}f,\boldsymbol{P} ^{\Omega^{(p-1)}}_{q,\eta_{q}}g\Big{\}}_{p}\,,\quad\text{for all}\,\,\,f\in L^{p}( \Omega),\,\,g\in L^{q}(\Omega^{(p-1)},\eta_{q}).\]
Proof.: Suppose that \(f\in L^{p}(\Omega)\) and \(g\in L^{q}(\Omega^{(p-1)},\eta_{q})\):
\[\big{\{}\boldsymbol{P}^{\Omega}_{p,1}f,g\big{\}}_{p}=\int_{\Omega }\boldsymbol{P}^{\Omega}_{p,1}f\cdot\overline{\chi_{p}^{*}g}\,dV =\int_{\Omega}\left(\int_{\Omega}K^{\Omega}_{p,1}(z,w)f(w)\,dV(w) \right)\overline{g(\chi_{p}(z))}\,dV(z) \tag{7.10}\] \[=\int_{\Omega}\left(\int_{\Omega}K^{\Omega}_{p,1}(z,w)\overline{ g(\chi_{p}(z))}dV(z)\right)f(w)\,dV(w), \tag{7.11}\]
where the change in order of integration can be justified as follows. By the assumption that \(\boldsymbol{P}^{\Omega}_{p,1}\) is absolutely bounded on \(L^{p}(\Omega)\), we see that the function on \(\Omega\) given by
\[z\longmapsto\int_{\Omega}\big{|}K^{\Omega}_{p,1}(z,w)\big{|}\cdot|f(w)|\,dV(w)\]
is in \(L^{p}(\Omega)\). Since \(g\in L^{q}(\Omega^{(p-1)},\eta_{q})\), using Tonelli's theorem we see that
\[\int_{\Omega\times\Omega}\big{|}K^{\Omega}_{p,1}(z,w)g(\chi_{p}(z))f(w)\big{|}\,dV(z,w)\] \[=\int_{\Omega}\left(\int_{\Omega}\big{|}K^{\Omega}_{p,1}(z,w)\big{|}\cdot|f(w)|\,dV(w)\right)|g(\chi_{p}(z))|\,dV(z)<\infty,\]
by Proposition 7.6. Fubini's theorem gives that (7.10) = (7.11). Now change variables in the inner integral of (7.11) by setting \(z=\chi_{q}(\zeta)\), where \(\zeta\in\Omega^{(p-1)}\) to obtain
\[(7.11) =\int_{\Omega}\left(\int_{\Omega^{(p-1)}}K^{\Omega}_{p,1}(\chi_{q}(\zeta),w)\,\overline{g(\zeta)}\,\eta_{q}(\zeta)\,dV(\zeta)\right)f(w)\,dV(w)\] \[=\int_{\Omega}\overline{\left(\int_{\Omega^{(p-1)}}K^{\Omega^{(p-1)}}_{q,\eta_{q}}(\chi_{p}(w),\zeta)\,g(\zeta)\,\eta_{q}(\zeta)\,dV(\zeta)\right)}f(w)\,dV(w), \tag{7.12}\]
where the second equality uses the twisted symmetry (7.4), while the desired right-hand side is
\[\Big{\{}f,\boldsymbol{P}^{\Omega^{(p-1)}}_{q,\eta_{q}}g\Big{\}}_{p}=\int_{\Omega}f(w)\,\overline{\boldsymbol{P}^{\Omega^{(p-1)}}_{q,\eta_{q}}g(\chi_{p}(w))}\,dV(w). \tag{7.13}\]
By the absolute boundedness hypothesis and the isometric isomorphism
\(\chi_{q}^{*}:L^{q}(\Omega)\to L^{q}(\Omega^{(p-1)},\eta_{q})\), we see that the operator on \(L^{q}(\Omega^{(p-1)},\eta_{q})\) given by
\[g\longmapsto\int_{\Omega^{(p-1)}}K^{\Omega^{(p-1)}}_{q,\eta_{q}}(\cdot,\zeta)g( \zeta)\eta_{q}(\zeta)\,dV(\zeta)\]
is bounded on \(L^{q}(\Omega^{(p-1)},\eta_{q})\). Now Proposition 3.22 shows (7.13) = (7.12).
**Proposition 7.14**.: _Suppose the Monomial Basis Projection of \(A^{p}(\Omega)\) is absolutely bounded on \(L^{p}(\Omega)\). Then the duality pairing of \(L^{p}(\Omega)\) and \(L^{q}(\Omega^{(p-1)},\eta_{q})\) by \(\{\cdot,\cdot\}_{p}\) restricts to a duality pairing of the holomorphic subspaces. In other words, we can identify the dual space_
\[A^{p}(\Omega)^{\prime}\simeq A^{q}\big{(}\Omega^{(p-1)},\eta_{q}\big{)}.\]
Proof.: We claim that the conjugate-linear continuous map \(A^{q}(\Omega^{(p-1)},\eta_{q})\to A^{p}(\Omega)^{\prime}\) given by \(h\mapsto\{\cdot,h\}_{p}\) is a homeomorphism of Banach spaces. To see surjectivity, let \(\phi\in A^{p}(\Omega)^{\prime}\), let \(\widetilde{\phi}:L^{p}(\Omega)\to\mathbb{C}\) be its Hahn-Banach extension, and let \(g\in L^{q}(\Omega^{(p-1)},\eta_{q})\) be such that \(\widetilde{\phi}(f)=\{f,g\}_{p}\). The existence of \(g\) follows from Proposition 7.6. We see from Proposition 7.9 that for each \(f\in A^{p}(\Omega)\) we have
\[\phi(f)=\widetilde{\phi}(f)=\{f,g\}_{p}=\{\boldsymbol{P}^{\Omega}_{p,1}f,g\}_{p}=\{f,\boldsymbol{P}^{\Omega^{(p-1)}}_{q,\eta_{q}}g\}_{p}\]
so the surjectivity follows since \(\boldsymbol{P}^{\Omega^{(p-1)}}_{q,\eta_{q}}g\in A^{q}\big{(}\Omega^{(p-1)}, \eta_{q}\big{)}\). Now if \(h\in A^{q}\big{(}\Omega^{(p-1)},\eta_{q}\big{)}\) is in the null-space of this map, i.e., for each \(f\in A^{p}(\Omega)\) we have \(\{f,h\}_{p}=0\), then for \(g\in L^{p}(\Omega)\):
\[\{g,h\}_{p}=\{g,\boldsymbol{P}^{\Omega^{(p-1)}}_{q,\eta_{q}}h\}_{p}=\{ \boldsymbol{P}^{\Omega}_{p,1}g,h\}_{p}=0.\]
This shows that \(h=0\), so the mapping is injective.
### Dual spaces on monomial polyhedra
The duality pairing in Section 7.2 should be contrasted with the usual Holder duality pairing of \(L^{p}\) and \(L^{q}\). On the disc \(\mathbb{D}\), the Holder pairing restricts to a duality pairing of the holomorphic subspaces, yielding the identification \(A^{p}(\mathbb{D})^{\prime}\simeq A^{q}(\mathbb{D})\). On the punctured disc, the Holder pairing fails to restrict to a holomorphic duality pairing and any attempt to identify \(A^{p}(\mathbb{D}^{*})^{\prime}\) with \(A^{q}(\mathbb{D}^{*})\) fails. This is discussed further in Section 8.3. For similar results, see [11].
**Theorem 7.15**.: _Let \(U=\mathbb{D}^{*}\) or \(\mathbb{D}\). The dual space of \(A^{p}(U)\) admits the identification_
\[A^{p}(U)^{\prime}\simeq A^{q}(U,\eta_{q}),\qquad\eta_{q}(\zeta)=(q-1)|\zeta|^{ 2q-4},\]
_via the pairing (7.5), sending \((f,g)\mapsto\{f,g\}_{p}\), where \(f\in A^{p}(U)\), \(g\in A^{q}(U,\eta_{q})\)._
Proof.: It was shown in Corollary 4.21 that the MBP of \(A^{p}(U)\) is absolutely bounded. Recalling the definition of a Reinhardt power in (3.6), it is clear that in our case \(U^{(m)}=U\) for every \(m>0\), so in particular for \(m=p-1\). Proposition 7.14 now gives the result.
The same behavior regarding Reinhardt powers seen on the disc and punctured disc continues to hold on all monomial polyhedra:
**Proposition 7.16**.: _Let \(\mathscr{U}\subset\mathbb{C}^{n}\) be a monomial polyhedron of the form (6.1). Then for each \(m>0\), the Reinhardt power \(\mathscr{U}^{(m)}=\mathscr{U}\)._
Proof.: Write \(\mathscr{U}=\mathscr{U}_{B}\), where the rows of \(B\) are given by \(b^{j}=(b^{j}_{1},\ldots,b^{j}_{n})\in\mathbb{Z}^{1\times n}\). From the definition of the Reinhardt power of a domain given in (3.6), we see
\[\mathscr{U}^{(m)} =\{z\in\mathbb{C}^{n}:(|z_{1}|^{\frac{1}{m}},\ldots,|z_{n}|^{ \frac{1}{m}})\in\mathscr{U}\}\] \[=\{z\in\mathbb{C}^{n}:|e_{b^{j}}\big{(}|z_{1}|^{\frac{1}{m}}, \ldots,|z_{n}|^{\frac{1}{m}}\big{)}|<1,\ 1\leq j\leq n\}\] \[=\big{\{}z\in\mathbb{C}^{n}:|e_{b^{j}}(z)|^{\frac{1}{m}}<1,\ 1 \leq j\leq n\big{\}}=\big{\{}z\in\mathbb{C}^{n}:|e_{b^{j}}(z)|<1,\ 1\leq j\leq n\big{\}}=\mathscr{U}.\]
**Theorem 7.17**.: _Let \(\mathscr{U}\) be a monomial polyhedron in \(\mathbb{C}^{n}\). The dual space of \(A^{p}(\mathscr{U})\) admits the identification_
\[A^{p}(\mathscr{U})^{\prime}\simeq A^{q}(\mathscr{U},\eta_{q}),\qquad\eta_{q}( \zeta)=(q-1)|\zeta_{1}\cdots\zeta_{n}|^{2q-4},\]
_via the pairing (7.5), sending \((f,g)\mapsto\{f,g\}_{p}\), where \(f\in A^{p}(\mathscr{U})\), \(g\in A^{q}(\mathscr{U},\eta_{q})\)._
Proof.: The absolute boundedness of the MBP of \(A^{p}(\mathscr{U})\) seen in Theorem 1.15 allows for the use of Proposition 7.14. In this setting \(\mathscr{U}^{(p-1)}=\mathscr{U}\) by Proposition 7.16, which yields the result.
## 8. Comparing the MBP to the Bergman projection on \(L^{p}\)
Let \(\Omega\subset\mathbb{C}^{n}\) be a bounded Reinhardt domain such that the origin lies on its boundary. Even in the simplest example, the punctured disc \(\mathbb{D}^{*}=\{z\in\mathbb{C}:0<|z|<1\}\), special features of the holomorphic function theory can be seen in the Riemann removable singularity theorem. Higher dimensional versions of this phenomenon were noticed by Sibony in [20] on the Hartogs triangle and later generalized in [1].
### The \(L^{p}\)-irregularity of the Bergman projection
In understanding the \(L^{p}\) function theory of \(\Omega\), it is useful to consider the behavior of the sets of \(p\)-allowable indices introduced in Section 1.4: \(\mathcal{S}_{p}(\Omega)=\{\alpha\in\mathbb{Z}^{n}:e_{\alpha}\in L^{p}(\Omega)\}\), as \(p\) traverses the interval \((1,\infty)\). It is clear that these sets can only shrink as \(p\) increases, since fewer monomials remain integrable as the exponent \(p\) in the integral \(\int_{\Omega}|e_{\alpha}|^{p}\,dV\) grows. However, the set \(\mathcal{S}_{p}(\Omega)\) is always nonempty, since \(\mathbb{N}^{n}\subset\mathcal{S}_{p}(\Omega)\), \(\Omega\) being bounded.
For example on the punctured disc, if \(p<2\), then \(\mathcal{S}_{p}(\mathbb{D}^{*})=\{\alpha\in\mathbb{Z}:\alpha\geq-1\}\), and if \(p\geq 2\), then \(\mathcal{S}_{p}(\mathbb{D}^{*})=\{\alpha\in\mathbb{Z}:\alpha\geq 0\}\). The exponent \(p=2\) where the set of indices shrinks is a _threshold_. The \(L^{p}\)-irregularity of the Bergman projection is closely related with these thresholds. It was shown in [1], that on a monomial polyhedron \(\mathscr{U}\), the Bergman projection is bounded in \(L^{p}\) if and only if \(p\in(q^{*},p^{*})\), where \(p^{*}=p^{*}(\mathscr{U})\) is the smallest threshold of \(\mathscr{U}\) bigger than \(2\) and \(q^{*}=q^{*}(\mathscr{U})\) is its Holder conjugate. Explicit values of \(p^{*}\) and \(q^{*}\) are given in Proposition 1.12.
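The punctured-disc computation is elementary: \(e_{\alpha}\in L^{p}(\mathbb{D}^{*})\) precisely when \(\int_{0}^{1}r^{p\alpha+1}\,dr<\infty\), i.e. when \(\alpha>-2/p\). A minimal sketch of the resulting index sets:

```python
def allowable_indices_punctured_disc(p, low=-4, high=3):
    """Integer alpha with z^alpha in L^p(D*): requires alpha > -2/p."""
    return [a for a in range(low, high) if a > -2.0 / p]

print(allowable_indices_punctured_disc(1.5))   # p < 2:  [-1, 0, 1, 2]
print(allowable_indices_punctured_disc(2.0))   # p >= 2: [0, 1, 2]
print(allowable_indices_punctured_disc(3.0))   # still [0, 1, 2]: threshold at p = 2
```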
Outside the interval \((q^{*},p^{*})\), the \(L^{p}\)-boundedness of the Bergman projection on the monomial polyhedron \(\mathscr{U}\) fails in different ways depending on whether \(p\geq p^{*}\) or \(p\leq q^{*}\). Since \(\mathscr{U}\) is bounded, we have \(L^{p}(\mathscr{U})\subset L^{2}(\mathscr{U})\) if \(p\geq p^{*}>2\), so the integral operator defining the Bergman projection in (1.1) is defined for each \(f\in L^{p}(\mathscr{U})\). The failure of boundedness of the Bergman projection corresponds to the fact that there are functions \(f\in L^{p}(\mathscr{U})\) for which the projection \(\boldsymbol{B}^{\mathscr{U}}f\) is not in \(A^{p}(\mathscr{U})\). It is easy to give an explicit example when \(\mathscr{U}=\mathbb{H}\), the Hartogs triangle. Suppose \(p\geq p^{*}(\mathbb{H})=4\) and let \(f(z)=\overline{z}_{2}\), which is bounded and therefore in \(L^{p}(\mathbb{H})\). A computation shows that there is a constant \(C\) such that \(\boldsymbol{B}^{\mathbb{H}}f(z)={Cz_{2}}^{-1}\notin L^{p}(\mathbb{H})\). This idea can be generalized to an arbitrary monomial polyhedron \(\mathscr{U}\) to show that if \(p\geq p^{*}\), there is a function in \(L^{p}(\mathscr{U})\) which projects to a monomial which is in \(L^{2}(\mathscr{U})\) but not in \(L^{p}(\mathscr{U})\). In [1] the range of the map \(\boldsymbol{B}^{\mathbb{H}}:L^{p}(\mathbb{H})\to L^{2}(\mathbb{H})\) for \(p\geq 4\) was identified as a weighted \(L^{p}\)-Bergman space strictly larger than \(L^{p}(\mathbb{H})\), and a similar result holds on any monomial polyhedron. Recent work of Huo and Wick [14] shows that \(\boldsymbol{B}^{\mathbb{H}}\) is of weak-type (4,4), and this has been extended to generalized Hartogs triangles in [1]. For \(p\leq q^{*}\), the situation is worse:
**Proposition 8.1**.: _Let \(1<p\leq q^{*}(\mathscr{U})\) and \(z\in\mathscr{U}\). There is a function \(f\in L^{p}(\mathscr{U})\) such that the integral_
\[\int_{\mathscr{U}}B^{\mathscr{U}}(z,w)f(w)\,dV(w)\]
_diverges. Consequently, there is no way to extend the Bergman projection to \(L^{p}(\mathscr{U})\) using its integral representation._
Proof.: Let \(q\) denote the Holder conjugate of \(p\) so that \(q\geq p^{*}\). The holomorphic function on the Reinhardt domain \(\mathscr{U}\) given by \(g(\zeta)=B(\zeta,z)\) has Laurent expansion
\[g(\zeta)=\sum_{\alpha\in\mathcal{S}_{2}(\mathscr{U})}\frac{\overline{z}^{\alpha}}{\|e_{\alpha}\|_{2}^{2}}\zeta^{\alpha}.\]
Since \(q\geq p^{*}\), and the set of integrable monomials shrinks at \(p^{*}\), it follows that there is a monomial \(e_{\alpha}\in A^{2}(\mathscr{U})\setminus A^{q}(\mathscr{U})\). Since this non-\(A^{q}\) monomial appears in the above Laurent series with a nonzero coefficient, and by Theorem 2.12, the Laurent expansion of a function in \(A^{q}\) can only have monomials which are in \(A^{q}\), it follows that \(g\notin A^{q}(\mathscr{U})\). By symmetry therefore, \(B(z,\cdot)\not\in L^{q}(\mathscr{U})\). It now follows that there is a function \(f\in L^{p}(\mathscr{U})\) such that the integral above does not converge.
When \(\mathscr{U}=\mathbb{H}\), one can show by explicit computation that if \(1<p<\frac{4}{3}=q^{*}(\mathbb{H})\), we can take \(f(w)=w_{2}^{-3}\) in the above result for each \(z\in\mathbb{H}\). It was shown in [13] that \(\boldsymbol{B}^{\mathbb{H}}\) fails to be weak-type \((\frac{4}{3},\frac{4}{3})\), and this was extended in [11] to generalized Hartogs triangles. But in light of Proposition 8.1, we see that \(\boldsymbol{B}^{\mathbb{H}}\) does not even exist as an everywhere defined operator on \(L^{4/3}(\mathbb{H})\).
In contrast with the above, Theorem 1.15 guarantees that for \(1<p<\infty\) and \(\mathscr{U}\) a monomial polyhedron, that the MBP \(\boldsymbol{P}^{\mathscr{U}}_{p,1}\) is a bounded operator from \(L^{p}(\mathscr{U})\) onto \(A^{p}(\mathscr{U})\), and Theorem 3.13 says that for \(z\in\mathscr{U}\), the function \(K^{\mathscr{U}}_{p,1}(z,\cdot)\in L^{q}(\mathscr{U})\), where \(\frac{1}{p}+\frac{1}{q}=1\).
### Failure of surjectivity
Even if the Bergman projection can be given a bounded extension to \(L^{p}\), it need not be surjective onto \(A^{p}\) for \(p<2\), as one sees in the case of the punctured disc. Here, since \(A^{2}(\mathbb{D}^{*})\) and \(A^{2}(\mathbb{D})\) are identical, the Bergman kernels have the same formula. The Bergman projection on \(\mathbb{D}^{*}\) consequently extends to a bounded operator on \(L^{p}(\mathbb{D}^{*})\) for every \(1<p<\infty\), but fails to be surjective onto \(A^{p}(\mathbb{D}^{*})\) for \(p\in(1,2)\). This happens because the range of the Bergman projection can be naturally identified with \(A^{p}(\mathbb{D})\), and when \(1<p<2\), the space \(A^{p}(\mathbb{D})\) is a strict subspace of \(A^{p}(\mathbb{D}^{*})\) (for example the function \(g(z)=z^{-1}\) belongs to \(A^{p}(\mathbb{D}^{*})\setminus A^{p}(\mathbb{D})\)). In particular, \(\boldsymbol{B}^{\mathbb{D}^{*}}\) is not the identity on \(A^{p}(\mathbb{D}^{*})\) and its nullspace is the one-dimensional span of \(g(z)=z^{-1}\).
On the Hartogs triangle, the Bergman projection is bounded on \(L^{p}(\mathbb{H})\) for \(\frac{4}{3}<p<4\), but is not surjective onto \(A^{p}(\mathbb{H})\) for \(\frac{4}{3}<p<2\). Let \(\mathcal{N}\subset A^{p}(\mathbb{H})\) be the closed subspace spanned by the monomials in \(A^{p}(\mathbb{H})\setminus A^{2}(\mathbb{H})\). One sees from a computation that the monomials in \(A^{p}(\mathbb{H})\setminus A^{2}(\mathbb{H})\) are \(e_{\alpha}\) with \(\alpha_{1}\geq 0\) and \(\alpha_{1}+\alpha_{2}=-2\). Then one can verify using orthogonality of \(L^{p}\) and \(L^{q}\) monomials that the nullspace of \(\boldsymbol{B}^{\mathbb{H}}\) restricted to \(A^{p}(\mathbb{H})\) is \(\mathcal{N}\).
In contrast, the MBP of \(A^{p}(\mathscr{U})\) accounts for _all_ monomials appearing in the Banach-space basis \(\{e_{\beta}:\beta\in\mathcal{S}_{p}(\mathscr{U})\}\), and Corollary 1.16 shows that for \(1<p<\infty\), \(\boldsymbol{P}^{\mathscr{U}}_{p,1}\) is a bounded _surjective_ projection of \(L^{p}(\mathscr{U})\) onto \(A^{p}(\mathscr{U})\).
### The Bergman projection and holomorphic dual spaces
The following is a reformulation of [1, Theorem 2.15]:
**Theorem 8.2**.: _Suppose that the following two conditions hold on a domain \(U\subset\mathbb{C}^{n}\)._
1. _The absolute Bergman operator_ \((\boldsymbol{B}^{U})^{+}:L^{p}(U)\to L^{p}(U)\) _is bounded._
2. _The Bergman projection acts as the identity operator on both_ \(A^{p}(U)\) _and_ \(A^{q}(U)\)_._
_Then the sesquilinear Holder pairing restricts to a duality pairing of \(A^{p}(U)\) with \(A^{q}(U)\):_
\[\langle f,g\rangle=\int_{U}f\overline{g}\,dV,\qquad f\in A^{p}(U),\quad g\in A ^{q}(U), \tag{8.3}\]
_providing the dual space identification \(A^{p}(U)^{\prime}\simeq A^{q}(U)\)._
Conditions (1) and (2) both hold, for instance, on smoothly bounded strongly pseudo-convex domains (see [10] and [11]), thus yielding the dual space identification. But when one of the conditions (1) or (2) fails, the conclusion can fail.
On the punctured disc \(\mathbb{D}^{*}\subset\mathbb{C}\), (1) always holds but (2) fails for all \(p\neq 2\); it can be shown that under the pairing (8.3), \(A^{p}(\mathbb{D}^{*})^{\prime}\) can only be identified with \(A^{q}(\mathbb{D}^{*})\) if \(p=q=2\). On the Hartogs triangle \(\mathbb{H}\), (1) holds if \(\frac{4}{3}<p<4\), but (2) never holds for a \(p\) in this range, as we saw in Section 8.2. The pairing (8.3) is not a duality pairing on \(\mathbb{H}\) for \(\frac{4}{3}<p<4\) unless \(p=2\). The mapping \(A^{q}(\mathbb{H})\to A^{p}(\mathbb{H})^{\prime}\) given by the pairing is not injective if \(2<p<4\) and not surjective if \(\frac{4}{3}<p<2\).
In contrast with the above, the duality theory of Section 7.2 characterizes duals of Bergman spaces of Reinhardt domains via the pairing (7.5) whenever the MBP is absolutely bounded. We saw that Theorem 7.15 gives a concrete description of the dual space of \(A^{p}(\mathbb{D}^{*})\), and for monomial polyhedra Theorem 7.17 does the same.
|
2301.11562 | Arbitrariness and Social Prediction: The Confounding Role of Variance in
Fair Classification | Variance in predictions across different trained models is a significant,
under-explored source of error in fair binary classification. In practice, the
variance on some data examples is so large that decisions can be effectively
arbitrary. To investigate this problem, we take an experimental approach and
make four overarching contributions: We: 1) Define a metric called
self-consistency, derived from variance, which we use as a proxy for measuring
and reducing arbitrariness; 2) Develop an ensembling algorithm that abstains
from classification when a prediction would be arbitrary; 3) Conduct the
largest to-date empirical study of the role of variance (vis-a-vis
self-consistency and arbitrariness) in fair binary classification; and, 4)
Release a toolkit that makes the US Home Mortgage Disclosure Act (HMDA)
datasets easily usable for future research. Altogether, our experiments reveal
shocking insights about the reliability of conclusions on benchmark datasets.
Most fair binary classification benchmarks are close-to-fair when taking into
account the amount of arbitrariness present in predictions -- before we even
try to apply any fairness interventions. This finding calls into question the
practical utility of common algorithmic fairness methods, and in turn suggests
that we should reconsider how we choose to measure fairness in binary
classification. | A. Feder Cooper, Katherine Lee, Madiha Zahrah Choksi, Solon Barocas, Christopher De Sa, James Grimmelmann, Jon Kleinberg, Siddhartha Sen, Baobao Zhang | 2023-01-27T06:52:04Z | http://arxiv.org/abs/2301.11562v8 | # Variance, Self-Consistency, and Arbitrariness in Fair Classification
###### Abstract
In fair classification, it is common to train a model, and to compare and correct subgroup-specific error rates for disparities. However, even if a model's classification decisions satisfy a fairness metric, it is not necessarily the case that these decisions are equally confident. This becomes clear if we measure variance: We can fix everything in the learning process except the subset of training data, train multiple models, measure (dis)agreement in predictions for each test example, and interpret disagreement to mean that the learning process is more unstable with respect to its classification decision. Empirically, some decisions can in fact be so unstable that they are effectively _arbitrary_. To reduce this arbitrariness, we formalize a notion of _self-consistency_ of a learning process, develop an ensembling algorithm that provably increases self-consistency, and empirically demonstrate its utility to often improve both fairness and accuracy. Further, our evaluation reveals a startling observation: Applying ensembling to common fair classification benchmarks can significantly reduce subgroup error rate disparities, without employing common pre-, in-, or post-processing fairness interventions. Taken together, our results indicate that variance, particularly on small datasets, can muddle the reliability of conclusions about fairness. One solution is to develop larger benchmark tasks. To this end, we release a toolkit that makes the Home Mortgage Disclosure Act datasets easily usable for future research.
Machine Learning, ICML
## 1 Introduction
Consider the following experiment: We fit 10 logistic regression models on different training sets from the COMPAS benchmark (Larson et al., 2016), and compare the resulting classifications for two individuals reserved in a test set. As shown in Figure 1, while the 10 models indicate complete agreement for how to classify Individual 1, they disagree completely on Individual 2. If we were to pick one model to use, there would be no effect on how Individual 1 is classified; however, for Individual 2, the prediction is effectively random. We can interpret this to mean that the learning process that trained these models is more unsure about how to classify Individual 2. The classifications for Individual 2 exhibit high _variance_; they are very sensitive to the training data on which the models were trained. This presents a problem if we only analyze one model with one deterministic classification decision. If, by happenstance, we had sampled slightly different training examples from the available data, the decision could very well flip to the other class. In this respect, the classification is effectively _arbitrary_, which is especially unsettling for decisions that can have significant consequences on individuals' lives (Citron and Pasquale, 2014; Cooper et al., 2022; Dickel and Hellman, 2022; Black et al., 2022).
Intuitively, it does not seem fair for individuals to be subject to decisions for which the learning process is arbitrary. Moreover, this arbitrariness can also bring about discrimination, if a model's decisions are systematically more arbitrary for certain demographic groups. However, such unfairness is not captured in popular fairness definitions, which are commonly applied to evaluate the fairness of a _single model_(Hardt et al., 2016; Pleiss et al., 2017; Kleinberg et al., 2017; Chouldechova, 2017; Calders et al., 2009). Instead, it is made visible by examining empirical approximations of the distribution over _possible models_ that a learning process could produce by training on different samples of the dataset. At this level, it becomes clear that even if a particular deterministic classifier achieves fairness with respect to subgroup-specific error rates, it is not necessarily the case that its underlying classification decisions are equally confident.
Figure 1: Comparing predictions \(\hat{y}\) for 2 individuals, according to \(10\) logistic regression models trained on bootstrap replicates.
In this paper, we formalize this intuition, conceiving of classifications like those for Individual 2 as reflective of a lack of _self-consistency_ in the learning process that produced them. Our aim is to develop a procedure for improving self-consistency, in order to improve both fairness and reliability in conclusions drawn about model performance. To these ends, we make the following contributions:
* We motivate a definition of self-consistency based on variance, and illustrate how it reveals novel insights concerning arbitrariness and unfairness in ML (Section 3);
* We propose an ensembling algorithm that provably increases self-consistency. Our algorithm predicts for inputs that attain a user-specified level of confidence, and abstains otherwise (Section 4);
* We validate our algorithm across datasets and models. These results demonstrate the importance of self-consistency in fair classification. Further, they dispute the validity of COMPAS and South German Credit as reliable fairness benchmarks (Section 5).
Taken together, our results indicate that variance, particularly in small datasets, can muddle the reliability of conclusions about fairness interventions. One possible solution is to develop larger benchmark tasks. To this end, we build and release a toolkit that makes the Home Mortgage Disclosure Act datasets (HMDA) easily usable for future research.
## 2 Preliminaries
To begin, we need to establish some background notation and definitions for supervised binary classification in algorithmic fairness settings. Consider a distribution \(p(\cdot)\), from which we can sample _examples_\((x,g,o)\), where \(x\in\mathbb{R}^{m}\) are _instances_ with \(m\) features, \(g\) is a group of _protected attributes_ that we do not use for learning (e.g., race, gender), and \(o\in\mathcal{O}\) are the associated _observed labels_. \(\mathcal{O}\subseteq\mathcal{Y}\), where \(\mathcal{Y}=\{0,1\}\) is the label space. From \(p(\cdot)\), we can sample training datasets of size \(n\), i.e., \(\{(x,g,o)\}_{i=1}^{n}\), with \(\mathcal{D}\) representing the set of all such \(n\)-sized datasets. A _learning process_ runs a _training procedure_\(\mathcal{A}\) on training dataset \(D_{k}\in\mathcal{D}\) to produce a _classifier_\(h_{D_{k}}\in\mathcal{H}\), where \(\mathcal{H}\) is the hypothesis class consisting of possible valid models. That is, a trained classifier \(h_{D_{k}}\) provides a deterministic mapping from the instance space to the label space, i.e. \(h_{D_{k}}:\mathcal{X}\mapsto\mathcal{Y}\). \(\hat{y}=h_{D_{k}}(x)\) is the _predicted label_ (or, simply, _prediction_) for \(x\). A classifier \(h_{D_{k}}\) has an underlying regressor \(r_{D_{k}}:\mathcal{X}\mapsto[0,1]\), which computes the probability of positive (i.e., \(1\)) classification for each \(x\). We produce \(h_{D_{k}}\) from \(r_{D_{k}}\) by applying a threshold \(\tau\in[0,1]\) to the outputs of \(r_{D_{k}}\); the classification decision rule is \(h_{D_{k}}(x)=\mathds{1}[r_{D_{k}}(x)\geq\tau]\), which evaluates to \(0\) when the output regressor probability for \(x\) is less than \(\tau\), and to \(1\) otherwise.
We refer to the distribution over possible trained models as \(\mu\). Training procedure \(\mathcal{A}\) produces \(h_{D_{k}}\sim\mu\) by minimizing the _loss_ of predictions \(\hat{y}\) with respect to their associated observed labels \(o\) in \(D_{k}\). This loss is computed by a chosen _loss function_\(\ell:\mathcal{Y}\times\mathcal{Y}\mapsto\mathbb{R}\). To reason about the expected performance of a particular trained model \(h_{D_{k}}\), we compute predictions for a _test set_ of fresh examples and calculate their loss. Since these examples have not influenced training, we can understand their predictive loss to be an estimate of the _error_ of \(h_{D_{k}}\) for the task at hand. In practice this estimate is dependent on the specific dataset \(D_{k}\) used for training. To reason more generally about the error of possible models produced by a specific learning process, we instead need to consider the _expected error_, \(\mathbb{E}_{D,O}[\ell(o,\hat{y})]\). This computes the loss with respect to the distribution of all possible trained models \(\mu\).
Of course, in practice we typically only have access to one training dataset, not a distribution \(p(\cdot)\) from which we can sample fresh training examples. As a result, to train multiple \(h_{D_{k}}\) and estimate expected error, we bootstrap the available data (Efron & Tibshirani, 1993) to generate replicates \(D_{1},D_{2}\ldots,D_{B}\), which simulates drawing different training datasets from a distribution. In fair classification, to evaluate expected error it is common to use _0-1 loss_\(\triangleq\mathds{1}[\hat{y}\neq o]\) or _cost-sensitive loss_, which assigns asymmetric costs for false positives, FP, and false negatives, FN(Agarwal et al. (2018); Elkan (2001), Appendix B.2.1). Common fair classification definitions, such as Equality of Opportunity (Hardt et al., 2016), further analyze error by computing disparities across group-specific error rates FPR and FNR.
## 3 Variance and Self-Consistency
FPR and FNR represent only one way to decompose error. Additionally, we can analyze error's different statistical sources -- its constituent _noise_, _bias_, and _variance_(AbuMostafa et al., 2012; Geman et al., 1992). Noise and bias depend on the Bayes optimal classifier, which we typically do not have access to in practice (Appendix B.1). In contrast, since variance just depends on the underlying training data, we can estimate it empirically. Following our intuition from Figure 1, we can bootstrap the training dataset to simulate drawing from a distribution (Efron, 1979), use these bootstrap replicates to train multiple models, and then compare the resulting models' predictions for the same test example to see how confident the learning process is with respect to classifying the example.
**Defining variance.** We therefore begin to formalize our analysis of arbitrariness by considering variance. Formally,
**Definition 3.1**.: Given the distribution over possible trained models \(\mu\) and loss \(\ell\), the _variance_ for fresh instance \((x,g)\) is
\[\mathtt{Var}\big{(}\mathcal{A},\mathcal{D},(x,g)\big{)} \triangleq\mathbb{E}_{h_{D_{i}}\sim\mu,h_{D_{j}}\sim\mu}\Big{[} \ell\Big{(}h_{D_{i}}(x),h_{D_{j}}(x)\Big{)}\Big{]}\] \[=\frac{1}{n(n-1)}\sum_{i\neq j}\ell\Big{(}h_{D_{i}}(x),h_{D_{j}}( x)\Big{)}.\]
This definition uses the loss to compare all possible model predictions to each other for the same test example, and
computes the average over these pairwise comparisons. For 0-1 or cost-sensitive loss, we can make the following simplifying observation. We denote \(\hat{Y}\) the multiset of predictions for models \(h_{D_{1}},h_{D_{2}},\ldots,h_{D_{n}}\) on \((x,g)\), with \(|\hat{Y}|=n=\alpha+\beta>1\), and \(\alpha\) and \(\beta\) representing the counts of \(0\) and \(1\) predictions, respectively. It follows that
\[\texttt{Var}\big{(}\mathcal{A},\mathcal{D},(x,g)\big{)}=\frac{(C_{01}+C_{10}) \alpha\beta}{n(n-1)}, \tag{1}\]
with \(C_{01},C_{10}\in\mathbb{R}^{+}\) denoting the FP- and FN-associated costs, respectively, which we can relate directly to the decision threshold \(\tau\) used in \(\mathcal{A}\) (Appendix B.2). Using the bootstrap method (Efron, 1979; Efron and Tibshirani, 1993), we can train a concrete number of models \(n\) and compute an approximation of Definition 3.1, \(\widehat{\texttt{Var}}\).
We can now formally support the claim that average error rates like FPR and FNR (and their empirical counterparts, \(\widehat{\text{FPR}}\) and \(\widehat{\text{FNR}}\)), on which popular fairness definitions depend, do not expose variance (Hardt et al., 2016; Jiang et al., 2020; Calders et al., 2009; Pleiss et al., 2017). This is clear because, even when averaging over multiple models \(n\), \(\widehat{\text{FPR}}\) and \(\widehat{\text{FNR}}\) compare classifications \(\hat{y}=h_{D_{k}}(x)\) to the observed labels \(o\); they do not compare the different \(\hat{y}\) to each other.
**Defining and illustrating self-consistency.** By considering costs \(C_{01}\) and \(C_{10}\), variance encodes a measure of magnitude. However, magnitude is not especially meaningful for our purposes; it is the relative, not absolute, cost of \(C_{01}\) and \(C_{10}\) that define the classification decision threshold \(\tau\) (Section 2). In order to focus unambiguously on the (dis)agreement part of variance, we define a notion of _self-consistency_:
**Definition 3.2**.: Given the distribution over possible trained models \(\mu\), the _self-consistency_ for fresh instance \((x,g)\) is
\[\texttt{SC}\big{(}\mathcal{A},\mathcal{D},(x,g)\big{)} \triangleq\mathbb{P}_{h_{D_{i}}\sim\mu,h_{D_{j}}\sim\mu}\{h_{D_{ i}}(x)=h_{D_{j}}(x)\}\] \[=\frac{1}{n(n-1)}\sum_{i\neq j}\mathds{1}[h_{D_{i}}(x)=h_{D_{j}}( x)].\]
In words, SC models the probability that two models produced by the same learning process, on different training data subsets, agree on their predictions for the same test example. In practice, we can compute
\[\texttt{SC}\big{(}\mathcal{A},\mathcal{D},(x,g)\big{)}=1-\frac{2\alpha\beta} {n(n-1)}, \tag{2}\]
where this equivalence follows from writing Definition 3.2 in terms of Definition 3.1 for 0-1 loss. SC is defined on \([0.5,1]\), with \(0.5\) representing minimal and \(1\) representing maximal self-consistency. We choose to define self-consistency on this range so that its measure coheres with the intuition in Figure 1, with Individuals 1 and 2 exhibiting \(100\%\) and \(50\%\) self-consistency, respectively (Appendix C). As with variance, we can compute empirical approximations of self-consistency, \(\hat{\textsc{SC}}\), with larger \(n\) corresponding to higher-quality approximations.
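For 0-1 loss, eq. (2) is a one-line computation; a minimal sketch (illustrative naming) follows, reproducing the Figure 1 intuition of maximal and (close to) minimal self-consistency.

```python
from collections import Counter

def self_consistency(preds):
    """Eq. (2): 1 - 2 * alpha * beta / (n * (n - 1)) for a multiset of 0/1 votes."""
    n = len(preds)
    counts = Counter(preds)
    alpha, beta = counts[0], counts[1]
    return 1.0 - 2.0 * alpha * beta / (n * (n - 1))

print(self_consistency([1] * 100))             # Individual 1: 1.0 (maximal SC)
print(self_consistency([0] * 50 + [1] * 50))   # Individual 2: ~0.5 (minimal SC)
```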
Beyond estimating self-consistency for an individual test example, we can also do so across the entire test set and with respect to subgroup membership. We provide illustrative examples from two of the most common fair classification benchmarks, COMPAS and Old Adult (Fabris et al., 2022). In Figure 2, we plot the distribution of SC over the test set: The \(y\)-axis shows the cumulative proportion of the test set that has attained the \(x\)-specified level of SC (defined on \([0.5,1]\)). To obtain these results, we split the available data into train and test sets, and bootstrap the train set 100 times to train different models. We repeat this process on 10 train/test splits, and the resulting confidence intervals (inset) indicate that our SC estimates are stable.
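The estimation protocol just described can be sketched as follows; the model class, the number of replicates, and the helper names are illustrative choices rather than a prescription for reproducing our exact results.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.utils import resample

def estimate_sc(X, y, n_replicates=100, seed=0):
    """Bootstrap the train set, fit one model per replicate, and return the
    per-test-example empirical self-consistency (eq. 2) of their predictions."""
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=seed)
    votes = np.zeros((n_replicates, len(X_te)), dtype=int)
    for b in range(n_replicates):
        Xb, yb = resample(X_tr, y_tr, random_state=seed + b)   # bootstrap replicate D_b
        votes[b] = RandomForestClassifier(random_state=seed + b).fit(Xb, yb).predict(X_te)
    n = n_replicates
    alpha = (votes == 0).sum(axis=0)                 # 0-votes per test example
    beta = n - alpha                                 # 1-votes per test example
    return 1.0 - 2.0 * alpha * beta / (n * (n - 1))  # SC per test example
```

The distribution of these per-example values, pooled over independent train/test splits, is what Figure 2 plots as a cumulative proportion.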
\begin{table}
\begin{tabular}{l c c c c} \hline \hline \multicolumn{5}{c}{**COMPAS**} \\ \hline & **Err** & **FPR** & **FNR** & **SC** \\ \hline
**Total** & \(.366\pm.005\) & \(.173\pm.008\) & \(.193\pm.007\) & \(.73\pm.003\) \\ \hline \(g=\) NW & \(.369\pm.005\) & \(.18\pm.007\) & \(.19\pm.008\) & \(.732\pm.004\) \\ \hline \(g=\) W & \(.359\pm.013\) & \(.16\pm.012\) & \(.199\pm.011\) & \(.727\pm.008\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Mean \(\pm\) STD in error rates for the experiments in Figure 2. See Appendix E for more detailed results.
Figure 2: Cumulative proportion of test instances that attain the given level of self-consistency. We train random forest classifiers to estimate SC with 101 bootstrap replicates, and repeat with 10 train/test splits to produce confidence intervals.
Figure 2 further illuminates the importance of self-consistency with respect to its relationship to arbitrariness and discrimination. For one, both sub-figures show that self-consistency varies drastically across test examples, underscoring that classification decisions are far from equally confident. In COMPAS, about one-half of test examples are under 70% self-consistent; nearly one-quarter are effectively 50% self-consistent, meaning they resemble Individual 2 in Figure 1 and thus their predictions are essentially arbitrary. These differences in confidence persist despite the fact that the 100 models plotted in Figure 2(a) exhibit relatively small disparity between subgroup Err, FPR, and FNR (Table 1, left). In short, it is possible to come close to satisfying some fair classification metrics, even if the underlying classifications carry very different levels of confidence (Section 5).
The plot for Old Adult (Figure 2(b)) shows that self-consistency can also differ according to subgroup-\(g\) membership. It is possible for the degree of arbitrariness to be _systematically worse_ for a particular demographic \(g\). While the lack of self-consistency is not as extreme as it is for COMPAS -- the majority of test examples for Old Adult exhibit over 90% SC -- it is unequally distributed, falling disproportionately on the Male subgroup. Given the relationship between variance (Definition 3.1) and self-consistency (Definition 3.2), these differences in subgroup-conditional self-consistency are likely responsible for at least some of the observed discrepancies in subgroup-conditional error rates (Table 1, right). Of course, noise and bias likely also contribute to unfairness. Due to these other sources of error, it is possible for models to produce consistent predictions, but for those predictions to be consistently _wrong_ (with respect to the observed label, i.e., \(\hat{y}\neq o\)). There are test examples for both plots in Figure 2 for which all 100 predictions \(\hat{y}\neq o\) (Appendix E.3). Even in these cases, measuring SC can still highlight useful information. For example, if models tend to be consistently wrong for examples in a particular subgroup, it is worth considering that there may be _label bias_. That is, the observed label available in the dataset may be tainted by past unfairness in human decision processes, and thus not reflective of desired learning outcomes (Cooper and Abrams, 2021; Wick et al., 2019; Jiang and Nachum, 2020). We defer this line of investigation to future work.
## 4 Accounting for Self-Consistency
In the remainder of this work, we study the impact of improving self-consistency in fair classification tasks. Our goal is to reduce arbitrariness induced by a lack of self-consistency in classifying individual examples, and to see if doing so can also reduce the systematic differences in arbitrariness that may exist across subgroups (Figure 2(b)). At first glance, bootstrap aggregation (bagging) seems a promising candidate, as it has remained one of the most successful variance reduction algorithms for decades (Breiman, 1996). However, it does not naturally handle the arbitrariness problem we describe in Section 3. The bagging aggregation rule picks the majority-vote classification, and therefore embeds the notion that predicting slightly-better-than-random is a sufficient tie-breaking strategy.
Instead, we suggest a simple framework that modifies bagging to account for self-consistency (Algorithm 1). The key difference is to add a user-selected, minimally-acceptable level of self-consistency \(\kappa\in[0.5,1]\) to threshold the test examples being classified, with higher \(\kappa\) instilling a more significant degree of confidence. For examples that fail to achieve SC that is at least \(\kappa\), Algorithm 1 opts to Abstain rather than produce a \(\hat{y}\in\{0,1\}\). The easiest way to implement the semantics of Algorithm 1 is to modify traditional bagging with the example Aggregate function shown in lines 11-16.1 Algorithm 1 can also be instantiated to use a _super-ensembling_ strategy. In this case, rather than training an ensemble of \(\eta\) single models, we can train and ensemble \(\eta\) bagged classifiers to reduce variance (line 5), and then aggregate their outputs to account for SC-level \(\kappa\).
Footnote 1: We also experiment with averaging the outputs of regressors \(r\) and applying threshold \(\tau\) to produce predictions.
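To make these semantics concrete, here is a minimal sketch of ensembling with confidence in the spirit of Algorithm 1; the `ABSTAIN` sentinel, function names, and the simple majority fallback are our illustrative choices, not the exact pseudocode of Algorithm 1.

```python
import numpy as np

ABSTAIN = -1   # illustrative sentinel for "no prediction"

def aggregate_with_confidence(votes, kappa=0.75):
    """Aggregate an ensemble's 0/1 votes for one test example: predict the
    majority class only if the votes' empirical SC (eq. 2) is at least kappa."""
    n = len(votes)
    alpha = int(np.sum(votes == 0))
    beta = n - alpha
    sc = 1.0 - 2.0 * alpha * beta / (n * (n - 1))
    if sc < kappa:
        return ABSTAIN
    return 0 if alpha > beta else 1

def predict_with_confidence(models, X_test, kappa=0.75):
    """`models` may hold individual classifiers (ensembling) or bagged
    classifiers (super-ensembling); both satisfy the same semantics."""
    votes = np.stack([m.predict(X_test) for m in models])   # (n_models, n_test)
    return np.array([aggregate_with_confidence(votes[:, j], kappa)
                     for j in range(votes.shape[1])])
```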
**Measuring performance.** To validate our approach, we need to confirm that it improves self-consistency. And to examine its impact on fairness, we need to measure changes in the subgroup-conditional error rates. To measure self-consistency, we adapt Definition 3.2 to account for abstentions. We define abstentions to agree with both \(0\) and \(1\) predictions. This makes sense intuitively: Algorithm 1 abstains to avoid making predictions that lack self-consistency, so abstaining should not increase disagreement between
predictions. It follows that we can continue to use Definition 3.2, but with one small adjustment. Instead of the total number of predictions \(n=\alpha+\beta\), with \(\alpha\) and \(\beta\) corresponding to \(0\) and \(1\) predictions, respectively, we now allow for \(n\geq\alpha+\beta\), in order to account for possibly some nonzero number of abstentions. It is easy to show that any algorithm that meets the semantics of this framework will have improved self-consistency (Appendix D.2). Of course, this means that we could achieve perfect self-consistency by _always_ abstaining. It is therefore important to confirm empirically that our super-ensembling algorithm does not abstain too frequently, which necessarily depends on the datasets examined in practice.
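The adjustment amounts to keeping \(n\) as the total number of possible votes while only cast votes contribute to \(\alpha\) and \(\beta\); a small sketch (our naming) makes this explicit.

```python
def self_consistency_with_abstentions(votes):
    """Eq. (2) with n >= alpha + beta: abstentions count toward n but, since they
    agree with both 0 and 1 votes, they never add to the disagreement term."""
    n = len(votes)                                  # all possible votes
    alpha = sum(1 for v in votes if v == 0)         # cast 0-votes
    beta = sum(1 for v in votes if v == 1)          # cast 1-votes
    return 1.0 - 2.0 * alpha * beta / (n * (n - 1))

# Always abstaining is trivially self-consistent, which is why the abstention
# rate must be examined alongside SC.
print(self_consistency_with_abstentions([-1] * 10))   # 1.0
```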
**A remark on cost.** It can be considerably more computationally intensive to train an ensemble of models to compute SC than to train a handful of models and perform cross-validation, as is the standard practice in fair classification. However, as our empirical analysis in the next section demonstrates, this cost comes with a huge benefit: It enables us to improve self-consistency and to root out the arbitrariness of producing predictions that are effectively close-to-random, which is especially important in high-stakes fairness settings (Cooper et al., 2021). Moreover, for common fair classification datasets, the increased cost on modern hardware is relatively small; (super-) ensembling with confidence takes under an hour to execute (Appendix E.5).
## 5 Experiments and Discussion
We provide extensive empirical results in two areas: We validate the effectiveness of both ensembling and super-ensembling variants of Algorithm 1, and we explain how our results reveal key insights about the reliability of popular fair classification benchmarks. For all experiments that follow, we illustrate Algorithm 1 with \(\kappa=.75\), but note that, in practice, appropriate choice of \(\kappa\) is task-dependent.
**Code.** We build and release an extensible software suite ([https://github.com/pasta41/variance](https://github.com/pasta41/variance))
of different Aggregate methods, which we apply to common fair classification datasets and models: COMPAS, Old Adult, German and Taiwan Credit, and 3 large-scale New Adult - CA tasks on logistic regression (LR), decision trees (DTs), random forests (RFs), multi-layer perceptrons (MLPs) and support vector machines (SVMs). We consulted several well-cited prior fair classification studies to inform our choice of models and hyperparameter optimization search spaces (Appendix E). We also examine the NY and TX 2017 subsets of the Home Mortgage Disclosure Act (HMDA) 2007-2017 dataset (Federal Financial Institutions Examination Council, 2017), which have 244,107 and 576,978 examples, respectively. These datasets are less commonly used in current fairness research (Fabris et al., 2022), possibly because the over-100-million data examples are only available in bulk files. Following the example of Ding et al. (2021), one of our contributions is to pre-process these datasets -- all locations and years -- and release them with a standalone software package that makes them easy to explore. Our hope is that effortless access to HMDA will further reduce the community's dependency on small datasets.2
Footnote 2: See [https://github.com/pasta41/hmda](https://github.com/pasta41/hmda) for this PyPI package, which aligns with the terms of service for HMDA.
**Validating Algorithm 1.** Algorithm 1 is naturally parallelizable, so we designed our code for a batch processing cluster environment. This enabled us to train and compare several million different models over the course of our study. Unfortunately, we necessarily defer most results to Appendix E.5, and present a representative portion of results from different datasets and models, beginning with Old Adult in Figure 3 and Table 2. We visualize Algorithm 1 by plotting the self-consistency of the underlying models that we bag with confidence \(\kappa=0.75\) (delineated as a dashed dark blue line). We simultaneously plot the results of ensembling individual random forests with confidence (the faded set of curves), and super-ensembling bags of random forests with confidence (the darker set of curves). We show our results in terms of the \(\hat{\textsc{SC}}\) of the underlying bagged models because
\begin{table}
\begin{tabular}{l c c c c} \hline \hline \multicolumn{5}{c}{**Ensembling random forests with confidence**} \\ \hline & **PR** & **FPR** & **FNR** & **AR** \\ \hline
**Total** & \(.155\pm.004\) & \(.023\pm.001\) & \(.06\pm.002\) & \(.228\pm.004\) \\ \hline \(g=\text{F}\) & \(.048\pm.002\) & \(.007\pm.001\) & \(.035\pm.001\) & \(.112\pm.003\) \\ \hline \(g=\text{M}\) & \(.219\pm.005\) & \(.032\pm.002\) & \(.076\pm.003\) & \(.284\pm.007\) \\ \hline \hline \multicolumn{5}{c}{**Super-ensembling random forests with confidence**} \\ \hline & **PR** & **FPR** & **FNR** & **AR** \\ \hline
**Total** & \(.204\pm.004\) & \(.05\pm.002\) & \(.088\pm.001\) & \(.043\pm.001\) \\ \hline \(g=\text{F}\) & \(.077\pm.003\) & \(.017\pm.002\) & \(.049\pm.002\) & \(.02\pm.003\) \\ \hline \(g=\text{M}\) & \(.267\pm.005\) & \(.067\pm.003\) & \(.107\pm.003\) & \(.054\pm.002\) \\ \hline \hline \end{tabular}
\end{table}
Table 2: Mean \(\pm\) STD across 10 train/test splits.
doing so conveys how Algorithm 1 makes decisions to predict or abstain.3 For both types of ensembling, Algorithm 1 predicts for all examples captured by the area to the right of the \(\kappa\) reference line, and abstains for all examples on the left.
Footnote 3: Additionally, the \(\hat{\textsc{SC}}\) distribution of Algorithm 1 — computed by doing a _third_ round of ensembling — has nearly all of its mass at \(\hat{\textsc{SC}}=1\), which makes it difficult to visualize (Appendix E.5).
By comparing the shaded regions between each set of curves, it is clear that super-ensembling Old Adult improves overall self-consistency and brings subgroup-conditional self-consistency closer together. We can assess this more formally by measuring the distance between each pair of curves -- _before_ and _after_ the internal round of bagging at Algorithm 1, line 5 -- and computing their difference. To do so, we use the Wasserstein-1 distance (Appendix E.5.1), which is the natural choice because it has a simple closed form for CDFs. For the two subgroups, we can call their respective \(\hat{\textsc{SC}}\) CDF curves \(A\) and \(B\) (with associated probability measures \(a,b\)), and compute
\[\mathcal{W}_{1}(a,b)=\int_{\mathbb{R}}|A(\kappa)-B(\kappa)|\ d\kappa, \tag{3}\]
which measures the summed differences at all values \(\kappa\). Subtracting (3) for the _after_ curves from (3) for the _before_ curves yields a positive difference of \(.101\).4
Footnote 4: We can also measure the \(\mathcal{W}_{1}\) distance pairwise _within_ each subgroup, to see how far the \(\hat{\textsc{SC}}\) curve moves down per subgroup, indicating improved overall \(\hat{\textsc{SC}}\) within a subgroup (Appendix).
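Eq. (3) can be evaluated directly from the per-example \(\hat{\textsc{SC}}\) values of the two subgroups; the grid-based version below makes the CDF form explicit, and `scipy.stats.wasserstein_distance` computes the same quantity for one-dimensional samples. The before/after bookkeeping mirrors the comparison described in the text; names are ours.

```python
import numpy as np
from scipy.stats import wasserstein_distance

def sc_gap(sc_a, sc_b, grid=np.linspace(0.5, 1.0, 1001)):
    """Eq. (3): integrated absolute gap between the empirical SC CDFs of two subgroups."""
    A = np.searchsorted(np.sort(sc_a), grid, side="right") / len(sc_a)  # CDF of group a
    B = np.searchsorted(np.sort(sc_b), grid, side="right") / len(sc_b)  # CDF of group b
    return np.trapz(np.abs(A - B), grid)

def sc_gap_reduction(before_a, before_b, after_a, after_b):
    """Positive values mean the subgroup SC curves moved closer together."""
    return sc_gap(before_a, before_b) - sc_gap(after_a, after_b)

# Sanity check against scipy's sample-based Wasserstein-1 distance:
rng = np.random.default_rng(0)
a, b = rng.uniform(0.6, 1.0, 500), rng.uniform(0.7, 1.0, 500)
print(sc_gap(a, b), wasserstein_distance(a, b))   # the two values should roughly agree
```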
Beyond SC, we also show \(\hat{\textsc{PR}}\), \(\hat{\textsc{FPR}}\), \(\hat{\textsc{FNR}}\), and the abstention rate AR in Table 2. We rely on Table 1 (right) in Section 3 to serve as the baseline expected error of individual random forests, and highlight the performance of ensembling random forests (top) and super-ensembling bags of random forests (bottom) with \(\kappa=.75\). In addition to improving \(\hat{\textsc{SC}}\), both ensembling approaches improve error rates for both subgroups, and also decrease the disparity in the rates between subgroups: For Old Adult, our results are both fairer and more accurate, according to a common fairness definition (Hardt et al., 2016). For example, in Table 1, \(g=\textsc{F}\) has \(\hat{\textsc{FPR}}=.037\) and \(g=\textsc{M}\) has \(\hat{\textsc{FPR}}=.097\), for a disparity of \(.06\); both instantiations of Algorithm 1 improve upon this disparity as a byproduct of improving self-consistency.
It is important to note that, for ensembling with confidence, we report _corrected_ error rates and PR that account for abstentions; we compute these rates only in terms of cast \(0\) and \(1\) votes, and separately report the abstention rate AR in relation to the total number of possible votes. Table 2 reveals some key points concerning AR. AR is unequal across subgroups, with Algorithm 1 abstaining more frequently for \(g=\textsc{M}\). This makes sense in relation to our conceptual
\begin{table}
\begin{tabular}{l c c c c} \hline \hline \multicolumn{5}{c}{**Baseline: Random forest classifiers**} \\ \hline & **PR** & **FPR** & **FNR** & **AR** \\ \hline
**Total** & \(.778\pm.001\) & \(.094\pm.001\) & \(.087\pm.001\) & — \\ \hline \(g=\textsc{NW}\) & \(.715\pm.003\) & \(.109\pm.002\) & \(.098\pm.001\) & — \\ \hline \(g=\textsc{W}\) & \(.794\pm.001\) & \(.091\pm.001\) & \(.084\pm.001\) & — \\ \hline \multicolumn{5}{c}{**Super-ensembling random forests with confidence**} \\ \hline & **PR** & **FPR** & **FNR** & **AR** \\ \hline
**Total** & \(.803\pm.001\) & \(.068\pm.001\) & \(.06\pm.001\) & \(.082\pm.001\) \\ \hline \(g=\textsc{NW}\) & \(.724\pm.003\) & \(.072\pm.001\) & \(.077\pm.001\) & \(.096\pm.003\) \\ \hline \(g=\textsc{W}\) & \(.822\pm.002\) & \(.067\pm.001\) & \(.056\pm.002\) & \(.078\pm.001\) \\ \hline \hline \end{tabular}
\end{table}
Table 4: Mean \(\pm\) STD across 10 train/test splits.
Figure 4: Alg. 1, DTs, HMDA-TX, \(g=\{\)Hisp./Lat., Not Hisp./Lat.\(\}\)
\begin{table}
\begin{tabular}{l c c c c} \hline \hline \multicolumn{5}{c}{**Baseline: Decision tree classifiers**} \\ \hline & **PR** & **FPR** & **FNR** & **AR** \\ \hline
**Total** & \(.776\pm.001\) & \(.083\pm.0\) & \(.082\pm.001\) & — \\ \hline \(g=\textsc{HL}\) & \(.72\pm.002\) & \(.101\pm.001\) & \(.091\pm.001\) & — \\ \hline \(g=\textsc{NHL}\) & \(.794\pm.0\) & \(.077\pm.0\) & \(.079\pm.0\) & — \\ \hline \hline \end{tabular}
\end{table}
Table 3: Mean \(\pm\) STD across 5 train/test splits.
contribution: The learning process is less confident about how to predict examples with membership in \(g=\) M and, rather than predicting arbitrarily, Algorithm 1 opts not to predict at all. It is also useful to abstain from the perspective of error and the impact on fairness metrics. We can see this by analyzing the error for the classifications traditional bagging _would have made_ on the examples for which Algorithm 1 abstains. For Old Adult, the average total Err of examples with \(\kappa<.75\) is close to 40%, as compared to 17% for random forests (Table 1), and 8% for ensembling and 14% for super-ensembling with confidence, respectively -- with a significantly larger proportion of that 40% error being ascribed to members of \(g=\) M (Appendix E.5). By abstaining, Algorithm 1 also serves to identify examples that could be investigated in more detail for other interventions.
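For concreteness, the corrected rates reported in the tables can be computed from cast votes only, with abstentions tracked separately as AR; the sketch below uses our own naming, with FPR and FNR expressed as fractions of cast predictions (one way to read the tables), so the exact normalization is an assumption on our part.

```python
import numpy as np

ABSTAIN = -1   # illustrative sentinel, as in the earlier sketches

def corrected_rates(y_hat, y_obs):
    """FPR/FNR over cast 0/1 votes only, plus the abstention rate AR measured
    against the total number of possible predictions."""
    y_hat, y_obs = np.asarray(y_hat), np.asarray(y_obs)
    cast = y_hat != ABSTAIN
    n_cast = cast.sum()
    fpr = np.sum(cast & (y_hat == 1) & (y_obs == 0)) / n_cast
    fnr = np.sum(cast & (y_hat == 0) & (y_obs == 1)) / n_cast
    ar = 1.0 - n_cast / len(y_hat)
    return fpr, fnr, ar

# Subgroup-conditional rates follow by masking, e.g.
# corrected_rates(y_hat[g_mask], y_obs[g_mask]) for a hypothetical subgroup mask.
```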
To represent large-scale tasks, we also highlight analogous results for HMDA-2017-TX using decision trees, \(g=\) ethnicity (Figure 4, Table 3), HMDA-2017-NY using random forests, \(g=\) race (Figure 5, Table 4), and New Adult - CA - Income using decision trees, \(g=\) race (Figure 6, Table 5). For all three sets of results, the tables show the baseline for individual models (top) and super-ensembling with confidence (bottom), and we defer the associated results and discussion for ensembling with confidence to the Appendix. In comparison to more traditional smaller tasks, our results for larger datasets generally demonstrate lower variance, and thus better SC (Appendix E). For the HMDA results shown here, as with Old Adult (Figure 3), Algorithm 1 significantly improves self-consistency, and has a smaller (but appreciable) improvement in subgroup differences in SC, with the \(\mathcal{W}_{1}\) distance difference yielding .022 for TX and .028 for NY. This change is less pronounced for New Adult, for which the original SC curves are much closer together to start, resulting in a \(\mathcal{W}_{1}\) distance difference of .008. However, similar to HMDA, the overall SC distribution demonstrates significantly increased self-consistency, as visualized by the increase in the proportion of the test set that exhibits higher SC.
These large-scale results for HMDA and New Adult, together with those for Old Adult, indicate another important aspect of the abstention rate. For the class of methods that meet the semantics of Algorithm 1, there is a trade-off between AR and error. Super-ensembling involves an inner loop of traditional bagging for variance reduction that improves SC before applying a confidence threshold, as is clear when comparing the area between the different sets of curves and the change in \(\mathcal{W}_{1}\) distance. This increase in SC naturally leads to a decrease in the abstention rate when we perform the outer loop to bag with confidence. For Old Adult, ensembling with \(\kappa=.75\) yields total AR of 22.8%, while super-ensembling yields 4.3% (Table 2); HMDA-TX yields 35% and 18% AR, respectively (Table 3, Appendix E.5); HMDA-NY yields 30.6% and 8.2% AR, respectively (Table 4, Appendix E.5); and, New Adult yields 50.3% and 13.7% AR, respectively (Table 5, Appendix E.5). However, this reduction in AR comes at the cost of predicting incorrectly more often; the super-ensembled classifier has a larger number of examples that are more self-consistent and wrong (i.e., \(\hat{y}\neq o\)). This can also have the effect of increasing the disparity between subgroup error rates, if improving SC benefits subgroups asymmetrically. This occurs with FNR for HMDA-TX and HMDA-NY (Tables 3 and 4) and FPR for New Adult (Table 5). For example, for HMDA-TX, the relative FNR disparity increases from 1.2% to 1.5%. For New Adult, the relative FPR disparity increases from .2% to .7%, in part due to the larger relative decrease in PR for \(g=\) NW. Even so, in all three sets of experiments, both subgroups exhibit more substantial absolute FNR and FPR improvements after super-ensembling.
The small COMPAS and German Credit benchmarks tell a different story: they exhibit a considerable lack of self-consistency even for models like logistic regression (Figure 7), which generally exhibit much higher SC in comparison to other models, especially those trained on much larger datasets (Appendix E.4.6). This effect is particularly pronounced for German Credit, which, for supervised fair classification, has only 670 examples. The confidence intervals for SC measurements are more scattered, so we have to average over substantially more train/test splits to produce them reliably (Figure 8, Appendix E.4.2). For both datasets, the lack of self-consistency leads to very high abstention rates -- for some model types, over 50% -- when performing ensembling with confidence on individual models (Appendix E.5).
Furthermore, in estimating the expected average error -- a byproduct of computing SC over 100 models -- we find that average expected subgroup error rates for both of these tasks are quite similar. COMPAS exhibits a couple of percentage points of disparity (Table 6), while South German Credit subgroup rates are statistically equivalent (Table 7). Notably, this is true even for the baseline FPR disparity for COMPAS, which lowers to 1.8% after super-ensembling (Table 6). Put differently, by training and averaging over many models, we produce a better estimate of the expected model than can be achieved by training a small handful of models and performing cross-validation; and, in doing so, our estimates indicate both close-to-parity in common fairness metrics (Hardt et al., 2016) and, for COMPAS, lower overall expected error than the typically-reported 35% Err (Lin et al., 2020). We examine this result in detail in Appendix E.6. The key takeaway has to do with variance. The underlying, individual models that compose the expected error and constitute our ensembles exhibit radically different subgroup-specific error rates, with a skew toward slightly higher FPR for \(g=\text{NW}\). As a result, while any individual model drawn from the distribution of possible models is more likely to exhibit FPR unfairness for examples in \(g=\text{NW}\), aggregating many models can have the effect of reducing the overall magnitude of individual model disparities.
Nevertheless, the conclusion to draw from our results is _not_ that COMPAS and German Credit are close-to-fair. Rather, the takeaway is that using standard fair classification modeling approaches leads to a large amount of variance for these tasks. In turn, it is unlikely that reliable conclusions can be drawn about satisfying fairness metrics from the typical practice of training and cross-validating small sets of models. In this respect, our work supports an observation that has been made by much prior research. While it is possible to produce more reliable estimates of error from bootstrapping and ensembling, it is likely more prudent to test fair classification interventions on larger datasets (Chen et al., 2018; Ding et al., 2021; Cooper and Abrams, 2021).
\begin{table}
\begin{tabular}{l c c c c} \hline \hline \multicolumn{5}{c}{**Baseline: Logistic regression models**} \\ \hline & **PR** & **FPR** & **FNR** & **AR** \\ \hline
**Total** & \(.403\pm.011\) & \(.139\pm.011\) & \(.191\pm.01\) & — \\ \hline \(g=\text{NW}\) & \(.453\pm.012\) & \(.147\pm.013\) & \(.183\pm.011\) & — \\ \hline \(g=\text{W}\) & \(.308\pm.015\) & \(.126\pm.013\) & \(.207\pm.011\) & — \\ \hline \multicolumn{5}{c}{**Super-ensembling logistic regression with confidence**} \\ \hline & **PR** & **FPR** & **FNR** & **AR** \\ \hline
**Total** & \(.389\pm.006\) & \(.123\pm.006\) & \(.19\pm.015\) & \(.041\pm.003\) \\ \hline \(g=\text{NW}\) & \(.442\pm.007\) & \(.129\pm.008\) & \(.18\pm.013\) & \(.043\pm.005\) \\ \hline \(g=\text{W}\) & \(.286\pm.006\) & \(.111\pm.006\) & \(.208\pm.021\) & \(.038\pm.005\) \\ \hline \hline \end{tabular}
\end{table}
Table 6: Mean \(\pm\) STD across 10 train/test splits.
\begin{table}
\begin{tabular}{l c c c c} \hline \hline \multicolumn{5}{c}{**Baseline: Decision tree classifiers**} \\ \hline & **PR** & **FPR** & **FNR** & **AR** \\ \hline
**Total** & \(.711\pm.018\) & \(.147\pm.024\) & \(.17\pm.018\) & — \\ \hline \(g=\text{F}\) & \(.712\pm.042\) & \(.145\pm.059\) & \(.177\pm.047\) & — \\ \hline \(g=\text{M}\) & \(.711\pm.019\) & \(.148\pm.023\) & \(.169\pm.019\) & — \\ \hline \multicolumn{5}{c}{**Super-ensembling decision trees with confidence**} \\ \hline & **PR** & **FPR** & **FNR** & **AR** \\ \hline
**Total** & \(.847\pm.024\) & \(.148\pm.03\) & \(.054\pm.017\) & \(.175\pm.03\) \\ \hline \(g=\text{F}\) & \(.879\pm.06\) & \(.152\pm.07\) & \(.052\pm.051\) & \(.199\pm.059\) \\ \hline \(g=\text{M}\) & \(.842\pm.023\) & \(.148\pm.031\) & \(.054\pm.015\) & \(.171\pm.033\) \\ \hline \hline \end{tabular}
\end{table}
Table 7: Mean \(\pm\) STD across 50 train/test splits.
Figure 8: Alg. 1, DTs, German Credit, \(g=\{\text{Female},\text{Male}\}\).
Figure 7: Alg. 1, LR, COMPAS, \(g=\{\text{Non-white},\text{White}\}\).
Future scholarship should transition to using datasets like New Adult, made easily usable by Ding et al. (2021), and HMDA, which we package for greater accessibility as a part of the present work. Based on these results, it also seems worthwhile to revisit prior empirical results in fair classification that depend on COMPAS and German Credit. The differences in the underlying models, which constitute the large lack of SC shown here, suggest that post-processing individual models is unlikely to enforce parity that generalizes well across different possible models trained by the same learning process, which differ only in the subset of training data on which they were trained.
## 6 Related Work on Variance and Fairness
While we have discussed related work throughout this paper, we now provide additional discussion on the relatively small handful of fair classification papers that contend specifically with variance. One influential prior work, Chen et al. (2018), adopted the definition of variance from Domingos (2000a), which has since been taken up by others (Black & Fredrikson, 2021; Black et al., 2022a). That work elects to rely on this variance definition because, for binary classification and 0-1 loss, Domingos (2000a) permits a formal decomposition of expected error into its constituent noise, bias, and variance. However, this definition presents some limitations. Notably, it depends on a notion of a "main prediction," which we show can be quite brittle in high-variance tasks (like those we study here), and it also does not cleanly extend to cost-sensitive loss in practice (Appendix B.3). These drawbacks encouraged us to use what we believe is a more natural definition of variance, from which we are able to neatly derive the notion of self-consistency that is the focus of our study (Section 3).
While our definition forgoes the decomposition that Domingos (2000a;b) affords, our work does not require such a decomposition for the results that we develop. Notably, the prior work on fair classification that leverages Domingos (2000a) does not directly employ the decomposition, either. Further, Chen et al. (2018) does not measure variance or self-consistency empirically, as we do in our work. Their experiments on Old Adult alter training dataset size as a proxy for understanding variance-induced error, rather than measuring disagreement between predictions on the same test example and its implications for fairness and arbitrariness. In contrast, Black et al. (2022a) studies variance directly, and independently also develops an ensembling-based strategy to contend with it. However, their results are directly tied to estimating the "main prediction" (Domingos, 2000a), and thus are fundamentally different from our work, which is free from the issues presented by this definition (Appendix B.3).
More distant related work studies variance in deep learning on fair classification (Qian et al., 2021; Forde et al., 2021). Studying variance in deep learning settings presents distinct challenges that we do not address here. In particular, non-determinism introduced from using GPUs makes it difficult to study a fixed learning process, for which the only source of non-determinism originates from the stochasticity of resampling the training dataset (Cooper et al., 2022a).
## 7 Conclusion and Future Work
In this paper, we present an empirically motivated, theoretically grounded study on the relationship between variance, self-consistency, and arbitrariness in fair classification contexts. Prior work on fair classification has focused overwhelmingly on relative subgroup error rates, treating such disparities as unfair because they amount to a form of discrimination. We instead focus on a lack of self-consistency in predictions, explaining why this, too, can be an important type of unfairness, due to the arbitrariness that it introduces into ML-based decision-making. We further show that these phenomena can interact: Certain subgroups may be subject to more arbitrary decisions than others.
To contend with arbitrariness, we develop an intuitive ensembling-based approach that helps to improve self-consistency. In contrast to traditional bagging, however, our approach requires some minimum level of agreement in model predictions, rather than relying on a simple-majority vote, since anything slightly better than chance is rarely enough to address concerns with arbitrariness. While we find empirically that this approach reduces arbitrariness, and can in some cases even narrow the gap in error rates across subgroups, our method may abstain on a nontrivial fraction of predictions. These results imply that there may be limits to how much we can improve self-consistency -- and that there may be times when it is simply inappropriate to make predictions, given the remaining arbitrariness.
Further, our results indicate that variance is a large source of unfairness in small-scale, popular fairness benchmarks -- notably, COMPAS and German Credit. Our main takeaway for COMPAS and German Credit is that they should be discontinued as benchmark tasks in the field. It seems likely that, when training typical fair classification hypothesis classes, the variance in these tasks is not sufficiently surfaced by the cross-validation of just a few models, which is the traditional fair classification approach for evaluating models empirically. Therefore, our results also call into question the reliability of past estimates of the effectiveness of fairness interventions, which depend on these tasks. Looking backward, it is worth revisiting prior empirical results that rely on COMPAS and German Credit to showcase fair classification interventions, in order to see if they are reproducible in relation to accounting for self-consistency (Bouthillier et al., 2019). Going forward, to test their algorithmic interventions, researchers should instead
use larger-scale datasets, such as New Adult, made easily usable by Ding et al. (2021), and HMDA, which we package for greater accessibility as part of the present work.
Beyond these observations for COMPAS and German Credit, our work suggests several promising directions for future research. For several tasks, we were able to achieve close-to-parity in common fairness metrics by accounting for self-consistency, without applying standard pre-processing, in-processing, or post-processing fairness interventions. In future work, it would be fruitful to investigate how such standard methods could further improve fairness disparities when used in combination with the ensembling algorithm we present. In doing so, it will also be worth investigating how introducing such techniques may in turn impact self-consistency and arbitrariness. There are several concrete questions for investigating possible interactions between self-consistency and more traditional, accuracy-focused fair classification methods: What other methods can we develop to improve self-consistency? What strategies should we rely on when it is not possible to improve self-consistency? What can we learn by studying examples that models predict incorrectly with a high degree of self-consistency?
Further along these lines, given that improving self-consistency can improve both accuracy and fairness, it would also be interesting to revisit classic results and critiques of the fairness-accuracy trade-off problem formulation specifically in relation to the role of variance-induced error (Cooper and Abrams, 2021; Rodolfa et al., 2021; Corbett-Davies et al., 2017; Dutta et al., 2020). Such investigations would complement the interest in "model multiplicity," (Breiman, 2001), which has recently been imported into the fairness literature (Black et al., 2022b; Watson-Daniels et al., 2022). For example, it could be useful to see how accounting for self-consistency impacts the candidate set of Rashomon models to consider. Lastly, and more generally, it would be compelling to see how such an analysis of confidence in predictions, _vis-a-vis_ self-consistency, could bring about greater transparency with respect to the fairness and accuracy of ML decisions (Bhatt et al., 2021; Cooper et al., 2022b).
## Acknowledgements
Much of this work was done as part of an internship at Microsoft Research, FATE Lab, New York. A. Feder Cooper is supported by the Artificial Intelligence Policy and Practice initiative at Cornell University, the John D. and Catherine T. MacArthur Foundation, and Professor Christopher De Sa's NSF CAREER grant. The authors would like to thank Fernando Delgado, Abigail Z. Jacobs, Kweku Kwegyir-Aggrey, Katherine Lee, and Emanuel Moss for feedback on earlier iterations of this work.
|
2305.03491 | One loop to rule them all: Perturbativity in the presence of ultra
slow-roll dynamics | We discuss the issue of perturbativity in single-field inflationary models
with a phase of ultra slow-roll (USR) tailor suited to generate an order-one
abundance of primordial black holes (PBHs). More in detail, we impose the
condition that loop corrections made up of short-wavelength modes enhanced by
the USR dynamics do not alter the tree-level power spectrum of curvature
perturbations. In our analysis, the USR phase is preceded and followed by two
stages of ordinary slow-roll (SR), and we model the resulting SR/USR/SR
dynamics using both instantaneous and smooth transitions. Focusing on scales
relevant for CMB observations, we find that it is not possible, with these
arguments, to rule out the scenario of PBH formation via USR, not even in the
limit of instantaneous transition. However, we also find that loop corrections
of short modes on the power spectrum of long modes, even though not large
enough to violate perturbativity requirements, remain appreciable and, most
importantly, are not tamed in realistic realisations of smooth SR/USR/SR
transitions. This makes perturbativity a powerful theoretical tool to constrain
USR dynamics. We extend the analysis at any scale beyond those relevant for CMB
observations. We find that loop corrections of short modes remain within the
few percent if compared to the tree-level power spectrum. However, we also find
one notable exception of phenomenological relevance: we show that the so-called
dip in the power spectrum of curvature perturbation is an artifact of the
tree-level computation. | Gabriele Franciolini, Antonio Junior Iovino, Marco Taoso, Alfredo Urbano | 2023-05-05T13:05:31Z | http://arxiv.org/abs/2305.03491v2 | # One loop to rule them all: Perturbativity in the presence of ultra slow-roll dynamics
###### Abstract
We discuss the issue of perturbativity in single-field inflationary models with a phase of ultra slow-roll (USR) tailor suited to generate an order-one abundance of primordial black holes (PBHs). More in detail, we impose the condition that loop corrections made up of short-wavelength modes enhanced by the USR dynamics do not alter the tree-level power spectrum of curvature perturbations. In our analysis, the USR phase is preceded and followed by two stages of ordinary slow-roll (SR), and we model the resulting SR/USR/SR dynamics using both instantaneous and smooth transitions. Focusing on scales relevant for CMB observations, we find that it is not possible, with these arguments, to rule out the scenario of PBH formation via USR, not even in the limit of instantaneous transition. However, we also find that loop corrections of short modes on the power spectrum of long modes, even though not large enough to violate perturbativity requirements, remain appreciable and, most importantly, are not tamed in realistic realisations of smooth SR/USR/SR transitions. This makes perturbativity a powerful theoretical tool to constrain USR dynamics. We extend the analysis at any scale beyond those relevant for CMB observations. We find that loop corrections of short modes remain within the few percent if compared to the tree-level power spectrum. However, we also find one notable exception of phenomenological relevance: we show that the so-called dip in the power spectrum of curvature perturbation is an artifact of the tree-level computation.
###### Contents
* I Introduction
* II Set-up of the computation using the "in-in" formalism
* II.1 Conventions
* II.2 The (minimal) dynamics of ultra slow-roll
* II.3 The cubic action
* II.4 Beyond the cubic action
* III One-loop computation
* III.1 Loop correction with a large hierarchy of scales
* III.2 Loop correction at any scales
* IV Time integration beyond the instantaneous transition and at any scales
* IV.1 Loop evaluation at the CMB scales
* IV.1.1 The instantaneous transition
* IV.1.2 Dynamics during USR
* IV.1.3 Dynamics at the SR/USR/SR transition
* IV.2 Loop evaluation at any scales
* V Discussion and outlook
* A Dynamics of curvature modes, some essential results
## I Introduction
In this work, we consider the so-called standard scenario of primordial black hole (PBH) formation [1; 2; 3]. In this scenario, the formation of PBHs during the early Universe is an exceptional phenomenon in which extremely dense regions of radiation energy are tightly packed to the point of gravitational collapse [4; 5; 6]. General relativity and the inflationary stage preceding the radiation epoch offer a mechanism to generate such over-densities: small-scale curvature perturbations, stretched way beyond the horizon by the inflationary expansion, are transferred to the radiation fluid after the end of inflation at around the time of their horizon re-entry. In order to truly trigger a gravitational collapse of the radiation fluid, the amplitude of such small-scale curvature perturbations needs to be greatly enhanced by some dynamics during the inflationary stage. This statement can be made more quantitative by introducing the dimensionless power spectrum \(\mathcal{P}(k)\) which gives the contribution to the variance of the curvature perturbation field per bin of \(\log k\), with \(k\) the comoving wavenumber in Fourier space. At scales relevant for Cosmic Microwave Background (CMB) observations (that is, \(0.005\lesssim k\,[{\rm Mpc}^{-1}]\lesssim 0.2\)) we typically have \(\mathcal{P}(k)=O(10^{-9})\); at smaller scales (\(1.5\times 10^{13}\lesssim k\,[{\rm Mpc}^{-1}]\lesssim 1.5\times 10^{14}\)) asteroid-mass PBHs may comprise the totality of dark matter (DM) observed in the Universe but their formation requires \(\mathcal{P}(k)=O(10^{-2})\) (for recent reviews see [7; 8; 9; 10; 11]). Theory-side, therefore, what makes the formation of PBHs an exceptional phenomenon is the fact that it requires a seven-order-of-magnitude enhancement of the small-scale power spectrum with respect to the value observed at large scales.
In the context of single-field models of inflation, the above-mentioned enhancement can be dynamically realized by introducing a phase of ultra slow-roll (USR) [12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30] during which the inflaton field, after the first conventional phase of slow-roll (SR) that is needed to fit large-scale cosmological observations, almost stops the descent along its potential (typically because of the presence of a quasi-stationary inflection point) before starting rolling down again in a final stage of SR dynamics that eventually ends inflation. In this work, we shall refer to this three-stage dynamics as SR/USR/SR.
A very legitimate question is whether the USR dynamics is consistent with perturbativity. Technically speaking, the dimensionless power spectrum of curvature perturbation \(\mathcal{P}(k)\) is typically computed within the free theory. However, curvature perturbations, being gravitational in nature, feature an intricate architecture of non-linear interactions. The effect of non-linear interactions is twofold. On the one hand, they generate, in addition to the variance, non-zero higher-order cumulants that may leave a peculiar non-Gaussian pattern to the statistics of the curvature field. On the other hand, the variance itself gets corrected with respect to the value computed within the free theory. In this paper, we focus on the second effect and, on very general grounds, we define the perturbative criterion
\[\mathcal{P}(k)\equiv\mathcal{P}_{\rm tree}(k)\left[1+\Delta\mathcal{P}_{\text{ 1-loop}}(k)\right]\quad\Longrightarrow\quad\Delta\mathcal{P}_{\text{1-loop}}( k)\stackrel{{!}}{{<}}1\,, \tag{1}\]
meaning that the power spectrum computed within the free theory (the "tree-level" power spectrum in the above equation) must be larger than the corrections \(\Delta\mathcal{P}\) introduced by the presence of interactions. Such corrections can be organized in a formal series expansion, and we will focus in particular on the first-order term, dubbed "1-loop" in the above equation.
The gut feeling is that, unless one is led to consider \(\mathcal{P}_{\rm tree}(k)=O(1)\), perturbativity should be under control. However, ref. [31] made the bold claim that, in the presence of USR, the perturbativity condition in eq. (1) could be violated at scales \(k\) relevant for CMB observations; even more strikingly, ref. [31] argues that USR dynamics tailored to generate a sizable abundance of asteroid- or solar-mass PBHs are ruled out. What makes the claim of ref. [31] so hard to accept is that it basically says that loops of short modes alter the correlations of long CMB modes. This is counter-intuitive since it clashes with the intuition that physics at different scales should be decoupled.
Given the above, it is not surprising that ref. [31] sparked an intense debate, almost exclusively polarized between defending and disproving the claim that PBH formation from single-field inflation is ruled out [32; 33; 34; 35; 36; 37; 38; 39]. In this paper, we bury the hatchet and critically examine the consequences of eq. (1) in the presence of single-field inflation with USR dynamics.
Our analysis is structured as follows. In section II, we set the ground for the one-loop computation; in particular, we define all our conventions in section II.1, the SR/USR/SR background dynamics in section II.2 and the interaction Hamiltonian in section II.3. In section III, we compute the one-loop correction to the curvature power spectrum within the setup described in section II.1; in particular, in section III.1, we focus on the case in which there is a large hierarchy between the momenta running in the loop and the external ones while in section III.2 we consider the case in which the external momenta are generic. In section IV, we discuss the implications of the perturbative bound in eq. (1). In particular, in section IV.1, we consider the case in which the external momenta are long CMB modes. In this section, we critically compare our result with those of ref. [31], and discuss a number of crucial generalization. In section IV.2, we extend the computation to the case in which the external momenta are short modes. Finally, we conclude in section V.
## II Set-up of the computation using the "in-in" formalism
### Conventions
First, we set our conventions. We set the reduced Planck mass to one; \(t\) is the cosmic time (with \(\dot{}\equiv d/dt\)) and \(\tau\) the conformal time (with \({}^{\prime}\equiv d/d\tau\)) with \(dt/d\tau=a\) being \(a\) the scale factor of the flat Friedmann-Lemaitre-Robertson-Walker (FLRW) metric \(ds^{2}=dt^{2}-a^{2}(t)d\vec{x}^{2}\), with \(\vec{x}\) comoving coordinates. The Hubble rate is \(H\equiv\dot{a}/a\). The \(e\)-fold time \(N\) is defined by \(dN=Hdt\) from which we also have \(dN/d\tau=aH\). The Hubble-flow parameters \(\epsilon_{i}\) (for \(i\geqslant 1\)) are defined by the recursive relation
\[\epsilon_{i}\equiv\frac{\dot{\epsilon}_{i-1}}{H\epsilon_{i-1}}\,,\qquad\text{ with:}\quad\epsilon_{0}\equiv\frac{1}{H}\,. \tag{2}\]
As customary, we simply indicate as \(\epsilon\) the first Hubble parameter, \(\epsilon\equiv\epsilon_{1}=-\dot{H}/H^{2}\). Instead of the second Hubble parameter \(\epsilon_{2}\), sometimes it is useful to introduce the Hubble parameter \(\eta\) defined by1
Footnote 1: We remark that in ref. [31] the symbol \(\eta\) refers to the second Hubble parameter \(\epsilon_{2}\).
\[\eta\equiv-\frac{\ddot{H}}{2H\dot{H}}=\epsilon-\frac{1}{2}\frac{d\log\epsilon }{dN}\,,\qquad\text{with:}\quad\epsilon_{2}=2\epsilon-2\eta\,. \tag{3}\]
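Given a numerical background \(\epsilon(N)\) on a grid of \(e\)-folds, the relations in eqs. (2, 3) can be evaluated by finite differences; the helper below is a minimal sketch with illustrative names, not part of any released code.

```python
import numpy as np

def hubble_flow(N, eps1):
    """eps2 = d ln(eps1)/dN (eq. 2) and eta = eps1 - eps2/2 (eq. 3),
    computed by finite differences on a grid of e-folds N."""
    eps2 = np.gradient(np.log(eps1), N)
    eta = eps1 - 0.5 * eps2
    return eps2, eta
```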
We consider the theory described by the action
\[\mathcal{S}=\int d^{4}x\sqrt{-g}\bigg{[}\frac{1}{2}R(g)+\frac{1}{2}g^{\mu\nu} (\partial_{\mu}\phi)(\partial_{\nu}\phi)-V(\phi)\bigg{]}\,. \tag{4}\]
\(R(g)\) is the scalar curvature associated with the space-time whose geometry is described by the metric \(g\) with line element \(ds^{2}=g_{\mu\nu}dx^{\mu}dx^{\nu}\). The classical background evolves in the flat FLRW universe and the background value of the scalar field is a function of time, \(\phi(t)\). We tacitly assume that the scalar potential features an approximate stationary inflection point so as to trigger the transition SR/USR/SR.
We focus on scalar perturbations. We consider the perturbed metric in the following generic form
\[ds^{2}=N^{2}dt^{2}-h_{ij}(N^{i}dt+dx^{i})(N^{j}dt+dx^{j})\,, \tag{5}\]
and choose the gauge in which
\[N=1+\delta N(\vec{x},t)\,,\qquad\quad N^{i}=\delta^{ij}\partial_{j}B(\vec{x}, t)\,,\qquad\quad h_{ij}=a^{2}(t)e^{2\zeta(\vec{x},t)}\delta_{ij}\,,\qquad \quad\delta\phi(\vec{x},t)=0\,. \tag{6}\]
The field \(\zeta(\vec{x},t)\) is the only independent scalar degree of freedom since \(N\) and \(N^{i}\) are Lagrange multipliers subject to the momentum and Hamiltonian constraints. It is important to stress that the variable \(\zeta\) as defined in eq. (6) is constant outside the horizon (more in general, outside the horizon and after the end of possible non-adiabatic phases) and represents the correct non-linear generalization of the Bardeen variable [40].
At the quadratic order in the fluctuations, the action is
\[\mathcal{S}_{2}=\int d^{4}x\,\epsilon\,a^{3}\left[\dot{\zeta}^{2}-\frac{(\partial_{k}\zeta)(\partial^{k}\zeta)}{a^{2}}\right]\,. \tag{7}\]
Comoving curvature perturbations are quantized by introducing the free operator
\[\hat{\zeta}(\vec{x},\tau)=\int\frac{d^{3}\vec{k}}{(2\pi)^{3}}\hat{\zeta}(\vec {k},\tau)e^{i\vec{x}\cdot\vec{k}}\,,\qquad\text{with:}\quad\hat{\zeta}(\vec{k},\tau)=\zeta_{k}(\tau)a_{\vec{k}}+\zeta_{k}^{*}(\tau)a_{-\vec{k}}^{\dagger}\,, \tag{8}\]
and
\[[a_{\vec{k}},a_{\vec{k}^{\prime}}]=[a_{\vec{k}}^{\dagger},a_{\vec{k}^{\prime}} ^{\dagger}]=0\,,\qquad[a_{\vec{k}},a_{\vec{k}^{\prime}}^{\dagger}]=(2\pi)^{3} \delta^{(3)}(\vec{k}-\vec{k}^{\prime})\,,\qquad\quad a_{\vec{k}}|0\rangle=0\,, \tag{9}\]
where the last condition defines the vacuum of the free theory \(|0\rangle\). We define the comoving wavenumber \(k\equiv|\vec{k}|\). The scale factor in the FLRW universe corresponds to a rescaling of the spatial coordinate; consequently, physically sensible results should be invariant under the rescaling [41]
\[a\to\lambda a\,,\qquad\vec{x}\to\vec{x}/\lambda\,,\qquad\vec{k}\to\lambda\vec {k}\,,\qquad k\to|\lambda|k\,,\qquad\text{with}\ \ \lambda\in\mathbb{R}\,. \tag{10}\]
Furthermore, if we consider the conformal time \(\tau\) instead of the cosmic time \(t\), we also have
\[\tau\to\tau/\lambda\,. \tag{11}\]
Notice that, under the above rescaling, we have \(a_{\vec{k}}\to a_{\vec{k}}/|\lambda|^{3/2}\) (from the scaling property of the three-dimensional \(\delta\) function) and, consequently, \(\zeta_{k}\to\zeta_{k}/|\lambda|^{3/2}\) so that \(\hat{\zeta}(\vec{k},\tau)\to\hat{\zeta}(\vec{k},\tau)/|\lambda|^{3}\) and \(\hat{\zeta}(\vec{x},\tau)\) invariant. In the case of free fields, we have
\[\langle 0|\hat{\zeta}(\vec{k}_{1},\tau_{1})\hat{\zeta}(\vec{k}_{2},\tau_{2})|0 \rangle=(2\pi)^{3}\delta(\vec{k}_{1}+\vec{k}_{2})\zeta_{k_{1}}(\tau_{1})\zeta_ {k_{2}}^{*}(\tau_{2})\,. \tag{12}\]
In the presence of a time-derivative, we simply have
\[\langle 0|\hat{\zeta}^{\prime}(\vec{k}_{1},\tau_{1})\hat{\zeta}(\vec{k}_{2}, \tau_{2})|0\rangle=(2\pi)^{3}\delta(\vec{k}_{1}+\vec{k}_{2})\zeta_{k_{1}}^{ \prime}(\tau_{1})\zeta_{k_{2}}^{*}(\tau_{2})\,. \tag{13}\]
Note that the time dependence occurs in the mode function, not in the raising/lowering operator. The mode function \(\zeta_{k}(\tau)\) is related to the linear-order Mukhanov-Sasaki (M-S) equation. More in detail, if we define \(\zeta_{k}(\tau)=u_{k}(\tau)/z(\tau)\) with \(z(\tau)\equiv a(\tau)\sqrt{2\epsilon(\tau)}\), the mode \(u_{k}(\tau)\) verifies the equation
\[\frac{d^{2}u_{k}}{d\tau^{2}}+\left(k^{2}-\frac{1}{z}\frac{d^{2}z}{d\tau^{2}} \right)u_{k}=0\,. \tag{14}\]
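As a minimal numerical sketch of eq. (14), one can integrate the mode equation from Bunch-Davies initial data deep inside the horizon and read off the late-time curvature amplitude \(\zeta_k=u_k/z\). The constant-\(\epsilon\), de Sitter-like background and all parameter values below are illustrative assumptions only; a realistic SR/USR/SR background would supply \(z^{\prime\prime}/z\) numerically.

```python
import numpy as np
from scipy.integrate import solve_ivp

H, eps = 1.0e-5, 1.0e-2               # illustrative toy values (reduced Planck units)

def zpp_over_z(tau):
    # Constant epsilon: z = a*sqrt(2*eps) with a = -1/(H*tau), so z''/z = 2/tau^2.
    return 2.0 / tau**2

def power_spectrum(k, tau_end=-1.0e-2):
    """Solve u'' + (k^2 - z''/z) u = 0 and return k^3 |u/z|^2 / (2 pi^2) at tau_end."""
    tau_ini = -100.0 / k                                  # deep inside the horizon
    u0 = np.exp(-1j * k * tau_ini) / np.sqrt(2.0 * k)     # Bunch-Davies mode
    du0 = -1j * k * u0
    def rhs(tau, y):                                      # real/imag split of (u, u')
        ur, ui, dur, dui = y
        w2 = k**2 - zpp_over_z(tau)
        return [dur, dui, -w2 * ur, -w2 * ui]
    sol = solve_ivp(rhs, (tau_ini, tau_end),
                    [u0.real, u0.imag, du0.real, du0.imag],
                    rtol=1e-8, atol=1e-12)
    u2 = sol.y[0, -1]**2 + sol.y[1, -1]**2                # |u_k(tau_end)|^2
    z2 = 2.0 * eps / (H * tau_end)**2                     # z^2 at tau_end
    return k**3 * (u2 / z2) / (2.0 * np.pi**2)

print(power_spectrum(0.05))   # ~ H^2/(8 pi^2 eps) ~ 1.3e-10 for this toy background
```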
We are interested in the computation of the quantity
\[\lim_{\tau\to 0^{-}}\langle\hat{\zeta}(\vec{x}_{1},\tau)\hat{\zeta}(\vec{x}_{2}, \tau)\rangle=\int\frac{d^{3}\vec{k}}{(2\pi)^{3}}P(k)e^{i\vec{k}\cdot(\vec{x}_{ 1}-\vec{x}_{2})}\,, \tag{15}\]
at some late time \(\tau\to 0^{-}\) at which curvature perturbations become constant at super-horizon scales. Equivalently, we write
\[\lim_{\tau\to 0^{-}}\langle\hat{\zeta}(\vec{x},\tau)\hat{\zeta}(\vec{x},\tau) \rangle=\int\frac{dk}{k}\underbrace{\left[\frac{k^{3}}{2\pi^{2}}P(k)\right]}_{ \equiv\mathcal{P}(k)}=\int\frac{dk}{k}\mathcal{P}(k)\,, \tag{16}\]
where \(\mathcal{P}(k)\) is the dimensionless power spectrum. At the level of the quadratic action, we find
\[\langle 0|\hat{\zeta}(\vec{x}_{1},\tau)\hat{\zeta}(\vec{x}_{2},\tau)|0\rangle= \int\frac{d^{3}\vec{k}_{1}}{(2\pi)^{3}}\,d^{3}\vec{k}_{2}\delta(\vec{k}_{1}+ \vec{k}_{2})\,\zeta_{k_{1}}(\tau)\zeta_{k_{2}}^{*}(\tau)e^{i(\vec{x}_{1}\cdot k _{1}+\vec{x}_{2}\cdot k_{2})}=\int\frac{d^{3}\vec{k}}{(2\pi)^{3}}|\zeta_{k}( \tau)|^{2}e^{i\vec{k}\cdot(\vec{x}_{1}-\vec{x}_{2})}\,, \tag{17}\]
which of course gives the familiar result
\[\mathcal{P}(k)=\lim_{\tau\to 0^{-}}\frac{k^{3}}{2\pi^{2}}|\zeta_{k}(\tau)|^{2}\,. \tag{18}\]
The goal is to compute corrections that arise from the presence of interactions. This means that, in eq. (15), the vacuum expectation value should refer to the vacuum \(|\Omega\rangle\) of the interacting theory and the dynamics of the operator \(\hat{\zeta}(\vec{x},\tau)\) is described by the full action that also includes interactions.
We compute the left-hand side of eq. (15) by means of the "_in-in_" formalism (see e.g. [42; 43; 44]). Correlators are given by
\[\langle\Omega|\hat{\mathcal{O}}(\tau)|\Omega\rangle\equiv\langle\hat{\mathcal{ O}}(\tau)\rangle=\langle 0|\left\{\bar{T}\exp\left[i\int_{-\infty(1+i \epsilon)}^{\tau}d\tau^{\prime}\hat{H}_{\rm int}(\tau^{\prime})\right]\right\} \hat{\mathcal{O}}_{I}(\tau)\left\{T\exp\left[-i\int_{-\infty(1-i\epsilon)}^{ \tau}d\tau^{\prime}\hat{H}_{\rm int}(\tau^{\prime})\right]\right\}|0\rangle\,, \tag{19}\]
where on the right-hand side all fields appearing in \(\hat{\mathcal{O}}_{I}(\tau)\) and \(\hat{H}_{\rm int}(\tau^{\prime})\) are free fields in the interaction picture. We shall indicate free fields in the interaction picture with the additional subscript \({}_{I}\). It should be noted that the latter are nothing but the operators of the free theory that we quantized in eq. (8).
\(T\) and \(\bar{T}\) are the time and anti-time ordering operator, respectively. As customary, the small imaginary deformation in the integration contour guarantees that \(|\Omega\rangle\to|0\rangle\) as \(\tau\to-\infty\) where \(|\Omega\rangle\) is the vacuum of the interacting theory. On the left-hand side, the operator \(\hat{\mathcal{O}}(\tau)\) is the equal-time product of operators at different space points, precisely like in eq. (15). We expand in the interaction Hamiltonian, so we use the Dyson series
\[T\exp\left[-i\int_{-\infty_{-}}^{\tau}d\tau^{\prime}\hat{H}_{\rm int}(\tau^{ \prime})\right]=1-i\int_{-\infty_{-}}^{\tau}d\tau^{\prime}\hat{H}_{\rm int}( \tau^{\prime})+i^{2}\int_{-\infty_{-}}^{\tau}d\tau^{\prime}\int_{-\infty_{-}}^{ \tau^{\prime}}d\tau^{\prime\prime}\hat{H}_{\rm int}(\tau^{\prime})\hat{H}_{\rm int }(\tau^{\prime\prime})+\ldots\,, \tag{20}\]
where, for simplicity, we introduce the short-hand notation \(\infty_{\pm}\equiv\infty(1\pm i\epsilon)\). Each order in \(\hat{H}_{\text{int}}\) is an interaction vertex, and carries both a time integral and the space integral (enclosed in the definition of \(\hat{H}_{\text{int}}\)) which in Fourier space enforces momentum conservation.
It is crucial to correctly identify the interaction Hamiltonian. Before proceeding in this direction, let us clarify our notation. We expand the action in the form
\[\mathcal{S}=\int d^{3}\vec{x}dt\underbrace{\mathcal{L}[\zeta(\vec{x},t),\dot{\zeta}(\vec{x},t),\partial_{k}\zeta(\vec{x},t)]}_{=\,\mathcal{L}[\zeta(\vec{x},t)]}=\underbrace{\int d^{3}\vec{x}dt\,\mathcal{L}_{2}(\vec{x},t)}_{=\mathcal{S}_{2}}+\underbrace{\int d^{3}\vec{x}dt\,\mathcal{L}_{3}[\zeta(\vec{x},t)]}_{=\mathcal{S}_{3}}+\underbrace{\int d^{3}\vec{x}dt\,\mathcal{L}_{4}[\zeta(\vec{x},t)]}_{=\mathcal{S}_{4}}+\ldots\,, \tag{21}\]
with \(\mathcal{S}_{2}\) defined in eq. (7). We also define (as a function of conformal time)
\[H_{\text{int}}^{(k)}(\tau)\equiv\int d^{3}\vec{x}\,\mathcal{H}_{k}[\zeta(\vec {x},\tau)]\qquad\Longrightarrow\qquad\hat{H}_{\text{int}}^{(k)}(\tau)\equiv \int d^{3}\vec{x}\,\mathcal{H}_{k}[\hat{\zeta}_{I}(\vec{x},\tau)]\,. \tag{22}\]
At the cubic order, we simply have
\[\mathcal{H}_{3}[\zeta(\vec{x},\tau)]=-\mathcal{L}_{3}[\zeta(\vec{x},\tau)]\,. \tag{23}\]
We shall construct the relevant cubic interaction Hamiltonian in section II.3. At the quartic order, simply writing \(\mathcal{H}_{4}=-\mathcal{L}_{4}\) does not capture the correct result if the cubic Lagrangian features interactions that depend on the time derivative of \(\zeta\) since the latter modify the definition of the conjugate momentum.
Using, at the operator level, the notation introduced in eq. (22), we schematically write at the first order in the Dyson series expansion
\[\langle\hat{\zeta}(\vec{x}_{1},\tau)\hat{\zeta}(\vec{x}_{2},\tau)\rangle_{1^{\rm st}}=\] \[\langle 0|\hat{\zeta}_{I}(\vec{x}_{1},\tau)\hat{\zeta}_{I}(\vec{x}_{2},\tau)\bigg{[}-i\int_{-\infty_{-}}^{\tau}d\tau^{\prime}\hat{H}_{\text{int}}^{(4)}(\tau^{\prime})\bigg{]}|0\rangle+\langle 0|\bigg{[}i\int_{-\infty_{+}}^{\tau}d\tau^{\prime}\hat{H}_{\text{int}}^{(4)}(\tau^{\prime})\bigg{]}\hat{\zeta}_{I}(\vec{x}_{1},\tau)\hat{\zeta}_{I}(\vec{x}_{2},\tau)|0\rangle\,. \tag{24}\]
At the first order, therefore, the first non-zero quantum correction involves the quartic Hamiltonian. At the second order in the Dyson series expansion and considering again terms with up to eight fields in the vacuum expectation values, we write schematically
\[\langle\hat{\zeta}(\vec{x}_{1},\tau)\hat{\zeta}(\vec{x}_{2},\tau) \rangle_{2^{\text{nd}}} =\langle 0|\hat{\zeta}_{I}(\vec{x}_{1},\tau)\hat{\zeta}_{I}(\vec{x}_{ 2},\tau)\bigg{[}-\int_{-\infty_{-}}^{\tau}d\tau^{\prime}\int_{-\infty_{-}}^{ \tau^{\prime}}d\tau^{\prime\prime}\hat{H}_{\text{int}}^{(3)}(\tau^{\prime}) \hat{H}_{\text{int}}^{(3)}(\tau^{\prime\prime})\bigg{]}|0\rangle\] \[+\langle 0|\bigg{[}-\int_{-\infty_{+}}^{\tau}d\tau^{\prime}\int_{- \infty_{+}}^{\tau^{\prime}}d\tau^{\prime\prime}\hat{H}_{\text{int}}^{(3)}( \tau^{\prime\prime})\hat{H}_{\text{int}}^{(3)}(\tau^{\prime})\bigg{]}\hat{ \zeta}_{I}(\vec{x}_{1},\tau)\hat{\zeta}_{I}(\vec{x}_{2},\tau)|0\rangle\] \[+\langle 0|\bigg{[}i\int_{-\infty_{+}}^{\tau}d\tau^{\prime}\hat{H}_{ \text{int}}^{(3)}(\tau^{\prime})\bigg{]}\hat{\zeta}_{I}(\vec{x}_{1},\tau)\hat{ \zeta}_{I}(\vec{x}_{2},\tau)\bigg{[}-i\int_{-\infty_{-}}^{\tau}d\tau^{\prime \prime}\hat{H}_{\text{int}}^{(3)}(\tau^{\prime\prime})\bigg{]}|0\rangle\,. \tag{25}\]
The vacuum expectation values of interaction-picture fields can be computed using Wick's theorem. Schematically, eqs. (24, 25) give rise to the following connected diagrams.
[Eq. (26): schematic of the connected one-loop diagrams generated by eqs. (24, 25): a diagram with a single quartic vertex, a 1PI diagram with two cubic vertices, and a non-1PI diagram with a tadpole attached to the two-point propagator.]
From the above classification, we see that, at the same loop order, we have three classes of connected diagrams that, in principle, should be discussed together. Notice that, contrary to the first two, the last diagram is not of 1-Particle-Irreducible (1PI) type since it consists of a tadpole attached to a two-point propagator.
To proceed further, we need to specify the background dynamics, which shapes the time evolution of the Hubble parameters, and the interaction Hamiltonian, which determines which terms in the Dyson expansion contribute at a given perturbative order.
### The (minimal) dynamics of ultra slow-roll
We start with a discussion of the USR dynamics.
In order to make our discussion more concrete, in the following we shall refer to fig. 1; see the caption for details. In this figure, we plot both the classical (top panel) and quantum (central panel) dynamics that characterize a realistic model of single-field inflation that features a phase of USR because of the presence of an approximate stationary inflection point in the inflationary potential. We refer to ref. [12] for more details about the model; we remark that this model, without including loop corrections to the computation of the curvature power spectrum, is compatible with CMB constraints and gives \(\approx 100\%\) of dark
Figure 1: _Classical (top panel) and quantum (central panel) dynamics in the context of an explicit single-field model of inflation that exhibits the presence of a phase of USR in between the time interval \(N_{\rm in}<N<N_{\rm end}\) (cf. ref. [12]). In this specific realization, we have \(N_{\rm in}=36.3\) and \(N_{\rm end}=38.8\)._Top panel:_ we plot the evolution of the background quantities \(\epsilon\), \(\epsilon_{2}\) and \(\epsilon_{2}^{\prime}\) (cf. eqs. (2, 3)) together with the evolution of the Hubble rate (normalized with respect to the value \(H_{\star}\equiv H(N_{\star})\) and scaled by a factor 10 for ease of comparison). **Central panel:** we plot the solutions of the M-S equation in eq. (33) for two different curvature modes. The mode in black (blue) exits the Hubble horizon well before (during) the USR phase. **Bottom panel:** we plot the classicality parameter \(C_{k}\) for the same two modes (cf. the main text for details). In the case of the black mode (\(N_{k}\ll N_{\rm in}\)) the classicality parameter quickly vanishes after horizon crossing, and remains negligible also during the USR phase. In the case of the blue mode (\(N_{q}\approx N_{\rm end}\)) the classicality parameter remains sizable during the USR phase, signalling that this mode retains its quantum nature during USR.
matter in the form of asteroid-mass PBHs.2 In fig. 1, the relation between \(e\)-fold time \(N\) (bottom \(x\)-axis) and conformal time \(\tau\) (top \(x\)-axis) is given by the integral
Footnote 2: The model predicts the tensor-to-scalar ratio \(r\simeq 0.037\) which is still (barely) compatible, at 95% confidence level, with the latest results released by the BICEP and Keck collaboration [45].
\[\tau=-\frac{1}{k_{\star}}\int_{N}^{N_{0}}\frac{H(N_{\star})}{H(N^{\prime})}e^{ N_{\star}-N^{\prime}}dN^{\prime}\,, \tag{27}\]
where \(N_{0}\) indicates the end of inflation, with \(\tau\) conventionally set to \(0\) at this time, and \(N_{\star}\) the instant of time at which the comoving scale \(k_{\star}=0.05\) Mpc\({}^{-1}\) crosses the Hubble horizon. In fig. 1, we set \(N_{\star}=0\), and the model gives \(N_{0}-N_{\star}\simeq 52\). We can highlight a few crucial properties of the dynamics presented above:
_a)_ We start from the classical analysis. During USR, \(\epsilon_{2}(\tau)\) changes according to the schematic
\[\epsilon_{2}\approx 0\quad\stackrel{\text{SR/USR at time }\tau_{\rm in}}{\Longrightarrow}\quad|\epsilon_{2}|>3\quad\stackrel{\text{USR/SR at time }\tau_{\rm end}}{\Longrightarrow}\quad\epsilon_{2}\approx O(1)\,, \tag{28}\]
thus making \(\epsilon_{2}^{\prime}(\tau)\) non-zero around the two transitions at conformal times \(\tau_{\rm in}\) and \(\tau_{\rm end}\) (equivalently, at \(e\)-fold times \(N_{\rm in}\) and \(N_{\rm end}\)). The evolution of \(\epsilon_{2}\) and \(\epsilon_{2}^{\prime}\) is shown in the top panel of fig. 1, with dotted green and dashed orange lines, respectively.

_b)_ During USR, the Hubble parameter \(\epsilon\) decreases exponentially fast (the inflaton almost stops its classical motion). The evolution of \(\epsilon\) is shown in the top panel of fig. 1 (dot-dashed magenta line); in addition, we also plot the time evolution of the Hubble rate \(H\).

_c)_ We now consider the USR dynamics at the quantum level. It is crucial to understand the typical behaviour of curvature modes (solid lines in the central panel of fig. 1) and their time derivatives (dashed lines). In the central panel of fig. 1, we plot two representative cases: the black lines correspond to a mode \(\zeta_{k}\) that exits the Hubble horizon at some time \(N_{k}\) well before the USR phase (like a CMB mode), while the blue lines correspond to a curvature mode \(\zeta_{q}\) that exits the Hubble horizon at some time \(N_{q}\) during the USR phase. We notice that the derivative \(|d\zeta_{k}/dN|\) decays exponentially fast and, soon after horizon crossing, becomes negligible, while \(|\zeta_{k}|\) settles to a constant value. Consequently, we expect that interaction terms that involve the time derivative of CMB modes will be strongly suppressed.

_d)_ Finally, we consider the issue of the quantum-to-classical transition. We define the so-called classicality parameter [46] \(C_{k}=|\zeta_{k}\dot{\zeta}_{k}^{\star}-\zeta_{k}^{\star}\dot{\zeta}_{k}|/|\zeta_{k}\dot{\zeta}_{k}|\), which goes to zero in the classical limit. In the case of conventional SR dynamics, the classicality parameter scales according to \(C_{k}\sim 2k\tau\), and it vanishes exponentially fast right after the horizon crossing time. In the bottom panel of fig. 1, we plot the classicality parameter for two representative modes of our dynamics. The black mode experiences its horizon crossing well before the USR phase (\(N_{k}\ll N_{\rm in}\)). Its classicality parameter quickly vanishes and remains \(\ll 1\) during the subsequent USR phase. The blue line, on the contrary, represents the classicality parameter for a mode that experiences its horizon crossing during the USR phase. Its classicality parameter remains sizable during the USR phase, signalling that this short mode cannot be treated classically during USR. A short numerical check of the slow-roll scaling of \(C_{k}\) is sketched below.
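As a quick numerical illustration of the slow-roll scaling quoted above, the snippet below evaluates \(C_{k}\) for the de Sitter Bunch–Davies mode function \(\zeta_{k}\propto(1+ik\tau)e^{-ik\tau}\). This is a minimal sketch, not the full numerical dynamics of fig. 1; the overall normalization of the mode function (which cancels in \(C_{k}\)) and the choice of time variable for the derivative are irrelevant for the ratio.

```python
import numpy as np

# Classicality parameter C_k = |zeta_k zeta_k'^* - zeta_k^* zeta_k'| / |zeta_k zeta_k'|
# evaluated on the slow-roll (de Sitter) Bunch-Davies mode zeta_k ∝ (1 + i k tau) e^{-i k tau}.
# The time variable used for the derivative (cosmic or conformal) cancels in the ratio.
def classicality(k, tau):
    z  = (1.0 + 1j * k * tau) * np.exp(-1j * k * tau)
    dz = k**2 * tau * np.exp(-1j * k * tau)          # d(zeta_k)/d(tau) for the mode above
    return np.abs(z * np.conj(dz) - np.conj(z) * dz) / np.abs(z * dz)

for tau in (-1.0, -1e-1, -1e-2, -1e-3):              # conformal time in units of 1/k
    print(f"k|tau| = {abs(tau):7.0e}   C_k = {classicality(1.0, tau):.3e}   2k|tau| = {2*abs(tau):.0e}")
```

On super-horizon scales (\(k|\tau|\ll 1\)) the output reproduces \(C_{k}\simeq 2k|\tau|\); the classicality parameter shown in fig. 1 must instead be computed from the numerical mode functions of the USR dynamics.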
With the aim of facilitating the numerical computations of the following sections, instead of working with a numerical description of USR, we now introduce a simple semi-analytical model [47; 48; 49]. We define the hyperbolic tangent parametrization
\[\eta(N)=\frac{1}{2}\left[-\eta_{\rm II}+\eta_{\rm II}\tanh\left(\frac{N-N_{ \rm in}}{\delta N}\right)\right]+\frac{1}{2}\left[\eta_{\rm II}+\eta_{\rm III }+(\eta_{\rm III}-\eta_{\rm II})\tanh\left(\frac{N-N_{\rm end}}{\delta N} \right)\right]\,, \tag{29}\]
where the parameter \(\delta N\) controls the width of the two transitions at \(N_{\rm in}\) and \(N_{\rm end}\). The limit \(\delta N\to 0\) reproduces the step-function approximation. Using the definition
\[\delta(x)=\lim_{\epsilon\to 0}\frac{1}{2\epsilon\cosh^{2}(x/\epsilon)}\,, \tag{30}\]
we find
\[\lim_{\delta N\to 0}\frac{d\eta}{dN}=(-\eta_{\rm II}+\eta_{\rm III}) \delta(N-N_{\rm end})+\eta_{\rm II}\delta(N-N_{\rm in})\,. \tag{31}\]
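As a simple cross-check of this limit, one can verify numerically that \(d\eta/dN\) computed from eq. (29) integrates, across each transition, to the \(\delta\)-function weights of eq. (31). A minimal sketch, with illustrative parameter values, is the following.

```python
import numpy as np

# Parameters of the tanh parametrization in eq. (29) (illustrative values, cf. figs. 1 and 3)
N_in, N_end     = 36.3, 38.8
eta_II, eta_III = 3.5, 0.0

def eta(N, deltaN):
    return (0.5 * (-eta_II + eta_II * np.tanh((N - N_in) / deltaN))
          + 0.5 * ( eta_II + eta_III + (eta_III - eta_II) * np.tanh((N - N_end) / deltaN)))

# integrate d(eta)/dN across a window around the first transition: the result approaches
# the delta-function weight eta_II of eq. (31) as deltaN -> 0
for deltaN in (0.5, 0.1, 0.02):
    N = np.linspace(N_in - 1.0, N_in + 1.0, 20001)
    weight = np.trapz(np.gradient(eta(N, deltaN), N), N)
    print(f"deltaN = {deltaN:4.2f}   integral across N_in = {weight:.3f}   (expected -> {eta_II})")
```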
Using \(\eta\simeq-(1/2)d\log\epsilon/dN\), we find the following expression
\[\frac{\epsilon(N)}{\epsilon_{\rm ref}}=e^{-\eta_{\rm III}(N-N_{\rm ref })}\left[\cosh\left(\frac{N-N_{\rm end}}{\delta N}\right)\cosh\left(\frac{N-N_{ \rm in}}{\delta N}\right)\right]^{-\frac{\delta N\eta_{\rm III}}{2}}\left[ \cosh\left(\frac{N_{\rm ref}-N_{\rm end}}{\delta N}\right)\cosh\left(\frac{N _{\rm ref}-N_{\rm in}}{\delta N}\right)\right]^{\frac{\delta N\eta_{\rm III}}{2}}\] \[\left[\cosh\left(\frac{N-N_{\rm end}}{\delta N}\right)\text{sech }\left(\frac{N-N_{\rm in}}{\delta N}\right)\right]^{\delta N\left(\eta_{\rm II }-\frac{\eta_{\rm III}}{2}\right)}\left[\cosh\left(\frac{N_{\rm ref}-N_{\rm end }}{\delta N}\right)\text{sech}\left(\frac{N_{\rm ref}-N_{\rm in}}{\delta N} \right)\right]^{\frac{\delta N}{2}(-2\eta_{\rm II}+\eta_{\rm III})}, \tag{32}\]
where \(\epsilon_{\rm ref}\ll 1\) is the value of \(\epsilon\) at some initial reference time \(N_{\rm ref}\). For future reference, we define \(\bar{\epsilon}(N)\equiv\epsilon(N)/\epsilon_{\rm ref}\). In this way we have an analytical description of the background dynamics; most importantly, eqs. (29, 32) are
almost all that we need to know to solve the M-S equation [50; 51]
\[\frac{d^{2}u_{k}}{dN^{2}}+(1-\epsilon)\frac{du_{k}}{dN}+\left[\frac{k^{2}}{( aH)^{2}}+(1+\epsilon-\eta)(\eta-2)-\frac{d}{dN}(\epsilon-\eta)\right]u_{k}=0\,. \tag{33}\]
For consistency with eq. (32), we consider the Hubble rate as a function of time according to
\[H(N)=H(N_{\rm ref})\exp\left[-\int_{N_{\rm ref}}^{N}\epsilon(N^{\prime})dN^{ \prime}\right]\,. \tag{34}\]
We shall use the short-hand notation \(a(N_{\rm i})\equiv a_{\rm i}\) and \(H(N_{\rm i})\equiv H_{\rm i}\). Consequently, we rewrite eq. (33) in the form
\[\frac{d^{2}u_{k}}{dN^{2}}+(1-\epsilon)\frac{du_{k}}{dN}+\left[\bar{k}^{2} \bigg{(}\frac{H_{\rm in}}{H}\bigg{)}^{2}e^{2(N_{\rm in}-N)}+(1+\epsilon-\eta) (\eta-2)-\frac{d}{dN}(\epsilon-\eta)\right]u_{k}=0\,, \tag{35}\]
with \(\bar{k}\equiv k/(a_{\rm in}H_{\rm in})\). We solve the above equation for different \(\bar{k}\) with Bunch-Davies initial conditions
\[\sqrt{k}\,u_{k}(N)=\frac{1}{\sqrt{2}}\,,\qquad\qquad\sqrt{k}\,\frac{du_{k}}{ dN}(N)=-\frac{i}{\sqrt{2}}\frac{k}{a(N)H(N)}\,, \tag{36}\]
at some arbitrary time \(N\ll N_{k}\), with \(k=a(N_{k})H(N_{k})\). Modes with \(\bar{k}\approx O(1)\) exit the Hubble horizon at about the beginning of the USR phase, modes with \(\bar{k}\ll 1\) exit the Hubble horizon well before the beginning of the USR phase, and modes with \(\bar{k}\gg 1\) exit the Hubble horizon well after the beginning of the USR phase. In the left panel of fig. 3, we show the tree-level power spectrum that we obtain by numerically solving eq. (35) and using eq. (18). Thanks to our parametrization in eq. (29), we control the sharpness of the transition by varying \(\delta N\).
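For concreteness, the following is a minimal numerical sketch of the kind of computation behind fig. 3: it implements the parametrization of eqs. (29, 32) through \(\eta\simeq-(1/2)d\log\epsilon/dN\), solves eq. (35) with the Bunch–Davies conditions of eq. (36), and assumes the standard relation \(\zeta_{k}=u_{k}/(a\sqrt{2\epsilon})\) (in Planck units, with \(H\) constant and \(a_{\rm in}H_{\rm in}=1\)) to convert the Mukhanov–Sasaki variable into the curvature power spectrum. Parameter values, grids and tolerances are illustrative choices, not the ones used in the figures.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.interpolate import interp1d

# background parameters: N_in, N_end as in fig. 1; eta_II, Delta N_USR as in fig. 3;
# deltaN, N_ref and eps_ref are illustrative choices
N_in, N_end, deltaN = 36.3, 38.8, 0.25
eta_II, eta_III     = 3.5, 0.0
N_ref, eps_ref      = 25.0, 1e-3

def eta(N):
    # hyperbolic-tangent parametrization of eq. (29)
    return (0.5 * (-eta_II + eta_II * np.tanh((N - N_in) / deltaN))
          + 0.5 * ( eta_II + eta_III + (eta_III - eta_II) * np.tanh((N - N_end) / deltaN)))

def deta_dN(N):
    return (0.5 * eta_II / (deltaN * np.cosh((N - N_in) / deltaN)**2)
          + 0.5 * (eta_III - eta_II) / (deltaN * np.cosh((N - N_end) / deltaN)**2))

# eps(N) = eps_ref * exp(-2 int eta dN'), i.e. eta ~ -(1/2) dlog(eps)/dN (equivalent to eq. (32))
N_grid  = np.linspace(N_ref, 46.0, 8001)
eta_g   = eta(N_grid)
int_eta = np.concatenate(([0.0], np.cumsum(0.5 * (eta_g[1:] + eta_g[:-1]) * np.diff(N_grid))))
eps     = interp1d(N_grid, eps_ref * np.exp(-2.0 * int_eta), kind='cubic')

# Mukhanov-Sasaki equation (35) in e-folds, with H ~ const and kbar = k/(a_in * H_in)
def ms_rhs(N, y, kbar):
    u, du = y
    e, h  = float(eps(N)), eta(N)
    k_aH2 = (kbar * np.exp(N_in - N))**2                               # [k/(aH)]^2
    coeff = k_aH2 + (1 + e - h) * (h - 2) + 2 * h * e + deta_dN(N)     # -(d/dN)(eps - eta) = 2*eta*eps + deta/dN
    return [du, -(1 - e) * du - coeff * u]

def solve_mode(kbar, n_sub=5.0, N_stop=43.0):
    # Bunch-Davies initial conditions of eq. (36), imposed n_sub e-folds inside the horizon
    N_start = N_in + np.log(kbar) - n_sub                              # horizon crossing at N_k = N_in + ln(kbar)
    u0  = (1.0 + 0.0j) / np.sqrt(2.0 * kbar)
    du0 = -1j * u0 * kbar * np.exp(N_in - N_start)
    return solve_ivp(ms_rhs, (N_start, N_stop), [u0, du0], args=(kbar,),
                     rtol=1e-7, atol=1e-10, dense_output=True)

def P_zeta(kbar, N_eval=43.0):
    # dimensionless spectrum, assuming zeta_k = u_k / (a sqrt(2 eps)) with M_pl = 1 and a_in = H = 1
    u = solve_mode(kbar).sol(N_eval)[0]
    a = np.exp(N_eval - N_in)
    return kbar**3 * np.abs(u)**2 / (2.0 * np.pi**2 * 2.0 * float(eps(N_eval)) * a**2)

print("enhancement of a USR-affected mode relative to a CMB-like mode:",
      P_zeta(np.exp(1.0)) / P_zeta(1e-2))
```

A scan over \(\bar{k}\), repeated for different values of \(\delta N\), then reproduces the qualitative behaviour of the left panel of fig. 3.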
In order to make contact with the analysis of ref. [31], we set \(\eta_{\rm III}=0\). However, it should be noted that in more realistic models we need \(\eta_{\rm III}\neq 0\) and negative so that the power spectrum decreases for modes with \(\bar{k}\gg 1\). This feature is necessary if we want to connect the USR phase to a subsequent SR phase that ends inflation. Since we are considering single-field models of inflation, in our analysis this is a necessary requirement. Consequently, the power spectrum at small scales - both before and after the peak - does not respect the property of scale invariance. Before the peak, the power spectrum of the short modes grows with a maximum slope given by \({\cal P}(\bar{k})\sim\bar{k}^{4}\); after
Figure 2: _Schematic evolution of \(\eta(N)\) in eq. (29) (**left panel**), \(\epsilon(N)\) in eq. (32) (**central panel**) and \(d\epsilon_{2}/dN\) (**right panel**) as a function of the number of \(e\)-folds \(N\). We explore different values of \(\delta N\), with the limit \(\delta N\to 0\) corresponding to instantaneous transitions SR/USR at \(N=N_{\rm in}\) and USR/SR at \(N=N_{\rm end}\). In the right panel, the limit \(\delta N\to 0\) corresponds to \(\delta\)-function transitions at \(N_{\rm in}\) and \(N_{\rm end}\). Furthermore, notice that the lines corresponding to \(d\epsilon_{2}/dN\) and \(-2d\eta/dN\) superimpose, showing that \(-2d\eta/dN\) is a perfect approximation of \(d\epsilon_{2}/dN\)._
the peak, the power spectrum of the short modes decays approximately as \(\mathcal{P}(\bar{k})\sim\bar{k}^{2\eta_{\rm III}}\). After the peak, therefore, the power spectrum becomes approximately scale invariant only if we take \(\eta_{\rm III}\approx 0\); however, in such case \(\epsilon\) remains anchored to the tiny value reached during the USR phase and inflation never ends.
In ref. [31], the loop integration is restricted to the interval of modes \(\bar{k}\in[\bar{k}_{\rm in},\bar{k}_{\rm end}]\) where \(\bar{k}_{\rm in}=1\) and \(\bar{k}_{\rm end}=e^{\Delta N_{\rm USR}}(H_{\rm end}/H_{\rm in})\simeq e^{ \Delta N_{\rm USR}}\) with \(\Delta N_{\rm USR}\equiv N_{\rm end}-N_{\rm in}\). This interval of modes is limited by the two vertical dashed lines in the left panel of fig. 3. In ref. [31], limiting the integration to the range \(\bar{k}\in[\bar{k}_{\rm in},\bar{k}_{\rm end}]\) is justified by the fact that the power spectrum of short modes peaks in this window of modes.
For future reference, let us stress one more important point. In the left panel of fig. 3 we indicate the growth of the power spectrum given by the scaling \(\Delta\mathcal{P}=(k_{\rm end}/k_{\rm in})^{2\eta_{\rm II}}\). This result immediately follows from the application of the SR formula \(\mathcal{P}(k)=H^{2}/8\pi^{2}\epsilon\) if one accounts for the exponential decay \(\epsilon\sim e^{-2\eta_{\rm II}N}\) during USR and converts \(N\) into \(k\) by means of the horizon-crossing condition \(k=aH\). Therefore, not surprisingly, the scaling \(\Delta\mathcal{P}=(k_{\rm end}/k_{\rm in})^{2\eta_{\rm II}}\) captures well the growth of the power spectrum if one directly jumps from the initial to the final SR phase. However, as shown in the left panel of fig. 3, the above estimate does not accurately describe the amplitude of the power spectrum at the position of its peak; the latter can easily be one order of magnitude larger than what is suggested by \(\Delta\mathcal{P}=(k_{\rm end}/k_{\rm in})^{2\eta_{\rm II}}\). This feature has important consequences when estimating the PBH abundance, which is rather sensitive to the spectral amplitude. We will come back to this point in the next section.
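As a simple numerical illustration, with the parameter values used in fig. 3 (\(\eta_{\rm II}=3.5\) and \(\Delta N_{\rm USR}=N_{\rm end}-N_{\rm in}=2.5\)) the naive scaling gives
\[\Delta\mathcal{P}=\left(\frac{k_{\rm end}}{k_{\rm in}}\right)^{2\eta_{\rm II}}=e^{2\eta_{\rm II}\Delta N_{\rm USR}}=e^{17.5}\simeq 4\times 10^{7}\,,\]
while, as stressed above, the actual peak amplitude of the power spectrum in the left panel of fig. 3 can be roughly one order of magnitude larger than this estimate.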
Finally, it is possible to check numerically that neglecting the time dependence of the Hubble rate as in eq. (34) has a negligible impact. In the following, therefore, we shall keep \(H\) constant (that is, \(H=H_{\rm ref}\) does not evolve in time). Furthermore, if we take \(H\) constant and in the limit \(\delta N=0\), it is possible to get, for some special values of \(\eta_{\rm II}\) and \(\eta_{\rm III}\), a complete analytical description of the SR/USR/SR dynamics [12; 47].
### The cubic action
At the cubic order in the fluctuations, the action is
\[\mathcal{S}_{3}=\int d^{4}x\bigg{\{} \epsilon^{2}a^{3}\dot{\zeta}^{2}\zeta+\epsilon^{2}a\zeta(\partial _{k}\zeta)(\partial^{k}\zeta)-2\epsilon^{2}a^{3}\dot{\zeta}(\partial_{k}\zeta )\partial^{k}(\partial^{-2}\dot{\zeta})+\frac{\epsilon\epsilon_{2}}{2}a^{3} \dot{\zeta}\zeta^{2}-\frac{a^{3}\epsilon^{3}}{2}\big{[}\dot{\zeta}^{2}\zeta- \zeta\partial_{k}\partial_{l}(\partial^{-2}\dot{\zeta})\partial^{k}\partial^{ l}(\partial^{-2}\dot{\zeta})\big{]}\] \[+\bigg{[}\frac{d}{dt}\left(\epsilon a^{3}\dot{\zeta}\right)- \epsilon a\partial_{k}\partial^{k}\zeta\bigg{]}\bigg{[} \frac{\epsilon_{2}}{2}\zeta^{2}+\frac{2}{H}\dot{\zeta}\zeta- \frac{1}{2a^{2}H^{2}}(\partial_{k}\zeta)(\partial^{k}\zeta)+\frac{1}{2a^{2}H ^{2}}\partial^{-2}\partial_{k}\partial_{l}(\partial^{k}\zeta\partial^{l}\zeta)\] \[+\frac{\epsilon}{H}(\partial_{k}\zeta)\partial^{k}(\partial^{-2} \dot{\zeta})-\frac{\epsilon}{H}\partial^{-2}\partial_{k}\partial_{l}\partial ^{k}\zeta\partial^{l}(\partial^{-2}\dot{\zeta})\bigg{]}\bigg{\}}\,. \tag{37}\]
Figure 3: _Left panel: Tree-level power spectrum in the minimal dynamics of section II.2. The numerical values of the other parameters are \(\eta_{\rm II}=3.5\), \(\eta_{\rm III}=0\) and \(N_{\rm end}-N_{\rm in}=2.5\). In our parametrization, we go beyond the instantaneous transition approximation and we explore different values of \(\delta N\). The vertical double-arrow indicates the growth of the power spectrum given by the naive scaling \(\Delta\mathcal{P}=(k_{\rm end}/k_{\rm in})^{2\eta_{\rm II}}=e^{2\eta_{\rm II} \Delta N_{\rm USR}}\). This scaling captures well the amplitude of the transition from the initial to the final SR phase but it does not give a reliable estimate of the peak amplitude of the power spectrum, which can easily be one order of magnitude larger. Right panel: Time evolution of two representative modes with \(\bar{k}=1\) and \(\bar{k}=e^{\Delta N_{\rm USR}}\) for \(\delta N\in[0.1\div 0.5]\) (from darker to lighter colors, respectively). The black lines represent the limit \(\delta N\to 0\)._
As shown in ref. [40], it is possible to simplify the cubic action by means of a field redefinition that introduces a non-linear shift in the original field. Concretely, if we define
\[\zeta\equiv\zeta_{n}+f(\zeta_{n})\,, \tag{38}\]
with
\[f(\zeta)\equiv\frac{1}{2}\bigg{[}\frac{\epsilon_{2}}{2}\zeta^{2}+ \frac{2}{H}\dot{\zeta}\zeta-\frac{(\partial_{k}\zeta)(\partial^{k}\zeta)}{2a^{ 2}H^{2}}+\frac{1}{2a^{2}H^{2}}\partial^{-2}\partial_{k}\partial_{l}(\partial^{k }\zeta\partial^{l}\zeta)+\frac{\epsilon}{H}(\partial_{k}\zeta)\partial^{k}( \partial^{-2}\dot{\zeta})-\frac{\epsilon}{H}\partial^{-2}\partial_{k}\partial _{l}\partial^{k}\zeta\partial^{l}(\partial^{-2}\dot{\zeta})\bigg{]}\,, \tag{39}\]
we find, by direct computation, that at the quadratic order the field \(\zeta_{n}\) is described by the action
\[\mathcal{S}_{2}(\zeta_{n})=\int d^{4}x\,\epsilon\,a^{3}\left[ \dot{\zeta}_{n}^{2}-\frac{(\partial_{k}\zeta_{n})(\partial^{k}\zeta_{n})}{a^{ 2}}\right]\,, \tag{40}\]
which has the same structure as the quadratic action for the original variable \(\zeta\). However, at the cubic order, we find
\[\mathcal{S}_{3}(\zeta_{n})=\int d^{4}x\bigg{\{} \epsilon^{2}a^{3}\dot{\zeta}_{n}^{2}\zeta_{n}+\epsilon^{2}a\zeta_{n }(\partial_{k}\zeta_{n})(\partial^{k}\zeta_{n})-2\epsilon^{2}a^{3}\dot{\zeta} _{n}(\partial_{k}\zeta_{n})\partial^{k}(\partial^{-2}\dot{\zeta}_{n})+\frac{ \epsilon\dot{\epsilon}_{2}}{2}a^{3}\dot{\zeta}_{n}\zeta_{n}^{2}\] \[-\frac{a^{3}\epsilon^{3}}{2}\big{[}\dot{\zeta}_{n}^{2}\zeta_{n}- \zeta_{n}\partial_{k}\partial_{l}(\partial^{-2}\dot{\zeta}_{n})\partial^{k} \partial^{l}(\partial^{-2}\dot{\zeta}_{n})\big{]}\bigg{\}}\,, \tag{41}\]
in which, thanks to the above field redefinition, the second and third lines in eq. (37) cancel out.
If we neglect terms with spatial derivatives and interactions suppressed by two or more powers of the Hubble parameter \(\epsilon\), we find
\[\mathcal{S}_{3}(\zeta_{n})\ni\int d^{4}x\,\frac{\epsilon\dot{ \epsilon}_{2}}{2}a^{3}\dot{\zeta}_{n}\zeta_{n}^{2}\,. \tag{42}\]
Notice that we do not count the coupling \(\epsilon_{2}\) as a slow-roll suppression since we are interested in the USR phase during which \(|\epsilon_{2}|>3\) and \(\dot{\epsilon}_{2}\neq 0\). Eq. (42) is the only interaction included in ref. [31]. This means that, implicitly, ref. [31] computes the two-point function for the field \(\zeta_{n}\). This is because, in terms of the dynamical variable \(\zeta\), there is another interaction of order \(\epsilon\epsilon_{2}\) that should be included, that is the one in the second line of eq. (37).
However, as stressed in ref.[40], \(\zeta_{n}\) is not the right dynamical variable to consider since it is not conserved outside the horizon. This is a trivial consequence of eq. (38). Since \(\zeta\) is conserved outside the horizon, \(\zeta_{n}\) can not be conserved simply because various coefficients in the non-linear relation in eq. (38) are time-dependent. Alternatively, as discussed in ref.[40], the above fact is also evident from the very same structure of the interactions that appear in eq. (41). The interaction \(\epsilon\dot{\epsilon}_{2}\dot{\zeta}_{n}\zeta_{n}^{2}\) only has one time-derivative acting on the field \(\zeta_{n}\); consequently, it alters the value of \(\zeta_{n}\) on super-horizon scales (if one computes the equation of motion for \(\zeta_{n}\), it is easy to see that the constant solution is not stable). Let us make the above considerations more concrete. Eventually, we are interested in the computation of the two-point function for the original curvature field. Given the field redefinition in eq. (38), we write
\[\langle\hat{\zeta}(\vec{x}_{1},\tau)\hat{\zeta}(\vec{x}_{2},\tau)\rangle= \langle\{\hat{\zeta}_{n}(\vec{x}_{1},\tau)+f[\hat{\zeta}_{n}(\vec{x}_{1},\tau)]\}\{\hat{\zeta}_{n}(\vec{x}_{2},\tau)+f[\hat{\zeta}_{n}(\vec{x}_{2},\tau)]\}\rangle\] \[=\langle\hat{\zeta}_{n}(\vec{x}_{1},\tau)\hat{\zeta}_{n}(\vec{x}_{2},\tau)\rangle+ \tag{43}\] \[\langle\hat{\zeta}_{n}(\vec{x}_{1},\tau)f[\hat{\zeta}_{n}(\vec{x}_{2},\tau)]\rangle+\langle f[\hat{\zeta}_{n}(\vec{x}_{1},\tau)]\hat{\zeta}_{n}(\vec{x}_{2},\tau)\rangle+ \tag{44}\] \[\langle f[\hat{\zeta}_{n}(\vec{x}_{1},\tau)]f[\hat{\zeta}_{n}(\vec{x}_{2},\tau)]\rangle\,, \tag{45}\]
The first term, eq. (43), corresponds to the two-point function for the shifted curvature field whose cubic action is given by eq. (41); \(\langle\hat{\zeta}_{n}(\vec{x}_{1},\tau)\hat{\zeta}_{n}(\vec{x}_{2},\tau)\rangle\) can be computed perturbatively by means of the "_in-in_" formalism sketched at the end of section II.1. Eqs. (44, 45) account for the difference between \(\zeta\) and \(\zeta_{n}\) at the non-linear level. Notice that the first term in the functional form in eq. (39) does not die off in the late-time limit \(\tau\to 0^{-}\) (in which the power spectrum must eventually be evaluated) if we consider the case in which \(\epsilon_{2}\neq 0\) after the USR phase (as expected in realistic single-field models, cf. section II.2). However, if we restrict ourselves to the case in which \(\eta_{\rm III}=0\), the contribution from the field redefinition vanishes. This limit was considered in ref. [31]. In order to make contact with the analysis presented in ref. [31], we shall also adopt in the bulk of this work the assumption \(\eta_{\rm III}=0\).
Let us now come back to the schematic in eq. (26). The cubic Hamiltonian interaction that follows from eq. (42) gives rise to the last two topologies of connected diagrams illustrated in eq. (26). As in ref. [31], we will only focus on the 1PI diagram, that is, the central diagram in eq. (26). The last diagram in eq. (26) consists of a tadpole
that is attached to a \(\zeta\)-propagator and affects its two-point correlation function at one loop. The correct way to deal with tadpoles is by changing the background solution, cf. ref. [52] for a discussion in the case of ordinary SR inflation and ref. [41] for the case in which there are additional spectator fields. Recently, ref. [53] estimated the tadpole correction to the background evolution in the context of a model in which there is a resonant amplification of field fluctuations. Imposing the condition that such a modification is negligible could give rise to an additional perturbativity bound. We postpone a comprehensive exploration of this issue in the context of realistic USR dynamics to future work, cf. section V.
### Beyond the cubic action
Before proceeding, we comment on quartic interactions since, as qualitatively discussed in eq. (26), they give rise to one-loop corrections which are of the same order as those generated by cubic interaction terms. The derivation of the fourth-order action has been discussed in ref. [54]. Based on this result, refs. [31; 33] claim that the relevant quartic interaction in the case of USR dynamics (that is, the quartic interaction proportional to \(\epsilon_{2}\)) gives a vanishing contribution when inserted in eq. (24). On the contrary, ref. [35] does include quartic interactions using an approach based on the effective field theory of inflation and claims that they give a non-trivial contribution to the loop-corrected power spectrum. However, ref. [35] does not clarify the origin of the discrepancy with ref. [31]. Generally speaking, we expect cubic and quartic interactions to be inextricably linked. For instance, the quartic Hamiltonian receives a contribution that arises from the modification of the conjugate momentum if there are cubic interactions which depend on \(\dot{\zeta}\). Similarly, cubic interactions with spatial derivatives are paired with quartic interactions induced by a residual spatial conformal symmetry of the perturbed metric [41]. En route, we notice that interactions with spatial derivatives are usually neglected for modes that are super-horizon. However, in the spirit of the loop computation in ref. [31], the momenta over which the loop is integrated cross the horizon during the USR phase, and, naively, their spatial derivatives are not subject to any super-horizon suppression.
In this work, as a preliminary step towards a more complete analysis and in order to compare our results with the claim made in refs. [32; 33; 34; 35; 36; 37; 38; 39], we only focus on the cubic interaction in eq. (42). However, we stress that all the arguments listed above motivate the need for a more comprehensive analysis. We postpone this task to future work, cf. section V.
## III One-loop computation
We consider in this section the cubic interaction Hamiltonian given by (we omit the subscript \({}_{I}\) in the interaction-picture fields)
\[\hat{H}^{(3)}_{\rm int}(\tau)=\frac{1}{2}\int d^{3}\vec{x}\,\epsilon(\tau) \epsilon_{2}^{\prime}(\tau)a^{2}(\tau)\zeta^{\prime}(\vec{x},\tau)\zeta(\vec{ x},\tau)^{2}\,. \tag{46}\]
We consider eq. (25); this can be written in the compact form
\[\langle\hat{\zeta}(\vec{x}_{1},\tau)\hat{\zeta}(\vec{x}_{2},\tau)\rangle_{2^{ \rm nd}}=\langle\hat{\zeta}(\vec{x}_{1},\tau)\hat{\zeta}(\vec{x}_{2},\tau) \rangle_{2^{\rm nd}}^{(1,1)}-2{\rm Re}\left[\langle\hat{\zeta}(\vec{x}_{1}, \tau)\hat{\zeta}(\vec{x}_{2},\tau)\rangle_{2^{\rm nd}}^{(0,2)}\right] \tag{47}\]
where
\[\langle\hat{\zeta}(\vec{x}_{1},\tau)\hat{\zeta}(\vec{x}_{2},\tau) \rangle_{2^{\rm nd}}^{(1,1)}\equiv \int_{-\infty(1+i\epsilon)}^{\tau}d\tau_{1}\int_{-\infty(1-i \epsilon)}^{\tau}d\tau_{2}\langle 0|\hat{H}_{\rm int}(\tau_{1})\hat{\zeta}_{I}( \vec{x}_{1},\tau)\hat{\zeta}_{I}(\vec{x}_{2},\tau)\hat{H}_{\rm int}(\tau_{2}) |0\rangle\,, \tag{48}\] \[\langle\hat{\zeta}(\vec{x}_{1},\tau)\hat{\zeta}(\vec{x}_{2},\tau) \rangle_{2^{\rm nd}}^{(0,2)}\equiv \int_{-\infty(1-i\epsilon)}^{\tau}d\tau_{1}\int_{-\infty(1-i \epsilon)}^{\tau_{1}}d\tau_{2}\langle 0|\hat{\zeta}_{I}(\vec{x}_{1},\tau)\hat{ \zeta}_{I}(\vec{x}_{2},\tau)\hat{H}_{\rm int}(\tau_{1})\hat{H}_{\rm int}( \tau_{2})|0\rangle\,. \tag{49}\]
This expansion is consistent with eq. (16) of ref. [41]. Considering the first contribution, eq. (48), one finds
\[\langle\hat{\zeta}(\vec{x}_{1},\tau)\hat{\zeta}(\vec{x}_{2},\tau)\rangle_{2^{\rm nd}}^{(1,1)}=\frac{1}{4}\int_{-\infty_{+}}^{\tau}d\tau_{1}\epsilon(\tau_{1})\epsilon_{2}^{\prime}(\tau_{1})a^{2}(\tau_{1})\int_{-\infty_{-}}^{\tau}d\tau_{2}\epsilon(\tau_{2})\epsilon_{2}^{\prime}(\tau_{2})a^{2}(\tau_{2})\int d^{3}\vec{y}\,d^{3}\vec{z}\] \[\int\left[\prod_{i=1}^{8}\frac{d^{3}\vec{k}_{i}}{(2\pi)^{3}}\right]e^{i\vec{y}\cdot(\vec{k}_{1}+\vec{k}_{2}+\vec{k}_{3})}e^{i(\vec{x}_{1}\cdot\vec{k}_{4}+\vec{x}_{2}\cdot\vec{k}_{5})}e^{i\vec{z}\cdot(\vec{k}_{6}+\vec{k}_{7}+\vec{k}_{8})}\] \[\langle 0|\hat{\zeta}_{I}^{\prime}(\vec{k}_{1},\tau_{1})\hat{\zeta}_{I}(\vec{k}_{2},\tau_{1})\hat{\zeta}_{I}(\vec{k}_{3},\tau_{1})\hat{\zeta}_{I}(\vec{k}_{4},\tau)\hat{\zeta}_{I}(\vec{k}_{5},\tau)\hat{\zeta}_{I}^{\prime}(\vec{k}_{6},\tau_{2})\hat{\zeta}_{I}(\vec{k}_{7},\tau_{2})\hat{\zeta}_{I}(\vec{k}_{8},\tau_{2})|0\rangle\,. \tag{50}\]
The 36 connected Wick contractions can be expressed as
\[\langle\hat{\zeta}(\vec{x}_{1},\tau)\hat{\zeta}(\vec{x}_{2},\tau) \rangle_{2^{\rm nd}}^{(1,1)}= \int_{-\infty_{+}}^{\tau}d\tau_{1}\epsilon(\tau_{1})\epsilon_{2}^{ \prime}(\tau_{1})a^{2}(\tau_{1})\int_{-\infty_{-}}^{\tau}d\tau_{2}\epsilon(\tau_ {2})\epsilon_{2}^{\prime}(\tau_{2})a^{2}(\tau_{2})\int\frac{d^{3}\vec{k}}{(2\pi) ^{3}}\frac{d^{3}\vec{q}}{(2\pi)^{3}}e^{i(\vec{x}_{1}-\vec{x}_{2})\cdot(\vec{k}+ \vec{q})}\] \[\big{[}\,|\zeta_{k+q}(\tau)|^{2}\,\big{\{}\zeta_{k}(\tau_{1})\zeta_ {k+q}^{\prime}(\tau_{1})\zeta_{q}(\tau_{1})\zeta_{k}^{*}(\tau_{2})\zeta_{k+q}^ {\prime\,*}(\tau_{2})\zeta_{q}^{*}(\tau_{2})+\] \[\zeta_{k}(\tau_{1})\zeta_{k+q}^{\prime}(\tau_{1})\zeta_{q}(\tau_{ 1})\zeta_{k+q}^{\prime\,*}(\tau_{2})\big{[}\zeta_{k-q}^{\prime\,*}(\tau_{2}) \zeta_{q}^{*}(\tau_{2})+\zeta_{k-q}^{*}(\tau_{2})\zeta_{q}^{\prime\,*}(\tau_{ 2})\big{]}+\] \[\zeta_{k}^{*}(\tau_{2})\zeta_{k+q}^{\prime\,*}(\tau_{2})\zeta_{q}^ {*}(\tau_{2})\zeta_{k+q}(\tau_{1})\big{[}\zeta_{k}^{\prime}(\tau_{1})\zeta_{q} (\tau_{1})+\zeta_{k}(\tau_{1})\zeta_{q}^{\prime}(\tau_{1})\big{]}+\] \[\zeta_{k}^{\prime\,*}(\tau_{1})\zeta_{k+q}(\tau_{1})\zeta_{q}( \tau_{1})\zeta_{k+q}^{\prime\,*}(\tau_{2})\big{[}\zeta_{q}^{*}(\tau_{2})\zeta _{k}^{*}(\tau_{2})+\zeta_{k}^{*}(\tau_{2})\zeta_{q}^{*}(\tau_{2})\big{]}+\] \[\zeta_{k}^{\prime\,*}(\tau_{2})\zeta_{k+q}^{\prime\,*}(\tau_{2}) \zeta_{q}^{*}(\tau_{2})\zeta_{k+q}(\tau_{1})\big{[}\zeta_{q}(\tau_{1})\zeta_{ k}^{\prime}(\tau_{1})+\zeta_{k}(\tau_{1})\zeta_{q}^{\prime}(\tau_{1})\big{]} \big{\}}\big{]}\,. \tag{51}\]
Consider now eq. (49). One can write it in the form
\[\langle\hat{\zeta}(\vec{x}_{1},\tau)\hat{\zeta}(\vec{x}_{2},\tau)\rangle_{2^{\rm nd}}^{(0,2)}= \frac{1}{4}\int_{-\infty_{-}}^{\tau}d\tau_{1}\epsilon(\tau_{1})\epsilon_{2}^{\prime}(\tau_{1})a^{2}(\tau_{1})\int_{-\infty_{-}}^{\tau_{1}}d\tau_{2}\epsilon(\tau_{2})\epsilon_{2}^{\prime}(\tau_{2})a^{2}(\tau_{2})\int d^{3}\vec{y}\,d^{3}\vec{z}\] \[\int\left[\prod_{i=1}^{8}\frac{d^{3}\vec{k}_{i}}{(2\pi)^{3}}\right]e^{i\vec{y}\cdot(\vec{k}_{1}+\vec{k}_{2}+\vec{k}_{3})}e^{i(\vec{x}_{1}\cdot\vec{k}_{4}+\vec{x}_{2}\cdot\vec{k}_{5})}e^{i\vec{z}\cdot(\vec{k}_{6}+\vec{k}_{7}+\vec{k}_{8})}\] \[\langle 0|\hat{\zeta}_{I}(\vec{k}_{4},\tau)\hat{\zeta}_{I}(\vec{k}_{5},\tau)\hat{\zeta}_{I}^{\prime}(\vec{k}_{1},\tau_{1})\hat{\zeta}_{I}(\vec{k}_{2},\tau_{1})\hat{\zeta}_{I}(\vec{k}_{3},\tau_{1})\hat{\zeta}_{I}^{\prime}(\vec{k}_{6},\tau_{2})\hat{\zeta}_{I}(\vec{k}_{7},\tau_{2})\hat{\zeta}_{I}(\vec{k}_{8},\tau_{2})|0\rangle\,. \tag{52}\]
After Wick contractions, we find
\[\langle\hat{\zeta}(\vec{x}_{1},\tau)\hat{\zeta}(\vec{x}_{2},\tau) \rangle_{2^{\rm nd}}^{(0,2)}= \int_{-\infty_{-}}^{\tau}d\tau_{1}\epsilon(\tau_{1})\epsilon_{2}^ {\prime}(\tau_{1})a^{2}(\tau_{1})\int_{-\infty_{-}}^{\tau_{1}}d\tau_{2} \epsilon(\tau_{2})\epsilon_{2}^{\prime}(\tau_{2})a^{2}(\tau_{2})\int\frac{d^{3} \vec{k}}{(2\pi)^{3}}\frac{d^{3}\vec{q}}{(2\pi)^{3}}e^{i(\vec{x}_{1}-\vec{x}_{ 2})\cdot(\vec{k}+\vec{q})}\] \[\big{[}\zeta_{k+q}^{2}(\tau)\big{\{}\zeta_{k}(\tau_{1})\zeta_{k+q }^{\prime\,*}(\tau_{1})\zeta_{q}(\tau_{1})\zeta_{k}^{*}(\tau_{2})\zeta_{k+q}^ {\prime\,*}(\tau_{2})\zeta_{q}^{*}(\tau_{2})+\] \[\zeta_{k}(\tau_{1})\zeta_{k+q}^{\prime\,*}(\tau_{1})\zeta_{q}( \tau_{1})\zeta_{k+q}^{\prime\,*}(\tau_{2})\big{[}\zeta_{k}^{\prime\,*}(\tau_{2}) \zeta_{q}^{\prime\,*}(\tau_{2})+\zeta_{k}^{*}(\tau_{2})\zeta_{q}^{\prime\,*}( \tau_{2})\big{]}+\] \[\zeta_{k}^{\prime\,*}(\tau_{2})\zeta_{k+q}^{\prime\,*}(\tau_{2}) \zeta_{q}^{\prime\,*}(\tau_{2})\zeta_{k+q}^{\prime\,*}(\tau_{1})\big{[}\zeta_{ k}^{\prime\,*}(\tau_{1})\zeta_{q}(\tau_{1})+\zeta_{k}(\tau_{1})\zeta_{q}^{\prime}(\tau_{1}) \big{]}\big{\}}\,. \tag{53}\]
At this point we shift the momentum following the prescription \(k\to k-q\), in such a way that \(k\) is identified with the external momentum. The power spectrum at one loop can therefore be written as
\[\mathcal{P}(k)=\lim_{\tau\to 0}\left(\frac{k^{3}}{2\pi^{2}}\right)\left\{ \left|\zeta_{k}(\tau)\right|^{2}+\frac{1}{(4\pi)^{2}}\left[\Delta P_{1}(k,\tau)+ \Delta P_{2}(k,\tau)\right]\right\}\,, \tag{54}\]
with
\[\Delta P_{1}(k,\tau) \equiv 4\int_{-\infty_{+}}^{\tau}d\tau_{1}\epsilon(\tau_{1})\epsilon_{2}^ {\prime}(\tau_{1})a^{2}(\tau_{1})\int_{-\infty_{-}}^{\tau}d\tau_{2}\epsilon(\tau_ {2})\epsilon_{2}^{\prime}(\tau_{2})a^{2}(\tau_{2})\int_{0}^{\infty}dq\,q^{2}\,d (\cos\theta)\left|\zeta_{k}(\tau)\right|^{2}\] \[\times\big{\{}\zeta_{k-q}(\tau_{1})\zeta_{k}^{\prime}(\tau_{1})\zeta _{q}(\tau_{1})\zeta_{k-q}^{*}(\tau_{2})\zeta_{k}^{\prime\,*}(\tau_{2})+\zeta_{k- q}^{*}(\tau_{2})\zeta_{q}^{\prime\,*}(\tau_{2})\big{]}+\] \[\zeta_{k-q}^{*}(\tau_{2})\zeta_{k}^{\prime\,*}(\tau_{2})\zeta_{q} ^{*}(\tau_{2})\zeta_{k}(\tau_{1})\big{[}\zeta_{k-q}^{*}(\tau_{1})\zeta_{q}(\tau_{ 1})+\zeta_{k-q}(\tau_{1})\zeta_{q}^{\prime}(\tau_{1})\big{]}+\] \[\zeta_{k-q}^{*}(\tau_{1})\zeta_{k}(\tau_{1})\zeta_{q}(\tau_{1}) \zeta_{k}^{*}(\tau_{2})\big{[}\zeta_{q}^{*}(\tau_{2})\zeta_{k-q}^{*}(\tau_{2})+ \zeta_{k-q}^{*}(\tau_{2})\zeta_{q}^{\prime\,*}(\tau_{2})\big{]}+\] \[\zeta_{k-q}^{*}(\tau_
### Loop correction with a large hierarchy of scales
First, we will be concerned with external momenta that describe the large CMB scales, while the USR takes place when modes \(k_{\textsc{USR}}\gg k\) cross the horizon. The situation is summarized in the following schematic
[Eq. (57): schematic sketch of comoving length \(\lambda\) versus time; the blue horizontal band marks the interval of modes that cross the horizon during the USR phase, which is the vertical band shaded in magenta.]
in which the blue horizontal band represents the interval of modes that cross the horizon during the USR phase (the vertical band shaded in magenta). In other words, since we will restrict the integration to momenta \(q\in[q_{\text{in}},q_{\text{end}}]\) that are enhanced by the USR phase, we can assume \(q\gg k\). Consequently, as in ref. [31], we approximate
\[k-q=\sqrt{k^{2}+q^{2}-2kq\cos(\theta)}\approx q\,,\qquad\int_{-1}^{+1}d(\cos \theta)=2\,. \tag{58}\]
With these assumptions, we can further simplify the expressions. We collect each contribution depending on the number of time derivatives acting on the long mode \(\zeta_{k}\). In each expression, the first line indicates terms with no derivative on the long modes, the second one those with one derivative, and the last one those with two. One finds
\[\begin{split}\Delta P_{1}(k,\tau)&\equiv 8\int_{\tau_{ \text{in}}}^{\tau}d\tau_{1}\epsilon(\tau_{1})\epsilon_{2}^{\prime}(\tau_{1})a ^{2}(\tau_{1})\int_{\tau_{\text{in}}}^{\tau}d\tau_{2}\epsilon(\tau_{2}) \epsilon_{2}^{\prime}(\tau_{2})a^{2}(\tau_{2})\int_{q_{\text{in}}}^{q_{\text{ end}}}dq\,q^{2}\,\left|\zeta_{k}(\tau)\right|^{2}\\ &\times\left\{4\zeta_{k}(\tau_{1})\zeta_{k}^{*}(\tau_{2})\zeta_{q }(\tau_{1})\zeta_{q}^{\prime}(\tau_{1})\zeta_{q}^{*}(\tau_{2})\zeta_{q}^{ \prime\,*}(\tau_{2})+\right.\\ &\qquad\left.2\zeta_{k}^{\prime}(\tau_{1})\zeta_{k}^{*}(\tau_{2}) \zeta_{q}(\tau_{1})\zeta_{q}(\tau_{1})\zeta_{q^{\prime}*}^{\prime}(\tau_{2}) \zeta_{q}^{*}(\tau_{2})+2\zeta_{k}(\tau_{1})\zeta_{k}^{\prime\,*}(\tau_{2}) \zeta_{q}(\tau_{1})\zeta_{q}^{\prime}(\tau_{1})\zeta_{q}^{*}(\tau_{2})\zeta_{q }^{*}(\tau_{2})+\right.\\ &\qquad\left.\zeta_{k}^{\prime}(\tau_{1})\zeta_{k}^{\prime\,*}( \tau_{2})\zeta_{q}(\tau_{1})\zeta_{q}(\tau_{1})\zeta_{q}^{*}(\tau_{2})\zeta_{q }^{*}(\tau_{2})\right\},\\ \Delta P_{2}(k,\tau)&\equiv-16\text{Re}\Big{[}\int _{\tau_{\text{in}}}^{\tau}d\tau_{1}\epsilon(\tau_{1})\epsilon_{2}^{\prime}( \tau_{1})a^{2}(\tau_{1})\int_{\tau_{\text{in}}}^{\tau_{1}}d\tau_{2}\epsilon( \tau_{2})\epsilon_{2}^{\prime}(\tau_{2})a^{2}(\tau_{2})\int_{q_{\text{in}}}^{q _{\text{end}}}dq\,q^{2}\,\zeta_{k}(\tau)^{2}\\ &\times\left\{4\zeta_{k}^{*}(\tau_{1})\zeta_{k}^{*}(\tau_{2}) \zeta_{q}^{\prime}(\tau_{1})\zeta_{q}(\tau_{1})\zeta_{q}^{*}(\tau_{2})\zeta_{ q}^{\prime\,*}(\tau_{2})+\right.\\ &\qquad\left.2\zeta_{k}^{\prime\,*}(\tau_{1})\zeta_{k}^{*}(\tau_{ 2})\zeta_{q}(\tau_{1})^{2}\zeta_{q}^{\prime\,*}(\tau_{2})\zeta_{q}^{*}(\tau_{2 })+2\zeta_{k}^{*}(\tau_{1})\zeta_{k}^{\prime\,*}(\tau_{2})\zeta_{q}^{\prime}( \tau_{1})\zeta_{q}(\tau_{1})\zeta_{q}^{*}(\tau_{2})^{2}+\right.\\ &\qquad\left.\zeta_{k}^{\prime\,*}(\tau_{1})\zeta_{k}^{\prime\,*}( \tau_{2})\zeta_{q}(\tau_{1})^{2}\zeta_{q}^{*}(\tau_{2})^{2}\right\}\Big{]}\,. \end{split} \tag{60}\]
We can combine the two contributions using the properties of symmetric integrals for holomorphic symmetric functions \(f(\tau_{1},\tau_{2})=f(\tau_{2},\tau_{1})\)[55]
\[\int_{\tau_{\text{in}}}^{\tau}d\tau_{1}\int_{\tau_{\text{in}}}^{\tau_{1}}d\tau_ {2}f\left(\tau_{1},\tau_{2}\right)=\frac{1}{2}\int_{\tau_{\text{in}}}^{\tau}d \tau_{1}\int_{\tau_{\text{in}}}^{\tau}d\tau_{2}f\left(\tau_{1},\tau_{2}\right). \tag{61}\]
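For a real symmetric integrand this identity is elementary and can be checked directly; a minimal numerical sketch (the in-in computation uses it for holomorphic symmetric functions along the deformed contours) is the following.

```python
import numpy as np
from scipy.integrate import dblquad

# check: int_{tau_in}^{tau} dt1 int_{tau_in}^{t1} dt2 f(t1,t2) = (1/2) * integral over the full square,
# for a symmetric test function f(t1,t2) = f(t2,t1)
f = lambda t1, t2: np.exp(t1 * t2) + np.cos(t1 + t2)
tau_in, tau = -2.0, -0.5

# dblquad integrates func(y, x); here y = t2 and x = t1
triangle, _ = dblquad(lambda t2, t1: f(t1, t2), tau_in, tau, lambda t1: tau_in, lambda t1: t1)
square,   _ = dblquad(lambda t2, t1: f(t1, t2), tau_in, tau, lambda t1: tau_in, lambda t1: tau)
print(triangle, 0.5 * square)   # the two numbers agree to quadrature accuracy
```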
To shorten the notation, we introduce \(\Delta P(k,\tau)=\Delta P_{1}(k,\tau)+\Delta P_{2}(k,\tau)\) and collect the individual contributions order by order in derivatives:
* **0th order in time derivatives of the long mode.** For ease of reading, we introduce the short-hand notation \(\epsilon(\tau)\epsilon_{2}^{\prime}(\tau)a^{2}(\tau)\equiv g(\tau)\). Consider the sum of the two integrals \[\Delta P_{\rm 0th}(k,\tau)=32\int_{\tau_{\rm in}}^{\tau}d\tau_{1}g( \tau_{1})\int_{\tau_{\rm in}}^{\tau}d\tau_{2}g(\tau_{2})\int_{q_{\rm in}}^{q_{ \rm end}}dq\,q^{2}\,\left|\zeta_{k}(\tau)\right|^{2}\zeta_{k}(\tau_{1})\zeta_{ k}^{*}(\tau_{2})\zeta_{q}(\tau_{1})\zeta_{q}^{\prime}(\tau_{1})\zeta_{q}^{*}(\tau_{2}) \zeta_{q}^{\prime\,*}(\tau_{2})\] (62) \[-64{\rm Re}\Big{[}\int_{\tau_{\rm in}}^{\tau}d\tau_{1}g(\tau_{1}) \int_{\tau_{\rm in}}^{\tau_{1}}d\tau_{2}g(\tau_{2})\int_{q_{\rm in}}^{q_{\rm end }}dq\,q^{2}\,\zeta_{k}(\tau)^{2}\zeta_{k}^{*}(\tau_{1})\zeta_{k}^{*}(\tau_{2} )\zeta_{q}^{\prime}(\tau_{1})\zeta_{q}(\tau_{1})\zeta_{q}^{*}(\tau_{2})\zeta_{ q}^{\prime\,*}(\tau_{2})\Big{]}\,.\] (63) We notice that, in the first integral in eq. (62), the exchange \(\tau_{1}\leftrightarrow\tau_{2}\) transforms \[\zeta_{k}(\tau_{1})\zeta_{k}^{*}(\tau_{2})\zeta_{q}(\tau_{1}) \zeta_{q}^{\prime}(\tau_{1})\zeta_{q}^{*}(\tau_{2})\zeta_{q}^{\prime\,*}(\tau _{2})\stackrel{{\tau_{1}\leftrightarrowleftrightarrow\tau_{2}}}{{ \Longrightarrow}}\zeta_{k}(\tau_{2})\zeta_{k}^{*}(\tau_{1})\zeta_{q}(\tau_{2}) \zeta_{q}^{\prime}(\tau_{2})\zeta_{q}^{*}(\tau_{1})\zeta_{q}^{\prime\,*}(\tau _{1})=\] \[\left[\zeta_{k}(\tau_{1})\zeta_{k}^{*}(\tau_{2})\zeta_{q}(\tau_{1} )\zeta_{q}^{\prime}(\tau_{1})\zeta_{q}^{*}(\tau_{2})\zeta_{q}^{\prime\,*}( \tau_{2})\right]^{*}.\] (64) Therefore, the first integral in eq. (62) is fully symmetric under the exchange \(\tau_{1}\leftrightarrow\tau_{2}\), and we rewrite \(\Delta P_{\rm 0th}(k,\tau)\) as \[\Delta P_{\rm 0th}(k,\tau)=\] \[32\int_{\tau_{\rm in}}^{\tau}d\tau_{1}g(\tau_{1})\int_{\tau_{\rm in }}^{\tau}d\tau_{2}g(\tau_{2})\int_{q_{\rm in}}^{q_{\rm end}}dq\,q^{2}\,\left| \zeta_{k}(\tau)\right|^{2}{\rm Re}\big{[}\zeta_{k}(\tau_{1})\zeta_{k}^{*}(\tau _{2})\zeta_{q}(\tau_{1})\zeta_{q}^{\prime}(\tau_{1})\zeta_{q}^{*}(\tau_{2}) \zeta_{q}^{\prime\,*}(\tau_{2})\big{]}\] (65) \[-64\int_{\tau_{\rm in}}^{\tau}d\tau_{1}g(\tau_{1})\int_{\tau_{\rm in }}^{\tau_{1}}d\tau_{2}g(\tau_{2})\int_{q_{\rm in}}^{q_{\rm end}}dq\,q^{2}\,{ \rm Re}\big{[}\zeta_{k}(\tau)^{2}\zeta_{k}^{*}(\tau_{1})\zeta_{k}^{*}(\tau_{2 })\zeta_{q}^{\prime}(\tau_{1})\zeta_{q}(\tau_{1})\zeta_{q}^{*}(\tau_{2}) \zeta_{q}^{\prime\,*}(\tau_{2})\big{]}\,.\] (66) and apply to the first integral in eq. (65) the identity in eq. (61). We arrive at \[\Delta P_{\rm 0th}(k,\tau)=64\int_{\tau_{\rm in}}^{\tau}d\tau_{1}g( \tau_{1})\int_{\tau_{\rm in}}^{\tau_{1}}d\tau_{2}g(\tau_{2}) \int_{q_{\rm in}}^{q_{\rm end}}dq\,q^{2}\left\{{\rm Re}\big{[}\zeta_{k}( \tau)\zeta_{k}^{*}(\tau)\zeta_{k}(\tau_{1})\zeta_{k}^{*}(\tau_{2})\zeta_{q}( \tau_{1})\zeta_{q}^{\prime}(\tau_{1})\zeta_{q}^{*}(\tau_{2})\zeta_{q}^{\prime \,*}(\tau_{2})\big{]}\right.\] \[-\left.{\rm Re}\big{[}\zeta_{k}(\tau)^{2}\zeta_{k}^{*}(\tau_{1}) \zeta_{k}^{*}(\tau_{2})\zeta_{q}^{\prime}(\tau_{1})\zeta_{q}(\tau_{1})\zeta_{q} ^{*}(\tau_{2})\zeta_{q}^{\prime\,*}(\tau_{2})\big{]}\right\}\,.\] (67) We are now in the position of combining the two integrand functions. Schematically, we define the two combinations \[X\equiv\zeta_{k}^{*}(\tau)\zeta_{k}(\tau_{1})\,,\hskip 28.452756ptY\equiv\zeta_{k}( \tau)\zeta_{q}(\tau_{1})\zeta_{q}^{\prime}(\tau_{1})\zeta_{k}^{*}(\tau_{2}) \zeta_{q}^{*}(\tau_{2})\zeta_{q}^{\prime\,*}(\tau_{2})\,,\] (68) such that the integrand in eq. 
(67) becomes \[{\rm Re}(XY)-{\rm Re}(X^{*}Y)=-2{\rm Im}(X){\rm Im}(Y)\,. \tag{69}\] We thus arrive at the result \[\Delta P_{\rm 0th}(k,\tau)= -128\int_{\tau_{\rm in}}^{\tau}d\tau_{1}\epsilon(\tau_{1})\epsilon_{2}^{\prime}(\tau_{1})a^{2}(\tau_{1})\int_{\tau_{\rm in}}^{\tau_{1}}d\tau_{2}\epsilon(\tau_{2})\epsilon_{2}^{\prime}(\tau_{2})a^{2}(\tau_{2})\int_{q_{\rm in}}^{q_{\rm end}}dq\,q^{2}\] \[\times{\rm Im}\left[\zeta_{k}^{*}(\tau)\zeta_{k}(\tau_{1})\right]{\rm Im}\left[\zeta_{k}(\tau)\zeta_{q}(\tau_{1})\zeta_{q}^{\prime}(\tau_{1})\zeta_{k}^{*}(\tau_{2})\zeta_{q}^{*}(\tau_{2})\zeta_{q}^{\prime\,*}(\tau_{2})\right]. \tag{70}\] Given that we are interested in modes \(k\) that are much smaller than the USR-enhanced ones, they are super-horizon at the time of the USR phase. Thus, for any time \(\tau\gtrsim\tau_{\rm in}\) of relevance for both time integrations, one has that \[{\rm Im}\left[\zeta_{k}(\tau)\zeta_{k}^{*}(\tau_{1})\right]\simeq{\rm Im}\left[\left|\zeta_{k}(\tau)\right|^{2}\right]=0, \tag{71}\] which makes the above contribution negligible.
* **1st order in time derivatives of the long mode.** Starting from the second lines of eqs. (59,60), we now consider the sum \[\Delta P_{\rm 1st}(k,\tau)= 16\int_{\tau_{\rm in}}^{\tau}d\tau_{1}g(\tau_{1})\int_{\tau_{\rm in }}^{\tau}d\tau_{2}g(\tau_{2})\int_{q_{\rm in}}^{q_{\rm end}}dq\,q^{2}\,\left| \zeta_{k}(\tau)\right|^{2}\] \[\left[\zeta_{k}^{\prime}(\tau_{1})\zeta_{k}^{*}(\tau_{2})\zeta_{q }(\tau_{1})\zeta_{q}(\tau_{1})\zeta_{q}^{\prime\,*}(\tau_{2})\zeta_{q}^{*}( \tau_{2})+\zeta_{k}(\tau_{1})\zeta_{k}^{\prime\,*}(\tau_{2})\zeta_{q}(\tau_{1}) \zeta_{q}^{\prime}(\tau_{1})\zeta_{q}^{*}(\tau_{2})\zeta_{q}^{*}(\tau_{2})\right]\] \[-32{\rm Re}\Big{\{}\int_{\tau_{\rm in}}^{\tau}d\tau_{1}g(\tau_{1}) \int_{\tau_{\rm in}}^{\tau_{1}}d\tau_{2}g(\tau_{2})\int_{q_{\rm in}}^{q_{\rm end }}dq\,q^{2}\zeta_{k}(\tau)^{2}\] \[\left[\zeta_{k}^{\prime\,*}(\tau_{1})\zeta_{k}^{*}(\tau_{2})\zeta_ {q}(\tau_{1})^{2}\zeta_{q}^{\prime\,*}(\tau_{2})\zeta_{q}^{*}(\tau_{2})+\zeta_{k} ^{*}(\tau_{1})\zeta_{k}^{\prime\,*}(\tau_{2})\zeta_{q}^{\prime
Manipulations analogous to those discussed in the previous point allow one to combine the two integrals. We find
\[\Delta P_{\rm 1st}(k,\tau)=-64\int_{\tau_{\rm in}}^{\tau}d\tau_{1}g( \tau_{1})\int_{\tau_{\rm in}}^{\tau_{1}}d\tau_{2}g(\tau_{2}) \int_{q_{\rm in}}^{q_{\rm end}}dq\,q^{2}\left\{{\rm Im}\left[\zeta_{k}^{*}( \tau)\zeta_{k}(\tau_{1})\right]{\rm Im}\left[\zeta_{k}(\tau)\zeta_{k}^{*}(\tau_ {2})^{2}\zeta_{q}(\tau_{1})\zeta_{k}^{\prime\,*}(\tau_{2})\zeta_{q}^{\prime}( \tau_{1})\right]\right.\] \[+\left.{\rm Im}\left[\zeta_{k}^{*}(\tau)\zeta_{k}^{\prime}(\tau_{1 })\right]{\rm Im}\left[\zeta_{k}(\tau)\zeta_{q}(\tau_{1})^{2}\zeta_{k}^{*}( \tau_{2})\zeta_{q}^{*}(\tau_{2})\zeta_{q}^{\prime\,*}(\tau_{2})\right]\right\}. \tag{73}\]
Again, since we are interested in modes \(k\) that are much smaller than the USR-enhanced ones, and are therefore super-horizon at the time of the USR phase, the contribution within the curly brackets in the first line vanishes thanks to eq. (71). This leaves us with \[\Delta P_{\rm 1st}(k,\tau)=-64\int_{\tau_{\rm in}}^{\tau}d\tau_{1}g(\tau_{1})\int_{\tau_{\rm in}}^{\tau_{1}}d\tau_{2}g(\tau_{2})\int_{q_{\rm in}}^{q_{\rm end}}dq\,q^{2}\,{\rm Im}\left[\zeta_{k}^{*}(\tau)\zeta_{k}^{\prime}(\tau_{1})\right]{\rm Im}\left[\zeta_{k}(\tau)\zeta_{q}(\tau_{1})^{2}\zeta_{k}^{*}(\tau_{2})\zeta_{q}^{*}(\tau_{2})\zeta_{q}^{\prime\,*}(\tau_{2})\right]. \tag{74}\]
* **2nd order in time derivatives of the long mode.** Analogous manipulations give
\[\Delta P_{\rm 2nd}(k,\tau) \equiv-32\int_{\tau_{\rm in}}^{\tau}d\tau_{1}\epsilon(\tau_{1})\epsilon_{2}^{\prime}(\tau_{1})a^{2}(\tau_{1})\int_{\tau_{\rm in}}^{\tau_{1}}d\tau_{2}\epsilon(\tau_{2})\epsilon_{2}^{\prime}(\tau_{2})a^{2}(\tau_{2})\int_{q_{\rm in}}^{q_{\rm end}}dq\,q^{2}\] \[\times{\rm Im}\left[\zeta_{k}^{*}(\tau)\zeta_{k}^{\prime}(\tau_{1})\right]{\rm Im}\left[\zeta_{k}(\tau)\zeta_{q}(\tau_{1})\zeta_{q}(\tau_{1})\zeta_{k}^{\prime\,*}(\tau_{2})\zeta_{q}^{*}(\tau_{2})\zeta_{q}^{*}(\tau_{2})\right]. \tag{75}\]
We stress that the only approximation employed so far is to take the external momentum to be much smaller than the one in the loop, i.e. \(k\ll q\), which is justified in the presence of a large hierarchy between the CMB and the USR scales.
### Loop correction at any scales
It will be useful in the following to remove the assumption that the external momentum is much smaller than the modes in the loop, i.e. the large separation of scales \(k\ll q\). Starting again from eqs. (54,55,56), we can proceed with analogous steps as in the previous section and define
\[X_{1} \equiv\zeta_{k}^{*}(\tau)\zeta_{k}^{\prime}(\tau_{1})\,, Y_{1} \equiv\zeta_{k}(\tau)\zeta_{k-q}(\tau_{1})\zeta_{q}(\tau_{1})\zeta_{k-q}^{*}( \tau_{2})\zeta_{k}^{\prime\,*}(\tau_{2})\zeta_{q}^{*}(\tau_{2})\,,\] \[X_{2} \equiv\zeta_{k}^{*}(\tau)\zeta_{k}^{\prime}(\tau_{1})\,, Y_{2} \equiv\zeta_{k}(\tau)\zeta_{k-q}(\tau_{1})\zeta_{q}(\tau_{1})\zeta_{k}^{*}(\tau_ {2})\zeta_{q}^{*}(\tau_{2})\zeta_{q}^{*}(\tau_{2})+\zeta_{k-q}^{*}(\tau_{2}) \zeta_{q}^{\prime\,*}(\tau_{2})\right],\] \[X_{3} \equiv\zeta_{k}^{*}(\tau)\zeta_{k}(\tau_{1})\,, Y_{3} \equiv\zeta_{k}(\tau)\zeta_{k-q}^{*}(\tau_{2})\zeta_{k}^{*}(\tau_{2})\zeta_{q}^{*} (\tau_{2})\left[\zeta_{k-q}^{*}(\tau_{1})\zeta_{q}(\tau_{1})+\zeta_{k-q}(\tau _{1})\zeta_{q}^{\prime}(\tau_{1})\right.\] \[X_{4} \equiv\zeta_{k}^{*}(\tau)\zeta_{k}(\tau_{1})\,, Y_{4} \equiv\zeta_{k}(\tau)\zeta_{k-q}^{\prime}(\tau_{1})\zeta_{k}^{*}(\tau_{2})\left[ \zeta_{q}^{*}(\tau_{2})\zeta_{k-q}^{\prime\,*}(\tau_{2})\right]\] \[X_{5} \equiv\zeta_{k}^{*}(\tau)\zeta_{k}(\tau_{1})\,, Y_{5} \equiv\zeta_{k}(\tau)\zeta_{k-q}^{*}(\tau_{2})\zeta_{k}^{*}(\tau_{2})\zeta_{q}^{*} (\tau_{2})\left[\zeta_{q}(\tau_{1})\zeta_{k-q}^{\prime}(\tau_{1})+\zeta_{k-q}( \tau_{1})\zeta_{q}^{\prime}(\tau_{1})\right] \tag{76}\]
in such a way that \(\Delta P\equiv\Delta P_{1}+\Delta P_{2}\) can be written in the schematic form
* **Generic loop correction at any scale.** \[\Delta P(k,\tau)\equiv-16\int_{\tau_{\rm in}}^{\tau}d\tau_{1}g(\tau_{1})\int_{ \tau_{\rm in}}^{\tau_{1}}d\tau_{2}g(\tau_{2})\int_{q_{\rm in}}^{q_{\rm end}}dq\,q^ {2}\,\int_{-1}^{1}d(\cos\theta)\times\sum_{i=1}^{5}{\rm Im}\left(X_{i}\right) {\rm Im}\left(Y_{i}\right),\] (77)
thanks to the identity in eq. (69) and where we again introduced \(\epsilon(\tau)\epsilon_{2}^{\prime}(\tau)a^{2}(\tau)\equiv g(\tau)\). This expression is much more intricate than the one obtained in the limit of a large hierarchy of scales between the mode \(k\) and the USR loop momenta. It will allow us to assess the loop correction to the power spectrum also at the USR scales, where the peak of the power spectrum is generated.
## IV Time integration beyond the instantaneous transition and at any scales
### Loop evaluation at the CMB scales
Let us try to simplify the structure of eq. (54) in light of the approximations introduced so far. First of all, let us write eq. (54) in the form
\[\mathcal{P}(k)=\frac{H^{2}}{8\pi^{2}\epsilon_{\text{ref}}}\left\{1+\lim_{\tau \to 0^{-}}\frac{4\epsilon_{\text{ref}}k^{3}}{H^{2}(4\pi)^{2}}\Delta P_{\text{1st }}(k,\tau)+\lim_{\tau\to 0^{-}}\frac{4\epsilon_{\text{ref}}k^{3}}{H^{2}(4\pi)^{2}} \Delta P_{\text{2nd}}(k,\tau)\right\}\,, \tag{78}\]
where we used the slow-roll approximation for the first term in eq. (54) given that \(k\) is of the order of the CMB pivot scale. We focus on the leading correction given by \(\Delta P_{\text{1st}}(k,\tau)\). Using the number of \(e\)-folds as the time variable, we find that it can be written in the compact form (cf. our definition in eq. (1))
\[\Delta\mathcal{P}_{\text{1-loop}}(k_{*})\equiv\lim_{\tau\to 0^{-}}\frac{4 \epsilon_{\text{ref}}k^{3}}{H^{2}(4\pi)^{2}}\Delta P_{\text{1st}}(k,\tau)=\] \[32\left(\frac{H^{2}}{8\pi^{2}\epsilon_{\text{ref}}}\right) \int_{N_{\text{in}}-\Delta N}^{N_{\text{end}}+\Delta N}dN_{1}\frac{d\eta}{dN}( N_{1})\int_{N_{\text{in}}-\Delta N}^{N_{1}}dN_{2}\bar{\epsilon}(N_{2})\frac{d\eta}{ dN}(N_{2})e^{3(N_{2}-N_{\text{in}})}\int\frac{d\bar{q}}{\bar{q}^{4}}\text{Im} \left[\bar{\zeta}_{q}(N_{1})^{2}\bar{\zeta}_{q}^{*}(N_{2})\frac{d\bar{\zeta}_ {q}^{*}}{dN}(N_{2})\right] \tag{79}\]
where we performed the following manipulations:
_i)_ We use the approximation \(\epsilon_{2}(N)\approx-2\eta(N)\). This is because in the relevant range of \(N\) over which we integrate \(\epsilon\ll 1\) while \(\eta=O(1)\), cf. the right panel of fig. 2.
_ii)_ We define \(\bar{q}\equiv q/a_{\text{in}}H\). Furthermore, we use the two relations
\[\frac{a(N_{1})H}{k}=e^{N_{1}-N_{k}}\,,\quad\text{ with: }\quad a(N_{k})H=k\,,\qquad\text{ and }\quad\frac{a(N_{2})H}{q}=\frac{e^{N_{2}-N_{\text{in}}}}{\bar{q}}\,. \tag{80}\]
_iii)_ We introduce the short-hand notation
\[\bar{\zeta}_{q}(N)\equiv\frac{\epsilon_{\text{ref}}^{1/2}q^{3/2}\zeta_{q}(N) }{H}\,. \tag{81}\]
The virtue of this definition is that \(\bar{\zeta}_{q}(N)\) is precisely the quantity we compute numerically by solving the M-S equation, cf. the right panel of fig. 3. Furthermore, it should be noted that the definition in eq. (81) is automatically invariant under the rescaling in eq. (10). The same comment applies to the definition of \(\bar{q}\) and the ratios in eq. (80). Consequently, an expression entirely written in terms of barred quantities is automatically invariant under the rescaling in eq. (10).
_iv)_ Importantly, in the derivation of eq. (79) we use (cf. appendix A)
\[\text{Im}\!\left[\bar{\zeta}_{k}^{*}(N)\frac{d\bar{\zeta}_{k}}{dN}(N_{1}) \right]\simeq\text{Im}\!\left[\bar{\zeta}_{k}^{*}(N_{1})\frac{d\bar{\zeta}_{k }}{dN}(N_{1})\right]=-\frac{\bar{k}^{3}}{4\bar{\epsilon}(N_{1})}e^{3(N_{\text{ in}}-N_{1})}\,, \tag{82}\]
with \(\epsilon(N)\) given by eq. (32) for generic \(\delta N\). This is because \(N_{1,2}\) vary at around \(N_{\text{end}}\), and in this time interval modes with comoving wavenumbers \(k\approx k_{*}\) are way outside the horizon and stay constant. For the very same reason, we also use the slow-roll approximation
\[\bar{\zeta}_{k}(N_{1})\bar{\zeta}_{k}^{*}(N_{2})=\frac{1}{4}\,. \tag{83}\]
_v)_ The range of integration in eq. (79) is as follows. In the case of a smooth transition, we take \(N_{1}\in[N_{\text{in}}-\Delta N,N_{\text{end}}+\Delta N]\) and \(N_{2}\in[N_{\text{in}}-\Delta N,N_{1}]\), where \(\Delta N\) should be large enough to complete the SR/USR/SR transition (that is, \(\Delta N\gtrsim\delta N\)). In the limit of instantaneous transition, we set \(N_{1}=N_{2}=N_{\text{end}}\), which corresponds to considering the dominant contribution, given by the \(\delta\) function at \(N_{\rm end}\) in eq. (31). Moreover, we include a factor \(1/2\) since, with respect to the integration over \(N_{2}\), the argument of the \(\delta\) function in eq. (31) picks up the upper limit of the integration interval at \(N_{\text{end}}\). The integration over \(q\), on the contrary, is restricted to \(\bar{q}\in[1,e^{\Delta N_{\text{USR}}}]\). A schematic numerical implementation of the nested integration in eq. (79) is sketched below.
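The following is a minimal sketch of how the nested \(N_{1}\), \(N_{2}\), \(\bar{q}\) integration of eq. (79) could be organized numerically. The mode functions \(\bar{\zeta}_{q}(N)\), their derivatives and the background functions are assumed to be supplied externally (for instance by a Mukhanov–Sasaki solver such as the one sketched in section II.2); the function below only encodes the integration structure and the weights appearing in eq. (79), and all argument names are placeholders.

```python
import numpy as np

def one_loop_cmb(zeta_bar, dzeta_bar_dN, eps_bar, deta_dN, A_tree,
                 N_in, N_end, DN_pad, DN_USR, n_N=201, n_q=201):
    """Nested trapezoidal evaluation of eq. (79).

    zeta_bar(q, N), dzeta_bar_dN(q, N): complex mode functions (vectorized in q)
    eps_bar(N), deta_dN(N): background functions epsilon(N)/eps_ref and d(eta)/dN (vectorized in N)
    A_tree: the tree-level amplitude H^2/(8 pi^2 eps_ref)
    """
    N1s = np.linspace(N_in - DN_pad, N_end + DN_pad, n_N)
    qs  = np.exp(np.linspace(0.0, DN_USR, n_q))            # qbar in [1, e^{Delta N_USR}]
    outer = np.zeros(n_N)
    for i, N1 in enumerate(N1s):
        N2s = N1s[: i + 1]                                  # inner time integral: N2 <= N1
        # innermost q-integral, evaluated for every N2
        inner_q = np.array([
            np.trapz(np.imag(zeta_bar(qs, N1)**2
                             * np.conj(zeta_bar(qs, N2))
                             * np.conj(dzeta_bar_dN(qs, N2))) / qs**4, qs)
            for N2 in N2s])
        w2 = eps_bar(N2s) * deta_dN(N2s) * np.exp(3.0 * (N2s - N_in))
        outer[i] = deta_dN(N1) * (np.trapz(w2 * inner_q, N2s) if len(N2s) > 1 else 0.0)
    return 32.0 * A_tree * np.trapz(outer, N1s)
```

In the instantaneous limit \(\delta N\to 0\), the output of such an implementation should reduce to the analytical expressions derived in the next subsection, which provides a convenient cross-check.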
#### iv.1.1 The instantaneous transition
We consider the instantaneous limit (dubbed \(\delta N\to 0\) in the following) of eq. (79). We find
\[\lim_{\delta N\to 0}\Delta\mathcal{P}_{\rm 1-loop}(k_{*})=\left(\frac{H^{2}}{8\pi^{2}\epsilon_{\rm ref}}\right)\eta_{\rm II}^{2}\left(\frac{k_{\rm end}}{k_{\rm in}}\right)^{-2\eta_{\rm II}+3}16\,\lim_{\delta N\to 0}\int_{1}^{e^{\Delta N_{\rm USR}}}\frac{d\bar{q}}{\bar{q}^{4}}|\bar{\zeta}_{q}(N_{\rm end})|^{2}{\rm Im}\left[\bar{\zeta}_{q}(N_{\rm end})\frac{d\bar{\zeta}_{q}^{*}}{dN}(N_{\rm end})\right]\,. \tag{84}\]
This expression can be further simplified using (cf. eq. (122))
\[{\rm Im}\left[\bar{\zeta}_{q}(N_{\rm end})\frac{d\bar{\zeta}_{q}^{*}}{dN}(N_ {\rm end})\right]=\frac{\bar{q}^{3}}{4}e^{(2\eta_{\rm II}-3)(N_{\rm end}-N_{ \rm in})}=\frac{\bar{q}^{3}}{4}\left(\frac{k_{\rm end}}{k_{\rm in}}\right)^{ 2\eta_{\rm II}-3}\,, \tag{85}\]
so that we write
\[\lim_{\delta N\to 0}\Delta\mathcal{P}_{\rm 1-loop}(k_{*})=\left(\frac{H^{2} }{8\pi^{2}\epsilon_{\rm ref}}\right)\,4\eta_{\rm II}^{2}\,\lim_{\delta N\to 0 }\int_{1}^{e^{\Delta N_{\rm USR}}}\frac{d\bar{q}}{\bar{q}}|\bar{\zeta}_{q}(N_{ \rm end})|^{2}\,. \tag{86}\]
We remark that this expression is valid for generic values of \(\eta_{\rm II}\) during USR.
We consider now the computation of the last integral. The factor \(\bar{\zeta}_{q}\) grows exponentially during the USR phase. In the case of sub-horizon modes, we have \(\bar{\zeta}_{q}(N)\sim e^{-(1-\eta_{\rm II})N}\) while in the case of super-horizon modes we find \(\bar{\zeta}_{q}(N)\sim e^{-(3-2\eta_{\rm II})N}\) (cf. appendix A). However, the precise estimate of the integral in eq. (86) is complicated by the fact that curvature modes \(\bar{\zeta}_{q}\) with \(\bar{q}\in[1,\exp(\Delta N_{\rm USR})]\) are neither sub- nor super-horizon but they exit the horizon during the USR phase, thus making the analytical estimate of the argument of their exponential growth more challenging.
The situation simplifies if we consider some special values of \(\eta_{\rm II}\). We consider the case \(\eta_{\rm II}=3\) (that is, \(\epsilon_{2}=-6\); note that this is also the case studied in ref. [31]). In this case, everything can be computed analytically. We find the scaling
\[\lim_{\delta N\to 0}\int_{1}^{e^{\Delta N_{\rm USR}}}\frac{d\bar{q}}{\bar{q}}| \bar{\zeta}_{q}(N_{\rm end})|^{2}\approx\frac{e^{6\Delta N_{\rm USR}}}{4}(1+ \Delta N_{\rm USR})=\frac{1}{4}\left(\frac{k_{\rm end}}{k_{\rm in}}\right)^{6 }\left[1+\log\left(\frac{k_{\rm end}}{k_{\rm in}}\right)\right]\,, \tag{87}\]
which becomes more and more accurate for larger \(k_{\rm end}/k_{\rm in}\). The final result is
* **Leading one-loop correction \(\Delta\mathcal{P}_{\rm 1-loop}\) at CMB scales in the instantaneous SR/USR/SR transition.**
\[\lim_{\delta N\to 0}\Delta\mathcal{P}_{\rm 1-loop}(k_{*})\approx\left(\frac{H^{2} }{8\pi^{2}\epsilon_{\rm ref}}\right)\eta_{\rm II}^{2}\left(\frac{k_{\rm end}} {k_{\rm in}}\right)^{6}\left[1+\log\left(\frac{k_{\rm end}}{k_{\rm in}} \right)\right]\,,\qquad\mbox{with}\ \ \eta_{\rm II}=3,\ \eta_{\rm III}=0 \tag{88}\]
which perfectly agrees with the findings of ref. [31] in the same limit.
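As a quick numerical illustration of eq. (88), the sketch below plugs in the CMB normalization \(H^{2}/8\pi^{2}\epsilon_{\rm ref}=2.1\times 10^{-9}\) used in section IV.1.2 and shows how the relative one-loop correction grows with the duration of the USR phase.

```python
import numpy as np

A_tree = 2.1e-9        # H^2/(8 pi^2 eps_ref), tree-level amplitude at CMB scales
eta_II = 3.0           # the case eta_II = 3 (epsilon_2 = -6) of eq. (88)

for dN_usr in (2.0, 2.5, 3.0):
    ratio = np.exp(dN_usr)                                        # k_end / k_in = e^{Delta N_USR}
    dP    = A_tree * eta_II**2 * ratio**6 * (1.0 + np.log(ratio)) # eq. (88)
    print(f"Delta N_USR = {dN_usr:3.1f}   Delta P_1-loop ~ {dP:8.3f}")
```

With these illustrative numbers the relative correction becomes of order unity for \(\Delta N_{\rm USR}\approx 2.7{-}3\), in line with the discussion of perturbativity in section IV.1.2.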
The above result has a number of limitations, which we address separately in the following subsections:
* **Dynamics during USR, section IV.1.2.** We relax the assumption \(\eta_{\rm II}=3\), while still taking \(\delta N\to 0\) and \(\eta_{\rm III}=0\).
* **Dynamics at the SR/USR/SR transition, section IV.1.3.** We consider \(\delta N\neq 0\), with generic \(\eta_{\rm II}\) but \(\eta_{\rm III}=0\). Considering a non-zero value of \(\delta N\) is very important because it corresponds to a more realistic smooth SR/USR/SR transition, as opposed to the instantaneous limit with \(\delta N=0\).
#### iv.1.2 Dynamics during USR
We compute eq. (86) for generic values of \(\eta_{\rm II}\), still keeping \(\delta N\to 0\) and \(\eta_{\rm III}=0\). From the computation of the tree-level power spectrum (cf. section II.2 and fig. 3) we define
\[\frac{\mathcal{P}_{\rm USR}}{\mathcal{P}_{\rm CMB}}\equiv\frac{\mathcal{P}( \bar{k}_{\rm max})}{\mathcal{P}(\bar{k}\ll 1)}\,, \tag{89}\]
where \(\bar{k}_{\rm max}\) represents the position of the maximum of \(\mathcal{P}(\bar{k})\) after the growth due to the USR dynamics. We compare in the left panel of fig. 4 contours of constant \(\mathcal{P}_{\rm USR}/\mathcal{P}_{\rm CMB}\) (dashed blue) and constant \(\lim_{\delta N\to 0}\Delta\mathcal{P}_{\rm 1-loop}(k_{*})\) (solid
red). We take \(H^{2}/8\pi^{2}\epsilon_{\rm ref}=2.1\times 10^{-9}\). Our analysis shows that enhancements \({\cal P}_{\rm USR}/{\cal P}_{\rm CMB}\gtrsim 10^{8}\) are barely compatible with the perturbativity condition \(\lim_{\delta N\to 0}\Delta{\cal P}_{\rm 1-loop}(k_{*})<1\), which roughly means "loops \(<\) tree level". The region \(\lim_{\delta N\to 0}\Delta{\cal P}_{\rm 1-loop}(k_{*})>1\) is hatched in red in fig. 4.
We can actually do better and compare with a careful computation of the PBH abundance. The parameters of the dynamics in section II.2 (with \(\eta_{\rm III}=0\) and \(\delta N\to 0\)) are chosen in such a way that the integral
\[f_{\rm PBH}\equiv\frac{\Omega_{\rm PBH}}{\Omega_{\rm CDM}}=\int f_{\rm PBH}(M _{\rm PBH})d\log M_{\rm PBH}\approx 1\,, \tag{90}\]
which means that we get \(\approx 100\%\) of DM in the form of PBHs. More in detail, we tune, for each \(\eta_{\rm II}\), the value of \(\Delta N_{\rm USR}\) so to get \(f_{\rm PBH}\approx 1\); we choose the numerical value of \(k_{\rm in}\) in such a way that the peak of the PBH mass distribution \(f_{\rm PBH}(M_{\rm PBH})\) falls within the interval \(M_{\rm PBH}/M_{\odot}\in[10^{-14},\,10^{-13}]\) in which the condition \(f_{\rm PBH}\approx 1\) is experimentally allowed, the so-called asteroid-mass PBHs [10]. We compute eq. (90) using threshold statistics and including the full non-linear relation between the curvature and the density contrast fields (cf. [56; 57]). The interested reader can find more details on the computation of the abundance in ref. [49] and refs. therein.3 In the right panel of fig. 4 we plot the line defined by the condition \(f_{\rm PBH}\approx 1\). The comparison between the left- and right-hand side of fig. 4 shows that, in order to fulfill the condition \(f_{\rm PBH}\approx 1\), one needs \({\cal P}_{\rm USR}/{\cal P}_{\rm CMB}=O(10^{7})\).4
Footnote 3: It is possible to further improve our analysis by including the presence of primordial non-gaussianity (e.g. [58; 59; 60; 61; 62; 63; 64; 65; 66]). In the case of local non-gaussianity parametrized by a positive non-Gaussian parameter \(f_{\rm NL}\), as expected in the case of USR, we generically expect a larger abundance of PBHs compared to the Gaussian case [67; 68; 69]. This means that, in order to achieve the same abundance of PBHs, one needs a power spectrum with a smaller peak amplitude. This argument implies that the presence of primordial non-Gaussianity will tend to decrease the relevance of the one-loop corrections.
Footnote 4: There is some difference between peak theory and threshold statistics in the computation of the abundance, already present at the Gaussian level (see, e.g., refs. [70; 71]). The approach based on peak theory usually requires slightly smaller values of \({\cal P}_{\rm USR}/{\cal P}_{\rm CMB}\) in order to get the same abundance of PBHs, thus making our findings, based on threshold statistics, even stronger.
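To make the abundance computation described above more tangible, the following is a heavily simplified Gaussian threshold-statistics sketch. It ignores the non-linear relation between curvature and density contrast used in the text; the Gaussian window, the radiation-era factor \(16/81\), the collapse threshold \(\delta_{c}\) and the toy spectrum are standard textbook assumptions rather than values taken from this work, and the conversion of \(\beta\) into \(f_{\rm PBH}\) (redshifting to matter-radiation equality) is omitted.

```python
import numpy as np
from scipy.special import erfc

delta_c = 0.45                                    # assumed collapse threshold

def sigma2(R, P_zeta):
    """Variance of the density contrast smoothed on the comoving scale R."""
    lnk = np.linspace(np.log(1e-2 / R), np.log(1e2 / R), 2000)
    k = np.exp(lnk)
    window2 = np.exp(-(k * R) ** 2)               # Gaussian window (assumption)
    integrand = (16.0 / 81.0) * (k * R) ** 4 * window2 * P_zeta(k)
    return np.trapz(integrand, lnk)

def beta(R, P_zeta):
    """Gaussian mass fraction collapsing at horizon re-entry of the scale R."""
    s = np.sqrt(sigma2(R, P_zeta))
    return 0.5 * erfc(delta_c / (np.sqrt(2.0) * s))

k_peak = 1.0e13                                   # Mpc^-1, assumed peak scale
P_zeta = lambda k: 2.0e-2 * np.exp(-0.5 * np.log(k / k_peak) ** 2)
print(beta(1.0 / k_peak, P_zeta))   # beta is exponentially sensitive to the peak amplitude
```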
We conclude that the condition \(f_{\rm PBH}\approx 1\) lies within the region in which perturbativity is still applicable. This is in contrast with the conclusion reached in refs. [31; 33; 35] in the limit of instantaneous SR/USR/SR transition. The origin of the difference is the more accurate calculation of the PBH abundance performed in our work. In particular, in previous analyses, estimates of \(f_{\rm PBH}\approx 1\) are based on requiring \({\cal P}_{\rm USR}\simeq 10^{-2}\), and on the scaling \(\Delta{\cal P}=(k_{\rm end}/k_{\rm in})^{2\eta_{\rm II}}\) in order to capture the growth of the power spectrum at small scales. However, as explained in section II.2, this scaling does not accurately describe the amplitude of the power spectrum at its peak, see the left panel of fig. 3. Ref. [72] computed one-loop corrections in the limit of instantaneous SR/USR/SR transitions in scenarios with \(\eta_{\rm II}\leq 3\), finding that the perturbativity bound is relaxed for \(\eta_{\rm II}\) smaller than 3. At a qualitative level, similar results are obtained in the left panel of fig. 4. In ref. [72], the perturbativity bound is translated into an upper limit on the power spectrum at the scale \(k_{\rm end}\). However, as explained above, this procedure underestimates the maximum amplitude of the power spectrum, see again the left panel of fig. 3.
Figure 4: _In both panels, we consider a generic USR dynamics with varying \(\eta_{\rm II}\) (\(x\)-axis) and \(\Delta N_{\rm USR}\) (\(y\)-axis). We take \(\eta_{\rm III}=0\) and the instantaneous limit \(\delta N=0\). We plot in solid red contours of constant \(\lim_{\delta N\to 0}\Delta{\cal P}_{\rm 1-loop}(k_{*})\), defined in eq. (79) and computed according to eq. (86) with \(H^{2}/8\pi^{2}\epsilon_{\rm ref}=2.1\times 10^{-9}\). The region hatched in red is defined by the condition \(\lim_{\delta N\to 0}\Delta{\cal P}_{\rm 1-loop}(k_{*})>1\). **Left panel:** We superimpose contours of constant \({\cal P}_{\rm USR}/{\cal P}_{\rm CMB}\) as defined in eq. (89) (dashed blue). **Right panel:** We superimpose the line defined by the condition \(f_{\rm PBH}=1\). Along this line, we get 100% of DM in the form of asteroid-mass PBHs._
#### iv.1.3 Dynamics at the SR/USR/SR transition
We go beyond the instantaneous transition to check if there are cancellations that affect the order-of-magnitude of the result in eq. (88). There are indeed compelling reasons to believe that this is the case, as advocated in refs. [34; 35; 36]. The story goes as follows (the original argument was proposed in ref. [60] in which the role of non-Gaussianity from non-attractor inflation models was considered). From the Hubble parameters in eq. (2) and the background dynamics that follow from the action in eq. (4), it is possible to calculate the potential and its derivatives exactly. Up to the third order in field derivatives, we find (see also ref. [73])
\[V(\phi) =H^{2}(3-\epsilon)\,, \tag{91}\] \[V^{\prime}(\phi) =\frac{H^{2}}{\sqrt{2}}\epsilon^{1/2}\left(6-2\epsilon+\epsilon_{ 2}\right)\,,\] (92) \[V^{\prime\prime}(\phi) =H^{2}\left(6\epsilon-2\epsilon^{2}-\frac{3\epsilon_{2}}{2}+ \frac{5\epsilon\epsilon_{2}}{2}-\frac{\epsilon_{2}^{2}}{4}-\frac{\epsilon_{2} \epsilon_{3}}{2}\right)\,,\] (93) \[V^{\prime\prime\prime}(\phi) =\frac{H^{2}}{2\sqrt{2\epsilon}}\left[-8\epsilon^{3}+6\epsilon^{ 2}(4+3\epsilon_{2})-\epsilon\epsilon_{2}(18+6\epsilon_{2}+7\epsilon_{3})+ \epsilon_{2}\epsilon_{3}(3+\epsilon_{2}+\epsilon_{3}+\epsilon_{4})\right]\] (94) \[=\frac{1}{2\sqrt{2\epsilon}}\left\{H^{2}\left[\frac{\ddot{ \epsilon_{2}}}{H^{2}}+(3+\epsilon_{2})\frac{\dot{\epsilon_{2}}}{H}\right]+O( \epsilon)\right\}\,, \tag{95}\]
where in eq. (95) we expanded in the parameter \(\epsilon\) and wrote \(\epsilon_{3,4}\) in terms of \(\epsilon_{2}\). Consider the flat gauge in which curvature perturbations are entirely encoded into field fluctuations \(\delta\phi\) by means of the relation \(\zeta=H\delta\phi/\dot{\phi}=-\delta\phi/\sqrt{2\epsilon}\). In this gauge, the interactions come from Taylor-expanding the quadratic action in field fluctuations and, at the cubic order, one expects
\[\mathcal{L}_{3}\supset\frac{a^{3}}{6}V^{\prime\prime\prime}\delta\phi^{3}=-\frac{a^{3}\epsilon}{3}(\sqrt{2\epsilon}V^{\prime\prime\prime})\zeta^{3}=-\frac{a^{3}\epsilon}{6}\left\{H^{2}\left[\frac{\ddot{\epsilon_{2}}}{H^{2}}+(3+\epsilon_{2})\frac{\dot{\epsilon_{2}}}{H}\right]+O(\epsilon)\right\}\zeta^{3}\,. \tag{96}\]
As shown in ref. [59], the above interaction agrees (modulo a surface term) with eq. (42) if we integrate by parts
\[\int d^{4}x\,\frac{\epsilon\dot{\epsilon_{2}}}{2}a^{3}\dot{\zeta}\zeta^{2} \quad\rightarrow\quad-\int d^{4}x\,\frac{1}{6}\frac{d}{dt}\left(\epsilon\dot{ \epsilon_{2}}a^{3}\right)\zeta^{3}=-\int d^{4}x\,\frac{a^{3}\epsilon}{6}H^{2} \left[\frac{\ddot{\epsilon_{2}}}{H^{2}}+(3+\epsilon_{2})\frac{\dot{\epsilon_ {2}}}{H}\right]\zeta^{3}\,, \tag{97}\]
where in the last step we used the exact identity
\[\frac{d}{dt}\left(\epsilon\dot{\epsilon_{2}}a^{3}\right)=a^{3}\epsilon H^{2}\left[\frac{\ddot{\epsilon_{2}}}{H^{2}}+(3+\epsilon_{2})\frac{\dot{\epsilon_{2}}}{H}\right]\,. \tag{98}\]
The cubic interaction in eq. (97) agrees with eq. (96) up to \(\epsilon\)-suppressed terms. Rewriting the interaction as in eq. (96) is quite instructive. From eq. (98), it seems plausible that drastic variations in time of \(\epsilon_{2}\) could enhance the cubic interaction. However, eq. (96) shows that these interactions are ultimately controlled by \(V^{\prime\prime\prime}\) so that in the case with a smooth SR/USR/SR transition in which \(V^{\prime\prime\prime}\) is expected to be "small", there must be cancellations at work within the combination in eq. (98) so that the relevant coupling in eq. (96) reduces to the term that is SR suppressed. This is the main argument that was put forth in refs. [34; 35; 36].
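As a quick cross-check of the algebra, the exact identity in eq. (98) can be verified symbolically. The sketch below only assumes the definitions \(\dot{a}=aH\) and \(\epsilon_{2}=\dot{\epsilon}/(\epsilon H)\); no slow-roll expansion is used.

```python
import sympy as sp

t = sp.symbols('t')
H = sp.Function('H')(t)            # Hubble rate, left generic
a = sp.Function('a')(t)            # scale factor, with da/dt = a*H imposed below
eps = sp.Function('epsilon')(t)    # first Hubble-flow parameter

eps2 = sp.diff(eps, t) / (eps * H)   # second Hubble-flow parameter, d ln(eps)/dN

lhs = sp.diff(eps * sp.diff(eps2, t) * a**3, t)
rhs = a**3 * eps * H**2 * (sp.diff(eps2, t, 2) / H**2
                           + (3 + eps2) * sp.diff(eps2, t) / H)

check = sp.simplify((lhs - rhs).subs(sp.Derivative(a, t), a * H))
print(check)   # 0 -> eq. (98) holds identically, for any H(t)
```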
We shall elaborate further on this point. First of all, let us clarify what "\(V^{\prime\prime\prime}\) small" means. We rewrite eq. (95) as follows (we omit the \(O(\epsilon)\) terms and, for clarity's sake, we write explicitly the reduced Planck mass)
\[\frac{V^{\prime\prime\prime}}{H}=\left(\frac{H}{M_{\rm Pl}}\right)\frac{1}{2 \sqrt{2\epsilon}}\left[\frac{\ddot{\epsilon_{2}}}{H^{2}}+(3+\epsilon_{2}) \frac{\dot{\epsilon_{2}}}{H}\right]\,. \tag{99}\]
On the left-hand side, the quantity \(V^{\prime\prime\prime}/H\) has the dimension of a coupling. Consequently, imposing the condition \(V^{\prime\prime\prime}/H<1\) corresponds to a weak coupling regime while \(V^{\prime\prime\prime}/H>1\) corresponds to a strongly coupled one. Said differently, from the perspective of the right-hand side of eq. (99), the condition \(V^{\prime\prime\prime}/H>1\) corresponds to a situation in which the a-dimensional factor in front of \(H/\bar{M}_{\rm Pl}\) becomes so large that it overcomes the natural suppression given by \(H/\bar{M}_{\rm Pl}\ll 1\). In the left panel of fig. 5, we compute the ratio \(V^{\prime\prime\prime}/H\) for two benchmark SR/USR/SR dynamics with different values of \(\delta N\). In the case in which \(\delta N\to 0\) (sharp transition), we observe that \(V^{\prime\prime\prime}/H\) dangerously grows towards the strongly coupled regime while in the case of a smooth transition it safely takes \(O(\ll 1)\) values. As anticipated at the beginning of this section, this argument confirms that in the case of a smooth transition we expect a reduction in the size of the trilinear interaction controlled by the factor in eq. (98).
With this motivation in mind, we go back to the analysis in section IV.1.1 and we perform the following calculation.
We compute numerically the integral in eq. (79) in order to check the validity of the scaling in eq. (88) beyond the limit of instantaneous transition. We define the quantity
\[\begin{split}&\mathcal{J}_{\delta N}(\eta_{\Pi},\Delta N_{\rm USR}) \equiv\Delta\mathcal{P}_{\rm 1-loop}(k_{*})/\mathcal{P}_{\rm tree}(k_{*})=\\ &\qquad 32\int_{N_{\rm in}-\Delta N}^{N_{\rm end}+\Delta N}dN_{1} \frac{d\eta}{dN}(N_{1})\int_{N_{\rm in}-\Delta N}^{N_{1}}dN_{2}\bar{\epsilon}( N_{2})\frac{d\eta}{dN}(N_{2})e^{3(N_{2}-N_{\rm in})}\int_{1}^{e^{\Delta N_{\rm USR}}} \frac{d\bar{q}}{\bar{q}^{4}}{\rm Im}\left[\bar{\zeta}_{q}(N_{1})^{2}\bar{\zeta }_{q}^{*}(N_{2})\frac{d\bar{\zeta}_{q}^{*}}{dN}(N_{2})\right]\,,\end{split} \tag{100}\]
that we can directly compare, in the case \(\eta_{\Pi}=3\), with \(\eta_{\Pi}^{2}(k_{\rm end}/k_{\rm in})^{6}[1+\log(k_{\rm end}/k_{\rm in})]\) in eq. (88) using the fact that \(k_{\rm end}/k_{\rm in}=e^{\Delta N_{\rm USR}}\). First, we set \(\delta N\) to a very small number, in order to mimic the limit \(\delta N\to 0\), and evaluate \(\mathcal{J}_{\delta N}(3,\Delta N_{\rm USR})\) as function of \(\Delta N_{\rm USR}\).
The comparison is shown in the left panel of fig. 6. We find an excellent agreement in particular for large \(\Delta N_{\rm USR}\). This is expected, since the approximation in eq. (87) is more accurate for
Figure 5: **Left panel:** _Graph of \(V^{\prime\prime\prime}/H\) as function of the background field value \(\phi\) for two representative dynamics with, respectively, \(\delta N=0.025\) (dashed black) and \(\delta N=0.4\) (solid black). Starting from the dynamics defined as in section II.2, we compute the potential by means of the reverse engineering approach described in ref. [49]. The values \(V_{*}\) and \(H_{*}\) of, respectively, the potential and the Hubble rate at CMB scales are chosen in such a way that both dynamics are consistent with CMB observations (namely, \(V_{*}\simeq 3\times 10^{-9}\) and \(H_{*}\simeq 3\times 10^{-5}\) with the reduced Planck mass set to 1). On the right (left) side of the field value \(\phi=\phi_{\rm in}\), \(V^{\prime\prime\prime}/H\) is negative (positive). **Right panel:** _Left-side \(y\)-axis: time evolution of the curvature modes \(|\bar{\zeta}_{q}(N)|\) for \(\bar{q}=2\) in the case \(\delta N=0.025\) (dashed black line) and \(\delta N=0.4\) (solid black line). Right-side \(y\)-axis: profile of \(\eta\) in the case \(\delta N=0.025\) (dashed red line) and \(\delta N=0.4\) (solid red line). The region shaded in red highlights the difference between the sharp and the smooth transition in terms of \(\eta\): in the case of a sharp transition, the curvature mode has more time to grow under the effect of the negative friction phase implied by the condition \(\eta>3/2\)._
Figure 6: **Left panel:** _Comparison between the value of the full integral in eq. (100) and the analytical estimate in eq. (88). To mimic the instantaneous transition we take \(\delta N=0.025\). **Right panel:** _We plot the ratio \(\mathcal{J}_{\delta N}(3,\Delta N_{\rm USR})\) as function of \(\delta N\). In both figures we take \(\eta_{\Pi}=3\)._
larger \(k_{\rm end}/k_{\rm in}\). Then, we set \(\Delta N_{\rm USR}=3\) and compare the value of \({\cal J}_{\delta N\to 0}(3,3)\) with \({\cal J}_{\delta N}(3,3)\) as function of \(\delta N\). We plot the ratio \({\cal J}_{\delta N}(3,3)/{\cal J}_{\delta N\to 0}(3,3)\) in the right panel of fig. 6.
Realistic single-field models that feature the presence of a phase of USR dynamics typically have \(\delta N=0.4\div 0.5\) (cf., e.g., refs. [12; 48]). This means that, according to our result in the right panel of fig. 6, we expect that in realistic models the size of the loop correction gets reduced by one order of magnitude with respect to what is obtained in the limit of instantaneous SR/USR/SR transition. This confirms the intuition presented in ref. [34].
It should be noted, however, as evident from our discussion in section II.2, that in the case of smooth SR/USR/SR transition the amplitude of the power spectrum gets reduced with respect to the \(\delta N\to 0\) limit (cf. the left panel of fig. 3). The origin of this effect becomes evident if we consider the right panel of fig. 5. In this figure, we plot the time evolution of the curvature mode \(|\tilde{\zeta}_{q}|\) with \(\bar{q}=2\) in the two cases of a sharp and smooth transition (dashed and solid lines, respectively - see caption for details). In the case of a sharp transition, the curvature mode experiences a longer USR phase, and its final amplitude is larger with respect to the case of a smooth transition. As a consequence, therefore, we expect that the smaller size of the loop correction will be, at least partially, compensated by the fact that finite \(\delta N\) also reduces the amplitude of the tree-level power spectrum. In order to quantify this information, we repeat the analysis done in section IV.1.2 but now for finite \(\delta N\). We plot our result in fig. 7. For definiteness, we consider the benchmark value \(\delta N=0.4\) while we keep \(\eta_{\rm II}\) and \(\Delta N_{\rm USR}\) generic as in fig. 4.
Our numerical analysis mirrors the previous intuition. The perturbativity bound (the region hatched in red corresponds to the condition \(\Delta{\cal P}_{\rm 1-loop}(k_{\star})>1\)) gets weaker because of the partial cancellation illustrated in the right panel of fig. 6. However, as previously discussed, the drawback is that taking \(\delta N\neq 0\) also reduces the peak amplitude of the power spectrum. Consequently, the condition \(f_{\rm PBH}=1\) requires, for fixed \(\eta_{\rm II}\), larger \(\Delta N_{\rm USR}\).
As for the limit of instantaneous transition, the condition \(f_{\rm PBH}=1\) does not violate the perturbativity bound since the two above-mentioned effects nearly compensate each other. However, our analysis reveals an interesting aspect: modelling the SR/USR/SR transition (and, in particular, the final USR/SR one) beyond the instantaneous limit reduces the impact of the loop correction but, at the same time, lowers the peak amplitude of the tree-level power spectrum, which must be compensated by a larger \(\Delta N_{\rm USR}\), see fig. 8. As illustrated in fig. 7, both these effects must be considered together in order to properly quantify the impact of loop corrections and the consequent perturbativity bound.
This is an interesting point. Refs. [34; 35; 36] argue that if one goes beyond the limit of instantaneous transition then the loop correction to the CMB power spectrum becomes effectively harmless. Technically speaking, in our analysis the role of the parameter \(-6<h<0\) that in [34; 35; 36] (see also ref. [60]) controls the sharpness of the transition is played by our parameter \(\delta N\) (with \(h\to-6\) that corresponds to our \(\delta N\to 0\) and \(h\to 0\) that corresponds to increasing values of \(\delta N\)).
Figure 7: _We consider a generic USR dynamics with varying \(\eta_{\rm II}\) (x-axis) and \(\Delta N_{\rm USR}\) (y-axis). We take \(\eta_{\rm III}=0\) and the smooth limit \(\delta N=0.4\). The region hatched in red corresponds to \(\Delta{\cal P}_{\rm 1-loop}(k_{\star})>1\). Along the line defined by the condition \(f_{\rm PBH}=1\), we get 100% of DM in the form of asteroid-mass PBHs. The dotted blue line and the red dashed line correspond, respectively, to the conditions \(f_{\rm PBH}=1\) and \(\lim_{\delta N\to 0}\Delta{\cal P}_{\rm 1-loop}(k_{\star})>1\) as derived in the limit of instantaneous transition._
In light of our analysis, a very important remark naturally arises: There is a non-trivial and crucial interplay between the detail of the USR/SR transition and the amplitude of the tree-level power spectrum that must be properly included before drawing any conclusion about the relative size of the loop corrections. On the one hand, it is true that a smooth USR/SR transition reduces the size of the loop correction; on the other one, the same smoothing also reduces the amplitude of the power spectrum so that, in order to keep \(f_{\rm PBH}\) fixed, one is forced to either increase the duration of the USR phase or the magnitude of \(\eta\) during the latter. In the end, the two effects tend to compensate each other if one imposes the condition \(f_{\rm PBH}=1\) (cf. fig. 7).
### Loop evaluation at any scales
We evaluate the loop correction at a generic external momentum \(k\), thus alleviating the assumption \(k\ll q\). The dominant modes contributing to the loop integration remain the ones that cross the horizon during the USR phase \(q\in[k_{\rm in},k_{\rm end}]\). As done in the previous section, we are interested in comparing the one-loop correction with the tree level power spectrum at the end of inflation, and therefore we perform the late time limit \(\tau\to 0^{-}\). Following the notation introduced in eq. (54), we define
\[\mathcal{P}(k)=\lim_{\tau\to 0^{-}}\left(\frac{k^{3}}{2\pi^{2}}\right)\left[ \left|\zeta_{k}(\tau)\right|^{2}+\frac{1}{(4\pi)^{2}}\Delta P(k,\tau)\right] \equiv\mathcal{P}_{\rm tree}(k)\left(1+\Delta\mathcal{P}_{\text{1-loop}} \right)\,, \tag{101}\]
In order to simplify the computation we consider the instantaneous limit \(\delta N\to 0\) of eq. (77). We perform both time integrations keeping the dominant contribution given by the first Dirac delta in eq. (31). This implies that we evaluate the integrand function at \(\tau_{1}=\tau_{2}=\tau_{\rm end}\). Notice also that, since the second integration only gets contributions from half of the Dirac delta domain, we additionally include a factor of \(1/2\). Finally, the jump in \(\eta\)
Figure 8: **Left:** _Value of \(\Delta N_{\rm USR}\) required in order to have \(f_{\rm PBH}=1\) for \(\eta_{\rm II}=3\). **Right:** Different examples of evolution of \(\eta(N)\) responsible for the USR, assuming various \(\delta N\) and fixing \(\eta_{\rm II}=3\). Dashed lines report the scenario where \(\delta N\) is increased while \(\Delta N_{\rm USR}\) is kept fixed to the value imposed to have unit PBH abundance in the limit \(\delta N\to 0\). Solid lines report the result when \(\Delta N_{\rm USR}\) is instead adjusted to keep \(f_{\rm PBH}=1\) fixed. We see that smoother transitions result in longer USR phases._
leaves a factor \((2\eta_{\rm II})\) for each time integration. Therefore, we find
\[\Delta P(k,\tau)\equiv-32\eta_{\rm II}^{2}[\epsilon(\tau_{\rm end}) a^{2}(\tau_{\rm end})]^{2}\int_{q_{\rm in}}^{q_{\rm end}}dq\,q^{2}\,\int_{-1}^{1}d( \cos\theta)\times\Big{\{}\] \[{\rm Im}\big{[}\zeta_{k}^{*}(\tau)\zeta_{k}^{\prime}(\tau_{\rm end })\big{]}\times\Big{[}{\rm Im}\big{[}\zeta_{k}(\tau)\zeta_{k}^{\prime*}(\tau_{ \rm end})|\zeta_{q}(\tau_{\rm end})|^{2}|\zeta_{k-q}(\tau_{\rm end})|^{2} \big{]}+\] \[{\rm Im}\big{[}\zeta_{k}(\tau)\zeta_{k}^{*}(\tau_{\rm end})\big{(} |\zeta_{q}(\tau_{\rm end})|^{2}\zeta_{k-q}(\tau_{\rm end})\zeta_{k-q}^{\prime \,*}(\tau_{\rm end})+|\zeta_{k-q}(\tau_{\rm end})|^{2}\zeta_{q}(\tau_{\rm end })\zeta_{q}^{\prime\,*}(\tau_{\rm end})\big{)}\Big{]}\Big{]}+\] \[{\rm Im}\big{[}\zeta_{k}^{*}(\tau)\zeta_{k}(\tau_{\rm end}) \big{(}|\zeta_{k-q}^{\prime}(\tau_{\rm end})|^{2}|\zeta_{q}(\tau_{\rm end})|^ {2}+\zeta_{k-q}^{\prime\,*}(\tau_{\rm end})\zeta_{k-q}^{*}(\tau_{\rm end}) \zeta_{q}(\tau_{\rm end})\big{)}\big{]}\Big{\}}. \tag{102}\]
We have collected the pieces such that each line corresponds to the \(i\)-th term in the sum of eq. (77) and \(k-q\equiv\sqrt{k^{2}+q^{2}-2kq\cos(\theta)}\) as in the previous section.
In the left panel of fig. 9, we show the resulting 1-loop correction as a function of the wavenumber \(k\) for a representative set of parameters leading to \(f_{\rm PBH}\approx 1\): \(\eta_{\rm II}=3\) and \(\Delta N_{\rm USR}=2.2\). We find values of \(\Delta\mathcal{P}_{\rm 1\mbox{-}loop}\) of the order of a few percent, barring small oscillatory features. A notable exception is the scale where the tree level power spectrum presents a dip, see fig. 3, \(k_{\rm dip}/k_{\rm in}\approx\sqrt{5/4}e^{-3\Delta N_{\rm USR}/2}\)[47]. At that scale the 1-loop correction dominates, resulting in a spike in \(\Delta\mathcal{P}_{\rm 1\mbox{-}loop}\). As a consequence, the dip is only realised if the 1-loop correction is neglected, see the right panel of fig. 9. We also observe that in the limit of small \(k\ll k_{\rm in}\) the result quickly converges towards the one discussed in the previous section, as expected. Finally, it is also interesting to notice that the correction \(\Delta\mathcal{P}_{\rm 1\mbox{-}loop}\) stays almost the same at any scale, except around \(k_{\rm dip}\). For this reason, we expect that a generalisation of this calculation to the case for \(\delta N\neq 0\) will lead to results similar to the ones presented in the previous section for \(k\ll q\).
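For reference, the quoted dip location can be evaluated directly for the benchmark \(\Delta N_{\rm USR}=2.2\) of fig. 9; a two-line sketch:

```python
import numpy as np

dn_usr = 2.2                                        # benchmark of fig. 9
k_dip_over_k_in = np.sqrt(5.0 / 4.0) * np.exp(-1.5 * dn_usr)
print(k_dip_over_k_in)   # ~0.04, i.e. the dip sits roughly three e-folds below k_in
```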
At first sight, our result that loop corrections impact the tree-level power spectrum at the percent level seems at odds with the findings of ref. [53] in which it was found that the one-loop power spectrum could dominate over the tree-level one, thus indicating the breakdown of the perturbation theory. Upon a closer look, however, there is no contradiction. Ref. [53] considers a particular instance of background dynamics in which curvature perturbations are resonantly amplified due to a specific pattern of oscillatory features in the inflaton potential. In such a model, we checked that the condition \(V^{\prime\prime\prime}/H\ll 1\) (cf. eq. (99) and the related discussion) is not verified and, therefore, it is not unexpected to find an amplification of loop effects.
It is instructive to consider also a different limit. Since we are assuming that the USR is followed by a second period of slow roll, characterised by a negligible \(\eta_{\rm III}\) and a small \(\epsilon\), modes in the range \(q\in[k_{\rm in},k_{\rm end}]\) freeze around \(\tau_{\rm end}\). Therefore, the loop correction at \(\tau_{\rm end}\) is very close to its limit at \(\tau\to 0^{-}\), as we verified through a numerical
Figure 9: _In both panels, we consider a USR dynamics with \(\eta_{\rm II}=3\), \(\Delta N_{\rm USR}=2.2\), \(\eta_{\rm III}=0\) and the instantaneous limit \(\delta N=0\). These values correspond to a scenario producing \(f_{\rm PBH}\simeq 1\). The vertical gridlines correspond to \(k=k_{\rm in}\) and \(k_{\rm end}\) in both panels. **Left panel:** correction to the tree level power spectrum as a function of \(k\) in the limit of \(\tau\to 0^{-}\). **Right panel:** tree level power spectrum (black) compared to the 1-loop correction (red line) and their sum (blue dashed line)._
calculation. For this reason, we set \(\tau\to\tau_{\rm end}\) in eq. (102) and drop the factors proportional to \({\rm Im}[\zeta_{k}^{*}(\tau_{\rm end})\zeta_{k}(\tau_{\rm end})]\) which vanish identically. Next, we switch to the barred fields and momenta notation introduced in sec. IV.1 and simplify the expression using the Wronskian identity (82). Finally, we arrive at the expression for \(\Delta P(k,\tau_{\rm end})\) in terms of the barred variables, eq. (103).
## V Discussion and outlook
In this work, we discussed the implications of perturbativity in the context of single-field inflationary models that feature the presence of a transient phase of USR. More in detail, we defined the perturbativity condition
\[\mathcal{P}(k)\equiv\mathcal{P}_{\text{tree}}(k)\left[1+\Delta\mathcal{P}_{ \text{1-loop}}(k)\right]\quad\Longrightarrow\quad\Delta\mathcal{P}_{\text{1- loop}}(k)\stackrel{{!}}{{<}}1\,, \tag{106}\]
in which the one-loop correction is integrated over the short modes that are enhanced by the USR dynamics. We explored the consequences of eq. (106) at any scale \(k\) even though the main motivation for our analysis was the recent claim of ref. [31] according to which the relative size of the loop correction at scales relevant for CMB observations (that is, \(k=O(k_{\star})\) with \(k_{\star}=0.05\) Mpc\({}^{-1}\)) threatens the validity of perturbativity at the point of ruling out the idea of PBH formation via USR dynamics in single-field inflation.
In this section, we summarize the main results and limitations of our analysis and we will discuss future prospects.
* We find that imposing the condition \(f_{\rm PBH}\approx 1\) does not violate the perturbativity requirement in eq. (106), not even in the limit of instantaneous SR/USR/SR transition.
* We extend the analysis of ref. [31] by considering a more realistic USR dynamics. In particular, we implement a smooth description of the SR/USR/SR transition. Recently, refs. [34; 35; 36] claimed that the presence of a smooth transition in the final USR/SR transition makes the loop correction effectively harmless. Our analysis shows that this conclusion could be invalidated by the fact that there is an interplay between the size of the loop correction and the amplitude of the tree-level power spectrum that is needed to generate a sizable abundance of PBHs. On the one hand, it is true that a smooth USR/SR transition reduces the size of the loop correction; on the other one, the same smoothing also reduces the amplitude of the tree-level power spectrum so that, in order to keep \(f_{\text{PBH}}\) fixed, one is forced to either increase the duration of the USR phase or the magnitude of \(\eta\) during the latter. In the end, the two effects tend to compensate each other. As for this part of the analysis, our findings are summarized in fig. 7.
* We consider the one-loop correction of short modes to the tree-level power spectrum at any scale. We find that perturbativity is always satisfied in models that account for the condition \(f_{\text{PBH}}=1\). More quantitatively, we find that the relative size of the loop correction with respect to the tree-level value of the power spectrum does not exceed the level of a few percent. As for this part of the analysis, our findings are summarized in fig. 9. We point out one notable exception of phenomenological relevance. A generic feature of the USR dynamics is that it produces a characteristic dip in the tree-level power spectrum, as the one observed in the left panel of fig. 3. The phenomenological consequences of such a putative dip range from CMB \(\mu\)-space distortions [75] to 21-cm signals [76]. Our analysis shows that the existence of the dip is nothing but an artifact of the tree-level computation, and disappears after including loop corrections. This is because, due to the smallness of the tree-level power spectrum around the characteristic wavenumbers of the dip, the non-vanishing loop correction gives the dominant contribution. This is illustrated in the right panel of fig. 9.
At the conceptual level, it remains true that, in the presence of USR dynamics, loop corrections of short modes may sizably affect the power spectrum at CMB scales. This result echoes an issue of naturalness - an infrared quantity (the amplitude of the curvature power spectrum at CMB scales) appears to be sensitive, via loop effects, to physics that takes place at much shorter scales (those related to PBH formation) - and clashes with the intuition that physics at such vastly different scales should decouple.
The coupling between short and long modes gives a physical effect for the following reason. As discussed in section IV.2, the relevant loop correction to the power spectrum at CMB scales comes from the correlation between homogeneous and inhomogeneous solutions. This is most easily seen within the source method in which one considers the correlation between a freely evolving long mode and a second long mode which evolves in the presence of interactions, cf. eq. (105). Borrowing from ref. [74] (see also ref. [32]), we write the formal solution of the non-linear evolution equations for a long wavelength mode \(\zeta_{\rm L}\) as \(\zeta_{\rm L}=\hat{O}^{-1}[S[\zeta_{\rm S},\zeta_{\rm S},\zeta_{\rm L}]]\), where \(S\) represents a generic sum of operators that are quadratic in the short wavelength mode \(\zeta_{\rm S}\) and that can also depend on \(\zeta_{\rm L}\) if one considers the short modes in the background perturbed by the long mode. More concretely, in our case such a solution is the one given by eq. (105). The one-loop power spectrum is given by
\[\langle\zeta_{\rm L}\zeta_{\rm L}\rangle\sim\langle\hat{O}^{-1}[S[\zeta_{\rm S },\zeta_{\rm S},\zeta_{\rm L}=0]]\,\hat{O}^{-1}[S[\zeta_{\rm S},\zeta_{\rm S}, \zeta_{\rm L}=0]]\rangle+\langle\hat{O}^{-1}[S[\zeta_{\rm S},\zeta_{\rm S}, \zeta_{\rm L}]]\,\zeta_{\rm L}\rangle\,. \tag{107}\]
The first term represents the effect of the short-scale modes in their unperturbed state (that is, with \(\zeta_{\rm L}=0\)) directly on the power spectrum of the long wavelength mode. This is our first term in eq. (103). As discussed in section IV.2, this term does not alter the long-wavelength correlation since it is very improbable that random short-scale fluctuations coherently add up to induce a long-wavelength correlation. The second term in eq. (107), on the contrary, correlates a freely evolving long mode \(\zeta_{\rm L}\) with the effect that the long mode itself has on the expectation value of quadratic operators made of short modes. Let us explain this point, which is crucial. Consider the schematic in fig. 10.
The key point is the following. In the comoving gauge, the short modes evolve in the background that is perturbed by the long mode. In the limit in which the long mode \(\zeta_{\rm L}\) has a wavelength much longer than the horizon, it simply acts as a rescaling of the coordinates since it enters as a local change of the scale factor. This is schematically illustrated in fig. 10. This figure shows intuitively that the short scales are modulated by the presence of the long mode. The presence of the long mode acts as a rescaling of the coordinates and we can absorb it by rescaling the short-scale momenta \(q\to\tilde{q}=e^{\zeta_{\rm L}}q\)[32]. If the power spectrum of the short modes is scale-invariant, this rescaling does nothing. However, if the power spectrum of the short modes breaks scale invariance, we schematically have in the loop integral over the short modes, expanding at the first order
\[\int\frac{dq}{q}\mathcal{P}(q)\stackrel{{q\to\tilde{q}\equiv e^{\zeta_{\rm L}}q}}{{\Longrightarrow}}\;\int\frac{d\tilde{q}}{\tilde{q}}\mathcal{P}(\tilde{q})=\int\frac{dq}{q}\mathcal{P}(e^{\zeta_{\rm L}}q)=\int\frac{dq}{q}\left[\mathcal{P}(q)+\zeta_{\rm L}\frac{d\mathcal{P}}{dq}q\right]=\int\frac{dq}{q}\left[\mathcal{P}(q)+\zeta_{\rm L}\,\mathcal{P}(q)\,\frac{d\log\mathcal{P}}{d\log q}\right]\,, \tag{108}\]
so that the presence of the long mode affects the correlation of short modes when their power spectrum is not scale invariant. The second term in the above equation describes precisely the effect put forth before: the presence of the long mode alters the expectation value of quadratic operators made of short modes, in this case the short-mode two-point function. Back to eq. (107), one expects the one-loop correction [32]
\[\Delta\mathcal{P}_{\rm 1-loop}(k)\sim\mathcal{P}(k)\int\frac{dq}{q}\, \mathcal{P}(q)\,\frac{d\log\mathcal{P}}{d\log q}\,. \tag{109}\]
The above discussion shows that the one-loop corrections on long modes do not decouple when the power spectrum of the short modes is not scale invariant. This explains why our correction vanishes in the limit \(\eta_{\rm II}=0\) in which
Figure 10: **Left: Expansion in time of the unperturbed universe (time passes by along the y-axis); the universe expands by the same amount at every point. Right: Expansion in time of the perturbed universe. The long mode (\(\zeta_{\rm L}\), blue) acts as a local rescaling of the scale factor, and short scales are modulated accordingly. More specifically, if we consider the black dots we see that they experience a different amount of expansion depending on the value of \(\zeta_{\rm L}\).**
indeed the power spectrum does become scale invariant. The breaking of scale invariance is the hallmark of the USR dynamics and, more importantly, a necessary feature in all models of single-field inflation that generate an order-one abundance of PBHs (cf. the intuitive schematic in fig. 11).
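A toy numerical illustration of this point is given below; the short-mode spectrum shapes are assumptions, and the integral is restricted to the USR-enhanced window \(q\in[k_{\rm in},k_{\rm end}]\) as in the text.

```python
import numpy as np

lnq = np.linspace(0.0, 2.5, 2001)            # ln(q/k_in), with Delta N_USR = 2.5 assumed
P_cmb = 2.1e-9

def loop_factor(P):
    dlogP = np.gradient(np.log(P), lnq)      # d log P / d log q
    return np.trapz(P * dlogP, lnq)          # the integral appearing in eq. (109)

P_flat = P_cmb * np.ones_like(lnq)           # scale-invariant short modes
P_usr = P_cmb * np.exp(6.0 * lnq)            # steep USR-like growth across the window

print(loop_factor(P_flat))   # ~0: the correction to long modes decouples
print(loop_factor(P_usr))    # ~7e-3: a relative correction of order a percent
```

The second number is essentially the peak amplitude of the short-mode spectrum, so the relative correction to \(\mathcal{P}(k)\) at long wavelengths is of order \(\mathcal{P}_{\rm peak}\), consistent with the percent-level corrections found in section IV.2.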
The last, and most important, remark that we would like to stress is the following. The analysis of ref. [31] triggered an intense debate about ruling out or not the mechanism of PBH formation via USR in single-field inflation (refs. [33, 34, 35, 36, 37, 38, 39]). Following these analysis, we have estimated the 1-loop correction to the curvature power spectrum including the contribution of loop momenta between \(q_{\rm in}\) and \(q_{\rm end}\), i.e. the window of momenta where the power spectrum peaks. Within this procedure, we find corrections to the tree-level power spectrum at the percent level in the region of parameter space where \(f_{\rm PBH}\approx 1\). Therefore, at first glance, a sizeable abundance of PBHs in USR single-field inflation is not in conflict with perturbativity constraints. On the other hand, the aforementioned corrections are sizeable, and the contribution of short wavelengths to the power spectrum at large scales does not decouple. This suggests that theoretical constraints dictated by the requirement of perturbativity might be important. As a concrete example, we have shown that loop corrections affect the dip in the tree-level power spectrum. Therefore, a more comprehensive analysis is needed.
We identify the following relevant directions. _i)_ More realistic modelling of the USR dynamics. As discussed in section II.2, in realistic single-field inflationary models we expect \(\eta_{\rm III}<0\) and sizable; this is because at the end of the USR we are left with \(\epsilon\ll 1\) but we need \(\epsilon=O(1)\) to end inflation. Since \(\epsilon\sim e^{-2\eta N}\), we need \(\eta\) large and negative after USR. Consequently, after USR we do not expect a scale-invariant power spectrum and eq. (109) applies. _ii)_ Understanding the role of quartic interactions, tadpoles and interactions with spatial derivatives. So far, most of the attention has been focused on the role of the cubic interaction Hamiltonian in eq. (46). However, as schematically shown in eq. (26), quartic interactions and non-1PI diagrams involving tadpoles are also present. In particular, the schematic in eq. (26) shows that tadpole diagrams may be relevant because, by attaching them to propagators, they modify the two-point correlator. The correct way to deal with tadpoles is by changing the background solution (cf. ref. [41]; see also ref. [53]). Since it is well-known that background solutions in USR models for PBH formation suffer a high-level of parametric tuning (cf. ref. [80]), the role of tadpole corrections may have some relevance. Furthermore, all interactions with spatial derivatives have been so far discarded. However, the short modes running in the loop cross the horizon precisely during the USR phase and, therefore, their spatial
Figure 11: _Illustrative schematic of the correction induced on the two-point correlator of long modes by a loop of short modes. On the right side, we plot the prototypical tree-level power spectrum of curvature perturbations as a function of the comoving wavenumber \(k\) in the presence of SR/USR/SR dynamics (with \(\eta_{\rm III}=0\) in the language of the parametrization given in section II.2). The power spectrum features a strong violation of scale invariance at small scales which is needed in order to produce a sizable abundance of PBHs. For illustration, we plot the region excluded by CMB anisotropy measurements, ref. [77], the FIRAS bound on CMB spectral distortions, ref. [78] and the bound obtained from Lyman-\(\alpha\) forest data [79]. If \(\mathcal{P}(k)\gtrsim 10^{-2}\), the abundance of PBHs overcloses the Universe. The plot is rotated in such a way as to share the same \(y\)-axis with the left part of the figure. On the left side, we schematically plot the evolution of the comoving Hubble horizon \(R_{H}=1/aH\) during inflation. Observable CMB modes (horizontal green band) cross the Hubble horizon earlier (bottom-end of the figure) and, at the tree level, their correlation remains frozen from this time on. At a much later time, the dynamics experience a phase of USR. Modes that cross the horizon during the USR phase have their tree-level power spectrum greatly enhanced and the latter strongly violates scale invariance. Loop of such short modes may induce a sizable correction to the tree-level correlation of long modes, cf. eq. (109)._
derivatives do not pay any super-horizon suppression. _iii)_ Renormalization. It would be interesting to explore the consequences of a full renormalization procedure, which has not been fully addressed so far.
We will tackle all the above points in a forthcoming work.
###### Acknowledgements.
We thank C. Byrnes and H. Veermae for discussions and A. Riotto for interesting discussions and comments on the draft. The research of A.U. was supported in part by the MIUR under contract 2017 FMJFMW ("New Avenues in Strong Dynamics," PRIN 2017). G.F. acknowledges financial support provided under the European Union's H2020 ERC, Starting Grant agreement no. DarkGRA-757480 and under the MIUR PRIN programme, and support from the Amaldi Research Center funded by the MIUR program "Dipartimento di Eccellenza" (CUP: B81I18001170001). This work was supported by the EU Horizon 2020 Research and Innovation Programme under the Marie Sklodowska-Curie Grant Agreement No. 101007855 and additional financial support provided by "Progetti per Avvio alla Ricerca - Tipo 2", protocol number AR2221816C515921. A.J.I. acknowledges additional financial support provided under the "Progetti per Avvio alla Ricerca Tipo 1", protocol number AR1221816T06D36. MT acknowledges the research grant "The Dark Universe: A Synergic Multimessenger Approach No. 2017X7X85" funded by MIUR, and the project "Theoretical Astroparticle Physics (TAPs)" funded by Istituto Nazionale di Fisica Nucleare (INFN).
## Appendix A Dynamics of curvature modes, some essential results
The main purpose of this appendix is to understand, both numerically and analytically, the behaviour of the time derivative \(d\zeta_{k}/dN\).
We rewrite the M-S equation in the form
\[\frac{d^{2}\zeta_{k}}{dN^{2}}+(3+\epsilon-2\eta)\frac{d\zeta_{k}}{dN}+\frac{ k^{2}}{(aH)^{2}}\zeta_{k}=0\,. \tag{104}\]
Assuming \(\epsilon\approx 0\), constant \(\eta\) and constant \(H\), this equation admits the solution
\[\zeta_{k}(N)\propto e^{-\left(\frac{3}{2}-\eta\right)N}\left[c_{1}\,J_{\frac{ 3}{2}-\eta}\left(\bar{k}e^{N_{\rm in}-N}\right)\Gamma\left(\frac{5}{2}-\eta \right)+c_{2}\,J_{-\frac{3}{2}+\eta}\left(\bar{k}e^{N_{\rm in}-N}\right) \Gamma\left(-\frac{1}{2}+\eta\right)\right]\,, \tag{105}\]
where \(J_{\alpha}(x)\) are Bessel functions of the first kind and \(\Gamma(x)\) is the Euler gamma function. Consequently, we find
\[\frac{d\zeta_{k}}{dN}(N)\propto e^{-\left(\frac{3}{2}-\eta\right)N}\left[-c_ {1}\,J_{\frac{1}{2}-\eta}\left(\bar{k}e^{N_{\rm in}-N}\right)\Gamma\left( \frac{5}{2}-\eta\right)+c_{2}\,J_{-\frac{1}{2}+\eta}\left(\bar{k}e^{N_{\rm in }-N}\right)\Gamma\left(-\frac{1}{2}+\eta\right)\right]\,. \tag{106}\]
This approximation is applicable for \(N<N_{\rm in}\) with \(\eta=0\), for \(N_{\rm in}<N<N_{\rm end}\) with \(\eta=\eta_{\rm II}\) and for \(N>N_{\rm end}\) with \(\eta=\eta_{\rm III}\). We have the following asymptotic behaviours
\[J_{\alpha}(x)\sim\left\{\begin{array}{cc}1/\sqrt{x}&\mbox{for}\ \ x\gg 1\\ x^{\alpha}&\mbox{for}\ \ x\ll 1\end{array}\right.\qquad\mbox{where}\qquad x \equiv\bar{k}e^{N_{\rm in}-N}=e^{N_{k}-N}\,. \tag{107}\]
Consequently, we highlight the following scalings.
* On sub-horizon scales, we find \[\mbox{sub-horizon scales, }N\ll N_{k}\qquad\zeta_{k}(N)\sim e^{-(1-\eta)N} \quad\mbox{and}\quad\frac{d\zeta_{k}}{dN}(N)\sim e^{-(2-\eta)N}\,.\] (108) The above scaling implies, for instance, that before the USR phase (that is, for \(N<N_{\rm in}\) with \(\eta=0\)) sub-horizon modes decay according to \(\zeta_{k}\sim e^{-N}\) and \(d\zeta_{k}/dN\sim e^{-2N}\).
* On super-horizon scales, we find \[\mbox{super-horizon scales, }N\gg N_{k}\qquad\zeta_{k}(N)\sim c_{1}\,e^{-(3-2\eta)N}+c_{2} \quad\mbox{and}\quad\frac{d\zeta_{k}}{dN}(N)\sim-c_{1}\,e^{-(3-2\eta)N}+c_{2} \,e^{-2N}\,.\] (109) Consider a mode that is super-horizon after the end of the USR phase (that is, for \(N>N_{\rm end}\) with \(\eta=\eta_{\rm III}<0\)). Eq. (109) tells us that \(d\zeta_{k}/dN\) is given by the superposition of two functions: the first one decays faster, as \(e^{-(3-2\eta_{\rm III})N}\), while the second one decays slower, as \(e^{-2N}\). On the contrary, \(\zeta_{k}\) quickly settles to a constant value.
* Consider the evolution during the USR phase. We have \(\eta=\eta_{\rm II}>3/2\) and \(N_{\rm in}<N<N_{\rm end}\). We have two possibilities that are relevant to our analysis. 1. If the mode is way outside the horizon at the beginning of the USR phase, it stays constant even though its derivative exponentially grows because of the term \(\sim e^{-(3-2\eta_{\rm II})N}\). 2. Consider a mode that crosses the Hubble horizon during the USR phase. The curvature perturbation (and its derivative) grows because of the factor \(e^{-(3/2-\eta_{\rm II})N}\). However, it is not immediate to find the exact scaling in time because in this case none of the approximations in eq. (100) can be applied.
All the above features, even though obtained in the context of the over-simplified framework given by eq. (101) and eq. (102), are valid in general. In fig. 12, we plot \(|\zeta_{k}|\) and \(|d\zeta_{k}/dN|\) using the dynamics presented in section II.2. We checked that all the relevant scaling properties discussed above are indeed verified. It is possible to derive some useful analytical approximations.
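Before turning to those analytical approximations, a minimal numerical sketch of this kind of mode integration (piecewise-constant \(\eta\), \(\epsilon\) neglected, arbitrary overall normalization; the benchmark numbers are assumptions) is the following.

```python
import numpy as np
from scipy.integrate import solve_ivp

N_in, dN_usr, eta_II, eta_III = 0.0, 2.5, 3.0, 0.0    # assumed benchmark values
N_end = N_in + dN_usr

def eta(N):
    if N < N_in:
        return 0.0
    return eta_II if N <= N_end else eta_III

def rhs(N, y, kbar):
    z, dz = y                                  # zeta_k and d zeta_k / dN
    x = kbar * np.exp(N_in - N)                # x = k/(aH), with kbar = k/k_in
    return [dz, -(3.0 - 2.0 * eta(N)) * dz - x**2 * z]

kbar = 1.0                                     # mode crossing the horizon at N_in
N0 = N_in - 4.0                                # start well inside the horizon
x0 = kbar * np.exp(N_in - N0)
y0 = [(x0 + 1j) * np.exp(1j * x0),             # de Sitter (Bunch-Davies-like) data,
      -1j * x0**2 * np.exp(1j * x0)]           # up to an irrelevant overall constant

sol = solve_ivp(rhs, [N0, N_end + 3.0], y0, args=(kbar,),
                rtol=1e-9, atol=1e-12, dense_output=True)
print(abs(sol.y[0, -1]) / abs(sol.sol(N_in)[0]))   # enhancement accumulated through USR
```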
First of all, we consider the Wronskian condition
\[i\left[u_{k}^{\prime}(\tau)u_{k}^{*}(\tau)-{u_{k}^{\prime}}^{*}(\tau)u_{k}( \tau)\right]=1\,, \tag{103}\]
which we rewrite as
\[i(aH)\left[\frac{du_{k}}{dN}(N)u_{k}^{*}(N)-\frac{du_{k}^{*}}{dN}(N)u_{k}(N) \right]=1\,. \tag{104}\]
As far as \(du_{k}/dN\) is concerned, we find
\[\frac{du_{k}}{dN}=a\sqrt{2\epsilon}(1+\epsilon-\eta)\zeta_{k}+a\sqrt{2\epsilon }\frac{d\zeta_{k}}{dN}\,, \tag{105}\]
so that the Wronskian condition reads
\[{\rm Im}\bigg{[}\zeta_{k}(N)\frac{d\zeta_{k}^{*}}{dN}(N)\bigg{]}=\frac{H^{2}} {4\epsilon_{\rm ref}\bar{\epsilon}(N)(aH)^{3}}\,. \tag{106}\]
If we introduce the field \(\bar{\zeta}_{k}\) as in eq. (81), we find
\[W(N)\equiv{\rm Im}\bigg{[}\bar{\zeta}_{k}(N)\frac{d\bar{\zeta}_{k}^{*}}{dN}(N )\bigg{]}=-{\rm Im}\bigg{[}\bar{\zeta}_{k}^{*}(N)\frac{d\bar{\zeta}_{k}}{dN}(N )\bigg{]}=\frac{\bar{k}^{3}}{4\bar{\epsilon}(N)}e^{3(N_{\rm in}-N)}\,, \tag{107}\]
with \(\epsilon(N)\) given by eq. (32) for generic \(\delta N\). In the limit \(\delta N\to 0\) and at time \(N=N_{\rm end}\), we find
\[\lim_{\delta N\to 0}W(N_{\rm end})=\frac{\bar{k}^{3}}{4}e^{(2\eta_{\rm II}-3)(N _{\rm end}-N_{\rm in})}=\frac{\bar{k}^{3}}{4}\left(\frac{k_{\rm end}}{k_{\rm in }}\right)^{2\eta_{\rm II}-3}=\frac{k^{3}}{4}\left(\frac{k_{\rm end}^{2\eta_{ \rm II}-3}}{k_{\rm in}^{2\eta_{\rm II}}}\right)\,. \tag{108}\]
Figure 12: _Comparison of the time evolution of \(|\bar{\zeta}_{k}|\) and \(|d\bar{\zeta}_{k}/dN|\) computed numerically (solid lines) and with the analytical approximation (dashed lines) within the minimal dynamics presented in section II.2. We take \(\bar{k}=10^{-3}\) (left panel) and \(\bar{k}=1\) (right panel). To draw this figure we consider the benchmark values \(\eta_{\rm II}=3.5\), \(\eta_{\rm III}=0\), \(\Delta N_{\rm USR}=2.5\) and \(\delta N=0.3\)._
If we further take \(\eta_{\rm II}=3\), the above equation is compatible with ref. [31].
We now consider the limit \(\delta N\to 0\) and the case \(\eta_{\rm II}=3\). In this case, it is possible to compute the function \(\bar{\zeta}_{q}(N)\) by solving analytically the M-S equations in both the SR (for \(N\leqslant N_{\rm in}\)) and USR (for \(N_{\rm in}\leqslant N\leqslant N_{\rm end}\)) regime and then matching the solutions at \(N_{\rm in}\), as done in ref. [31] (see also refs. [12; 47]). We find (\(x\equiv e^{\Delta N_{\rm USR}}\))
\[|\bar{\zeta}_{q}(N_{\rm end})|^{2}= \frac{x^{6}}{8\bar{q}^{6}}\left[9+18\bar{q}^{2}+9\bar{q}^{4}+2 \bar{q}^{6}+3(-3+7\bar{q}^{4})\cos\left(2\bar{q}-\frac{2\bar{q}}{x}\right)-6 \bar{q}(3+4\bar{q}^{2}-\bar{q}^{4})\sin\left(2\bar{q}-\frac{2\bar{q}}{x} \right)\right]+\] \[\frac{x^{5}}{8\bar{q}^{6}}\left[12\bar{q}^{2}(-3-4\bar{q}^{2}+ \bar{q}^{4})\cos\left(2\bar{q}-\frac{2\bar{q}}{x}\right)-6\bar{q}(-3+7\bar{q} ^{4})\sin\left(2\bar{q}-\frac{2\bar{q}}{x}\right)\right]+\] \[\frac{x^{4}}{8\bar{q}^{6}}\left[\bar{q}^{2}(9+18\bar{q}^{2}+9\bar {q}^{4}+2\bar{q}^{6})+\bar{q}^{2}(9-21\bar{q}^{4})\cos\left(2\bar{q}-\frac{2 \bar{q}}{x}\right)-6\bar{q}^{3}(-3-4\bar{q}^{2}+\bar{q}^{4})\sin\left(2\bar{q} -\frac{2\bar{q}}{x}\right)\right]\,, \tag{104}\]
which enters into the computation of eq. (87).
|
2302.08563 | PACMAN Attack: A Mobility-Powered Attack in Private 5G-Enabled
Industrial Automation System | 3GPP has introduced Private 5G to support the next-generation industrial
automation system (IAS) due to the versatility and flexibility of 5G
architecture. Besides the 3.5GHz CBRS band, unlicensed spectrum bands, like
5GHz, are considered as an additional medium because of their free and abundant
nature. However, while utilizing the unlicensed band, industrial equipment must
coexist with incumbents, e.g., Wi-Fi, which could introduce new security
threats and resuscitate old ones. In this paper, we propose a novel attack
strategy conducted by a mobility-enabled malicious Wi-Fi access point (mmAP),
namely \textit{PACMAN} attack, to exploit vulnerabilities introduced by
heterogeneous coexistence. A mmAP is capable of moving around the physical
surface to identify mission-critical devices, hopping through the frequency
domain to detect the victim's operating channel, and launching traditional MAC
layer-based attacks. The multi-dimensional mobility of the attacker makes it
impervious to state-of-the-art detection techniques that assume static
adversaries. In addition, we propose a novel Markov Decision Process (MDP)
based framework to intelligently design an attacker's multi-dimensional
mobility in space and frequency. Mathematical analysis and extensive simulation
results exhibit the adverse effect of the proposed mobility-powered attack. | Md Rashedur Rahman, Moinul Hossain, Jiang Xie | 2023-02-16T20:12:56Z | http://arxiv.org/abs/2302.08563v1 | # PACMAN Attack: A Mobility-Powered Attack in Private 5G-Enabled Industrial Automation System
###### Abstract
3GPP has introduced Private 5G to support the next-generation industrial automation system (IAS) due to the versatility and flexibility of 5G architecture. Besides the 3.5GHz CBRS band, unlicensed spectrum bands, like 5GHz, are considered as an additional medium because of their free and abundant nature. However, while utilizing the unlicensed band, industrial equipment must coexist with incumbents, e.g., Wi-Fi, which could introduce new security threats and resuscitate old ones. In this paper, we propose a novel attack strategy conducted by a mobility-enabled malicious Wi-Fi access point (mmAP), namely _PACMAN_ attack, to exploit vulnerabilities introduced by heterogeneous coexistence. A mmAP is capable of moving around the physical surface to identify mission-critical devices, hopping through the frequency domain to detect the victim's operating channel, and launching traditional MAC layer-based attacks. The multi-dimensional mobility of the attacker makes it impervious to state-of-the-art detection techniques that assume static adversaries. In addition, we propose a novel Markov Decision Process (MDP) based framework to intelligently design an attacker's multi-dimensional mobility in space and frequency. Mathematical analysis and extensive simulation results exhibit the adverse effect of the proposed mobility-powered attack.
## I Introduction
The future industrial automation system (IAS) is envisioned to adopt the full-scale wireless connectivity offered by fifth-generation (5G) cellular technology [1]. Regulatory authorities and researchers are advocating the implementation of Private 5G in IAS to meet the precise industry-specific QoS standards [2, 3]. Due to the limited availability of licensed radio spectrum and the additional logistics required to access such resources, _unlicensed spectrum bands_, such as 5 GHz, have the potential to play a significant role [4]. Though industries may prioritize using licensed spectrum bands as anchor carriers, unlicensed spectrum bands provide an unparalleled resource to meet the demands of advanced AI/ML-enabled IAS applications. The caveat, however, is that Private 5G is required to coexist with the incumbents in these unlicensed bands, e.g., Radar and Wi-Fi in the 5GHz band.
**Motivation:** While Private 5G-enabled IAS plan to utilize unlicensed spectrum bands like 5GHz, the heterogeneity between Wi-Fi and cellular technologies may hinder their fair and effective coexistence. For example, Wi-Fi employs a preamble-based detection mechanism for Wi-Fi signals and an energy-sensing-based detection mechanism for non-WiFi ones, whereas LTE/5G employs the latter. Cellular technologies have adopted CSMA/CA-based methods along with a similar contention window structure and backoff techniques to maintain a uniform channel access framework in 5GHz spectrum band. The CSMA/CA mechanism, however, has several vulnerabilities that may have an adverse impact on IAS applications. Additionally, the absence of a preamble-based detection mechanism makes it more difficult to detect such malicious behaviors. Therefore, given the national security significance of manufacturing and supply chain industries, it is crucial to assess such vulnerabilities and propose a more secure coexistence framework in unlicensed spectrum bands.
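As a toy illustration of one such CSMA/CA weakness, the slot-level sketch below shows how a contender that draws its backoff counter from a deliberately smaller contention window dominates channel access; the window sizes and the two-contender, single-slot model are simplifying assumptions (this is the kind of backoff manipulation revisited in Section III).

```python
import random

CW_HONEST, CW_SELFISH, ROUNDS = 16, 4, 100_000
outcome = {'honest': 0, 'selfish': 0, 'collision': 0}

for _ in range(ROUNDS):
    b_honest = random.randint(0, CW_HONEST - 1)      # compliant backoff draw
    b_selfish = random.randint(0, CW_SELFISH - 1)    # manipulated (smaller) backoff draw
    if b_honest < b_selfish:
        outcome['honest'] += 1
    elif b_selfish < b_honest:
        outcome['selfish'] += 1
    else:
        outcome['collision'] += 1                    # equal counters -> collision

for key, count in outcome.items():
    print(key, count / ROUNDS)   # the selfish contender wins roughly 85% of slots here
```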
**Challenges:** Most research on the fair coexistence of Wi-Fi and cellular technologies in the 5GHz spectrum band has prioritized Wi-Fi's performance. In contrast, only a small amount of research has addressed the Quality-of-Service (QoS) of cellular technologies. Moreover, the impact of various PHY/MAC layer-based attacks in _heterogeneous_ spectrum coexistence scenarios is rarely studied. In [5], authors have conducted a comprehensive survey on the vulnerabilities in the future heterogeneous coexistence of 802.11 and cellular technologies in an unlicensed spectrum band. Researchers in [6] and [7] have proposed intelligent jamming attack and MAC layer-based misbehavior, respectively, in the context of spectrum coexistence of Wi-Fi and LTE in the 5GHz spectrum band.
However, only [6] has considered cellular technologies as victims of a malicious AP. Evidently, using malicious APs to disrupt cellular communication in the 5 GHz spectrum band is an effective approach. Nonetheless, the attack strategies mentioned previously would not make a consequential impact because they assume a fixed physical location for the attacker and do not consider delay-sensitive application scenarios--important security considerations for industrial applications. Additionally, a comprehensive attack strategy must account for the risk of exposure and the reward of persistent attacks. Hence, considering latency-sensitive critical application scenarios (e.g., IAS) together with attacker mobility (in both space and frequency) introduces novel challenges in designing intelligent adversarial strategies and assessing associated vulnerabilities.
**Contribution:** Based on the above discussion, in this paper, (i) we propose a Wi-Fi-based mobility-powered attack model called _PACMAN attack_, where the attacker can traverse the physical area in a private 5G-enabled IAS, locate critical
areas for IAS operations, and perpetrate MAC-layer attacks on the devices residing in these areas. To the best of our knowledge, this is the first work to propose such an attack in private 5G-enabled IAS; (ii) in addition, we propose a Markov decision process (MDP) based framework for path planning, attack strategy design, and detection avoidance, where the attacker trades off between the path that maximizes the attack performance and evasive maneuvers that minimize the risk of detection. The proposed framework aims to aid in modeling and assessing security vulnerabilities.
## II Related Work
In the following, we discuss prior research on MDP-based path planning models and on PHY/MAC layer vulnerabilities in heterogeneous spectrum coexistence.
### _Path Planning Model_
Path planning models are used in applications such as UAVs [8], drones [9], autonomous vehicles [10], and others where an agent must move through an environment while taking actions that are linked to rewards and penalties to accomplish specific goals. MDP has been a popular method for developing these models. In [9], the authors used a combination of Jump Point Search and MDP to propose a 3D path planning model and real-time collision resolution for multi-rotor drones operating in hazardous urban low-altitude airspace. The trajectory planning for UAV-mounted mobile edge computing systems is formulated using the combination of MDP and Reinforcement Learning in [11]. In [8, 10], authors considered the partially observable MDP-based path planning models for military-based UAVs and the detection of hidden road users by autonomous vehicles. However, these models consider the traversal in the physical domain, whereas our proposed model considers the mobility in both the physical and frequency domains. Additionally, in our proposed model, the adversary traverses the physical space to locate the critical devices and hops through channels to detect the victim's operating channel, all while avoiding exposure.
### _MAC and PHY Layer-based Vulnerabilities_
MAC and PHY layer-based vulnerabilities like jamming, selfish backoff attack etc. have been prevalent in wireless communications since its inception. Spectrum coexistence of heterogeneous technologies can bring a new perspective in terms of vulnerabilities and the detection and defense strategies against them. In [6], the author proposed a jamming attack perpetrated by a malicious Wi-Fi AP to degrade the performance of coexisting LTE users. Conventional jamming attack would not be an energy-efficient strategy for a malicious entity with energy constraints [12]. Although a reactive jamming attack is effective and energy-efficient, it suffers from hardware constraints [12]. Moreover, MAC-layer-based security attacks have the potential to disrupt the harmonious coexistence of heterogeneous technologies in unlicensed spectrum bands. Authors in [13] introduced a logistic classification approach to detect Selfish Backoff attacks in IEEE 802.15.4 networks, whereas [14] utilized a supervised learning model for detecting the backoff manipulation attack in cognitive radio. [15] considered time series analysis to detect malicious nodes using the greedy MAC Protocol. However, while detecting malicious actors or behaviors in the network, none of these studies considered the heterogeneity of technologies in a specific spectrum band. Although researchers in [7] have proposed such MAC layer misbehavior for the first time in such scenarios, their work did not address the scenarios of malicious Wi-Fi APs. At the same time, none of the proposed work on this topic considered the attacker's ability to move across the physical surface. Though Wi-Fi AP-based spoofing attacks [16, 17, 18] also have the potential to impact the coexistence of heterogeneous technologies, the difference in the proposed work lies in the fact that the malicious entity would act as a legitimate user of the spectrum band and have the required mobility throughout the attack surface.
## III Proposed Attack Strategy
In the PACMAN attack scenario, the attacker has two ways of traversing the attack surface, i.e., spatial and frequency. The attacker divides the physical space into multiple polygons, i.e., zones, depending on its interference range (we consider each zone a hexagon) and initially traverses the surface randomly to build a better understanding of it. While moving through different zones, the attacker learns about the transmitting devices in each zone. We assume that the attacker has an out-of-band link (i.e., a secure control channel for the attacker only), through which it derives the reward of the attack in each zone. The short-term goal is to _cause successive transmission failures_ until the maximum limit of transmission attempts is reached (or until the information becomes stale) and thereby seize the victims' operation. The long-term goal is to _locate the sectors_ crucial to industrial automation, seize their operation, and remain undetected.
### _MAC Layer Misbehavior Strategy_
In the context of this paper, we only focus on the 5GHz unlicensed spectrum band, where Wi-Fi acts as an incumbent user and a Listen-Before-Talk (LBT) based access mechanism is promoted by the regulatory bodies for cellular technologies. In an LBT-based mechanism, cellular technologies are required to adopt a CSMA/CA-based access mechanism while employing an energy-detection-based sensing method. Although different studies have used energy-detection levels ranging from -62 dBm to -82 dBm, we focus only on the energy-detection level of -72 dBm based on the 3GPP specification [19]. To limit the scope, we only focus on the selfish backoff attack approach [14], in which a malicious Wi-Fi AP employs a lower backoff value to gain more access to the channel while restricting other users. The attacker's choice of backoff value is configurable and depends on the attack objective. In a PACMAN attack, the goal is to restrict the victim from accessing the channel and increase the victim's channel access delay, which will affect the operation of the IAS. Although selfish backoff attacks have been studied before, the absence of preamble-based mechanisms to detect malicious behaviors and the attacker's mobility (in space
and frequency) create an opportunity for adversaries to stay undetected by traditional intrusion detection systems (IDS).
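To make the misbehavior concrete, the sketch below contrasts a standards-compliant binary-exponential backoff draw with a selfish draw from a deliberately small contention window. The window sizes, function names, and the averaging experiment are illustrative assumptions rather than parameters taken from the paper.

```python
import random

def compliant_backoff(retry: int, cw_min: int = 15, cw_max: int = 1023) -> int:
    """Standard CSMA/CA: the contention window doubles after each failed attempt."""
    cw = min((cw_min + 1) * (2 ** retry) - 1, cw_max)
    return random.randint(0, cw)

def selfish_backoff(cw_selfish: int = 3) -> int:
    """Misbehaving AP: always draws from a small, fixed window, so it almost
    always wins channel access against standards-compliant contenders."""
    return random.randint(0, cw_selfish)

if __name__ == "__main__":
    draws = 10_000
    compliant = sum(compliant_backoff(0) for _ in range(draws)) / draws
    selfish = sum(selfish_backoff() for _ in range(draws)) / draws
    print(f"mean compliant backoff: {compliant:.1f} slots, "
          f"mean selfish backoff: {selfish:.1f} slots")
```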
### _Frequency Hopping Mechanism_
**Short-term Strategy:** A zone has \(M\) channels available for the victim's device, and the attacker is unaware of the channel that is currently in use. Hence, the attacker visits \(m\) different channels during each slot to identify the operating channel. The attacker randomly creates a channel-hopping sequence and periodically hops through it until it locates the victim's active channel. The strategy of channel hopping helps the attacker to put an upper bound on how long a victim device can continuously use a channel. Fig. 1(a) shows an illustration of the attack sequence with \(M=10\) and \(m=2\). It shows the hopping sequence of the attacker before a successful attack. Here, the operating channel of the victim network in a sector is channel-3 and, in slot-3, the attacker perpetrates the attack. \(a_{j}\) represents the channels where the attacker has conducted the attack and \(j\) represents the number of successive attacks. After realizing performance degradation, the victim will randomly hop to a new channel, will try to stay on that channel as long as possible, and will not hop back to the previously attacked channels (i.e., \(a_{j}\)) until it achieves a successful transmission. Hence, the attacker will discard the previously attacked channels for a particular transmission attempt. After each successful attack, the attacker randomizes its hopping sequence, excluding \(a_{j}\). Therefore, after \(j\) successive transmission failures, the attacker has \(M-j\) channels to randomize over. Fig. 1(b) illustrates a new hopping sequence of the attacker where the attacker detects and perpetrates the attack in the first slot.
**Long-term Strategy:** Given the flexibility in the frequency domain, the attacker aims to gain more success in detecting the correct operating channel of the victim. To successfully detect the target victim's operating channel, the attacker makes two assumptions. Firstly, when a victim is denied access to the channel for a continuous period, after a certain threshold it will move to a new channel. Secondly, the victim will not go to a channel where it previously faced anomalies in accessing the channel. Assuming that after \(G\) consecutive transmission failures (\(G<M\)) the victim cancels the current transmission, the attacker stays persistent to increase its chance of a successful attack after each successive attack; hence, it discards earlier attacked channels. Fig. 2 shows an illustration of a scenario, where \(G=4\), and the attacker succeeds in dropping the packet with successive attacks.
After \(j^{th}\) successful attack, if the attacker is not successful in the (\(j+1\))\({}^{th}\) slot, it assumes that the victim had a successful transmission. Hence, it will re-randomize the hopping sequence (i.e., nullify \(a_{j}\)), but exclude the channels it has visited in the current slot (since currently visited channels are free, visiting again is not required), and begin a new period (one period = \(\lceil M/m\rceil\) slots).
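A minimal sketch of the hopping-sequence bookkeeping described above: the attacker senses \(m\) channels per slot, excludes the channels in \(a_{j}\) after each successful attack, and re-randomizes after a missed slot. The class structure and the toy victim model in the usage example are assumptions for illustration only.

```python
import math
import random

class HoppingAttacker:
    """Bookkeeping for the attacker's channel-hopping sequence (illustrative only)."""

    def __init__(self, M: int = 10, m: int = 2):
        self.M, self.m = M, m
        self.attacked = []      # a_j: channels attacked during the current victim transmission
        self.last_visit = []
        self._new_sequence(exclude=set())

    def _new_sequence(self, exclude):
        channels = [c for c in range(self.M) if c not in exclude]
        random.shuffle(channels)
        self.sequence = channels
        self.period = math.ceil(len(channels) / self.m)   # one period = ceil(M/m) slots

    def next_slot(self):
        """Return the m channels sensed in the next slot."""
        if not self.sequence:
            self._new_sequence(exclude=set(self.attacked))
        self.last_visit, self.sequence = self.sequence[:self.m], self.sequence[self.m:]
        return self.last_visit

    def on_attack_success(self, channel):
        """The victim avoids this channel until its transmission succeeds, so exclude it."""
        self.attacked.append(channel)
        self._new_sequence(exclude=set(self.attacked))

    def on_missed_slot(self):
        """Assume the victim transmitted successfully: nullify a_j and start a new
        period, skipping the channels just found to be free."""
        self.attacked = []
        self._new_sequence(exclude=set(self.last_visit))

if __name__ == "__main__":
    attacker = HoppingAttacker()
    victim_channel = 3                                    # toy victim model
    for slot in range(6):
        sensed = attacker.next_slot()
        print(f"slot {slot}: sensing {sensed}")
        if victim_channel in sensed:
            attacker.on_attack_success(victim_channel)
            victim_channel = random.choice(
                [c for c in range(10) if c not in attacker.attacked])
```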
### _Physical Mobility_
The malicious entity performing a PACMAN attack has the capability of traversing the physical attack surface. The mobility across the attack surface makes it more difficult for the IDS to locate and identify the selfish or hostile transmissions. First, the attack is launched initially in a certain area of the surface, which has a minimal effect on the aggregate network but a substantial effect on that zone. Second, while the IDS can detect an abnormal event across the entire surface, differentiating an attack from any physical anomaly in that particular zone can be difficult. Finally, tracking down the attacker's current location as well as its intended future location can be challenging because it may continuously move around the physical surface.
Fig. 3 illustrates a physical surface that contains multiple distinct locations in hexagonal shapes comprised of different application areas of the IAS. An estimated travel path of the attacker throughout the surface is also shown. The attacker's goal is to identify the operating channel of the victim in a certain location and conduct its MAC layer-based attack to disrupt the network while minimizing the mobility cost. However, an increased attack duration in a zone would increase the probability of being detected by the IDS. Hence, the attacker moves to an optimal adjacent zone to keep itself from being detected. An optimal location is considered a zone where the attacker would be able to cause the most damage by affecting critical communications. The process of choosing the ideal zone depends on the relative importance of each sector and is discussed in detail in the subsequent section.

Fig. 1: First phase of the attack.

Fig. 2: An illustration of successful DoS attack with \(G=4\).

Fig. 3: The physical surface.
**Summary:** The proposed strategy introduces uncertainties in the actions and location of the attacker. Unlike deterministic approaches, the proposed attack strategy introduces a random hopping sequence and path trajectory. Also, the attacker can only be detected if the victim experiences transmission failures; hence, the first attack will always be undetected.
## IV Proposed MDP-based Attack Model
### _Formation of the MDP_
We have utilized the MDP framework proposed in [20, 21] and scaled it to incorporate the mobility of the attacker. We presume that the operating channel of the victim in a location is unknown to the attacker; the attacker iteratively sweeps through the available channels, detects it, and perpetrates a MAC-based attack. As we consider the attacker can sense multiple channels at once (i.e., \(m\)), instead of waiting on a certain frequency, it will hop through different channels. The attacker will decide its action at the end of each time slot, based on the observation of the current state. The attacker receives an immediate reward \(U(n)\) in the \(n_{th}\) time slot,
\[\begin{split}U(n)=\;&L\cdot\textbf{1}(\textit{Single attack})+Q\cdot\textbf{1}(\textit{Packet drop})\\ &-B\cdot\textbf{1}(\textit{Busy channel})-V\cdot\textbf{1}(\textit{Moving cost})\\ &-C\cdot\textbf{1}(\textit{Hopping cost})-E\cdot\textbf{1}(\textit{Attacker detection})\end{split}\tag{1}\]
where \(\textbf{1}(\cdot)\) is an indicator function of the event in brackets.
As the employed strategy impacts the current state and also the future states, the expected reward of this game is,
\[\overline{U}=\sum_{n=1}^{\infty}\delta^{n-1}U(n), \tag{2}\]
where \(\delta\) represents the discount factor (\(0<\delta\leq 1\)). It measures the significance of the future reward values.
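Eqs. (1)-(2) translate directly into a few lines of code: each event indicator gates one reward or penalty term, and the per-slot rewards are summed with a geometric discount. The numerical values assigned to \(L\), \(Q\), \(B\), \(V\), \(C\), \(E\), and \(\delta\) below are placeholders, not values specified in the paper.

```python
def immediate_reward(events: dict, L=1.0, Q=5.0, B=0.5, V=0.4, C=0.1, E=10.0) -> float:
    """Eq. (1): reward/penalty terms gated by indicator functions."""
    return (L * events.get("single_attack", False)
            + Q * events.get("packet_drop", False)
            - B * events.get("busy_channel", False)
            - V * events.get("moving_cost", False)
            - C * events.get("hopping_cost", False)
            - E * events.get("attacker_detected", False))

def discounted_return(per_slot_events, delta: float = 0.95) -> float:
    """Eq. (2): sum over slots of delta^(n-1) * U(n)."""
    # enumerate starts at 0, matching delta^(n-1) for slots n = 1, 2, ...
    return sum(delta ** n * immediate_reward(ev) for n, ev in enumerate(per_slot_events))

if __name__ == "__main__":
    trace = [{"hopping_cost": True},
             {"single_attack": True, "hopping_cost": True},
             {"packet_drop": True, "hopping_cost": True}]
    print(f"discounted return of the toy trace: {discounted_return(trace):.3f}")
```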
### _Markov Model_
This subsection demonstrates the proposed MDP model and defines the state space, action space, rewards, and transition probabilities. We assume that the attacker sweeps through all channels periodically; hence, the probability of an operating channel being detected depends on the channels that have been visited earlier in the sequence, conforming to the requirement of a Markov process (i.e., a future state of the Markov process depends only on the current state).
**Markov States:** The state denotes the status of an attacker at the end of a time-slot at location \(l\). Here, the proposed Markov model (Fig. 4) has four kinds of states in each location:
\(P^{l}:\) The attacker senses that the channel is occupied by a PU. \(H_{i}^{l}:\) The attacker hopped onto a new channel and had \(i\) consecutive unsuccessful detections of the victim (\(1\leq i\leq K\)). \(A_{j}^{l}:\) The attacker successfully perpetrated \(j\) consecutive attacks (\(1\leq j\leq G\)).
\(D^{l}:\) The attacker is detected by the IDS system at the site.
The state space is thus \(\mathbb{X}\triangleq\{P^{l},H_{1}^{l},\cdots,H_{K}^{l},A_{1}^{l},\cdots,A_{G}^{l},D^{l}\}\), where \(l\in\{1,\cdots,L\}\). For example, \(L=7\) for the 7-sector model. In Fig. 4, black dotted arrows represent the incoming and outgoing transitions to neighboring locations.
**Actions:** We have three action types available at each state: \(\textit{stay}_{loc}+\textit{hop}_{freq}\) (\(sh\)): The attacker stays at the current location in the next time-slot and hops to the next channels in the hopping sequence.
\(move_{loc}+\textit{hop}_{freq}\) (\(mh\)): The attacker moves to a new location and hops on new channels.
\(move_{loc}+\textit{stay}_{freq}\) (\(ms\)): The attacker moves to a new location and stays on the current channel.
We represent the whole action space as \(\mathbb{A}\triangleq\{sh,mh,ms\}\).
**Rewards:** Let \(U(S,a,S^{\prime})\) represent the reward when an attacker takes action \(a\in\mathbb{A}\) in state \(S\in\mathbb{X}\) and enters into state \(S^{\prime}\in\mathbb{X}\). Now using (1), we define the rewards:

\[U(S,a,S^{\prime})=\begin{cases}-C,&\text{if }\{S,a,S^{\prime}\}=\{\times,sh,H_{i}\},\;i=1,\cdots,K-1\\ L-C,&\text{if }\{S,a,S^{\prime}\}=\{\times,sh,A_{j}\},\;j=1,\cdots,G-1\\ Q-C,&\text{if }\{S,a,S^{\prime}\}=\{A_{G-1},sh,A_{G}\}\\ -B-C,&\text{if }\{S,a,S^{\prime}\}=\{\times,sh,P\}\\ -E-C,&\text{if }\{S,a,S^{\prime}\}=\{A_{j},sh,D\},\;j=2,\cdots,G\\ -V,&\text{if }\{S,a,S^{\prime}\}=\{\times,ms,H_{i}\}\\ L-V,&\text{if }\{S,a,S^{\prime}\}=\{\times,ms,A_{i}\}\\ -B-V,&\text{if }\{S,a,S^{\prime}\}=\{\times,ms,P\}\\ -C-V,&\text{if }\{S,a,S^{\prime}\}=\{\times,mh,H_{i}\}\\ L-C-V,&\text{if }\{S,a,S^{\prime}\}=\{\times,mh,A_{i}\}\\ -B-C-V,&\text{if }\{S,a,S^{\prime}\}=\{\times,mh,P\}\end{cases}\tag{3}\]

where \(\times\) denotes an arbitrary admissible current state.
**Transition Probabilities:** As the attacker can sense \(m\) channels at once and goes through its attack channel sequence, at state \(H_{i}\) only \(\max(M-im,0)\) channels have yet to be visited by the attacker, and another \(m\) channels will be visited in the subsequent slot. Therefore, the probability of an attack (with action \(sh\)) in the absence of a victim on the channel,
\[Pr_{\textit{at}|sh}=\frac{m}{M-im},\;\;\text{if}\;i<K \tag{4}\]
We assume a 5G transmission is \(q\) mini-slots long. Also, we can approximate the probability of finding the channel busy with action \(\textit{hop}_{freq}\) as the steady-state probability,
\[Pr_{P|a,s}=\frac{\alpha}{\alpha+\beta}=\rho,\;\;a\in\mathbb{A}\;and\;s\in\mathbb{X},\tag{5}\]
where \(\alpha\) and \(\beta\) represent radar activity and denote transition probabilities from OFF to ON and ON to OFF, respectively. Now, the transition probabilities from state \(H_{i}\) with action \(sh\):
\[\begin{split}&\Pr(H_{i+1}|H_{i},sh)=(1-\rho)(1-Pr_{\textit{at}|sh}),\\ &\Pr(A_{1}|H_{i},sh)=(1-\rho)(1-\alpha)^{q}\,Pr_{\textit{at}|sh},\\ &\Pr(P|H_{i},sh)=\rho+(1-\rho)\{1-(1-\alpha)^{q}\}\,Pr_{\textit{at}|sh}.\end{split}\tag{6}\]
Fig. 4: The proposed MDP-based attack model.
Moreover, aside from the sensing time, the attacker can still end up in state \(P\) during the attack interval. The second part of \(\Pr(P|H_{i},sh)\) in Eq. 6 represents this situation. The transition probabilities from state \(P\) with action \(sh\) are,
\[\begin{split}&\Pr(H_{1}|P,sh)=(1-\rho)(1-Pr_{\textit{at}|sh}),\\ &\Pr(A_{1}|P,sh)=(1-\rho)(1-\alpha)\,Pr_{\textit{at}|sh},\\ &\Pr(P|P,sh)=\rho+(1-\rho)\{1-(1-\alpha)\}\,Pr_{\textit{at}|sh}.\end{split}\tag{7}\]
In state \(A_{j}\), as the victim device has experienced transmission failures \(j\) times in \(j\) different channels, it refrains from visiting back to these channels until it successfully finishes the current transmission. Therefore, when an attacker takes action \(\emph{hop}_{\mathrm{\emph{fp}}[eq]}\) from state \(A_{j}\), it randomizes its attack sequence, excluding these \(j\) channels. Therefore, the probability that the attacker will attack the new operating channel of the victim in the next slot is uniformly distributed over \(M-j\) channels. Hence, the probability of an attack is,
\[Pr_{\textit{at}|sh,A_{j}}=\frac{m}{M-j}.\tag{8}\]
The transition probabilities from state \(A_{j}\) with action \(sh\) are,
\[\begin{split}&\Pr(H_{1}|A_{j},sh)=(1-\rho)(1-Pr_{\textit{at}|sh,A_{j}}),\\ &\Pr(A_{j+1}|A_{j},sh)=(1-\rho)(1-\alpha)^{q}\,Pr_{\textit{at}|sh,A_{j}}\,(1-Pr_{\textit{det}}^{j}),\\ &\Pr(D|A_{j},sh)=(1-\rho)(1-\alpha)^{q}\,Pr_{\textit{at}|sh,A_{j}}\,Pr_{\textit{det}}^{j},\end{split}\tag{9}\]

where \(Pr_{\textit{det}}^{j}\) denotes the probability that the IDS detects the attacker after \(j\) successive attacks.
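For illustration, the snippet below assembles the stay-and-hop transition probabilities of Eqs. (4), (6), and (8) and checks that they sum to one; the chosen values of \(M\), \(m\), \(q\), \(\alpha\), and \(\rho\) are arbitrary placeholders rather than parameters from the paper.

```python
def p_attack_from_H(i: int, M: int, m: int) -> float:
    """Eq. (4): m of the M - i*m not-yet-visited channels are sensed next (valid for i < K)."""
    return m / (M - i * m)

def p_attack_from_A(j: int, M: int, m: int) -> float:
    """Eq. (8): the victim avoids the j previously attacked channels."""
    return m / (M - j)

def sh_transitions_from_H(i, M, m, q, alpha, rho):
    """Eq. (6): transition probabilities out of H_i under action sh."""
    p_at = p_attack_from_H(i, M, m)
    stay_idle = (1 - alpha) ** q        # PU stays OFF during the q mini-slots of the attack
    return {
        "H_next": (1 - rho) * (1 - p_at),
        "A_1":    (1 - rho) * stay_idle * p_at,
        "P":      rho + (1 - rho) * (1 - stay_idle) * p_at,
    }

if __name__ == "__main__":
    probs = sh_transitions_from_H(i=1, M=10, m=2, q=4, alpha=0.05, rho=0.3)
    print(probs, "sum:", round(sum(probs.values()), 6))
```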
### _Steady-State Sojourn Time and Optimal Policy_
The performance of our proposed MDP-based attack strategy is evaluated through its ability to identify critical physical locations. We run the MDP framework on a 7-sectored physical surface (as shown in Fig. 7(a)) with varying degrees of importance toward the industrial automation system. In Fig. 7(a), the color intensity represents the importance of each sector (relative importance is also shown, e.g., S7 is 7 times more important than S4), the name of each sector is provided at the top, and the numerical values represent the normalized total sojourn time of the attacker at each sector. Here, S7 contains the critical devices, and the attacker spends the most time in that location (i.e., 0.99). Also, the attacker moves between S7 and its neighbor, S6, to avoid detection. Fig. 7(b) exhibits the optimal policy in each sector, where the arrow and circle represent the \(ms\) and \(sh\) actions, respectively. The dominant action is shown as a filled line with translucent color, and the secondary action is shown as a dotted line. For instance, the dominant action in S7 is \(sh\), and the secondary action is \(ms\) to S6 to avoid detection.
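The optimal policy shown in Fig. 7(b) can be obtained with any standard dynamic-programming solver. The generic value-iteration sketch below, with a toy two-state example, indicates the kind of computation involved; it is not the authors' implementation, and the states, rewards, and transition probabilities shown are invented purely for illustration.

```python
def value_iteration(states, actions, P, R, delta=0.95, tol=1e-8):
    """Generic MDP solver: P[s][a] is a list of (prob, next_state) pairs and
    R[s][a] an expected immediate reward; returns the value function and greedy policy."""
    V = {s: 0.0 for s in states}
    while True:
        V_new = {s: max(R[s][a] + delta * sum(p * V[s2] for p, s2 in P[s][a])
                        for a in actions)
                 for s in states}
        if max(abs(V_new[s] - V[s]) for s in states) < tol:
            V = V_new
            break
        V = V_new
    policy = {s: max(actions,
                     key=lambda a: R[s][a] + delta * sum(p * V[s2] for p, s2 in P[s][a]))
              for s in states}
    return V, policy

if __name__ == "__main__":
    states, actions = ["H", "A"], ["sh", "ms"]
    P = {"H": {"sh": [(0.7, "H"), (0.3, "A")], "ms": [(1.0, "H")]},
         "A": {"sh": [(0.6, "A"), (0.4, "H")], "ms": [(1.0, "H")]}}
    R = {"H": {"sh": -0.1, "ms": -0.4}, "A": {"sh": 0.9, "ms": -0.4}}
    V, policy = value_iteration(states, actions, P, R)
    print("greedy policy:", policy)
```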
## VI Conclusion
We proposed a novel mobility-powered Wi-Fi emulation attack model, i.e., PACMAN attack, which exploits the MAC-layer vulnerabilities in a Private 5G-enabled IAS. We also proposed an MDP-based mathematical model to study and assess different dimensions of this attack model. Numerical investigations and simulation results showed that the proposed attack successfully localized the physical locations of critical devices, significantly degraded the performance of the IAS, and compromised network operations. To the best of our knowledge, this is the first work to propose a mobility-powered smart attack against private 5G-enabled IAS.
|
2308.02773 | EduChat: A Large-Scale Language Model-based Chatbot System for
Intelligent Education | EduChat (https://www.educhat.top/) is a large-scale language model
(LLM)-based chatbot system in the education domain. Its goal is to support
personalized, fair, and compassionate intelligent education, serving teachers,
students, and parents. Guided by theories from psychology and education, it
further strengthens educational functions such as open question answering,
essay assessment, Socratic teaching, and emotional support based on the
existing basic LLMs. Particularly, we learn domain-specific knowledge by
pre-training on the educational corpus and stimulate various skills with tool
use by fine-tuning on designed system prompts and instructions. Currently,
EduChat is available online as an open-source project, with its code, data, and
model parameters available on platforms (e.g., GitHub
https://github.com/icalk-nlp/EduChat, Hugging Face
https://huggingface.co/ecnu-icalk ). We also prepare a demonstration of its
capabilities online (https://vimeo.com/851004454). This initiative aims to
promote research and applications of LLMs for intelligent education. | Yuhao Dan, Zhikai Lei, Yiyang Gu, Yong Li, Jianghao Yin, Jiaju Lin, Linhao Ye, Zhiyan Tie, Yougen Zhou, Yilei Wang, Aimin Zhou, Ze Zhou, Qin Chen, Jie Zhou, Liang He, Xipeng Qiu | 2023-08-05T02:55:35Z | http://arxiv.org/abs/2308.02773v1 | # EduChat: A Large-Scale Language Model-based Chatbot System
###### Abstract
EduChat1 is a large-scale language model (LLM)-based chatbot system in the education domain. Its goal is to support personalized, fair, and compassionate intelligent education, serving teachers, students, and parents. Guided by theories from psychology and education, it further strengthens educational functions such as open question answering, essay assessment, Socratic teaching, and emotional support based on the existing basic LLMs. Particularly, we learn domain-specific knowledge by pre-training on the educational corpus and stimulate various skills with tool use by fine-tuning on designed system prompts and instructions. Currently, EduChat is available online as an open-source project, with its code, data, and model parameters available on platforms (e.g., GitHub2, Hugging Face3). We also prepare a demonstration of its capabilities online4. This initiative aims to promote research and applications of LLMs for intelligent education.
Footnote 1: [https://www.educhat.top/](https://www.educhat.top/)
Footnote 2: [https://github.com/icalk-nlp/EduChat](https://github.com/icalk-nlp/EduChat)
Footnote 3: [https://huggingface.co/ecnu-icalk](https://huggingface.co/ecnu-icalk)
Footnote 4: [https://vimeo.com/851004454?share=copy](https://vimeo.com/851004454?share=copy)
## 1 Introduction
Recently, large-scale language models (LLMs), such as ChatGPT Schulman et al. (2022), LLaMa Touvron et al. (2023), have achieved great success in the field of natural language processing Zhou et al. (2023). LLMs obtained the ability of reasoning, long-range context modeling, and task generalization by training on large-scale textual corpus with some strategies, such as code pre-training Chen et al. (2021), instruction tuning Wei et al. (2022), and reinforcement learning from human feedback (RLHF) Stiennon et al. (2020). With the advent of LLMs, they have the potential to revolutionize intelligent education by providing personalized, comprehensive, and timely support to teachers, students, and parents.
However, there are several challenges in applying LLMs to the education domain. One challenge (**C1**) is that there is still a gap between LLMs and educational experts, since LLMs are pre-trained on general corpora, which lack sufficient educational knowledge and do not align well with real scenarios (e.g., essay assessment). The other challenge (**C2**) is that knowledge in the field of education is constantly being updated, while LLMs cannot learn up-to-date knowledge due to their training mechanism. Moreover, LLMs suffer from the hallucination problem and may generate responses that are not truthful.
To address these problems, we propose EduChat, an LLM-based chatbot system for intelligent education. For **C1**, we pre-train LLMs on a large number of educational books (e.g., psychology, ancient poetry) and 4 million cleaned diverse instructions to learn the fundamental knowledge. Then, we fine-tune the model on 500 thousand high-quality customized instructions to activate education-specific functions (e.g., essay assessment, Socratic teaching and emotional support), by aligning with the feedbacks from psychology experts and frontline teachers. For **C2**, we explore a retrieval-augmented technology, which enables LLMs to automatically judge the helpfulness of the retrieved information, and generate the response based on the relevant information and knowledge stored in LLMs. In this way, our EduChat can access the latest information from the internet, ensuring that the responses are accurate and credible. As an open-source project, EduChat improves the performance of education-specific functions while maintaining comparable foundational capabilities to other large-scale models with equivalent parameter size. The main contributions are as follows:
* We explore the potential of incorporating theories of psychology and education into LLMs, which
sheds light on how to adapt general LLMs to specific domains;
* Diverse system prompts and instructions are designed to control the tool use and stimulate different skills, which alleviates the problem of hallucination and is more applicable in real education scenarios;
* We develop and release the EduChat system with various educational functions, thus developers and researchers can help speed up the research and applications of intelligent education.
## 2 Related Work
Recently, LLMs like ChatGPT Schulman et al. (2022), ChatGLM Du et al. (2022), and LLaMA2-Chat Touvron et al. (2023) have emerged as a breakthrough technology in natural language processing, achieving strong performance on language generation and understanding through pre-training on massive text and instruction tuning.
While LLMs demonstrate impressive capabilities in general domains, their lack of subject-matter expertise becomes apparent when applied to specialized verticals. For instance, we can find specialized language models catering to various domains, such as ChatDoctor Li et al. (2023) and HuaTuoGPT Zhang et al. (2023) in healthcare, FinGPT Yang et al. (2023) in finance, and ChatLaw Cui et al. (2023) in the legal domain. These niche fields inherently necessitate models to possess comprehensive domain knowledge to address relevant queries, especially when assisting real users in practical scenarios. In education, Baladón et al. (2023) tune open-source LLMs for generating better teacher responses in the BEA 2023 Shared Task Tack et al. (2023). But challenges still exist, such as the lack of domain knowledge in general LLMs and the necessity for them to align with educational abilities (e.g., essay assessment, emotional support, and Socratic teaching). EduChat is pre-trained on a diverse education corpus to ensure its alignment with these educational abilities.
## 3 Core Functions of EduChat
Retrieval-Augmented Open Question Answering (QA)The education domain demands high accuracy and real-time updates regarding knowledge and related policies. However, existing generative LLMs suffer from issues like fabricating information and lagging behind in knowledge updates. To address this, we explore retrieval-augmented open QA methods. By utilizing real-time updated corpora from the internet as an external knowledge source, we enable LLMs to autonomously assess the relevance of retrieved information to answer a given question and decide which information to incorporate for generating responses. Through extensive experimental analysis, we discover that our model exhibits significant advantages over general LLMs in terms of eliminating fabrications and maintaining up-to-date knowledge.
Fine-grained Essay AssessmentIn essay assessment, teachers meticulously annotate grammar errors, provide scores, and offer feedback on standout sentences. Existing language models often have coarse granularity in grading, limiting students' writing skill improvement. Our research focuses on more fine-grained and comprehensive essay assessment. Combining frontline teaching professionals' expertise, we provide overall scores, aspect-level ratings, and detailed comments on content, expression, paragraph, and overall evaluation. Our model can identify standout sentences, highlighting strengths and areas for improvement, enabling personalized guidance for students' essay writing skills. This ensures timely and professional support in all aspects of writing.
Socratic TeachingWe focus on developing Socratic teaching capabilities in LLMs rather than providing direct answers to students. We adopt the Socratic dialogue method, engaging in multi-step question-and-answer interactions to encourage independent thinking. By stimulating discussions, debates, evaluations, and analyses, we aim to foster advanced cognitive skills and cultivate students' autonomy in learning. Our ultimate goal is to enhance critical thinking and innovation abilities to their fullest extent.
Psychology-based Emotional SupportAdolescents and children face more severe psychological pressures due to their immature cognitive development. Whereas, current LLMs usually provide generic advice, which can not well fit the specific emotional problem. To address this, we develop a psychological inquiry framework based on emotion psychology, such as Rational Emotive Behavior Therapy (REBT) and the ABC theory Ellis (1991). Our fine-tuned model can simulate a psychological counselor, providing personalized diagnoses and emotional support for users. EduChat fosters a deeper understanding of users' emotional states
and offers accurate and professional assistance.
## 4 Data Construction
### Pre-training Data
Textbooks DataIn our research, we gather a vast amount of educational textbook and online question bank data from Chinese middle and high school exams for pre-training. Additionally, we enrich our model with over 70,000 Chinese poetries, providing detailed information on authors, backgrounds, and poetry appreciation to enhance its poetry creation and appreciation capabilities. To facilitate empathetic emotional support dialogues, we carefully select 60 famous works from hundreds of psychology books. These selected books belong to two main categories. The first category consists of 15 branches of psychological theory, including developmental and educational psychology, social psychology, behavioral psychology, counseling psychology and others. The second category contains various psychological practices, which offer practical cases of psychological consultation and emotional support dialogues. By incorporating the diverse fundamental data into pre-training, our model gains a deeper understanding of education and psychology, enabling it to generate more helpful responses.
Fundamental Instruction DataTo achieve a more natural human-computer interaction, we collect a large volume of bilingual instruct tuning data from reputable open-source repositories like Alpaca5, BELLE (Ji et al., 2023), GPT4All6, OpenAssistant7, FLANCoT8, and Firefly9. The data spans various task types, enabling our models to acquire foundational instruction following capabilities for diverse instruction types. In addition, we source high-quality multi-turn dialogue data from MOSS (Sun et al., 2023), BELLE (Ji et al., 2023), COIG (Zhang et al., 2023a), LIDMA (Zhou et al., 2023a), and ShareGPT10. This data covers various dialogue contexts, including role-playing, creative writing, and code-related discussions, ensuring our models' competence in engaging and sustaining meaningful multi-turn conversations.
Footnote 5: [https://github.com/tatsu-lab/stanford_alpaca](https://github.com/tatsu-lab/stanford_alpaca)
Footnote 6: [https://github.com/nomic-ai/gpt4all](https://github.com/nomic-ai/gpt4all)
Footnote 7: [https://github.com/LAION-AI/Open-Assistant](https://github.com/LAION-AI/Open-Assistant)
Footnote 8: [https://huggingface.co/datasets/lucasmcacbe-lim/FLAN_CoT_alpaca_style](https://huggingface.co/datasets/lucasmcacbe-lim/FLAN_CoT_alpaca_style)
Footnote 9: [https://github.com/yangjianxin/Firefly](https://github.com/yangjianxin/Firefly)
Footnote 10: [https://huggingface.co/datasets/gozfarb/ShareGPT_Vicuna_unified](https://huggingface.co/datasets/gozfarb/ShareGPT_Vicuna_unified)
### Fine-tuning Data
To enhance the capability of education, we construct the **Educational Instruction Data** for fine-tuning, which covers retrieval-augmented open QA, emotional support, Socratic teaching and essay assessment. The distribution is shown in Figure 1.
Retrieval-Augmented Open QA DataTo address hallucination and timely knowledge issues in Open QA, we design a retrieval-augmented open QA technique. We sample high-quality data through ChatGPT scoring in relevant Open QA and Subject QA datasets. To tackle irrelevant retrieved content, we introduce self-checking. ChatGPT assesses whether the retrieved content helps answer the question and then generates the answer using a self-check, incorporating the useful retrieved content and the question. To maintain data quality, we manually verify the data during this process.
Emotional Support DataTo overcome the scarcity of Chinese emotional support dialogue data, we adopt a translation and expansion approach. We translate the widely-used English emotional support dataset, ESConv (Liu et al., 2021), into Chinese as ESConv-zh. After manual review and cleaning, we simulate multi-agent dialogues based on various patient scenarios within ESConv-zh and also collect real-life Chinese psychological counseling consultation data, incorporating patient information and diagnosis results. By training our models on diverse datasets, we empower them to provide robust emotional support and act as compassionate counselors during consultations.
Socratic Teaching DataTeachers play a key role in guiding and encouraging heuristic exploration rather than just providing answers. To support this, we generate dialogues simulating the Socratic teaching method by incorporating multi-step Q&A involving counter-questions, challenges, and inquiries. These dialogues are manually evaluated for accuracy, fluency, and progression from easy to complex questions. Integrating this dataset into training equips our model with a strong capability in Socratic teaching, distinguishing it from other LLMs that only offer direct answers.

Figure 1: Distribution of educational data.
Essay Assessment DataThe lack of timely and detailed feedback often hinders students' writing improvement. To tackle this issue, we create a high-quality essay assessment dataset. Initially, we collect essays and employ ChatGPT to evaluate them in terms of content, expression, and overall quality. To ensure data quality, we invite pedagogical experts to manually curate the comments. This dataset empowers EduChat with the ability to provide students with high-quality feedback, aiding in the enhancement of their writing skills.
### Data Preprocessing
To enhance data quality, we conduct semantic-level deduplication on the dataset. Using the sentence-transformers model Reimers and Gurevych (2019), we obtain sentence embeddings for each data point and calculate cosine similarity between all pairs of embeddings. For similarities exceeding a threshold of 0.7, we remove one of the duplicates. We implement the similarity calculation using CUDA for GPU acceleration, speeding up the process.
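A minimal sketch of this deduplication step: embed each example with a sentence-transformers model, compute pairwise cosine similarities, and greedily drop one member of any pair whose similarity exceeds 0.7. The specific checkpoint name and the keep-first policy are assumptions, since the paper does not state them.

```python
from sentence_transformers import SentenceTransformer
import numpy as np

def deduplicate(texts, threshold=0.7, model_name="paraphrase-multilingual-MiniLM-L12-v2"):
    model = SentenceTransformer(model_name)           # uses GPU automatically if available
    emb = model.encode(texts, normalize_embeddings=True, convert_to_numpy=True)
    sim = emb @ emb.T                                 # cosine similarity for normalized vectors
    keep = []
    for i in range(len(texts)):
        # keep example i only if it is not too similar to an already-kept earlier example
        if all(sim[i, j] <= threshold for j in keep):
            keep.append(i)
    return [texts[i] for i in keep]

if __name__ == "__main__":
    data = ["Explain photosynthesis to a child.",
            "Explain photosynthesis in simple words for a child.",
            "Write a poem about autumn."]
    print(deduplicate(data))
```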
## 5 EduChat
EduChat is an LLM designed for the education domain (Figure 2). We first **pre-train** on a large-scale education corpus (e.g., textbooks, instructions for foundational tasks) to learn the domain-specific and foundational knowledge. We then learn the pedagogical skills by **fine-tuning** EduChat on task-specific instruction datasets. Moreover, we leverage online **retrieval** to enhance the accuracy and timeliness of knowledge in its responses. To control skills, we design various **system prompts** to unlock different scenes with tool usage.
### Training Procedure of EduChat
The training of EduChat is mainly divided into two stages: fundamental capabilities acquisition and educational skills acquisition. In the first stage, we **pre-train** the model on educational books and Q&A pairs (detailed in Section 4.1) to equip it with foundational knowledge across disciplines. Besides, large-scale instruction tuning and open-domain dialogue datasets are also incorporated to enable basic instruction following ability and dialogue ability (detailed in Section 4.2). In the second stage, we develop EduChat's pedagogical skills by **fine-tuning** the model on our carefully curated data, including the retrieval-augmented open QA, emotional support, Socratic teaching and essay assessment datasets mentioned in Section 4.2.

Figure 2: The overall framework of EduChat.
### Online Knowledge Retrieval
Existing generative LLMs all suffer from the issues of generating hallucinations and outdated information, which is detrimental to an educational model. To mitigate this problem, we introduce self-check as shown in Figure 2. Specifically, when online knowledge retrieval is enabled, the model picks useful retrieval results by asking itself "Is this helpful for answering the question?" and appends the filtered snippets before the dialogue history.
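The self-check can be approximated by two chained LLM calls: one that asks whether each retrieved snippet helps answer the question, and one that answers using only the snippets judged helpful. In the sketch below, `generate` is a stand-in for an arbitrary LLM completion function, and the prompt wording is paraphrased rather than EduChat's actual prompt.

```python
def self_check_answer(question, snippets, generate):
    """generate(prompt) -> str is any LLM completion function (assumed interface)."""
    helpful = []
    for s in snippets:
        verdict = generate(
            f"Snippet: {s}\nQuestion: {question}\n"
            "Is this snippet helpful for answering the question? Answer yes or no."
        )
        if verdict.strip().lower().startswith("yes"):
            helpful.append(s)
    context = "\n".join(helpful) if helpful else "(no helpful retrieval results)"
    return generate(
        f"Use the following retrieved information when it is relevant.\n"
        f"{context}\n\nQuestion: {question}\nAnswer:"
    )

if __name__ == "__main__":
    fake_llm = lambda prompt: "yes" if "helpful" in prompt else "A placeholder answer."
    print(self_check_answer("Who proposed REBT?",
                            ["REBT was developed by Albert Ellis."], fake_llm))
```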
### System Prompt Design
Teachers always utilize various tools with different skills to enhance their teaching across different subjects. To enable EduChat to emulate an authentic teacher-student interaction, we carefully craft the system prompt that consists of personal profile, tool usage and skill selection (see Figure 2). Detailed settings can be found in Table 2.
**1) Personal Profile:** To remind the model of its own identity, the system prompt begins with: "EduChat is a conversational language model developed by East China Normal University."; **2) Tool Usage:** To regulate tool availability, the second part of the system prompt commences with "EduChat's tools:", listing all tool names and their respective accessibility. For instance, "Web search: Enable" indicates the model's ability to use retrieval, while "Calculator: Disable" signifies the model's inability to utilize a calculator; **3) Skill Selection:** Teachers in various settings possess unique communication skills, such as Socratic Teaching or Psychology-based Emotional Support. To cater to specific scenarios, we include function names at the end of the system prompt, which activates corresponding abilities based on the scene's requirements.
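A sketch of how such a three-part system prompt could be assembled; the profile sentence and the "Enable/Disable" tool lines follow the description above, while the exact layout and the trailing skill line are assumptions.

```python
def build_system_prompt(tools: dict, skill: str) -> str:
    # 1) personal profile reminding the model of its identity
    profile = ("EduChat is a conversational language model developed by "
               "East China Normal University.")
    # 2) tool usage: one "name: Enable/Disable" line per tool
    tool_lines = "\n".join(
        f"{name}: {'Enable' if on else 'Disable'}" for name, on in tools.items()
    )
    # 3) skill selection appended at the end to activate the scene-specific ability
    return f"{profile}\nEduChat's tools:\n{tool_lines}\nSkill: {skill}"

if __name__ == "__main__":
    prompt = build_system_prompt(
        tools={"Web search": True, "Calculator": False},
        skill="Socratic Teaching",
    )
    print(prompt)
```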
### Demonstration
We also develop a user-friendly demo system for EduChat (see Figure 3). Upon logging in, users can select from various functions, such as Open QA and Emotional Support, each offering a scene-specific system prompt to activate the corresponding ability. With this intuitive interface, users can easily engage in interactive conversations with EduChat to assist students, teachers and parents. Additionally, the system is designed to be adaptive, continuously learning from user interactions to further improve its capabilities and provide more personalized and effective assistance over time.
## 6 Experimental Results
### Results of C-Eval
Table 1 presents the results of our model on the C-Eval benchmark (Huang et al., 2023), a comprehensive Chinese evaluation suite for foundation models. The dataset consists of 13,948 multi-choice questions, spanning 52 diverse disciplines and categorized into four difficulty levels. Analyzing the table, we observe that our model achieves commendable performance compared to models with similar parameter scales, such as Chinese-Alpaca-13B and WestlackLM-19B. Notably, both EduChat and Chinese-Alpaca-13B are built on the LLaMa-13B base model. However, EduChat outperforms Chinese-Alpaca-13B by over seven points. Furthermore, our integration of retrieval into LLMs proves to be highly effective, demonstrating the power of our retrieval-augmented open QA technique in enhancing model performance.

Figure 3: Demo of EduChat.
### Case Studies
Figure 4 shows cases of EduChat on retrieval-augmented open QA and Socratic teaching. EduChat can provide a precise answer with the retrieved relevant information, and it learns to guide the student to solve problems step by step like a teacher. For emotional support, EduChat can interact like a psychological counselor rather than giving generic advice. Due to space limitations, we provide more cases of psychology-based emotional support and fine-grained essay assessment in the Appendix (Figure 5).
## 7 Conclusion
In this paper, we introduce EduChat, an LLM-based chatbot system for intelligent education. Our goal is to provide personalized, fair, and compassionate support to teachers, students, and parents. By leveraging psychology and education theories, we enhance educational functions like open QA, essay assessment, Socratic teaching, and emotional support. Through pre-training on educational corpus and fine-tuning with task-specific instructions, EduChat demonstrates great performance on the C-Eval benchmark. Overall, EduChat exhibits great potential towards revolutionizing intelligent education. In future work, we aim to expand EduChat on more functions, such as career planning, course guidance, question generation and so on.
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline
 & **STEM** & **Social Science** & **Humanities** & **Others** & **Avg(hard)** & **Avg** \\ \hline
GPT-4 & 67.1 & 77.6 & 64.5 & 67.8 & 54.9 & 68.7 \\
ChatGPT & 52.9 & 61.8 & 50.9 & 53.6 & 41.4 & 54.4 \\
Baichuan-13B & 47.0 & 66.8 & 57.3 & 49.8 & 36.7 & 53.6 \\
InternLM-7B & 48.0 & 67.4 & 55.4 & 45.8 & 37.1 & 52.8 \\
ChatGLM2-6B & 48.6 & 60.5 & 51.3 & 49.8 & 37.1 & 51.8 \\
WestlackLM-19B & 41.6 & 51.0 & 44.3 & 44.5 & 34.9 & 44.6 \\
Baichuan-7B & 38.2 & 52.0 & 46.2 & 39.3 & 31.5 & 42.8 \\
Chinese-Alpaca-33B & 37.0 & 51.6 & 42.3 & 40.3 & 30.3 & 41.6 \\
Chinese-Alpaca-13B & 31.6 & 37.2 & 33.6 & 32.8 & 27.3 & 33.3 \\
EduChat & 36.2 & 50.7 & 42.9 & 37.7 & 28.3 & 40.7 \\
EduChat (w Retrieval) & 43.5 & 59.3 & 53.7 & 46.6 & 33.1 & 49.3 \\ \hline \hline
\end{tabular}
\end{table}
Table 1: Results of C-Eval.
Figure 4: Cases of retrieval-augmented open QA and socratic teaching. |
2305.13682 | Correlated Anharmonicity and Dynamic Disorder Control Carrier Transport
in Halide Perovskites | Halide perovskites are an important class of semiconducting materials which
hold great promise for optoelectronic applications. In this work we investigate
the relationship between vibrational anharmonicity and dynamic disorder in this
class of solids. Via a multi-scale model parameterized from first-principles
calculations, we demonstrate that the non-Gaussian lattice motion in halide
perovskites is microscopically connected to the dynamic disorder of overlap
fluctuations among electronic states. This connection allows us to rationalize
the emergent differences in temperature-dependent mobilities of prototypical
MAPbI$_3$ and MAPbBr$_3$ compounds across structural phase-transitions, in
agreement with experimental findings. Our analysis suggests that the details of
vibrational anharmonicity and dynamic disorder can complement known predictors
of electronic conductivity and can provide structure-property guidelines for
the tuning of carrier transport characteristics in anharmonic semiconductors. | Maximilian J. Schilcher, David J. Abramovitch, Matthew Z. Mayers, Liang Z. Tan, David R. Reichman, David A. Egger | 2023-05-23T04:41:35Z | http://arxiv.org/abs/2305.13682v2 | # Correlated Anharmonicity and Dynamic Disorder
###### Abstract
Halide perovskites are an important class of semiconducting materials which hold great promise for optoelectronic applications. In this work we investigate the relationship between vibrational anharmonicity and dynamic disorder in this class of solids. Via a multi-scale model parameterized from first-principles calculations, we demonstrate that the non-Gaussian lattice motion in halide perovskites is microscopically connected to the dynamic disorder of overlap fluctuations among electronic states. This connection allows us to rationalize the emergent differences in temperature-dependent mobilities of prototypical MAPbI\({}_{3}\) and MAPbBr\({}_{3}\) compounds across structural phase-transitions, in agreement with experimental findings. Our analysis suggests that the details of vibrational anharmonicity and dynamic disorder can complement known predictors of electronic conductivity and can provide structure-property guidelines for the tuning of carrier transport characteristics in anharmonic semiconductors.
Halide perovskites (HaPs) are crystalline semiconductors that are relevant for a variety of technological applications, in particular as photovoltaic materials [1; 2; 3; 4; 5; 6]. The favorable device characteristics of HaPs are seemingly rooted in their optoelectronic properties [7; 8]. In particular, they possess direct band gaps and exciton binding energies smaller than the thermal energy at ambient conditions. These factors enable strong sunlight absorption and rapid separation of electrons and holes in HaP thin films. Furthermore, the low carrier effective masses in these materials signal efficient electronic transport. Together with low non-radiative recombination rates [9], these properties enable efficient capture of light-generated carriers at the contacts.
Interest in HaPs as a promising material platform is heightened by their tunability. In particular, chemical variation across the \(A\), \(B\), and \(X\) ions of their \(ABX_{3}\) stoichiometry can, in principle, create a knob with which to alter their properties with seemingly small changes in their overall structure [10]. Indeed, the electronic, vibrational and dielectric properties of HaPs can be adjusted _via_ tailoring their ionic composition even in high-symmetry HaP phases [7; 8; 10]. This is relevant technologically since it enables, _e.g._, control over the fundamental band gap which can be used to increase power-conversion efficiencies of HaP tandem solar cells [11].
However, the predictive power of established structure-property relationships is challenged in HaPs because their finite-temperature properties are unusual among optoelectronic materials [12], especially with respect to their charge transport characteristics. Experiment and theory agree that carrier mobilities around room temperature are limited by phonon scattering [13]. However several contradictions between experimental data and predictions from standard transport theories remain unexplained [14]. Indeed, the confluence of large amplitude, anharmonic atomic displacements [15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28] in a polar lattice _and_ dispersive electronic band structures [7; 8; 13] introduces behavior that is difficult to capture in standard theoretical models [14; 29; 30; 31; 32].
Specifically, HaPs have been discussed to feature ultrashort carrier relaxation times and mean-free paths on the order of only a few unit cells as shown experimentally and theoretically [33; 34; 35], which violates the Mott-Ioffe-Regel (MIR) criterion and renders the most widely-used versions of standard kinetic theory inapplicable [14; 36; 37]. Related to this, recent experimental [38] and theoretical studies [14; 39; 35] have highlighted the shortcomings of Boltzmann transport approaches in explaining the charge transport characteristics of HaPs that have been established experimentally [13]. Supporting this viewpoint, high-level numerical treatments confirm that in the Frohlich polaron model a quasiparticle-based momentum representation of charge carriers is inadequate in the intermediate coupling regime of relevance for semiconductors such as HaPs [40]: it was shown that for the intermediate coupling regime (\(\alpha=2.5\)) the MIR limit is violated in the Frohlich polaron model over a range of \(0.2<k_{\rm B}T/\hbar\omega<10\), with \(\hbar\omega\) being an optical phonon energy. [40] Using \(\hbar\omega_{\rm LO}\)\(\approx\)15 meV for MAPbI\({}_{3}\)[41], this translates into a wide temperature range of 30 K to 1740 K where the MIR limit is violated and standard kinetic theory does not apply, as more recently re-emphasized in Ref. [42].
In this context, it is interesting that the lattice dynamics in HaPs are localized in real space because of strong anharmonicity [16; 22; 28].
This type of vibrational anharmonicity occurs when the atomic motions in the system enter regimes of the potential energy surface that deviate from the harmonic approximation. However, traditional approaches to both the Frohlich polaron model and the Boltzmann transport equation employ the harmonic approximation. Together with the aforementioned further shortcomings of traditional kinetic methods to describe carrier scattering in HaPs that have been discussed in the literature, this motivates us to explore a real-space theoretical approach that leaves aside a purely particle-like momentum-space representation of carriers. Parametrizing such a method from first-principles and comparing the mechanism of charge transport across related materials enables us to detect how the transient localization of carriers influences their mobility.
Previous work by several of the present authors on the prototypical variant MAPbI\({}_{3}\) demonstrated that for near room temperature conditions, _dynamic disorder_ is prevalent. Namely, large atomic displacements induce strong fluctuations in electronic overlaps, which dictate carrier mobility and its temperature-dependence [39]. Lacroix _et al._ found that a Frohlich-type scattering, where strong disorder induces localization of charge, is consistent with measured carrier diffusion coefficients and experimentally-measured mobility magnitudes [35]. Both studies centered on mechanisms where the modulation of the electronic couplings by anharmonic atomic displacements, which have been found to be substantially nonlinear in MAPbI\({}_{3}\)[39; 43], are used to predict transport properties. However, the precise connections between vibrational anharmonicity and dynamic disorder are not known, despite their relevance for various systems, including organic [44; 45; 46; 47; 48; 49; 50; 51; 52] and ionic semiconductors, _e.g._, SrTiO\({}_{3}\)[53]. Since carrier scattering by phonons is a limiting mechanism for electronic transport close to room temperature, rationalizing the underlying microscopic origins and connections between anharmonicity and dynamic disorder is clearly required for the development of predictive structure-property relationships for HaPs and similarly for a broader class of anharmonic semiconductors. One way to establish such connections is _via_ comparison of related but distinct material compounds in regard to their dominant scattering mechanisms and charge-transport behavior.
In this letter, we investigate carrier dynamics in the prototypical anharmonic HaP semiconductors MAPbI\({}_{3}\) and MAPbBr\({}_{3}\) through a multi-scale theoretical model that is parameterized from first-principles calculations. Analyzing the temperature-dependent vibrational anharmonicity, it is found that MAPbBr\({}_{3}\) is significantly _more anharmonic_ at lower temperatures, in line with what can be expected from its lower tetragonal-to-cubic phase-transition temperature. We show that MAPbBr\({}_{3}\) has a reduced carrier mobility compared to MAPbI\({}_{3}\) in this temperature range because of the stronger anharmonicity of its lattice, which results in a weaker mobility temperature dependence overall. A spectral analysis of the dynamic disorder provides precise connections to anharmonicity, since both effects become more similar in the two compounds as temperature increases, until carrier mobilities are comparable. Our work supports a transient localization-type picture of carrier mobility in HaPs, where carrier diffusion follows atomic vibrations. It is demonstrated that carrier mobilities can be altered through anharmonicity and dynamic disorder, establishing these effects as handles for tuning transport properties in an important class of semiconductors.
We perform molecular dynamics (MD) calculations of MAPbI\({}_{3}\) and MAPbBr\({}_{3}\) to account for anharmonic vibrations at various temperatures that include the tetragonal and cubic phase of both materials. Specifically, we apply previously-reported force fields [54; 55] in order to enable large-scale/long-time MD calculations of \(16\times 16\times 16\) supercells (49152 atoms) with LAMMPS [56] (see Ref. [57] for details). Notably, the force-field MD calculations include anharmonic effects because they were shown to capture phenomena in MAPbI\({}_{3}\) and MAPbBr\({}_{3}\) that are explicitly anharmonic, _e.g._, temperature-induced lattice expansions and phase-transitions [54; 55].
Fig. 1 shows histograms of computed Pb-_X_ bond-distances of the two compounds at 200 K and 350 K that were extracted from \(NVT\)-MD production runs following extensive \(NpT\)-MD equilibration.
Figure 1: Histograms of the Pb-_X_ bond-distances in MAPbI\({}_{3}\) and MAPbBr\({}_{3}\) at 200 K (panel a) and 350 K (b), computed _via_ force-field-based MD calculations. The dashed lines are Gaussian fits to the respective distributions, where deviations to the actual data signify vibrational anharmonic effects. The mean value of Pb-_X_ bond-distances is set to zero in all plots. Histograms have been normalized by dividing each data point by the total number of points (number of bins: 50).
At 200 K, the Pb-Br bond-distance distribution is significantly more non-Gaussian than its Pb-I counterpart. In particular, the histograms reveal that deviations from Gaussian behavior for the larger-distance displacements in MAPbBr\({}_{3}\) are significantly more prominent at that temperature. This can be quantified by calculating the ratio of the standard deviations of the recorded Pb-_X_ bond-distance distribution and the Gaussian fit. At 200 K, these ratios are found to be 1.04 and 1.12 for MAPbI\({}_{3}\) and MAPbBr\({}_{3}\), respectively, confirming that the latter deviates more from harmonic behavior. The finding agrees with expectations borne out by the substantially lower tetragonal-to-cubic phase-transition temperature in MAPbBr\({}_{3}\) (\(\approx\)240 K) compared to MAPbI\({}_{3}\) (\(\approx\)330 K) [58]. It can be rationalized by the larger ionic radius of iodine, which implies that a higher thermal energy is required for reaching an on-average cubic symmetry of MAPbI\({}_{3}\) compared to MAPbBr\({}_{3}\).
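The anharmonicity measure used above can be reproduced in a few lines: histogram the centered Pb-_X_ bond distances, fit a Gaussian, and take the ratio of the sample standard deviation to the fitted one (unity for purely harmonic motion). The synthetic two-component sample below merely stands in for an MD trajectory.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, a, mu, sigma):
    return a * np.exp(-(x - mu) ** 2 / (2 * sigma ** 2))

def anharmonicity_ratio(bond_distances, bins=50):
    """Ratio of the sample std. dev. to the std. dev. of a Gaussian fit (1 = harmonic)."""
    d = bond_distances - bond_distances.mean()           # center the distribution at zero
    counts, edges = np.histogram(d, bins=bins, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    popt, _ = curve_fit(gaussian, centers, counts, p0=[counts.max(), 0.0, d.std()])
    return d.std() / abs(popt[2])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # surrogate trajectory: mostly harmonic motion plus a weak large-displacement tail
    sample = np.concatenate([rng.normal(3.20, 0.08, 90_000),
                             rng.normal(3.45, 0.12, 10_000)])
    print(f"anharmonicity ratio: {anharmonicity_ratio(sample):.3f}")
```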
In line with this expectation and our findings, previous work found that MAPbI\({}_{3}\) features a potential surface that is significantly more anharmonic in the cubic than in the tetragonal phase, where large-amplitude anharmonic displacements accompanying octahedral tiltings are confined to occur only in two spatial dimensions [59]. Furthermore, recent neutron scattering experiments comparing the two compounds found that the disorder correlation-length is significantly shorter in MAPbBr\({}_{3}\) at lower temperature, in line with our findings [28]. Accordingly, above the phase-transition temperature of MAPbI\({}_{3}\) at 340 K, when both materials are in the cubic phase, differences in the bond-distance distributions are minor and the two compounds are similarly anharmonic (see Fig. 1). Calculating the ratios of the standard deviations of the recorded Pb-_X_ bond-distance distribution and the Gaussian fit like above, we find them to be 1.08 and 1.12 for MAPbI\({}_{3}\) and MAPbBr\({}_{3}\), respectively, confirming that at 350 K the degree of anharmonicity in both compounds is more similar than at 200 K. The differences in anharmonic vibrational behaviors of MAPbI\({}_{3}\) and MAPbBr\({}_{3}\) at lower and higher temperatures allows for a determination of the impact of this effect on finite-temperature electronic structure and carrier dynamics.
We determine the finite-temperature electronic properties through a multi-scale tight-binding (TB) model (see [57] and Ref. [39] for details) that is parameterized _via_ first-principles MD and one-shot Wannier projections onto a local atomic basis [60], both using density functional theory (DFT) as implemented in VASP [61] and Quantum Espresso [62]. Importantly, this TB model is sensitive to structural fluctuations _via_ inclusion of distance-dependent onsite and overlap terms in the Hamiltonian which are fitted using DFT-based MD. The model employs temperature-dependent trajectories from force-field-based MD to obtain statistical information on the finite-temperature electronic structure and uses this information in conjunction with quantum-dynamical simulations of the carrier dynamics. The latter are performed using an Ehrenfest approach that neglects the back reaction forces on the lattice, applied on \(96\times 96\times 96\) real-space supercell Pb-_X_ motifs. The impact of back reaction forces on the carrier scattering is expected to be small: formation of a Frohlich polaron would require coherent long wavelength vibrations whereas in HaPs the relevant lattice dynamics are localized in real space.
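The following toy sketch illustrates the structure of such a multi-scale approach on a one-dimensional chain: hoppings and onsite energies are made functions of instantaneous bond distances taken from a surrogate trajectory, and the carrier is propagated on top of the frozen lattice motion without back-reaction. The exponential distance dependence and all parameter values are assumptions chosen for illustration, not the parameterization of Refs. [39; 57].

```python
import numpy as np

def hamiltonian(distances, t0=1.0, d0=3.2, lam=0.5, onsite=0.0, alpha=0.3):
    """Nearest-neighbour TB chain with distance-dependent hopping and onsite terms."""
    t = -t0 * np.exp(-(distances - d0) / lam)                 # overlap (hopping) terms
    H = np.diag(t, 1) + np.diag(t, -1)
    eps = onsite + alpha * np.r_[distances[0], 0.5 * (distances[1:] + distances[:-1]), distances[-1]]
    return H + np.diag(eps)                                   # environment-dependent onsites

rng = np.random.default_rng(1)
nsite, nstep, dt = 64, 200, 0.05
psi = np.zeros(nsite, complex); psi[nsite // 2] = 1.0         # initially localized carrier
sites = np.arange(nsite)
for step in range(nstep):
    d = 3.2 + 0.05 * rng.standard_normal(nsite - 1)           # surrogate "MD" snapshot of bond lengths
    w, U = np.linalg.eigh(hamiltonian(d))
    psi = U @ (np.exp(-1j * w * dt) * (U.conj().T @ psi))     # exact propagation over one time step
prob = np.abs(psi) ** 2
spread = np.sqrt(np.sum(sites**2 * prob) - np.sum(sites * prob) ** 2)
print(f"carrier spread after {nstep} steps: {spread:.2f} sites")
```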
The resulting temperature-dependent carrier mobilities of MAPbI\({}_{3}\) and MAPbBr\({}_{3}\) are shown in Fig. 2. In the region where MAPbBr\({}_{3}\) was found to exhibit more profound anharmonicity than MAPbI\({}_{3}\) (200-300 K, _cf._ Fig. 1), its mobility is reduced and its temperature dependence is flatter. When temperature is increased, progressively more anharmonic displacements appear in MAPbI\({}_{3}\) and the temperature-dependence of its mobility is concomitantly altered. Close to room temperature, where MAPbI\({}_{3}\) is still in the tetragonal phase, its carrier mobility remains noticeably higher than the one of MAPbBr\({}_{3}\). Interestingly, at 350 K the carrier mobilities of the compounds are comparable, since both are in the cubic phase and their atomic dynamics are similarly anharmonic (_cf._ Fig. 1).
The observed power-law behaviors of the mobilities (see Fig. 2) are in broad agreement with experimental observations [63, 64, 65, 66, 13, 33, 67, 68, 69]. In particular, the room temperature mobility magnitudes and the finding that MAPbI\({}_{3}\) is more conductive than MAPbBr\({}_{3}\) at that temperature match well with recent experimental findings [70]. It is noted that perfect agreement between theory and experiment, both for mobility magnitudes and temperature dependencies, cannot be expected because of experimental variations induced by sample fabrication and characterization methods [13] as well as neglect of certain mechanisms, _e.g._, defect scattering, in our model. Furthermore, our model applies approximate
Figure 2: Temperature-dependent charge carrier mobilities (sum of electrons and holes) for MAPbI\({}_{3}\) and MAPbBr\({}_{3}\) computed _via_ our multi-scale TB model and quantum dynamics approach. The lines represent best fits to the power-law behavior of the temperature-dependent mobility data.
treatments to calculate electronic properties and their dependencies on structural fluctuations, which may lead to additional inaccuracies. The finding that our approach correctly captures the changes of the mobility characteristics when comparing MAPbI\({}_{3}\) and MAPbBr\({}_{3}\) signifies that the model accounts for the carrier scattering mechanisms that determine charge transport behavior in these materials. In the following, we will provide a detailed description of these mechanisms.
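The power-law fits shown in Fig. 2 amount to a linear regression of \(\log\mu\) against \(\log T\); a minimal sketch (with placeholder mobility values rather than the computed data) is:

```python
import numpy as np

T  = np.array([160., 200., 250., 300., 350.])   # K (hypothetical temperature grid)
mu = np.array([210., 150., 105., 75., 55.])     # cm^2/Vs (placeholder values)

slope, intercept = np.polyfit(np.log(T), np.log(mu), 1)
print(f"mu(T) ~ T^{slope:.2f}")                 # slope is the (negative) power-law exponent
```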
We investigate the connections between anharmonicity and dynamic disorder through a statistical analysis of the temperature-dependent atomic and electronic dynamics. The vibrational density of states (VDOS) at 300 K (see Fig. 3a) shows prominent THz-range contributions in both compounds and a slight shift of the MAPbBr\({}_{3}\) spectrum to higher frequencies. A spectral analysis of the finite-temperature fluctuations of the corresponding onsite and overlap terms in the TB model is presented in Figs. 3b and c. Importantly, pronounced intensities in the \(t_{\mathrm{pp}\sigma}\) overlap fluctuations, which are the dominant scattering channel for carriers in these materials [39, 57], appear in a similarly low-frequency region as the pronounced intensities in the VDOS (_cf._ Figs. 3a and c). Furthermore, a shift to higher frequencies is seen in the \(t_{\mathrm{pp}\sigma}\) fluctuations for MAPbBr\({}_{3}\), similar to what is observed in the VDOS. Therefore, the overlap fluctuations follow the VDOS in both compounds. At 300 K, these fluctuations are more pronounced in the more anharmonic MAPbBr\({}_{3}\): standard deviations of the \(t_{\mathrm{pp}\sigma}\) fluctuations are 0.21 eV and 0.23 eV for MAPbI\({}_{3}\) and MAPbBr\({}_{3}\) at 300 K, respectively, confirming that the more anharmonic MAPbBr\({}_{3}\) is more dynamically disordered at that temperature.
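Spectral densities like those in Figs. 3b and c follow from Fourier transforming the fluctuations of the corresponding matrix element sampled along the trajectory; a minimal sketch with a synthetic time series (the sampling interval and test frequency are arbitrary choices) is:

```python
import numpy as np

def spectral_density(signal, dt):
    """One-sided power spectrum of the fluctuations of a real, evenly sampled time series."""
    x = np.asarray(signal) - np.mean(signal)
    spec = np.abs(np.fft.rfft(x)) ** 2 / len(x)
    freq = np.fft.rfftfreq(len(x), d=dt)        # frequency axis in 1/(units of dt)
    return freq, spec

dt = 10e-15                                     # 10 fs sampling interval
t = np.arange(0, 2e-11, dt)
sig = (0.2 * np.sin(2 * np.pi * 1.5e12 * t)     # a 1.5 THz oscillation buried in noise
       + 0.05 * np.random.default_rng(2).standard_normal(t.size))
freq, spec = spectral_density(sig, dt)
print(f"dominant frequency: {freq[np.argmax(spec[1:]) + 1] / 1e12:.2f} THz")
```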
To connect these findings to carrier dynamics, we construct a series of artificial onsite and coupling signals augmenting the original TB Hamiltonian (see Ref. [57]). Interestingly, when we increase the fluctuations in the \(t_{\mathrm{pp}\sigma}\) couplings of MAPbI\({}_{3}\) (see Fig. 3c for the corresponding spectral density) its carrier mobility is significantly reduced (by 20 cm\({}^{2}\)/Vs) at 300 K, while changes to the onsite terms have a smaller effect and a shift of the fluctuations to higher frequencies is inconsequential [57]. Therefore, what distinguishes the carrier dynamics in the two materials at lower temperature are differences in the degree of dynamic disorder.
Having established the critical role of dynamic disorder for the carrier mobility through the \(t_{\mathrm{pp}\sigma}\) fluctuations, it is interesting to analyze their temperature dependencies in both materials. Fig. 4 shows temperature-dependent relative fluctuations in Pb-\(X\) bond distances and \(t_{\mathrm{pp}\sigma}\) overlaps for the two compounds. Concurrent with the more anharmonic behavior of MAPbBr\({}_{3}\) at lower temperatures are larger relative fluctuations in bond distances compared to MAPbI\({}_{3}\), which become more similar as temperature is raised. Similarly, the relative fluctuations in \(t_{\mathrm{pp}\sigma}\) overlaps are larger in MAPbBr\({}_{3}\) at lower temperatures, but those of MAPbI\({}_{3}\) increase more strongly as temperature is raised, until they are very similar in the two materials at 350 K where both materials are in the cubic phase. Together with the findings outlined above, these data show that anharmonicity and dynamic disorder are _microscopically connected_ and appear to be the two critical factors determining the carrier mobility and its temperature dependence in HaPs.
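The relative fluctuations of Fig. 4 are simply the standard deviation divided by the mean of the corresponding quantity at each temperature; a minimal sketch with placeholder trajectories is:

```python
import numpy as np

def relative_fluctuation(samples):
    samples = np.asarray(samples)
    return samples.std() / samples.mean()

# e.g. {temperature in K: Pb-X distances (or t_pps overlaps) sampled along the run}
trajectories = {200: np.random.default_rng(5).normal(3.20, 0.08, 10000),
                350: np.random.default_rng(6).normal(3.25, 0.12, 10000)}
for T, d in trajectories.items():
    print(f"T = {T} K: sigma/mean = {relative_fluctuation(d):.3f}")
```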
Finally, we discuss the implications of our findings for modeling of electron-phonon interactions in soft, anharmonic materials more broadly. It is useful to attempt to rationalize our findings presented in Fig. 2 from a purely electronic structure perspective, using the static crystal structures and effective masses of the tetragonal and cubic phase for both materials. We find that the effective mass of MAPbI\({}_{3}\) is indeed lower than that of MAPbBr\({}_{3}\), in line with the majority of previous studies[71, 72, 73, 74, 75, 76, 77, 78, 79], which seemingly explains the trend we find up to \(\approx\)300 K. However, changes in the _relative differences_ of the effective masses of the two compounds upon undergoing the tetragonal-to-cubic phase transition show that they are significantly more similar in the tetragonal phase than in the cubic phase [57], which is opposite to the trend exposed in Fig. 2.
The effective masses of the compounds alone cannot explain our findings, which signifies potential limitations of a momentum-space quasiparticle representation of carriers
Figure 3: Vibrational density of states (VDOS, panel a) and spectral densities of onsite (panel b) and the \(t_{\mathrm{pp}\sigma}\) overlaps (panel c) at 300 K. Spectral densities were computed from instantaneous fluctuations occurring in the multi-scale TB model. The dashed lines show an artificial signal for the \(t_{\mathrm{pp}\sigma}\) spectral density where fluctuations have been manually increased by 20 %, which caused a mobility reduction of \(\approx\)20 cm\({}^{2}\)/Vs. All panels show the low-frequency region of the spectra.
at finite temperature in these systems. Indeed, established electron-phonon models rooted in band theory apply static electronic band structures as a starting point in a perturbative treatment of finite-temperature effects, for which the aforementioned findings concerning effective masses suggest that their predictive power may be limited. As a case in point, previous theoretical studies applying band theory and the Boltzmann transport equation generically report _stronger_ mobility temperature dependencies at lower temperatures when HaPs adopt lower-symmetry phases [80; 29; 81]. By contrast, experimental studies on various HaP compounds have consistently reported stronger mobility temperature dependencies in high-symmetry phases [66; 33; 38; 67].
In contrast to methods based on a momentum-space representation and perturbative electron-phonon couplings, the approach adopted here does not rely on band theory. Rather, it includes all carrier-phonon scattering effects that arise in the semiclassical treatment of the finite-temperature atomic motion in the material [39; 14]. Several methods based on effective harmonic potentials have been developed to extend the perturbative momentum-space electron-phonon methods to anharmonic materials [82; 31]. However, the non-Gaussian nature of the atomic displacements in Fig. 1 suggests that no effective harmonic potential would fully capture the lattice dynamics in MAPbI\({}_{3}\) and MAPbBr\({}_{3}\). This has also been discussed for the related CsPbBr\({}_{3}\) compound in previous work [83]. Additionally, perturbative momentum-space methods typically employ a linear electron-phonon coupling and neglect the phonon-phonon scattering resulting from anharmonicity, which is central to the lattice dynamics in HaPs [15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28]; such limitations are not present in our method. Moreover, our treatment of the quantum dynamics also accounts for the electronic dynamics at a higher level of theory than semi-classical Boltzmann transport approaches, and it naturally includes interband effects which are often excluded in other methods. Therefore, we hypothesize that the increased anharmonicity and dynamic disorder we found using this method reduce carrier mobilities and thus resolve remaining contradictions between experiment and theory on the carrier-scattering mechanisms that are active in HaPs. Hence, including finite-temperature effects directly into a dynamic-disorder-based representation of the carrier scattering enhances the predictive power of the theory.
In summary, we have studied the connections between anharmonicity and dynamic disorder by comparing two prototypical variants of anharmonic semiconductors, namely the HaPs MAPbI\({}_{3}\) and MAPbBr\({}_{3}\). Using a TB model parameterized from first-principles calculations, together with MD simulations and semiclassical quantum dynamics, we have rationalized subtle differences in the power-law behavior of temperature-dependent mobilities for both compounds. Most critically, we demonstrated that charge carriers follow the atomic dynamics by revealing that in the temperature region where MAPbBr\({}_{3}\) is more anharmonic, its charge carrier mobility is reduced. Our model and the real-space picture underlying it enabled us to determine that anharmonicity and dynamic disorder are connected to one another, and that they critically impact carrier mobility characteristics, including their temperature dependencies, in a systematic manner. These findings have relevance for the development of structure-property relations, which promise to be useful for tuning the properties of a wide class of semiconductors and anharmonic solids, as well as for the devices which utilize these relations for materials design.
We thank Andrew M. Rappe for past collaborations on related work. Funding provided by the Alexander von Humboldt-Foundation in the framework of the Sofja Kovalevskaja Award, endowed by the German Federal Ministry of Education and Research, by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) _via_ Germany's Excellence Strategy - EXC 2089/1-390776260, and by TU Munich - IAS, funded by the German Excellence Initiative and the European Union Seventh Framework Programme under Grant Agreement No. 291763, are gratefully acknowledged. The work of DRR was performed with support from the U.S. Department of Energy, Office of Science, Office of Advanced Scientific Computing Research, Scientific Discovery through Advanced Computing (SciDAC) program, under Award No. DE-SC0022088. This work was supported by the user program of the Molecular Foundry, a DOE Office of Science User Facility supported by the Office of Science of the U.S. Department of Energy
Figure 4: Relative fluctuations in Pb-_X_ bond-distances and \(t_{\mathrm{pp}\sigma}\) overlaps for both materials as a function of temperature. Data are calculated as the ratio between the standard deviation (\(\sigma_{d_{\mathrm{Pb-}X}}\) and \(\sigma_{t_{\mathrm{pp}\sigma}}\), respectively) and the mean value (\(\overline{d}_{\mathrm{Pb-}X}\) and \(\overline{t}_{\mathrm{pp}\sigma}\), respectively).
under Contract No. DE-AC02-05CH11231. The Gauss Centre for Supercomputing e.V. is acknowledged for providing computing time through the John von Neumann Institute for Computing on the GCS Supercomputer JUWELS at Julich Supercomputing Centre. This research used resources of the National Energy Research Scientific Computing Center, a DOE Office of Science User Facility supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231.
|
2307.05699 | Electrons interacting with Goldstone modes and the rotating frame | We consider electronic systems with a spontaneously broken continuous
symmetry. The scattering vertex between electrons and Goldstone modes is
calculated over the entire Brillouin zone using the random phase approximation.
This calculation reveals two things: (1) electrons always couple to both $\phi$
and $\partial_t \phi$, where $\phi$ is the Goldstone field, and (2)
quasi-particles in a state with continuous symmetry breaking have to be defined
in a rotating frame, which locally follows the fluctuations of the order
parameter. The implications of these findings for electron spectral functions
in both symmetry-broken and thermally disordered systems are discussed, and the
examples of anti-ferromagnetism in the Hubbard model and spin spiral order in
the three-band model are worked out in detail. | Konstantinos Vasiliou, Yuchi He, Nick Bultinck | 2023-07-11T18:04:33Z | http://arxiv.org/abs/2307.05699v2 | # Electrons interacting with Goldstone modes and the rotating frame
###### Abstract
We consider electronic systems with a spontaneously broken continuous symmetry. The scattering vertex between electrons and Goldstone modes is calculated over the entire Brillouin zone using the random phase approximation. This calculation reveals two things: (1) electrons always couple to both \(\phi\) and \(\partial_{t}\phi\), where \(\phi\) is the Goldstone field, and (2) quasi-particles in a state with continuous symmetry breaking have to be defined in a rotating frame, which locally follows the fluctuations of the order parameter. The implications of these findings for electron spectral functions in both symmetry-broken and thermally disordered systems are discussed, and the examples of anti-ferromagnetism in the Hubbard model and spin spiral order in the three-band model are worked out in detail.
###### Contents
* I Introduction
* I.1 Summary of results and connection to previous works
* I.2 Structure of the paper
* II Mean-field starting point
* III Hartree-Fock path integral
* IV Effective RPA interaction
* V Interacting electrons and Goldstone modes
* V.1 Properties of the Goldstone wavefunctions
* V.2 Electron-Goldstone scattering vertex
* V.3 Effective electron-boson model
* V.4 Transforming to the real basis
* VI The rotating frame
* VI.1 The problem: strong interband scattering in the \(\mathbf{q}\to 0\) limit
* VI.2 The solution: the rotating frame
* VI.3 Implications for electron spectral functions
* VII Example I: anti-ferromagnetism in the Hubbard model
* VII.1 Mean-field state and Goldstone mode energies
* VII.2 Electron-Goldstone scattering vertices
* VII.3 Electron spectral function
* VIII Example II: spin-spiral order in the three-band model
* VIII.1 Mean-field state and Goldstone mode energies
* VIII.2 Electron spectral function
* IX Conclusions
* A Solution of the Bethe-Salpeter equation and its properties
## I Introduction
Many strongly-correlated materials display continuous symmetry breaking in some part of their phase diagram. Some paradigmatic examples are spin-density wave orders in the cuprate and iron-based superconductors, and valley/spin orders in 2D moire materials.
As a result of the continuous symmetry-breaking, the low-energy spectrum contains gapless Goldstone modes. In this work we set out to calculate the scattering vertex between these Goldstone modes and the electrons. Specifically, our goal is to obtain an expression for the scattering vertex which is valid over the entire Brillouin zone, and not just in the long-wavelength limit for the Goldstone modes. One reason for going beyond a long-wavelength approximation is that just like acoustic phonons, Goldstone modes of the electron liquid should decouple from the electrons at long wavelengths. Hence one expects the most important scattering processes to be those where an electron emits or absorbs a zone-boundary Goldstone mode.
The starting point of our analysis is a simple mean-field description of the broken-symmetry state. From the optimal broken-symmetry Slater determinant, we then construct a path-integral representation of the partition function in order to study fluctuations on top of the mean-field state. Via the random phase approximation (RPA)1 we obtain both the Bethe-Salpeter equation for the collective modes (whose solution gives the Goldstone mode energies and wavefunctions), and an explicit expression for the electron-Goldstone mode scattering vertex. Using these results, we then construct an effective electron-boson model with both fermionic and bosonic fields, which emulates the interacting electron-Goldstone mode system. Towards the end of the paper we illustrate
our formalism with two examples: anti-ferromagnetism in the Hubbard model, and circular spin-spiral order in the three-band model.
### Summary of results and connection to previous works
Even though both the goal (obtaining the electron-Goldstone mode scattering vertex) and the methods (mean-field theory and RPA) of this work are straightforward and have previously been discussed in classic works such as e.g. Refs. [4; 5; 6], we nevertheless find that obtaining a result which is in line with physical expectations requires some non-trivial (and to the best of our knowledge new) steps. In this section we briefly discuss what these non-trivial steps are, why they are necessary, and what interesting physics is hiding behind them.
Via the RPA analysis we obtain a scattering vertex which in the long-wavelength limit agrees with the previous result of Ref. [7]. However, this result is not entirely satisfactory, as it seems to imply that long-wavelength Goldstone modes do not decouple from the electrons. Instead, they give rise to inter-band scattering processes for the electrons with a scattering vertex of the order of the mean-field bandgap which results from the spontaneous symmetry breaking. Hence the long-wavelength inter-band scattering is of the order of the interaction energy scale. Already at tree-level, exchange of Goldstone modes therefore introduces an effective interaction with a magnitude corresponding to the square of the bare repulsive interaction, divided by a Goldstone propagator which vanishes in the low-energy and long-wavelength limit. This leads to strong self-energy corrections for the mean-field electrons, casting doubt on their validity as well-defined quasi-particles.
However, we observe that the spurious contribution to the electron-Goldstone mode vertex is 'pure gauge', meaning that it can be removed by a gauge transformation in the path integral. This gauge transformation implements a spatially-dependent symmetry transformation on the mean-field electrons. In particular, the transformed electron fields are defined in a rotating frame which follows the local fluctuations of the order parameter. Crucially, electrons in the rotating frame completely decouple from long-wavelength Goldstone modes. We note that a similar rotating frame has already been introduced in previous studies of superconducting [8; 9] and magnetic [10; 11; 12] orders, but for different reasons. In particular, these previous works did not obtain the Goldstone mode energies and wavefunctions by solving the Bethe-Salpeter equation, and did not motivate the rotating frame by requiring a decoupling between electrons and Goldstone modes at long wavelengths.
The mean-field electrons are related via a frequency-independent, and hence non-dynamical, transformation to the microscopic electrons. The linear response functions which are measured in experiment are therefore directly related to correlation functions of the mean-field electrons. But we argued above that the mean-field electrons are not a good approximation to the true quasi-particles of the system - one should instead use the electrons in the rotating frame to approximate the quasi-particles. However, the electron propagator can be approximated as a convolution (in frequency and momentum space) of two propagators of well-defined quasi-particles: fermions in the rotating frame, and the Goldstone modes (we momentarily ignore the possibility of Landau damping for the Goldstone modes). This is explained in detail in Sec. VII.3. This result has previously also been obtained in Ref. [13], where, similarly to Ref. [12], the rotating frame is introduced to correctly describe fluctuations beyond mean-field theory via a Hubbard-Stratanovich decoupling of the interaction. Again, this work did not motivate the necessity of a rotating frame by deriving the electron-Goldstone mode scattering vertex in the RPA formalism as we do here. We note that a similar expression for the microscopic fermion two-point function as a convolution of two propagators is also obtained in theories where the electron is assumed to fractionalize in a spinon and a holon [14; 15; 16; 17; 18; 19; 20].
The RPA approach describes small fluctuations around the mean-field state. In the effective electron-boson model, which we construct to emulate the interacting electrons and Goldstone modes at the RPA level, the Goldstone fields are a collection of real scalar fields. However, once the dispersion relation of small order parameter fluctuations is known, it is straightforward to improve the theory and incorporate the correct global structure of the order parameter manifold by describing the Goldstone mode dynamics with a non-linear sigma model. This is especially important in 2D, where the non-linear sigma model is disordered at any non-zero temperature, and hence ensures that the effective electron-boson model respects the Hohenberg-Mermin-Wagner theorem.
As a final point, let us elaborate on the physical behaviour which can be extracted from the expression for the microscopic fermion propagator as a convolution of the Goldstone propagator and the propagator of the fermions in the rotating frame. In particular, let us consider 2D systems, and focus on temperatures \(T\) which are much smaller than the mean-field bandgap. If we use the correct non-linear sigma model action for the Goldstone mode dynamics, the Goldstone mode propagator at finite \(T\) will be invariant under global symmetry transformations, and acquire a thermal mass. As a result, the propagator for the microscopic fermions (which is a convolution containing the Goldstone mode propagator) will also be symmetric (this is explained in detail in Sec. VII.3, and illustrated via the examples in Sec. VII and VIII). However, the spectral weight of the microscopic electrons is still predominantly determined by the poles of the fermion propagator in the rotating frame. As we assume \(T\) to be much smaller than the mean-field bandgap, the poles of the rotating-frame fermion propagator will be located at the same energies of the symmetry-broken
mean-field band spectrum. As a result, we find that even though the microscopic fermion propagator is invariant under the continuous global symmetry, it nevertheless produces a spectral weight that is predominantly determined by the spectrum of the broken-symmetry state - and hence is very different from the spectral weight of the non-interacting fermions. We thus see that the introduction of a rotating frame, which is necessary to have the electrons decouple from the long-wavelength Goldstone modes, automatically leads to an expression for the electron propagator which captures the physically intuitive behaviour of an electron system with an order parameter that has well-defined magnitude, but whose orientation is disordered by thermal fluctuations.
### Structure of the paper
The remainder of this work is organized as follows. We start in Sec. II by introducing the mean-field starting point of our analysis. In Sec. III we use the optimal Slater determinant, which we obtain from the mean-field analysis, to construct a path integral representation of the partition function, which can be used to study fluctuations beyond mean-field. Using this path integral, we introduce the infinite series of RPA Feynman diagrams that give rise to the Bethe-Salpeter equation, and also the effective interaction between electrons mediated by collective mode exchange, in Sec. IV. In Sec. V we first discuss the general properties of the Goldstone mode wavefunctions that we obtain by solving the Bethe-Salpeter equation. We then construct an effective electron-boson model which mimics the interacting electron-Goldstone mode system at tree-level. In Sec. VI we use the properties of the Goldstone mode wavefunctions to study the electron-Goldstone mode scattering vertex obtained from the RPA analysis in the long-wavelength limit. Here we uncover the problematic inter-band scattering processes which do not vanish in the long-wavelength limit. In the same section, we then explain how this problem can be solved by doing a gauge transformation in the path integral, and defining fermions in a rotating frame. The implications of the rotating frame for the electron spectral functions are discussed at the end of this section. Finally, we illustrate the general formalism for antiferromagnetism in the Hubbard model in Sec. VII, and for circular spin-spiral order in the three-band model in Sec. VIII. The appendix reviews some properties of the Bethe-Salpeter equation and its solutions.
## II Mean-field starting point
In this section we set the stage for our main results presented below. In particular, we introduce the mean-field theory which is the starting point for our calculations. All the concepts discussed here are standard, and the main purpose of this section is therefore to introduce the context and notation necessary to understand the following sections.
We consider interacting electron systems described by the following general Hamiltonian:
\[H=\sum_{\mathbf{r},\mathbf{r}^{\prime}}\sum_{a,b}h(\mathbf{r}-\mathbf{r}^{ \prime})_{ab}c^{\dagger}_{\mathbf{r},a}c_{\mathbf{r}^{\prime},b}+\frac{1}{2} \sum_{\mathbf{r},\mathbf{r}^{\prime}}V(\mathbf{r}-\mathbf{r}^{\prime}):n_{ \mathbf{r}}n_{\mathbf{r}^{\prime}}:\,, \tag{1}\]
where \(\mathbf{r}\) denote the sites of a Bravais lattice in \(d\) spatial dimensions, \(h(\mathbf{r})\) is a general hopping Hamiltonian, and \(n_{\mathbf{r}}=\sum_{a}c^{\dagger}_{\mathbf{r},a}c_{\mathbf{r},a}\) is the electron density at site \(\mathbf{r}\). We also assume that \(h(\mathbf{r})\) is short range, and \(V(\mathbf{r})\) has a well-defined Fourier transform (for example it could be a screened Coulomb potential). Our results can be generalized to other Hamiltonians, for example with other (not density-density) interactions, but for ease of presentation we do not consider these generalizations here.
We will make use of the translation invariance and work in momentum space, where the Hamiltonian in Eq. (1) takes the form
\[H=\sum_{\mathbf{k}}\sum_{a,b}h(\mathbf{k})_{ab}c^{\dagger}_{\mathbf{k},a}c_{ \mathbf{k},b}+\frac{1}{2}\sum_{\mathbf{q}}V(\mathbf{q}):n_{\mathbf{q}}n_{- \mathbf{q}}:\,, \tag{2}\]
where
\[n_{\mathbf{q}}=\frac{1}{\sqrt{N}}\sum_{\mathbf{k}}\sum_{a}c^{\dagger}_{\mathbf{ k}+\mathbf{q},a}c_{\mathbf{k},a}\,, \tag{3}\]
with \(N\) the number of sites, is the Fourier transform of the electron density.
We are interested in Hamiltonians with a continuous symmetry, such as spin rotation symmetry, which gets broken spontaneously. The starting point of our analysis is a mean-field treatment of the symmetry breaking at zero temperature. In particular, we assume that the Slater determinants which minimize the energy of \(H\) are not invariant under the continuous symmetry of the Hamiltonian. Let us choose one such Slater determinant and write it as
\[|\psi_{0}\rangle=\prod_{\mathbf{k}}\prod_{i=1}^{N_{\alpha}(\mathbf{k})}c^{ \dagger}_{\mathbf{k},i}|0\rangle\,, \tag{4}\]
where \(c^{\dagger}_{\mathbf{k},i}\) creates an electron in a mean-field single-particle state, and \(N_{\alpha}(\mathbf{k})\) is the number of occupied states at momentum \(\mathbf{k}\). Note that we have assumed that the optimal Slater determinants preserve the translation symmetry. Also this restriction can be relaxed, but we will not do this here. The mean-field single-particle states (both occupied and unoccupied) will be labeled by Greek indices, and the corresponding creation operators are defined as
\[c^{\dagger}_{{\bf k},\alpha}=\sum_{a}u^{a}_{\alpha}({\bf k})c^{\dagger}_{{\bf k},a }\,, \tag{5}\]
where \(|u_{\alpha}({\bf k})\rangle\) are a set of orthonormal vectors at every \({\bf k}\): \(\langle u_{\alpha}({\bf k})|u_{\beta}({\bf k})\rangle=\delta_{\alpha\beta}\). We will use the convention that \(i,j,k\) denote occupied mean-field states, whereas \(m,n,o\) denote the unoccupied states. The single-particle correlation matrix of the optimal Slater determinant in Eq. (4), which by assumption is diagonal in momentum space, is thus given by
\[P({\bf k})_{\alpha\beta}\ :=\ \langle\psi_{0}|c^{\dagger}_{{\bf k},\beta}c_{{ \bf k},\alpha}|\psi_{0}\rangle=\delta_{\alpha\beta}n_{\alpha}({\bf k})\,, \tag{6}\]
with \(n_{\alpha}({\bf k})\in\{0,1\}\), \(n_{i/j/k}({\bf k})=1\) and \(n_{m/n/o}({\bf k})=0\).
Next we construct the Hartree-Fock mean-field Hamiltonian using \(P({\bf k})\). For future use we find it convenient to first rewrite the exact Hamiltonian in the mean-field basis:
\[H = \sum_{\bf k}h({\bf k})_{\alpha\beta}c^{\dagger}_{{\bf k},\alpha }c_{{\bf k},\beta}+\frac{1}{2}\sum_{\bf q}V({\bf q}):n_{\bf q}n_{-\bf q}: \tag{7}\] \[n_{\bf q} = \frac{1}{\sqrt{N_{s}}}\sum_{\bf k}\sum_{\alpha,\beta}c^{\dagger }_{{\bf k}-{\bf q},\alpha}[\Lambda_{\bf q}({\bf k})]_{\alpha\beta}c_{{\bf k}, \beta}\,, \tag{8}\]
where \(h({\bf k})_{\alpha\beta}=\sum_{a,b}u^{a*}_{\alpha}({\bf k})h({\bf k})_{ab}u^{ b}_{\beta}({\bf k})\), and
\[[\Lambda_{\bf q}({\bf k})]_{\alpha\beta}:=\langle u_{\alpha}({\bf k}-{\bf q} )|u_{\beta}({\bf k})\rangle\,. \tag{9}\]
As we are only considering translationally invariant states, the Hartree Hamiltonian is trivial and is simply given by
\[H_{H}[P]=V(0)\frac{N_{e}}{N}\sum_{\bf k}\sum_{\alpha}c^{\dagger}_{{\bf k}, \alpha}c_{{\bf k},\alpha}\,, \tag{10}\]
where \(N_{e}\) is the number of electrons in the system. The Fock Hamiltonian is given by
\[H_{F}[P]=\frac{-1}{N}\sum_{\bf q,k}\sum_{\alpha,\beta,i}V({\bf q})[\Lambda_{- \bf q}({\bf k}-{\bf q})]_{\alpha i}[\Lambda_{\bf q}({\bf k})]_{i\beta}c^{ \dagger}_{{\bf k},\alpha}c_{{\bf k},\beta} \tag{11}\]
The assumption that the Slater determinant in Eq. (4) is a variational energy minimum is equivalent to the statement that the Hartree-Fock self-consistency equation is satisfied, which in our notation takes the following form:
\[\sum_{\alpha,\beta}h({\bf k})_{\alpha\beta}c^{\dagger}_{{\bf k},\alpha}c_{{ \bf k},\beta}+H_{H}[P]+H_{F}[P]=\sum_{\alpha}E_{{\bf k},\alpha}c^{\dagger}_{{ \bf k},\alpha}c_{{\bf k},\alpha}\,, \tag{12}\]
where \(E_{{\bf k},\alpha}\) are the mean-field single-particle energies satisfying \(\mbox{sgn}(E_{{\bf k},\alpha})=1-2n_{\alpha}({\bf k})\).
## III Hartree-Fock path integral
We are interested in fluctuations beyond mean-field theory, and in particular in the role of the Goldstone modes associated with the spontaneous symmetry breaking. To study these fluctuations, we will use the path integral formalism. However, we will not use the standard path integral construction which relies on fermionic coherent states constructed on top of the Fock vacuum. Instead, we will construct coherent states on top of the Slater determinant in Eq. (4). Working in the single-particle mean-field basis, coherent states associated with unoccupied states take on the conventional form:
\[|\psi_{{\bf k},n}\rangle = (1-\psi_{{\bf k},n}c^{\dagger}_{{\bf k},n})|0\rangle \tag{13}\] \[\langle\bar{\psi}_{{\bf k},n}| = \langle 0|(1-c_{{\bf k},n}\bar{\psi}_{{\bf k},n})\,, \tag{14}\]
where \(\psi_{{\bf k},n}\) and \(\bar{\psi}_{{\bf k},n}\) are Grassmann numbers. These coherent states have the usual properties:
\[c_{{\bf k},n}|\psi_{{\bf k},n}\rangle = \psi_{{\bf k},n}|\psi_{{\bf k},n}\rangle \tag{15}\] \[\langle\bar{\psi}_{{\bf k},n}|\psi_{{\bf k},n}\rangle = e^{\bar{\psi}_{{\bf k},n}\psi_{{\bf k},n}}\,. \tag{16}\]
For the occupied states, we will use _particle-hole transformed_ coherent states, which we define as
\[|\bar{\psi}_{{\bf k},i}\rangle = (1-\bar{\psi}_{{\bf k},i}c_{{\bf k},i})c^{\dagger}_{{\bf k},i}|0\rangle \tag{17}\] \[\langle\psi_{{\bf k},i}| = \langle 0|c_{{\bf k},i}(1-c^{\dagger}_{{\bf k},i}\psi_{{\bf k},i})\,, \tag{18}\]
where \(\bar{\psi}_{{\bf k},i}\) and \(\psi_{{\bf k},i}\) are again Grassmann numbers. The particle-hole transformed coherent states satisfy the following properties:
\[c^{\dagger}_{{\bf k},i}|\bar{\psi}_{{\bf k},i}\rangle = \bar{\psi}_{{\bf k},i}|\bar{\psi}_{{\bf k},i}\rangle \tag{19}\] \[\langle\psi_{{\bf k},i}|\bar{\psi}_{{\bf k},i}\rangle = e^{\psi_{{\bf k},i}\bar{\psi}_{{\bf k},i}}\,, \tag{20}\]
To construct a path integral representation of the partition function, we need to insert resolutions of the identity in terms of the coherent states, which are given by
\[\int\mbox{d}\psi_{{\bf k},n}\int\mbox{d}\bar{\psi}_{{\bf k},n}e^ {-\bar{\psi}_{{\bf k},n}\psi_{{\bf k},n}}|\psi_{{\bf k},n}\rangle\langle\bar{ \psi}_{{\bf k},n}| = \mathds{1} \tag{21}\] \[\int\mbox{d}\bar{\psi}_{{\bf k},i}\int\mbox{d}\psi_{{\bf k},i}e^ {-\psi_{{\bf k},i}\bar{\psi}_{{\bf k},i}}|\bar{\psi}_{{\bf k},i}\rangle\langle \psi_{{\bf k},i}| = \mathds{1} \tag{22}\]
Using the above resolutions of the identity, the partition function at temperature \(T\) can be written as
\[Z(T) = \mbox{tr}\left(e^{-H/T}\right) \tag{23}\] \[= e^{-E^{HF}_{0}/T}\int[D\psi]\int[D\bar{\psi}]\,e^{-S}\,, \tag{24}\]
where \(E_{0}^{HF}=\langle\psi_{0}|H|\psi_{0}\rangle\) is the Hartree-Fock variational ground state energy, and the action \(S\) is given by
\[S= \int_{0}^{1/T}\mathrm{d}\tau\,\sum_{\mathbf{k}}\sum_{\alpha}\bar{\psi}_{\mathbf{k},\alpha}(\partial_{\tau}+E_{\mathbf{k},\alpha})\psi_{\mathbf{k},\alpha}\] \[+ \frac{1}{2N}\sum_{\mathbf{q},\mathbf{k},\mathbf{k}^{\prime}}V(\mathbf{q})\left(\bar{\psi}_{\mathbf{k}-\mathbf{q}}\Lambda_{\mathbf{q}}(\mathbf{k})\psi_{\mathbf{k}}\right)\left(\bar{\psi}_{\mathbf{k}^{\prime}+\mathbf{q}}\Lambda_{-\mathbf{q}}(\mathbf{k}^{\prime})\psi_{\mathbf{k}^{\prime}}\right)\,, \tag{25}\]
where in the last line we have suppressed the indices \(\alpha,\beta,\dots\). A few comments are in order. First, note that the kinetic term of the action contains the mean-field single-particle energies, and not the bare band energies (which would correspond to the eigenvalues of \(h(\mathbf{k})\)). So it will be the properties of the renormalized mean-field bands, e.g. their nesting or density-of-states, which controls the perturbation theory in the interaction. Secondly, we emphasize that no approximation is involved in our derivation of the path integral - Eqs. (24), (25) constitute an exact representation of the partition function. Thirdly, the unusual form of the action, and the additional factor \(\exp(-E_{0}^{HF}/T)\) in Eq. (24), result from the different form of normal ordering required by the use of particle-hole transformed coherent states: one has to normal order the Hamiltonian with respect to the Hartree-Fock ground state, and not with respect to the Fock vacuum state. This different choice of normal ordering produces additional quadratic terms, which exactly correspond to the Hartree and Fock Hamiltonians, which combined with the bare kinetic term produce the mean-field single-particle energies via the Hartree-Fock self-consistency equation (12).
In working with the path integral in Eqs. (24), (25) the conventional imaginary-time Feynman rules can be used, except for equal-time diagrams. These diagrams are usually defined by inserting a factor \(e^{i\epsilon\omega_{n}}\), where \(\omega_{n}\) is the fermionic Matsubara frequency, and taking the limit \(\epsilon\to 0\) at the end of the calculation. Here, to reflect the different normal ordering, one has to add the factor \(e^{i\epsilon\omega_{n}\mathrm{sgn}(E_{\mathbf{k},\alpha})}\) to the equal-time propagators. With this regularization, the Hartree and Fock self-energy diagrams vanish at zero temperature. This makes sense physically, as the Hartree-Fock path integral already takes these self-energy effects into account from the outset, and one should not double count these terms.
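As a short consistency check (using the standard Matsubara sums, with \(\epsilon\to 0^{+}\) taken at the end), the regularized equal-time propagator evaluates to

\[T\sum_{\omega_{n}}\frac{e^{i\epsilon\omega_{n}\mathrm{sgn}(E_{\mathbf{k},\alpha})}}{i\omega_{n}-E_{\mathbf{k},\alpha}}=\begin{cases}n_{F}(E_{\mathbf{k},\alpha})&E_{\mathbf{k},\alpha}>0\,,\\ n_{F}(E_{\mathbf{k},\alpha})-1&E_{\mathbf{k},\alpha}<0\,,\end{cases}\]

which indeed vanishes in the \(T\to 0\) limit for both signs of \(E_{\mathbf{k},\alpha}\), so every Hartree or Fock insertion built from such equal-time lines drops out at zero temperature.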
To conclude this section, we present the diagrammatic Feynman rules that will be used in this work. First, the electron propagator is represented in the usual way as a straight line with an arrow:
\[=(i\omega_{n}-E_{\mathbf{k},\alpha})^{-1}\,\,\,.\]
The interaction will be represented as a dashed line:
[Diagram: dashed interaction line attached to electron lines with external labels \(\mathbf{k},\alpha\) and \(\mathbf{k}-\mathbf{q},\beta\).]
With these definitions and conventions in place, we can now turn our attention to the RPA fluctuations on top of the mean-field result, which is the topic of the next section.
## IV Effective RPA interaction
In the previous section we derived the Hartree-Fock path integral, which contains the complete bare Coulomb repulsion in the action. Here we study the effective action which arises from summing all contributions from RPA diagrams. In Fig. 1 we show these diagrams. They contain, among others, the familiar bubble/polarization diagrams, and Berk-Schrieffer diagrams [21]. The central object defined in Fig. 1 is called \(G_{2}(\mathbf{q},i\nu)\), which is a matrix diagonal in momentum \(\mathbf{q}\) and bosonic Matsubara frequency \(i\nu\). Writing out the indices explicitly, this matrix is \([G_{2}(\mathbf{q},i\nu)]^{\mathbf{k}\alpha\beta}_{\mathbf{k}^{\prime}\lambda\sigma}\). It is defined diagrammatically in Fig. 1(b) as the infinite sum of direct and exchange diagrams concatenated with electron propagators. An explicit expression for this infinite sum can be obtained by solving the familiar Bethe-Salpeter equation, which is shown diagrammatically in Fig. 2. Written out explicitly, the Bethe-Salpeter equation for \(G_{2}(\mathbf{q},i\nu)\) is
\[[G_{2}(\mathbf{q},i\nu)]^{\mathbf{k}\alpha\beta}_{\mathbf{k}^{\prime}\lambda\sigma}=\Bigg{[}-T\sum_{i\omega_{n}}\frac{1}{i\omega_{n}-E_{\mathbf{k},\alpha}}\frac{1}{i(\omega_{n}-\nu)-E_{\mathbf{k}+\mathbf{q},\beta}}\Bigg{]}\times\] \[\Bigg{[}\delta_{\alpha\lambda}\delta_{\beta\sigma}\delta_{\mathbf{k},\mathbf{k}^{\prime}}-\frac{1}{N_{s}}\sum_{\mathbf{k}^{\prime\prime}\mu\nu}\bigg{(}V(\mathbf{q})[\Lambda_{\mathbf{q}}(\mathbf{k})]_{\beta\alpha}[\Lambda_{-\mathbf{q}}(\mathbf{k}^{\prime\prime}-\mathbf{q})]_{\mu\nu}-V(\mathbf{k}^{\prime\prime}-\mathbf{k}^{\prime})[\Lambda_{\mathbf{k}-\mathbf{k}^{\prime\prime}}(\mathbf{k})]_{\mu\alpha}[\Lambda_{\mathbf{k}^{\prime\prime}-\mathbf{k}}(\mathbf{k}^{\prime\prime}-\mathbf{q})]_{\beta\nu}\bigg{)}[G_{2}(\mathbf{q},i\nu)]^{\mathbf{k}^{\prime\prime}\mu\nu}_{\mathbf{k}^{\prime}\lambda\sigma}\Bigg{]} \tag{26}\]
The details of the Bethe-Salpeter equation and its solution are presented in Appendix A. As the Bethe-Salpeter
equation is well-known, we simply present the results here. At \(T=0\), the general form of \(G_{2}({\bf q},i\nu)\) is
\[\left[G_{2}(i\nu,{\bf q})\right]^{{\bf k},\alpha\beta}_{{\bf k}^{\prime},\lambda \sigma}=\sum_{s}\varphi^{s}_{{\bf q},\alpha\beta}({\bf k})\frac{1}{\omega_{{\bf q },s}-i\nu\,\eta_{{\bf q},s}}\varphi^{s*}_{{\bf q},\lambda\sigma}({\bf k}^{ \prime})\,, \tag{27}\]
where \(\omega_{{\bf q},s}\) are the energies of the RPA collective modes, which are guaranteed to be real and positive if the Slater determinant in Eq. (4) is a local energy minimum, i.e. if it satisfies the Hartree-Fock self-consistency equations (12). The sum in Eq. (27) runs only over those modes which have non-zero energy. Eq. (27) shows that \(G_{2}({\bf q},i\nu)\) takes the form of a propagator for the collective modes, which consist of electron-hole pairs with a pair-wavefunction \(\varphi^{s}_{{\bf q},\alpha\beta}({\bf k})\).
The collective mode wavefunctions \(\varphi^{s}_{{\bf q},\alpha\beta}({\bf k})\) and energies \(\omega_{{\bf q},s}\) are obtained by solving the following generalized eigenvalue equation:
\[(n_{\alpha}({\bf k})-n_{\beta}({\bf k}-{\bf q}))\varphi^{s}_{{\bf q },\alpha\beta}({\bf k})\eta_{{\bf q},s}\omega_{{\bf q},s}=\] \[|E_{\alpha}({\bf k})-E_{\beta}({\bf k}-{\bf q})|\varphi^{s}_{{\bf q },\alpha\beta}({\bf k})\] \[+V({\bf q})\frac{1}{N}\sum_{{\bf k}^{\prime}}{\rm tr}\left( \varphi^{s}_{{\bf q}}({\bf k}^{\prime})\Lambda_{{\bf q}}({\bf k}^{\prime}) \right)\left[\Lambda^{\dagger}_{{\bf q}}({\bf k})\right]_{\alpha\beta} \tag{28}\] \[-\frac{1}{N}\sum_{{\bf q}^{\prime}}V({\bf q}^{\prime})\left[ \Lambda^{\dagger}_{{\bf q}^{\prime}}({\bf k})\varphi^{s}_{{\bf q}}({\bf k}-{ \bf q}^{\prime})\Lambda_{{\bf q}^{\prime}}({\bf k}-{\bf q})\right]_{\alpha\beta }\,.\]
Note that \(\varphi^{s}_{{\bf q},\alpha\beta}({\bf k})\) is defined to be zero when \(n_{\alpha}({\bf k})=n_{\beta}({\bf k}-{\bf q})\). For collective modes with non-zero energy \(\omega_{{\bf q},s}\), the signs \(\eta_{{\bf q},s}\) are related to the collective mode wavefunctions through the following relation:
\[\sum_{{\bf k}}\sum_{i,n}\left(|\varphi^{s}_{{\bf q},ni}({\bf k})|^{2}-|\varphi^ {s}_{{\bf q},in}({\bf k})|^{2}\right)=\eta_{{\bf q},s}=\pm 1\,, \tag{29}\]
where we have again used the convention that \(i\) labels occupied states, and \(n\) labels empty states. The physical meaning of the \(\eta_{{\bf q},s}\) sign factors and their appearance in the denominator of the collective mode propagator in Eq. (27) can be understood as follows. The Bethe-Salpeter equation at momentum \({\bf q}\) describes both the collective mode creation operators, which we denote as \(b^{\dagger}_{{\bf q}}\), and the collective mode annihilation operators, denoted as \(b_{-{\bf q}}\). The collective mode propagator therefore contains contributions of the form \(\int{\rm d}\tau e^{i\nu\tau}\langle\hat{T}b_{-{\bf k}}(\tau)b^{\dagger}_{-{\bf k }}(0)\rangle\) and \(\int{\rm d}\tau e^{i\nu\tau}\langle\hat{T}b^{\dagger}_{{\bf k}}(\tau)b_{{\bf k }}(0)\rangle\), where \(\hat{T}\) is the time-ordering operator. The latter can be rewritten as \(\int{\rm d}\tau e^{-i\nu\tau}\langle\hat{T}b_{{\bf k}}(\tau)b^{\dagger}_{{\bf k }}(0)\rangle\), hence the origin of the minus sign in front of \(i\nu\) in some of the collective mode propagators. From this we conclude that \(\eta_{{\bf q},s}=1\) means that \(\varphi^{s}_{{\bf q}}\) corresponds to a collective mode creation operator, whereas \(\eta_{{\bf q},s}=-1\) means that \(\varphi^{s}_{{\bf q}}\) corresponds to a collective mode annihilation operator.
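In practice, Eq. (28) is solved by diagonalizing \(N^{-1}K=NK\), where \(N\) is the diagonal metric with entries \(n_{\alpha}(\mathbf{k})-n_{\beta}(\mathbf{k}-\mathbf{q})=\pm 1\) and \(K\) denotes the kernel on the right-hand side; the sign \(\eta_{\mathbf{q},s}\) is then read off from the metric norm of Eq. (29). The toy sketch below illustrates this classification with RPA-structured random matrices (the kernel is a placeholder, not one built from Eq. (26)).

```python
import numpy as np

rng = np.random.default_rng(3)
n = 4                                           # number of particle-hole pairs kept
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
A = A @ A.conj().T + 3.0 * np.eye(n)            # Hermitian and positive: stable mean-field state
B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
B = 0.2 * (B + B.T)                             # symmetric coupling block (toy)

K = np.block([[A, B], [B.conj(), A.conj()]])    # Hermitian RPA-like kernel
N = np.diag(np.r_[np.ones(n), -np.ones(n)])     # metric n_alpha(k) - n_beta(k-q) = +/-1

vals, vecs = np.linalg.eig(N @ K)               # eigenvalues are eta * omega
order = np.argsort(vals.real)
for lam, phi in zip(vals.real[order], vecs[:, order].T):
    eta = np.sign((phi.conj() @ N @ phi).real)  # Eq. (29): +1 creation, -1 annihilation
    print(f"eta*omega = {lam:+.3f}   eta = {eta:+.0f}   omega = {eta * lam:.3f}")
```

The eigenvalues appear in \(\pm\omega\) pairs, and \(\omega=\eta\times(\text{eigenvalue})\) is positive for every mode as long as the kernel is positive definite.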
The collective mode propagator is guaranteed to have a particle-hole symmetry, which implies that if \(\varphi^{s}_{{\bf q}}\) is a solution to the generalized eigenvalue equation (28) with energy \(\omega_{{\bf q},s}\), then so is its particle-hole conjugate partner \(\varphi^{s^{\prime}}_{-{\bf q}}\) defined as
\[\varphi^{s^{\prime}}_{-{\bf q},\alpha\beta}({\bf k}):={\cal P}\left[\varphi^{s} _{{\bf q},\alpha\beta}({\bf k})\right]:=\varphi^{s*}_{{\bf q},\beta\alpha}({ \bf k}+{\bf q})\,, \tag{30}\]
where \({\cal P}\) is the (anti-unitary) particle-hole conjugation operator. Note that the particle-hole transformation inverts the momentum \({\bf q}\), but preserves the energy \(\omega_{{\bf q},s}\). From Eq. (29) we also see that the particle-hole transformation flips the signs \(\eta_{{\bf q},s}\).
When the self-consistent Hartree-Fock state has a time-reversal symmetry \({\cal T}\), the combined action \({\cal PT}\) is a unitary symmetry which preserves momentum but flips the signs \(\eta_{{\bf q},s}\) defined in Eq. (29). The \({\cal PT}\) symmetry guarantees that non-zero energies appear in degenerate pairs \(\omega_{{\bf q},s}=\omega_{{\bf q},s^{\prime}}\). Furthermore, the corresponding wavefunctions are related by
\[\varphi^{s^{\prime}}_{{\bf q},\alpha\beta}({\bf k}):={\cal PT}[\varphi^{s}_{{ \bf q},\alpha\beta}({\bf k})]:=\varphi^{s}_{{\bf q},\beta\alpha}(-{\bf k}+{\bf q })\,, \tag{31}\]
where we have without loss of generality chosen to work in a gauge where \({\cal T}|u_{\alpha}({\bf k})\rangle=|u_{\alpha}(-{\bf k})\rangle\). Below, we will focus on time-reversal symmetric self-consistent Hartree-Fock states and make use of this property.
At this point we have introduced all the properties of the collective mode propagator in Eq. (27) that we will need for our further analysis. It is now straightforward to obtain the RPA effective interaction by plugging in the collective mode propagator in the diagrams in Fig.
Figure 1: Definition of the effective interaction obtained from summing RPA diagrams.
Figure 2: Diagrammatic representation of the Bethe-Salpeter equation for \(G_{2}({\bf q},i\nu)\), the solution of which represents the infinite sum of RPA diagrams shown in Fig. 1(b).
1 (a). The last four diagrams in this figure can then be interpreted as the effective interaction between electrons which results from the exchange of collective modes. In the following section, we will return to the main topic of this work, which is interacting electron systems with a spontaneously broken continuous symmetry. The collective modes of interest will be the Goldstone modes, and we will use the general formalism discussed in this section to obtain an expression for the scattering vertex between electrons and the Goldstone modes.
## V Interacting electrons and Goldstone modes
When the self-consistent Hartree-Fock state spontaneously breaks a continuous symmetry the collective mode spectrum is guaranteed to have gapless branches corresponding to the Goldstone modes. From now on we will exclusively focus on these Goldstone modes, and ignore the other collective modes. In this section we (1) elaborate on the properties of the Goldstone mode wavefunctions, (2) construct the scattering vertex between electrons and Goldstone modes, and (3) synthesize our findings in an effective electron-boson model.
### Properties of the Goldstone wavefunctions
Assuming that the mean-field state preserves time-reversal symmetry, for every non-zero Goldstone mode energy \(\omega_{\mathbf{q},s}\), there are two collective mode wavefunctions \(\varphi^{s}_{\mathbf{q}}\) which are related by the action of \(\mathcal{PT}\), and which have opposite signs \(\eta_{\mathbf{q},s}\) as defined in Eq. (29). We will use the convention that for every \(s>0\), the Goldstone mode wavefunctions corresponding to \(\omega_{\mathbf{q},s}\) are denoted as \(\varphi^{\pm s}_{\mathbf{q}}\), where the sign is determined by the corresponding value of \(\eta_{\mathbf{q},s}\). In this work, we are mainly interested in linearly dispersing Goldstone modes (quadratically dispersing Goldstone modes are simpler, and do not require the full power of the formalism that we develop here). So we assume that at small momenta \(\omega_{\mathbf{q},s}=c_{s}|\mathbf{q}|+\mathcal{O}(\mathbf{q}^{2})\), where \(c_{s}\) are the velocities of the Goldstone modes.
For a Goldstone branch corresponding to broken symmetry generator \(Q_{s}\), we can write down the exact analytic expression for the zero mode at \(\mathbf{q}=0\)2:
Footnote 2: Note that as we are only considering linearly dispersing Goldstone modes, the broken symmetry generators \(Q_{s}\) are in one-to-one correspondence with the Goldstone modes [22].
\[\tilde{Q}_{s,\alpha\beta}(\mathbf{k})=i\langle u_{\alpha}(\mathbf{k})|Q_{s}|u_{\beta}(\mathbf{k})\rangle\left[n_{\alpha}(\mathbf{k})-n_{\beta}(\mathbf{k})\right]\,, \tag{32}\]
where the factor \(i\) is added to ensure that \(\tilde{\varphi}^{s}_{\mathbf{q}=0}\) is an eigenstate of the particle-hole conjugation operator \(\mathcal{P}\) defined in Eq. (30) with eigenvalue 1. Given that \(Q_{s}\) commutes with \(h(\mathbf{k})\), one can check that \(\tilde{Q}_{s}\) is indeed a zero mode of the generalized eigenvalue equation in Eq. (28). Furthermore, due to the factor \([n_{\alpha}(\mathbf{k})-n_{\beta}(\mathbf{k})]\) the wavefunction \(\tilde{Q}_{s,\alpha\beta}(\mathbf{k})\) is non-zero only when the symmetry generator \(Q_{s}\) is broken (if it is not broken, then \(Q_{s}\) commutes with the mean-field Hamiltonian and hence is diagonal in the mean-field basis \(|u_{\alpha}(\mathbf{k})\rangle\)). The zero mode is also an eigenstate of the \(\mathcal{PT}\) symmetry in Eq. (31) with eigenvalue \((-1)^{\kappa+1}\), where \(\kappa\) encodes whether \(Q_{s}\) is time-reversal even or odd:
\[\mathcal{T}Q_{s}\mathcal{T}^{-1}=(-1)^{\kappa}Q_{s}\,. \tag{33}\]
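Eq. (32) translates directly into a small numerical routine; in the sketch below the array conventions (bands as columns of \(u[k]\), occupations \(n_{\alpha}(\mathbf{k})\in\{0,1\}\)) and the toy spin example are assumptions.

```python
import numpy as np

def zero_mode(u, n, Q):
    """Qtilde_{s,alpha beta}(k) = i <u_alpha(k)|Q_s|u_beta(k)> [n_alpha(k) - n_beta(k)], Eq. (32)."""
    Nk, norb, nb = u.shape
    out = np.empty((Nk, nb, nb), complex)
    for k in range(Nk):
        Qband = u[k].conj().T @ Q @ u[k]                  # generator in the mean-field band basis
        out[k] = 1j * Qband * (n[k][:, None] - n[k][None, :])
    return out

# Toy usage: a single orbital with spin, spin-up band filled, broken generator Q = sigma_x
u = np.array([np.eye(2, dtype=complex)] * 4)              # trivial Bloch vectors on 4 k-points
n = np.array([[1, 0]] * 4)                                # occupations n_alpha(k)
Q = np.array([[0.0, 1.0], [1.0, 0.0]], complex)
print(zero_mode(u, n, Q)[0])                              # nonzero only between bands of unequal occupation
```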
At non-zero \(\mathbf{q}\), for every broken generator \(Q_{s}\) there are two collective mode wavefunctions \(\tilde{\varphi}^{\pm s}_{\mathbf{q}}\) which are mapped to each other by the action of \(\mathcal{PT}\). In order to have a non-singular and smooth \(\mathbf{q}\to 0\) limit, we define the following rescaled collective mode wavefunctions at non-zero \(\mathbf{q}\):
\[\tilde{\varphi}^{\pm s}_{\mathbf{q}}:=\sqrt{\frac{2\omega_{\mathbf{q},s}aw_{s }N}{c_{s}}}\varphi^{\pm s}_{\mathbf{q}}\,, \tag{34}\]
where \(a\) is the lattice constant, and \(w_{s}\) a dimensionless number that will be determined below. The components \(\tilde{\varphi}^{\pm s}_{\mathbf{q},\alpha\beta}(\mathbf{k})\) are generically order one numbers which for fixed non-zero \(\mathbf{q}\) do not go to zero in the thermodynamic limit. Moreover, in the appendices we show that in the thermodynamic limit \(\tilde{\varphi}^{\pm s}_{\mathbf{q},\alpha\beta}(\mathbf{k})\) also remains finite as \(\mathbf{q}\to 0\).
The two wavefunctions \(\tilde{\varphi}^{\pm s}_{\mathbf{q},\alpha\beta}(\mathbf{k})\) at non-zero \(\mathbf{q}\) become identical (possibly up to a phase factor) as \(\mathbf{q}\to 0\). This is possible because the Goldstone mode wavefunctions are _not_ orthogonal eigenvectors obtained from a Hermitian eigenvalue problem, but instead are obtained from the _generalized_ eigenvalue equation in Eq. (28). Exactly at \(\mathbf{q}=0\), there is only one Goldstone wavefunction - which corresponds to the exact zero mode in Eq. (32). This happens because in the presence of zero modes, solutions to the generalized eigenvalue equation (28) do not form a complete basis [23]. As a next step, we fix the phase and norm of the rescaled wavefunctions such that
\[\lim_{\mathbf{q}\to 0}\tilde{\varphi}^{\pm s}_{\mathbf{q},\alpha\beta}(\mathbf{k}) =\tilde{Q}_{s,\alpha\beta}(\mathbf{k})\,, \tag{35}\]
where \(\tilde{Q}_{s}\) is defined in Eq. (32). As a first consistency check, let us note that it follows from Eq. (29) that the rescaled wavefunctions satisfy
\[\frac{1}{N}\sum_{\mathbf{k}}\sum_{i,n}\left(|\tilde{\varphi}^{\pm s}_{ \mathbf{q},ni}(\mathbf{k})|^{2}-|\tilde{\varphi}^{\pm s}_{\mathbf{q},in}( \mathbf{k})|^{2}\right)=\pm 2\omega_{\mathbf{q},s}\frac{aw_{s}}{c_{s}}\,. \tag{36}\]
The exact zero mode \(\tilde{Q}_{s}\) defined in Eq. (32) also satisfies this equation with \(\omega_{\mathbf{q}=0,s}=0\).
To satisfy Eq. (35) we have to adjust the phase and norm of the wavefunctions \(\tilde{\varphi}_{\mathbf{q}}^{\pm s}\). The norm we fix via the dimensionless parameter \(w_{s}\) defined in Eq. (34). The phase can be partially fixed using the symmetries \(\mathcal{P}\) and \(\mathcal{PT}\). In particular, as the zero mode \(\tilde{Q}_{s}\) is even under \(\mathcal{P}\) and has eigenvalue \((-1)^{\kappa+1}\) under \(\mathcal{PT}\), we require that
\[\mathcal{PT}[\tilde{\varphi}_{\mathbf{q}}^{\pm s}] = (-1)^{\kappa+1}\tilde{\varphi}_{\mathbf{q}}^{\mp s} \tag{37}\] \[\mathcal{P}[\tilde{\varphi}_{\mathbf{q}}^{\pm s}] = \tilde{\varphi}_{-\mathbf{q}}^{\mp s}\,. \tag{38}\]
After this partial gauge fixing, there is a remaining phase freedom over half of the Brillouin zone for one of the two wavefunctions, say \(\tilde{\varphi}_{\mathbf{q}}^{+s}\). We fix this phase by requiring \(\tilde{\varphi}_{\mathbf{q}}^{+s}\) to be a continuous function of \(\mathbf{q}\), and Eq. (35) to hold. Note that if there is an additional inversion symmetry \(\mathcal{I}\) under which \(\mathbf{q}\rightarrow-\mathbf{q}\), then we can fix the remaining phase of \(\tilde{\varphi}_{\mathbf{q}}^{+s}\) over half of the Brillouin zone up to a minus sign by requiring that \(\tilde{\varphi}_{\mathbf{q}}^{\pm s}\) is either even or odd under \(\mathcal{IT}\) (depending on whether \(\tilde{Q}_{s}\) is even or odd under \(\mathcal{IT}\)). The requirements of continuity and a smooth \(\mathbf{q}\to 0\) limit as in Eq. (35) then uniquely fix the phase.
With the gauge choice in Eq. (37), it follows that \((\tilde{\varphi}_{\mathbf{q}}^{s}\pm\tilde{\varphi}_{\mathbf{q}}^{-s})/2\) is an eigenstate of \(\mathcal{PT}\) with eigenvalue \(\pm(-1)^{\kappa+1}\). In the limit \(\mathbf{q}\to 0\), the symmetric combination therefore becomes the unique zero mode \(\tilde{Q}_{s}\), which has \(\mathcal{PT}\) eigenvalue \((-1)^{\kappa+1}\). The anti-symmetric combination becomes a zero mode with \(\mathcal{PT}\) eigenvalue \(-(-1)^{\kappa+1}\), which must be the zero vector. From this it follows that with the gauge choice in Eq. (37), equation (35) indeed holds.
This concludes our discussion of the Goldstone wavefunctions. Below we will make use of the properties of these wavefunctions when we go to the rotating frame in Sec. VI.
### Electron-Goldstone scattering vertex
In Sec. IV we defined the effective RPA interaction, which contains the collective mode propagator \(G_{2}(\mathbf{q},i\nu)\) that we obtained by solving the Bethe-Salpeter equation. We also discussed the properties of the solutions to the Bethe-Salpeter equation (the details of which are contained in the appendices), and in the previous subsection we focused in particular on solutions that correspond to Goldstone modes. In this section, we come back to the effective RPA interaction. In particular, now that we understand better the properties of \(G_{2}(\mathbf{q},i\nu)\) and the associated collective mode wavefunctions, we will plug \(G_{2}(\mathbf{q},i\nu)\) into the diagrams in Fig. 1 (a) and study the interaction between electrons which results from the exchange of Goldstone modes.
Our starting point is Eq. (27), which is the general form for \(G_{2}(\mathbf{q},i\nu)\) as a solution to the Bethe-Salpeter equation. Taking this expression, and plugging it into the diagrams in Fig. 1 (a), we see that the effective RPA interaction is given by the bare interaction (the leftmost diagram in Fig. 1 (a)), plus an interaction where the electrons emit and absorb a Goldstone mode, which can be denoted by the following single diagram:
[Diagram: two electron lines exchanging a Goldstone mode (wavy line), each attached to a scattering vertex.] (39)
In this diagram, the wavy line denotes the boson propagator \((i\nu\eta_{\mathbf{q},s}-\omega_{\mathbf{q},s})^{-1}\), and the scattering vertex is given by
[Diagram: the electron-Goldstone scattering vertex.] (40)
where the box represents the Goldstone wavefunction \(\varphi_{\mathbf{q},\alpha\beta}^{s}(\mathbf{k})\). Written out in equations, the vertex is given by
\[\begin{split}\hat{g}_{\mathbf{q},\alpha\beta}^{s}(\mathbf{k})& =V(\mathbf{q})\frac{1}{N}\sum_{\mathbf{k^{\prime}}}\text{tr}\left( \varphi_{\mathbf{q}}^{s}(\mathbf{k^{\prime}})\Lambda_{\mathbf{q}}(\mathbf{k^{ \prime}})\right)\left[\Lambda_{\mathbf{q}}^{\dagger}(\mathbf{k})\right]_{ \alpha\beta}\\ -\frac{1}{N}\sum_{\mathbf{q^{\prime}}}V(\mathbf{q^{\prime}})\left[ \Lambda_{\mathbf{q^{\prime}}}^{\dagger}(\mathbf{k})\varphi_{\mathbf{q}}^{s}( \mathbf{k}-\mathbf{q^{\prime}})\Lambda_{\mathbf{q^{\prime}}}(\mathbf{k}- \mathbf{q})\right]_{\alpha\beta}\end{split} \tag{41}\]
Note that \(\hat{g}_{\mathbf{q},\alpha\beta}^{s}(\mathbf{k})\) is generically non-zero for all \(\alpha\) and \(\beta\), whereas \(\varphi_{\mathbf{q},\alpha\beta}^{s}(\mathbf{k})\) is only non-zero if \(n_{\alpha}(\mathbf{k})\neq n_{\beta}(\mathbf{k}-\mathbf{q})\). The terms in Eq. (41) which define \(\hat{g}_{\mathbf{q}}^{s}\) have appeared previously as part of the generalized eigenvalue equation in Eq. (28). Since the Goldstone wavefunctions \(\varphi_{\mathbf{q}}^{s}\) are obtained as solutions to that equation, we immediately obtain from Eqs. (41) and (28) that
\[\begin{split}\hat{g}_{\mathbf{q},\alpha\beta}^{s}(\mathbf{k})& =\left([n_{\alpha}(\mathbf{k})-n_{\beta}(\mathbf{k}-\mathbf{q})] \eta_{\mathbf{q},s}\omega_{\mathbf{q},s}\right.\\ &\left.-\left|E_{\alpha}(\mathbf{k})-E_{\beta}(\mathbf{k}- \mathbf{q})\right|\right)\varphi_{\mathbf{q},\alpha\beta}^{s}(\mathbf{k})\\ \text{if}\ n_{\alpha}(\mathbf{k})\neq n_{\beta}(\mathbf{k}- \mathbf{q})\,.\end{split} \tag{42}\]
For the components \(\hat{g}_{\mathbf{q},\alpha\beta}^{s}(\mathbf{k})\) with \(n_{\alpha}(\mathbf{k})=n_{\beta}(\mathbf{k}-\mathbf{q})\) we have no general analytic expression. Using the \(\mathcal{P}\) symmetry in the gauge of Eq. (38), Eq. (42) also makes it explicit that the electron-Goldstone vertex satisfies
\[\hat{g}_{\mathbf{q},\alpha\beta}^{s\ast}(\mathbf{k})=\hat{g}_{-\mathbf{q}, \alpha\beta}^{-s}(\mathbf{k}-\mathbf{q})\,, \tag{43}\]
where we have used that \(\eta_{\mathbf{q},s}=-\eta_{-\mathbf{q},-s}\) and \(\omega_{\mathbf{q},s}=\omega_{-\mathbf{q},-s}\). This property ensures that the Goldstone-mediated interaction between electrons is Hermitian.
### Effective electron-boson model
In the previous section we have derived an explicit expression for the interaction between electrons mediated by the exchange of Goldstone modes. The Goldstone modes are collective excitations of the electron fluid, and hence are not described by independent microscopic degrees of freedom. In this section, however, we will write down an effective theory which does contain both independent electronic and bosonic degrees of freedom, and which mimics the behaviour of the coupled electron-Goldstone-boson system. The action for the effective electron-boson model is given by
\[S=S_{el}+S_{V}+S_{B}+S_{el-B}+S_{C}\,, \tag{44}\]
where \(S_{el}\) and \(S_{V}\) respectively denote the quadratic and interacting part of the electronic action that appeared previously in the Hartree-Fock path integral in Eq. (25).
The terms \(S_{B}\), \(S_{el-B}\), and \(S_{C}\) are not present in the Hartree-Fock action. Of these, the first two describe the Goldstone dynamics and the electron-Goldstone-mode scattering. The Goldstone dynamics is described by
\[S_{B}=\int_{0}^{1/T}\mathrm{d}\tau\,\sum_{s>0}\sum_{\mathbf{q}}\bar{b}_{ \mathbf{q},s}(\partial_{\tau}+\omega_{\mathbf{q},s})b_{\mathbf{q},s}\,, \tag{45}\]
where \(\bar{b}_{\mathbf{q},s},b_{\mathbf{q},s}\) are bosonic fields corresponding to the Goldstone creation and annihilation operators. The electron-Goldstone coupling is given by
\[\begin{split} S_{el-B}=\int_{0}^{1/T}\mathrm{d}\tau\,\frac{1}{ \sqrt{N}}\sum_{\mathbf{q},\mathbf{k}}\bar{\psi}_{\mathbf{k},\alpha}\psi_{ \mathbf{k}-\mathbf{q},\beta}\times\\ \sum_{s>0}\sqrt{\frac{c_{s}}{2\omega_{\mathbf{q}}aw_{s}}}\left( \tilde{g}^{s}_{\mathbf{q},\alpha\beta}(\mathbf{k})\bar{b}_{-\mathbf{q},s}+ \tilde{g}^{-s}_{\mathbf{q},\alpha\beta}(\mathbf{k})b_{\mathbf{q},s}\right)\,, \end{split} \tag{46}\]
where for future use we have defined the vertex functions \(\tilde{g}^{s}_{\mathbf{q}}\) using the same rescaling as in Eq. (34), i.e. \(\tilde{g}^{s}_{\mathbf{q}}=\hat{g}^{s}_{\mathbf{q}}\sqrt{2\omega_{\mathbf{q}}aw_{s}N/c_{s}}\). It follows from Eq. (43) that tree-level boson exchange in the effective electron-boson model indeed generates the interaction in Eq. (39).
So far we have taken the Hartree-Fock action, and supplemented it with the bosonic degrees of freedom. The theories with and without the bosonic fields are obviously not the same. In particular, the bosonic degrees of freedom are designed to capture the RPA fluctuations which result from the electronic interaction. As a result, working at tree level with the effective action in Eq. (44) (without \(S_{C}\)) is the same as doing the infinite sum of RPA diagrams in the original theory. Once loops are taken into account in the effective theory, however, one has to be careful that the loop diagrams do not correspond to a double counting of RPA diagrams in the original theory. For example, \(\omega_{\mathbf{q},s}\) is already the complete boson dispersion which is generated by the electron dynamics, and it should therefore not be further renormalized by coupling to the electrons. To ensure this, we have to add a counter term which is quadratic in the boson fields, and which removes all self-energy diagrams for the bosons. Another counter term quartic in the fermion fields has to be added to remove the RPA renormalization of the electron repulsion, as this effect is already contained in the interaction mediated by boson exchange. All these counter terms are contained in \(S_{C}\). It is not necessary to construct these terms explicitly - all one needs to know is that they are present, and that they eliminate certain diagrams such that working with the effective theory beyond tree level is the same as working with the original theory.
### Transforming to the real basis
In this final section we bring the effective electron-boson model introduced in the previous section into a more standard form. First, we rescale the boson fields as follows
\[\bar{b}_{\mathbf{q},s},b_{\mathbf{q},s}\rightarrow\sqrt{\frac{aw_{s}}{c_{s}}} \bar{b}_{\mathbf{q},s},\sqrt{\frac{aw_{s}}{c_{s}}}b_{\mathbf{q},s}\,, \tag{47}\]
and then define the usual (Fourier transformed) canonically conjugate real fields as
\[\phi_{\mathbf{q},s} = \frac{1}{\sqrt{2\omega_{\mathbf{q}}}}(\bar{b}_{\mathbf{q},s}+b_{ -\mathbf{q},s}) \tag{48}\] \[\pi_{\mathbf{q},s} = i\sqrt{\frac{\omega_{\mathbf{q}}}{2}}(\bar{b}_{\mathbf{q},s}-b_{ -\mathbf{q},s})\,, \tag{49}\]
where we adopted the standard notation. In terms of the new fields, the quadratic boson action becomes
\[\begin{split} S_{B}=\int_{0}^{1/T}\mathrm{d}\tau& \sum_{s>0}\frac{aw_{s}}{c_{s}}\sum_{\mathbf{q}}\bigg{(}-i\pi_{ \mathbf{q},s}\partial_{\tau}\phi_{-\mathbf{q},s}\\ &+\frac{1}{2}\pi_{\mathbf{q},s}\pi_{-\mathbf{q},s}+\frac{1}{2} \omega_{\mathbf{q},s}^{2}\phi_{\mathbf{q},s}\phi_{-\mathbf{q},s}\bigg{)}\,, \end{split} \tag{50}\]
and the electron-boson vertex is given by
\[\begin{split} S_{el-B}=&\int_{0}^{1/T}\mathrm{d}\tau \,\frac{1}{\sqrt{N}}\sum_{\mathbf{q},\mathbf{k}}\bar{\psi}_{\mathbf{k},\alpha} \psi_{\mathbf{k}-\mathbf{q},\beta}\times\\ &\sum_{s>0}\left(g^{s}_{\mathbf{q},\alpha\beta}(\mathbf{k}) \phi_{-\mathbf{q},s}+f^{s}_{\mathbf{q},\alpha\beta}(\mathbf{k})\pi_{-\mathbf{q},s}\right)\,,\end{split} \tag{51}\]
with
\[g^{s}_{\mathbf{q},\alpha\beta}(\mathbf{k}) = \frac{1}{2}\left(\tilde{g}^{s}_{\mathbf{q},\alpha\beta}(\mathbf{k}) +\tilde{g}^{-s}_{\mathbf{q},\alpha\beta}(\mathbf{k})\right) \tag{52}\] \[f^{s}_{\mathbf{q},\alpha\beta}(\mathbf{k}) = \frac{-i}{2\omega_{\mathbf{q},s}}\left(\tilde{g}^{s}_{\mathbf{q}, \alpha\beta}(\mathbf{k})-\tilde{g}^{-s}_{\mathbf{q},\alpha\beta}(\mathbf{k}) \right)\,. \tag{53}\]
As a final step, we integrate out the \(\pi_{\mathbf{q},s}\) fields. This produces the following quadratic boson action:
\[S_{B}=\frac{1}{2}\int\mathrm{d}\tau\,\sum_{s>0}\frac{aw_{s}}{c_{s}}\sum_{ \mathbf{q}}\left(-\phi_{-\mathbf{q}}\partial_{\tau}^{2}\phi_{\mathbf{q}}+ \omega_{\mathbf{q},s}^{2}\phi_{\mathbf{q},s}\phi_{-\mathbf{q},s}\right)\,. \tag{54}\]
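For concreteness, the Gaussian integration can be sketched as follows. This is only a schematic sketch: the shorthands \(\lambda_{s}\equiv aw_{s}/c_{s}\) and \(J_{\mathbf{q},s}\equiv\frac{1}{\sqrt{N}}\sum_{\mathbf{k}}\bar{\psi}_{\mathbf{k},\alpha}\psi_{\mathbf{k}-\mathbf{q},\beta}\,f^{s}_{\mathbf{q},\alpha\beta}(\mathbf{k})\) are introduced here only for this purpose. The \(\pi\)-dependent part of Eqs. (50) and (51) is \(\sum_{\mathbf{q}}\big[\tfrac{\lambda_{s}}{2}\pi_{\mathbf{q},s}\pi_{-\mathbf{q},s}+\pi_{-\mathbf{q},s}(J_{\mathbf{q},s}-i\lambda_{s}\partial_{\tau}\phi_{\mathbf{q},s})\big]\), and completing the square in \(\pi_{\mathbf{q},s}\) yields

\[\sum_{\mathbf{q}}\left[\frac{\lambda_{s}}{2}\,\partial_{\tau}\phi_{\mathbf{q},s}\,\partial_{\tau}\phi_{-\mathbf{q},s}+i\,J_{\mathbf{q},s}\,\partial_{\tau}\phi_{-\mathbf{q},s}-\frac{1}{2\lambda_{s}}\,J_{\mathbf{q},s}J_{-\mathbf{q},s}\right]\,.\]

The first term, together with the \(\omega_{\mathbf{q},s}^{2}\) term already present in Eq. (50), reproduces the quadratic action above after integrating by parts in \(\tau\); the second term is the \(f^{s}_{\mathbf{q}}\,i\partial_{\tau}\phi\) coupling appearing in Eq. (56) below; and the last term is the induced two-body interaction between electrons discussed at the end of this subsection.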
In the long-wavelength continuum limit, using \(\omega_{\mathbf{q},s}^{2}\sim c_{s}^{2}\mathbf{q}^{2}\), we can write this as the standard action for relativistic real scalar fields:
\[S_{B}=\frac{1}{2}\int\mathrm{d}\tau\int\mathrm{d}^{2}\mathbf{r}\,\sum_{s>0} \chi_{s}(\partial_{\tau}\phi_{s})^{2}+\rho_{s}(\nabla\phi_{s})^{2}\,, \tag{55}\]
with \(\chi_{s}=w_{s}/(ac_{s})\) and with \(\rho_{s}=w_{s}c_{s}/a\) the stiffness. After integrating out \(\pi_{\mathbf{q},s}\), the electron-Goldstone vertex becomes
\[\begin{split} S_{el-B}=&\int_{0}^{1/T}\mathrm{d} \tau\,\frac{1}{\sqrt{N}}\sum_{\mathbf{q},\mathbf{k}}\tilde{\psi}_{\mathbf{k}, \alpha}\psi_{\mathbf{k}-\mathbf{q},\beta}\times\\ &\sum_{s>0}\left(g^{s}_{\mathbf{q},\alpha\beta}(\mathbf{k})\phi_{ -\mathbf{q},s}+f^{s}_{\mathbf{q},\alpha\beta}(\mathbf{k})i\partial_{\tau}\phi _{-\mathbf{q},s}\right)\,,\end{split} \tag{56}\]
where in the second term \(\pi_{\mathbf{q},s}\) is now replaced with \(i\partial_{\tau}\phi_{\mathbf{q},s}\). Integrating out \(\pi_{\mathbf{q},s}\) also generates an additional two-body interaction between the electrons, which we denote by \(S_{V\pi}\). Written out explicitly, the vertex functions defined in Eqs. (52) and (53) are given by
\[\begin{split} g^{s}_{\mathbf{q},\alpha\beta}(\mathbf{k})& =\ \frac{1}{2}\bigg{(}[n_{\alpha}(\mathbf{k})-n_{\beta}(\mathbf{k}- \mathbf{q})]\omega_{\mathbf{q},s}\left(\tilde{\varphi}^{s}_{\mathbf{q}, \alpha\beta}(\mathbf{k})-\tilde{\varphi}^{-s}_{\mathbf{q},\alpha\beta}( \mathbf{k})\right)-|E_{\alpha}(\mathbf{k})-E_{\beta}(\mathbf{k}-\mathbf{q})| \left(\tilde{\varphi}^{s}_{\mathbf{q},\alpha\beta}(\mathbf{k})+\tilde{ \varphi}^{-s}_{\mathbf{q},\alpha\beta}(\mathbf{k})\right)\bigg{)}\end{split} \tag{59}\] \[f^{s}_{\mathbf{q},\alpha\beta}(\mathbf{k}) =\ \frac{-i}{2\omega_{\mathbf{q},s}}\bigg{(}[n_{\alpha}(\mathbf{k} )-n_{\beta}(\mathbf{k}-\mathbf{q})]\omega_{\mathbf{q},s}\left(\tilde{\varphi} ^{s}_{\mathbf{q},\alpha\beta}(\mathbf{k})+\tilde{\varphi}^{-s}_{\mathbf{q}, \alpha\beta}(\mathbf{k})\right)-|E_{\alpha}(\mathbf{k})-E_{\beta}(\mathbf{k}- \mathbf{q})|\left(\tilde{\varphi}^{s}_{\mathbf{q},\alpha\beta}(\mathbf{k})- \tilde{\varphi}^{-s}_{\mathbf{q},\alpha\beta}(\mathbf{k})\right)\bigg{)}\,,\]
where we have used that
\[\begin{split}&[n_{\alpha}(\mathbf{k})-n_{\beta}(\mathbf{k})] \times|E_{\alpha}(\mathbf{k})-E_{\beta}(\mathbf{k})|=\\ &-|n_{\alpha}(\mathbf{k})-n_{\beta}(\mathbf{k})|\times[E_{\alpha }(\mathbf{k})-E_{\beta}(\mathbf{k})]\,.\end{split} \tag{60}\]
Equation (60) shows that \(\mathbf{q}=0\) Goldstone modes induce scattering between electrons in different mean-field bands. This is because the mean-field bands have gaps which result from the presence of a non-zero symmetry-breaking order parameter. The way the original, non-interacting bands are split to generate the mean-field bands depends on the orientation of the order parameter. So a \(\mathbf{q}=0\) Goldstone mode, which corresponds to a global rotation of the order parameter, will induce a mixing of the mean-field bands which enter the Hartree-Fock path integral, as they are defined using an order parameter with a chosen fixed orientation. We note that Eq. (60) has previously also been obtained in Ref. [7], although the authors of that work did not use the RPA approach adopted here.
The exchange of Goldstone modes thus generates an interaction between the mean-field electrons which contains a contribution of the form \(\sim\Delta^{2}/(\chi_{s}(i\nu)^{2}-\rho_{s}{\bf q}^{2})\), where \(\Delta\) is the gap in the mean-field bands induced by the symmetry breaking. This interaction is singular in the \(\nu,{\bf q}\to 0\) limit. Moreover, \(\Delta\) is of the interaction energy scale. So for strongly interacting electron systems, one finds that the energy scale of the interaction mediated by Goldstone exchange is much larger than the energy scale of the bare density-density interaction. This results in a situation where the mean-field electrons acquire large self-energy corrections, which casts doubt on their existence as well-defined quasi-particles. In the next section we will show that electrons which are defined in a rotating frame are not affected by \({\bf q}=0\) Goldstone modes, and do not experience any singular interactions.
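To see where this singular contribution comes from, note that (schematically, suppressing band and momentum labels that are not essential) tree-level exchange of a single \(\phi_{s}\) boson between two mean-field electrons, using the vertex of Eq. (56) and the propagator implied by Eq. (55), gives

\[V^{\mathrm{eff}}_{s}(\mathbf{q},i\nu)\;\sim\;\frac{g^{s}_{\mathbf{q},\alpha\beta}(\mathbf{k})\,\big[g^{s}_{\mathbf{q},\alpha^{\prime}\beta^{\prime}}(\mathbf{k}^{\prime})\big]^{*}}{\chi_{s}(i\nu)^{2}-\rho_{s}\mathbf{q}^{2}}\,,\qquad g^{s}_{\mathbf{q}\to 0,\alpha\beta}(\mathbf{k})\;\sim\;|E_{\alpha}(\mathbf{k})-E_{\beta}(\mathbf{k})|\;\sim\;\Delta\quad(\alpha\neq\beta)\,,\]

where the estimate of the \(\mathbf{q}\to 0\) inter-band vertex follows from Eq. (59) with \(\omega_{\mathbf{q},s}\to 0\), since the inter-band mean-field energy differences are set by the symmetry-breaking gap.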
Finally, from Eq. (59) it follows that also \(\lim_{{\bf q}\to 0}f_{\bf q}^{s}\neq 0\). However, as \(f_{\bf q}^{s}\) is the vertex which couples the mean-field electrons to \(\partial_{\tau}\phi\), this vertex does not induce a singular interaction.
### The solution: the rotating frame
To remove the singular interaction between the mean-field electrons discussed in the previous section we perform a change of integration variables in the path integral of the electron-boson model. In particular, in the local orbital basis we change the Grassmann fields as
\[\psi_{a}({\bf r})\to\sum_{b}R_{ab}({\bf r})\psi_{b}({\bf r})\,, \tag{62}\]
with
\[R({\bf r})=\exp\left(-i\sum_{s>0}\phi_{s}({\bf r})Q_{s}\right)\,. \tag{63}\]
As this is a unitary transformation, the corresponding change of integration variables has a trivial Jacobian. In momentum space, and working in the mean-field basis, the transformation in Eq. (62) can be written as
\[\psi_{\alpha}({\bf k})\to\psi_{\alpha}({\bf k})-\frac{i}{\sqrt{N}}\sum_{s,{ \bf q},\beta}\phi_{{\bf q},s}Q^{s}_{{\bf q},\alpha\beta}({\bf k})\psi_{\beta} ({\bf k}-{\bf q})+\mathcal{O}(\phi^{2})\,, \tag{64}\]
where we have defined
\[Q^{s}_{{\bf q},\alpha\beta}({\bf k})=\langle u_{\alpha}({\bf k})|Q_{s}|u_{ \beta}({\bf k}-{\bf q})\rangle\,. \tag{65}\]
Performing this change of variables in the kinetic term of the mean-field fermions, \(\sum_{{\bf k},\alpha}\bar{\psi}_{{\bf k},\alpha}(\partial_{\tau}+E_{\alpha}({\bf k}))\psi_{{\bf k},\alpha}\), produces terms which can be absorbed in \(S_{el-B}\). In particular, using Eq. (64) we find that the change of Grassmann variables induces the following shift in the vertex functions:
\[g^{s}_{{\bf q},\alpha\beta}({\bf k}) \to g^{s}_{{\bf q},\alpha\beta}({\bf k})-iQ^{s}_{{\bf q},\alpha\beta}({ \bf k})[E_{\alpha}({\bf k})-E_{\beta}({\bf k}-{\bf q})] \tag{66}\] \[=:g^{Rs}_{{\bf q},\alpha\beta}({\bf k})\] \[f^{s}_{{\bf q},\alpha\beta}({\bf k}) \to f^{s}_{{\bf q},\alpha\beta}({\bf k})-Q^{s}_{{\bf q},\alpha\beta}({ \bf k})\] (67) \[=:f^{Rs}_{{\bf q},\alpha\beta}({\bf k})\]
The change of variables also introduces higher-order interaction terms of the form \(\phi^{n}\bar{\psi}\psi\) with \(n>1\), but we ignore these terms here.
From Eq. (60) we see that
\[\lim_{{\bf q}\to 0}g^{Rs}_{{\bf q},\alpha\beta}({\bf k})=0\,, \tag{68}\]
i.e. the \(\nu,{\bf q}=0\) Goldstone modes decouple from the electrons. This shows that the electrons defined in Eq. (62), which live in a rotating frame that locally follows the order parameter fluctuations, will have much smaller self-energy corrections than the mean-field electrons, and hence will be much closer to the true quasi-particles of the symmetry-broken system.
Before concluding this section let us mention the effect of going to the rotating frame on the other terms in the action of the electron-boson model. First, the change of Grassmann variables in Eq. (62) leaves the bare interaction term \(S_{V}\) invariant. However, in general it will change the interaction contained in \(S_{V\pi}\) (58). For some applications it might be important to keep this in mind, but in the remainder of this work we will not use the interactions contained in \(S_{V}\) and \(S_{V\pi}\) anymore.
### Implications for electron spectral functions
In this section we investigate the consequences of the fact that quasi-particles are defined in a rotating frame for the correlation functions of microscopic fermions. In particular, we are interested in the two-point functions
\[\begin{split}G_{ab}(\tau,{\bf r}-{\bf r}^{\prime})&=-\langle\hat{T}c_{{\bf r},a}(\tau)c^{\dagger}_{{\bf r}^{\prime},b}(0)\rangle_{\beta}\\ &=\frac{-1}{Z}\int_{\bar{\psi},\psi,\phi}\psi_{{\bf r},a}\bar{\psi}_{{\bf r}^{\prime},b}\,e^{-S[\bar{\psi},\psi,\phi]}\,,\end{split} \tag{70}\]
where \(\hat{T}\) is again the time-ordering operator, \(\langle\cdot\rangle_{\beta}\) is the thermal average, and \(\int_{\bar{\psi},\psi,\phi}\) a path integral for the fields \(\bar{\psi},\psi\) and \(\phi\). Note that the fermion fields in Eq. (70) are in the original, unrotated frame. Performing the change of integration variables as in Eq. (62), we obtain
\[G_{ab}(\tau,{\bf r})=-\sum_{c,d}\langle R^{*}_{ca}(\tau,{\bf r})\psi_{c,{\bf r} }(\tau)R_{db}(0,0)\bar{\psi}_{d,0}(0)\rangle_{\beta}\,,\]
where \(\bar{\psi},\psi\) now live in the rotated frame, and hence describe the quasi-particles of the broken-symmetry state.
As a first approximation, we ignore the interaction between the electrons and the Goldstone modes, such that the correlation function of the microscopic electrons factorizes as
\[G_{ab}(\tau,\mathbf{r})=-\sum_{c,d}\langle R_{ca}^{*}(\tau,\mathbf{r})R_{db}(0,0 )\rangle\,\langle\psi_{c,\mathbf{r}}(\tau)\bar{\psi}_{d}(0,0)\rangle_{\beta}\,.\]
By working with this approximation we make a physical assumption that corrections resulting from electron-Goldstone mode interactions will be subleading compared to the above 'zeroth order' contribution. This assumption is partly justified by the fact that the fermions in the rotating frame decouple from the Goldstone modes at long wavelengths. Going to frequency and momentum space we then obtain
\[\begin{split} G_{ab}(i\omega,\mathbf{k})=\frac{-T}{N}\sum_{i \nu,\mathbf{q}}&\sum_{cd\alpha}D_{ab,cd}^{R}(i\nu,\mathbf{q})\\ &\times\frac{u_{\alpha}^{c}(\mathbf{k}-\mathbf{q})u_{\alpha}^{d*} (\mathbf{k}-\mathbf{q})}{i(\omega-\nu)-E_{\mathbf{k}-\mathbf{q},\alpha}}\,, \end{split} \tag{71}\]
where \(D_{ab,cd}^{R}(i\nu,\mathbf{q})=-\langle R_{ca}^{*}(i\nu,\mathbf{q})R_{db}(i\nu,\mathbf{q})\rangle\). We thus find that the fermion correlation function in frequency-momentum space is a convolution of the fermionic quasiparticle propagator with the propagator of the Goldstone modes. Note that an expression very similar to Eq. (71) is obtained for the electron Green's function in theories where the electron is assumed to fractionalize into a spinon and a holon [14; 15; 16; 17; 18; 19; 20].
Up to now our effective electron-boson model describes a collection of independent generalized 'spin-waves', and does not take topological order parameter configurations, such as e.g. vortices or skyrmions, or interactions between Goldstone modes into account. To remedy this, one can improve the model to correctly capture the compact nature of the order parameter manifold by rewriting the Goldstone action as a non-linear sigma model. We will do this explicitly in the next section for the anti-ferromagnetic ground state of the Hubbard model. With the non-linear sigma model description of the Goldstone modes, it is possible that thermal fluctuations destroy the long-range order at a much lower temperature than the mean-field transition temperature at which the symmetry-breaking gap disappears from the Hartree-Fock band spectrum. This is especially relevant in 2D, where the Hohenberg-Mermin-Wagner theorem states that long-range order from continuous symmetry breaking disappears at any non-zero temperature. In the resulting thermally disordered phase, the propagator \(D_{ab,cd}^{R}(i\nu,\mathbf{q})\) is symmetric, i.e. \(D_{ab,cd}^{R}(i\nu,\mathbf{q})\propto\delta_{ab}\), and acquires a mass. From Eq. (71), we see that in that case the electron Green's function \(G_{ab}(i\omega,\mathbf{k})\) also becomes symmetric in the thermally disordered phase, even though the order-parameter-induced gap can still be present in the mean-field spectrum of the fermions in the rotating frame. This effect therefore naturally leads to 'pseudo-gap' physics in thermally disordered broken-symmetry states. Below we illustrate this explicitly in the two example sections.
## VII Example I: Anti-ferromagnetism in the Hubbard Model
In this first example section we apply the general formalism introduced above to the anti-ferromagnetic ground state of the square-lattice Hubbard model.
### Mean-field state and Goldstone mode energies
We are interested in the Hubbard model on the square lattice, which is defined by the following Hamiltonian:
\[H=-t\sum_{\langle ij\rangle}\sum_{s}c_{i,s}^{\dagger}c_{j,s}-t^{\prime}\sum_{\langle\langle ij\rangle\rangle}\sum_{s}c_{i,s}^{\dagger}c_{j,s}+h.c.+U\sum_{i}n_{i,\uparrow}n_{i,\downarrow}\,, \tag{72}\]
where the first sum is over nearest neighbors, and the second sum over next nearest neighbors. In the interaction term, \(n_{i,s}=c_{i,s}^{\dagger}c_{i,s}\) is the density of electrons with spin \(s\) at site \(i\). We choose units of energy such that \(t\equiv 1\), and take \(t^{\prime}=-0.35\).
For our purposes, it suffices to only consider half filling, where it is well-known that the ground state is an insulating anti-ferromagnet for sufficiently large \(U\). An antiferromagnet (AFM) breaks translation symmetry, so at first sight the general formalism introduced above, which explicitly assumes translation invariance, does not seem to apply. However, an AFM is invariant under the combined action of translating by one lattice site and flipping all the spins. Let's call this modified translation symmetry \(T^{\prime}_{x/y}\). If we consider the AFM state with periodic boundary conditions, i.e. on a torus of size \(L_{x}\times L_{y}\) (note that both \(L_{x}\) and \(L_{y}\) have to be even), then we see that \(T^{{}^{\prime}L_{x}}_{x}=T^{{}^{\prime}L_{y}}_{y}=\mathds{1}\). This shows that the eigenvalues of \(T^{\prime}_{x}\) (\(T^{\prime}_{y}\)) are phases \(e^{i\tilde{k}_{x}}\) (\(e^{i\tilde{k}_{y}}\)), with \(\tilde{k}_{x}=\frac{2\pi n}{L_{x}}\) (\(\tilde{k}_{y}=\frac{2\pi n}{L_{y}}\)), just as for the conventional translation operators. Because the AFM is invariant under \(T^{\prime}_{x/y}\), the correlation functions satisfy \(\langle c_{\mathbf{\tilde{k}}}^{\dagger}c_{\mathbf{\tilde{k}}^{\prime}}\rangle\propto\delta_{\mathbf{\tilde{k}},\mathbf{\tilde{k}}^{\prime}}\), i.e. the single-particle density matrix is diagonal in \(\tilde{\mathbf{k}}\). So by working in the basis where \(T^{\prime}_{x/y}\) is diagonal, we can treat the AFM in the same way as conventional translationally invariant states, by making use of the conserved pseudo-momentum \(\tilde{\mathbf{k}}\).
Let us assume without loss of generality that the AFM order is in the XY plane. \(T^{\prime}_{x/y}\) can then be defined as a translation followed by a \(\pi\) rotation along the \(z\) axis:
\[T^{\prime}_{\mathbf{r}}=e^{i\mathbf{r}\cdot\mathbf{Q}\,s^{z}/2}T_{\mathbf{r}}\,, \tag{73}\]
where \(\mathbf{r}=(n,m)\) with \(n,m\in\mathbb{Z}\) describes a general lattice vector, \(\mathbf{Q}=(\pi,\pi)\), and \(s^{i}\) are the Pauli matrices acting on the spin indices. In this case, the connection between the pseudo-momentum \(\tilde{\mathbf{k}}\) and the crystal momentum \(\mathbf{k}\) is given by:
\[c^{\dagger}_{\tilde{\mathbf{k}},\uparrow} = c^{\dagger}_{\mathbf{k}-\mathbf{Q}/2,\uparrow} \tag{74}\] \[c^{\dagger}_{\tilde{\mathbf{k}},\downarrow} = c^{\dagger}_{\mathbf{k}+\mathbf{Q}/2,\downarrow} \tag{75}\]
The self-consistent Hartree-Fock mean-field Hamiltonian is diagonal in the pseudo-momentum \(\tilde{\mathbf{k}}\), and, assuming AFM order along the \(x\)-direction, can be written as
\[H_{HF}=\sum_{\tilde{\mathbf{k}}}c^{\dagger}_{\tilde{\mathbf{k}}}\begin{pmatrix} \varepsilon_{\tilde{\mathbf{k}},\uparrow}&\Delta_{\tilde{\mathbf{k}}}\\ \Delta_{\tilde{\mathbf{k}}}&\varepsilon_{\tilde{\mathbf{k}},\downarrow} \end{pmatrix}c_{\tilde{\mathbf{k}}}\,, \tag{76}\]
where \(c^{\dagger}_{\tilde{\mathbf{k}}}=(c^{\dagger}_{\tilde{\mathbf{k}},\uparrow},c^{\dagger}_{\tilde{\mathbf{k}},\downarrow})\). We have numerically solved the Hartree-Fock self-consistency equations on a \(34\times 34\) pseudo-momentum grid using \(U=5\), and we find an AFM order parameter \(\Delta_{\tilde{\mathbf{k}}}=\Delta\approx 1.93\).
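As an illustration of the numbers quoted above, a minimal sketch of this self-consistency loop is given below. It assumes, beyond what is stated in the text, that the lower Hartree-Fock band is completely filled, in which case the self-consistency condition reduces to the gap equation \(\Delta=\tfrac{U}{2}\big\langle\Delta/\sqrt{h_{\mathbf{k}}^{2}+\Delta^{2}}\big\rangle_{\mathbf{k}}\) with \(h_{\mathbf{k}}=(\varepsilon_{\mathbf{k}}-\varepsilon_{\mathbf{k}+\mathbf{Q}})/2\); the grid size and convergence tolerance are arbitrary choices.

```python
import numpy as np

# Sketch of the Hartree-Fock gap equation for the square-lattice Hubbard AFM
# at half filling, assuming the lower mean-field band is completely filled.
t, tp, U, L = 1.0, -0.35, 5.0, 34

k = 2.0 * np.pi * np.arange(L) / L
KX, KY = np.meshgrid(k, k, indexing="ij")
# h_k = (eps_k - eps_{k+Q}) / 2 with Q = (pi, pi); the t' term cancels in h_k.
h = -2.0 * t * (np.cos(KX) + np.cos(KY))

Delta = 1.0  # initial guess for the AFM order parameter
for _ in range(1000):
    Delta_new = 0.5 * U * np.mean(Delta / np.sqrt(h**2 + Delta**2))
    if abs(Delta_new - Delta) < 1e-10:
        Delta = Delta_new
        break
    Delta = Delta_new

print(f"Delta = {Delta:.3f}")  # converges to a value close to the ~1.9 quoted above
```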
We next obtain the Goldstone mode energies and wavefunctions by numerically solving the generalized eigenvalue equation in Eq. (28), where each momentum label \(\mathbf{k}\) is to be replaced with a pseudo-momentum label \(\tilde{\mathbf{k}}\). The numerically obtained energy for the lowest-energy collective mode (i.e. the Goldstone mode) is shown in Fig. 3 over the entire pseudo-momentum Brillouin zone, and along two cuts \(\tilde{q}_{y}=0\) and \(\tilde{q}_{y}=\pi\). As expected, we find two linearly dispersing Goldstone modes, one at pseudo-momentum \(\tilde{\mathbf{q}}=(0,0)\), and one at \(\tilde{\mathbf{q}}=(\pi,\pi)\). As the AFM order is along the \(x\)-direction, we can understand this by noting that the two broken-symmetry generators \(s^{z}\) and \(s^{y}\) at crystal momentum \(\mathbf{q}=(0,0)\) respectively have pseudo-momentum \(\tilde{\mathbf{q}}=(0,0)\) and \(\tilde{\mathbf{q}}=(\pi,\pi)\). From a fit to the linear part of the dispersion relation \(\omega_{\tilde{\mathbf{q}}}\), we find a Goldstone velocity \(c\approx 0.988\).
### Electron-Goldstone scattering vertices
By solving the generalized eigenvalue equation (28), we also obtain the collective mode wavefunctions \(\varphi^{\pm}_{\tilde{\mathbf{q}},\alpha\beta}(\tilde{\mathbf{k}})\), which can be used to construct the electron-Goldstone scattering vertices \(g_{\tilde{\mathbf{q}},\alpha\beta}(\tilde{\mathbf{k}})\) and \(f_{\tilde{\mathbf{q}},\alpha\beta}(\tilde{\mathbf{k}})\) as explained in Sec. V. Recall that \(\alpha\) and \(\beta\) label the mean-field bands, i.e. the eigenstates of \(H_{HF}\) defined in Eq. (76), and not the spin indices.
To go to the rotating frame, we use
\[R(\mathbf{r})=\exp\left(-i\left[\phi_{z}(\mathbf{r})s^{z}+\phi_{y}(\mathbf{r} )(-1)^{\mathbf{Q}\cdot\mathbf{r}}s^{y}\right]\right)\,, \tag{77}\]
where \(\phi_{z}(\mathbf{r})\) and \(\phi_{y}(\mathbf{r})\) contain pseudo-momenta \(\tilde{\mathbf{k}}\) which lie in the magnetic Brillouin zone, i.e. the Brillouin zone with reciprocal vectors \((\pi,\pm\pi)\). It thus follows that \(Q_{\tilde{\mathbf{q}},\alpha\beta}(\tilde{\mathbf{k}})\), as defined in Eq. (64), is given by
\[Q_{\tilde{\mathbf{q}},\alpha\beta}(\tilde{\mathbf{k}})=\langle u_{\alpha}( \tilde{\mathbf{k}})|s^{z}|u_{\beta}(\tilde{\mathbf{k}}-\tilde{\mathbf{q}}) \rangle\,, \tag{78}\]
if \(\tilde{\mathbf{q}}\) lies in the first magnetic Brillouin zone, and
\[Q_{\tilde{\mathbf{q}},\alpha\beta}(\tilde{\mathbf{k}})=\langle u_{\alpha}( \tilde{\mathbf{k}})|s^{y}|u_{\beta}(\tilde{\mathbf{k}}-\tilde{\mathbf{q}}) \rangle\,, \tag{79}\]
if \(\tilde{\mathbf{q}}\) lies in the complement of the first magnetic Brillouin zone in the full pseudo-momentum Brillouin zone. From Eq. (76), it is clear that the mean-field states satisfy \(|u_{\alpha}(\tilde{\mathbf{k}}+\mathbf{Q})\rangle=\pm s^{z}|u_{\alpha}(\tilde{ \mathbf{k}})\rangle\) (assuming that we work in a gauge where the mean-field states are real). From this it follows that \(Q_{\tilde{\mathbf{q}}}\) is periodic under shifts by \(\mathbf{Q}\) and multiplication by \(i\), up to gauge-dependent factors \(\pm 1\).
The rotating frame is introduced to ensure that the scattering vertex \(g^{R}_{\tilde{\mathbf{q}},\alpha\beta}(\tilde{\mathbf{k}})\), defined in Eq. (66), is zero as \(\tilde{\mathbf{q}}\to 0,\mathbf{Q}\). As explained in Sec. V.1, this requires both a rescaling of the collective mode wavefunctions by a factor \(w^{1/2}\), and a gauge fixing procedure. The gauge fixing procedure is simplified by choosing AFM order along the \(x\)-direction, such that the collective mode wavefunctions \(\varphi_{\tilde{\mathbf{q}}}\) can initially be taken to be real. We then fix the gauge of the \(\varphi_{\tilde{\mathbf{q}}}\) by multiplication with \(\pm i\) when \(\tilde{\mathbf{q}}\) lies in the first magnetic Brillouin zone, and \(\pm 1\) otherwise, such that the phase of an arbitrary off-diagonal element in \(iQ_{\tilde{\mathbf{q}},\alpha\beta}(0)\) agrees with the phase of the corresponding off-diagonal element in \(iQ_{\tilde{\mathbf{q}},\alpha\beta}(0)[n_{\alpha}(0)-n_{\beta}(\tilde{\mathbf{ q}})]\). After the gauge fixing, we find that \(\lim_{\tilde{\mathbf{q}}\to 0,\mathbf{Q}}g^{R}_{\tilde{\mathbf{q}}}=0\) if we take \(w\approx 0.922\).
To illustrate our numerical results for the electron-Goldstone scattering vertices, we define
\[\langle g_{\mathbf{q},D}\rangle^{2} = \frac{1}{L_{x}L_{y}}\sum_{\tilde{\mathbf{k}}}|g_{\tilde{\mathbf{ q}},00}(\tilde{\mathbf{k}})|^{2}+|g_{\tilde{\mathbf{q}},11}(\tilde{\mathbf{k}})|^{2}\,, \tag{80}\] \[\langle g_{\mathbf{q},O}\rangle^{2} = \frac{1}{L_{x}L_{y}}\sum_{\tilde{\mathbf{k}}}|g_{\tilde{\mathbf{q}},01}(\tilde{\mathbf{k}})|^{2}+|g_{\tilde{\mathbf{q}},10}(\tilde{\mathbf{k}})|^{2}\,, \tag{81}\]
as a pseudo-momentum averaged version of respectively the intra-band (diagonal) and inter-band (off-diagonal) scattering vertices in the mean-field basis. Similarly, we define the averaged scattering vertices in the rotating frame as
Figure 3: Goldstone energy \(\omega_{\tilde{\mathbf{q}}}\), obtained by solving the Bethe-Salpeter equation (28) for the square-lattice Hubbard model with \(U=5\) on a \(34\times 34\) system, as a function of the pseudo-momentum.
\[\langle g^{R}_{\mathbf{q},D}\rangle^{2} = \frac{1}{L_{x}L_{y}}\sum_{\tilde{\mathbf{k}}}|g^{R}_{\mathbf{q},00} (\tilde{\mathbf{k}})|^{2}+|g^{R}_{\mathbf{q},11}(\tilde{\mathbf{k}})|^{2}\,, \tag{82}\] \[\langle g^{R}_{\mathbf{q},O}\rangle^{2} = \frac{1}{L_{x}L_{y}}\sum_{\tilde{\mathbf{k}}}|g^{R}_{\mathbf{q},01 }(\tilde{\mathbf{k}})|^{2}+|g^{R}_{\mathbf{q},10}(\tilde{\mathbf{k}})|^{2}\,. \tag{83}\]
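These pseudo-momentum averages are straightforward to evaluate once the vertices have been tabulated. A minimal sketch follows, assuming purely for illustration that the vertices are stored in a complex array of shape `(Nq, Nk, 2, 2)`:

```python
import numpy as np

def averaged_vertices(g):
    """Return (<g_q,D>, <g_q,O>) as in Eqs. (80)-(83) from g[q, k, alpha, beta]."""
    g2 = np.abs(g) ** 2
    diag = g2[..., 0, 0] + g2[..., 1, 1]  # intra-band: |g_00|^2 + |g_11|^2
    offd = g2[..., 0, 1] + g2[..., 1, 0]  # inter-band: |g_01|^2 + |g_10|^2
    return np.sqrt(diag.mean(axis=1)), np.sqrt(offd.mean(axis=1))
```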
In Fig. 4 (a) and (b) we show the averaged scattering vertices in the mean-field basis. As expected, the intra-band scattering vertex \(\langle g_{\tilde{\mathbf{q}},D}\rangle\) goes to zero as \(\tilde{\mathbf{q}}\to 0,\mathbf{Q}\), whereas the inter-band vertex \(\langle g_{\tilde{\mathbf{q}},O}\rangle\) remains non-zero. We see that \(\langle g_{\tilde{\mathbf{q}}=0,O}\rangle\approx U=5\), which illustrates the strong inter-band scattering for mean-field electrons induced by \(\tilde{\mathbf{q}}=0,\mathbf{Q}\) Goldstone modes. Fig. 4 (c-d) shows the averaged scattering vertices in the rotating frame. From Fig. 4 (d) we see that the inter-band scattering in the rotating frame indeed goes to zero as \(\tilde{\mathbf{q}}\to 0,\mathbf{Q}\), and is now maximal at the magnetic Brillouin zone boundary. We also see that the intra-band scattering vertex \(\langle g^{R}_{\mathbf{\tilde{q}}}\rangle\) is slightly enhanced in the rotating frame, and is almost identical (up to an overall scale factor) to the Goldstone dispersion \(\omega_{\tilde{\mathbf{q}}}\) shown in Fig. 3.
We can also similarly define the averaged intra/inter-band scattering vertices \(\langle f_{\tilde{\mathbf{q}},D/O}\rangle\) and \(\langle f^{R}_{\tilde{\mathbf{q}},D/O}\rangle\) in the mean-field and rotating frame respectively. These quantities are shown in Fig. 5. Both the intra- and inter-band scattering vertices become smaller in the rotating frame, are maximal near the magnetic Brillouin zone boundary, but remain non-zero and order one as \(\tilde{\mathbf{q}}\to 0,\mathbf{Q}\).
### Electron spectral function
In this last section, we go back to the original crystal momentum basis, and again assume that the AFM order is along the \(x\)-direction. The free fermion part of the effective action can then be written as
\[\begin{split} S_{\psi}=\int_{0}^{\beta}\mathrm{d}\tau& \sum_{\mathbf{k}}\sum_{s}\bar{\psi}_{\mathbf{k},s}(\partial_{\tau}+\varepsilon _{\mathbf{k}})\psi_{\mathbf{k},s}\\ &+\Delta(\bar{\psi}_{\mathbf{k},\uparrow}\psi_{\mathbf{k}+ \mathbf{Q},\downarrow}+\bar{\psi}_{\mathbf{k}+\mathbf{Q},\downarrow}\psi_{ \mathbf{k},\uparrow})\,.\end{split} \tag{84}\]
We will assume that these fermion fields are already in the rotated frame, such that their scattering with the Goldstone modes vanishes as \(\mathbf{q}\to 0\). The components of the fermion Green's function in the rotated frame which are diagonal in the spin-\(z\) indices are given by
\[\left[G^{R}_{\uparrow\uparrow}(i\omega,\mathbf{k})\right]^{-1}=\left[G^{R}_{ \downarrow\downarrow}(i\omega,\mathbf{k})\right]^{-1}=i\omega-\varepsilon_{ \mathbf{k}}-\frac{\Delta^{2}}{i\omega-\varepsilon_{\mathbf{k}+\mathbf{Q}}}\,. \tag{85}\]
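For completeness, Eq. (85) follows from inverting the \(2\times 2\) kernel of Eq. (84) in the basis \((\psi_{\mathbf{k},\uparrow},\psi_{\mathbf{k}+\mathbf{Q},\downarrow})\):

\[\begin{pmatrix}i\omega-\varepsilon_{\mathbf{k}}&-\Delta\\ -\Delta&i\omega-\varepsilon_{\mathbf{k}+\mathbf{Q}}\end{pmatrix}^{-1}_{11}=\frac{i\omega-\varepsilon_{\mathbf{k}+\mathbf{Q}}}{(i\omega-\varepsilon_{\mathbf{k}})(i\omega-\varepsilon_{\mathbf{k}+\mathbf{Q}})-\Delta^{2}}=\left(i\omega-\varepsilon_{\mathbf{k}}-\frac{\Delta^{2}}{i\omega-\varepsilon_{\mathbf{k}+\mathbf{Q}}}\right)^{-1}\,,\]

whose poles are the mean-field bands \(E^{\pm}_{\mathbf{k}}\) used in Eqs. (94)-(96) below.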
In experiments one obviously does not measure the electrons in the rotated frame. One measures the original physical fermions in the unrotated frame, which are given by \(R^{\dagger}(\mathbf{r})\bar{\psi}_{\mathbf{r}}\), with
\[R(\mathbf{r})=\exp\left(i[\varphi_{y}(\mathbf{r})s^{y}+\varphi_{z}(\mathbf{r}) s^{z}]/2\right)\,, \tag{86}\]
where \(\varphi_{y}\) and \(\varphi_{z}\) are the Goldstone boson fields. To encode the compactness of the Goldstone modes we need to write the quadratic Goldstone mode action in terms of a general SU(2) matrix \(R(\mathbf{r})\) (and not in terms of \(\varphi_{y}\) and \(\varphi_{z}\)). For this, it is easiest to start with the continuum version of the Goldstone mode action
\[S_{G}=\frac{1}{2}\int\mathrm{d}\tau\int\mathrm{d}^{2}\mathbf{r}\sum_{n=y,z} \chi(\partial_{\tau}\varphi_{n})^{2}+\rho(\nabla\varphi_{n})^{2}\,. \tag{87}\]
Figure 4: Averaged electron-Goldstone scattering vertices \(\langle g_{\tilde{\mathbf{q}}}\rangle\). (a) Intra-band vertex in the mean-field basis [Eq. (80)]. (b) Inter-band vertex in the mean-field basis [Eq. (81)]. (c) Intra-band vertex in the rotating frame [Eq. (82)]. (d) Inter-band vertex in the rotating frame [Eq. (83)].
Figure 5: Averaged electron-Goldstone scattering vertices \(\langle f_{\tilde{\mathbf{q}}}\rangle\). (a) Intra-band vertex in the mean-field basis. (b) Inter-band vertex in the mean-field basis. (c) Intra-band vertex in the rotating frame. (d) Inter-band vertex in the rotating frame.
As a next step, we note that
\[R^{\dagger}(-i\partial_{\mu})R=\frac{1}{2}\sum_{n=y,z}\partial_{\mu}\varphi_{n}s^{ n}+\mathcal{O}(\varphi^{2}) \tag{88}\]
As a result, the following action agrees with Eq. (87) to lowest order in \(\varphi\):
\[S_{G}=\frac{1}{2}\int_{\tau,\mathbf{r}}\sum_{n=y,z}\chi\left[\mathrm{tr}(R^{ \dagger}i\partial_{\tau}Rs^{n})\right]^{2}+\rho\,\left[\mathrm{tr}(R^{\dagger} i\nabla Rs^{n})\right]^{2} \tag{89}\]
This is the improved action which correctly incorporates the compactness of the order parameter manifold. Note that this action is invariant under
\[R(\mathbf{r})\to R(\mathbf{r})e^{i\theta(\mathbf{r})s^{z}}\,, \tag{90}\]
with \(\theta(\mathbf{r})\) an arbitrary function. This is the U(1) gauge invariance which is familiar from the CP\({}^{1}\) formulation of the AFM Goldstone action [24].
The Green's function of the physical fermions is given by
\[G_{s_{1}s_{1}^{\prime}}(\tau,\mathbf{r})=-\langle R_{s_{2}s_{1}}^{*}(\tau, \mathbf{r})\psi_{s_{2},\mathbf{r}}(\tau)R_{s_{2}^{\prime}s_{1}^{\prime}}(0,0) \bar{\psi}_{s_{2}^{\prime}}(0,0)\rangle_{\beta}\,,\]
with repeated indices summed over. As a first approximation, we ignore the interactions between the electrons and the Goldstone modes, in which case the Green's function factorizes:
\[G_{s_{1}s_{1}^{\prime}}(\tau,\mathbf{r})\approx-\langle R_{s_{2}s_{1}}^{*}( \tau,\mathbf{r})R_{s_{2}^{\prime}s_{1}^{\prime}}(0,0)\rangle\langle\psi_{s_{ 2},\mathbf{r}}(\tau)\bar{\psi}_{s_{2}^{\prime}}(0,0)\rangle_{\beta}\,.\]
This is expected to be a reasonable approximation as the fermions in the rotating frame decouple from the low-energy Goldstone modes.
An approximate expression for the Goldstone propagator can be obtained from a large-N analysis of the CP\({}^{1}\) model [25], which leads to the following result [13]:
\[\langle R_{s_{2},s_{1}}^{*}(\tau,\mathbf{r})R_{s_{2}^{\prime},s_{1}^{\prime}}( 0,0)\rangle=-D(\tau,\mathbf{r})\delta_{s_{1}s_{1}^{\prime}}\delta_{s_{2},s_{2} ^{\prime}}\,, \tag{91}\]
where \(D(\tau,\mathbf{r})\) is the Fourier transform of
\[D(i\nu,\mathbf{q})=\frac{\chi^{-1}}{(i\nu)^{2}-c^{2}\mathbf{q}^{2}-m^{2}}\,. \tag{92}\]
Here \(m^{2}\) is a small mass for the Goldstone modes generated by thermal fluctuations, which restore the symmetry in 2 spatial dimensions due to the Hohenberg-Mermin-Wagner theorem.
Putting everything together, we find that the fermion Green's function is given by
\[\begin{split}& G_{ss^{\prime}}(i\omega,\mathbf{k})=-\delta_{ss^{ \prime}}T\sum_{i\nu}\frac{1}{L_{x}L_{y}}\times\\ &\sum_{\mathbf{q}}\frac{\chi^{-1}}{(i\nu)^{2}-\omega_{\mathbf{q} }^{2}-m^{2}}2G^{R}_{\uparrow\uparrow}(i\omega-i\nu,\mathbf{k}-\mathbf{q})\end{split} \tag{93}\]
This result has previously also been obtained by Borejsza and Dupuis [13] starting from the rotating-frame mean-field approach motivated by Schulz [12]. To simplify Eq. (93), we first rewrite \(G^{R}_{\uparrow\uparrow}\) as
\[G^{R}_{\uparrow\uparrow}(i\omega,\mathbf{k}) = \frac{i\omega-\varepsilon_{\mathbf{k}+\mathbf{Q}}}{(i\omega-E_{\mathbf{k}}^{+})(i\omega-E_{\mathbf{k}}^{-})} \tag{94}\] \[= \frac{|u_{+}(\mathbf{k})|^{2}}{i\omega-E_{\mathbf{k}}^{+}}+\frac{|u_{-}(\mathbf{k})|^{2}}{i\omega-E_{\mathbf{k}}^{-}}\,, \tag{95}\]
where
\[E_{\mathbf{k}}^{\pm} = \frac{1}{2}\left(\varepsilon_{\mathbf{k}}+\varepsilon_{\mathbf{k }+\mathbf{Q}}\pm\sqrt{(\varepsilon_{\mathbf{k}}-\varepsilon_{\mathbf{k}+ \mathbf{Q}})^{2}+4\Delta^{2}}\right)\,,\] \[|u_{\pm}(\mathbf{k})|^{2} = \frac{1}{2}\left(1\pm\frac{\varepsilon_{\mathbf{k}}-\varepsilon_{ \mathbf{k}+\mathbf{Q}}}{\sqrt{(\varepsilon_{\mathbf{k}}-\varepsilon_{\mathbf{ k}+\mathbf{Q}})^{2}+4\Delta^{2}}}\right)\,. \tag{96}\]
It is now straightforward to perform the summation over \(i\nu\) in Eq. (93), which gives
\[\begin{split} G_{ss^{\prime}}(i\omega,\mathbf{k})=& \frac{\delta_{ss^{\prime}}}{L_{x}L_{y}}\sum_{\mathbf{q}}\sum_{\sigma=\pm}\frac{ |u_{\sigma}(\mathbf{k}-\mathbf{q})|^{2}}{\chi\tilde{\omega}_{\mathbf{q}}} \times\\ &\left(\frac{n(\tilde{\omega}_{\mathbf{q}})+f(-E_{\mathbf{k}- \mathbf{q}}^{\sigma})}{i\omega-E_{\mathbf{k}-\mathbf{q}}^{\sigma}-\tilde{ \omega}_{\mathbf{q}}}+\frac{n(\tilde{\omega}_{\mathbf{q}})+f(E_{\mathbf{k}- \mathbf{q}}^{\sigma})}{i\omega-E_{\mathbf{k}-\mathbf{q}}^{\sigma}+\tilde{ \omega}_{\mathbf{q}}}\right)\,,\end{split} \tag{97}\]
where \(n(\omega)\) and \(f(E)\) are respectively the Bose-Einstein and Fermi-Dirac distributions, and \(\tilde{\omega}_{\mathbf{q}}=\sqrt{c^{2}\mathbf{q}^{2}+m^{2}}\).
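The only non-trivial ingredient is the bosonic Matsubara sum, which for a single quasi-particle pole takes the standard textbook form

\[T\sum_{i\nu}\frac{1}{(i\nu)^{2}-\tilde{\omega}_{\mathbf{q}}^{2}}\cdot\frac{1}{i(\omega-\nu)-E}=-\frac{1}{2\tilde{\omega}_{\mathbf{q}}}\left(\frac{n(\tilde{\omega}_{\mathbf{q}})+f(-E)}{i\omega-E-\tilde{\omega}_{\mathbf{q}}}+\frac{n(\tilde{\omega}_{\mathbf{q}})+f(E)}{i\omega-E+\tilde{\omega}_{\mathbf{q}}}\right)\,;\]

applying it to each pole \(E^{\pm}_{\mathbf{k}-\mathbf{q}}\) of Eq. (95), with weight \(|u_{\pm}(\mathbf{k}-\mathbf{q})|^{2}\), directly reproduces Eq. (97).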
The spectral function, defined for spin-rotation invariant systems as,
\[\mathcal{A}(\omega,\mathbf{k})=\frac{2}{\pi}\mathrm{Im}\,G_{\uparrow\uparrow}( \omega-i\epsilon,\mathbf{k}) \tag{98}\]
is now easily obtained [13]:
\[\mathcal{A}(\omega,\mathbf{k})=\frac{2}{L_{x}L_{y}}\sum_{\mathbf{q }}\sum_{\sigma=\pm}\frac{|u_{\sigma}(\mathbf{k}-\mathbf{q})|^{2}}{\chi \tilde{\omega}_{\mathbf{q}}}\times\] \[\frac{\epsilon}{\pi}\left(\frac{n(\tilde{\omega}_{\mathbf{q}})+f(- E_{\mathbf{k}-\mathbf{q}}^{\sigma})}{(\omega-E_{\mathbf{k}-\mathbf{q}}^{ \sigma}-\tilde{\omega}_{\mathbf{q}})^{2}+\epsilon^{2}}+\frac{n(\tilde{\omega}_{ \mathbf{q}})+f(E_{\mathbf{k}-\mathbf{q}}^{\sigma})}{(\omega-E_{\mathbf{k}- \mathbf{q}}^{\sigma}+\tilde{\omega}_{\mathbf{q}})^{2}+\epsilon^{2}}\right)\,. \tag{99}\]
In Fig. 6 (a) we show the numerically obtained spectral weight \(\mathcal{A}(\omega,\mathbf{k})\) at frequency \(\omega=-1.73\). To obtain this result, we have kept a small non-zero \(\epsilon=T=0.05\) to smear out the numerical results obtained on a finite-size discrete momentum grid. We see that at this negative frequency, the spectral weight is located along contours demarcating the boundaries of 'hole pockets' centered at \((\pm\pi/2,\pm\pi/2)\). The part of the contour oriented towards the center of the Brillouin zone is brighter than the backside facing \((\pi,\pi)\) as a result of the coherence factors \(|u_{\sigma}(\mathbf{k})|^{2}\) in Eq. (99).
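For reference, Eq. (99) can be evaluated directly with a few lines of code. The sketch below implements the formula as written; the value of \(\chi\) (taken here as \(w/(ac)\approx 0.93\) with \(a=1\)) and the use of \(\tilde{\omega}_{\mathbf{q}}=\sqrt{c^{2}\mathbf{q}^{2}+m^{2}}\) over the whole Brillouin zone are simplifying assumptions of this sketch.

```python
import numpy as np

# Sketch of Eq. (99); parameters follow the values quoted in the text where
# available (t = 1, t' = -0.35, Delta ~ 1.93, c ~ 0.988, m = 0.002, eps = T = 0.05).
t, tp, Delta = 1.0, -0.35, 1.93
c, chi, m = 0.988, 0.933, 0.002   # chi ~ w/(a c) is an assumption of this sketch
T = eps = 0.05
L = 34

q1 = 2 * np.pi * np.arange(L) / L - np.pi
QX, QY = np.meshgrid(q1, q1, indexing="ij")
wq = np.sqrt(c**2 * (QX**2 + QY**2) + m**2)       # \tilde{omega}_q
nB = 1.0 / np.expm1(wq / T)                       # Bose-Einstein occupation

def eps_k(kx, ky):
    return -2 * t * (np.cos(kx) + np.cos(ky)) - 4 * tp * np.cos(kx) * np.cos(ky)

def fFD(E):
    # overflow-safe Fermi-Dirac function
    return 0.5 * (1.0 - np.tanh(E / (2.0 * T)))

def spectral_weight(omega, kx, ky):
    e, eQ = eps_k(kx - QX, ky - QY), eps_k(kx - QX + np.pi, ky - QY + np.pi)
    root = np.sqrt((e - eQ) ** 2 + 4.0 * Delta**2)
    A = 0.0
    for s in (+1, -1):                            # mean-field bands E^{+/-}, Eq. (96)
        E = 0.5 * (e + eQ + s * root)
        u2 = 0.5 * (1.0 + s * (e - eQ) / root)    # coherence factor |u_s|^2
        for pm in (+1, -1):                       # boson emission / absorption poles
            A += np.sum(u2 / (chi * wq) * (eps / np.pi) * (nB + fFD(-pm * E))
                        / ((omega - E - pm * wq) ** 2 + eps**2))
    return 2.0 * A / L**2

print(spectral_weight(-1.73, np.pi / 2, np.pi / 2))
```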
Fluctuations will cause the AFM order to decrease. To illustrate the effect of a reduced order in the rotating frame on the spectral function, we show in Fig. 6 (b) the spectral weight at \(\omega=-0.53\) obtained using a smaller \(\Delta=0.5\). Note that both in Fig. 6 (a) and (b), the area inside the 'hole pockets' is \(\sim 10\%\) of the total Brillouin zone area. From Fig. 6 (b) we see that reducing \(\Delta\) elongates the contours of high spectral weight in the directions toward \((0,\pm\pi)\) and \((\pm\pi,0)\). Also the spectral weight on the backside of the contours is further reduced.
Before concluding we want to reiterate that the spectral weights shown in Figs. 6 (a-b) are those of a thermally disordered state, with no long-range AFM order. Nevertheless, the spectral weight retains important features of the \(T=0\) AFM state, and is strikingly different from the spectral weight of the conventional Fermi liquid at small \(U\).
## VIII Example II: Spin-Spiral Order in the Three-band Model
Having discussed the anti-ferromagnetic Mott insulator in the Hubbard model in the previous section, we now turn to the slightly more involved example of spin spiral order in the hole-doped three-band Hubbard model [26].
### Mean-field state and Goldstone mode energies
The three-band model is described by the following Hamiltonian:
\[\begin{split}H&=\sum_{i}\epsilon_{d}c_{1i}^{\dagger}c_{1i}+\epsilon_{p}\left(c_{2i}^{\dagger}c_{2i}+c_{3i}^{\dagger}c_{3i}\right)\\ &+\sum_{\langle ij\rangle}t_{pd}^{ij}\left(c_{1i}^{\dagger}c_{2j}+c_{1i}^{\dagger}c_{3j}+h.c.\right)+\sum_{\langle ij\rangle}t_{pp}^{ij}\left(c_{2i}^{\dagger}c_{3j}+h.c.\right)\\ &+\sum_{i}U_{d}n_{1i\uparrow}n_{1i\downarrow}+U_{p}\left(n_{2i\uparrow}n_{2i\downarrow}+n_{3i\uparrow}n_{3i\downarrow}\right)\,,\end{split} \tag{100}\]
where \(a=1,2,3\) respectively refers to the Copper \(d_{x^{2}-y^{2}}\), Oxygen \(2p_{x}\) and \(2p_{y}\) orbitals, \(s\) denotes the spin, and \(i,j\) label the unit cells. Note that following most of the literature on this model, we have formulated the Hamiltonian in terms of the hole degrees of freedom. The first two lines in Eq. (100) represent a potential energy difference for the Copper and Oxygen orbitals, and nearest and next-nearest neighbour hopping (in these terms, the spin summation is implicit). Note that the signs of the hopping parameters \(t_{pd}^{ij}\) and \(t_{pp}^{ij}\) depend on the relative orientation of the sites \(i\) and \(j\). See Fig. 7 for a graphical representation of the different hopping processes in the three-band model, and how the corresponding signs depend on the orientation. The last line in Eq. (100) contains the on-site Hubbard interactions for electrons in the Copper and Oxygen orbitals.
An important parameter in the three-band model is the charge-transfer parameter \(\delta=\epsilon_{p}-\epsilon_{d}>0\). Previous works [27; 28] have investigated the groundstate properties of the three-band model for a range of values of \(\delta\) using different numerical methods, including Hartree-Fock, and identified a parameter regime with spin-spiral order. To make contact with the results of [27], we choose the following values for the parameters of the three-band model: \(U_{d}=8.4,U_{p}=2.0,\epsilon_{d}=-7.6,\epsilon_{p}=-6.1,|t_{pd}^{ij}|=1.2,|t_{pp}^{ij}|=0.7\). Note that for these parameter values the charge-transfer parameter is \(\delta=1.5\). At half filling, these parameters lead to an insulating AFM ground state, with orbital hole densities \(n_{1}\approx 0.46\), \(n_{2}=n_{3}\approx 0.27\). Upon hole doping, the anti-ferromagnet becomes a circular spin spiral with wave vector \(\mathbf{Q}=(\pi,\pi-2\pi\eta)\) or \(\mathbf{Q}=(\pi-2\pi\eta,\pi)\), where \(\eta\) is the incommensurability of the spiral. In this work we investigate the ground state at a hole doping of \(1/8\).
Let us assume without loss of generality that the spiral order is in the XY plane.
Figure 7: Hopping processes in the three-band model. The copper \(d_{x^{2}-y^{2}}\) orbitals are denoted by the black dots while the Oxygen \(2p_{x}\) and \(2p_{y}\) orbitals are denoted by the red and blue crosses respectively. The notation for the hopping amplitudes is the same as in Eq. (100).
Figure 6: Spectral weight \(\mathcal{A}(\omega,\mathbf{k})\) defined in Eq. (99) with \(T=\epsilon=0.05\) and \(m=0.002\), normalized such that the maximal value is one. (a) Spectral weight at frequency \(\omega=-1.73\), using the mean-field value \(\Delta\approx 1.93\). (b) Spectral weight at frequency \(\omega=-0.53\), using a renormalized value \(\Delta=0.5\). In both (a) and (b), the total area contained in the pockets centered at \((\pm\pi/2,\pm\pi/2)\) is \(\sim 10\%\) of the total Brillouin zone area. The results were obtained by numerically interpolating the mean-field results obtained on a \(34\times 34\) momentum grid to a \(170\times 170\) grid, using a bivariate spline interpolation.
In this case, the spin expectation value is given by
\[\langle\mathbf{S}_{i}\rangle=\Delta\left(\cos(\mathbf{Q}\cdot\mathbf{r}_{i})\mathbf{\hat{x}}+\sin(\mathbf{Q}\cdot\mathbf{r}_{i})\mathbf{\hat{y}}\right)\,, \tag{101}\]
where \(\mathbf{S}_{i}=\frac{1}{2}\sum_{a,s,s^{\prime}}c^{\dagger}_{i,s,a}\boldsymbol{\sigma}_{ss^{\prime}}c_{i,s^{\prime},a}\). Similar to the AFM, the circular spiral order breaks both SU(2) spin symmetry and translation symmetry, but is invariant under the modified translation symmetry
\[T^{\prime}_{\mathbf{r}}=e^{i\mathbf{r}\cdot\mathbf{Q}s^{z}/2}T_{\mathbf{r}}\,, \tag{102}\]
which allows us to define a conserved pseudo-momentum \(\tilde{\mathbf{k}}\) as in Eq. (75).
The self-consistent Hartree-Fock mean-field Hamiltonian can then be written as
\[H_{HF}=\sum_{\tilde{\mathbf{k}}}c^{\dagger}_{\tilde{\mathbf{k}}}\begin{pmatrix} \varepsilon_{\tilde{\mathbf{k}},\uparrow}&\Delta_{\tilde{\mathbf{k}}}\\ \Delta_{\tilde{\mathbf{k}}}&\varepsilon_{\tilde{\mathbf{k}},\downarrow} \end{pmatrix}c_{\tilde{\mathbf{k}}}\,, \tag{103}\]
where \(c^{\dagger}_{\tilde{\mathbf{k}}}=(c^{\dagger}_{1,\tilde{\mathbf{k}},\uparrow },c^{\dagger}_{2,\tilde{\mathbf{k}},\uparrow},c^{\dagger}_{3,\tilde{\mathbf{k }},\uparrow},c^{\dagger}_{1\tilde{\mathbf{k}},\downarrow},c^{\dagger}_{2 \tilde{\mathbf{k}},\downarrow},c^{\dagger}_{3\tilde{\mathbf{k}},\downarrow})\) and \(\Delta_{\tilde{\mathbf{k}}}=\mathrm{diag}(\Delta_{1,\tilde{\mathbf{k}}},\Delta_ {2,\tilde{\mathbf{k}}},\Delta_{3,\tilde{\mathbf{k}}})\). Here, \(\varepsilon_{\tilde{\mathbf{k}}}\) are not the bare band energies since the contribution from the Hartree term is not just a simple overall energy shift, but rather, each orbital gets shifted by a different amount, corresponding to the orbital's hole density.
To solve the HF self-consistency equations, we have applied the following procedure. First, we used restricted HF with conserved pseudo-momentum, and minimized the energy for different values of \(\mathbf{Q}\). Even though the optimal \(\mathbf{Q}\) varies slightly with system size, it was consistently found to be \(\mathbf{Q}\approx(\pi,0.80\pi)\) (assuming spiral order along the \(y\)-direction). As a next step we used the optimal solution of the restricted HF as an initial seed for completely unrestricted HF, which did not assume any translational symmetry. The converged unrestricted HF yielded ground states with the same spiral order as the restricted HF, but with a small \(\delta n/n\sim 10^{-5}-10^{-3}\) charge density modulation, which may be a result of the incommensurability of the exact optimal \(\mathbf{Q}\) with the finite system size. The difference in energy between the spiral states with and without charge density modulation was found to decrease with system size. Our results are thus consistent with a circular spin spiral state with \(\mathbf{Q}=(\pi,\pi-2\pi\eta)\) and \(\eta\approx 0.10\), in agreement with Refs. [27; 29]. The spin hybridization parameters in the mean-field Hamiltonian were found to be \(\left(\Delta_{1,\tilde{\mathbf{k}}},\Delta_{2,\tilde{\mathbf{k}}},\Delta_{3,\tilde{\mathbf{k}}}\right)=(\Delta_{1},\Delta_{2},\Delta_{3})\approx(1.31,0.00,0.013)\).
The spiral order has three broken symmetry generators, and three Goldstone modes [30; 31]. Two of the modes are associated with rotating the plane of the spiral order (out-of-plane modes), and one mode corresponds to in-plane rotations. For an \(XY\) spiral, the out-of-plane modes are related to the broken \(s^{x},s^{y}\) generators while the in-plane mode is related to the broken \(s^{z}\) generator. We can understand the location of the Goldstone modes in the pseudo-momentum frame by relating them to the original frame, where they are all at \(\mathbf{q}=(0,0)\). The in-plane mode remains at zero, \(\tilde{\mathbf{q}}=(0,0)\), while the two out-of-plane modes go to \(\tilde{\mathbf{q}}=\pm\mathbf{Q}\). We obtain the collective mode spectrum by numerically solving Eq. (28) in the pseudo-momentum basis. This was first done over the entire pseudo-momentum Brillouin zone on a \(24\times 24\) grid, and subsequently on a larger \(40\times 40\) grid near the locations of the Goldstone modes. The energies for the low-energy collective modes are shown in Fig. 8, near \(\tilde{\mathbf{q}}=(0,0)\) and \(\tilde{\mathbf{q}}=\pm\mathbf{Q}\). We see that the Goldstone modes near \(\pm\mathbf{Q}\), which correspond to the out-of-plane fluctuations, lie outside the particle-hole continuum and hence will not be damped. The Goldstone mode near \(\tilde{\mathbf{q}}=(0,0)\), on the other hand, does lie in the particle-hole continuum, and hence will be Landau damped. These observations agree with Refs. [30; 31; 32]. The velocities of the out-of-plane Goldstone modes are anisotropic, and from a linear fit we find \(c_{x}\approx 0.32\) and \(c_{y}\approx 0.27\).
### Electron spectral function
Working in the original crystal momentum basis, and taking the spiral order to be in the XY plane, the fermion action takes the form
\[\begin{split}& S_{\psi}=\int_{0}^{\beta}\mathrm{d}\tau\sum_{\mathbf{k}}\sum_{s,a,b}\bar{\psi}_{\mathbf{k},s,a}(\delta_{ab}\partial_{\tau}+\varepsilon_{\mathbf{k},ab})\psi_{\mathbf{k},s,b}\\ &+\sum_{a}\Delta_{a}(\bar{\psi}_{\mathbf{k},\uparrow,a}\psi_{\mathbf{k}+\mathbf{Q},\downarrow,a}+\bar{\psi}_{\mathbf{k}+\mathbf{Q},\downarrow,a}\psi_{\mathbf{k},\uparrow,a})\,.\end{split} \tag{104}\]
As before, we assume that these fermion fields are already in the rotated frame. In a basis \((\psi_{\mathbf{k},\uparrow},\psi_{\mathbf{k}+\mathbf{Q},\downarrow})\), suppressing the orbital indices, the rotated-frame Green's function is given by
\[[G^{R}(i\omega,\mathbf{k})]^{-1}=\begin{pmatrix}i\omega\mathbb{1}-\varepsilon_ {\mathbf{k}}&-\Delta\\ -\Delta&i\omega\mathbb{1}-\varepsilon_{\mathbf{k}+\mathbf{Q}}\end{pmatrix}\,, \tag{105}\]
where \(\Delta,\varepsilon_{\mathbf{k}}\) are \(3\times 3\) matrices with orbital indices, with values as described in the previous section.
The spin-diagonal part of the Green's function can be written as
\[G^{R\,ab}_{ss}(i\omega,\mathbf{k})=\sum_{\alpha}\frac{u^{a,s}_{\alpha}(\mathbf{k})\,u^{b,s\,*}_{\alpha}(\mathbf{k})}{i\omega-E_{\mathbf{k},\alpha}}\,, \tag{106}\]

where \(E_{\mathbf{k},\alpha}\) and \(u_{\alpha}(\mathbf{k})\) are the eigenvalues and eigenvectors of the mean-field Hamiltonian defined by Eq. (105). The three Goldstone modes of the spiral are described by real fields \(\varphi_{x},\varphi_{y}\) (out-of-plane) and \(\varphi_{z}\) (in-plane), and the corresponding continuum Goldstone-mode action is then given by
\[\begin{split} S_{G}=\frac{1}{2}\int\mathrm{d}\tau&\int \mathrm{d}^{2}\mathbf{r}\sum_{n=x,y}\left(\chi^{\perp}(\partial_{\tau}\varphi_{ n})^{2}+\sum_{i=x,y}\rho_{i}^{\perp}(\partial_{i}\varphi_{n})^{2}\right)\\ &+\chi^{\square}(\partial_{\tau}\varphi_{z})^{2}+\sum_{i=x,y}\rho_ {i}^{\square}(\partial_{i}\varphi_{z})^{2}\,,\end{split} \tag{108}\]
where the out-of-plane and in-plane parameters are denoted by \(\perp\) and \(\square\). Note that terms involving \(\partial_{i}\varphi\partial_{j}\varphi\) with \(i\neq j\) are forbidden by the reflection symmetry of the spiral state.
By again using Eq. (88), we obtain the following non-linear sigma model action for the Goldstone modes:

\[\begin{split} S_{G}=&\frac{1}{2}\int_{\tau,\mathbf{r}}\sum_{n=x,y}\chi^{\perp}\left[\mathrm{tr}(R^{\dagger}i\partial_{\tau}Rs^{n})\right]^{2}+\rho_{j}^{\perp}\left[\mathrm{tr}(R^{\dagger}i\partial_{j}Rs^{n})\right]^{2}\\ &+\chi^{\square}\left[\mathrm{tr}(R^{\dagger}i\partial_{\tau}Rs^{z})\right]^{2}+\rho_{j}^{\square}\left[\mathrm{tr}(R^{\dagger}i\partial_{j}Rs^{z})\right]^{2}\,,\end{split} \tag{109}\]
where the sum over \(j=x,y\) is implicit. A large-N analysis of the non-linear sigma model [20] leads to the following result for the Goldstone propagator:
\[\langle R_{s_{2},s_{1}}^{c,a\;*}(\tau,\mathbf{r})R_{s_{2}^{\prime},s_{1}^{ \prime}}^{d,b}(0,0)\rangle=-D(\tau,\mathbf{r})\delta_{s_{1}s_{1}^{\prime}} \delta_{s_{2}s_{2}^{\prime}}\delta_{ad}\delta_{cb}\,, \tag{110}\]
where \(D(\tau,\mathbf{r})\) is the Fourier transform of
\[D(i\nu,\mathbf{q})=\frac{(\chi^{\perp})^{-1}}{(i\nu)^{2}-\tilde{\omega}_{ \mathbf{q}}^{2}}\,. \tag{111}\]
Here, \(\tilde{\omega}_{\mathbf{q}}=\sqrt{\rho_{\alpha}^{\perp}q_{\alpha}^{2}/\chi^{ \perp}+m^{2}}\) is the Goldstone mode dispersion, including the exponentially small (in \(T\)) spin-gap mass generated by the thermal fluctuations at non-zero temperature, as determined via the saddle point equations. The velocities \(c_{\alpha}=\sqrt{\rho_{\alpha}^{\perp}/\chi^{\perp}}\) are obtained from the collective mode spectrum.
Finally, we express the fermion Green's function as
\[G_{s_{1}s_{1}^{\prime}}^{ab}(\tau,\mathbf{r}) =-\langle R_{s_{2}s_{1}}^{c\;*}(\tau,\mathbf{r})\psi_{s_{2}}^{c}(\tau,\mathbf{r})\bar{\psi}_{s_{2}^{\prime}}^{d}(0,0)R_{s_{2}^{\prime}s_{1}^{\prime}}^{db}(0,0)\rangle_{\beta}\] \[\approx-\langle R_{s_{2}s_{1}}^{c\;*}(\tau,\mathbf{r})R_{s_{2}^{\prime}s_{1}^{\prime}}^{db}(0,0)\rangle\,G^{\mathrm{R}\;cd}_{s_{2}s_{2}^{\prime}}(\tau,\mathbf{r})\]
where we have again ignored the interaction between the Goldstone modes and the electrons in the rotating frame, so that the Green's function factorizes. Going to momentum space, and using Eqs. (110) and (111), we find that the fermion Green's function is given by
\[\begin{split} G_{ss^{\prime}}^{ab}(i\omega,\mathbf{k})& =-\delta_{ss^{\prime}}T\sum_{i\nu}\frac{1}{N_{x}N_{y}}\times\\ &\sum_{\mathbf{q}}\frac{(\chi^{\perp})^{-1}}{(i\nu)^{2}-\tilde{ \omega}_{\mathbf{q}}^{2}}\sum_{\sigma,\alpha}\frac{u_{\alpha}^{a,\sigma}( \mathbf{k}-\mathbf{q})u_{\alpha}^{b,\sigma\;*}(\mathbf{k}-\mathbf{q})}{i( \omega-\nu)-E_{\mathbf{k}-\mathbf{q},\alpha}}\,,\end{split} \tag{112}\]
from which we obtain the spectral weight.
In Fig. 9(a) we show the Fermi surfaces of the spiral state mean-field band structure in the pseudo-momentum Brillouin zone. The band spectrum clearly breaks \(C_{4}\) symmetry, in contrast to the AFM case, but retains a reflection symmetry about \(x\). The spectral weight of the physical electrons involves a spin sum, as seen in Eq. (112), which adds two copies of the Fermi surfaces of Fig. 9(a), with a relative shift of \(\mathbf{Q}\) and the associated coherence factors. This restores the reflection symmetry about \(y\), but not the \(C_{4}\) symmetry. To illustrate this, we have calculated the spectral weight of the following fermions:
\[f_{\mathbf{r}}^{\dagger}(\theta)=\cos\theta\,c_{1,\mathbf{r}}^{\dagger}+\frac{ \sin\theta}{2}\left(c_{2,\mathbf{r}}^{\dagger}-c_{3,\mathbf{r}}^{\dagger}-c_ {2,\mathbf{r}-x}^{\dagger}+c_{3,\mathbf{r}-y}^{\dagger}\right)\,, \tag{113}\]
Figure 8: Plots of the collective modes of the spin spiral state on a \(40\times 40\) grid, across different cuts of the Brillouin Zone. The red points show the Goldstone modes where they can be distinguished from the particle-hole continuum. The linear low-energy dispersion of the Goldstone modes near \(\tilde{\mathbf{q}}=\pm\mathbf{Q}=(\pi,\pm 0.8\pi)\) is clear in (a), (b) and (c). In (d), the Goldstone mode at \(\tilde{\mathbf{q}}=(0,0)\) is dissolved in the particle-hole continuum.
where the combination of orbitals in this operator is chosen to obey all the point group symmetries of the three-band model. We have optimized the spectral weight at the Fermi energy as a function of \(\theta\), and found that the maximal spectral weight occurs at \(\theta=\theta^{*}\approx 0.27\pi\). In Fig. 9(b) we plot the spectral weight of \(f_{\mathbf{r}}^{\dagger}(\theta^{*})\), which is indeed reflection symmetric, but breaks \(C_{4}\). Similar to the AFM case discussed above, the highest spectral weight is along four contours with a suppressed backside due to the coherence factors \(|u_{\alpha}^{a}(\mathbf{k})|^{2}\). However, compared to the AFM, the centers of the high spectral-weight contours are shifted along \(k_{y}\). Also, the fraction of the Brillouin zone area contained in the contours is _twice_ the number of doped holes per unit cell relative to half filling, i.e. \(1/4\) for the doping that we consider here. The implications of this were previously discussed in Ref. [33].
## IX Conclusions
We have shown that a naive application of mean-field theory + RPA produces scattering vertices between electrons and Goldstone modes which do not vanish in the long-wavelength limit. However, fermions which live in a frame that locally follows the order parameter fluctuations do decouple from \(\mathbf{q}=0\) Goldstone modes. This has important consequences for electron correlation functions, which by making use of the rotating frame naturally reflect the properties of thermally disordered systems which nevertheless retain features associated with a non-zero magnitude for the order parameter.
Although we have illustrated our formalism only for square-lattice Hubbard and three-band models, its applicability is far more general. In particular, it can also be used to study the broken-symmetry states in moire materials, such as the Incommensurate Kekule Spiral (IKS) state [34; 35; 36], which is observed experimentally in both magic-angle twisted bilayer [37] and trilayer [38] graphene. The IKS order is in fact very similar to the circular spin-spiral order that we have studied in the three-band model in Sec. VIII. We leave these applications for near-future work.
_Acknowledgements -_ N.B. would like to thank Patrick Ledwidth for helpful discussions about the manuscript, and Steve Kivelson for bringing Ref. [8] to our attention. This work was supported by a Leverhulme Trust International Professorship grant [number LIP-202-014]. For the purpose of Open Access, the authors have applied a CC BY public copyright licence to any Author Accepted Manuscript version arising from this submission. The research was also in part supported by the National Science Foundation under Grant No. NSF PHY-1748958. N.B. was supported by a University Research Fellowship of the Royal Society, and thanks Ecole Normale Superieure Paris, where part of this work was completed, for hospitality. YH acknowledges support from the European Research Council (ERC) under the European Union Horizon 2020 Research and Innovation Programme (Grant Agreement Nos. 804213-TMCS)
|
2305.02957 | A Monoidal View on Fixpoint Checks | Fixpoints are ubiquitous in computer science as they play a central role in
providing a meaning to recursive and cyclic definitions. Bisimilarity,
behavioural metrics, termination probabilities for Markov chains and stochastic
games are defined in terms of least or greatest fixpoints. Here we show that
our recent work which proposes a technique for checking whether the fixpoint of
a function is the least (or the largest) admits a natural categorical
interpretation in terms of gs-monoidal categories. The technique is based on a
construction that maps a function to a suitable approximation and the
compositionality properties of this mapping are naturally interpreted as a
gs-monoidal functor. This guides the realisation of a tool, called UDEfix that
allows to build functions (and their approximations) like a circuit out of
basic building blocks and subsequently perform the fixpoints checks. We also
show that a slight generalisation of the theory allows one to treat a new
relevant case study: coalgebraic behavioural metrics based on Wasserstein
liftings. | Paolo Baldan, Richard Eggert, Barbara König, Timo Matt, Tommaso Padoan | 2023-05-04T16:04:34Z | http://arxiv.org/abs/2305.02957v2 | # A Monoidal View on Fixpoint Checks+
###### Abstract
Fixpoints are ubiquitous in computer science as they play a central role in providing a meaning to recursive and cyclic definitions. Bisimilarity, behavioural metrics, termination probabilities for Markov chains and stochastic games are defined in terms of least or greatest fixpoints. Here we show that our recent work, which proposes a technique for checking whether the fixpoint of a function is the least (or the largest), admits a natural categorical interpretation in terms of gs-monoidal categories. The technique is based on a construction that maps a function to a suitable approximation and the compositionality properties of this mapping are naturally interpreted as a gs-monoidal functor. This guides the realisation of a tool, called UDEfix, that allows one to build functions (and their approximations) like a circuit out of basic building blocks and subsequently perform the fixpoint checks. We also show that a slight generalisation of the theory allows one to treat a new relevant case study: coalgebraic behavioural metrics based on Wasserstein liftings.
## 1 Introduction
For the compositional modelling of graphs and graph-like structures it has proven useful to use the notion of monoidal categories [16], i.e., categories equipped with a tensor product. There are several extensions of such categories, such as gs-monoidal categories that have been shown to be suitable for specifying term rewriting (see e.g. [14, 15]). In essence gs-monoidal categories describe graph-like structures with dedicated input and output interfaces, operators for disjoint union (tensor), duplication and termination of wires, quotiented by the axioms satisfied by these operators. Particularly useful are gs-monoidal functors that preserve such operators and hence naturally describe compositional operations.
We show that gs-monoidal categories and the composition concepts that come with them can be fruitfully used in a scenario that - at first sight - might seem quite unrelated: methods for fixpoint checks. In particular, we build upon [7] where a theory is proposed for checking whether a fixpoint of a given function is the least (or the greatest) fixpoint. The theory applies to a variety of fairly diverse
application scenarios, such as bisimilarity [20], behavioural metrics [12, 23, 9, 4], termination probabilities for Markov chains [3] and simple stochastic games [10].
More precisely, the theory above deals with non-expansive functions \(f\colon\mathbb{M}^{Y}\to\mathbb{M}^{Y}\), where \(\mathbb{M}\) is a set of values (namely, an MV-chain) and \(Y\) is a finite set. The rough idea consists in mapping such functions to corresponding approximations, whose fixpoints can be computed effectively and give insights into the fixpoints of the original function.
We show that the approximation framework and its compositionality properties can be naturally interpreted in categorical terms. This is done by introducing two gs-monoidal categories in which the concrete functions and, respectively, their approximations live as arrows, together with a gs-monoidal functor, called \(\#\), mapping one to the other. Besides shedding further light on the theoretical approximation framework of [7], this view guided the realisation of a tool, called UDEfix, that allows one to build functions (and their approximations) like a circuit out of basic building blocks and subsequently perform the fixpoint checks.
We also show that the functor \(\#\) can be extended to deal with functions \(f:\mathbb{M}^{Y}\to\mathbb{M}^{Y}\) where \(Y\) is not necessarily finite, becoming a lax functor. We prove some properties of this functor that enable us to give a recipe for finding approximations for a special type of functions: predicate liftings that have been introduced for coalgebraic modal logic [18, 21]. This recipe allows us to include a new case study for the machinery for fixpoint checking: coalgebraic behavioural metrics, based on Wasserstein liftings.
The paper is organized as follows: In Section 2 we give some high-level motivation, while in Section 3 we review the theory from [7]. Subsequently in Section 4 we introduce two (gs-monoidal) categories \(\mathbb{C}\), \(\mathbb{A}\) (of concrete and abstract functions), show that the approximation \(\#\) is a (lax) functor between these categories and prove some of its properties, which are used to handle predicate liftings (Section 5) and behavioural metrics (Section 6). Next, we show that the categories \(\mathbb{C}\), \(\mathbb{A}\) and the functor \(\#\) are indeed gs-monoidal (Section 7) and lastly discuss the tool UDEfix in Section 8. We end by giving a conclusion (Section 9). Proofs and further material can be found in the appendix.
## 2 Motivation
We start with some motivations for our theory and the tool UDEfix, which is based on it, via a case study on behavioural metrics. We consider probabilistic transition systems (Markov chains) with labelled states, given by a finite set of states \(X\), a function \(\delta\colon X\to\mathcal{D}(X)\) mapping each state \(x\in X\) to a probability distribution on \(X\) and a labelling function \(\ell\colon X\to\Lambda\), where \(\Lambda\) is a fixed set of labels (for examples see Figure 1). Our aim is to determine the behavioural distance of two states, whose definition is based on the so-called Kantorovich or Wasserstein lifting [24] that measures the distance of two probability distributions on \(X\), based on a distance \(d\colon X\times X\to[0,1]\). In more detail: given \(d\), we
define \(d^{\mathcal{D}}\colon\mathcal{D}(X)\times\mathcal{D}(X)\to[0,1]\) as
\[d^{\mathcal{D}}(p_{1},p_{2})=\inf\{\sum_{x_{1},x_{2}\in X}d(x_{1},x_{2})\cdot t( x_{1},x_{2})\mid t\in\Gamma(p_{1},p_{2})\}\]
where \(\Gamma(p_{1},p_{2})\) is the set of couplings of \(p_{1},p_{2}\) (i.e., distributions \(t\colon X\times X\to[0,1]\) such that \(\sum_{x_{2}\in X}t(x_{1},x_{2})=p_{1}(x_{1})\), \(\sum_{x_{1}\in X}t(x_{1},x_{2})=p_{2}(x_{2})\)). The Wasserstein lifting gives in fact the solution of a transport problem, where we interpret \(p_{1},p_{2}\) as the supply respectively demand at each point \(x\in X\). Transporting a unit from \(x_{1}\) to \(x_{2}\) costs \(d(x_{1},x_{2})\) and \(t\) is a transport plan (= coupling) whose marginals are \(p_{1},p_{2}\). In other words: \(d^{\mathcal{D}}(p_{1},p_{2})\) is the cost of the optimal transport plan, moving the supply \(p_{1}\) to the demand \(p_{2}\).
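Since \(X\) is finite, \(d^{\mathcal{D}}(p_{1},p_{2})\) is the optimal value of a small linear program over the couplings. The following sketch is purely illustrative (it is not code from [7] nor from the tool discussed later; the function name `wasserstein` is ours) and uses `scipy.optimize.linprog`:

```python
from scipy.optimize import linprog

def wasserstein(d, p1, p2):
    """Wasserstein lifting d^D(p1, p2) for finitely supported distributions.

    d  : dict mapping pairs (x1, x2) to a distance in [0, 1]
    p1 : dict mapping states to probabilities (the supply)
    p2 : dict mapping states to probabilities (the demand)
    """
    pairs = [(x1, x2) for x1 in p1 for x2 in p2]
    cost = [d[q] for q in pairs]
    # Marginal constraints: rows of t sum to p1, columns of t sum to p2.
    A_eq = [[1.0 if q[0] == x1 else 0.0 for q in pairs] for x1 in p1] + \
           [[1.0 if q[1] == x2 else 0.0 for q in pairs] for x2 in p2]
    b_eq = [p1[x1] for x1 in p1] + [p2[x2] for x2 in p2]
    res = linprog(cost, A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
    return res.fun  # cost of an optimal transport plan
```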
The behavioural metric is then defined as the least fixpoint of the function \(f\colon[0,1]^{X\times X}\to[0,1]^{X\times X}\) where \(f(d)(x_{1},x_{2})=1\) if \(\ell(x_{1})\neq\ell(x_{2})\) and \(f(d)(x_{1},x_{2})=d^{\mathcal{D}}(\delta(x_{1}),\delta(x_{2}))\) otherwise. For instance, the best transport plan for the system on the left-hand side of Figure 1 and the distributions \(\delta(1),\delta(2)\) is \(t\) with \(t(3,3)=\nicefrac{{1}}{{3}}\), \(t(3,4)=\nicefrac{{1}}{{6}}\), \(t(4,4)=\nicefrac{{1}}{{2}}\) and \(0\) otherwise.
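One application of \(f\) is then a plain sweep over all state pairs; the sketch below (again only an illustration) reuses `wasserstein` from above, with the labelling \(\ell\) and the transition function \(\delta\) given as dictionaries:

```python
def bhv_step(ell, delta, d):
    """One application of f: f(d)(x1, x2) = 1 if labels differ,
    and d^D(delta(x1), delta(x2)) otherwise."""
    return {(x1, x2): (1.0 if ell[x1] != ell[x2]
                       else wasserstein(d, delta[x1], delta[x2]))
            for (x1, x2) in d}
```

Iterating `bhv_step` from the constant-\(0\) map yields increasing lower bounds on the behavioural metric.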
One can observe that the function \(f\) can be decomposed as
\[f=\max_{\rho}\circ(c_{k}+(\delta\times\delta)^{*}\circ\min_{u}\circ\tilde{ \mathcal{D}}),\]
where \(+\) stands for disjoint union and we use the functions given in Table 1. More concretely, the types of the components and the parameters \(k,u,\rho\) are given as follows, where \(Y=X\times X\):
Footnote 4: If the underlying sets are infinite, min, max can be replaced by inf, sup.
* \(c_{k}\colon[0,1]^{\emptyset}\to[0,1]^{Y}\) where \(k(x,x^{\prime})=1\) if \(\ell(x)\neq\ell(x^{\prime})\) and \(0\) otherwise.
* \(\tilde{D}\colon[0,1]^{Y}\to[0,1]^{\mathcal{D}(Y)}\).
* \(\min_{u}\colon[0,1]^{\mathcal{D}(Y)}\to[0,1]^{\mathcal{D}(X)\times\mathcal{D} (X)}\) where \(u\colon\mathcal{D}(Y)\to\mathcal{D}(X)\times\mathcal{D}(X)\), \(u(t)=(p,q)\) with \(p(x)=\sum_{x^{\prime}\in X}t(x,x^{\prime})\), \(q(x)=\sum_{x^{\prime}\in X}t(x^{\prime},x)\).
* \((\delta\times\delta)^{*}\colon[0,1]^{\mathcal{D}(X)\times\mathcal{D}(X)}\to[0,1]^{Y}\).
* \(\max_{\rho}\colon[0,1]^{Y+Y}\to[0,1]^{Y}\) where \(\rho\colon Y+Y\to Y\) is the obvious map from the coproduct (disjoint union) \(Y+Y\) to \(Y\).
In fact this decomposition can be depicted diagrammatically, as in Figure 2.
Figure 1: Two probabilistic transition systems.
The function \(f\) is a monotone function on a complete lattice, hence it has a least fixpoint by Knaster-Tarski's fixpoint theorem [22], which is the behavioural metric. By giving a transport plan as above, it is possible to provide an upper bound for the Wasserstein lifting and hence there are strategy iteration algorithms that can approach a fixpoint from above. The problem with these algorithms is that they might get stuck at a fixpoint that is not the least. Hence, it is essential to be able to determine whether a given fixpoint is indeed the smallest one (cf. [2]).
Consider for instance the transition system in Figure 1 on the right. It contains two states \(1,2\) on a cycle. In fact these two states should be indistinguishable and hence \(d(1,2)=d(2,1)=0\) if \(d=\mu f\) is the least fixpoint of \(f\). However, the metric \(a\) with \(a(1,2)=a(2,1)=1\) (\(0\) otherwise) is also a fixpoint and the question is how to determine that it is not the least.
For this, we use the techniques developed in [7] that require in particular that \(f\) is non-expansive (i.e., given two metrics \(d_{1},d_{2}\), the sup-distance of \(f(d_{1}),f(d_{2})\) is smaller than or equal to the sup-distance of \(d_{1},d_{2}\)). In this case we can associate \(f\) with an approximation \(f^{a}_{\#}\) on subsets of \(X\times X\) such that, given \(Y^{\prime}\subseteq X\times X\), \(f^{a}_{\#}(Y^{\prime})\) intuitively contains all pairs \((x_{1},x_{2})\) such that, decreasing the function \(a\) by some value \(\delta\) over \(Y^{\prime}\), resulting in a function \(b\) (defined as \(b(x_{1},x_{2})=a(x_{1},x_{2})-\delta\) if \((x_{1},x_{2})\in Y^{\prime}\) and \(b(x_{1},x_{2})=a(x_{1},x_{2})\) otherwise) and applying \(f\), we obtain a function \(f(b)\), where the same decrease took place at \((x_{1},x_{2})\) (i.e., \(f(b)(x_{1},x_{2})=f(a)(x_{1},x_{2})-\delta\)). More concretely, here \(f^{a}_{\#}(\{(1,2)\})=\{(2,1)\}\), since a decrease at \((1,2)\) will cause a decrease at \((2,1)\) in the next iteration. In fact the greatest fixpoint of \(f^{a}_{\#}\) (here: \(\{(1,2),(2,1)\}\)) gives us those elements that have a potential for decrease (intuitively there is "slack" or "wiggle room") and form a vicious cycle as above. It holds that \(a\) is the least fixpoint of \(f\) iff the greatest fixpoint of \(f^{a}_{\#}\) is the empty set, a non-trivial result [7, 5].
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \hline Function & \(c_{k}\) & \(g^{*}\) & \(\min_{u}\) & \(\max_{u}\) & \(\mathrm{av}_{D}=\tilde{D}\) \\ & \(k\colon Z\to\mathbb{M}\) & \(g\colon Z\to Y\) & \(u\colon Y\to Z\) & \(u\colon Y\to Z\) & \(\mathbb{M}=[0,1]\), \(Z=\mathcal{D}(Y)\) \\ \hline Name & constant & reindexing & minimum & maximum & expectation \\ \hline \(a\mapsto\ldots\) & \(k\) & \(a\circ g\) & \(\lambda z.\min\limits_{u(y)=z}a(y)\) & \(\lambda z.\max\limits_{u(y)=z}a(y)\) & \(\lambda z.\sum\limits_{y\in Y}z(y)\cdot a(y)\) \\ \hline \end{tabular}
\end{table}
Table 1: Basic functions of type \(\mathbb{M}^{Y}\to\mathbb{M}^{Z}\), \(a\colon Y\to\mathbb{M}\).
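For finite index sets the basic functions of Table 1 are immediate to code. The sketch below is our own illustration (it is not the interface of the tool discussed later): a map \(a\in\mathbb{M}^{Y}\) is a dictionary, \(\mathbb{M}=[0,1]\) is fixed (so empty minima and maxima default to \(1\) and \(0\)), and \(\mathcal{D}(Y)\) is restricted to the finitely many distributions of interest:

```python
def constant(k):                 # c_k: ignores its argument and returns k
    return lambda a: dict(k)

def reindex(g):                  # g*: a |-> a ∘ g, for g given as a dict Z -> Y
    return lambda a: {z: a[y] for z, y in g.items()}

def minimum(u, Z):               # min_u: a |-> (z |-> min of a(y) over u(y) = z)
    return lambda a: {z: min([a[y] for y in a if u[y] == z], default=1.0)
                      for z in Z}

def maximum(u, Z):               # max_u: a |-> (z |-> max of a(y) over u(y) = z)
    return lambda a: {z: max([a[y] for y in a if u[y] == z], default=0.0)
                      for z in Z}

def average(dists):              # av_D on finitely many distributions over Y,
    return lambda a: {z: sum(p * a[y] for y, p in dists[z].items())
                      for z in dists}    # indexed by the keys of `dists`

def compose(*fs):                # f1 ∘ f2 ∘ ... (rightmost applied first)
    def run(a):
        for f in reversed(fs):
            a = f(a)
        return a
    return run
```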
Figure 2: Decomposition of the fixpoint function for computing behavioural metrics.
The importance of the decomposition stems from the fact that the approximation is compositional, that is, \(f^{a}_{\#}\) can be built out of the approximations of \(\max_{\rho}\), \(c_{k}\), \((\delta\times\delta)^{*}\), \(\min_{u}\), \(\tilde{D}=\mathrm{av}_{D}\), which can be easily determined (see [7] and Table 2 in the appendix). For general functors, beyond the distribution functor, the characterization is however still missing and will be provided in this paper.
We anticipate that in our tool \(\mathsf{UDEfix}\) we can draw a diagram as in Figure 2, from which the approximation and its greatest fixpoint are automatically computed in a compositional way, allowing us to perform such fixpoint checks.
## 3 Preliminaries
This section reviews some background used throughout the paper. This includes the basics of lattices and MV-algebras, where the functions of interest take values. We also recap some results from [7] useful for detecting if a fixpoint of a given function is the least (or greatest).
We will also need some standard notions from _category theory_, in particular categories, functors and natural transformations. The definition of (strict) gs-monoidal categories is spelled out in detail in Definition 7.1.
For sets \(X,Y\), we denote by \(\mathcal{P}(X)\) the powerset of \(X\) and \(\mathcal{P}_{f}(X)\) the set of finite subsets of \(X\). The set of functions from \(X\) to \(Y\) is denoted by \(Y^{X}\).
A partially ordered set \((P,\sqsubseteq)\) is often denoted simply as \(P\), omitting the order relation. The _join_ and the _meet_ of a subset \(X\subseteq P\) (if they exist) are denoted \(\bigsqcup X\) and \(\bigsqcap X\). We write \(x\sqsubset y\) when \(x\sqsubseteq y\) and \(x\neq y\).
A _complete lattice_ is a partially ordered set \((\mathbb{L},\sqsubseteq)\) such that each subset \(X\subseteq\mathbb{L}\) admits a join \(\bigsqcup X\) and a meet \(\bigsqcap X\). A complete lattice \((\mathbb{L},\sqsubseteq)\) always has a least element \(\bot=\bigsqcap\mathbb{L}\) and a greatest element \(\top=\bigsqcup\mathbb{L}\).
A function \(f:\mathbb{L}\to\mathbb{L}\) is _monotone_ if for all \(l,l^{\prime}\in\mathbb{L}\), if \(l\sqsubseteq l^{\prime}\) then \(f(l)\sqsubseteq f(l^{\prime})\). By Knaster-Tarski's theorem [22, Theorem 1], any monotone function on a complete lattice has a least fixpoint \(\mu f\) and a greatest fixpoint \(\nu f\).
For a set \(Y\) and a complete lattice \(\mathbb{L}\), the set of functions \(\mathbb{L}^{Y}\), with pointwise order (for \(a,b\in\mathbb{L}^{Y}\), \(a\sqsubseteq b\) if \(a(y)\sqsubseteq b(y)\) for all \(y\in Y\)), is a complete lattice.
We are also interested in the set of finitely supported probability distributions \(\mathcal{D}(Y)\subseteq[0,1]^{Y}\), i.e., functions \(\beta:Y\to[0,1]\) with finite support such that \(\sum_{y\in Y}\beta(y)=1\).
An _MV-algebra_[17] is a tuple \(\mathbb{M}=(M,\oplus,0,\overline{(\cdot)})\) where \((M,\oplus,0)\) is a commutative monoid and \(\overline{(\cdot)}:M\to M\) maps each element to its _complement_, such that for all \(x,y\in M\) (i) \(\overline{\overline{x}}=x\); (ii) \(x\oplus\overline{0}=\overline{0}\); (iii) \(\overline{(\overline{x}\oplus y)}\oplus y=\overline{(\overline{y}\oplus x)}\oplus x\).
We define \(1=\overline{0}\) and subtraction \(x\ominus y=\overline{\overline{x}\oplus y}\).
MV-algebras are endowed with a partial order, the so-called _natural order_, defined for \(x,y\in M\), by \(x\sqsubseteq y\) if \(x\oplus z=y\) for some \(z\in M\). When \(\sqsubseteq\) is total, \(\mathbb{M}\) is called an _MV-chain_. We will write \(\mathbb{M}\) instead of \(M\).
The natural order gives an MV-algebra a lattice structure where \(\bot=0\), \(\top=1\), \(x\sqcup y=(x\ominus y)\oplus y\) and \(x\sqcap y=\overline{\overline{x}\sqcup\overline{y}}=x\ominus(x\ominus y)\). We call the MV-algebra _complete_ if it is a complete lattice, which is not true in general, e.g., \(([0,1]\cap\mathbb{Q},\leq)\).
**Example 3.1**.: _A prototypical MV-algebra is \(([0,1],\oplus,0,(\overline{\cdot}))\) where \(x\oplus y=\min\{x+y,1\}\), \(\overline{x}=1-x\) and \(x\ominus y=\max\{0,x-y\}\) for \(x,y\in[0,1]\). The natural order is \(\leq\) (less or equal) on the reals. Another example is \(K=(\{0,\ldots,k\},\oplus,0,\overline{(\cdot)})\) where \(n\oplus m=\min\{n+m,k\}\), \(\overline{n}=k-n\) and \(n\ominus m=\max\{n-m,0\}\) for \(n,m\in\{0,\ldots,k\}\). Both MV-algebras are complete and MV-chains._
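For concreteness, the operations of the MV-chain \([0,1]\) can be written down directly (a one-line sketch each, ours; the variant for \(\{0,\ldots,k\}\) is analogous with integer truncation):

```python
# MV-chain M = [0,1]: truncated addition, complement, and derived operations
def oplus(x, y):   return min(x + y, 1.0)         # x ⊕ y
def neg(x):        return 1.0 - x                  # complement of x
def ominus(x, y):  return max(x - y, 0.0)          # x ⊖ y
def join(x, y):    return oplus(ominus(x, y), y)   # x ⊔ y, equals max(x, y)
def meet(x, y):    return ominus(x, ominus(x, y))  # x ⊓ y, equals min(x, y)
```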
We next briefly recap the theory from [7] which will be helpful in the paper for checking whether a fixpoint is the least or the greatest fixpoint of some underlying endo-function. For the purposes of the present paper we actually need a generalisation of the theory which provides the approximation also for functions with an infinite domain (while the theory in [7] was restricted to finite sets). Hence in the following, sets \(Y\) and \(Z\) are possibly infinite.
Given \(a\in\mathbb{M}^{Y}\) we define its _norm_ as \(\|a\|=\sup\{a(y)\mid y\in Y\}\). A function \(f:\mathbb{M}^{Y}\to\mathbb{M}^{Z}\) is _non-expansive_ if for all \(a,b\in\mathbb{M}^{Y}\) it holds \(\|f(b)\ominus f(a)\|\sqsubseteq\|b\ominus a\|\). It can be seen that non-expansive functions are monotone. A number of standard operators are non-expansive (e.g., constants, reindexing, max and min over a relation, average in Table 1), and non-expansiveness is preserved by composition and disjoint union (see [7]). Given \(Y^{\prime}\subseteq Y\) and \(\delta\in\mathbb{M}\), we write \(\delta_{Y^{\prime}}\) for the function defined by \(\delta_{Y^{\prime}}(y)=\delta\) if \(y\in Y^{\prime}\) and \(\delta_{Y^{\prime}}(y)=0\), otherwise.
Let \(f:\mathbb{M}^{Y}\to\mathbb{M}^{Y}\), \(a\in\mathbb{M}^{Y}\) and \(0\sqsubset\delta\in\mathbb{M}\). Define \([Y]^{a}=\{y\in Y\mid a(y)\neq 0\}\) and consider the functions \(\alpha^{a,\delta}:\mathcal{P}([Y]^{a})\to[a\ominus\delta,a]\) and \(\gamma^{a,\delta}:[a\ominus\delta,a]\to\mathcal{P}([Y]^{a})\), defined, for \(Y^{\prime}\in\mathcal{P}([Y]^{a})\) and \(b\in[a\ominus\delta,a]\), by
\[\alpha^{a,\delta}(Y^{\prime})=a\ominus\delta_{Y^{\prime}}\qquad\qquad\gamma^{ a,\delta}(b)=\{y\in[Y]^{a}\mid a(y)\ominus b(y)\sqsupseteq\delta\}.\]
Here \([a,b]=\{c\in\mathbb{M}^{Y}\mid a\sqsubseteq c\sqsubseteq b\}\). In fact, for suitable values of \(\delta\), the functions \(\alpha^{a,\delta},\gamma^{a,\delta}\) form a Galois connection.
For a non-expansive function \(f:\mathbb{M}^{Y}\to\mathbb{M}^{Z}\) and \(\delta\in\mathbb{M}\), define \(f^{a,\delta}_{\#}\colon\mathcal{P}([Y]^{a})\to\mathcal{P}([Z]^{f(a)})\) as \(f^{a,\delta}_{\#}=\gamma^{f(a),\delta}\circ f\circ\alpha^{a,\delta}\). The function \(f^{a,\delta}_{\#}\) is antitone in the parameter \(\delta\) and we define the _\(a\)-approximation_ of \(f\) as
\[f^{a}_{\#}=\bigcup_{\delta\sqsupset 0}f^{a,\delta}_{\#}.\]
For finite sets \(Y\) and \(Z\) there exists a suitable value \(\iota^{a}_{f}\sqsupset 0\), such that all functions \(f^{a,\delta}_{\#}\) for \(0\sqsubset\delta\sqsubseteq\iota^{a}_{f}\) are equal. Here, the \(a\)-approximation is given by \(f^{a}_{\#}=f^{a,\delta}_{\#}\) for \(\delta=\iota^{a}_{f}\).
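For finite \(Y\) and \(Z\) (and \(\mathbb{M}=[0,1]\)) the maps \(\alpha^{a,\delta}\), \(\gamma^{a,\delta}\) and hence \(f^{a,\delta}_{\#}\) can be computed literally from the definitions. A minimal sketch (ours), assuming a non-expansive \(f\) given as a Python function on dictionaries and a sufficiently small \(\delta\sqsubseteq\iota^{a}_{f}\):

```python
def support(a):                  # [Y]^a = {y | a(y) != 0}
    return {y for y, v in a.items() if v != 0}

def alpha(a, delta, Yp):         # alpha^{a,delta}(Y') = a ⊖ delta_{Y'}
    return {y: max(v - delta, 0.0) if y in Yp else v for y, v in a.items()}

def gamma(a, delta, b):          # gamma^{a,delta}(b) = {y in [Y]^a | a(y) ⊖ b(y) >= delta}
    return {y for y in support(a) if a[y] - b[y] >= delta}

def approx(f, a, delta):         # f_#^{a,delta} = gamma^{f(a),delta} ∘ f ∘ alpha^{a,delta}
    fa = f(a)
    return lambda Yp: gamma(fa, delta, f(alpha(a, delta, Yp)))
```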
Intuitively, given some \(Y^{\prime}\), the set \(f^{a}_{\#}(Y^{\prime})\) contains the points where a decrease of the values of \(a\) on the points in \(Y^{\prime}\) "propagates" through the function \(f\). The greatest fixpoint of \(f^{a}_{\#}\) gives us the subset of \(Y\) where such a decrease is propagated in a cycle (so-called "vicious cycle"). Whenever \(\nu f^{a}_{\#}\) is non-empty, one can argue that \(a\) cannot be the least fixpoint of \(f\) since we can decrease the value in all elements of \(\nu f^{a}_{\#}\), obtaining a smaller prefixpoint. Interestingly, for non-expansive functions, it is shown in [7] that also the converse holds, i.e., emptiness of the greatest fixpoint of \(f^{a}_{\#}\) implies that \(a\) is the least fixpoint.
**Theorem 3.2**: **(soundness and completeness for fixpoints).** _Let \(\mathbb{M}\) be a complete MV-chain, \(Y\) a finite set and \(f:\mathbb{M}^{Y}\to\mathbb{M}^{Y}\) be a non-expansive function. Let \(a\in\mathbb{M}^{Y}\) be a fixpoint of \(f\). Then \(\nu f^{a}_{\#}=\emptyset\) if and only if \(a=\mu f\)._
Using the above theorem we can check whether some fixpoint \(a\) of \(f\) is the least fixpoint. Whenever \(a\) is a fixpoint, but not yet the least fixpoint of \(f\), it can be decreased by a fixed value in \(\mathbb{M}\) (see [7] for the details) on the points in \(\nu f^{a}_{\#}\) to obtain a smaller pre-fixpoint.
**Lemma 3.3**: _Let \(\mathbb{M}\) be a complete MV-chain, \(Y\) a finite set and \(f:\mathbb{M}^{Y}\to\mathbb{M}^{Y}\) a non-expansive function, \(a\in\mathbb{M}^{Y}\) a fixpoint of \(f\), and let \(f^{a}_{\#}\) be the corresponding \(a\)-approximation. If \(a\) is not the least fixpoint and thus \(\nu f^{a}_{\#}\neq\emptyset\) then there is \(0\sqsubset\delta\in\mathbb{M}\) such that \(a\ominus\delta_{\nu f^{a}_{\#}}\) is a pre-fixpoint of \(f\)._
The above theory can easily be dualised [7].
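Computationally, the check of Theorem 3.2 only needs the greatest fixpoint of \(f^{a}_{\#}\) on the finite lattice \(\mathcal{P}([Y]^{a})\), which is reached by iterating from the full set. A sketch (ours), reusing `approx` and `support` from above and again assuming a suitable \(\delta\):

```python
def gfp(F, top):
    """Greatest fixpoint of a monotone set function F on subsets of `top`,
    computed by descending Kleene iteration from the full set."""
    cur = set(top)
    while True:
        nxt = F(cur)
        if nxt == cur:
            return cur
        cur = nxt

def is_least_fixpoint(f, a, delta):
    """Emptiness check of Theorem 3.2; a is assumed to be a fixpoint of f."""
    return len(gfp(approx(f, a, delta), support(a))) == 0
```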
## 4 A Categorical View of the Approximation Framework
The framework from [7], summarized in the previous section, is not based on category theory, but - as we shall see - can be naturally reformulated in a categorical setting. In particular, casting the compositionality results into a monoidal structure (see Section 7) is a valuable basis for our tool. But first, we will show how the operation \(\#\) of taking the \(a\)-approximation of a function can be seen as a (lax) functor between two categories: a concrete category \(\mathbb{C}\) whose arrows are the non-expansive functions for which we seek the least (or greatest) fixpoint and an abstract category \(\mathbb{A}\) whose arrows are the corresponding approximations.
More precisely, recall that given a non-expansive function \(f:\mathbb{M}^{Y}\to\mathbb{M}^{Z}\), the approximation of \(f\) is relative to a fixed map \(a\in\mathbb{M}^{Y}\). Hence objects in \(\mathbb{C}\) are elements \(a\in\mathbb{M}^{Y}\) and an arrow from \(a\in\mathbb{M}^{Y}\) to \(b\in\mathbb{M}^{Z}\) is a non-expansive function \(f:\mathbb{M}^{Y}\to\mathbb{M}^{Z}\) required to map \(a\) into \(b\). The approximations instead live in \(\mathbb{A}\). Recall that the approximation is \(f^{a}_{\#}:\mathcal{P}([Y]^{a})\to\mathcal{P}([Z]^{b})\). Since their domains and codomains are dependent again on a map \(a\), we still employ elements of \(\mathbb{M}^{Y}\) as objects, but functions between powersets as arrows.
**Definition 4.1**: **(concrete and abstract categories).** _The concrete category \(\mathbb{C}\) has as objects maps \(a\in\mathbb{M}^{Y}\) where \(Y\) is a (possibly infinite) set. Given \(a\in\mathbb{M}^{Y}\), \(b\in\mathbb{M}^{Z}\) an arrow \(f:a\dashrightarrow b\) is a non-expansive function \(f\colon\mathbb{M}^{Y}\to\mathbb{M}^{Z}\), such that \(f(a)=b\). The abstract category \(\mathbb{A}\) has again maps \(a\in\mathbb{M}^{Y}\) as objects. Given \(a\in\mathbb{M}^{Y}\), \(b\in\mathbb{M}^{Z}\) an arrow \(f:a\dashrightarrow b\) is a monotone (wrt. inclusion) function \(f\colon\mathcal{P}([Y]^{a})\to\mathcal{P}([Z]^{b})\). Arrow composition and identities are the obvious ones._
_The lax functor \(\#\colon\mathbb{C}\to\mathbb{A}\) is defined as follows: for an object \(a\in\mathbb{M}^{Y}\), we let \(\#(a)=a\) and, given an arrow \(f:a\dashrightarrow b\), we let \(\#(f)=f^{a}_{\#}\)._
Note that abstract arrows are dashed (\(\dashrightarrow\)), while the underlying functions are represented by standard arrows (\(\to\)).
**Lemma 4.2**: **(well-definedness).** _The categories \(\mathbb{C}\) and \(\mathbb{A}\) are well-defined and \(\#\) is a lax functor, i.e., identities are preserved and \(\#(f\circ g)\subseteq\#(f)\circ\#(g)\) for composable arrows \(f,g\) in \(\mathbb{C}\)._
It will be convenient to restrict to the subcategory of \(\mathbb{C}\) where arrows are reindexings and to subcategories of \(\mathbb{C}\), \(\mathbb{A}\) with maps on finite sets.
**Definition 4.3**: **(reindexing subcategory).** _We denote by \(\mathbb{C}^{*}\) the lluf subcategory of \(\mathbb{C}\) where arrows are reindexings, i.e., given objects \(a\in\mathbb{M}^{Y}\), \(b\in\mathbb{M}^{Z}\) we consider only arrows \(f:a\dashrightarrow b\) such that \(f=g^{*}\) for some \(g\colon Z\to Y\) (hence, in particular, \(b=g^{*}(a)=a\circ g\)). We denote by \(E\colon\mathbb{C}^{*}\hookrightarrow\mathbb{C}\) the embedding functor._
Footnote 5: A _lluf sub-category_ is a sub-category that contains all objects.
**Definition 4.4**: **(finite subcategories).** _We denote by \(\mathbb{C}_{f}\), \(\mathbb{A}_{f}\) the full subcategories of \(\mathbb{C},\mathbb{A}\) where objects are of the kind \(a\in\mathbb{M}^{Y}\) for a finite set \(Y\)._
**Lemma 4.5**.: _The lax functor \(\#\colon\mathbb{C}\rightarrow\mathbb{A}\) restricts to \(\#\colon\mathbb{C}_{f}\rightarrow\mathbb{A}_{f}\), which is a (proper) functor._
## 5 Predicate Liftings
In this section we discuss how predicate liftings [18, 21] can be integrated into our theory. In this context the idea is to view a map in \(\mathbb{M}^{Y}\) as a predicate over \(Y\) with values in \(\mathbb{M}\) (e.g., if \(\mathbb{M}=\{0,1\}\) we obtain Boolean predicates). Then, given a functor \(F\), a predicate lifting transforms a predicate over \(Y\) (a map in \(\mathbb{M}^{Y}\)) to a predicate over \(FY\) (a map in \(\mathbb{M}^{FY}\)). It must be remarked that every complete MV-algebra is a quantale with respect to \(\oplus\) and the inverse of the natural order (see [13] or the explicit proof in Lemma C.1 in the appendix) and predicate liftings for arbitrary quantales have been studied, for instance, in [8].
Footnote 6: A quantale is a complete lattice with an associative operator that distributes over arbitrary joins.
First, we characterise which predicate liftings are non-expansive and second, derive their approximations. We will address both these issues in this section and then use predicate liftings to define behavioural metrics in Section 6.
The fact that there are some functors \(F\) for which \(FY\) is infinite, even if \(Y\) is finite, is the reason why the categories \(\mathbb{C}\) and \(\mathbb{A}\) also include infinite sets. Note, however, that the resulting fixpoint function will always be defined on finite sets, although intermediate functions might not conform to this.
**Definition 5.1**: **(predicate lifting).** _Given a functor \(F\colon\mathbf{Set}\rightarrow\mathbf{Set}\), a predicate lifting is a family of functions \(\tilde{F}_{Y}\colon\mathbb{M}^{Y}\rightarrow\mathbb{M}^{FY}\) (where \(Y\) is a set), such that for \(g\colon Z\to Y\), \(a\colon Y\rightarrow\mathbb{M}\) it holds that \((Fg)^{*}(\tilde{F}_{Y}(a))=\tilde{F}_{Z}(g^{*}(a))\)._
That is, predicate liftings must commute with reindexings. The index \(Y\) will be omitted if clear from the context. Such predicate liftings are in one-to-one
correspondence to so-called evaluation maps \(\mathit{ev}\colon F\mathbb{M}\to\mathbb{M}\). Given \(\mathit{ev}\), we define the corresponding lifting to be \(\tilde{F}(a)=\mathit{ev}\circ Fa\colon FY\to\mathbb{M}\), where \(a\colon Y\to\mathbb{M}\).
Footnote 7: This follows from the Yoneda lemma, see e.g. [16].
In the sequel we will only consider well-behaved liftings [4, 8], i.e., we require that (i) \(\tilde{F}\) is monotone; (ii) \(\tilde{F}(0_{Y})=0_{FY}\) where \(0\) is the constant \(0\)-function; (iii) \(\tilde{F}(a\oplus b)\sqsubseteq\tilde{F}(a)\oplus\tilde{F}(b)\) for \(a,b\colon Y\to\mathbb{M}\); (iv) \(F\) preserves weak pullbacks.
We aim to have not only monotone, but non-expansive liftings.
**Lemma 5.2**.: _Let \(\mathit{ev}\colon F\mathbb{M}\to\mathbb{M}\) be an evaluation map and assume that its corresponding lifting \(\tilde{F}\colon\mathbb{M}^{Y}\to\mathbb{M}^{FY}\) is well-behaved. Then \(\tilde{F}\) is non-expansive iff for all \(\delta\in\mathbb{M}\) it holds that \(\tilde{F}\delta_{Y}\sqsubseteq\delta_{FY}\)._
**Example 5.3**.: _We consider the (finitely supported) distribution functor \(\mathcal{D}\) that maps a set \(X\) to all maps \(p\colon X\to[0,1]\) that have finite support and satisfy \(\sum_{x\in X}p(x)=1\). (Here \(\mathbb{M}=[0,1]\).) One evaluation map is \(\mathit{ev}\colon\mathcal{D}[0,1]\to[0,1]\) with \(\mathit{ev}(p)=\sum_{r\in[0,1]}r\cdot p(r)\), where \(p\) is a distribution on \([0,1]\) (expectation). It is easy to see that \(\tilde{D}\) is well-behaved and non-expansive. The latter follows from \(\tilde{D}(\delta_{Y})=\delta_{\mathcal{D}Y}\)._
**Example 5.4**.: _Another example is given by the finite powerset functor \(\mathcal{P}_{f}\). We are given the evaluation map \(\mathit{ev}\colon\mathcal{P}_{f}\mathbb{M}\to\mathbb{M}\), defined for finite \(S\subseteq\mathbb{M}\) as \(\mathit{ev}(S)=\max S\), where \(\max\emptyset=0\). The lifting \(\tilde{\mathcal{P}}_{f}\) is well-behaved (see [4]) and non-expansive. To show the latter, observe that \(\tilde{\mathcal{P}}_{f}(\delta_{Y})=\delta_{\mathcal{P}_{f}(Y)\setminus\{ \emptyset\}}\sqsubseteq\delta_{\mathcal{P}_{f}(Y)}\)._
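In code, both liftings simply post-compose the evaluation map; a brief sketch (ours) for finitely supported data:

```python
def lift_distribution(a):
    """D-lifting of a: maps a finitely supported p to the expectation of a under p."""
    return lambda p: sum(prob * a[y] for y, prob in p.items())

def lift_powerset(a):
    """P_f-lifting of a: maps a finite set S to the maximum of a on S (0 on the empty set)."""
    return lambda S: max((a[y] for y in S), default=0)

# e.g. lift_distribution({'u': 0.3, 'v': 0.7})({'u': 0.5, 'v': 0.5}) == 0.5
```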
Non-expansive predicate liftings can be seen as functors \(\tilde{F}\colon\mathbb{C}^{*}\to\mathbb{C}^{*}\). To be more precise, \(\tilde{F}\) maps an object \(a\in\mathbb{M}^{Y}\) to \(\tilde{F}(a)\in\mathbb{M}^{FY}\) and an arrow \(g^{*}\colon a\dashrightarrow a\circ g\), where \(g\colon Z\to Y\), to \((Fg)^{*}\colon\tilde{F}a\dashrightarrow\tilde{F}(a\circ g)\).
**Proposition 5.5**.: _Let \(\tilde{F}\) be a (non-expansive) predicate lifting. There is a natural transformation \(\beta\colon\#E\Rightarrow\#E\tilde{F}\) between (lax) functors \(\#E,\#E\tilde{F}\colon\mathbb{C}^{*}\to\mathbb{A}\), whose components, for \(a\in\mathbb{M}^{Y}\), are \(\beta_{a}\colon a\dashrightarrow\tilde{F}(a)\) in \(\mathbb{A}\), defined by \(\beta_{a}(U)=\tilde{F}^{a}_{\#}(U)\) for \(U\subseteq[Y]^{a}\)._
_That is, the following diagrams commute for every \(g\colon Z\to Y\) (on the left the diagram with formal arrows, omitting the embedding functor \(E\), and on the right the functions with corresponding domains). Note that \(\#(g)=g^{-1}\)._
## 6 Wasserstein Lifting and Behavioural Metrics
In this section we show how the framework for fixpoint checking described before can be used to deal with coalgebraic behavioural metrics.
We build on [4], where an approach is proposed for canonically defining a behavioural pseudometric for coalgebras of a functor \(F\colon\mathbf{Set}\to\mathbf{Set}\), that is, for functions of the form \(\xi\colon X\to FX\) where \(X\) is a set. Intuitively \(\xi\) specifies a transition system whose branching type is given by \(F\). Given such a coalgebra \(\xi\), the idea is to endow \(X\) with a pseudo-metric \(d_{\xi}\colon X\times X\to\mathbb{M}\) defined as the least fixpoint of the map \(d\mapsto d^{F}\circ(\xi\times\xi)\) where \(\_^{F}\) lifts a metric \(d\colon X\times X\to\mathbb{M}\) to a metric \(d^{F}\colon FX\times FX\to\mathbb{M}\). Here we focus on the so-called Wasserstein lifting and show how approximations of the functions involved in the definition of the pseudometric can be determined.
### Wasserstein Lifting
Hereafter, \(F\) denotes a fixed endofunctor on \(\mathbf{Set}\) and \(\xi\colon X\to FX\) is a coalgebra over a finite set \(X\). We also fix a well-behaved non-expansive predicate lifting \(\tilde{F}\).
In order to define a Wasserstein lifting, a first ingredient is that of a coupling. Given \(t_{1},t_{2}\in FX\) a _coupling_ of \(t_{1}\) and \(t_{2}\) is an element \(t\in F(X\times X)\), such that \(F\pi_{i}(t)=t_{i}\) for \(i=1,2\), where \(\pi_{i}\colon X\times X\to X\) are the projections. We write \(\Gamma(t_{1},t_{2})\) for the set of all such couplings.
Definition 6.1 (Wasserstein lifting): The Wasserstein lifting \(\_^{F}\colon\mathbb{M}^{X\times X}\to\mathbb{M}^{FX\times FX}\) is defined for \(d\colon X\times X\to\mathbb{M}\) and \(t_{1},t_{2}\in FX\) as
\[d^{F}(t_{1},t_{2})=\inf_{t\in\Gamma(t_{1},t_{2})}\tilde{F}d(t)\]
For more intuition on the Wasserstein lifting see Section 2. Note that a coupling corresponds to a transport plan. It can be shown that for well-behaved \(\tilde{F}\), the lifting preserves pseudometrics (see [4, 8]).
In order to make the theory for fixpoint checks effective we will need to restrict to a subclass of liftings.
Definition 6.2 (finitely coupled lifting): We call a lifting \(\tilde{F}\) _finitely coupled_ if for all \(X\) and \(t_{1},t_{2}\in FX\) there exists a finite \(\Gamma^{\prime}(t_{1},t_{2})\subseteq\Gamma(t_{1},t_{2})\), which can be computed given \(t_{1},t_{2}\), such that \(\inf_{t\in\Gamma(t_{1},t_{2})}\tilde{F}d(t)=\min_{t\in\Gamma^{\prime}(t_{1},t _{2})}\tilde{F}d(t)\).
Observe that whenever the infimum above is a minimum, there is trivially a finite \(\Gamma^{\prime}(t_{1},t_{2})\). We however ask that there is an effective way to determine it.
The lifting in Example 5.4 (for the finite powerset functor) is obviously finitely coupled. For the lifting \(\tilde{\mathcal{D}}\) from Example 5.3 we note that the set of couplings \(t\in\Gamma(t_{1},t_{2})\) forms a polytope with a finite number of vertices, which can be effectively computed and \(\Gamma^{\prime}(t_{1},t_{2})\) consists of these vertices. The infimum (minimum) is obtained at one of these vertices [1, Remark 4.5].
### A Compositional Representation
As mentioned above, for a coalgebra \(\xi\colon X\to FX\) the behavioural pseudometric \(d:X\times X\to\mathbb{M}\) arises as the least fixpoint of \(\mathcal{W}=(\xi\times\xi)^{*}\circ(\_^{F})\) where \((\_^{F})\) is the Wasserstein lifting.
**Example 6.3**.: _We can recover the motivating example from Section 2 by setting \(\mathbb{M}=[0,1]\) and using the functor \(FX=\Lambda\times\mathcal{D}(X)\), where \(\Lambda\) is a fixed set of labels. We observe that couplings of \((a_{1},p_{1}),(a_{2},p_{2})\in FX\) only exist if \(a_{1}=a_{2}\) and - if they do not exist - the Wasserstein distance is the empty infimum, hence \(1\). If \(a_{1}=a_{2}\), couplings correspond to the usual Wasserstein couplings of \(p_{1},p_{2}\) and the least fixpoint of \(\mathcal{W}\) equals the behavioural metrics, as explained in Section 2._
Note that we do not use a discount factor to ensure contractivity and hence the fixpoint might not be unique. Thus, given some fixpoint \(d\), the \(d\)-approximation \(\mathcal{W}^{d}_{\#}\) can be used for checking whether \(d=\mu\mathcal{W}\).
In the rest of the section we show how \(\mathcal{W}\) can be decomposed into basic components and study the corresponding approximation.
The Wasserstein lifting can be decomposed as \(\_^{F}=\min_{u}\circ\tilde{F}\) where \(\tilde{F}:\mathbb{M}^{X\times X}\to\mathbb{M}^{F(X\times X)}\) is the predicate lifting - which we require to be non-expansive (cf. Lemma 5.2) - and \(\min_{u}\) is the minimum over the coupling function \(u\colon F(X\times X)\to FX\times FX\) defined as \(u(t)=(F\pi_{1}(t),F\pi_{2}(t))\), which means that \(\min_{u}\colon\mathbb{M}^{F(X\times X)}\to\mathbb{M}^{FX\times FX}\) (see Table 1).
We can now derive the corresponding \(d\)-approximation.
**Proposition 6.4**.: _Assume that \(\tilde{F}\) is finitely coupled. Let \(Y=X\times X\), where \(X\) is finite. For \(d\in\mathbb{M}^{Y}\) and \(Y^{\prime}\subseteq[Y]^{d}\) we have_
\[\mathcal{W}^{d}_{\#}(Y^{\prime})=\{(x,y)\in[Y]^{d}\mid\exists t \in\tilde{F}^{d}_{\#}(Y^{\prime}),u(t)=(\xi(x),\xi(y)),\] \[\tilde{F}d(t)=\min_{t^{\prime}\in\Gamma(\xi(x),\xi(y))}\tilde{F} d(t^{\prime})\}.\]
Intuitively the statement of Proposition 6.4 means that the minimum must be reached in a coupling based on \(Y^{\prime}\).
For using the above result we next characterize \(\tilde{F}^{d}_{\#}(Y^{\prime})\). We rely on the fact that \(d\) can be decomposed into \(d=\pi_{1}\circ\bar{d}\), where the projection \(\pi_{1}\) is independent of \(d\) and \(\bar{d}\) is dependent on \(Y^{\prime}\), and exploit the natural transformation in Proposition 5.5.
**Proposition 6.5**.: _We fix \(Y^{\prime}\subseteq Y\). Let \(\pi_{1}\colon\mathbb{M}\times\{0,1\}\to\mathbb{M}\) be the projection to the first component and \(\bar{d}\colon Y\to\mathbb{M}\times\{0,1\}\) with \(\bar{d}(y)=(d(y),\chi_{Y^{\prime}}(y))\) where \(\chi_{Y^{\prime}}\colon Y\to\{0,1\}\) is the characteristic function of \(Y^{\prime}\). Then \(\tilde{F}^{d}_{\#}(Y^{\prime})=(F\bar{d})^{-1}(\tilde{F}^{\pi_{1}}_{\#}(( \mathbb{M}\backslash\{0\})\times\{1\}))\)._
Here \(\tilde{F}^{\pi_{1}}_{\#}((\mathbb{M}\backslash\{0\})\times\{1\})\subseteq F(\mathbb{M}\times\{0,1\})\) is independent of \(d\) and has to be determined only once for every predicate lifting \(\tilde{F}\). We will show what this set looks like for our example functors.
**Lemma 6.6**.: _Consider the lifting of the distribution functor presented in Example 5.3 and let \(Z=[0,1]\times\{0,1\}\). Then we have_
\[\tilde{D}_{\#}^{\pi_{1}}((0,1]\times\{1\})=\{p\in\mathcal{D}Z\mid\text{supp}(p) \subseteq(0,1]\times\{1\}\}.\]
This means intuitively that a decrease or "slack" can exactly be propagated for elements whose probabilities are strictly larger than \(0\).
**Lemma 6.7**.: _Consider the lifting of the finite powerset functor from Example 5.4 and let \(Z=\mathbb{M}\times\{0,1\}\). Then we have_
\[(\tilde{\mathcal{P}}_{f})_{\#}^{\pi_{1}}((\mathbb{M}\backslash\{0\})\times\{1 \})=\{S\in[\mathcal{P}_{f}Z]^{\tilde{\mathcal{P}}_{f}\pi_{1}}\mid\exists(s,1) \in S\ \forall(s^{\prime},0)\in S:\ s\sqsupset s^{\prime}\}.\]
The idea is that the maximum of a set \(S\) decreases if we decrease at least one of its values and all values which are not decreased are strictly smaller.
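Combining Proposition 6.4 with Lemma 6.6, for the plain distribution functor a pair \((x,y)\) lies in \(\mathcal{W}^{d}_{\#}(Y^{\prime})\) exactly if some optimal coupling of \(\xi(x),\xi(y)\) is supported on \(Y^{\prime}\). Under this reading, membership can be decided with two linear programs per pair; the sketch below is ours (for labelled systems as in Example 6.3 one would additionally discard pairs with distinct labels):

```python
from scipy.optimize import linprog

def coupling_optimum(d, p1, p2, allowed=None):
    """Minimum of sum d(x1,x2)*t(x1,x2) over couplings t of (p1, p2);
    if `allowed` is given, t may only put mass on those pairs.
    Returns None when no such coupling exists."""
    pairs = [(x1, x2) for x1 in p1 for x2 in p2
             if allowed is None or (x1, x2) in allowed]
    if not pairs:
        return None
    cost = [d[q] for q in pairs]
    A_eq = [[1.0 if q[0] == x1 else 0.0 for q in pairs] for x1 in p1] + \
           [[1.0 if q[1] == x2 else 0.0 for q in pairs] for x2 in p2]
    b_eq = [p1[x1] for x1 in p1] + [p2[x2] for x2 in p2]
    res = linprog(cost, A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
    return res.fun if res.success else None

def wasserstein_approx(xi, d, Yp, eps=1e-9):
    """Sketch of W^d_#(Y') for the distribution functor: keep (x, y) in [Y]^d
    iff an optimal coupling of (xi[x], xi[y]) can be supported inside Y'."""
    result = set()
    for (x, y), v in d.items():
        if v == 0:                     # only pairs in [Y]^d can propagate
            continue
        best = coupling_optimum(d, xi[x], xi[y])
        restricted = coupling_optimum(d, xi[x], xi[y], allowed=Yp)
        if restricted is not None and restricted <= best + eps:
            result.add((x, y))
    return result
```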
_Remark 8_.: Note that \(\#\) is a functor on the subcategory \(\mathbb{C}_{f}\), while some liftings (e.g., the one for the distribution functor) work with infinite sets. In this case, given a finite set \(Y\), we actually focus on a finite \(D\subseteq FY\). (This is possible since we consider coalgebras with finite state space and assume that all liftings are finitely coupled.) Then we consider \(\tilde{F}_{Y}\colon\mathbb{M}^{Y}\to\mathbb{M}^{FY}\) and \(e\colon D\hookrightarrow FY\) (the embedding of \(D\) into \(FY\)). We set \(f=e^{*}\circ\tilde{F}_{Y}\). Given \(a\colon Y\to\mathbb{M}\), we view \(f\) as an arrow \(a\dashrightarrow\tilde{F}(a)\circ e\) in \(\mathbb{C}\). The approximation in this subsection adapts to the "reduced" lifting, which can be seen as follows (cf. Lemma B.1 in the appendix, which shows that \(\#\) preserves composition if one of the arrows is a reindexing):
\[f_{\#}^{a}=\#(f)=\#(e^{*}\circ\tilde{F}_{Y})=\#(e^{*})\circ\#(\tilde{F}_{Y})= e^{-1}\circ\#(\tilde{F}_{Y})=\#(\tilde{F}_{Y})\cap D.\]
## 7 GS-Monoidality
We will now show that the categories \(\mathbb{C}_{f}\) and \(\mathbb{A}_{f}\) can be turned into gs-monoidal categories. This will give us a way to assemble functions and their approximations compositionally and this method will form the basis for the tool. We first define gs-monoidal categories in detail:
**Definition 7.1**.: _A strict gs-monoidal category is a strict symmetric monoidal category, where \(\otimes\) denotes the tensor, \(e\) its unit, and symmetries are given by \(\rho_{a,b}\colon a\otimes b\to b\otimes a\). For every object \(a\) there exist morphisms \(\nabla_{a}\colon a\to a\otimes a\) (duplicator) and \(!_{a}\colon a\to e\) (discharger) satisfying the axioms given below. (See also the visualizations as string diagrams in Figure 3.)_
1. _functoriality of tensor:_ * \((g\otimes g^{\prime})\circ(f\otimes f^{\prime})=(g\circ f)\otimes(g^{\prime} \circ f^{\prime})\)__ * \(\text{id}_{a\otimes b}=\text{id}_{a}\otimes\text{id}_{b}\)__
2. _monoidality:_ * \((f\otimes g)\otimes h=f\otimes(g\otimes h)\)__ * \(f\otimes\text{id}_{e}=f=\text{id}_{e}\otimes f\)__
3. _naturality:_ * \((f^{\prime}\otimes f)\circ\rho_{a,a^{\prime}}=\rho_{b,b^{\prime}}\circ(f\otimes f ^{\prime})\)__
4. _symmetry:_ * \(\rho_{e,e}=\mathit{id}_{e}\)__ * \(\rho_{b,a}\circ\rho_{a,b}=\mathit{id}_{a\otimes b}\)__ * \(\rho_{a\otimes b,c}=(\rho_{a,c}\otimes\mathit{id}_{b})\circ(\mathit{id}_{a} \otimes\rho_{b,c})\)__
5. _gs-monoidality:_ * \(!_{e}=\nabla_{e}=\mathit{id}_{e}\)__ * _coherence axioms:_ * \((\mathit{id}_{a}\otimes\nabla_{a})\circ\nabla_{a}=(\nabla_{a}\otimes\mathit{ id}_{a})\circ\nabla_{a}\)__ * \(\mathit{id}_{a}=(\mathit{id}_{a}\otimes!_{a})\circ\nabla_{a}\)__ * \(\rho_{a,a}\circ\nabla_{a}=\nabla_{a}\)__ * _monoidality axioms:_ * \(!_{a\otimes b}=!_{a}\otimes!_{b}\)__ * \((\mathit{id}_{a}\otimes\rho_{a,b}\otimes\mathit{id}_{b})\circ(\nabla_{a} \otimes\nabla_{b})=\nabla_{a\otimes b}\) _(or, equivalently,_ \(\nabla_{a}\otimes\nabla_{b}=(\mathit{id}_{a}\otimes\rho_{b,a}\otimes\mathit{ id}_{b})\circ\nabla_{a\otimes b}\)_)_
_A functor \(\#\colon\mathbb{C}\to\mathbb{D}\) is gs-monoidal if the following holds:_
1. \(\mathbb{C}\) _and_ \(\mathbb{D}\) _are gs-monoidal categories_
2. _monoidality:_ * \(\#(e)=e^{\prime}\)__ * \(\#(a\otimes b)=\#(a)\otimes^{\prime}\#(b)\)__
3. _symmetry:_ * \(\#(\rho_{a,b})=\rho^{\prime}_{\#(a),\#(b)}\)__
4. _gs-monoidality:_ * \(\#(!_{a})=!^{\prime}_{\#(a)}\)__ * \(\#(\nabla_{a})=\nabla^{\prime}_{\#(a)}\)__
_where the primed operators are from the category \(\mathbb{D}\), the others from \(\mathbb{C}\)._
In fact, in order to obtain strict gs-monoidal categories with disjoint union, we will work with the skeleton categories where every finite set \(Y\) is represented by an isomorphic copy \(\{1,\ldots,|Y|\}\). This enables us to make disjoint union strict, i.e., associativity holds on the nose and not just up to isomorphism. In particular for finite sets \(Y,Z\), we define disjoint union as \(Y+Z=\{1,\ldots,|Y|,|Y|+1,\ldots,|Y|+|Z|\}\).
**Theorem 7.2**.: _The category \(\mathbb{C}_{f}\) with the following operators is gs-monoidal:_
1. _The tensor_ \(\otimes\) _on objects_ \(a\in\mathbb{M}^{Y}\) _and_ \(b\in\mathbb{M}^{Z}\) _is defined as_ \[a\otimes b=a+b\in\mathbb{M}^{Y+Z}\] _where for_ \(k\in Y+Z\) _we have_ \((a+b)(k)=a(k)\) _if_ \(k\leq|Y|\) _and_ \((a+b)(k)=b(k-|Y|)\) _if_ \(|Y|<k\leq|Y|+|Z|\)
_On arrows_ \(f\colon a\dashrightarrow b\) _and_ \(g\colon a^{\prime}\dashrightarrow b^{\prime}\) _(with_ \(a^{\prime}\in\mathbb{M}^{Y^{\prime}}\)_,_ \(b^{\prime}\in\mathbb{M}^{Z^{\prime}}\)_) tensor is given by_
\[f\otimes g\colon\mathbb{M}^{Y+Y^{\prime}}\to\mathbb{M}^{Z+Z^{\prime}},\quad(f\otimes g)(u)=f(\vec{u}_{Y})+g(\vec{u}_{Y^{\prime}})\]
_for_ \(u\in\mathbb{M}^{Y+Y^{\prime}}\) _where_ \(\vec{u}_{Y}\in\mathbb{M}^{Y}\) _and_ \(\vec{u}_{Y^{\prime}}\in\mathbb{M}^{Y^{\prime}}\) _are defined as_ \(\vec{u}_{Y}(k)=u(k)\) _(_\(1\leq k\leq|Y|\)_) and_ \(\vec{u}_{Y^{\prime}}(k)=u(|Y|+k)\) _(_\(1\leq k\leq|Y^{\prime}|\)_)._
2. _The symmetry_ \(\rho_{a,b}\colon a\otimes b\dashrightarrow b\otimes a\) _for_ \(a\in\mathbb{M}^{Y}\)_,_ \(b\in\mathbb{M}^{Z}\) _is defined for_ \(u\in\mathbb{M}^{Y+Z}\) _as_ \[\rho_{a,b}(u)=\vec{u}_{Z}+\vec{u}_{Y},\] _where_ \(\vec{u}_{Y}(k)=u(k)\) _(_\(1\leq k\leq|Y|\)_) and_ \(\vec{u}_{Z}(k)=u(|Y|+k)\) _(_\(1\leq k\leq|Z|\)_)._
3. _The unit_ \(e\) _is the unique mapping_ \(e\colon\emptyset\to\mathbb{M}\)_._
4. _The duplicator_ \(\nabla_{a}\colon a\dashrightarrow a\otimes a\) _for_ \(a\in\mathbb{M}^{Y}\) _is defined for_ \(u\in\mathbb{M}^{Y}\) _as_ \[\nabla_{a}(u)=u+u.\]
5. _The discharger_ \(\mathord{!}_{a}\colon a\dashrightarrow e\) _for_ \(a\in\mathbb{M}^{Y}\) _is defined for_ \(u\in\mathbb{M}^{Y}\) _as_ \(\mathord{!}_{a}(u)=e\)_._
We now turn to the abstract category \(\mathbb{A}_{f}\). Note that here functions have as parameters sets of the form \(U\subseteq[Y]^{a}\subseteq Y\). Hence, (the cardinality of) \(Y\) can not be determined directly from \(U\) and we need extra care with the tensor.
Figure 3: String diagrams of the axioms satisfied by gs-monoidal categories.
**Theorem 7.3**.: _The category \(\mathbb{A}_{f}\) with the following operators is gs-monoidal:_
1. _The tensor_ \(\otimes\) _on objects_ \(a\in\mathbb{M}^{Y}\) _and_ \(b\in\mathbb{M}^{Z}\) _is again defined as_ \(a\otimes b=a+b\)_. On arrows_ \(f\colon a\dashrightarrow b\) _and_ \(g\colon a^{\prime}\dashrightarrow b^{\prime}\) _(where_ \(a^{\prime}\in\mathbb{M}^{Y^{\prime}}\)_,_ \(b^{\prime}\in\mathbb{M}^{Z^{\prime}}\) _and_ \(f\colon\mathcal{P}([Y]^{a})\to\mathcal{P}([Z]^{b})\)_,_ \(g\colon\mathcal{P}([Y^{\prime}]^{a^{\prime}})\to\mathcal{P}([Z^{\prime}]^{b^{\prime}})\) _are the underlying functions), the tensor is given by_ \[f\otimes g\colon\mathcal{P}([Y+Y^{\prime}]^{a+a^{\prime}})\to \mathcal{P}([Z+Z^{\prime}]^{b+b^{\prime}}),\quad(f\otimes g)(U)=f(\vec{U}_{Y}) \cup_{Z}g(\vec{U}_{Y^{\prime}})\] _where_ \(\vec{U}_{Y}=U\cap\{1,\dots,|Y|\}\) _and_ \(\vec{U}_{Y^{\prime}}=\{k\mid|Y|+k\in U\}\)_. Furthermore:_ \[U\cup_{Y}V=U\cup\{|Y|+k\mid k\in V\}\quad\text{(where $U\subseteq Y$)}\]
2. _The symmetry_ \(\rho_{a,b}\colon a\otimes b\dashrightarrow b\otimes a\) _for_ \(a\in\mathbb{M}^{Y}\)_,_ \(b\in\mathbb{M}^{Z}\) _is defined for_ \(U\subseteq[Y+Z]^{a+b}\) _as_ \[\rho_{a,b}(U)=\vec{U}_{Z}\cup_{Z}\vec{U}_{Y}\subseteq[Z+Y]^{b+a},\] _where_ \(\vec{U}_{Y}=U\cap\{1,\dots,|Y|\}\) _and_ \(\vec{U}_{Z}=\{k\mid|Y|+k\in U\}\)_._
3. _The unit_ \(e\) _is again the unique mapping_ \(e\colon\emptyset\to\mathbb{M}\)_._
4. _The duplicator_ \(\nabla_{a}\colon a\dashrightarrow a\otimes a\) _for_ \(a\in\mathbb{M}^{Y}\) _is defined for_ \(U\subseteq[Y]^{a}\) _as_ \[\nabla_{a}(U)=U\cup_{Y}U\subseteq[Y+Y]^{a+a}.\]
5. _The discharger_ \(\mathord{!}_{a}\colon a\dashrightarrow e\) _for_ \(a\in\mathbb{M}^{Y}\) _is defined for_ \(U\subseteq[Y]^{a}\) _as_ \(\mathord{!}_{a}(U)=\emptyset\)_._
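On the abstract side all of these operators are plain manipulations of index sets. A small sketch (ours), where an arrow is a Python function on sets of indices and the sizes \(|Y|\), \(|Z|\) are passed explicitly because they cannot be read off \(U\):

```python
def shift(V, n):                 # {n + k | k in V}
    return {n + k for k in V}

def union_at(U, V, nZ):          # U ∪_Z V, for U ⊆ {1, ..., nZ}
    return set(U) | shift(V, nZ)

def tensor(f, g, nY, nZ):        # (f ⊗ g)(U): apply f to the Y-block, g to the Y'-block
    return lambda U: union_at(f({k for k in U if k <= nY}),
                              g({k - nY for k in U if k > nY}), nZ)

def symmetry(nY, nZ):            # ρ_{a,b}: swap the Y- and Z-blocks of U
    return lambda U: union_at({k - nY for k in U if k > nY},
                              {k for k in U if k <= nY}, nZ)

def duplicator(nY):              # ∇_a(U) = U ∪_Y U
    return lambda U: union_at(U, U, nY)

def discharger():                # !_a(U) = ∅
    return lambda U: set()
```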
Finally, the approximation \(\#\) is indeed gs-monoidal, i.e., it preserves all the additional structure (tensor, symmetry, unit, duplicator and discharger).
**Theorem 7.4**.: \(\#\colon\mathbb{C}_{f}\to\mathbb{A}_{f}\) _is a gs-monoidal functor._
## 8 UDEfix: A Tool for Fixpoint Checks
We exploit gs-monoidality as discussed before and present a tool, called UDEfix, where the user can compose his or her very own function \(f\colon\mathbb{M}^{Y}\to\mathbb{M}^{Y}\) as a sort of circuit. Exploiting the fact that the functor \(\#\) is gs-monoidal, this circuit is then transformed automatically and in a compositional way into the corresponding abstraction \(f^{a}_{\#}\), for some given \(a\in\mathbb{M}^{Y}\). By computing the greatest fixpoint of \(f^{a}_{\#}\) and checking for emptiness, UDEfix can check whether \(a=\mu f\).
In fact, UDEfix can handle all functions presented in Section 2, where for \(\min_{u}\), \(\max_{u}\) we also allow \(u\) to be a relation, instead of a function. Moreover, addition and subtraction by a fixed constant (both non-expansive functions) can be handled (see [6] for details). In addition to fixpoint checks, it is possible to perform (non-complete) checks whether a given post-fixpoint \(a\) is below the least fixpoint \(\mu f\). The dual checks (for greatest fixpoint and pre-fixpoints) are implemented as well.
Building the desired function \(f\colon\mathbb{M}^{Y}\to\mathbb{M}^{Y}\) requires three steps:
* Choosing the MV-algebra \(\mathbb{M}\). Currently the MV-chains \([0,1]\) and \(\{0,\dots,k\}\) (for arbitrary \(k\)) are supported.
* Creating the required basic functions by specifying their parameters.
* Assembling \(f\) from these basic functions.
\(\mathsf{UDEfix}\) is a Windows tool created in Python, which can be obtained from [https://github.com/TimoMatt/UDEfix](https://github.com/TimoMatt/UDEfix). The GUI of \(\mathsf{UDEfix}\) is separated into three areas: Content area, Building area and Basic-Functions area. Under File the user can save/load contents and set the MV-algebra in Settings. Functions built in the Building area can be saved and loaded.
Basic-Functions area: The Basic-Functions area contains the basic functions, encompassing those listed in Table 1 and additional ones. Via drag-and-drop (or right-click) these basic functions can be added to the Building area to create a Function box. Each such box requires three (in the case of \(\tilde{D}\) two) \(\mathsf{Contents}\): the Input set, the Output set and the required parameters. These \(\mathsf{Contents}\) are to be created in the \(\mathsf{Content}\) area. Additionally, the Basic-Functions area contains the auxiliary function \(\mathsf{Testing}\) which we will discuss in the next paragraph.
Building area: The user can connect the created \(\mathsf{Function}\) boxes to obtain the function of interest. Composing functions is as simple as connecting two \(\mathsf{Function}\) boxes in the correct order and disjoint union is achieved by connecting two boxes to the same box. We note that \(\mathsf{Input}\) and \(\mathsf{Output}\) sets of connected \(\mathsf{Function}\) boxes need to match, otherwise the function is not built correctly. Revisiting the example in Figure 1, we display in Figure 4 how this function can be assembled.
The special box \(\mathsf{Testing}\) is always required at the end. Here, the user can enter some mapping \(a\colon Y\to\mathbb{M}\), test if \(a\) is a fixpoint/pre-fixpoint/post-fixpoint of the built function \(f\) and afterwards compute the greatest fixpoint of the approximation \((\nu f^{a}_{\#}\) if we want to check whether \(\mu f=a)\). If the result is not the empty set \((\nu f^{a}_{\#}\neq\emptyset)\), one can compute a suitable value for decreasing \(a\), needed for iterating to the least fixpoint from above (respectively increasing \(a\) for iterating to the greatest fixpoint from below). There is additional support for comparison with pre- and post-fixpoints.
In the left-hand system in Figure 1, the function \(d\colon Y\to[0,1]\) with \(d(3,3)=0\), \(d(1,1)=\nicefrac{{1}}{{2}}\), \(d(1,2)=d(2,1)=d(2,2)=\nicefrac{{2}}{{3}}\) and \(1\) for all other pairs is
Figure 4: Assembling the function \(f\) from Section 2.
a fixpoint of \(f\) (\(d\) is not a pseudometric). By clicking \(\mathsf{Compute}\) in the \(\mathsf{Testing}\)-box, \(\mathsf{UDEfix}\) displays that \(d\) is a fixpoint and tells us that \(d\) is in fact not the least and not the greatest fixpoint. It also computes the greatest fixpoints of the approximations step by step and displays the results to the user.
Content area: Here the user can create sets, mappings and relations which are used to specify the basic functions. Creating a set is done by entering a name for the new set and clicking on the plus ("+"). The user can create a variety of different types of sets, for example the basic set \(X=\{1,2,3,4\}\) or the set \(D=\{p_{1},p_{2},p_{3},p_{4}\}\) which is a set of mappings resp. probability distributions.
Once \(\mathsf{Input}\) and \(\mathsf{Output}\) sets are created, we can define the required parameters (cf. Table 1). Here, the created sets can be chosen as domain and co-domain. Relations can be handled in a similar fashion: Given the two sets one wants to relate, creating a relation can be easily achieved by checking some boxes. Additionally, the user has access to some useful in-built relations: the "is-element-of" relation and projections to the \(i\)-th component.
To ease use, clicking on the "+" in a \(\mathsf{Function}\) box creates a new matching content with the chosen \(\mathsf{Input}\) and \(\mathsf{Output}\) sets. The additional parameters (cf. Table 1) have domains and co-domains which need to be created or are the chosen MV-algebra. The \(\mathsf{Testing}\) function \(d\) is a mapping as well.
See Figure 5 for examples of how to create the contents \(Y\) (set), \(d\) (distance function) and \(\rho\) (relation).
Examples: There are pre-defined functions implementing examples that are shipped with the tool. These concern case studies on termination probability, bisimilarity, simple stochastic games, energy games, behavioural metrics and Rabin automata. See [7, 6] for more details.
## 9 Conclusion, Related and Future Work
We have shown how our framework from [7] can be cast into a gs-monoidal setting, justifying the development of the tool \(\mathsf{UDEfix}\) for a compositional view
Figure 5: Contents: Set \(Y\), Mapping \(d\), Relation \(\rho\).
on fixpoint checks. In addition we studied properties of the gs-monoidal functor \(\#\), mapping from the concrete to the abstract universe and giving us a general procedure for approximating predicate liftings.
Related work: This paper is based on fixpoint theory, coalgebras, as well as on the theory of monoidal categories. Monoidal categories [16] are categories equipped with a tensor. It has long been realized that monoidal categories can have additional structure such as braiding or symmetries. Here we base our work on so-called gs-monoidal categories [11, 15], called s-monoidal in [14]. These are symmetric monoidal categories, equipped with a discharger and a duplicator. Note that "gs" originally stood for "graph substitution" and such categories were first used for modelling term graph rewriting.
We view gs-monoidal categories as a means to compositionally build monotone non-expansive functions on complete lattices, for which we are interested in the (least or greatest) fixpoint. Such fixpoints are ubiquitous in computer science; here we are in particular interested in applications in concurrency theory and games, such as bisimilarity [20], behavioural metrics [12, 23, 9, 4] and simple stochastic games [10]. In recent work we have considered strategy iteration procedures inspired by games for solving fixpoint equations [6].
Fixpoint equations also arise in the context of coalgebra [19], a general framework for investigating behavioural equivalences for systems that are parameterized - via a functor - over their branching type (labelled, non-deterministic, probabilistic, etc.). Here in particular we are concerned with coalgebraic behavioural metrics [4], based on a generalization of the Wasserstein or Kantorovich lifting [24]. Such liftings require the notion of predicate liftings, well-known in coalgebraic modal logics [21], lifted to a quantitative setting [8].
Future work: One important question is still open: we defined a lax functor \(\#\), relating the concrete category \(\mathbb{C}\) of functions of type \(\mathbb{M}^{Y}\rightarrow\mathbb{M}^{Z}\) - where \(Y,Z\) might be infinite - to their approximations, living in \(\mathbb{A}\). It is unclear whether \(\#\) is a proper functor, i.e., preserves composition. For finite sets functoriality derives from a non-trivial result in [7] and it is unclear whether it can be extended to the infinite case. If so, this would be a valuable step to extend the theory.
In this paper we illustrated the approximation for predicate liftings via the powerset and the distribution functor. It would be interesting to study more functors and hence broaden the applicability to other types of transition systems.
Concerning UDEfix, we plan to extend the tool to compute fixpoints, either via Kleene iteration or strategy iteration (strategy iteration from above and below), as detailed in [6]. Furthermore, for convenience, it would be useful to have support for generating fixpoint functions directly from a given coalgebra or transition system.
|
2302.07888 | High-dimensional Encoding in the Round-Robin Differential-Phase-Shift
Protocol | In quantum key distribution (QKD), protocols are tailored to adopt desirable
experimental attributes, including high key rates, operation in high noise
levels, and practical security considerations. The round-robin differential
phase shift protocol (RRDPS), falling in the family of differential phase shift
protocols, was introduced to remove restrictions on the security analysis, such
as the requirement to monitor signal disturbances, improving its practicality
in implementations. While the RRDPS protocol requires the encoding of single
photons in high-dimensional quantum states, at most, only one bit of secret key
is distributed per sifted photon. However, another family of protocols, namely
high-dimensional (HD) QKD, enlarges the encoding alphabet, allowing single
photons to carry more than one bit of secret key each. The high-dimensional
BB84 protocol exemplifies the potential benefits of such an encoding scheme,
such as larger key rates and higher noise tolerance. Here, we devise an
approach to extend the RRDPS QKD to an arbitrarily large encoding alphabet and
explore the security consequences. We demonstrate our new framework with a
proof-of-concept experiment and show that it can adapt to various experimental
conditions by optimizing the protocol parameters. Our approach offers insight
into bridging the gap between seemingly incompatible quantum communication
schemes by leveraging the unique approaches to information encoding of both HD
and DPS QKD. | Mikka Stasiuk, Felix Hufnagel, Xiaoqin Gao, Aaron Z. Goldberg, Frédéric Bouchard, Ebrahim Karimi, Khabat Heshami | 2023-02-15T19:00:01Z | http://arxiv.org/abs/2302.07888v2 | # High-dimensional Encoding in the Round-Robin Differential-Phase-Shift Protocol
###### Abstract
In quantum key distribution (QKD), protocols are tailored to adopt desirable experimental attributes, including high key rates, operation in high noise levels, and practical security considerations. The round-robin differential phase shift protocol (RRDPS), falling in the family of differential phase shift protocols, was introduced to remove restrictions on the security analysis, such as the requirement to monitor signal disturbances. While the RRDPS protocol requires the encoding of single photons in high-dimensional quantum states, at most, only one bit of secret key is distributed per sifted photon. However, another family of protocols, namely high-dimensional (HD) QKD, enlarges the encoding alphabet, allowing single photons to carry more than one bit of secret key each. The high-dimensional BB84 protocol exemplifies the potential benefits of such an encoding scheme, such as larger key rates and higher noise tolerance. Here, we devise an approach to extend the RRDPS QKD to an arbitrarily large encoding alphabet and explore the security consequences. We demonstrate our new framework with a proof-of-concept experiment and show that it can adapt to various experimental conditions by optimizing the protocol parameters. Our approach offers insight into bridging the gap between seemingly incompatible quantum communication schemes by leveraging the unique approaches to information encoding of both HD and DPS QKD.
Footnote †: Corresponding author: [email protected]
## I Introduction
The advent of quantum key distribution demonstrated the ability to use quantum physics in public key cryptography and established one of the most studied aspects of quantum technologies. Bennett and Brassard, with the BB84 protocol, showed that encoding in two mutually unbiased bases (MUB) and randomly alternating between these encodings enables the generation of a secure random key between two parties [1]. Any intervention to gain access to the random key results in detectable noise at the receiver and allows for the removal of insecure key generation attempts [2; 3]. Monitoring noise has become a key element in developing a variety of quantum key distribution approaches [4; 5; 6; 7; 8]. Recently, Sasaki _et al._[9] showed a different approach to encoding random bits in a quantum state for quantum key distribution which did not require monitoring the signal disturbance. Utilizing a quantum state in a large Hilbert space of dimension \(L\), mapping random phases (therefore random phase differences) between basis vectors of the state, and a randomized interferometric measurement at the receiver lead to a fundamental bound on the mutual information between the sender and a potential eavesdropper. This introduced an entirely different approach to enforcing security in quantum key distribution [10; 11; 12; 13]. Subsequently, several experiments used temporal [14; 15; 16; 17; 18; 19; 20] and spatial [21] structures of photons to implement this protocol, also known as the Round-Robin Differential Phase Shift (RRDPS) protocol.
Advances in the preparation and measurement of high-dimensional quantum states of photons using the temporal and spatial degrees of freedom motivated implementations of quantum key distribution protocols where photons carried more than one bit each. This proved beneficial at increasing secret key rates and noise tolerance [22; 23; 24; 25; 26; 27; 28; 29; 30]. In this work, we explore the possibility of increasing the encoding space of the secret keys in the RRDPS setting. We show that a high-dimensional quantum key distribution with and without monitoring signal disturbance is possible. Our approach both exploits the previously untapped potential of the original RRDPS encoding scheme and offers an avenue to explore the connection between the RRDPS and BB84 protocols. We structure the paper as follows. We first introduce the high-dimensional RRDPS (HD-RRDPS) protocol. A simple sketch of the security analysis is presented. We then carry out a proof-of-principle experiment to investigate the performance and limitations of our scheme in a practical setting. Our experiment exploits the orbital angular momentum (OAM) degree of freedom of single photons, which has been demonstrated as an invaluable testbed to investigate quantum communication schemes, to encode the random key on the single photons. Finally, we discuss adaptations to protocol parameters that enable better performance of the HD-RRDPS protocol for varying amounts of channel loss and characterize the circumstances under which the fundamental gap between the RRDPS and BB84 protocols can be closed.
## II Protocol
We introduce the HD-RRDPS protocol as an extension of the established RRDPS protocol to a larger alphabet. In the original protocol, a superposition of pulses with randomly assigned phases of 0 or \(\pi\) resulted in constructive or destructive interference at the measurement stage. The natural extension of this encoding scheme is achieved by considering a MUB in larger dimensions. The formal key generation of the HD-RRDPS protocol is presented below.
(1) Alice prepares a state \(|\psi\rangle\) consisting of a superposition of \(L\) modes which determines the size of the Hilbert space. In the time domain, this corresponds to a packet of \(L\) pulses. Each mode is modulated by a phase \(2\pi k_{j}/d\), where \(k_{j}\in\{0,1,2,...,(d-1)\}\), and the parameter \(d\) is called the encoding dimension and satisfies \(2\leqslant d<L\), i.e.,
\[|\psi\rangle=\frac{1}{\sqrt{L}}\sum_{j=1}^{L}e^{i\frac{2\pi k_{j}}{d}}|j\rangle. \tag{1}\]
(2) Upon receiving the signal state from Alice, Bob randomly selects a subset of \(d\) modes out of the \(L\) total modes, i.e. \(\{|j_{0}\rangle,|j_{1}\rangle,...,|j_{(d-1)}\rangle\}\subset\{|1\rangle,|2 \rangle,...,|L\rangle\}\), where we also have that \(j_{0}<j_{1}<...<j_{(d-1)}\). After selecting the \(d\)-dimensional subset, Bob performs a measurement in the MUB given by \(\{|\varphi_{m}^{(d)}\rangle;m\in\{0,1,...,d-1\}\}\), where
\[|\varphi_{m}^{(d)}\rangle=\frac{1}{\sqrt{d}}\sum_{n=0}^{d-1}e^{i\frac{2\pi nm}{d}}|j_{n}\rangle. \tag{2}\]
The outcome of the MUB measurement is used to generate the raw key, \(m\), which is attributed to a measurement of the state \(|\varphi_{m}\rangle\). Moreover, Alice and Bob only keep the measurement outcomes where the state received by Bob is an element of the MUB that is being measured, i.e. \(\frac{1}{\sqrt{d}}\sum_{n=0}^{d-1}e^{i\frac{2\pi k_{j_{n}}}{d}}|j_{n}\rangle \in\{|\varphi_{m}^{(d)}\rangle\}\).
(3) Finally, Bob shares the values of \(j_{0}\), \(j_{1}\),..., \(j_{(d-1)}\) with Alice. They can then form their final shared secure key by performing the standard classical post-processing consisting of error reconciliation and privacy amplification.
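As an illustration, the following minimal Python sketch simulates steps (1) and (2) above for a single signal state; the values of \(L\), \(d\) and the random seed are arbitrary choices for the example and are not taken from our experiment.

```python
import numpy as np

# Illustrative parameters (not the experimental ones): L modes, encoding dimension d.
L, d = 8, 4
rng = np.random.default_rng(1)

# (1) Alice draws random phase indices k_j and prepares |psi> of Eq. (1).
k = rng.integers(0, d, size=L)
psi = np.exp(2j * np.pi * k / d) / np.sqrt(L)      # amplitude on each of the L modes

# (2) Bob selects a random d-mode subset and measures in the Fourier-type MUB of Eq. (2).
subset = np.sort(rng.choice(L, size=d, replace=False))
proj = psi[subset]
proj = proj / np.linalg.norm(proj)                 # state conditioned on the selected subset
mub = np.exp(2j * np.pi * np.outer(np.arange(d), np.arange(d)) / d) / np.sqrt(d)  # mub[m, n] = <j_n|phi_m>
probs = np.abs(mub.conj() @ proj) ** 2             # outcome distribution over m = 0, ..., d-1

# Sifting: the round is kept only when the conditioned state coincides (up to a global
# phase) with a single MUB element, i.e. one outcome carries all of the probability.
kept = np.isclose(probs.max(), 1.0)
m_raw = int(probs.argmax())
print(subset, probs.round(3), kept, m_raw)
```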
## III Results
A sketch of the security proof of the HD-RRDPS is presented here in the single-photon case. We follow the procedure presented in [11]. A detailed calculation can be found in Appendix A.
Figure 1: **Experimental Setup**. Alice generates heralded single photons at a center wavelength of 810 nm coupled to a single-mode fiber (SMF) via spontaneous parametric downconversion (SPDC) by pumping a barium borate (BBO) crystal with a diode laser at a center wavelength of 405 nm. The signal and idler photons are then spatially separated using a knife-edge (KE) mirror. We note that the pump laser is filtered using an interference filter (IF). The idler photon is subsequently used to gate Bob’s measurement using coincidence measurement. Upon exiting the SMF, the signal photon is sent to a sequence of 4-f lens systems and a spatial light modulator (SLM). Alice’s prepared state, \(|\psi\rangle\), is encoded by imprinting the appropriate phase and intensity profile onto the signal photon using a holographic technique with SLM-A. The encoded photons are then sent to Bob’s stage, where a second SLM (SLM-B) is used to measure the MUB elements, i.e. \(|\varphi_{m}^{(d)}\rangle\). Finally, the signal and idler photons are measured using single-photon avalanche photodiode (APD) detectors and coincidence measurement.
We consider the strategy adopted by the eavesdropper, Eve, where she implements a general collective attack given by \(U_{\rm Eve}|j\rangle|e_{00}\rangle=\sum_{\ell=1}^{L}c_{j\ell}|\ell\rangle|e_{j\ell}\rangle\). The Holevo bound on Eve's reduced density matrix can be used to estimate the leaked information to Eve. For the protocol parameters \(L\) and \(d\), the bound on Alice and Eve's mutual information is given by
\[I_{\rm AE}(x_{1},x_{2})\leq\frac{\zeta^{(d)}\left(\binom{L-1}{d-1}x_{1},\binom{L-2}{d-2}x_{2},...,\binom{L-2}{d-2}x_{2}\right)}{\binom{L-1}{d-1}\left(x_{1}+x_{2}\right)}, \tag{3}\]
where we have defined \(x_{1}\) and \(x_{2}\) as non-negative parameters satisfying \(x_{1}+x_{2}=1\), \(\binom{n}{k}=n!/(k!(n-k)!)\) is the binomial coefficient, and
\[\zeta^{(d)}(x_{0}^{2},x_{1}^{2},...,x_{(d-1)}^{2})=-\sum_{i=0}^{d-1}x_{i}^{2}\log_{2}x_{i}^{2}+\left(\sum_{i=0}^{d-1}x_{i}^{2}\right)\log_{2}\left(\sum_{i=0}^{d-1}x_{i}^{2}\right). \tag{4}\]
We note that the expression for \(I_{\rm AE}\) does not depend on the error rate, thus removing the requirement for monitoring signal disturbance. Nevertheless, it is possible to find a tighter bound on Alice and Eve's mutual information by determining a bound on the error rate of Bob's measurement in terms of \(x_{1}\) and \(x_{2}\). The detailed calculation is also shown in Appendix A. The secret key rate is then given by,
\[R(E)=\log_{2}(d)-h^{(d)}(E)-\max_{x_{1},x_{2}}I_{\rm AE}(x_{1},x_{ 2}), \tag{5}\]
where \(h^{(d)}(x){:=}-x\log_{2}(x/(d-1))-(1-x)\log_{2}(1-x)\) is the \(d\)-dimensional Shannon entropy. This expression of the secret key rate does not require monitoring signal disturbance. However, we can obtain an upper bound on the error rate given by
\[E\leq\frac{(d-1)}{d}\left(\frac{L-d}{L-1}\right)\left(\frac{x_{2 }}{x_{1}+x_{2}}\right). \tag{6}\]
This inequality can be used to find a lower bound on \(x_{1}\), i.e. \(x_{1}^{(L)}(E)=1-E(d/(d-1))(L-1)/(L-d)\). By monitoring signal disturbance and experimentally determining the error rate \(E\), an improved secret key rate is achieved, i.e.,
\[\mathcal{R}(E)=\log_{2}(d) -h^{(d)}(E) \tag{7}\] \[-I_{\rm AE}\left(x_{1}^{(L)}(E),1-x_{1}^{(L)}(E)\right).\]
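To illustrate how these expressions combine, the following Python sketch evaluates Eqs. (3)-(7) as written above; the numerical values in the last line are purely illustrative and do not correspond to the experimental conditions reported below.

```python
import numpy as np
from math import comb, log2

def zeta(args):
    """zeta^(d) of Eq. (4), evaluated on the (scaled) weights passed in args."""
    a = np.asarray(args, dtype=float)
    a = a[a > 0]                                   # convention 0*log 0 = 0
    s = a.sum()
    return float(-(a * np.log2(a)).sum() + s * log2(s))

def I_AE(L, d, x1, x2):
    """Bound of Eq. (3) on the Alice-Eve mutual information."""
    args = [comb(L - 1, d - 1) * x1] + [comb(L - 2, d - 2) * x2] * (d - 1)
    return zeta(args) / (comb(L - 1, d - 1) * (x1 + x2))

def h(d, x):
    """d-dimensional Shannon entropy used in Eqs. (5) and (7)."""
    return 0.0 if x <= 0 else -x * log2(x / (d - 1)) - (1 - x) * log2(1 - x)

def rate_no_monitoring(L, d, E, grid=2001):
    """Eq. (5): Eve is free to pick the worst-case x1 (with x1 + x2 = 1)."""
    worst = max(I_AE(L, d, a, 1 - a) for a in np.linspace(1e-9, 1 - 1e-9, grid))
    return log2(d) - h(d, E) - worst

def rate_with_monitoring(L, d, E):
    """Eq. (7): the observed error rate E bounds x1 from below via Eq. (6)."""
    x1 = max(0.0, 1 - E * (d / (d - 1)) * (L - 1) / (L - d))
    return log2(d) - h(d, E) - I_AE(L, d, x1, 1 - x1)

print(rate_no_monitoring(16, 4, 0.03), rate_with_monitoring(16, 4, 0.03))
```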
## IV Experiment
We perform a proof-of-principle experimental demonstration of our protocol using the OAM degree of freedom of photons, see Fig. 1. In particular, we used the Laguerre-Gaussian (LG) modes as a basis for our OAM states. LG modes have a cylindrical symmetry and are composed of orthogonal states with a radial index, \(p\), and an azimuthal index, \(\ell\). In our experiment, we only use the azimuthal index, creating single photon states which carry angular momentum with a magnitude \(\ell\hbar\) per photon along the propagation direction. These states have the characteristic azimuthally dependent phase \(e^{i\ell\phi}\). The OAM degree of freedom is a popular approach in the application of high-dimensional QKD protocols, where their robustness has been repeatedly demonstrated in a wide range of quantum channels [31; 32; 33; 34; 35; 36].
Single photon pairs are produced through spontaneous parametric down-conversion (SPDC). A 360 mW Cobalt UV diode laser at a center wavelength of 405 nm is used to pump a type-I barium borate (BBO) crystal, which spontaneously produces pairs of photons, called the signal and idler, whose wavelengths are centered at 810 \(nm\). We use a knife-edge mirror placed at the center of the beam to separate the photon pairs. The signal and idler photons are each coupled to a single-mode fiber (SMF), which selects only the Gaussian optical mode from the SPDC process. Our heralded single-photon source has a coincidence rate of 22 kHz with a 5 ns coincidence time window. The detectors each have a dark count rate of 50 Hz. The idler photon is sent directly to Bob to make a coincidence measurement jointly with the measurement of the signal photon, thus reducing the background noise in the measured data. The signal photon is used by Alice to encode the information. After exiting a fiber coupling stage, the beam is expanded using a 4-f lens system with focal lengths of 50 mm and 200 mm. The photons are sent to Alice's spatial light modulator (SLM), which imprints the desired phase to the incoming Gaussian photon to produce an OAM state. We use a phase and intensity masking technique as well as a diffraction grating to produce high-quality optical modes [37]. The diffraction grating is used to send the desired phase to the first
Figure 2: **Experimental characterization of mode mismatch**. Experimental averages for the mode mismatch, \(e_{\rm mis}\) are shown for different values of the dimension, \(d\), and the size of the encoding Hilbert space, \(L\).
order of diffraction, which ensures that inefficient phase conversion inherent in the SLM does not result in the degradation of the mode quality. This comes at the cost of the overall efficiency of the process, as some photons will go into the other orders of diffraction. After Alice's SLM, a 4-f lens system is used to remove all other diffraction orders. We also use this 4-f system to image Alice's SLM onto Bob's SLM. The beam waist used on the SLMs was 640 \(\mu m\) and 650 \(\mu m\) for Alice and Bob, respectively.
Bob measures the state of the incoming photon using the intensity flattening technique [38]. Bob displays the conjugate phase of the mode that he would like to measure on the SLM. When the incoming mode corresponds to Bob's measurement, this effectively removes the transverse phase of the incoming beam resulting in a flattened wavefront which can then be made to couple to an SMF. Before coupling the beam to the single mode fiber, Bob demagnifies the beam using a 4-f system with focal lengths 100 mm and 250 mm, respectively. Finally, the photon is sent to a single-photon avalanche photodiode (APD) detector. The coincidence measurement of the signal photon is performed with the idler photon. After the state preparation and detection, the coincidence rate is 1250 Hz for the case of Alice and Bob projecting on a Gaussian state. We note that the idler photon does not contain any information about the key sent from Alice to Bob. The resulting raw counts are converted to a mode mismatch, \(e_{\text{mis}}\), for a given dimension, \(d\), and size of the encoding Hilbert space, \(L\), see Fig. 2. In the QKD protocol, the mode mismatch will result in a fixed amount of error that is independent of the channel loss. We expect that the mode mismatch will be one limiting factor in the scaling of our protocol to larger values of \(d\) and \(L\). Nevertheless, other degrees of freedom may result in lower mode mismatch and improved performance of the protocol [39; 40].
## V Discussion and outlook
We now discuss the performance of our HD-RRDPS protocol for various protocol parameters, i.e. \(L\) and \(d\), at two extreme conditions. First, it is instructive to consider the case where the error rate is increased to the point where the secret key rate goes to zero. In this regime, the channel condition is noisy and we are interested in the error threshold of our protocol. In Fig. 3, we show the error threshold with and without monitoring signal disturbance for various values of \(L\) and \(d\). At the other
Figure 3: **Error threshold of the HD-RRDPS protocol**. Error threshold as a function of protocol parameters \(L\) and \(d\) (**a**) without and (**b**) with monitoring signal disturbance.
Figure 4: **Error-free secret key rate of the HD-RRDPS protocol**. Error-free secret key rate as a function of protocol parameters \(L\) and \(d\) (**a**) without and (**b**) with monitoring signal disturbance.
extreme, we consider a condition where the level of noise is low enough to result in an error-free secret key rate. In this scenario, we are interested in the largest achievable secret key rate. For a short quantum communication link, the key rate is limited by the single photon detectors, e.g. saturation or dead time, and when limited by the number of detected photons per second, a promising strategy involves increasing the number of secret key bits carried per photon. By doing so, it is possible to increase the overall secret key rate for the same photon detection rate. In Fig. 4, we show the error-free (\(E=0\)) secret key rate with and without monitoring signal disturbance for various protocol parameters.
In practice, QKD protocols are operated at some point between the two extreme cases considered above. Imperfections in the generation and measurement devices, and noise in the channel or the detectors will result in a non-zero error rate. We will evaluate the performance of our QKD protocol with respect to the error and secret key rates through numerical simulations. In the simulations, an overall transmission of \(\eta(l)=10^{-l/10}\), where \(l\) is the total loss in dB, is assigned to the communication channel. Bob detects the single-photon states using \(d\) single photon detectors (SPD) with dark count rates \(p_{d}\). For the case where Bob successfully projects the incoming state onto a \(d\)-dimensional MUB subspace, the single-photon yield is given by,
\[Y=(1-p_{d})^{d-1}\left(\frac{d}{L}\eta+\left(1-\frac{d}{L}\eta\right)d\,p_{d} \right), \tag{8}\]
where the terms \((d/L)\eta\) and \((1-(d/L)\eta)d\,p_{d}\) respectively correspond to the case where the signal photon is not absorbed by the channel and the case where the signal photon is absorbed by the channel and a dark count occurs. Similarly, the error rate is given by,
\[EY=(1-p_{d})^{d-1}\left(\frac{d}{L}\eta\,e_{\rm mis}+\left(1-\frac{d}{L}\eta \right)d\,p_{d}\right), \tag{9}\]
where \(e_{\rm mis}\) is the probability that an error occurs due to a mismatch between the generation and the measurement bases. In practice, this may be the result of misalignment between the sender and the receiver, or imperfections in the generation and detection devices. As a first step, we consider the performance of our protocol for a fixed value of
Figure 5: **Performance of the HD-RRDPS protocol**. The secret key rates as a function of channel loss are shown in (**a**-**b**) without and in (**c**-**d**) with monitoring signal disturbance. The performance of the HD-RRDPS protocol is simulated using a fixed value of the mode mismatch, \(e_{\rm mis}=0.05\), for various values of the dimension, \(d\), and the size of the encoding Hilbert space, \(L\). In the simulation, we considered a dark count rate of \(p_{d}=10^{-4}\).
\(e_{\text{mis}}=0.05\) that is independent of the protocol parameters \(d\) and \(L\). Figure 5 demonstrates that at a low loss level, the performance of the HD-RRDPS protocol can be improved by increasing the encoding dimension, \(d\), with and without monitoring signal disturbance. This advantage can persist for larger values of the size of the Hilbert space, \(L\).
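For reference, a minimal Python sketch of the simulation model of Eqs. (8) and (9) is given below; it can be combined with the key-rate expressions of Eqs. (5) and (7) to reproduce curves of the kind shown in Fig. 5. The loss values and the choice of \((L,d)\) are illustrative only.

```python
def yield_and_error(loss_dB, L, d, e_mis=0.05, p_d=1e-4):
    """Single-photon yield Y of Eq. (8) and error rate E = EY/Y from Eq. (9)."""
    eta = 10 ** (-loss_dB / 10)       # overall channel transmission
    sig = (d / L) * eta               # signal-photon term of Eqs. (8)-(9)
    pref = (1 - p_d) ** (d - 1)
    Y = pref * (sig + (1 - sig) * d * p_d)
    EY = pref * (sig * e_mis + (1 - sig) * d * p_d)
    return Y, EY / Y

for loss in (0, 10, 20, 30, 40):
    Y, E = yield_and_error(loss, L=16, d=4)
    print(loss, Y, round(E, 4))
```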
From our experimental measurements, we note that the mode mismatch can be dependent on the protocol parameters \(d\) and \(L\). This is particularly true for the case of OAM states of photons. In this case, we evaluate the performance of our protocol for varying values of the mode mismatch, \(e_{\text{mis}}\). In our proof-of-principle experiment, we characterize the value of \(e_{\text{mis}}\) for various protocol parameters such as \(L\) and \(d\). From these parameters, we can simulate the performance of our HD-RRDPS protocol versus channel loss, see Fig. 6.
By extending the RRDPS protocol to allow for multiple bits of raw key per photon, we gradually close the conceptual gap between differential phase-shift and high-dimensional QKD protocols. We note that our protocol can be straightforwardly extended to consider two MUB measurements rather than just one. When implementing two MUB measurements and the limiting case of \(d=L\), we retrieve the high-dimensional BB84 protocol. But interestingly, in the case where \(d\neq L\), we obtain a hybrid high-dimensional protocol that is a combination of the two cornerstone QKD protocols, i.e. differential phase shift and BB84. By continuously tuning the protocol parameter \(d\), one can optimize the performance of a quantum communication system under varying experimental conditions by employing the two unique information encoding schemes of HD and DPS QKD.
###### Acknowledgements.
This work is supported by the High Throughput Secure Networks Challenge Program at the National Research Council of Canada and the University of Ottawa-NRC Joint Centre for Extreme Photonics. We thank Duncan England, Philip Bustard, Benjamin Sussman, and Aaron Goldberg for insightful discussions. The authors acknowledge that the NRC headquarters is located on the traditional unceded territory of the Algonquin Anishinaabe and Mohawk people.
|
2304.03960 | No-Existence Of Generalize Diffusion | We show that given two arbitrary states $\ket{\psi},\ket{\phi}$ it is
impossible to compute the transformation: $ \ket{\psi}\ket{\phi} \mapsto
\ket{\psi}\left( \mathbb{I} - 2 \ket{\psi}\bra{\psi} \right)\ket{\phi} $ The
contradiction of the existence of such operator follows by showing that using
it, two players can compute the disjoints of their sets in a single round and
$O\left( \sqrt{n} \right)$ communication complexity, which shown by Braverman
to be impossible \cite{Braverman}. | David Ponarovsky | 2023-04-08T09:03:36Z | http://arxiv.org/abs/2304.03960v2 | # No-Existence Of General Diffusion.
###### Abstract
_We show that given two arbitrary states \(\ket{\psi},\ket{\phi}\) it is impossible to compute the transformation \(\ket{\psi}\ket{\phi}\mapsto\ket{\psi}\left(\mathbb{I}-2\ket{\psi}\bra{\psi}\right)\ket{\phi}\). The contradiction with the existence of such an operator follows by showing that, using it, two players could compute the disjointness of their sets in a single round with \(O\left(\sqrt{n}\right)\) communication complexity, which was shown by Braverman to be impossible [1]._
## 1 Preamble
It is widely believed that quantum machines have a significant advantage over classical machines in optimization tasks. Simple algorithms, which can be interpreted as the quantum version of "scanning all the options", cut the running time down to the square root of the classical one. That speedup is achieved by using the superposition principle, most straightforwardly via the Amplitude Amplification algorithm [1], [2].
Generally speaking, this method transforms a known state \(\ket{\psi}\), in which a desired outcome \(\ket{i}\) is measured with probability \(a\), into a state in which the desired outcome is obtained with probability greater than \(\frac{1}{2}\), at the cost of \(O\left(1/\sqrt{a}\right)\) Grover iterations. Using this process, one can initialize a uniform distribution over \(n\) elements and amplify the probability of measuring a desired state in \(\sqrt{n}\) time. To understand the power gained by this method, we mention maximum extraction as a use-case [1]. Any classical algorithm which runs in square-root time scans at most \(\Theta(\sqrt{n})\) elements and might miss the maximum with probability at least \(1-\Theta(1/\sqrt{n})\); therefore it cannot yield a constant probability of sampling the maximum element. Quantumly, this limitation does not hold, and the gap amplification indeed enables a square-root time maximum extraction algorithm.
A critical requirement for that procedure is the ability to generate copies of the initial state, formulated by [1] as holding an algorithm \(\mathcal{A}\), which does not make any measurements, such that \(\mathcal{A}\ket{0}=\ket{\Psi}\). Assuming this ability, one can mimic the scattering done in the Grover search while remaining supported on \(\ket{\Psi}\).
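As a concrete illustration of the scaling just described, the following minimal Python sketch runs amplitude amplification on a uniform superposition over \(n\) elements with a single marked element (so that \(a=1/n\)); the values of \(n\) and the marked index are arbitrary choices for the example.

```python
import numpy as np

n, marked = 64, 17                     # illustrative problem size and marked element
Psi = np.full(n, 1 / np.sqrt(n))       # A|0> = |Psi>, the uniform superposition, a = 1/n

def grover_iteration(state):
    state = state.copy()
    state[marked] *= -1                            # S_chi: phase flip on the marked element
    return 2 * Psi * (Psi @ state) - state         # -A S_0 A^{-1}: reflection about |Psi>

state = Psi.copy()
a = abs(state[marked]) ** 2
m = int(np.pi / (4 * np.arcsin(np.sqrt(a))))       # ~ (pi/4) * sqrt(n) iterations
for _ in range(m):
    state = grover_iteration(state)
print(m, abs(state[marked]) ** 2)                  # success probability close to 1
```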
One question that might arise is whether the above amplification process can be carried out assuming nothing but a single copy of the initial state. Both positive and negative answers would illuminate the fundamentals behind transferring probability weight. We partially answer that question by proving that the given copy alone cannot simulate the diffusion step. We formulate the above in the following theorem:
**Theorem 1**.: _There is no operator \(D\) that for given two arbitrary states \(\ket{\psi},\ket{\phi}\) compute the transformation:_
\[D\ket{\psi}\ket{\phi}=\ket{\psi}\otimes\left(\mathbb{I}-2\ket{\psi}\bra{\psi} \right)\ket{\phi}\]
We name the gate above the _General Diffusion_ gate. If such a gate existed, it could be used as the projection operator to simulate the amplitude amplification procedure. The contradiction follows by showing that, using \(D\), two players could compute the disjointness of their sets in a single round with \(O\left(\sqrt{n}\right)\) communication complexity, contradicting the fact that an \(r\)-round two-party computation needs at least \(\Omega\left(\frac{n}{r}\right)\) communication to compute disjointness (up to log factors) [1].
Quantum Communication Complexity Of Disjointness.Consider the following communication problem. As inputs Alice gets an \(x\), and Bob gets a \(y\), where \(x,y\in\{0,1\}^{n}\), and by exchanging information, they want to determine if there is an index \(k\) such that \(x_{k}=y_{k}=1\) or not. In other words, if \(x\) encodes the set \(A=\{k|x_{k}=1\}\), and \(y\) encodes \(B=\{k|y_{k}=1\}\), then Alice and Bob want to determine whether \(A\cap B\) is empty.
The classical randomized communication complexity of this problem is \(\Theta\left(n\right)\)[10]. Assuming Alice and Bob can exchange quantum messages, it is known that they can solve the task correctly with probability greater than \(2/3\) by exchanging at most \(\mathcal{O}\left(\sqrt{n}\log n\right)\) qubits.
## 2 The Reduction.
Assume by way of contradiction the existence of \(D\) defined above. Let \(x^{(j)}\) be the \(j\)-th \(\sqrt{n}\)-block of \(x\), i.e. \(x^{(j)}=x_{j\sqrt{n}},x_{j\sqrt{n}+1},...,x_{(j+1)\sqrt{n}-1}\), and denote by \(\left|\psi_{x}\right\rangle\in\mathcal{H}_{2}^{\bigotimes\sqrt{n}}\bigotimes \mathcal{H}_{\sqrt{n}}\) the uniform superposition state over the \(x^{(j)}\)'s, "tensored" with a \(\sqrt{n}\)-dimensional qudit (which will correspond to the block number).
\[\left|\psi_{x}\right\rangle=\frac{1}{n^{\frac{1}{4}}}\sum_{j}^{\sqrt{n}} \left|x^{(j)}\right\rangle\left|j\right\rangle\]
Note that the encoding of \(\left|\psi_{x}\right\rangle\) requires only \(\sqrt{n}+\log(\sqrt{n})\) qubits. Clearly, both Alice and Bob can generate the states \(\left|\psi_{x}\right\rangle,\left|\psi_{y}\right\rangle\), and then Bob sends his share to Alice. We know that there is a classical circuit with logarithmic depth in \(\sqrt{n}\) that acts over the pure states \(\left|x^{(j)}\right\rangle\left|j\right\rangle,\left|y^{(k)}\right\rangle \left|k\right\rangle\) and decides whether
\[\left(j=k\right)\;\bigwedge\;\left(\bigvee_{i\in\left[\sqrt{n}\right]}x_{i}^{ (j)}\;\wedge\;y_{i}^{(k)}\right)\]
Denote it by \(C\) and by \(U\) the phase flip controlled by \(C\) i.e. \(U\left|i\right\rangle=\left(-1\right)^{C(i)}\left|i\right\rangle\).
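For concreteness, a small Python sketch of this block encoding is given below; it evaluates the predicate \(C\) and the induced phase flip classically on the block-index register, for an arbitrary small instance (the inputs and the instance size are illustrative only).

```python
import numpy as np

n, b = 16, 4                                   # illustrative instance: b = sqrt(n) blocks of b bits
rng = np.random.default_rng(3)
x, y = rng.integers(0, 2, n), rng.integers(0, 2, n)
xb, yb = x.reshape(b, b), y.reshape(b, b)      # xb[j] = x^{(j)}, the j-th block

def C(j, k):
    """Predicate of the circuit C: the block indices agree and the blocks intersect."""
    return j == k and bool(np.any(xb[j] & yb[k]))

# |psi_x>|psi_y> is uniform over the b*b index pairs (j, k); since the block contents are
# classical labels attached to the indices, a pair (j, k) suffices to evaluate C here.
amp = np.full((b, b), 1.0 / b)                 # amplitude (1/n^{1/4})^2 = 1/sqrt(n) per |j>|k>
flipped = np.array([[(-1) ** C(j, k) for k in range(b)] for j in range(b)]) * amp  # action of U

a = sum(amp[j, k] ** 2 for j in range(b) for k in range(b) if C(j, k))
print("disjoint" if a == 0 else f"intersecting, a = {a:.3f}")
```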
The following claim argues that \(D,U\) are sufficient for Alice to simulate a single iteration of the amplitude amplification. Since the technical details of the amplification procedure are not the focus of this paper, we only show equivalence without defining the operators, and the notation used by [1].
**Claim 1**.: _Recall the operator \(\mathbf{Q}=-\mathcal{A}\mathbf{S}_{0}\mathcal{A}^{-1}\mathbf{S}_{x}\) defined in [1], such that \(\mathcal{A}\left|0\right\rangle=\left|\Psi\right\rangle=\left|\psi_{x}\right\rangle \left|\psi_{y}\right\rangle\) and consider the generalize diffusion gate \(D\), Denote by \(\mathcal{H}_{\Psi}\) the space which is spanned by the \(\left|\Psi\right\rangle\) support. Then it holds that for any state \(\left|\phi\right\rangle\in\mathcal{H}_{\Psi}\):_
\[\left(\mathbb{I}\otimes\mathbf{Q}\right)\left|\psi_{x}\right\rangle\left|\psi _{y}\right\rangle\left|\phi\right\rangle=-D\left(\mathbb{I}\otimes U\right) \left|\psi_{x}\right\rangle\left|\psi_{y}\right\rangle\left|\phi\right\rangle\]
Proof.: Let \(\left|\Psi_{0}\right\rangle,\left|\Psi_{1}\right\rangle\) be the basis which spans \(\mathcal{H}_{\Psi}\) such that, in addition, \(U\left|\Psi_{0}\right\rangle=\left|\Psi_{0}\right\rangle,U\left|\Psi_{1}\right\rangle=-\left|\Psi_{1}\right\rangle\).
First consider the case in which the dimension of \(\mathcal{H}_{\Psi}\) is exactly \(1\). If \(\left|\Psi\right\rangle\) is supported only on non-satisfying states (i.e. \(\left|\Psi\right\rangle=\left|\Psi_{0}\right\rangle\)), then it is clear that \(I\otimes U\) acts on \(\left|\Psi\right\rangle\left|\Psi\right\rangle\) as the identity and therefore \(-D\left(I\otimes U\right)\) also acts as the identity:
\[-D\left(I\otimes U\right)\left|\Psi\right\rangle\left|\Psi\right\rangle=-\left| \Psi\right\rangle\left(I-2\left|\Psi\right\rangle\left\langle\Psi\right| \right)\left|\Psi\right\rangle=\left|\Psi\right\rangle\left|\Psi\right\rangle\]
A similar calculation yields that the action is also trivial when \(\mathcal{H}_{\Psi}\) is supported only on \(\left|\Psi_{1}\right\rangle\).
It is left to show the equivalence when \(\left|\Psi\right\rangle\) is supported on both \(\left|\Psi_{0}\right\rangle\) and \(\left|\Psi_{1}\right\rangle\). Then it follows that:
\[-D \left(\mathbb{I}\otimes U\right)\left|\psi_{x}\right\rangle\left|\psi_ {y}\right\rangle\left|\Psi_{1}\right\rangle\] \[=\left|\psi_{x}\right\rangle\left|\psi_{y}\right\rangle\left( \mathbb{I}-2\left|\psi_{x}\right\rangle\left|\psi_{y}\right\rangle\left\langle \psi_{x}\right|\left\langle\psi_{y}\right|\right)\left|\Psi_{1}\right\rangle\] \[=\left|\psi_{x}\right\rangle\left|\psi_{y}\right\rangle\left( \mathbb{I}-2\left|\Psi\right\rangle\left\langle\Psi\right|\right)\left|\Psi_ {1}\right\rangle\] \[=\left|\psi_{x}\right\rangle\left|\psi_{y}\right\rangle\left( \left(1-2a\right)\left|\Psi_{1}\right\rangle-2a\left|\Psi_{0}\right\rangle\right)\]
Now, it's clear that Alice could simulate the **algsearch** algorithm [1],
**Theorem 3**.: _Quadratic speedup without knowing \(\mathbf{a}\). There exists a quantum algorithm **algsearch** with the following property. Let \(\mathcal{A}\) be any quantum algorithm that uses no measurements, and let \(\chi:\mathbb{N}\rightarrow\{0,1\}\) be any Boolean function. Let \(a\) denote the initial success probability of \(\mathcal{A}\). Algorithm **algsearch** finds a good solution using an expected number of applications of \(\mathcal{A}\) and \(\mathcal{A}^{-1}\) which is in \(\Theta(1/\sqrt{a})\) if \(a>0\), and otherwise runs forever._
Proof of Theorem 1.: Suppose that \(A\cap B\neq\emptyset\); then the support of \(\left|\psi_{x}\right\rangle\otimes\left|\psi_{y}\right\rangle\) contains a state \(\left|\phi\right\rangle\) which satisfies \(C\), or in other words \(a=\left|\left\langle\Psi_{1}|\Psi\right\rangle\right|^{2}>0\), and therefore by _Theorem 3_ there is an explicit procedure which takes \(\Theta(1/\sqrt{a})\) time in expectation. Hence, for any \(\varepsilon>0\) we can construct a finite algorithm that fails with probability less than \(\varepsilon\) by rejecting runs that last longer than \(1/\varepsilon\) times the expected running time.
On the other hand, consider the case when \(A\cap B=\emptyset\). Then \(a=0\), so \(\mathcal{H}_{\Psi}\) is a one-dimensional space spanned only by \(\left|\Psi_{0}\right\rangle\), and the operator \(I-2\left|\Psi\right\rangle\left\langle\Psi\right|\) acts on \(\left|\Psi_{0}\right\rangle\) as the identity up to a global phase; therefore, after executing any number of iterations, the probability to measure an outcome from \(\left|\Psi_{0}\right\rangle\) remains 1.
Summarizing the above yields the following protocol:
1. Bob creates \(\left|\psi_{y}\right\rangle\) and sends it to Alice.
2. Alice simulates **algsearch** until either the algorithm accepts or \(n^{4}\) turns have passed.
3. If the algorithm accepts, Alice returns True; otherwise, Alice returns False.
The protocol computes disjointness in a single round while requiring transmission of only \(O\left(\sqrt{n}\right)\) qubits. That is in contrast to the known lower bound proved by Braverman [1]:
**Theorem** (Theorem A).: _The \(r\)-round quantum communication complexity of Disjointness\({}_{n}\) is \(\Omega\left(\frac{n}{r\log^{3}r}\right)\)._
Conclusion And Open Problems.The reduction above demonstrates how known results can give us almost immediate insights into quantum computability. Besides being a no-go proof, we hope this work will also serve as a hint towards other quantum advantages in the distributed computing setting.
It is worth saying that the \(r\)-round communication bound on disjointness does not hold in many cases. For a simple example, suppose that each set \(x,y\in\{0,1\}^{n}\) is drawn uniformly. Then it is clear that Alice and Bob can answer "Yes" and they will be correct with high probability. So the family of states onto which one can project by only a partial projection (diffusion operators) corresponds to the distributions over pairs of Alice's and Bob's sets whose disjointness they can compute with less communication.
|
2303.01108 | Evolution of complex magnetic phases and metal-insulator transition
through Nb substitution in La$_{0.5}$Sr$_{0.5}$Co$_{1-x}$Nb$_x$O$_3$ | We report the evolution of structural, magnetic, transport, and electronic
properties of bulk polycrystalline La$_{0.5}$Sr$_{0.5}$Co$_{1-x}$Nb$_x$O$_3$
($x =$ 0.025--0.25) samples. The Rietveld refinement of the x-ray diffraction
patterns with R$\bar3$c space group reveals that the lattice parameters and
rhombohedral distortion monotonously increase with the Nb$^{5+}$(4$d^0$)
substitution ($x$). The magnetic susceptibility exhibits a decrease in the
magnetic ordering temperature and net magnetization with $x$, which manifests
that the Nb substitution dilutes the ferromagnetic (FM) double exchange
interaction and enhances the antiferromagnetic (AFM) super-exchange
interaction. Interestingly, for the $x>$ 0.1 samples the FM order is completely
suppressed and the emergence of a glassy state is clearly evident. Moreover,
the decrease in the coercivity (H$\rm_{C}$) and remanence (M$\rm_{r}$) with $x$
in the magnetic isotherms measured at 5~K further confirms the dominance of AFM
interactions and reduction of FM volume fraction for the $x>$ 0.1 samples. More
interestingly, we observe resistivity minima for the $x=$ 0.025 and 0.05
samples, which are analyzed using the quantum corrections in the conductivity,
and found that the weak localization effect dominates over the renormalized
electron-electron interactions in the 3D limit. Further, a semiconducting
resistivity behavior is obtained for $x>$ 0.05, which follows the Arrhenius law
at high temperatures ($\sim$160--320~K), and the 3D-variable range hopping
prevails in the low-temperature region ($<$160~K). The core-level photoemission
spectra confirm the valence state of constituent elements and the absence of
Co$^{2+}$ is discernible. | Rishabh Shukla, R. S. Dhaka | 2023-03-02T09:42:43Z | http://arxiv.org/abs/2303.01108v1 | Evolution of complex magnetic phases and metal-insulator transition through Nb substitution in La\({}_{0.5}\)Sr\({}_{0.5}\)Co\({}_{1-x}\)Nb\({}_{x}\)O\({}_{3}\)
###### Abstract
We report the evolution of structural, magnetic, transport, and electronic properties of bulk polycrystalline La\({}_{0.5}\)Sr\({}_{0.5}\)Co\({}_{1-x}\)Nb\({}_{x}\)O\({}_{3}\) (\(x=0.025\)-0.25) samples. The Rietveld refinement of the x-ray diffraction patterns with R\(\bar{3}\)c space group reveals that the lattice parameters and rhombohedral distortion monotonously increase with the Nb\({}^{5+}(4d^{o})\) substitution (\(x\)). The magnetic susceptibility exhibits a decrease in the magnetic ordering temperature and net magnetization with \(x\), which manifests that the Nb substitution dilutes the ferromagnetic (FM) double exchange interaction and enhances the antiferromagnetic (AFM) super-exchange interaction. Interestingly, for the \(x>0.1\) samples the FM order is completely suppressed and the emergence of a glassy state is clearly evident. Moreover, the decrease in the coercivity (H\({}_{\rm C}\)) and remanence (M\({}_{\rm r}\)) with \(x\) in the magnetic isotherms measured at 5 K further confirms the dominance of AFM interactions and reduction of FM volume fraction for the \(x>0.1\) samples. More interestingly, we observe resistivity minima for the \(x=0.025\) and 0.05 samples, which are analyzed using the quantum corrections in the conductivity, and found that the weak localization effect dominates over the renormalized electron-electron interactions in the 3D limit. Further, a semiconducting resistivity behavior is obtained for \(x>0.05\), which follows the Arrhenius law at high temperatures (\(\sim\)160-320 K), and the 3D-variable range hopping prevails in the low-temperature region (\(<\)160 K). The core-level photoemission spectra confirm the valence state of constituent elements and the absence of Co\({}^{2+}\) is discernible.
## I Introduction
The exotic physical properties of LaCoO\({}_{3}\) are predominantly associated with the spin-state transition of Co\({}^{3+}\)[1; 2; 3; 4; 5; 6; 7], where it exhibits a nonmagnetic charge transfer type insulating ground state below \(\sim\)100 K and an insulator to metal transition near about 500 K [3; 4]. The ground state of LaCoO\({}_{3}\) is associated with the Co\({}^{3+}\) low-spin (LS) state (t\({}_{2g}^{6}\)e\({}_{g}^{0}\)), which evolves to a paramagnetic state (near 100 K) due to the emergence of high spin (HS; t\({}_{2g}^{4}\)e\({}_{g}^{2}\)) and/or intermediate spin (IS; t\({}_{2g}^{5}\)e\({}_{g}^{1}\)) states. The transition around 100 K was initially believed to be from the LS to the HS state [8; 9; 10]; however, this explanation was amended with the inclusion of the IS state, which was introduced in ref. [11] by LDA+U band structure calculations. At the same time, the evolution of the spin-state transition with external perturbation still remains enigmatic after several efforts, where a combination of the LS/IS scenario was supported in the explanation of the insulator-to-metal transition near 500 K [4; 11], which contradicts the description given using the LS/HS combination [1; 7]. Interestingly, the presence of Jahn-Teller distortion in LaCoO\({}_{3}\) strengthens the case for the IS state [12; 13; 14; 15]. Moreover, the IS state of Co\({}^{3+}\) is energetically close to the LS state (difference of \(\sim\) 10 meV) and exists at lower energy as compared to the HS state, while in Co\({}^{4+}\) the HS state has a lower energy (\(\sim\)1 eV) with respect to the LS state [2; 5; 11]. This energy difference (\(\Delta\)E) between the spin-states of Co ions can be controlled by the volume of the governing CoO\({}_{6}\) octahedron via various external parameters like temperature and mechanical pressure [14], magnetic field [15], and chemical pressure [16; 17; 18]. This great tunability of the physical properties of cobaltites makes them fascinating for practical applications, like catalysis [19], solid oxide fuel cells [20], sensors [21], batteries [22; 23], etc.
Intriguingly, the substitution of divalent cations (like Ca, Sr, Ba) at the La\({}^{3+}\) site results in the emergence of tetravalent Co\({}^{4+}\) ions, and a chemical pressure effect increases the lattice volume owing to the larger ionic radii of the divalent cations, and dramatic modifications in the magnetic and transport properties are observed [16; 17; 24; 25]. According to Imada _et al._, La\({}_{1-x}\)Sr\({}_{x}\)CoO\({}_{3}\) is a distinct filling-control system, where holes are solely responsible for the metallic conduction and ferromagnetism [25]. Also, the substitution of the larger Sr\({}^{2+}\) (1.44 A) as compared to La\({}^{3+}\) (1.36 A) enhances the crystal symmetry and unit cell volume, which are believed to stabilize the IS state of Co\({}^{3+}\) ions and do not change the structure up to \(x=0.55\)[16; 17; 26]. Moreover, the magnetic phase diagram of La\({}_{1-x}\)Sr\({}_{x}\)CoO\({}_{3}\) manifests a spin-glass state in 0.05\(<x\leq\)0.18 and a ferromagnetic cluster glass for 0.18\(\leq x\leq\)0.50 due to the dominance of the double-exchange FM interaction (Co\({}^{3+}\)-O\({}^{2-}\)-Co\({}^{4+}\)) over the super-exchange (Co\({}^{4+}\)-O\({}^{2-}\)-Co\({}^{4+}\))/(Co\({}^{3+}\)-O\({}^{2-}\)-Co\({}^{3+}\)) interactions [17; 27; 28; 29; 30; 31]. Also, an insulator to metal transition is induced for \(x\geq\)0.18 and further evolves to a complete metallic state for \(x\geq\)0.30 [17; 27], and magnetoelectronic phase separation is reported up to \(x=0.5\) where FM metallic clusters are embedded in the AFM insulating matrix [18; 32]. More interestingly, the substitution of Nb\({}^{5+}\) (\(4d^{0}\)) in LaCo\({}_{1-x}\)Nb\({}_{x}\)O\({}_{3}\) converts Co\({}^{3+}\) into Co\({}^{2+}\) and induces a structural transition owing to the larger ionic radii of Nb\({}^{5+}\) (0.64 A) and Co\({}^{2+}\) (0.75 A) as compared to Co\({}^{3+}\) (0.55 A) [6; 24]. The
substitution of Nb\({}^{5+}\) induces an enhancement in the unit cell volume and stabilizes the HS state of Co\({}^{3+}\) and Co\({}^{2+}\) ions and drives the system towards a more insulating nature [6]. Also, the perovskite cobaltites exhibit a strong coupling between the spin, charge, and lattice, where the correlation between charge carriers and localized spins plays a crucial role [33; 34], and can be tuned by cation substitution [35; 36]. Overall, it is convincing that the substitution of Sr at the La site and/or Nb at the Co site induces interesting changes in the physical properties of LaCoO\({}_{3}\) (diamagnetic insulator) [37; 38]. However, to the best of our knowledge, the effect of Nb substitution at the Co site in La\({}_{0.5}\)Sr\({}_{0.5}\)CoO\({}_{3}\) (ferromagnetic metal) has not been explored.
Therefore, in this paper, we investigate the evolution of structural, magnetic, transport, and electronic properties of bulk La\({}_{0.5}\)Sr\({}_{0.5}\)Co\({}_{1-x}\)Nb\({}_{x}\)O\({}_{3}\) (\(x=0.025\)-0.25) samples. The Rietveld refinement of x-ray diffraction (XRD) patterns reveals the monotonous increase of unit cell parameters and rhombohedral distortion with \(x\). The temperature-dependent susceptibility manifests the dilution of the magnetic order with \(x\) due to nonmagnetic Nb\({}^{5+}\)(4d\({}^{0}\)), which dominates for \(x>\)0.1, and the appearance of a glassy state is evident. Moreover, the isothermal MH loops measured at 5 K manifest that coercivity (H\({}_{\rm C}\)) and remanence (M\({}_{\rm r}\)) decrease for \(x>\)0.1 due to the dominance of AFM interactions and reduction of FM volume fractions. More interestingly, we observe resistivity minima for the \(x=0.025\), and 0.05 samples, which are analyzed using the quantum corrections in the conductivity, and find that the weak localization effect dominates over the renormalized electron-electron interactions in 3D limit. Moreover, a semiconducting behavior for the \(x\geq\)0.1 is analyzed with the Arrhenius model at high temperatures (\(\sim\)160-320 K) and 3D-variable range hopping conduction prevails in the low-temperature region (\(<\)160 K). Furthermore, the oxidation state of elements in La\({}_{0.5}\)Sr\({}_{0.5}\)Co\({}_{1-x}\)Nb\({}_{x}\)O\({}_{3}\) (\(x=0.1\), 0.15, 0.25) is studied using core-level photoemission spectroscopy.
## II Experimental
The polycrystalline samples of La\({}_{0.5}\)Sr\({}_{0.5}\)Co\({}_{1-x}\)Nb\({}_{x}\)O\({}_{3}\) (\(x=0.025\)-0.25) are synthesized through the conventional solid-state route using a stoichiometric initial ratio of as-purchased powders (purity \(>\)99.95%) of SrCoO\({}_{3}\), Co\({}_{3}\)O\({}_{4}\), and Nb\({}_{2}\)O\({}_{5}\) and dried La\({}_{2}\)O\({}_{3}\) (900\({}^{\rm o}\)C for 6 hrs). All the initial powders are ground for 6-8 hrs using an agate mortar and pestle, and then the obtained material is heated at 1000\({}^{\rm o}\)C for 48 hrs to ensure homogeneous mixing. After the first heating, the obtained mixture is reground for 4-6 hrs and cold-pressed into pellets for the final heating at 1200-1400\({}^{\rm o}\)C for 36 hrs, which ensures the formation of a pure-phase compound [6; 13]. The structural characterizations are performed at room temperature with the Panalytical X'pert Pro diffractometer using Cu-K\({}_{\alpha}\) radiation (\(\lambda\)=1.5406 A). The magnetic measurements are performed using a DynaCool Magnetic Property Measurement System from Quantum Design, USA. The temperature-dependent resistivity measurements were performed with the Physical Property Measurement System (PPMS) from Cryogenic Limited, UK, and the PPMS from Quantum Design, USA. The core-level x-ray photoemission spectra are recorded at room temperature using the AXIS Supra from Kratos Analytical Limited, using a monochromatic Al-K\({}_{\alpha}\) (1486.6 eV) source having an overall energy resolution of \(\sim\)0.5 eV. We use a charge neutralizer during the measurement due to the insulating nature of the samples. The core-level spectra are deconvoluted and fitted with Voigt peak shapes using the IgorPro software after subtracting the Tougaard inelastic background.
## III Results and Discussion
First, we perform the Rietveld refinement of the measured XRD patterns of La\({}_{0.5}\)Sr\({}_{0.5}\)Co\({}_{1-x}\)Nb\({}_{x}\)O\({}_{3}\), as presented in Figs. 1(a-h) for the \(x=0.025\)-0.25 samples, respectively, which confirms the single phase and rhombohedral space group (R\(\bar{3}\)c) in the hexagonal setting (\(a=b\), and \(\gamma=120^{\rm o}\)). The Rietveld refined lattice parameters are summarized in Table-I of [39], including the unit cell parameters determined for an equivalent rhombohedral cell (a\({}_{r}\) and \(\alpha_{r}\)) in the rhombohedral setting of the R\(\bar{3}\)c space group. Interestingly, the rhombohedral and hexagonal lattices are correlated via the symmetry transformation and, therefore, the Rietveld refinement using the rhombohedral space group (R\(\bar{3}\)c) in a hexagonal setting will also result in the values of the rhombohedral unit cell parameters, i.e., cell length (a\({}_{r}\)) and distortion angle (\(\alpha_{r}\)). Further, the \(x\)-dependence of the unit cell parameters is plotted in Fig. 1(i); \(a=b\) (open blue circles) and \(a_{r}\) (solid blue circles) on the left-axis, and \(c\) (open red triangles) on the right-axis. Here, we observe a monotonous enhancement in the values of these parameters with increasing \(x\). Moreover, we find an enhancement in the rhombohedral angle (\(\alpha_{r}\)) and unit cell volume (V) with \(x\), as shown on the left (open blue inverted triangles) and right (open red pentagons) axes of Fig. 1(j), respectively. This systematic monotonous enhancement in the structural parameters with \(x\) is attributed to the large ionic radii of Nb\({}^{5+}\) (0.64 A) as compared to the Co\({}^{3+}\) (0.545 A in LS and 0.61 A in HS) and Co\({}^{4+}\) (0.53 A in HS) ions [24]. Further, the enhancement in \(\alpha_{r}\) manifests that the cationic substitution results in a higher degree of distortion of the (Co/Nb)O\({}_{6}\) octahedra and therefore participates in the evolution of the spin-states of the Co-ions with Nb substitution (discussed later) [40]. It has been reported that the substitution of Sr\({}^{2+}\) at the La site in LaCoO\({}_{3}\) suppresses the rhombohedral distortion (\(\alpha_{r}\)) and cubic crystal symmetry (Pm\(\bar{3}\)m) dominates for \(x\geq\)0.55 [17; 26], whereas the substitution of Nb\({}^{5+}\) at the Co site drives the crystal symmetry from rhombohedral to orthorhombic and then to monoclinic [6]. Intriguingly, in
the present case, the observation of an enhancement in the \(\alpha_{r}\) with \(x\) demonstrates a dominance of Nb\({}^{5+}\) ions on the structural parameters. Moreover, the observation of sharper diffraction peaks with much higher intensity (30-60k) indicates the absence of any microscopic inhomogeneities in these samples [41].
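For completeness, the standard relations between the hexagonal cell constants (\(a\), \(c\)) and the equivalent rhombohedral cell (\(a_{r}\), \(\alpha_{r}\)) used here are \(a_{r}=\frac{1}{3}\sqrt{3a^{2}+c^{2}}\) and \(\sin(\alpha_{r}/2)=3/\left(2\sqrt{3+(c/a)^{2}}\right)\); a minimal Python sketch of this conversion is given below (the numerical values in the example are illustrative and are not the refined parameters of Table-I).

```python
import numpy as np

def hex_to_rhombohedral(a_hex, c_hex):
    """Convert hexagonal cell constants to the equivalent rhombohedral a_r and alpha_r (deg)."""
    a_r = np.sqrt(3 * a_hex**2 + c_hex**2) / 3
    alpha_r = 2 * np.degrees(np.arcsin(1.5 / np.sqrt(3 + (c_hex / a_hex)**2)))
    return a_r, alpha_r

# Illustrative input values only.
print(hex_to_rhombohedral(5.44, 13.25))
```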
As discussed in the Introduction section, it is interesting to note here that the phenomenon of double exchange interaction in the Co\({}^{3+}\)(IS)-O-Co\({}^{4+}\)(LS) path is found to be responsible for the itinerant FM in La\({}_{0.5}\)Sr\({}_{0.5}\)CoO\({}_{3}\), which is achieved due to the spin pumping owing to the Sr\({}^{2+}\) substitution that converts Co\({}^{3+}\) ions into the Co\({}^{4+}\) ions and stabilizes the IS state of Co\({}^{3+}\)[16; 17; 27]. More interestingly, in the present case, the non-magnetic Nb\({}^{5+}\) (4\(d^{0}\)) substitution acts as a magnetic dilution in the FM state of La\({}_{0.5}\)Sr\({}_{0.5}\)CoO\({}_{3}\), and that suppresses the long-range FM ordering as Nb\({}^{5+}\) converts Co\({}^{4+}\) back to Co\({}^{3+}\) and hence reduces the double exchange mechanism. Also, in this case, the increased concentration of Co\({}^{3+}\) enhances the dominance of AFM super-exchange interaction through the Co\({}^{3+}\)-O-Co\({}^{3+}\) path [42; 43]. Therefore, in order to understand the expected complex magnetic behavior with Nb substitution in La\({}_{0.5}\)Sr\({}_{0.5}\)Co\({}_{1-x}\)Nb\({}_{x}\)O\({}_{3}\), we measure the temperature-dependent zero-field cooled (ZFC) and field-cooled (FC) magnetization at an applied magnetic fields of 100 Oe (for the \(x\) = 0.025-0.25) and 1000 Oe (for the \(x\) = 0.025-0.15), as shown in Figs. 2(a-f). Interestingly, we find a complex magnetic ordering below the transition temperature, as shown in Figs. 2(a-d), for \(x\) = 0.025-0.1 samples, respectively and also highlighted in Figs. 1(a-f) of [39] by plotting the first-order-derivative of the dc-susceptibility. The sharp minima in Fig. 1(a) of [39] are clearly visible at 207 K and 204 K for the \(x\) = 0.025 and 0.05 samples, respectively, measured at 100 Oe. Moreover, in Figs. 1(b, c) of [39], the minima are observed at 203 K and 185 K when measured at 1000 Oe for the \(x\) = 0.025 and 0.05 samples, respectively. Similarly, the minima are found to be at 135 K (100 Oe), and 131 K (1000 Oe) for the \(x\) = 0.075 sample, and 130 K (100 Oe) and 121 K (1000 Oe) for the \(x\) = 0.1 sample, as shown in Figs. 1(d-f) of [39].
It reveals that the magnetic ordering temperature is sensitive to the measuring field [44], and we find that a small substitution of Nb\({}^{5+}\) (4\(d^{0}\)) significantly suppresses the magnetic ordering temperature (T\({}_{\rm MO}\)) from 250-253 K for the \(x\) = 0 sample [41; 45] to 205\(\pm\)2 K for the \(x\) = 0.025 sample, which may define the onset of magnetic dilution. Also, in Fig. 2 (a), the maximum FC magnetic susceptibility decreases from \(\sim\)20 emu/mole-Oe (\(x\) = 0) [44; 45] to 15 emu/mole-Oe (\(x\) = 0.025), and from \(\sim\)5.5 emu/mole-Oe (\(x\) = 0) [46; 47] to 4.8 emu/mole-Oe (\(x\) = 0.025) when the measurements are done at 100 Oe and 1000 Oe, respectively. This type of complex magnetic ordering remains enigmatic in literature; for example, it is believed to be FM cluster glass [17; 27] and/or FM to paramagnetic transition [28; 48]. Furthermore, in Figs. 2(a, b), the FC magnetization increases monotonously up to the lowest measured temperature (2 K), whereas the ZFC magnetization shows a cusp with broad peaks at around 183 K and 176 K (100 Oe) and 108 K and 88 K (1000 Oe) for the \(x\) = 0.025 and 0.05 samples, respectively. It has been reported that the cusp in the ZFC magnetization can be associated with the anisotropy in the FM clusters available in the AFM ma
Figure 1: (a–h) The Rietveld refined XRD patterns of La\({}_{0.5}\)Sr\({}_{0.5}\)Co\({}_{1-x}\)Nb\({}_{x}\)O\({}_{3}\) (\(x\) = 0.025–0.25) samples; where open red circles, solid black and blue line represent the experimental, refined, and difference between experimental and refined patterns, while green vertical markers manifest the Bragg positions corresponding to the R\(\bar{3}\)c space group. (i, j) The Rietveld refined unit cell parameters plotted as a function of \(x\).
trix, and that is associated with the intra-cluster interaction and inter-cluster anisotropy in the matrix [49]. Interestingly, a large bifurcation between the ZFC and FC magnetization further confirms the higher anisotropy in these samples [17]. However, we find that this anisotropy decreases with the substitution of Nb\({}^{5+}\) ions and applied magnetic field, which plays a crucial role in the suppression of the intra-cluster FM interactions.
More importantly, we find a drastic change in the behavior of ZFC and FC magnetization curves for the \(x\) = 0.075 and 0.1 samples, which exhibit a decreasing trend below a cusp and then remain constant to 2 K, as shown in Fig. 2(c, d). Here, the values of FC magnetic susceptibility at the peak are found to be around 6.8 emu/mole-Oe (100 Oe) and 1.8 emu/mole-Oe (1000 Oe) for the \(x\) = 0.075 sample, and 3.9 emu/mole-Oe (100 Oe) and 0.8 emu/mole-Oe (1000 Oe) for the \(x\) = 0.1 sample. These values decrease below \(\approx\)83 K and almost saturates to around 3.8 emu/mole-Oe (100 Oe), 1.1 emu/mole-Oe (1000 Oe), and 2.2 emu/mole-Oe (100 Oe), 0.35 emu/mole-Oe (1000 Oe) at the lowest measured temperature (2 K) for the \(x\) = 0.075 and 0.1 samples, respectively. The decreasing behavior of FC magnetic susceptibility is usually observed in the samples with an antiferromagnetic ordering [50] and/or samples with a higher volume fraction of the AFM interactions as compared to the FM interactions [51; 52]. The magnetization values are found to be further decreasing with \(x\), which is consistent with the fact that more Nb\({}^{5+}\) (4\(d^{0}\)) concentration acts like a magnetic dilution. Further, the difference between FC and ZFC magnetization (\(\Delta\)M, thermomagnetic irreversibility) at the lowest temperature is correlated with the presence of FM and/or AFM volume fraction in the sample. A decrease in the value of \(\Delta\)M with \(x\) is observed, i.e., 472 emu/mole (100 Oe), 1240 emu/mole (1000 Oe) for the \(x\) = 0.075, and 234 emu/mole (100 Oe), 350 emu/mole (1000 Oe) for the \(x\) = 0.1 sample, see the left-axis [solid red circles (100 Oe) and solid black rhombus (1000 Oe)] of Fig. 3(d). This suggests that we are restricted to collect the magnetization from the smaller volume fraction of FM and higher fraction of AFM phase with increasing \(x\)[51]. The ZFC magnetic susceptibility values are found to be negative at 2 K when measured at 100 Oe magnetic field, which increased significantly when measured at 1000 Oe for the \(x\) = 0.075 and 0.1 samples [see Figs. 2(c, d)]. Here, a large magnetic anisotropy due to the strong competitive AFM and FM interactions might be responsible for the field/temperature induced magnetization reversal behavior [53; 54], which is also consistent with the variation of \(\Delta\)M with the applied magnetic field. Interestingly, we observe a further decrease in the T\({}_{\rm MO}\) to 135 K (100 Oe), 131 K (1000 Oe) for the \(x\) = 0.075 sample, and 130 K (100 Oe), 121 K (1000 Oe) for the \(x\) = 0.1, and an increase in the broadening of the transition temperature region with \(x\) can be attributed to the enhanced magnetic disorder in the material and dilution of the FM double exchange interaction [55; 56; 57]. Moreover, in Figs. 2(c, d), the cusp/peak in the FC and ZFC magnetization at 100 Oe is observed at different temperatures like 83.5 K and 110.5 K (\(x\) = 0.075) and 82 K and 102 K (\(x\) = 0.1), respectively. The corresponding values at 1000 Oe are found to be 82 K and 84 K (\(x\) = 0.075), and 80 K and 81.5 K (\(x\) = 0.1), respectively. The peak/cusp in the FC curves appears at a lower temperature as compared to the ZFC, which is due to the difference in the measurement protocols as the FC measures the higher FM volume fraction compared to the ZFC protocol [51]. Therefore, confirmation of the decreased FM volume fraction in higher \(x\) samples is obtained from a smaller difference
Figure 2: A comparison of the temperature-dependent zero-field-cooled (ZFC) and field-cooled (FC) dc-magnetic susceptibility data recorded at 100 Oe and 1000 Oe applied magnetic field: (a) \(x\) = 0.025, (b) \(x\) = 0.05, (c) \(x\) = 0.075, (d) \(x\) = 0.1, and (e) \(x\) = 0.125, and in (f) for the \(x\) = 0.15 (100 Oe and 1000 Oe), \(x\) = 0.2 (100 Oe), and \(x\) = 0.25 (100 Oe) samples.
in the cusp/peak value of FC and ZFC magnetization, i.e., \(\Delta\)T\({}_{\rm f}\)=27 K (100 Oe), 2 K (1000 Oe) for the \(x\) = 0.075 sample, and 20 K (100 Oe), 1.5 K (1000 Oe) for the \(x\) = 0.1 sample. Furthermore, the ZFC and FC susceptibilities remain constant from \(\sim\)35 K (100 Oe) and 28 K (1000 Oe) down to the lowest measured temperature [see Fig. 2(c, d)], and this is expected to be a magnetic-glass state in which the competing FM and AFM magnetic clusters are frozen randomly in the sample at low temperatures [51].
Interestingly, for the \(x\) = 0.125-0.25 samples we find a substantial change in the magnetic behavior, as shown in Fig. 2(e, f). For example, the FM ordering is suppressed due to the reduction in the concentration of Co\({}^{4+}\) ions, and AFM interactions evolve due to the enhancement in Co\({}^{3+}\), which may result in a glassy state [58; 59] for the \(x\geq\) 0.125 samples. The magnetic susceptibility value at 2 K drops abruptly to 0.04 emu/mole-Oe for the \(x\) = 0.125 and 0.15 samples, which indicates that the magnetic order is highly sensitive to the percolation limit in the La\({}_{0.5}\)Sr\({}_{0.5}\)Co\({}_{1-x}\)Nb\({}_{x}\)O\({}_{3}\) samples. Moreover, all samples in the 0.125 \(\leq\ x\leq\) 0.25 range show analogous FC and ZFC magnetization curves with temperature [see Fig. 2(e, f)], where both the FC and ZFC magnetization increase monotonically with lowering the temperature, and after a peak/cusp in ZFC (known as the freezing temperature, T\({}_{\rm f}\)), the FC magnetization continues to increase at a lower rate while the ZFC magnetization decreases down to the lowest measured temperature (2 K). Additionally, we observe that the freezing temperature T\({}_{\rm f}\) (peak/cusp in ZFC) decreases with \(x\), see the right-axis (open blue rhombus) of Fig. 3(d), taking values of 58\(\pm\)2 K (\(x\) = 0.125), \(\approx\)55 K (\(x\) = 0.15), 40 K (\(x\) = 0.2), and 23.5 K (\(x\) = 0.25). Also, the value of \(\Delta\)M measured at 100 Oe decreases from 3 emu/mole for the \(x\) = 0.125 to 0.7 emu/mole for the \(x\) = 0.25 sample, as shown on the left-axis (solid red circles) of Fig. 3(d), which further suggests a minimal volume fraction of the FM phase in these samples, as discussed above. Therefore, we conclude that the substitution of Nb\({}^{5+}\) (4\(d^{0}\)) plays a crucial role in diluting the FM order and inducing a magnetic glassy state at low temperatures [58; 59; 60].
In order to determine the effective magnetic moment (\(\mu_{\rm eff}\)), we plot the inverse FC magnetic susceptibility in Fig. 3(a-c) for the \(x\) = 0.025-0.15 (1000 Oe) and the \(x\) = 0.2, 0.25 (100 Oe) samples. Moreover, the inverse susceptibility data for the \(x\) = 0.025-0.15 samples measured at 100 Oe and 1000 Oe are compared in Figs. 2(a-f) of [39]. We fit the inverse susceptibility data using the Curie-Weiss (C-W) equation [\(\chi^{-1}\) = (\(T-\theta_{\rm CW}\))/C], where C is the C-W constant and \(\theta_{\rm CW}\) is the C-W temperature, in the paramagnetic region for all the La\({}_{0.5}\)Sr\({}_{0.5}\)Co\({}_{1-x}\)Nb\({}_{x}\)O\({}_{3}\) samples. The slope obtained from the C-W fit, for the \(x\) = 0.025-0.15 (1000 Oe) and \(x\) = 0.2, 0.25 (100 Oe) samples,
Figure 3: A linear fit (solid black lines) of the inverse susceptibility data (recorded at 100 Oe and/or 1000 Oe) manifests the Curie-Weiss law in the paramagnetic region of (a) 260–320 K for the \(x\) = 0.025 and 0.05 samples, (b) 260–320 K for the \(x\) = 0.075 and 0.1 samples, (c) 230–320 K for the \(x\) = 0.125–0.25 samples. (d) The thermomagnetic irreversibility, \(\Delta\)M, for 100 Oe (solid red circles) and 1000 Oe (solid black rhombus) on a semilogarithmic left-axis, and peak/freezing (T\({}_{\rm f}\)) temperature from the ZFC magnetization (open blue rhombus) on right-axis as a function of \(x\), and (e) the Curie-Weiss temperature, \(\theta_{\rm CW}\), for 100 Oe (open magenta triangles) and 1000 Oe (open black rhombus) along with the average cobalt valence, \(n\) [open green rectangles (calculated) and solid blue rhombus (experimentally obtained using XPS)], and (f) a comparison of experimentally determined values of effective magnetic moments for susceptibility data recorded at 100 Oe and 1000 Oe magnetic fields and theoretically calculated values considering different possible spin-states of the Co ions.
is used to determine the \(\mu_{\rm eff}\) and the intercept is utilized to estimate the \(\theta_{\rm CW}\), which are summarised in Table 2 of [39]. Further, we plot the \(x\)-dependence of \(\mu_{\rm eff}\) (solid red circles for 1000 Oe and open magenta pentagon for 100 Oe) in Fig. 3(f), which shows a monotonic decrease in the values as a function of \(x\). Additionally, we observe a monotonic decrease in the \(\theta_{\rm CW}\) values (magenta open triangles, left axis) as well as the calculated average Co valence, \(n\) (green open rectangles) with \(x\), as shown in Fig. 3(e). The value of \(n\) is calculated using the general formula of La\({}_{0.5}^{3+}\)Sr\({}_{0.5}^{2+}\)Co\({}_{1-x}^{n}\)Nb\({}_{x}^{5+}\)O\({}_{3}^{2-}\), where \(x\) is the Nb substitution fraction. Here, the experimentally obtained values of the Co valence state (solid blue rhombus) from the core-level photoemission spectra of Co \(2p\) are also compared (discussed later), which match well for the \(x=0.1\) and 0.15 samples, while deviating slightly for the \(x=0.25\) sample. Note that the decreasing trend of these parameters manifests an overall evolution of the AFM interactions over FM interactions and a reduction of the magnetic anisotropy with \(x\)[61]. Moreover, we calculate the theoretical value of \(\mu_{\rm eff}^{\rm calc}\) using the equation \(\mu_{\rm eff}^{\rm calc}=\sqrt{(1-p)[\mu_{\rm eff}^{4+}]^{2}+p[\mu_{\rm eff}^{3+}]^{2}}\), where \(p\) is the fraction of the Co\({}^{3+}\) ions, and \(\mu_{\rm eff}^{4+}\) and \(\mu_{\rm eff}^{3+}\) correspond to the \(4+\) and \(3+\) valence states of Co, respectively. These theoretical values of \(\mu_{\rm eff}^{\rm calc}\) are computed considering different spin-states of the Co\({}^{3+}\) and Co\({}^{4+}\) ions. For the \(x=0\) sample, Xu _et al._ reported that the Co\({}^{3+}\) ions are present in the IS state, whereas the Co\({}^{4+}\) ions stabilize in a mixture of LS and HS states [62]. Similarly, we calculate the \(\mu_{\rm eff}\) values considering composition-weighted spin-states of the Co\({}^{3+}\)/Co\({}^{4+}\) ions in La\({}_{0.5}\)Sr\({}_{0.5}\)Co\({}_{1-x}\)Nb\({}_{x}\)O\({}_{3}\), which are plotted in Fig. 3(f) as a function of \(x\). The calculated values of \(\mu_{\rm eff}\) are found to be in good agreement with the experimental \(\mu_{\rm eff}^{\rm expt}\) values, as shown in Fig. 3(f). All the extracted parameters are summarized in Table 2 of [39].
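As a concrete illustration of this analysis step, a minimal Python sketch of the C-W fit is given below (illustrative only, not the analysis code used for Table 2 of [39]; it assumes the molar FC susceptibility in emu/mole-Oe restricted to the paramagnetic fitting window):

```python
import numpy as np

def curie_weiss_fit(T, chi):
    """Linear fit of 1/chi = (T - theta_CW)/C; returns C, theta_CW, mu_eff (in mu_B)."""
    inv_chi = 1.0 / chi
    slope, intercept = np.polyfit(T, inv_chi, 1)   # 1/chi = slope*T + intercept
    C = 1.0 / slope                                # Curie constant (emu K / mol Oe)
    theta_cw = -intercept * C                      # Curie-Weiss temperature (K)
    mu_eff = np.sqrt(7.997 * C)                    # spin-only conversion in CGS molar units
    return C, theta_cw, mu_eff

# Example with synthetic paramagnetic data (C = 2.0 emu K/mol Oe, theta_CW = 150 K):
T = np.linspace(260, 320, 61)
chi = 2.0 / (T - 150.0)
print(curie_weiss_fit(T, chi))                     # ~ (2.0, 150.0, 4.0)
```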
Furthermore, to understand the magnetic interactions in the highly competitive FM and AFM regions, we measure the isothermal MH loops for the \(x=0.125\), 0.15, and 0.25 samples at 5 K within \(\pm 70\) kOe applied magnetic field under the ZFC protocol, as shown in Fig. 4. Interestingly, the non-saturating, symmetric behavior of the magnetization arises from the continuous reorientation of spins along the field direction. The estimated values of coercivity (H\({}_{\rm C}\)) are found to be 3.8, 3.2, and 2.2 kOe, and the remanence (M\({}_{\rm r}\)) values are 86, 80, and 31 emu/mole for the \(x=0.125\), 0.15, and 0.25 samples, respectively. We find that the values of H\({}_{\rm C}\) and M\({}_{\rm r}\) decrease with increasing \(x\), as highlighted with a green arrow in the inset of Fig. 4, which indicates that a higher Nb\({}^{5+}\) (\(4d^{0}\)) concentration suppresses the volume fraction of the spin-pinning boundaries as well as the non-homogeneity of the magnetic phase [29]. Moreover, the virgin isothermal curves measured at 5 K deviate from linear behavior and show an upturn at higher fields (convex nature), see Fig. 3(a) of [39], which suggests a metamagnetic nature for the \(x=0.125\) and 0.15 samples. A closer look indicates two critical magnetic fields at \(\approx\)16 (H\({}_{\rm m_{1}}\)) and 35 kOe (H\({}_{\rm m_{2}}\)) (arrows in Figs. 3(b, c) of [39]) [63]. These can be associated with competitive AFM and FM interactions at low fields; however, the linear increase of magnetization beyond H\({}_{\rm m_{2}}\) suggests that AFM interactions dominate at higher fields [63; 64]. In contrast, for the \(x=0.25\) sample, the saturating nature of the M-H curve at high magnetic fields (\(>40\) kOe) manifests the field-induced spin-reorientation [65].
To investigate the temperature-dependent transport behavior in La\({}_{0.5}\)Sr\({}_{0.5}\)Co\({}_{1-x}\)Nb\({}_{x}\)O\({}_{3}\), the \(\rho\)-T curves are first compared in Fig. 5(a) for the \(x=0.025\) and 0.05 samples. The resistivity varies linearly above T\({}_{\rm MO}\)\(\sim\)250 K, whereas a gradual change in the slope, i.e., a subtle decrease in the resistivity below T\({}_{\rm MO}\), can be correlated to the conversion of Co\({}^{3+}\) ions from the HS state to the IS state in this temperature range. The change can be attributed to reduced spin-disorder scattering owing to the FM ordering in the sample below T\({}_{\rm MO}\)[66]. Interestingly, we observe a resistivity minimum (T\({}_{\rm min}\)) near 77 K and 83 K for the \(x=0.025\) and 0.05 samples, respectively, as shown in Fig. 5(a). The resistivity increases below T\({}_{\rm min}\), which is the signature of a metal-to-insulator transition (T\({}_{\rm MI}\)) and can be associated with the spin-state transition from the IS to the LS state [12]. The metallic nature of the \(x=0.025\) and 0.05 samples can be attributed to the presence of a larger volume fraction of Co\({}^{4+}\) ions, as in La\({}_{1-x}\)Sr\({}_{x}\)CoO\({}_{3}\) for \(x\geq 0.18\)[17]. Interestingly, a similar resistivity behavior is reported near T\(\sim\)100 K in magnetic oxides [67] and is believed to be a disorder-induced effect [34]; this temperature also scales with the spin-state transition of Co ions from HS to IS [4; 6]. Further, at low temperatures, the resistivity starts increasing, which indicates the transition of Co\({}^{3+}\) ions to the LS state, as reported for LaCoO\({}_{3}\)[4; 6]. In Fig. 5(b) we compare the \(\rho\)-T behavior of the \(x=0.05\) sample measured at 0, 7,
Figure 4: The isothermal M-H loops recorded at 5 K within the applied magnetic field of \(\pm 70\) kOe for the \(x=0.125\), 0.15, and 0.25 samples; the inset highlights the variation of magnetization near the zero-magnetic field.
and 14 Tesla applied magnetic fields. The applied magnetic field enhances the exchange interaction strength, and the observed negative magnetoresistance behavior below \(\approx\)100 K can be associated with the enhanced density of states near the Fermi level [68]. Intriguingly, in the quantum transport regime at low temperatures, weak localization (WL) or weak antilocalization (WAL) corrections to the conductivity arise due to the interference of electron waves traveling back and forth in the material; these are termed quantum corrections to the conductivity (QCC) [34]. The negative magnetoresistance in the \(x=0.05\) sample manifests the dominance of WL, as the field suppresses the magnetic disorder and hence the resistivity [34]. Also, the interference of the two different electron waves gives rise to the renormalized electron-electron interaction (REEI) [34]. Therefore, a least-squares minimization method is used to fit the experimental data by considering the contributions of WL and REEI in both the three-dimensional (3D) and two-dimensional (2D) limits.
The \(\rho\)-T behavior can be approximated by the following equations (1) and (2) for the 3D and 2D limits of WL and REEI effects, respectively [69; 34]:
\[\rho(T)=\frac{1}{(\sigma_{0}+a_{1}T^{p/2}+a_{2}T^{1/2})}+b^{\prime}T^{2} \tag{1}\]
\[\rho(T)=\frac{1}{\sigma_{0}+a_{3}lnT}+b^{\prime}T^{2} \tag{2}\]
where \(\sigma_{0}\) is the residual conductivity (\(1/\rho_{0}\)), \(a_{1}T^{p/2}\) accounts for the 3D WL contributions, \(a_{2}T^{1/2}\) is attributed to the 3D REEI corrections, and \(a_{3}\)lnT is ascribed to the combined contributions of the WL and REEI in the 2D limit. The variable \(p\) is the index of localization effects in the system and can be influenced by different types of scattering mechanisms. The value of \(p=2\) implies the dominance of electron-electron scattering, whereas \(p=3\) is attributed to electron-phonon scattering [34]. The last term (\(b^{\prime}T^{2}\)) is the Boltzmann term and describes the classical low-temperature dependence of resistivity following the Matthiessen rule [70]. We use equation (1) to fit the experimental data using a combination of 3D WL and REEI at low temperatures (below 100 K), as shown by the solid lines in Figs. 5(a, b). Moreover, in Figs. 2(c, d) of [39] we show a comparison of the fit residual, \(\Delta\rho(\%)=[\rho_{\rm fit}(T)-\rho(T)]/\rho(T)\times 100\%\), obtained by fitting the zero-field data considering the WL plus REEI corrections in both the 3D (solid red circles) and 2D (solid blue triangles) limits for the \(x=0.025\) and 0.05 samples, respectively. This clearly shows a larger deviation with respect to the zero line in the case of the 2D limit as compared to the 3D limit, which confirms the dominance of the 3D limit of WL plus REEI corrections. Now, to understand whether the origin of the quantum effects is WL or REEI [34; 69], we measure the resistivity of the \(x=0.05\) sample at high applied magnetic fields of 7 and 14 Tesla, presented in Fig. 5(b). We find that a higher magnetic field reduces the resistivity and shifts the minimum point slightly towards lower temperatures [as shown in the inset of Fig. 5(b)]. This suggests a diminishing of the QCC effects, which can be attributed to the suppression of the wave coherence of the electrons under an applied magnetic field, and gives strong evidence for the WL correction at low temperatures because the REEI correction should be independent of the magnetic field [34]. Further, in Fig. 5(b), we compare the fitting [using equation (1)] of resistivity data measured at 0, 7, and 14 Tesla. The obtained values of \(\sigma_{0}\) (open red triangles) and a\({}_{1}\) (open blue circles) from the fitting at low temperatures (\(<\)100 K) are plotted in Fig. 5(c) on the left and right
Figure 5: (a) A comparison of the temperature-dependent resistivity of the \(x=0.025\) and 0.05 samples at zero magnetic field, (b) the variation in resistivity with the applied magnetic field (0, 7, and 14 T) for the \(x=0.05\) sample, experimental and fitted data are shown in the open symbols and solid lines, respectively. The inset in (b) highlights the decrease in the resistivity minimum (T\({}_{\rm min}\)) with the magnetic field. (c) The variation of the low-temperature (below 100 K) fitting parameters using equation (1), \(\sigma_{0}\) (left-axis) and a\({}_{1}\) (right-axis), and (d) the variation in fitting parameters in 120–200 K range using equation (3), \(A\) (first-left-axis), \(\rho_{0}\) (second-left-axis), and power law-exponent (\(n\)) with the magnetic field (for the \(x=0.05\) sample) and for both the \(x=0.025\) and 0.05 samples at the zero-magnetic field.
axes, respectively. A significant increase in \(\sigma_{0}\) can be correlated to the negative magnetoresistance and a decrease in the value of \(a_{1}\) is attributed to the reduction of the quantum corrections with the magnetic field and manifests the dominance of WL effects. The parameters obtained from the low-temperature fit for the \(x=0.025\) and 0.05 samples are summarized in Table 3 of [39].
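For reference, the low-temperature fit with equation (1) can be reproduced with a short least-squares sketch of the kind below (illustrative only; the initial guesses, the fixed \(p\), and the variable names are assumptions rather than the values used in this work):

```python
import numpy as np
from scipy.optimize import curve_fit

def rho_3d(T, sigma0, a1, a2, b, p=2.0):
    # Eq. (1): 3D WL (a1*T^(p/2)) + 3D REEI (a2*T^(1/2)) + Boltzmann term (b*T^2)
    return 1.0 / (sigma0 + a1 * T**(p / 2) + a2 * np.sqrt(T)) + b * T**2

def fit_low_T(T, rho, p=2.0):
    """Fit rho(T) below ~100 K; T in K, rho in Ohm-cm."""
    model = lambda T, sigma0, a1, a2, b: rho_3d(T, sigma0, a1, a2, b, p)
    p0 = [1.0 / rho[-1], 1.0, 1.0, 1e-9]              # crude initial guesses
    popt, pcov = curve_fit(model, T, rho, p0=p0, maxfev=20000)
    residual = 100.0 * (model(T, *popt) - rho) / rho  # Delta-rho (%) as in the text
    return popt, residual
```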
In order to analyze the resistivity behavior in the intermediate temperature range (120-200 K), we fit the experimental data with a power law using least-squares minimization, see Figs. 5(a, b):
\[\rho(T)=\rho_{0}+AT^{n} \tag{3}\]
where \(\rho_{0}\) is the residual resistivity, and \(n\) and \(A\) are fitting parameters used to quantify the disorder in the FM oxides [34]. In the present case, the \(\rho_{0}\) values are estimated in the range 8-11 \(\times 10^{-5}\)\(\Omega\)-cm for both the samples (see Table III of [39]). The values of the coefficient \(A=\) 0.4-1.5\(\times 10^{-11}\)\(\Omega\)-cm-K\({}^{-2}\) are found to be comparable to the values obtained for elemental ferromagnets and are attributed to electron-electron scattering [71]. Also, in ferromagnetic metal oxides, the \(n\) values can be used to quantify the disorder; for example, Herranz _et al._ reported that the \(n\) value changes from \(\sim\) 1.1 to 3 for weakly and strongly disordered systems, respectively [70]. In our case, the \(n\) value is found to be 2.9\(\pm\)0.05 for both samples at zero magnetic field, which suggests the presence of strong disorder in the system. This is also consistent with the observed non-linearity in the \(\rho\)-T behavior and the coexistence of FM ordering and the metal-insulator transition in these samples [34]. Moreover, the fitting parameters \(A\) (open magenta rhombus), \(\rho_{0}\) (open green triangles), and \(n\) (open cyan circles) are plotted in Fig. 5(d). The exponent \(n\) decreases from 2.95 (0 T) to 2.75 (14 T), which manifests a decrease in the disorder with the applied magnetic field [70]. The fitting parameters obtained using equation (3) are included in Table III of [39] for the \(x=0.025\) and 0.05 samples. Further, the analysis of the \(\rho\)-T behavior of the \(x=0.025\) and 0.05 samples [Figs. 5(a, b)] in the high-temperature range (T\(>\)T\({}_{\rm MO}\)) is important to study the effect of disorder on the electron-phonon coupling; therefore, we use the electronic transport theory based on the Bloch-Gruneisen law, as given below [70; 72]:
\[\begin{split}\rho&=\rho_{0}+\rho_{m}+\frac{2\pi k_{ B}}{\hbar e^{2}(n/m_{eff})}G(\Theta_{D}/T)\lambda^{\prime}T\\ &=\rho_{0}+\rho_{m}+\gamma(T)T\end{split} \tag{4}\]
where \(\rho_{0}\) and \(\rho_{m}\) are the residual resistivity and the magnetic resistivity due to spin scattering of the electrons (considered constant in the paramagnetic phase at T\(>\)T\({}_{\rm MO}\)), and \(\gamma(T)\) is the temperature-dependent Gruneisen parameter. Here, \(\gamma(T)\) is equal to \([2\pi k_{B}/\hbar e^{2}(n/m_{\rm eff})]G(\Theta_{D}/T)\lambda^{\prime}\), where \(n\) is the carrier density, m\({}_{\rm eff}\) is the effective mass of the carriers, and \(G(\Theta_{D}/T)\) is the Gruneisen function. The values of \(\gamma(T)\) are determined from the slope of the linear fit of \(\rho(T)\) in the high-temperature range [70], see Figs. 5(a, b). The obtained fitting parameters are presented in Table III of [39], where the \(\gamma(T)\) value is found to be 31\(\times 10^{-8}\)\(\Omega\)-cm-K\({}^{-1}\) for the \(x=0.025\) sample, much higher than that of the \(x=0.05\) sample (6\(\times 10^{-8}\)\(\Omega\)-cm-K\({}^{-1}\)). This manifests a higher degree of disorder in the \(x=0.025\) sample. Further, for the \(x=0.05\) sample, there is no significant change in the values of the fitting parameters [\(\rho_{0}\), \(\rho_{m}\) and \(\gamma(T)\)] with the applied magnetic field, which suggests invariance of the disorder at high temperatures.
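The intermediate- and high-temperature analyses of equations (3) and (4) amount to a windowed power-law fit and a linear fit, as sketched below (again illustrative; the onset temperature for the linear high-T fit is an assumed input):

```python
import numpy as np
from scipy.optimize import curve_fit

def power_law(T, rho0, A, n):
    # Eq. (3): rho = rho0 + A*T^n
    return rho0 + A * T**n

def fit_intermediate(T, rho):
    """Fit rho = rho0 + A*T^n in the 120-200 K window."""
    mask = (T >= 120) & (T <= 200)
    popt, _ = curve_fit(power_law, T[mask], rho[mask],
                        p0=[rho[mask][0], 1e-11, 3.0], maxfev=20000)
    return popt                                   # rho0, A, n

def high_T_slope(T, rho, T_onset=260.0):
    """Linear fit of rho(T) above T_MO; the slope is gamma(T) in Eq. (4)."""
    mask = T >= T_onset
    gamma, offset = np.polyfit(T[mask], rho[mask], 1)
    return gamma, offset                          # offset ~ rho0 + rho_m
```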
In Fig. 6(a), we present the temperature dependent resistivity data (on a semilogarithmic scale) of the 0.075 \(\leq x\leq\) 0.25 samples, which clearly show an increasing
Figure 6: (a) A comparison of the temperature-dependent resistivity variation of \(x=0.075\)–0.25 samples on the semilogarithmic scale, (b) the low-temperature resistivity fitted with the 3D-VRH model, and (c) the high-temperature resistivity data fitted with the Arrhenius model; a solid-black line presents the fit for the corresponding model and the dotted lines mark the range where the experimental curve deviates from the respective fits, and (d) the activation energy (E\({}_{\rm a}\)) extracted from the Arrhenius model on a linear scale and density of states near the Fermi level [N(E\({}_{\rm F}\))] extracted from the 3D-VRH model on a semi-logarithmic scale.
trend with decreasing temperature, i.e., a semiconducting/insulating nature. Interestingly, the resistivity increases monotonically with \(x\) due to the higher Nb\({}^{5+}\) concentration, which predominantly favors the insulating nature of these samples. This observation further confirms that the substitution of Nb\({}^{5+}\) converts metallic Co\({}^{4+}\) ions into Co\({}^{3+}\), which restricts the metallic conduction pathways. Moreover, the Nb\({}^{5+}\) does not participate in conduction due to the unavailability of conduction electrons in 4\(d^{0}\). In order to understand the possible carrier conduction mechanisms in these samples, we analyze the experimental data in two different regions: the high-temperature region (320-200 K) with the Arrhenius model (where charge carriers move through band conduction) and the low-temperature region (160-60 K) with the three-dimensional variable-range hopping (3D-VRH) model (where charge carriers hop between localized states near the Fermi level). The Arrhenius model can be expressed as [6], \(\rho(T)=\rho_{0}exp(E_{a}/k_{B}T)\), where \(\rho_{0}\) and E\({}_{a}\) are the exponential pre-factor and activation energy, respectively. In Fig. 6(c), we plot \(\ln(\rho)\) versus 1000/T, and the slope of the linear fit (solid black line) of these curves is utilized to determine the activation energy for each sample. The obtained values of the activation energy are presented on the left-axis of Fig. 6(d) on a linear scale, where we find a monotonic enhancement of the activation energy, which is consistent with the increasing dominance of the insulating nature with \(x\). Interestingly, a deviation of the experimental data from the Arrhenius model towards low temperatures (highlighted with the dotted black line) demonstrates the possibility of a different conduction mechanism [50]. Therefore, we follow the three-dimensional variable-range hopping (3D-VRH) model, which can be given as [6], \(\rho(T)=\rho_{0}exp\left[\left(\frac{T_{0}}{T}\right)^{1/4}\right]\), where \(\rho_{0}\) and T\({}_{0}\) are the exponential pre-factor and characteristic temperature, respectively. In Fig. 6(b), the best fit of the plot between \(\ln(\rho)\) and T\({}^{-1/4}\) validates the 3D-VRH model in the low-temperature range, and the dotted black lines mark where a deviation of the experimental data from the 3D-VRH model becomes visible. The slope of the linear fit is used to determine the characteristic temperature (T\({}_{0}\)), which in turn is used to calculate the density of states (DOS) near the Fermi level [N(E\({}_{\rm F}\))] using the formula N(E\({}_{\rm F}\))=18/(k\({}_{\rm B}\)T\({}_{0}\lambda^{3}\)), where \(\lambda\) is the localization length of the conduction path. Moreover, the conduction path for the VRH is taken here as (Co/Nb)-O-(Co/Nb), and its length, used in the DOS calculation, is determined from the Rietveld refinement. We show the variation of the DOS on the right-axis of Fig. 6(d) on a semi-logarithmic scale, which manifests a monotonic decrease in N(E\({}_{\rm F}\)) with \(x\). Further, to confirm the VRH model, we estimate the values of the mean hopping distance (R\({}_{\rm H}\)) and mean hopping energy (W\({}_{\rm H}\)), as given below [73],
\[R_{\rm H} = \left[\frac{9\lambda}{8\pi k_{B}TN(E_{F})}\right]^{1/4}\,nm \tag{5}\] \[W_{\rm H} = \frac{3}{4\pi R_{H}^{3}N(E_{F})}\,eV \tag{6}\]
The calculated values of R\({}_{\rm H}\) and W\({}_{\rm H}\) are found to increase with \(x\), as presented in Table III of [39]. The obtained parameters for the \(x\geq\)0.075 samples fulfill Mott's criteria for VRH conduction, \(\lambda^{-1}\)R\({}_{\rm H}\)\(>>\) 1 and W\({}_{\rm H}\)\(>>\) k\({}_{\rm B}\)T, at low temperatures below 160 K. These results manifest the magnetic dilution due to Nb\({}^{5+}\) (4d\({}^{0}\)) substitution, which leads to the reduction in the conduction pathways for the electron hopping and drives the system toward a more insulating state. We also note here that the temperature-dependent resistivity analysis includes the effect of the scattering of charge carriers at the grain boundary due to the polycrystalline nature of these samples [74; 75].
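A minimal sketch of the Arrhenius and 3D-VRH extraction described above is given below (illustrative only; the temperature windows follow the text, while the localization length \(\lambda\) is an assumed input taken from the refined (Co/Nb)-O-(Co/Nb) bond length):

```python
import numpy as np

k_B = 8.617e-5  # Boltzmann constant in eV/K

def arrhenius_Ea(T, rho, window=(200.0, 320.0)):
    """Slope of ln(rho) vs 1/T gives the activation energy E_a (eV)."""
    m = (T >= window[0]) & (T <= window[1])
    slope, _ = np.polyfit(1.0 / T[m], np.log(rho[m]), 1)
    return slope * k_B

def vrh_dos(T, rho, lam, window=(60.0, 160.0)):
    """Slope of ln(rho) vs T^(-1/4) gives T0^(1/4); N(E_F) = 18/(k_B*T0*lam^3)."""
    m = (T >= window[0]) & (T <= window[1])
    slope, _ = np.polyfit(T[m]**-0.25, np.log(rho[m]), 1)
    T0 = slope**4
    N_EF = 18.0 / (k_B * T0 * lam**3)       # states per (eV cm^3) if lam is in cm
    return T0, N_EF
```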
Finally, we investigate the electronic properties of the \(x=\) 0.1, 0.15, and 0.25 samples using core-level photoemission spectroscopy. In Figs. 7(a-c), we present the core-level La 3\(d\) spectra, which show two peaks in each spin-orbit component of 3\(d_{5/2}\) and 3\(d_{3/2}\), separated by \(\approx\)3.8 eV, due to the transfer of an electron from the oxygen valence band to the La 4\(f^{0}\) states [76; 16]. Here, the binding energy (BE) values for the main components are 850.6 eV (3\(d_{3/2}\)) and 833.8 eV (3\(d_{5/2}\)), with a separation of 16.8 eV, confirming the presence of La\({}^{3+}\) ions [76; 16]. More importantly, the spin-orbit split components 2\(p_{3/2}\) and 2\(p_{1/2}\) of the Co 2\(p\) core-levels are shown in Figs. 7(d-f). The deconvolution of Co 2\(p_{3/2}\) with two peaks corresponds to the 3+ and 4+ oxidation states, where the
Figure 7: The core-level x-ray photoemission spectra measured at room temperature and the deconvoluted components of (a–c) La 3\(d\), (d–f) Co 2\(p\), (g–i) Sr 3\(d\), (j–l) Nb 3\(d\), and (m–o) O 1\(s\) for the \(x\) = 0.1, 0.15 and 0.25 samples.
BE of the Co\({}^{3+}\) ions is 780 eV (\(2p_{3/2}\)) and 795.5 eV (\(2p_{1/2}\)), whereas the Co\({}^{4+}\) components are found at 782.0 eV (\(2p_{3/2}\)) and 797.4 eV (\(2p_{1/2}\)), both having a separation of \(\approx\)15.5 eV, in good agreement with refs. [77; 78]. There is no signature of any satellite feature at 6 eV from the main peak of the Co \(2p\) core-levels, which confirms the absence of Co\({}^{2+}\)[78] in the \(x=0.1\), 0.15 and 0.25 samples, see Figs. 7(d-f). These results also indicate that the substitution of Nb\({}^{5+}\) converts Co\({}^{4+}\) ions into Co\({}^{3+}\) ions [13; 79]. Further, we determine the average valence (\(n\)) of the Co ions from the area ratio of the deconvoluted components and find it to be 68% (Co\({}^{3+}\)) and 32% (Co\({}^{4+}\)) for the \(x=0.1\) sample, and 72% (Co\({}^{3+}\)) and 28% (Co\({}^{4+}\)) for the \(x=0.15\) sample, which corresponds to average valences of 3.32 (\(x=0.1\)) and 3.28 (\(x=0.15\)), in good agreement with the theoretically determined values of 3.34 and 3.24, respectively. For the \(x=0.25\) sample, the ratio is 88% (Co\({}^{3+}\)) and 12% (Co\({}^{4+}\)), giving an average valence of 3.1, which is slightly higher than the theoretically calculated value of 3. Further, in Figs. 7(g-i) we present the Sr \(3d\) core levels, which are deconvoluted with four components. The two strong components on the lower-BE side are observed at 132 eV (\(3d_{5/2}\)) and 133.6 eV (\(3d_{3/2}\)) with a separation of 1.6 eV, in good agreement with the values reported for the Sr\({}^{2+}\) valence state [77; 80]. The other two weaker components at around 134.0 eV (\(3d_{5/2}\)) and 135.3 eV (\(3d_{3/2}\)) are from SrCO\({}_{3}\) traces present on the surface of the sample [80]. Moreover, the Nb \(3d\) core-levels are presented in Figs. 7(j-l) for the \(x=0.1\), 0.15 and 0.25 samples, respectively, where the deconvoluted components are observed at 206 eV (\(3d_{5/2}\)) and 208.8 eV (\(3d_{3/2}\)) with a 2.8 eV separation. These values are in good agreement with refs. [6; 77] and confirm the 5+ oxidation state of Nb in these samples. The O \(1s\) core-level spectra in Figs. 7(m-o) are deconvoluted into two peaks at BE values of 529 eV and 531.1 eV, which correspond to the contributions from the lattice oxygen and surface-adsorbed oxygen, respectively [81].
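For concreteness, the quoted \(x=0.1\) values follow from the Co \(2p\) area ratio and from charge neutrality of La\({}_{0.5}^{3+}\)Sr\({}_{0.5}^{2+}\)Co\({}_{1-x}^{n}\)Nb\({}_{x}^{5+}\)O\({}_{3}^{2-}\) (a worked check of the numbers above, not additional data):

\[n_{\rm Co}^{\rm XPS}=0.68\times 3+0.32\times 4=3.32,\qquad n_{\rm calc}=\frac{3.5-5x}{1-x}=\frac{3.5-0.5}{0.9}\approx 3.34\ \ (x=0.1).\]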
## IV Conclusions
In conclusion, we have investigated the evolution of the structural, magnetic, and transport properties of bulk La\({}_{0.5}\)Sr\({}_{0.5}\)Co\({}_{1-x}\)Nb\({}_{x}\)O\({}_{3}\) (\(x=0.025\)-0.25). The Rietveld refinement of the x-ray diffraction patterns with the R3c space group reveals that the lattice parameters and the rhombohedral distortion increase monotonically with the Nb concentration. The magnetic susceptibility data manifest that the Nb substitution dilutes the double exchange interaction, and a decrease in the magnetic ordering temperature and the net magnetization is observed with \(x\). Interestingly, for the \(x>\)0.1 samples, the magnetic interactions are dominated by superexchange antiferromagnetic interactions. Moreover, the isothermal MH loops recorded at 5 K for the \(x>\)0.1 samples manifest that the coercivity (H\({}_{\rm C}\)) and remanence (M\({}_{\rm r}\)) decrease with \(x\) due to the dominance of AFM interactions and the reduction of the FM volume fraction. More interestingly, we observe a minimum in the resistivity for the \(x=0.025\) and 0.05 samples, which is analyzed using quantum corrections to the conductivity, where the weak localization effect dominates over the renormalized electron-electron interactions in the 3D limit. Moreover, a semiconducting behavior is observed for the \(x>0.05\) samples, and the resistivity increases monotonically with higher Nb\({}^{5+}\)(\(4d^{0}\)) concentration. This semiconducting resistivity behavior is analyzed with the Arrhenius model in the high-temperature region (\(\sim\)160-320 K) and with 3D variable-range hopping conduction in the low-temperature region (\(<\)160 K). Further, the core-level photoemission spectra of La \(3d\), Sr \(3d\), Co \(2p\), Nb \(3d\), and O \(1s\) confirm the valence states of the constituent elements and affirm the absence of Co\({}^{2+}\), in accordance with the magnetization results.
## V Acknowledgements
RS acknowledges DST, INSPIRE for the fellowship, and IIT Delhi for providing research facilities: XRD and PPMS at the Department of Physics, and PPMS and XPS at CRF. RS thanks Mr. Ramcharan Meena for his help in the transport measurements. RSD acknowledges SERB-DST for the financial support through a core research grant (project reference no. CRG/2020/003436).
|
2308.14111 | MARL for Decentralized Electric Vehicle Charging Coordination with V2V
Energy Exchange | Effective energy management of electric vehicle (EV) charging stations is
critical to supporting the transport sector's sustainable energy transition.
This paper addresses the EV charging coordination by considering
vehicle-to-vehicle (V2V) energy exchange as the flexibility to harness in EV
charging stations. Moreover, this paper takes into account EV user experiences,
such as charging satisfaction and fairness. We propose a Multi-Agent
Reinforcement Learning (MARL) approach to coordinate EV charging with V2V
energy exchange while considering uncertainties in the EV arrival time, energy
price, and solar energy generation. The exploration capability of MARL is
enhanced by introducing parameter noise into MARL's neural network models.
Experimental results demonstrate the superior performance and scalability of
our proposed method compared to traditional optimization baselines. The
decentralized execution of the algorithm enables it to effectively deal with
partial system faults in the charging station. | Jiarong Fan, Hao Wang, Ariel Liebman | 2023-08-27T14:06:21Z | http://arxiv.org/abs/2308.14111v1 | # MARL for Decentralized Electric Vehicle Charging Coordination with V2V Energy Exchange
###### Abstract
Effective energy management of electric vehicle (EV) charging stations is critical to supporting the transport sector's sustainable energy transition. This paper addresses the EV charging coordination by considering vehicle-to-vehicle (V2V) energy exchange as the flexibility to harness in EV charging stations. Moreover, this paper takes into account EV user experiences, such as charging satisfaction and fairness. We propose a Multi-Agent Reinforcement Learning (MARL) approach to coordinate EV charging with V2V energy exchange while considering uncertainties in the EV arrival time, energy price, and solar energy generation. The exploration capability of MARL is enhanced by introducing parameter noise into MARL's neural network models. Experimental results demonstrate the superior performance and scalability of our proposed method compared to traditional optimization baselines. The decentralized execution of the algorithm enables it to effectively deal with partial system faults in the charging station.
Electric vehicle, vehicle-to-vehicle, vehicle-to-grid, energy management, multi-agent reinforcement learning (MARL).
## I Introduction
Electric vehicles (EVs) have emerged as an effective solution to the net-zero transition of the transport sector, contributing to emission reduction and climate change mitigation. EV charging predominantly from renewable energy sources (RES) can further decarbonize the sector. Recent advances in charging technologies, e.g., vehicle-to-vehicle (V2V), have made EV charging more flexible, such that the use of intermittent RES can be increased. But integrating RES and V2V flexibility into the EV charging coordination process presents significant challenges. More specifically, the intermittent nature of RES and other exogenous uncertainties, such as EV charging behaviors and electricity prices, make EV charging optimization a challenging task. Moreover, EV coordination usually relies on communications between EVs to exchange operating information for making proper scheduling decisions. But given the time-varying environment, EV coordination through extensive communications is often impractical. Therefore, this paper is motivated to develop an effective EV charging coordination algorithm that can handle various uncertainties in the system without the need of extensive communications and information exchange between EVs.
There has been a large body of literature on EV charging coordination and energy management under uncertainties. In terms of the underlying methodology, existing studies can be classified into model-based approaches [1, 2, 3] and model-free approaches [4, 5, 6, 7, 8]. The model-based approaches rely on the precise modeling of the system and uncertainties. For example, [2] and [3] presented a centralized model-based scheduling method for EV charging coordination with V2V, and used the prediction of real-time energy prices. These methods heavily depend on the accuracy of predictions and could suffer from significant performance degradation when the prediction is less effective. Model-free approaches, mainly using deep reinforcement learning (DRL), have demonstrated substantial potential in addressing sequential decision-making problems under uncertainties. This provides a promising solution to handling uncertainties in EV charging coordination. For example, a customized actor-critic reinforcement learning algorithm was proposed in [4] to coordinate EV charging under uncertainties in EV charging behaviors.
DRL-based EV charging management is effective in handling uncertainties, but existing centralized solution methods rely on excessive communication between EVs, requiring reliable EV charging infrastructure and thus creating possible barriers to their implementation in practice. This concern is especially pertinent given the frequent occurrence of partial system faults in modern EV charging stations [9]. Hence, there is a compelling need for a new paradigm for DRL-based EV charging coordination that can obviate the necessity for excessive information exchange, consequently bolstering the system reliability. Recent research effort has been made to address this issue by introducing decentralized algorithms for EV charging coordination. For example, [6, 8, 10, 11] employed multi-agent reinforcement learning (MARL) to enable decentralized execution of EV charging decisions. But many decentralized EV charging methods still required some information exchange.
More advanced decentralized MARL algorithms have been proposed, e.g., in [11], to manage EV charging without information exchange. However, the results showed that many EV charging requests were not completed when EVs left the charging station. This problem is largely caused by the reward
signal in MARL, and decentralized EV coordination could make this problem more challenging. In addition, uncompleted EV charging requests can cause concerns to EV owners, in particular, if there is a biased pattern in completing charging requests causing fairness issues. According to [12], EV user satisfaction with the charging process and the equitable provision of charging services can significantly affect the adoption of EVs. As such, it is imperative to develop decentralized EV coordination algorithms that are capable of navigating these intricate challenges.
This paper is motivated to bridge the aforementioned research gaps by developing an effective decentralized MARL algorithm for coordinating EV charging with V2V. A group of EVs arrive and depart with random charging demands, and the system manages EV charging using local RES and grid power, with possible assistance from V2V, which enables energy exchange among EVs when needed. The system also incorporates EV user satisfaction by considering charging completion metrics while ensuring fairness among EVs. The key contributions of this paper are as follows.
* _Decentralized EV coordination via MARL:_ We develop a decentralized MARL algorithm to minimize energy costs of EVs under uncertainties in solar power generation, energy prices, and EV charging behaviors. Our algorithm is proficient in coordinating EV charging under uncertainties and enhances system reliability under system faults, such as partial EV charger failures.
* _Fairness-aware user satisfaction:_ We introduce a fairness model for charging satisfaction among EV users, which is subsequently incorporated into the design of DRL rewards. The results demonstrate the efficacy of this fairness model in optimizing the EV charging process.
* _Noisy network for better exploration in MARL training:_ We employ a noisy network as opposed to action noise, which fosters agent exploration and thereby accelerates convergence. Numerical results show that the proposed framework improves convergence during MARL training.
## II Problem Formulation
We consider an EV charging station, which serves a set of EVs \(\mathcal{I}=\{1,...,I\}\) connected to their corresponding chargers over a predetermined operational horizon \(\mathcal{T}\). The objective of the system is to minimize the aggregate energy cost of the charging station.
#### II-1 EV Charging Model
We assume that EV chargers support bidirectional energy flow, and we split the decision variable of EVs into charging power and discharging power. We let \(p_{i,t}^{\text{ch}}\) and \(p_{i,t}^{\text{disch}}\) denote charging and discharging power for EV \(i\in\mathcal{I}\) at time \(t\). The constraints associated with these decision variables are as follows
\[0\leq p_{i,t}^{\text{ch}}\leq(1-z_{i,t})\bar{P}^{\text{ch}}, \tag{1}\] \[0\leq p_{i,t}^{\text{disch}}\leq z_{i,t}\bar{P}^{\text{disch}},\] (2) \[z_{i,t}\in\{0,1\}, \tag{3}\]
where \(\bar{P}^{\text{ch}}\) and \(\bar{P}^{\text{disch}}\) are the maximum charging and discharging power of EV chargers. We use binary variable \(z_{i,t}\) to ensure that charge and discharge can not happen simultaneously.
During the EV charging process, the battery dynamic must be defined for setting battery constraints. The battery energy level \(E_{i,t}\) and battery constraints for the \(i\)-th EV at \(t\) can be expressed as
\[E_{i,t}=E_{i,t-1}+p_{i,t}^{\text{ch}}\eta_{i}^{\text{ch}}\Delta t-\frac{p_{i,t }^{\text{disch}}\Delta t}{\eta_{i}^{\text{disch}}}, \tag{4}\]
\[A_{i,t}E_{i}^{\text{min}}\leq E_{i,t}\leq A_{i,t}E_{i}^{\text{max}}, \tag{5}\]
where \(\eta_{i}^{\text{ch}}\) and \(\eta_{i}^{\text{disch}}\) denote the charging and discharging efficiencies of EV \(i\), respectively. In (5), \(E_{i}^{\text{min}}\) and \(E_{i}^{\text{max}}\) are the minimum/maximum energy levels of the \(i\)-th EV. The battery dynamics over one time step \(\Delta t\) are represented in (4), and \(A_{i,t}\) indicates the connection status of the \(i\)-th EV at \(t\). If \(A_{i,t}\) is 0, the charger is not connected to an EV, which means the battery energy level \(E_{i,t}\) is always 0. In order to satisfy constraint (5), \(p_{i,t}^{\text{ch}}\) and \(p_{i,t}^{\text{disch}}\) must equal 0 when \(A_{i,t}\) is 0.
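As a minimal illustration of the per-step dynamics in (4)-(5), the following Python sketch (variable names are illustrative, not from the paper's implementation) updates the stored energy and enforces the connection status:

```python
import numpy as np

def update_soc(E_prev, p_ch, p_disch, eta_ch, eta_disch, dt, A, E_min, E_max):
    """Return the next energy level E_t, forcing E_t = 0 when no EV is connected (A = 0)."""
    if A == 0:
        return 0.0                                   # charger idle: nothing to track
    E_next = E_prev + p_ch * eta_ch * dt - p_disch * dt / eta_disch   # Eq. (4)
    return float(np.clip(E_next, A * E_min, A * E_max))              # Eq. (5)
```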
It is crucial to consider the additional battery degradation costs associated with EV discharging, as it affects the economic feasibility of EV charging/discharging [13]. According to [14], the energy throughput equivalent method, which primarily determines the Equivalent Full Cycle (EFC) for calculating the \(i\)-th EV's cycle aging \(AGE_{i,t}^{\text{cyc}}\), is shown below
\[EFC_{i,t}=0.5\cdot\frac{\left|p_{i,t}^{\text{ch}}\eta_{i}^{\text{ch}}\Delta t-\frac{p_{i,t}^{\text{disch}}\Delta t}{\eta_{i}^{\text{disch}}}\right|}{E_{i}^{\text{cap}}}, \tag{6}\]
\[AGE_{i,t}^{\text{cyc}}=\frac{EFC_{i,t}}{L_{i}^{\text{cyc}}}, \tag{7}\]
in which \(E_{i}^{\text{cap}}\) represents the battery capacity of the \(i\)-th EV and the factor 0.5 converts the half-cycle energy throughput into equivalent full cycles. The proportion of battery aging incurred in one time step relative to the total battery cycle life \(L_{i}^{\text{cyc}}\) is computed by Equation (7), which gives the cycle aging \(AGE_{i,t}^{\text{cyc}}\) of the \(i\)-th EV.
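A corresponding sketch of the degradation bookkeeping in (6)-(7) is shown below (illustrative; the capacity, cycle life, and cost coefficient \(\kappa^{\text{bat}}\) are assumed inputs):

```python
def cycle_aging(p_ch, p_disch, eta_ch, eta_disch, dt, E_cap, L_cyc, kappa_bat):
    """Equivalent-full-cycle aging for one time step and its monetary cost."""
    throughput = abs(p_ch * eta_ch * dt - p_disch * dt / eta_disch)
    efc = 0.5 * throughput / E_cap       # Eq. (6): equivalent full cycles this step
    age = efc / L_cyc                    # Eq. (7): fraction of cycle life consumed
    return age, age * kappa_bat          # aging and the associated degradation cost
```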
#### II-2 Energy Balance
EV charging sources can be classified into three categories: grid energy (G2V), V2V energy exchange, and locally sourced solar energy. Discharging from EVs can either be directed towards the grid (V2G) or towards other EVs.
The system under consideration includes multiple charging and discharging methods, such as V2G, V2V, and solar energy. During the charging process, each charger must determine both the direction and the amount of power. Hence, we introduce decision variables including the V2V charging power \(p_{i,t}^{\text{V2Vc}}\), V2V discharging power \(p_{i,t}^{\text{V2Vd}}\), power from the \(i\)-th EV to the grid \(p_{i,t}^{\text{V2G}}\), power from the grid to the \(i\)-th EV \(p_{i,t}^{\text{G2V}}\), power from photovoltaics (PV) to the grid \(p_{t}^{\text{PVG}}\), and power from PV to the \(i\)-th EV \(p_{i,t}^{\text{PVEV}}\). Initially, the charge/discharge balance is shown as follows
\[p_{i,t}^{\text{ch}}=p_{i,t}^{\text{PVEV}}+p_{i,t}^{\text{V2Vc}}+p_{i,t}^{\text {G2V}}, \tag{8}\]
\[p_{i,t}^{\text{disch}}=p_{i,t}^{\text{V2G}}+p_{i,t}^{\text{V2Vd}}, \tag{9}\]
\[p_{i,t}^{\text{PVEV}},p_{i,t}^{\text{V2Vc}},p_{i,t}^{\text{G2V}},p_{i,t}^{\text{V2G}},p_{i,t}^{\text{V2Vd}},p_{t}^{\text{PVG}}\geq 0. \tag{10}\]
The solar energy must satisfy the PV generation constraint
\[0\leq\sum_{i\in\mathcal{I}}p_{i,t}^{\text{PVEV}}+p_{t}^{\text{ PVG}}\leq p_{t}^{\text{PVgen}}. \tag{11}\]
EVs purchasing and selling electricity from the grid must comply with the following constraints
\[0\leq\sum_{i\in\mathcal{I}}p_{i,t}^{\text{G2V}}\leq G^{\text{max}}, \tag{12}\] \[0\leq\sum_{i\in\mathcal{I}}p_{i,t}^{\text{V2G}}+p_{t}^{\text{ PVG}}\leq G^{\text{max}}, \tag{13}\]
where \(G^{\text{max}}\) is the maximum energy transmission capacity between charging station and the grid. Moreover, the V2V energy transfer must be balanced as
\[\sum_{i\in\mathcal{I}}p_{i,t}^{\text{V2Vc}}=\sum_{i\in\mathcal{I}}p_{i,t}^{\text{V2Vd}}, \tag{14}\]
in which the V2V energy transfer is split into the V2V charging power of the consumer \(p_{i,t}^{\text{V2Vc}}\) and the V2V discharging power of the producer \(p_{i,t}^{\text{V2Vd}}\).
#### II-3 User Satisfaction and Fairness Factor
To build the satisfaction-fairness model, this work uses the future average power (FAP) to represent user satisfaction, defined as the constant power an EV would need over its remaining parking time to complete its charging task. The EV charging satisfaction level is defined as follows
\[FAP_{i,t}=\frac{E_{i}^{\text{dem}}-E_{i,t}}{T_{i,t}^{r}}, \tag{15}\] \[U_{i,t}=-\rho\frac{FAP_{i,t}}{P^{\text{ch,max}}}, \tag{16}\]
where \(\rho\) denotes a constant coefficient that balances the costs and completion of charging tasks. Then, the fairness factor \(\psi_{i,t}\) can be defined as follows
\[\psi_{i,t}=|U_{i,t}-\overline{U}_{t}|, \tag{17}\]
in which the fairness is represented as the distance between current EV charging satisfaction level \(U_{i,t}\) and average satisfaction level \(\overline{U}_{t}\) of all EVs.
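The satisfaction and fairness terms in (15)-(17) reduce to a few lines, sketched below for reference (names are illustrative, not the authors' code):

```python
import numpy as np

def satisfaction(E_dem, E_now, remaining_time, p_ch_max, rho_coef):
    """Satisfaction level U of Eq. (16), built from the FAP of Eq. (15)."""
    fap = (E_dem - E_now) / remaining_time          # future average power
    return -rho_coef * fap / p_ch_max

def fairness(U_all):
    """Per-EV distance to the mean satisfaction level, Eq. (17)."""
    U_all = np.asarray(U_all, dtype=float)
    return np.abs(U_all - U_all.mean())
```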
#### II-4 Objectives and Constraints
According to the service of the charging station, this problem has three objectives: energy cost reduction, charging demand satisfaction, and fairness. The objective function can be expressed as
\[\min \sum_{t\in\mathcal{T}}\sum_{i\in\mathcal{I}}\Bigl{(}(p_{i,t}^{\text{G2V}}\kappa_{t}^{\text{buy}}-p_{i,t}^{\text{V2G}}\kappa_{t}^{\text{sell}})\Delta t\] \[+AGE_{i,t}^{\text{cyc}}\kappa^{\text{bat}}+\psi_{i,t}-U_{i,t}\Bigr{)}-\sum_{t\in\mathcal{T}}(p_{t}^{\text{PVG}}\kappa_{t}^{\text{sell}}\Delta t)\] (18) \[\text{s.t.}\ \ \text{constraints (1)-(14).}\]
a crucial aspect of artificial general intelligence [15]. The design of the reward function is intrinsically connected to the objectives of the problem at hand. In this study, the goal is to minimize the total cost and meet the charging demands of the EVs upon their departure from the station.
As such, the reward function must account for three key components: energy cost, user satisfaction and the fairness metric.
* **Cost:** The overall cost includes both the energy cost and the battery degradation cost. Agents need to cooperate to achieve the goal of reducing the total cost. As such, the reward structure assigns the average cost to each agent, which encourages all agents to work collectively towards cost reduction under fluctuating solar generation and energy prices. Following the objective function (18), the reward of the \(j\)-th agent in terms of the total cost comprises the EV energy cost \(R_{t}^{\text{energy}}\), the benefit from selling solar energy to the grid \(R_{t}^{\text{PVG}}\), and the battery degradation cost \(R_{t}^{\text{battery}}\) at time step \(t\). The average cost \(R_{j,t}^{\text{cost}}\) for the \(j\)-th agent at \(t\) can be represented as \[R_{j,t}^{\text{cost}} =-(\frac{R_{t}^{\text{energy}}-R_{t}^{\text{PVG}}+R_{t}^{\text{battery}}}{n_{t}}).\] (24) In this context, a positive cost equates to a negative reward within the RL model. Additionally, the number of active agents at time \(t\) is denoted \(n_{t}\).
* **User Satisfaction and Fairness:** For EV user experience, the reward \(R_{j,t}^{\text{user}}\) can be calculated via charging satisfaction level (16) and fairness (17) as \[R_{j,t}^{\text{user}}=U_{j,t}+\psi_{j,t}.\] (25)
Finally, the MARL must handle the multi-objective problem of minimizing the total charging cost (including the battery degradation cost) while satisfying the charging demand and maintaining fairness. We therefore use a weighted sum in the total reward calculation. The total reward can be calculated as
\[R_{j,t}=\xi R_{j,t}^{\text{cost}}+(1-\xi)R_{j,t}^{\text{user}}+R^{\text{grid}}, \tag{26}\]
where \(\xi\) is the trade-off parameter between charging demand and costs. The penalty of grid constraint violation is \(R^{\text{grid}}\).
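Putting (24)-(26) together, the per-agent reward can be assembled as in the following sketch (the grid-violation penalty and the weight \(\xi\) are assumed hyperparameters):

```python
def agent_reward(energy_cost, pv_revenue, battery_cost, n_active,
                 U_j, psi_j, xi, grid_penalty=0.0):
    """Reward of agent j for one time step, following Eqs. (24)-(26)."""
    r_cost = -(energy_cost - pv_revenue + battery_cost) / n_active   # Eq. (24)
    r_user = U_j + psi_j                                             # Eq. (25)
    return xi * r_cost + (1.0 - xi) * r_user + grid_penalty          # Eq. (26)
```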
## IV Proposed MARL Method
In this section, we present the MARL algorithm, which orchestrates EV charging to optimize the cumulative system reward. Effective information exchange among chargers is needed during the training process. We utilize the Multi-Agent Deep Deterministic Policy Gradient (MADDPG) algorithm with the Centralized Training with Decentralized Execution (CTDE) scheme [16], in which the policy is trained centrally and actions are executed independently by each agent in a decentralized manner. The adopted framework is built on an actor-critic architecture: the actor generates the EV charging action via a policy neural network, while the critic evaluates the respective actor's actions via a critic neural network.
The structure of CTDE architecture is demonstrated in the right-hand section of Fig. 1. The training process remains centralized, which must access the information from other chargers. The centralized critic utilizes the critic neural network to approximate the Q-function \(Q_{j}^{\mu}(\mathbf{s},a_{1},...,a_{n})\) for policy \(\mu\). Conversely, actors are restricted to local information only, using actor neural networks \(\mu_{j}\), parameterized by \(\theta_{j}\) to produce action under current state \(s_{j}\). In centralized training, Temporal Difference (TD) learning is used to estimate the centralized Q-function and construct TD-target \(y\). According to [16], the loss function of the critic (TD-error) can be expressed as
\[\mathscr{L}(\theta_{j}) = \mathbb{E}_{s,a,r,s^{\prime}}[(Q_{j}^{\mu}(\mathbf{s},a_{1},...,a_{n} )-y)^{2}], \tag{27}\] \[y = r_{j}+\gamma Q_{j}^{\mu^{\prime}}(\mathbf{s}^{\prime},a_{1}^{\prime },...,a_{n}^{\prime})\big{|}_{a_{k}^{*}=\mu_{k}^{*}(s_{k})}, \tag{28}\]
where the target policy network is denoted as \(\mu_{j}^{\prime}\) with delayed parameters \(\theta_{j}^{\prime}\), and the next state is represented as \(\mathbf{s}^{\prime}\). The TD target \(y\) is an estimate of the expected return, built from the most recent reward and the target critic's value of the next state. The loss function, also known as the TD error, quantifies the difference between the critic's current estimate and this bootstrapped target.
When considering the Actor (policy) networks, we aim to learn a policy that operates under a continuous action space, which can be achieved through the deterministic policy gradient. The loss function can be formally articulated as
\[\nabla_{\theta_{j}}J(\mathbf{\mu}_{j}) = \mathbb{E}_{s\sim p,a_{j}\sim D} \tag{29}\] \[[\nabla_{\theta_{j}}\mathbf{\mu}_{j}(a_{j}|s_{j})\nabla_{a_{j}}Q_{j} ^{\mu}(\mathbf{s}|a_{1},...,a_{n})\big{|}_{a_{j}=\mathbf{\mu}_{j}(s_{j})}],\]
where \(D\) is the experience replay buffer that contains the tuples \((s,s^{\prime},a_{1},...,a_{n},r_{1},...,r_{n})\) for all agents.
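For illustration, one centralized critic update implementing (27)-(28) could look like the PyTorch sketch below; the network classes, optimizers, and the per-agent batch layout are assumptions and not the exact implementation used in this work:

```python
import torch
import torch.nn.functional as F

def critic_update(critic_j, target_critic_j, target_actors, optimizer_j,
                  batch, gamma=0.99):
    """One gradient step on agent j's centralized critic.

    batch: (obs, actions, rewards, next_obs), where obs/actions/next_obs are
    lists of per-agent tensors and rewards is agent j's reward tensor.
    """
    obs, actions, rewards, next_obs = batch
    with torch.no_grad():
        next_actions = [pi(o) for pi, o in zip(target_actors, next_obs)]
        y = rewards + gamma * target_critic_j(torch.cat(next_obs, dim=-1),
                                              torch.cat(next_actions, dim=-1))
    q = critic_j(torch.cat(obs, dim=-1), torch.cat(actions, dim=-1))
    loss = F.mse_loss(q, y)              # TD error of Eq. (27)
    optimizer_j.zero_grad()
    loss.backward()
    optimizer_j.step()
    return loss.item()
```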
Also, the parameter noise is added to the neural network for better exploration. We modify the original algorithm by incorporating a noisy network [17]. This addition involves substituting the standard linear layer with a noisy linear layer, which is described as follows
\[Y=(\nu^{\theta}+\sigma^{\theta}\odot\epsilon^{\theta})X+(\nu^{b}+\sigma^{b} \odot\epsilon^{b}), \tag{30}\]
in which \(\epsilon=[\epsilon^{\theta},\epsilon^{b}]\) represents randomly sampled noise matrices exhibiting zero mean and fixed statistics. In addition, \(\nu=[\nu^{\theta},\nu^{b}]\) and \(\sigma=[\sigma^{\theta},\sigma^{b}]\) denote the neural network's
Fig. 1: The architecture of CTDE with noisy network.
learnable parameters for weights and bias. Within the noisy network framework, instead of applying an \(\epsilon\)-greedy policy, the agent is capable of acting greedily in accordance with a network utilizing noisy linear layers.
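A minimal PyTorch sketch of such a noisy linear layer (factorised Gaussian noise substituted for the standard linear layer, per Eq. (30); hyperparameters are illustrative) is:

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class NoisyLinear(nn.Module):
    """Linear layer whose weights and biases carry learnable Gaussian noise."""
    def __init__(self, in_features, out_features, sigma0=0.5):
        super().__init__()
        self.in_features, self.out_features = in_features, out_features
        bound = 1.0 / math.sqrt(in_features)
        self.mu_w = nn.Parameter(torch.empty(out_features, in_features).uniform_(-bound, bound))
        self.sigma_w = nn.Parameter(torch.full((out_features, in_features),
                                               sigma0 / math.sqrt(in_features)))
        self.mu_b = nn.Parameter(torch.empty(out_features).uniform_(-bound, bound))
        self.sigma_b = nn.Parameter(torch.full((out_features,),
                                               sigma0 / math.sqrt(in_features)))

    @staticmethod
    def _f(x):
        # factorised-noise transform f(x) = sign(x) * sqrt(|x|)
        return x.sign() * x.abs().sqrt()

    def forward(self, x):
        eps_in = self._f(torch.randn(self.in_features, device=x.device))
        eps_out = self._f(torch.randn(self.out_features, device=x.device))
        weight = self.mu_w + self.sigma_w * torch.outer(eps_out, eps_in)
        bias = self.mu_b + self.sigma_b * eps_out
        return F.linear(x, weight, bias)
```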
Upon completion of the training phase, the policy networks are able to approximate the optimal policy function. Given that the policy network is embedded within the decentralized agent, it enables the agent to make autonomous decisions during the execution process without dependency on other agents' information.
## V Performance Evaluation
To simulate EV charging processes, we use EV data from the Adaptive Charging Network (ACN) dataset [18] with fixed charging/discharging power limits set at \(16\)kW. We employ corresponding solar data to simulate the energy generation of our solar power system, whose capacity is configured at \(30\)kWp. To compute the energy cost for the charging station, EVs purchase energy from the grid following a Time-of-Use (TOU) pricing scheme and sell energy to the grid at the wholesale energy price. We use real price data from the National Electricity Market (NEM) in Australia [19]. We assume that the usage of solar energy is free for all EVs in the charging station. According to a previous study [1], the V2V price should ideally be bracketed between the selling price for the producer and the purchase price for the consumer, facilitating the participation of EVs in V2V energy exchange activities. In this study, the V2V price is assigned as the midpoint of the TOU energy price and the wholesale market energy price.
In the experiments, we consider \(20\) chargers (i.e., agents) and design a centralized model-based optimization algorithm as a baseline. This baseline method is Rolling-Horizon Optimization (RHO), which relies on predictions of solar energy generation and energy prices to determine the EV charging and discharging schedule in a look-ahead window (the optimization horizon) and executes the decision in the current time slot. The window size of the rolling horizon aligns with the longest EV parking time upon the arrival of a new EV. Moreover, in contrast to the model-based RHO baseline, we also design a model-free baseline, a centralized multi-agent deep Q-network (MADQN), for comparison with our proposed MADDPG with CTDE.
Fig. 2 provides a comparative depiction of the convergence of the MADDPG with and without incorporating a noisy network-NoisyNet. We see that MADDPG with NoisyNet converges faster. The result suggests that incorporating NoisyNet can enhance the exploration capabilities of agents within the environment. Hence, the agents become proficient in learning an optimal policy within a limited number of episodes.
The effectiveness of the CTDE framework is demonstrated in Fig. 3. The dotted orange curve in the figure exhibits the real-time solar energy generation. We see that the MADDPG enables the agent to learn the pattern of solar energy generation and utilize solar energy to reduce energy cost. This is evidenced by the alignment of the EV charging (in red solid curve) and the available solar energy. Furthermore, the red cross symbolizes the occurrence of a partial system fault, simulating failures in some chargers by replacing the information of those faulty chargers with random numbers. A partial system fault event is introduced after the 280th time step. Upon comparing the centralized and decentralized charging power, it becomes evident that the partial fault leads to instability of the remaining chargers using the centralized MADQN, whereas the MADDPG continues to function normally.
Since our system includes multiple objectives that the charging station and EV owners care about, such as energy cost, user satisfaction, and fairness, it becomes necessary to establish trade-offs in these distinct objectives. In our work, we use the charging task completion ratio to represent user satisfaction, which is the percentage of the completion across all charging tasks. Fig. 4 depicts the trade-off between the user satisfaction and energy cost. In Fig. 4, a lower position indicates better performance. The results show that the MARL algorithms outperform the model-based approach. Comparatively, MADDPG outperforms the model-based approach while exhibiting slightly compromised performance than the MADQN.
Moreover, our study incorporates considerations of fairness with respect to the user satisfaction. A fairness factor is integrated into the MARL reward mechanism. In order to quantify the level of fairness, we compute fairness metrics for a chosen set of EVs, which is the distance between the completion ratio of each selected EV and the average completion ratio. According to Fig. 5, the result suggests that the MADDPG with fairness maintains lower metrics, signifying a relatively
Fig. 3: The performance between centralized and decentralized MARL in the case of partial faults.
Fig. 2: Effect of Noisy Network on Algorithm Convergence.
smaller disparity in user satisfaction across different EVs.
The scalability of the MADDPG with NoisyNet is examined under varying scenarios, encompassing five different sizes of the charging station. These results are presented in TABLE I. With an increase in the size of the charging station (i.e., the number of chargers), the average energy cost per charger remains relatively consistent. Similarly, the average completion ratio per charger remains at a similar level.
## VI Conclusion
This paper proposed a decentralized MARL approach to coordinate EV charging with V2V and solar energy integration. The proposed algorithm seeks to minimize the total energy costs while enhancing the EV user experience in terms of charging satisfaction and fairness. Leveraging the capability of MARL to learn from EV charging environment, EV chargers, modeled as agents, make independent decisions without information exchange. This unique attribute ensures operational normalcy during instances of partial system faults. Additionally, the inclusion of a noisy network within the MARL algorithm fosters agent exploration of the environment, leading to faster convergence. Numerical studies demonstrated that the proposed algorithm exhibited scalability and superior performance compared to traditional optimization baselines. For future research, we will consider the integration of EV charging station into the grid to provide ancillary services with network constraints.
|
2307.04587 | Endotaxial Stabilization of 2D Charge Density Waves with Long-range
Order | Charge density waves are emergent quantum states that spontaneously reduce
crystal symmetry, drive metal-insulator transitions, and precede
superconductivity. In low-dimensions, distinct quantum states arise, however,
thermal fluctuations and external disorder destroy long-range order. Here we
stabilize ordered two-dimensional (2D) charge density waves through endotaxial
synthesis of confined monolayers of 1T-TaS$_2$. Specifically, an ordered
incommensurate charge density wave (oIC-CDW) is realized in 2D with
dramatically enhanced amplitude and resistivity. By enhancing CDW order, the
hexatic nature of charge density waves becomes observable. Upon heating via
in-situ TEM, the CDW continuously melts in a reversible hexatic process wherein
topological defects form in the charge density wave. From these results, new
regimes of the CDW phase diagram for 1T-TaS$_2$ are derived and consistent with
the predicted emergence of vestigial quantum order. | Suk Hyun Sung, Nishkarsh Agarwal, Ismail El Baggari, Yin Min Goh, Patrick Kezer, Noah Schnitzer, Yu Liu, Wenjian Lu, Yuping Sun, Lena F. Kourkoutis, John T. Heron, Kai Sun, Robert Hovden | 2023-07-10T14:29:00Z | http://arxiv.org/abs/2307.04587v1 | # Endotaxial Stabilization of 2D Charge Density Waves with Long-range Order
###### Abstract
Charge density waves are emergent quantum states that spontaneously reduce crystal symmetry, drive metal-insulator transitions, and precede superconductivity. In low-dimensions, distinct quantum states arise, however, thermal fluctuations and external disorder destroy long-range order. Here we stabilize ordered two-dimensional (2D) charge density waves through endotaxial synthesis of confined monolayers of 1T-TaS\({}_{2}\). Specifically, an ordered incommensurate charge density wave (oIC-CDW) is realized in 2D with dramatically enhanced amplitude and resistivity. By enhancing CDW order, the hexatic nature of charge density waves becomes observable. Upon heating via in-situ TEM, the CDW continuously melts in a reversible hexatic process wherein topological defects form in the charge density wave. From these results, new regimes of the CDW phase diagram for 1T-TaS\({}_{2}\) are derived and consistent with the predicted emergence of vestigial quantum order.
Some exotic crystals spontaneously reorganize their valence electrons into periodic structures known as charge density waves (CDWs). In essence, two crystals emerge--the underlying atomic lattice and the emergent charge lattice. Just like atomic crystals, a charge density wave has defects: dislocations, disclinations, and elastic deformation [1, 2, 3]. Furthermore, the charge density wave can undergo phase transitions wherein the charge lattice unit cell changes shape and size. All of this CDW reshaping and topological restructuring occurs even when the underlying atomic lattice remains unchanged.
In low dimensions, these quantum phase transitions are promising candidates for novel devices [4, 5, 6, 7], efficient ultrafast non-volatile switching [8, 9, 10], and suggest elusive chiral superconductivity [11, 12, 13]. Unfortunately, 2D CDWs are inherently unstable and accessing low-dimensional CDWs remains a challenge [14, 15, 16]. Even worse, at elevated temperatures where devices typically operate, disruption of charge density waves is all but guaranteed due to ever-present disorder [17, 18, 19]. A long-range ordered incommensurate CDW has yet to be reported.
Here we stabilize ordered incommensurate charge density waves (oIC-CDW) at elevated temperatures (T\({}_{\rm IC}\) = 350 K) in two dimensions by endotaxial synthesis of TaS\({}_{2}\) polytype heterostructures. The charge density wave shows an estimated hundred-fold amplitude enhancement and a coherence length comparable to that of the underlying atomic crystal. The enhanced order of the oIC-CDW increases the electronic resistivity. This substantial enhancement of charge order is achieved through encapsulation of an isolated octahedral TaS\({}_{2}\) CDW layer within a matrix of prismatic TaS\({}_{2}\) metallic layers via 2D endotaxial synthesis.
Realizing the ordered incommensurate CDW reveals CDWs have hexatic structure at high-temperature--that is, long-range translational symmetry is limited by proliferation of topological defects (i.e., dislocations and disclinations) in CDWs. We show at high-temperatures, the CDWs in TaS\({}_{2}\) continuously melt as additional dislocations and disclinations form in the charge lattice. This hexatic CDW melting process was not previously observable since the incommensurate CDW normally emerges as a highly-disordered, melted state. By restoring order through 2D endotaxy, we can reversibly melt and unmelt CDWs in TaS\({}_{2}\). Based on these results, we access new regimes of the CDW phase diagram for octahedrally coordinated TaS\({}_{2}\) in temperature vs disorder space. Similar vestigial ordering (i.e., hexaticity) was predicted by Nie, Tarjus and Kivelson [18]; however, with 2D endotaxy we can now tune down the disorder in the CDW phase diagram.
## The Ordered Incommensurate Charge Density Wave
The ordered incommensurate CDW (oIC) reported herein (Fig. 1a-d) is strikingly distinct from the well-known incommensurate (IC) CDW (Fig. 1e-h) found in 1T-TaS\({}_{2}\) or 1T-TaSe\({}_{2}\). Here, the oIC phase is a truly two-dimensional (2D) CDW with long-range positional and orientational order that couples strongly with the underlying crystal lattice (Fig. 1a). The oIC-CDW, illustrated in Figure 1b, is a crystalline charge-lattice with well-defined, sharp peaks in Fourier space (Fig. 1b-inset). This CDW charge-lattice (a\({}_{\rm CDW}\) = 11.87 nm) exists within an underlying atomic lattice illustrated in Figure 1c.
Electron-lattice interaction is an essential aspect of CDWs, and associated soft-phonon modes manifest as static periodic lattice distortions (PLDs) that reduce crystal symmetry and lower the electronic energy [20, 21]. For TaS\({}_{2}\), the CDW pulls atoms toward the nearest charge maximum to form periodic clusters of atoms (Fig. 1c). Notably for incommensurate charge ordering, each cluster is distinct since the atomic lattice is not commensurate with the CDW. While these lattice distortions are small (\(<\)10 pm), selected area electron diffraction (SAED) is sensitive to subtle picoscale distortions, making it a popular choice for characterization of CDW/PLDs [22]. CDW/PLDs diffract incident swift electrons into distinct superlattice peaks decorating each Bragg peak [2, 23, 24, 25]. In reciprocal space, the CDW charge lattice (Fig. 1b-inset) and the measurable atomic superlattice peaks (Fig. 1c-inset) have corresponding spacing, symmetry, and intensity.
Diffracted superlattice peaks provide a direct measure of the CDW lattice and contain rich information on its order-disorder. Specifically, diffraction represents an ensemble average of the structure over the selected area, and disorder manifests as diffuse diffraction peaks [26, 27]. Disorder of CDWs smears superlattice peaks but leaves the principal Bragg peaks unaffected (Fig. 1g-inset). For oIC-CDWs, the charge lattice is ordered with limited defects, thus diffraction shows both sharp superlattice and Bragg peaks (Fig. 1c-inset). In contrast, the well-known IC-CDW in 1T-TaS\({}_{2}\) possesses significant disorder of its charge distribution. Across decades, the IC phase in 1T-TaS\({}_{2}\) has been reported with ring-like, azimuthally diffuse diffraction around each Bragg peak [23, 28, 29, 30], yet the origin of the diffuse superlattice peaks is hardly discussed [31, 32].
Here we present the well-known IC-CDW in bulk 1T-TaS\({}_{2}\) as a hexatically disordered charge lattice containing dislocations and disclinations (Fig. 1f). In-situ SAED of 1T-TaS\({}_{2}\) taken at 408 K (Fig. 2a) shows azimuthally blurred first order superlattice peaks (marked brown). Averaging all six third order Bragg peaks (inset, \(\Gamma_{3}\)) better highlights this point. Notably, hexatic phases are known to have six-fold rotationally symmetric, azimuthally diffused peaks [33]. The experimental diffraction of IC-CDWs are consistent with a hexatic charge distribution (Fig. 1f) [27, 32, 33, 34] and corresponding azimuthally diffuse structure factor (Fig. 1f, g-inset). The IC-CDWs are three-dimensional (or quasi-2D) with non-negligible out-of-plane interactions (Fig. 1e-h).
In contrast, the oIC-CDW shows drastically sharper and stronger superlattice peaks measured by in-situ SAED at 408 K (Fig. 2b). Sharpening is especially highlighted in the averaged third order Bragg peaks (\(\Gamma_{3}\)). The measured superlattice peaks of the oIC-CDW are sharper both in the azimuthal (by \(\sim\)60%) and radial (by \(\sim\)50%) directions when compared to the IC-CDW. Notably, the superlattice peak widths of the oIC phase are comparable to the peak widths of the principal Bragg peaks. Therefore, the oIC is a spatially coherent electronic crystal.
Fig. 1: **Long-range Ordered Incommensurate Charge Density Waves.** a) Schematic representation of ordered IC-CDW. The CDW is two-dimensional with little disorder. b) Ordered IC-CDW illustrated as a crystalline charge-density lattice. Inset) Fourier transform of the charge lattice shows well defined peaks. c) Associated periodic lattice distortions (PLDs) move tantalum nuclei along the charge density gradient. Inset) Simulated diffraction shows sharp superlattice peaks decorating Bragg peaks. d) Schematic representation of ordered IC-CDW in an endotaxial polytype heterostructure. Mono- or few layers of endotaxially protected Oc-TaS\({}_{2}\) host 2D ordered IC-CDWs. e) Schematic representation of hexatic IC-CDW. The CDW phase is quasi-2D with non-trivial interlayer interactions, and hexatically disordered. f) Charge density distribution is comparable to a hexatically disordered crystal lattice. Inset) Structure factor reveals azimuthally diffused peaks—characteristics of hexatic phases. g) Associated lattice distortion of IC-CDW with (inset) Fourier transform showing azimuthally blurred superlattice peaks while maintaining sharp Bragg peaks. h) Schematic representation of hexatic IC-CDW in bulk 1T-TaS\({}_{2}\) where every layer hosts a disordered IC-CDW.
The oIC-CDW, a 2D charge ordered state, is enhanced by at least one-hundred fold over previously reported bulk IC-CDWs. Diffracted superlattice peaks in oIC-CDWs have an integrated intensity over ten times stronger, even though the number of charge ordered TaS\({}_{2}\) layers has been reduced to less than 10% of the material. Thus, endotaxial engineering improves not only the long-range order but also the charge order amplitude of the IC-CDW. The correlation of long-range order and CDW enhancement is measured directly via hexatic CDW melting later in this manuscript.
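As an illustration of how such peak comparisons can be quantified, the following sketch resamples the diffraction intensity around a Bragg peak onto polar coordinates and extracts the integrated intensity of the surrounding superlattice ring together with azimuthal and radial full widths at half maximum. The grid sizes, window parameters, and function names are assumptions for illustration, not the authors' analysis pipeline.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def polar_cut(image, center, r_max, n_r=64, n_theta=180):
    """Resample a diffraction pattern around `center` (row, col) onto an (r, theta) grid."""
    r = np.linspace(0.0, r_max, n_r)
    theta = np.linspace(0.0, 2 * np.pi, n_theta, endpoint=False)
    R, T = np.meshgrid(r, theta, indexing="ij")
    rows = center[0] + R * np.sin(T)
    cols = center[1] + R * np.cos(T)
    return map_coordinates(image, [rows, cols], order=1), r, theta

def fwhm(x, profile):
    """Simple full width at half maximum of a 1D peak profile."""
    half = profile.max() / 2.0
    above = x[profile >= half]
    return float(above.max() - above.min()) if above.size else 0.0

def peak_metrics(image, bragg_xy, r_peak, r_window=4.0):
    """Azimuthal and radial widths of a superlattice peak sitting a distance
    `r_peak` from the Bragg peak at `bragg_xy`, plus its integrated intensity."""
    patch, r, theta = polar_cut(image, bragg_xy, r_peak + r_window)
    ring = patch[np.abs(r - r_peak) <= r_window, :]        # rows near the CDW ring
    azimuthal = ring.mean(axis=0)                           # intensity vs theta
    j0 = int(azimuthal.argmax())                            # azimuthal sector of the peak
    radial = patch[:, max(j0 - 5, 0):j0 + 5].mean(axis=1)   # intensity vs r near the peak
    return {
        "integrated_intensity": float(ring.sum()),
        "azimuthal_fwhm_rad": fwhm(theta, azimuthal),
        "radial_fwhm": fwhm(r, radial),
    }
```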
## Endotaxial Polytype Heterostructure of TaS\({}_{2}\)
The oIC-CDW phase reported herein is stabilized by synthesizing endotaxial polytype heterostructures of TaS\({}_{2}\), where oIC-CDWs reside in monolayers of octahedrally coordinated (Oc-) TaS\({}_{2}\) embedded within a prismatic (Pr-) TaS\({}_{2}\) matrix with one-to-one atomic registry (Fig. 2e). Endotaxial polytype heterostructures are synthesized by heating 1T-TaS\({}_{2}\) at \(\sim\)720 K for 15-30 min in an inert environment. Notably, 1T-TaS\({}_{2}\) is metastable and goes through an Oc-to-Pr endotaxial layer-by-layer polytype transformation upon heating (\(\gtrsim\) 620 K). In-situ SAEDs (Fig. 2c i-iv) were acquired at 20 second intervals at 408 K through the high temperature conversion process (723 K). These snapshots reveal sharpening of superlattice peaks--a clear indicator of enhanced CDW order. Cooling the sample mid-transition stops the conversion and an interleaved polytype heterostructure is synthesized--confirmed by cross-sectional ADF-STEM.
Figures 2d and e show atomic resolution micrographs of bulk 1T endotaxially converted to a polytype heterostructure. The atomic resolution images demonstrate endotaxial monolayer encapsulation of Oc-TaS\({}_{2}\) (Fig. 2e, highlighted red) in Pr-layers. The Pr-TaS\({}_{2}\) layers (bulk: 2H, 3R) are metallic above \(\sim\)100 K. Previous work showed these metallic layers decouple CDWs out-of-plane and raise the critical temperature for commensurate quantum states (i.e., C-CDW) from \(\sim\)200 K to \(\sim\)350 K [35].
Surprisingly, the endotaxial polytype heterostructure stabilizes long-range order in IC-CDWs at elevated (\(\gtrsim\) 350 K) temperatures. The oIC-CDW phase has a correlation length comparable to that of the crystal lattice, quantified by comparing widths of both superlattice and Bragg peaks from in-situ selected area electron diffraction patterns (SA aperture: 850 nm diameter). This indicates the CDW is relatively ordered (i.e., spatially coherent) over distances comparable to the parent atomic crystal (\(\sim\)10\({}^{2}\) nm).
This enhancement of long-range CDW order is accompanied by a marked increase of the in-plane resistivity of the IC phase (Fig. 2f). Figure 2f shows temperature vs in-plane resistivity measurements of the 1T (brown) and endotaxial (red) specimens. The resistivity of endotaxial TaS\({}_{2}\) is higher in the IC-CDW phase (\(>\)358 K), despite the many metallic layers introduced into the system. This implies that oIC-CDWs have a much higher resistivity than the hexatic IC phase in 1T-TaS\({}_{2}\).
Fig. 2: **Endotaxial polytype heterostructure of TaS\({}_{2}\).** a) In bulk TaS\({}_{2}\), an IC-CDW phase emerges above 350 K, with azimuthally diffused superlattice peaks characteristic of hexatic disorder. b) The oIC-CDW in the endotaxial polytype heterostructure has enhanced long-range order and amplitude. Superlattice peaks are well-defined, sharper and brighter. c) Evolution of the IC-CDW during the endotaxial synthesis. Atomic resolution cross-sectional HAADF-STEM of d) bulk and e) heat-treated TaS\({}_{2}\) confirms the polytype transformation. After treatment, Pr layers encapsulate monolayers of Oc layers. Scale bar is 2 mm. A selenium doped sample was imaged to enhance chalcogen visibility. f) Resistivity vs temperature measurement of bulk (brown) and thermally-treated (red) TaS\({}_{2}\) shows a marked increase in resistivity in IC-CDW phases. In the pristine sample the IC-CDW gives way to the nearly commensurate (NC-) CDW around 350 K. In the polytype heterostructure, a twinned commensurate (tC-) CDW emerges in a similar temperature range.
## Hexatic Melting of IC-CDWs
Creating the oIC-CDW provides an ordered charge lattice that can be hexatically melted upon further heating. Hexatic melting is a uniquely 2D process wherein a crystal melts in two stages through the creation of dislocations and disclinations [34, 36, 37, 38, 39]. During this process the reciprocal space structure continuously evolves. Initially, at lower temperatures (ca. 350 K), the oIC phase is an ordered charge crystal with well-defined peaks in reciprocal space (Fig. 3c). As temperature rises, the CDW peaks continuously blur azimuthally as the density of dislocations and disclinations increases (Fig. 3d, e). Azimuthal blurring of the reciprocal lattice is characteristic of hexatic phases and reflects the loss of translational symmetry while maintaining some orientational order [33]. Eventually, at higher temperatures (ca. 570 K), the hexatic crystal completely dissociates into an amorphous liquid state with a ring-like structure factor. Figures 3c-e are generated using a phenomenological Monte Carlo simulation wherein displacements of the CDW charge centers follow a temperature dependent Maxwell-Boltzmann probability distribution (see Methods). Here, the incommensurate CDW hexatically melts while the underlying atomic lattice remains unchanged--in diffraction this corresponds to a blurring of CDW superlattice peaks and preservation of Bragg peaks.
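A minimal sketch of this kind of phenomenological simulation is shown below: charge centers on a triangular lattice are displaced by Gaussian-component random vectors (so the displacement magnitude follows a 2D Maxwell-Boltzmann distribution) whose variance grows with temperature, and the structure factor of the displaced charge lattice is then computed. This is only a schematic of the ingredients; the lattice size, coupling constant, and temperatures are illustrative assumptions, not the model parameters in the paper's Methods.

```python
import numpy as np

def hexagonal_lattice(n_cells=24, a=1.0):
    """Positions of an n_cells x n_cells triangular lattice of CDW charge centers."""
    a1 = a * np.array([1.0, 0.0])
    a2 = a * np.array([0.5, np.sqrt(3.0) / 2.0])
    i, j = np.meshgrid(np.arange(n_cells), np.arange(n_cells), indexing="ij")
    return np.stack([i, j], axis=-1).reshape(-1, 2) @ np.stack([a1, a2])

def thermally_displace(positions, temperature, rng, coupling=2e-4):
    """Gaussian displacement of each charge center; the displacement magnitude then
    follows a 2D Maxwell-Boltzmann distribution with variance proportional to T."""
    sigma = np.sqrt(coupling * temperature)
    return positions + rng.normal(scale=sigma, size=positions.shape)

def structure_factor(positions, q_max=8.0, n_q=96):
    """|S(q)|^2 of the displaced charge lattice on a square grid of scattering vectors."""
    q = np.linspace(-q_max, q_max, n_q)
    QX, QY = np.meshgrid(q, q)
    S = np.zeros_like(QX, dtype=complex)
    for x, y in positions:                      # accumulate one scatterer at a time
        S += np.exp(1j * (QX * x + QY * y))
    return np.abs(S) ** 2 / len(positions)

rng = np.random.default_rng(1)
lattice = hexagonal_lattice()
patterns = {T: structure_factor(thermally_displace(lattice, T, rng))
            for T in (350, 460, 570)}           # kelvin; inspect peak shapes vs T
```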
During the hexatic melting of oIC-CDWs, superlattice peaks increasingly blur as temperature is raised--clearly visible in in-situ SAED at Fig. 3a-i) 473 K, Fig. 3a-ii) 523 K, and Fig. 3a-iii) 573 K. The blurring is anisotropic and more prominent along azimuthal directions as expected for hexatic phases. The CDW peaks are quantified throughout the melting process in Figure 3b. Azimuthal peak width (Fig. 3b, blue-triangles) increases continuously with temperature; roughly doubling when raised from 410 K to 570 K. Around 520 K the oIC has melted into a state that resembles the well-known IC-CDW for bulk TaS\({}_{2}\). This CDW melting process is reversible and peaks sharpen when temperature is decreased. Notably, Bragg peaks do not show appreciable changes indicating only the electronic crystal is melting, not the TaS\({}_{2}\) atomic crystal.
Although the CDW melting process appears hexatic, it is distinct from familiar liquid crystals, silica spheres, or atomic crystals wherein the amplitude of the order parameter does not change. Here, quantitative analysis of the superlattice peak intensities (Fig. 3a-red) reveals the charge density wave amplitude decreases with temperature. This is expected as topological defects in CDWs (dislocations and disclinations) have locally divergent strain with elastic energy cost that forces a local amplitude collapse. These local CDW amplitude collapses have been observed at the center of topological defects in the 3D charge ordering of manganites [1].
## The CDW Phase Diagram for Octahedral TaS\({}_{2}\)
Endotaxial synthesis of octahedrally coordinated TaS\({}_{2}\) allows access to new phases of matter and construction of a phase diagram for CDWs using temperature (T) and disorder (\(\sigma\)). The CDW phase diagram for 1T-TaS\({}_{2}\) is shown in Figure 4. 1T-TaS\({}_{2}\) exists with native disorder, and the ordered, commensurate phase (C-CDW, Fig. 4g) is only observed at low temperatures. At room temperature, the CDW is a partially-ordered NC phase (Fig. 4f) that enters the hexatic IC phase upon heating (Fig. 4e). At high temperatures or high disorder, CDWs degrade or vanish. The high disorder regime was historically achieved by substituting tantalum ions with other metal species (e.g. Ti, Nb) or by forcing intercalates within the van der Waals gap [23]. At room temperature, mild substitution of titanium (1T-Ta\({}_{0.7}\)Ti\({}_{0.3}\)S\({}_{2}\)) drives the system into hexatic-IC CDW states (Fig. 4h), and as more titanium is substituted (1T-Ta\({}_{0.3}\)Ti\({}_{0.7}\)S\({}_{2}\)) the CDW vanishes completely (Fig. 4i).
Fig. 3: **Hexatic Melting of IC-CDWs.** a) Averaged in-situ SAED patterns showing oIC-CDW superlattice peaks in the endotaxial heterostructure.
The low disorder regime, now accessible by endotaxial engineering, provides room temperature ordered C-CDWs and a novel ordered IC-CDW at higher temperatures. Notably, with low disorder, the C to IC transition is direct and the NC phase does not appear. The IC phase is ordered, but the CDW can be continuously melted into a disordered hexatic-IC phase (as described in Figure 3). The boundaries of the CDW phase diagram are drawn consistently with hexatic melting of 2D colloidal particles under temperature and disorder [41] as well as nematic CDWs [18, 19, 42].
Notably, CDWs in endotaxial TaS\({}_{2}\) are two dimensional, and the oIC phase has enhanced order despite the 3D to 2D dimensionality reduction. In bulk 1T-TaS\({}_{2}\), CDWs are quasi-2D with non-negligible out-of-plane interactions (Fig. 1h) [43, 44, 45, 46]. Formation of endotaxial polytype heterostructures disrupts the out-of-plane interactions, and the CDWs reside in a protected 2D environment [35]. Stabilization of an ordered IC-CDW in 2D seemingly contradicts the Hohenberg-Mermin-Wagner theorem [14, 15] and the Imry-Ma argument [17], which state that spontaneous breaking of a continuous symmetry (e.g. IC-CDWs) is unstable at non-zero temperatures in 2D. While both principles do not forbid intermediate phases with short-range order, 2D CDWs should nonetheless be more fragile to disorder [18]. An ordered IC phase can only emerge in ultra-clean environments. Here, endotaxial synthesis protects CDW states by strain-free encapsulation in a chemically identical environment of metallic layers that shield disorder.
## Conclusion
In summary, we demonstrate that endotaxial synthesis of clean interleaved polytypic heterostructures can stabilize fragile quantum phases such as ordered CDWs even at high temperatures. Here, we stabilize and enhance 2D charge density waves (both long-range order and amplitude) in an endotaxially confined monolayer of 1T-TaS\({}_{2}\). Surprisingly, the low-dimensional symmetry breaking of an ordered incommensurate CDW (oIC-CDW) appears, suggesting the quantum states reside within minimal extrinsic disorder. By enhancing CDW order, the hexatic nature of IC-CDWs is revealed. Experimental observation matches advanced simulation of electron diffraction of charge lattices to provide the real-space evolution of 2D CDW melting. Heating the oIC-CDW above 400 K via in-situ TEM, we see a reversible hexatic melting process in which disclinations and dislocations destroy the long-range translational symmetry of the CDW while maintaining its orientational order. The CDW melts well before the underlying atomic crystal changes. In 2D, CDWs are expected to manifest through vestigial electronic hexaticity--a weak CDW with substantial defects and short-range order. The nature of vestigial phases in CDWs remains poorly understood with little direct evidence. From these results, a CDW phase diagram for 1T-TaS\({}_{2}\) is created and is consistent with the predicted emergence of vestigial quantum order.
Fig. 4: **Phase Diagram of Octahedrally Coordinated TaS\({}_{2}\).** a) Schematic temperature vs disorder phase diagram of octahedrally coordinated TaS\({}_{2}\). As extrinsic disorder (\(\sigma\)) decreases, more ordered CDW phases are stabilized. At room temperature, polytype heterostructures with low disorder stabilize the C-CDW (d) instead of the NC-CDW (f), and the long-range ordered IC-CDW (c) phase instead of the hexatically disordered IC-CDW (e). Furthermore, it stabilizes CDWs (b) at higher temperatures than bulk 1T-TaS\({}_{2}\) can (T\({}_{\text{CDW}}\)\(\leq\) 540 K [40]). Substitutional disorder, on the other hand, destroys long-range order: the hexatic IC-CDW is stable at room temperature (h) and eventually leads to complete destruction of the CDW (i). b–i) Electron diffraction patterns showing superlattice peaks around a single Bragg peak reveal the charge ordering states. h, i) are adapted from Wilson et al. [23].
|
2303.16474 | Orbit spaces of free involutions on the product of three spheres | In this paper, we have determined the orbit spaces of free involutions on a
finitistic space having mod 2 cohomology of the product of three spheres
$\mathbb{S}^n\times \mathbb{S}^m \times \mathbb{S}^l, 1 \leq n \leq m \leq l$.
This paper generalizes the results proved by Dotzel et al. [6] for free
involutions on the product of two sphere $\mathbb{S}^n \times
\mathbb{S}^m,1\leq n\leq m.$ | Dimpi, Hemant Kumar Singh | 2023-03-29T06:09:49Z | http://arxiv.org/abs/2303.16474v1 | # Orbit spaces of free involutions on the product of three spheres
###### Abstract.
In this paper, we have determined the orbit spaces of free involutions on a finitistic space having mod \(2\) cohomology of the product of three spheres \(\mathbb{S}^{n}\times\mathbb{S}^{m}\times\mathbb{S}^{l},1\leq n\leq m\leq l.\) This paper generalizes the results proved by Dotzel et al. [6] for free involutions on the product of two sphere \(\mathbb{S}^{n}\times\mathbb{S}^{m},1\leq n\leq m.\) As an application, we have also derived the Borsuk-Ulam type results.
Key words and phrases: Free action; Finitistic space; Leray-Serre spectral sequence; Orbit spaces.
2020 Mathematics Subject Classification: Primary 57S17; Secondary 57S25.
The first author of the paper is supported by SRF of UGC, New Delhi, with reference no.: 201610039267.
three dimensional lens spaces. Recently, Dey et al. [7], Morita et al. [11], and Singh [13] discussed the orbit spaces of free involutions on real Milnor manifolds, Dold manifolds, and the product of two projective spaces, respectively.
Continuing this thread of research, this paper is concerned with the study of the orbit spaces of free involutions on a finitistic space having mod 2 cohomology of the product of three spheres \(\mathbb{S}^{n}\times\mathbb{S}^{m}\times\mathbb{S}^{l}\), \(n\leq m\leq l\). For example: (1) The complex Stiefel manifold \(V_{n,n-3}\) has integral cohomology of the product of three spheres \(\mathbb{S}^{2n-9}\times\mathbb{S}^{2n-7}\times\mathbb{S}^{2n-5}\), for all \(n\geq 5\)[1], which admits a free involution defined by \((v_{1},v_{2},\cdots,v_{n-3})\mapsto(gv_{1},gv_{2},\cdots,gv_{n-3})\), where \(v_{i}^{\prime}s,1\leq i\leq n-3\), are orthonormal vectors in \(\mathbb{C}^{n}\) and \(g\in\mathbb{Z}_{2}\), and (2) The product of special unitary 3-group and a sphere \(SU(3)\times\mathbb{S}^{l}\) and unitary 2-group and a sphere \(U(2)\times\mathbb{S}^{l}\) has integral cohomology of the product of three spheres \(\mathbb{S}^{3}\times\mathbb{S}^{5}\times\mathbb{S}^{l}\) and \(\mathbb{S}^{1}\times\mathbb{S}^{3}\times\mathbb{S}^{l}\), respectively [2]. Consider, diagonal actions on \(SU(3)\times\mathbb{S}^{l}\) and \(U(2)\times\mathbb{S}^{l}\) obtained by taking trivial actions of \(G=\mathbb{Z}_{2}\) on \(SU(3)\) and \(U(2)\), and the antipodal action on \(\mathbb{S}^{l}\). This gives free involutions on \(SU(3)\times\mathbb{S}^{l}\) and \(U(2)\times\mathbb{S}^{l}\), respectively. This paper generalizes the results proved by Dotzel et al. [6] for free involutions on the product of two sphere \(\mathbb{S}^{n}\times\mathbb{S}^{m},1\leq n\leq m\). As an application, we have also determined the Borsuk-Ulam type results.
## 2. Preliminaries
In this section, we review some basic definitions and results that are used in this paper. A paracompact Hausdorff space is called finitistic if every open covering has a finite dimensional open refinement. This is the most suitable topological space for the study of the relationship between the cohomology structure of the total space and that of the orbit space of a transformation group. It includes all compact Hausdorff spaces and paracompact spaces with finite covering dimension. Let \(G\) be a finite cyclic group acting on a finitistic space \(X\). The associated Borel fibration is \(X\stackrel{{ i}}{{\hookrightarrow}}X_{G}\stackrel{{ \pi}}{{\rightarrow}}B_{G}\), where \(X_{G}=(X\times E_{G})/G\) (the Borel space) is obtained by the diagonal action of \(G\) on the space \(X\times E_{G}\), and \(B_{G}\) (the classifying space) is the orbit space of the free action of \(G\) on the contractible space \(E_{G}\). We recall some results on the Leray-Serre spectral sequence associated with the Borel fibration \(X\stackrel{{ i}}{{\hookrightarrow}}X_{G}\stackrel{{ \pi}}{{\rightarrow}}B_{G}\). For proofs, we refer to [3, 10].
**Proposition 2.1**.: ([10]) Suppose that the system of local coefficients on \(B_{G}\) is simple. Then, the homomorphisms \(i^{*}:H^{*}(X_{G})\to H^{*}(X)\) and \(\pi^{*}:H^{*}(B_{G})\to H^{*}(X_{G})\) are the edge homomorphisms,
\[H^{k}(B_{G})=E_{2}^{k,0}\to E_{3}^{k,0}\to\cdots\to E_{k}^{k,0}\to E_{k+1}^{k,0}=E_{\infty}^{k,0}\subset H^{k}(X_{G}),\,\text{and}\] \[H^{i}(X_{G})\to E_{\infty}^{0,i}\hookrightarrow E_{i+1}^{0,i}\hookrightarrow E_{i}^{0,i}\hookrightarrow\cdots\hookrightarrow E_{2}^{0,i}\hookrightarrow H^{i}(X),\,\text{respectively}.\]
**Proposition 2.2**.: ([10]) Let \(G=\mathbb{Z}_{2}\) act on a finitistic space \(X\) and \(\pi_{1}(B_{G})\) acts trivially on \(H^{*}(X).\) Then, the system of local coefficients on \(B_{G}\) is simple and
\[E_{2}^{k,i}=H^{k}(B_{G})\otimes H^{i}(X),\ k,i\geq 0.\]
**Proposition 2.3**.: ([3]) Let \(G=\mathbb{Z}_{2}\) act on a finitistic space \(X\) and \(\pi_{1}(B_{G})\) acts nontrivially on \(H^{*}(X).\) Then, the \(E_{2}\) term of the Leray-Serre spectral sequence of the fibration \(X\overset{i}{\hookrightarrow}X_{G}\overset{\pi}{\rightarrow}B_{G}\) is given by
\[E_{2}^{k,i}=\begin{cases}\ker\ \tau&\text{for}\ k=0,\\ \ker\ \tau/\text{im}\ \sigma&\text{for}\ k>0,\end{cases}\]
where \(\tau=\sigma=1+g^{*},\)\(g^{*}\) is induced by a generator \(g\) of \(G.\)
**Proposition 2.4**.: ([10]) Let \(G=\mathbb{Z}_{2}\) act freely on a finitistic space \(X.\) Then, the Borel space \(X_{G}\) is homotopy equivalent to the orbit space \(X/G.\)
**Proposition 2.5**.: ([3]) Let \(G=\mathbb{Z}_{2}\) act freely on a finitistic space \(X.\) If \(H^{i}(X;\mathbb{Z}_{2})=0\ \forall\ i>n,\) then \(H^{i}(X/G;\mathbb{Z}_{2})=0\ \forall\ i>n.\)
**Proposition 2.6**.: ([3]) Let \(G=\mathbb{Z}_{2}\) act on a finitistic space \(X.\) Suppose that \(H^{i}(X;\mathbb{Z}_{2})=0\) for all \(i>2n\) and \(H^{2n}(X;\mathbb{Z}_{2})=\mathbb{Z}_{2}.\) If there exists an element \(a\in H^{n}(X;\mathbb{Z}_{2})\) such that \(ag^{*}(a)\neq 0,\) where \(g\) be a generator of \(G,\) then fixed point set is nonempty.
**Proposition 2.7**.: ([3]) Let \(G=\mathbb{Z}_{2}\) act on a finitistic space \(X.\) Then, for any \(a\in H^{n}(X),\) the element \(ag^{*}(a)\) is permanent cocycle in spectral sequence of \(X_{G}\to B_{G}.\)
Recall that \(H^{*}(\mathbb{S}^{n}\times\mathbb{S}^{m}\times\mathbb{S}^{l};\mathbb{Z}_{2}) =\mathbb{Z}_{2}[a,b,c]/<a^{2},b^{2},c^{2}>,\) where \(\deg\ a=n,\)\(\deg\ b=m\) and \(\deg\ c=l,\ 1\leq n\leq m\leq l.\)
Throughout the paper, \(H^{*}(X)\) will denote the Cech cohomology of a space \(X\) with coefficient group \(G=\mathbb{Z}_{2},\) and \(X\sim_{2}Y,\) means \(H^{*}(X;\mathbb{Z}_{2})\cong H^{*}(Y;\mathbb{Z}_{2}).\)
## 3. The Cohomology Algebra of The Orbit Space of Free involutions on \(\mathbb{S}^{n}\times\mathbb{S}^{m}\times\mathbb{S}^{l}\)
In this section, we determine the cohomology ring of the orbit space of free involutions on finitistic spaces \(X\) whose cohomology ring is isomorphic to that of the product of three spheres, \(X\sim_{2}\mathbb{S}^{n}\times\mathbb{S}^{m}\times\mathbb{S}^{l},\) where \(n\leq m\leq l.\) We first prove the following lemma.
**Lemma 3.1**.: Let \(G=\mathbb{Z}_{2}\) act freely on a finitistic space \(X\sim_{2}\mathbb{S}^{n}\times\mathbb{S}^{m}\times\mathbb{S}^{l},\) where \(n<m<l.\) Then, \(\pi_{1}(B_{G})\) acts trivially on \(H^{*}(X).\)
Proof.: For \(l\neq m+n\), obviously, \(\pi_{1}(B_{G})\) acts trivially on \(H^{*}(X)\). Next, suppose \(l=m+n\). If \(\pi_{1}(B_{G})\) acts nontrivially on \(H^{*}(X)\), then we must have \(g^{*}(c)=ab+c\), where \(g\) is the generator of \(\pi_{1}(B_{G})\). So, we get \(cg^{*}(c)=abc\neq 0\), which contradicts Proposition 2.6. Hence, our claim.
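For clarity, the product computed in the proof uses only the relation \(c^{2}=0\) in \(H^{*}(X)\):
\[cg^{*}(c)=c(ab+c)=abc+c^{2}=abc\neq 0,\]
so Proposition 2.6 would give a nonempty fixed point set, contradicting the freeness of the action.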
Note that \(G=\pi_{1}(B_{G})\) may act nontrivially on \(H^{*}(X)\) in the following two cases: (i) \(X\sim_{2}\mathbb{S}^{n}\times\mathbb{S}^{m}\times\mathbb{S}^{m},n<m\), and (ii) \(X\sim_{2}\mathbb{S}^{n}\times\mathbb{S}^{n}\times\mathbb{S}^{l},n\leq l\). First, we determine the cohomology algebra of free involutions on \(X\sim_{2}\mathbb{S}^{n}\times\mathbb{S}^{m}\times\mathbb{S}^{l}\) in the case when \(\pi_{1}(B_{G})\) acts nontrivially on \(H^{*}(X)\). We have proved the following Theorems.
**Theorem 3.2**.: Let \(G=\mathbb{Z}_{2}\) act freely on a finitistic space \(X\sim_{2}\mathbb{S}^{n}\times\mathbb{S}^{m}\times\mathbb{S}^{m}\), where \(n<m\). If \(\pi_{1}(B_{G})\) acts nontrivially on \(H^{*}(X)\), then \(H^{*}(X/G)\) is isomorphic to the following graded commutative algebra:
\[\mathbb{Z}_{2}[x,y,w,z]/<x^{n+1},y^{2}+a_{0}z,w^{2},z^{2},yw+a_{1}x^{n}z,yz,wz,xy,xw>,\]
where deg \(x=1\), deg \(y=m\), deg \(w=m+n\), deg \(z=2m\) and \(a_{0},a_{1}\in\mathbb{Z}_{2}\).
Proof.: It is clear that \(\pi_{1}(B_{G})\) acts nontrivially on \(H^{m}(X)\). Let \(g\) be a generator of \(\pi_{1}(B_{G})\). We have exactly three possibilities of nontrivial actions: (1) \(g^{*}(b)=c\) & \(g^{*}(c)=b\), (2) \(g^{*}(b)=b\) & \(g^{*}(c)=b+c\) and (3) \(g^{*}(b)=b+c\) & \(g^{*}(c)=c\). First, consider the nontrivial action defined by \(g^{*}(b)=c\) & \(g^{*}(c)=b\). By the naturality of the cup product, we get that \(\pi_{1}(B_{G})\) acts nontrivially on \(H^{m+n}(X)\). Note that \(\sigma=\tau=1+g^{*}\). By Proposition 2.3, we get \(E_{2}^{0,i}\cong\mathbb{Z}_{2}\) and \(E_{2}^{k,i}=0\)\(\forall\)\(k>0,\ i=m,m+n\). For \(i\neq m,m+n\), \(\pi_{1}(B_{G})\) acts trivially on \(H^{i}(X)\), and hence, \(E_{2}^{k,i}\cong H^{k}(B_{G})\otimes H^{i}(X)\). By Proposition 2.7, \(1\otimes bc\) is a permanent cocycle. If \(d_{n+1}(1\otimes a)=0\), then at least two lines survive to \(E_{\infty}\), which is not possible. Therefore, we get \(d_{n+1}(1\otimes a)=t^{n+1}\otimes 1\), and \(d_{n+1}(1\otimes abc)=t^{n+1}\otimes bc\). Clearly, \(d_{r}=0\)\(\forall\)\(r>n+1\). Thus, \(E_{n+2}^{*,*}\cong E_{\infty}^{*,*}\). So, we have \(E_{\infty}^{p,q}\cong\mathbb{Z}_{2}\) where \(0\leq p\leq n,q=0,2m;p=0,q=m,n+m\). Consequently, the cohomology groups are given by
\[H^{k}(X_{G})\cong\bigoplus_{p+q=k}E_{\infty}^{p,q}=\begin{cases}\mathbb{Z}_{2 }&j\leq k\leq n+j,j=0,2m;k=m,m+n,\\ 0&\text{otherwise.}\end{cases}\]
The permanent cocycles \(t\otimes 1\), \(b+c\), \(ab+ac\) and \(1\otimes bc\) determine the elements \(x\in E_{\infty}^{1,0}\), \(u\in E_{\infty}^{0,m}\), \(v\in E_{\infty}^{0,n+m}\) and \(s\in E_{\infty}^{0,2m}\), respectively. Thus, the total complex Tot\(E_{\infty}^{*,*}\) is given by
\[\mathbb{Z}_{2}[x,u,v,s]/<x^{n+1},u^{2}+a_{0}s,v^{2},s^{2},uv,us,vs,xu,xv>,\]
where deg \(x=1\), deg \(u=m\), deg \(v=m+n\), deg \(s=2m\), and \(a_{0}\in\mathbb{Z}_{2}\). Let \(y\in H^{m}(X_{G}),w\in H^{m+n}(X_{G})\) and \(z\in H^{2m}(X_{G})\) such that \(i^{*}(y)=b+c,i^{*}(w)=ab+ac\) and \(i^{*}(z)=bc\), respectively. Clearly, we have \(yw+a_{1}x^{n}z=0,a_{1}\in\mathbb{Z}_{2}\). By Proposition 2.4, the graded commutative algebra \(H^{*}(X/G)\) given by
\[\mathbb{Z}_{2}[x,y,w,z]/<x^{n+1},y^{2}+a_{0}z,w^{2},z^{2},yw+a_{1}x^{n}z,yz,wz, xy,xw>,\]
where deg \(x=1\), deg \(y=m\), deg \(w=m+n\), deg \(z=2m\), and \(a_{0},a_{1}\in\mathbb{Z}_{2}\). For the other two nontrivial actions of \(\pi_{1}(B_{G})\) on \(H^{m}(X)\), we get the same cohomology algebra.
**Remark 3.3**.: In the above theorem, if \(a_{0}=a_{1}=0\), then \(X/G\sim_{2}(\mathbb{RP}^{n}\times S^{2m})\vee\mathbb{S}^{m}\vee\mathbb{S}^{m+n}\).
Now, for a finitistic space \(X\sim_{2}\mathbb{S}^{n}\times\mathbb{S}^{n}\times\mathbb{S}^{l}\), \(n\leq l\), we have the following theorem.
**Theorem 3.4**.: Let \(G=\mathbb{Z}_{2}\) act freely on a finitistic space \(X\sim_{2}\mathbb{S}^{n}\times\mathbb{S}^{n}\times\mathbb{S}^{l}\), where \(n\leq l\). If \(\pi_{1}(B_{G})\) acts nontrivially on \(H^{*}(X)\), then \(H^{*}(X/G)\) is isomorphic to one of the following graded commutative algebras:
1. \(\mathbb{Z}_{2}[x,y,w,z]/<x^{2n+l+1},I_{j},z^{2},xy,zx,x^{l-2n+1}w>_{1\leq j\leq 4}\), where deg \(x=1\), deg \(y=n\), deg \(w=2n\) & deg \(z=n+l\); \(I_{1}=y^{2}+a_{1}x^{2n}+a_{2}w,I_{2}=w^{2}+a_{3}x^{4n}+a_{4}x^{2n}w+a_{5}z,I_{3 }=yw+a_{6}x^{3n}+a_{7}x^{n}w\) & \(I_{4}=yz+a_{8}x^{2n+l}\), \(a_{i}\in\mathbb{Z}_{2},1\leq i\leq 8\); \(a_{3}=0\) if \(l<2n,a_{4}=0\) if \(l<4n,a_{5}=0\) if \(l\neq 3n\) & \(a_{7}=0\) if \(l<3n\).
2. \(\mathbb{Z}_{2}[x,y,w,z]/<x^{l+1},I_{j},z^{2},xy,zx>_{1\leq j\leq 4}\), where deg \(x=1\), deg \(y=n\), deg \(w=2n\) & deg \(z=n+l\); \(I_{1}=y^{2}+a_{1}x^{2n}+a_{2}w,I_{2}=w^{2}+a_{3}x^{4n}+a_{4}x^{2n}w+a_{5}z,I_{3 }=yw+a_{6}x^{3n}+a_{7}x^{n}w+a_{8}z\) & \(I_{4}=yz+a_{9}x^{l}w\), \(a_{i}\in\mathbb{Z}_{2},1\leq i\leq 9\); \(a_{1}=a_{4}=0\) if \(l<2n,a_{3}=0\) if \(l<4n,a_{5}=0\) if \(l\neq 3n\),\(a_{6}=0\) if \(l<3n\) & \(a_{8}=0\) if \(l\neq 2n\).
3. \(\mathbb{Z}_{2}[x,y,w,z]/<x^{n+1},I_{j},w^{2},z^{2},wz,xz,xy>_{1\leq j\leq 3}\), where deg \(x=1\), deg \(y=n\) & deg \(w\) = deg \(z=2n\); \(I_{1}=y^{2}+a_{1}w+a_{2}z,I_{2}=yw+a_{3}x^{n}w\) & \(I_{3}=yz+a_{4}x^{n}w\), \(a_{i}\in\mathbb{Z}_{2},1\leq i\leq 4\).
Proof.: We consider two cases: (i) \(n<l\), and (ii) \(n=l\).
**Case (i):** Assume \(n<l\).
It is clear that if \(l\neq 2n\), then \(\pi_{1}(B_{G})\) must act nontrivially on \(H^{n}(X)\). Now, if \(l=2n\) and \(\pi_{1}(B_{G})\) acts nontrivially on \(H^{2n}(X)\), then we must have \(g^{*}(c)=c+ab\). Thus, we get \(cg^{*}(c)=abc\neq 0\). By Proposition 2.6, the fixed point set is empty, a contradiction. Therefore, \(\pi_{1}(B_{G})\) always acts trivially on \(H^{2n}(X)\). So, \(\pi_{1}(B_{G})\) must act nontrivially
on \(H^{n}(X)\). As in Theorem 3.2, we have exactly three possibilities of nontrivial actions on \(H^{n}(X)\). Suppose that \(g^{*}(a)=b\) and \(g^{*}(b)=a\). Consequently, we get \(g^{*}(ac)=bc\) and \(g^{*}(bc)=ac\), where \(g\) is the generator of \(\pi_{1}(B_{G})\). Note that \(\sigma=\tau=1+g^{*}\). By Proposition 2.3, we have \(E_{2}^{0,i}\cong\mathbb{Z}_{2}\), \(i=n,n+l\) and \(E_{2}^{k,i}=0\ \forall\ k>0,\ i=n,n+l\). For \(i\neq n,n+l\), \(\pi_{1}(B_{G})\) acts trivially on \(H^{i}(X)\). So, for \(l\neq 2n\), \(E_{2}^{k,i}\cong\mathbb{Z}_{2}\ \forall\ k\geq 0;i=0,l,2n,2n+l\) and for \(l=2n\), \(E_{2}^{k,i}\cong\mathbb{Z}_{2}\ \forall\ k\geq 0,i=0\ \&\ 4n\) and \(E_{2}^{k,2n}\cong\mathbb{Z}_{2}\oplus\mathbb{Z}_{2}\ \forall\ k\geq 0\). By Proposition 2.7, \(1\otimes ag^{*}(a)=1\otimes ab\) is a permanent cocycle. If \(d_{n+1}\) is nontrivial, then we get \(d_{n+1}(a+b)=t^{n+1}\otimes 1\). So, we have \(0=d_{n+1}((a+b)(t\otimes 1))=(t^{n+1}\otimes 1)(t\otimes 1)=t^{n+2}\otimes 1\), which is not possible. Thus, \(d_{n+1}(a+b)=0\). Suppose \(d_{l-2n+1}\neq 0\). Then, we must have \(2n<l\) and \(d_{l-2n+1}(1\otimes c)=t^{l-2n+1}\otimes ab\). We get \(E_{l-2n+2}^{*,*}=E_{2n+l}^{*,*}\). As \(G\) acts freely on \(H^{*}(X)\), we must have \(d_{2n+l+1}(1\otimes abc)=t^{2n+l+1}\otimes 1\). Thus, \(E_{2n+l+2}^{*,*}=E_{\infty}^{*,*}\), and hence \(E_{\infty}^{p,q}\cong\mathbb{Z}_{2}\), where \(0\leq p\leq 2n+l,q=0;0\leq p\leq l-2n,q=2n;p=0,q=n,n+l\). Thus, the cohomology groups are given by
\[H^{k}(X_{G})=\begin{cases}\mathbb{Z}_{2}&0\leq k<2n,k\neq n;l<k\leq 2n+l,k\neq n +l\\ \mathbb{Z}_{2}\oplus\mathbb{Z}_{2}&2n\leq k\leq l,k=n,n+l.\end{cases}\]
The permanent cocycles \(t\otimes 1\), \(a+b\), \(1\otimes ab\) and \(c(a+b)\) determine the elements \(x\in E_{\infty}^{1,0}\), \(u\in E_{\infty}^{0,n}\), \(v\in E_{\infty}^{0,2n}\) and \(s\in E_{\infty}^{0,n+l}\), respectively. Thus, the total complex is given by
\[\mathbb{Z}_{2}[x,u,v,s]/<x^{2n+l+1},u^{2}+\gamma_{1}v,v^{2},s^{2},uv,us,vs,xu,x^ {l-2n+1}v,sx>,\]
where deg \(x=1\), deg \(u=n\), deg \(v=2n\), deg \(s=n+l\), and \(\gamma_{1}\in\mathbb{Z}_{2}\). Let \(y\in H^{n}(X_{G}),w\in H^{2n}(X_{G})\) and \(z\in H^{n+l}(X_{G})\) such that \(i^{*}(y)=a+b,i^{*}(w)=ab\) and \(i^{*}(z)=c(a+b)\), respectively. We have \(I_{1}=y^{2}+a_{1}x^{2n}+a_{2}w=0,I_{2}=w^{2}+a_{3}x^{4n}+a_{4}x^{2n}w+a_{5}z=0\); \(I_{3}=yw+a_{6}x^{3n}+a_{7}x^{n}w=0\) and \(I_{4}=yz+a_{8}x^{2n+l}=0\), where \(a_{i}\in\mathbb{Z}_{2},1\leq i\leq 8\); \(a_{3}=0\) if \(l<2n\), \(a_{4}=0\) if \(l\neq 3n\) and \(a_{7}=0\) if \(l<3n\). By Proposition 2.4, the graded commutative algebra \(H^{*}(X/G)\) is given by
\[\mathbb{Z}_{2}[x,y,w,z]/<x^{2n+l+1},I_{j},z^{2},x^{l-2n+1}w,wz,xz,xy>_{1\leq j \leq 4},\]
where deg \(x=1\), deg \(y=n\), deg \(w=2n\), and deg \(z=n+l\). This realizes possibility (1).
Now, suppose \(d_{l-2n+1}=0\). If \(d_{l+1}=0\), then at least two lines survive to infinity, which contradicts Proposition 2.5. So, we have \(d_{l+1}(1\otimes c)=t^{l+1}\otimes 1\), and hence \(d_{l+1}(1\otimes abc)=t^{l+1}\otimes ab\). So, the differentials \(d_{r}=0\ \forall\ r>l+1\), which implies that \(E_{\infty}^{*,*}=E_{l+2}^{*,*}\). Consequently, \(E_{\infty}^{p,q}\cong\mathbb{Z}_{2}\), where \(0\leq p\leq l,q=0,2n;p=0,q=n,n+l\).
For \(l<2n\), the cohomology groups \(H^{k}(X_{G})\) are given by
\[H^{k}(X_{G})=\begin{cases}\mathbb{Z}_{2}&j\leq k\leq l+j,j=0,2n\text{ and }k\neq n,n+l\\ \mathbb{Z}_{2}\oplus\mathbb{Z}_{2}&k=n,n+l,\end{cases}\]
and, for \(l\geq 2n\), we have
\[H^{k}(X_{G})=\begin{cases}\mathbb{Z}_{2}&0\leq k\leq 2n,k\neq n;l<k\leq 2n+l,k \neq n+l\\ \mathbb{Z}_{2}\oplus\mathbb{Z}_{2}&k=n,n+l,2n\leq k\leq l.\end{cases}\]
The permanent cocycles \(t\otimes 1\), \(a+b\), \(1\otimes ab\) and \(c(a+b)\) determine the elements \(x\in E_{\infty}^{1,0}\), \(u\in E_{\infty}^{0,n}\),\(v\in E_{\infty}^{0,2n}\) and \(s\in E_{\infty}^{0,n+l}\), respectively. Thus, the total complex is given by
\[\mathbb{Z}_{2}[x,u,v,s]/<x^{l+1},u^{2}+\gamma_{1}v,v^{2},s^{2},uv+\gamma_{2}s,us,vs,xu,xv>,\]
where \(\deg\,x=1\), \(\deg\,u=n\), \(\deg\,v=2n\), \(\deg\,s=n+l\), and \(\gamma_{1},\gamma_{2}\in\mathbb{Z}_{2}\), \(\gamma_{2}=0\) if \(l\neq 2n\). Let \(y\in H^{n}(X_{G}),w\in H^{2n}(X_{G})\) and \(z\in H^{n+l}(X_{G})\) such that \(i^{*}(y)=a+b,i^{*}(w)=ab\) and \(i^{*}(z)=ac+cb\), respectively. We have \(I_{1}=y^{2}+a_{1}x^{2n}+a_{2}w=0,I_{2}=w^{2}+a_{3}x^{4n}+a_{4}x^{2n}w+a_{5}z=0\), \(I_{3}=yw+a_{6}x^{3n}+a_{7}x^{n}w+a_{8}z=0\) and \(I_{4}=yz+a_{9}x^{l}w=0\), where \(a_{i}\in\mathbb{Z}_{2},1\leq i\leq 9\); \(a_{1}=a_{4}=0\) if \(l<2n,a_{3}=0\) if \(l<4n;a_{5}=0\) if \(l\neq 3n,a_{6}=0\) if \(l<3n\), and \(a_{8}=0\) if \(l\neq 2n\). Therefore, the graded commutative algebra \(H^{*}(X/G)\) is given by
\[\mathbb{Z}_{2}[x,y,w,z]/<x^{l+1},I_{j},z^{2},wz,xy,xw>_{1\leq j\leq 4},\]
where \(\deg\,x=1\), \(\deg\,y=n\), \(\deg\,w=2n\), and \(\deg\,z=n+l\). This realizes possibility (2).
**Case (ii):** Assume \(n=l\).
As \(\pi_{1}(B_{G})\) acts nontrivially on \(H^{n}(X)\), \(g^{*}\) must fix one or two generator(s) of \(H^{n}(X)\). If \(g^{*}\) fixes one generator, say, \(g^{*}(a)=a\), then in this case three nontrivial actions are possible: \(g^{*}(b)=c,g^{*}(c)=b;g^{*}(b)=a+b,g^{*}(c)=a+c;g^{*}(b)=a+c,g^{*}(c)=a+b\). Further, if \(g^{*}\) fixes two generators, say, \(g^{*}(a)=a,g^{*}(b)=b\), then also in this case, three nontrivial actions are possible: \(g^{*}(c)=a+c;g^{*}(c)=b+c;g^{*}(c)=a+b+c\). This gives eighteen different possibilities of nontrivial actions of \(\pi_{1}(B_{G})\) on \(H^{n}(X)\). Now, consider a nontrivial action defined as \(g^{*}(a)=a,g^{*}(b)=c\) and \(g^{*}(c)=b\). So, we have \(g^{*}(ab)=ac,g^{*}(bc)=bc\) and \(g^{*}(ac)=ab\). By Proposition 2.3, the \(E_{2}\)-page is given by
\[E_{2}^{k,i}=\begin{cases}\ker\,\tau&\text{for }k=0,\\ \ker\,\tau/\text{im }\tau&\text{for }k>0.\end{cases}\]
Note that, for \(i=n\), \(\ker\,\tau=<a,b+c>\) and \(\text{im }\tau=<b+c>\), and for \(i=2n\), \(\ker\,\tau=<bc,ab+ac>\) and \(\text{im }\tau=<ab+ac>\). So, for \(i=n,2n\), we get \(E_{2}^{0,i}\cong\mathbb{Z}_{2}\oplus\mathbb{Z}_{2}\), \(E_{2}^{k,i}\cong\mathbb{Z}_{2}\)\(\forall\)\(k>0\). And, for \(i=0,3n\), \(\pi_{1}(B_{G})\) acts trivially on \(H^{i}(X)\). So,
we get \(E_{2}^{k,i}\cong H^{k}(B_{G})\otimes H^{i}(X)\cong\mathbb{Z}_{2}\ \forall\ k\geq 0.\) By Proposition 2.7, \(bg^{*}(b)=bc\) is a permanent cocycle. It is easy to observe that \(d_{n+1}(a)\) must be nonzero. Therefore, \(d_{n+1}(a)=t^{n+1}\otimes 1\). Assume that \(d_{n+1}(b+c)=a_{0}t^{n+1}\otimes 1\), \(a_{0}\in\mathbb{Z}_{2}\). Consequently, \(d_{n+1}(1\otimes abc)=(t^{n+1}\otimes 1)bc\). So, we have \(E_{n+2}^{*,*}=E_{\infty}^{*,*}\), and hence \(E_{\infty}^{p,q}\cong\mathbb{Z}_{2}\), where \(0\leq p\leq n,q=0;0<p\leq n,q=2n;p=0,q=n\) and \(E_{\infty}^{0,2n}\cong\mathbb{Z}_{2}\oplus\mathbb{Z}_{2}\). Thus, the cohomology groups of \(X/G\) are given by
\[H^{k}(X_{G})=\begin{cases}\mathbb{Z}_{2}&0\leq k<n;2n<k\leq 3n,\\ \mathbb{Z}_{2}\oplus\mathbb{Z}_{2}&k=n,2n.\end{cases}\]
The permanent cocycles \(t\otimes 1\), \(c_{0}a+b+c,c_{0}\in\mathbb{Z}_{2}\), \(bc\) and \(ac+bc\) determine the elements \(x\in E_{\infty}^{1,0},u\in E_{\infty}^{0,n}\), \(v\in E_{\infty}^{0,2n}\) and \(s\in E_{\infty}^{0,2n}\), respectively. Thus, the total complex \(\mathrm{Tot}E_{\infty}^{*,*}\) is given by
\[\mathbb{Z}_{2}[x,u,v,s]/<x^{n+1},u^{2}+\gamma_{1}v+\gamma_{2}s,v^{2},s^{2},ux,uv,us,vs,sx>,\]
where \(\deg\,x=1\), \(\deg\,u=n\), \(\deg\,v=\deg\,s=2n\), and \(\gamma_{1},\gamma_{2}\in\mathbb{Z}_{2}\). Let \(y\in H^{n}(X_{G}),w\in H^{2n}(X_{G})\) and \(z\in H^{2n}(X_{G})\) such that \(i^{*}(y)=c_{0}a+b+c,i^{*}(w)=bc\) and \(i^{*}(z)=ab+ac\), respectively. Clearly, we have \(I_{1}=y^{2}+a_{1}w+a_{2}z=0\), \(I_{2}=yw+a_{3}x^{n}w=0\), and \(I_{3}=yz+a_{4}x^{n}w=0;a_{i}\in\mathbb{Z}_{2},1\leq i\leq 4\). Therefore, the graded commutative algebra \(H^{*}(X/G)\) is given by
\[\mathbb{Z}_{2}[x,y,w,z]/<x^{n+1},I_{j},w^{2},z^{2},wz,xz,xy>_{1\leq j\leq 3},\]
where \(\deg\,x=1\), \(\deg\,y=n\), \(\deg\,w=2n\), and \(\deg\,z=2n\). This realizes possibility (3). For the other possibilities of nontrivial actions of \(\pi_{1}(B_{G})\) on \(H^{n}(X)\), we get the same cohomology algebras as in case (i) and case (ii).
**Remark 3.5**.: In the possibility (2) of Theorem 3.4, if we take \(a_{i}=0\ \forall\ 1\leq i\leq 9\), then \(X/G\sim_{2}\mathbb{RP}^{l}\times S^{2n}\vee(\mathbb{S}^{n}\vee\mathbb{S}^{n+l})\), and if we take \(a_{i}=0\ \forall\ 1\leq i\leq 4\) in possibility (3), then \(X/G\sim_{2}\mathbb{RP}^{n}\times S^{2n}\vee(\mathbb{S}^{n}\vee\mathbb{S}^{2n})\).
Next, we determine the orbit spaces of free involutions on \(X\sim_{2}\mathbb{S}^{n}\times\mathbb{S}^{m}\times\mathbb{S}^{l}\), when \(\pi_{1}(B_{G})\) acts trivially on \(H^{*}(X)\).
Recall that if \(X\sim_{2}\mathbb{S}^{n}\times\mathbb{S}^{m}\times\mathbb{S}^{l}\), \(n<m<l\), then \(\pi_{1}(B_{G})\) always acts trivially on \(H^{*}(X)\). For the remaining cases, we assume that \(\pi_{1}(B_{G})\) acts trivially on \(H^{*}(X)\). Let \(\{E_{r}^{*,*},d_{r}\}\) be the Leray-Serre spectral sequence of the Borel fibration \(X\hookrightarrow X_{G}\to B_{G}\). So, by Proposition 2.2, we get
\[E_{2}^{k,i}=H^{k}(B_{G})\otimes H^{i}(X)\ \forall\ k,i\geq 0.\]
Thus, \(E_{2}^{k,i}\cong\mathbb{Z}_{2}\) for \(i=0,n,m,l,n+m,n+l,m+l,n+m+l\) and for all \(k\geq 0\).
If \(G=\mathbb{Z}_{2}\) acts freely on \(X,\) then at least one of the images of \(1\otimes a\), \(1\otimes b\) or \(1\otimes c\), where \(a,b,c\) are generators of \(H^{*}(X),\) under some differential \(d_{r}\) must be nonzero.
Note that
* if \(d_{r_{1}}(1\otimes a)\neq 0\), then \(r_{1}=n+1\),
* if \(d_{r_{2}}(1\otimes b)\neq 0\), then \(r_{2}=m-n+1\) or \(m+1\), and
* if \(d_{r_{3}}(1\otimes c)\neq 0\), then \(r_{3}=l-m-n+1\), \(l-m+1\), \(l-n+1\) or \(l+1\).
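These constraints simply record the degrees in which \(H^{*}(X)\) is nonzero. For instance, for the first item, a nonzero differential on \(1\otimes a\) lands in
\[d_{r}(1\otimes a)\in E_{r}^{r,\,n-r+1},\qquad\text{which requires}\qquad H^{n-r+1}(X)\neq 0;\]
since \(0\) is the only degree below \(n\) in which \(H^{*}(X)\) is nonzero (as \(n\leq m\leq l\)), this forces \(n-r+1=0\), i.e. \(r_{1}=n+1\). The other two items follow in the same way.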
So, the following cases are possible:
1. \(d_{r_{1}}(1\otimes a)\neq 0\),
2. \(d_{r_{1}}(1\otimes a)=0\) and \(d_{r_{2}}(1\otimes b)\neq 0\),
3. \(d_{r_{1}}(1\otimes a)=0\), \(d_{r_{2}}(1\otimes b)=0\) and \(d_{r_{3}}(1\otimes c)\neq 0\).
We discuss the following theorems, depending on the above three cases:
**Theorem 3.6**.: Let \(G=\mathbb{Z}_{2}\) act freely on a finitistic space \(X\sim_{2}\mathbb{S}^{n}\times\mathbb{S}^{m}\times\mathbb{S}^{l},\) where \(n\leq m\leq l.\) If \(d_{r_{1}}(1\otimes a)\neq 0\), then \(H^{*}(X/G)\) is isomorphic to one of the following graded commutative algebras:
1. \(\mathbb{Z}_{2}[x,y,w,z]/<x^{n+1},I_{j},z^{2}>_{1\leq j\leq 5}\), where \(\deg x=1\), \(\deg y=m\), \(\deg w=l\) & \(\deg z=m+l\); \(I_{1}=y^{2}+a_{1}z+a_{2}x^{n}y+a_{3}x^{2m-l}w,I_{2}=w^{2}+a_{4}x^{l-m}z+a_{5}x^{ n}y+a_{6}x^{n}w,I_{3}=yw+a_{7}z+a_{8}x^{n}w+a_{9}x^{n}y,I_{4}=yz+a_{10}x^{n}z\), and \(I_{5}=wz+a_{11}x^{n}z\), \(a_{i}\in\mathbb{Z}_{2}\), \(1\leq i\leq 11\); \(a_{1}=0\) if \(n\leq m\neq l\), \(a_{2}=a_{8}=a_{10}=0\) if \(n<m\leq l\), \(a_{3}=0\) if \(2m-l>n\) or \(l>2m\), \(a_{4}=0\) if \(l>m+n\), \(a_{5}=a_{6}=a_{9}=a_{11}=0\) if either \(n<m\) or \(m<l\).
2. \(\mathbb{Z}_{2}[x,y,w,z]/<x^{n+1},I_{j},w^{2},z^{2},wz,x^{l-m+1}y,x^{l-m+1}w>_{1 \leq j\leq 3}\), where \(\deg x=1\), \(\deg y=m\), \(\deg w=n+m\) & \(\deg z=m+l\); \(I_{1}=y^{2}+a_{1}x^{m-n}w\), \(I_{2}=yw+a_{2}x^{m+n-l}z\), and \(I_{3}=yz+a_{3}x^{n}z\), \(a_{i}\in\mathbb{Z}_{2}\), \(1\leq i\leq 3\); \(a_{1}=0\) if \(2m>n+l;a_{3}=0\) if \(n<m\).
Proof.: If \(d_{r_{1}}(1\otimes a)\neq 0\), then \(r_{1}=n+1\) and \(d_{n+1}(1\otimes a)=t^{n+1}\otimes 1.\) For the remaining differentials, the following four cases are possible: (i) \(d_{r_{2}}(1\otimes b)=d_{r_{3}}(1\otimes c)=0\), (ii) \(d_{r_{2}}(1\otimes b)\neq 0\) & \(d_{r_{3}}(1\otimes c)=0\), (iii) \(d_{r_{2}}(1\otimes b)=0\) & \(d_{r_{3}}(1\otimes c)\neq 0\) and (iv) \(d_{r_{2}}(1\otimes b)\neq 0\) & \(d_{r_{3}}(1\otimes c)\neq 0\).
**Case (i):**\(d_{r_{2}}(1\otimes b)=0\) & \(d_{r_{3}}(1\otimes c)=0\).
In this case, we get \(d_{n+1}(1\otimes ab)=t^{n+1}\otimes b\), \(d_{n+1}(1\otimes ac)=t^{n+1}\otimes c\) and \(d_{n+1}(1\otimes abc)=t^{n+1}\otimes bc.\) So, \(d_{r}=0\) for all \(r>n+1.\) Thus, \(E_{n+2}^{*,*}=E_{\infty}^{*,*}\).
If \(n\leq m<l\), then \(E_{\infty}^{p,q}\cong\mathbb{Z}_{2}\) for \(0\leq p\leq n\), \(q=0,m,l,m+l\), and zero otherwise.
First, we consider \(n<m<l\).
For \(l>m+n\), the cohomology groups \(H^{k}(X_{G})\) are given by
\[H^{k}(X_{G})=\begin{cases}\mathbb{Z}_{2}&j\leq k<n+j,j=0,m,l,m+l,\\ 0&\text{otherwise},\end{cases}\]
and, for \(l\leq m+n\), we have
\[H^{k}(X_{G})=\begin{cases}\mathbb{Z}_{2}&j\leq k<n+j,j=0,m+l;m\leq k<l;m+n<k \leq n+l,\\ \mathbb{Z}_{2}\oplus\mathbb{Z}_{2}&l\leq k\leq m+n,\\ 0&\text{otherwise}.\end{cases}\]
The permanent cocycles \(t\otimes 1,1\otimes b\), \(1\otimes c\) and \(1\otimes bc\) of \(E_{2}^{*,*}\) determine the elements \(x\in E_{\infty}^{1,0},u\in E_{\infty}^{0,m}\), \(v\in E_{\infty}^{0,l}\) and \(s\in E_{\infty}^{0,m+l}\), respectively. Thus, the total complex is given by
\[\text{Tot }E_{\infty}^{*,*}=\mathbb{Z}_{2}[x,u,v,s]/<x^{n+1},u^{2}+\gamma_{1}v,v^{2},s^{2},uv+\gamma_{2}s,us,vs>,\]
where \(\deg x=1\), \(\deg u=m\), \(\deg v=l\) & \(\deg s=m+l\) and \(\gamma_{1},\gamma_{2}\in\mathbb{Z}_{2}\), \(\gamma_{1}=0\) if \(l\neq 2m\). Let \(y\in H^{m}(X_{G}),w\in H^{l}(X_{G})\) and \(z\in H^{m+l}(X_{G})\) such that \(i^{*}(y)=b,i^{*}(w)=c\), and \(i^{*}(z)=bc\), respectively. Clearly, \(I_{1}=y^{2}+b_{1}x^{2m-l}w=0\), \(I_{2}=w^{2}+b_{2}x^{l-m}z=0\) & \(I_{3}=yw+b_{3}z=0\), where \(b_{i}\in\mathbb{Z}_{2},1\leq i\leq 3\), \(b_{1}=0\) if \(2m>n+l\) or \(l>2m\) and \(b_{2}=0\) if \(l>m+n\). By Proposition 2.4, the cohomology ring of the orbit space \(X/G\) is given by
\[\mathbb{Z}_{2}[x,y,w,z]/<x^{n+1},I_{j},z^{2},yz,wz>_{1\leq j\leq 3},\]
where \(\deg x=1\), \(\deg y=m\), \(\deg w=n+m\) and \(\deg z=m+l\). This realizes possibility (1) by taking \(a_{i}=0\) for \(i=1,2,5,6\) and \(8\leq i\leq 11\).
Now, we consider \(n=m<l\).
For \(l>2n\), the cohomology groups \(H^{k}(X_{G})\) is given by
\[H^{k}(X_{G})=\begin{cases}\mathbb{Z}_{2}&j\leq k<n+j;n+j<k\leq 2n+j,j=0,l,\\ \mathbb{Z}_{2}\oplus\mathbb{Z}_{2}&k=n,n+l,\\ 0&\text{otherwise},\end{cases}\]
and, for \(l\leq 2n\), we get
\[H^{k}(X_{G})=\begin{cases}\mathbb{Z}_{2}&0\leq k<n;n+j<k<l+j,j=0,n;n+l<k\leq 2 n+l,\\ \mathbb{Z}_{2}\oplus\mathbb{Z}_{2}&k=n,n+l;l\leq k\leq 2n,\\ 0&\text{otherwise}.\end{cases}\]
It is clear that the total complex is same as in the case when \(n<m<l\). Clearly, \(I_{1}=y^{2}+b_{1}x^{2n-l}w+b_{2}x^{n}y=0\), \(I_{2}=w^{2}+b_{3}x^{l-n}z=0\), \(I_{3}=yw+b_{4}z+b_{5}x^{n}w=0\)
and \(I_{4}=yz+b_{6}x^{n}z=0\) where \(b_{i}\in\mathbb{Z}_{2},1\leq i\leq 6\), \(b_{1}=b_{3}=0\) if \(l>2n\). Hence, the cohomology ring of the orbit space \(X/G\) is given by
\[\mathbb{Z}_{2}[x,y,w,z]/<x^{n+1},I_{j},z^{2},wz>_{1\leq j\leq 4},\]
where deg \(x=1\), deg \(y=n\), deg \(w=l\) & deg \(z=n+l\). This realizes possibility (1) by taking \(a_{i}=0\) for \(i=1,5,6,9,11\).
If \(n\leq m=l\), then \(E_{\infty}^{p,q}\cong\mathbb{Z}_{2}\) for \(0\leq p\leq n\), \(q=0,2m\); \(E_{\infty}^{p,q}\cong\mathbb{Z}_{2}\oplus\mathbb{Z}_{2}\) for \(0\leq p\leq n\), \(q=m\), and zero otherwise. For \(n<m=l\), the cohomology groups are given by
\[H^{k}(X_{G})=\begin{cases}\mathbb{Z}_{2}&j\leq k\leq n+j,j=0,2m,\\ \mathbb{Z}_{2}\oplus\mathbb{Z}_{2}&m\leq k\leq m+n,\\ 0&\text{otherwise},\end{cases}\]
and, for \(n=m=l\), we have
\[H^{k}(X_{G})=\begin{cases}\mathbb{Z}_{2}&0\leq k<n,2n<k\leq 3n,\\ \mathbb{Z}_{2}\oplus\mathbb{Z}_{2}&n<k<2n,\\ \mathbb{Z}_{2}\oplus\mathbb{Z}_{2}\oplus\mathbb{Z}_{2}&k=n,2n,\\ 0&\text{otherwise}.\end{cases}\]
Thus, the total complex is given by
\[\text{Tot }E_{\infty}^{*,*}=\mathbb{Z}_{2}[x,u,v,s]/<x^{n+1},u^{2}+\gamma_{1}s,v^{2}+\gamma_{2}s,s^{2},uv+\gamma_{3}s,us,vs>,\]
where \(\gamma_{1},\gamma_{2},\gamma_{3}\in\mathbb{Z}_{2}\), deg \(x=1\), deg \(u\) = deg \(v\) = \(m\) and deg \(s=2m\).
If \(n<m=l\), then we get \(I_{1}=y^{2}+b_{1}z=0\), \(I_{2}=w^{2}+b_{2}z=0\) and \(I_{3}=yw+b_{3}z=0\), where \(b_{i}\in\mathbb{Z}_{2},1\leq i\leq 3\). Thus, the cohomology ring of the orbit space \(X/G\) is given by
\[\mathbb{Z}_{2}[x,y,w,z]/<x^{n+1},I_{j},z^{2},yz,wz>_{1\leq j\leq 3},\]
where deg \(x=1\), deg \(y\)= deg \(w=m\) & deg \(z=2m\). This realizes possibility (1) by taking \(a_{i}=0\) for \(i=2,3,5,6\) & \(8\leq i\leq 11\).
If \(n=m=l\), then we get \(I_{1}=y^{2}+b_{1}z+b_{2}x^{n}y+b_{3}x^{n}w=0,I_{2}=w^{2}+b_{4}z+b_{5}x^{n}y+b_ {6}x^{n}w=0,I_{3}=yw+b_{7}z+b_{8}x^{n}w+b_{9}x^{n}y=0,I_{4}=yz+b_{10}x^{n}z=0\) and \(I_{5}=wz+b_{11}x^{n}z=0,b_{i}\in\mathbb{Z}_{2},1\leq i\leq 11\). Thus, the cohomology ring of the orbit space \(X/G\) is given by
\[\mathbb{Z}_{2}[x,y,w,z]/<x^{n+1},I_{j},z^{2}>_{1\leq j\leq 5},\]
where deg \(x=1\), deg \(y\) = deg \(w=n\) and \(z=2n\). This realizes possibility (1).
**Case (ii)**\(d_{r_{2}}(1\otimes b)\neq 0\) & \(d_{r_{3}}(1\otimes c)=0\).
This case is possible only when \(n=m\). We have \(d_{n+1}(1\otimes a)=d_{n+1}(1\otimes b)=t^{n+1}\otimes 1\). Consequently,
\(d_{n+1}(1\otimes ab)=t^{n+1}\otimes(a+b)\) & \(d_{n+1}(1\otimes abc)=t^{n+1}\otimes(bc+ac)\). Thus, \(E_{n+2}^{*,*}=E_{\infty}^{*,*}\). The elements \(t\otimes 1,1\otimes(a+b),1\otimes c\) and \(1\otimes c(a+b)\) are the permanent cocycles, and the cohomology groups and cohomology algebra of \(X/G\) are the same as in case (i) when \(n=m\leq l\).
**Case (iii)**\(d_{r_{2}}(1\otimes b)=0\) & \(d_{r_{3}}(1\otimes c)\neq 0\).
As \(d_{r_{1}}(1\otimes a)\neq 0\) & \(d_{r_{3}}(1\otimes c)\neq 0\), it is easy to observe that this case is not possible when \(l>m+n\) or \(n<m=l\).
First, we consider \(n\leq m<l\leq m+n\). If \(l-m<n\), then \(d_{l-m+1}\) must be nontrivial. So, we get \(d_{l-m+1}(1\otimes c)=t^{l-m+1}\otimes b\) and \(d_{l-m+1}(1\otimes ac)=t^{l-m+1}\otimes ab\). Clearly, \(d_{r}=0\) for \(l-m+1<r<n+1\). Since \(d_{n+1}(1\otimes a)=t^{n+1}\otimes 1\), we get \(d_{n+1}(1\otimes abc)=t^{n+1}\otimes bc\). Thus, \(E_{n+2}^{*,*}=E_{\infty}^{*,*}\), and \(E_{\infty}^{p,q}\cong\mathbb{Z}_{2}\) for \(0\leq p\leq n\), \(q=0,m+l;E_{\infty}^{p,q}\cong\mathbb{Z}_{2}\) for \(0\leq p\leq l-m\), \(q=m,n+m\), and otherwise zero. For \(n<m<l<m+n\), the cohomology groups are given by
\[H^{k}(X_{G})=\begin{cases}\mathbb{Z}_{2}&j\leq k\leq n+j,j=0,m+l;m+j\leq k\leq l +j,j=0,n,\\ 0&\text{otherwise.}\end{cases}\]
For \(n=m<l<2n\), we have
\[H^{k}(X_{G})=\begin{cases}\mathbb{Z}_{2}&0\leq k<n;n<k\leq l;2n\leq k<n+l;n+l <k\leq 2n+l,\\ \mathbb{Z}_{2}\oplus\mathbb{Z}_{2}&k=n,n+l,\\ 0&\text{otherwise.}\end{cases}\]
The permanent cocycles \(t\otimes 1,1\otimes b\), \(1\otimes ab\) and \(1\otimes bc\) of \(E_{2}^{*,*}\) determine the elements \(x\in E_{\infty}^{1,0},u\in E_{\infty}^{0,m}\), \(v\in E_{\infty}^{0,n+m}\) & \(s\in E_{\infty}^{0,m+l}\), respectively. The total complex is given by
\[\text{Tot }E_{\infty}^{*,*}=\mathbb{Z}_{2}[x,u,v,s]/<x^{n+1},u^{2}+\gamma_{1}v, v^{2},s^{2},uv,us,vs,x^{l-m+1}u,x^{l-m+1}v>,\]
where \(\deg x=1\), \(\deg u=m\), \(\deg v=n+m\) & \(\deg s=m+l\), \(\gamma_{1}\in\mathbb{Z}_{2},\gamma_{1}=0\) if \(n<m\). Let \(y\in H^{m}(X_{G}),w\in H^{n+m}(X_{G})\) and \(z\in H^{m+l}(X_{G})\) such that \(i^{*}(y)=b,i^{*}(w)=ab\), and \(i^{*}(z)=bc\), respectively. Thus, for \(n\leq m<l<n+m\), we have \(I_{1}=y^{2}+b_{1}x^{m-n}w=0\), \(I_{2}=yw+b_{2}x^{m+n-l}z=0\), \(I_{3}=yz+b_{3}x^{n}z=0\), where \(b_{1},b_{2},b_{3}\in\mathbb{Z}_{2}\) and \(b_{1}=0\) if \(2m>n+l,b_{3}=0\) if \(n<m\). Thus, the cohomology ring of the orbit space \(X/G\) is given by
\[\mathbb{Z}_{2}[x,y,w,z]/<x^{n+1},I_{j},w^{2},z^{2},wz,x^{l-m+1}y,x^{l-m+1}w>_{1 \leq j\leq 3},\]
where \(\deg x=1\), \(\deg y=m\), \(\deg w=n+m\), and \(\deg z=m+l\). This realizes the possibility (2).
Next, we consider \(n\leq m<l=m+n\). We must have \(d_{n+1}(1\otimes ab)=d_{n+1}(1\otimes c)=t^{n+1}\otimes b\). Consequently, \(d_{n+1}(1\otimes ac)=t^{n+1}\otimes(c+ab)\) and \(d_{n+1}(1\otimes abc)=t^{n+1}\otimes bc\).
So, we get \(E_{n+2}^{*,*}=E_{\infty}^{*,*}\). The elements \(t\otimes 1,1\otimes b,1\otimes(c+ab)\) and \(1\otimes bc\) are permanent cocycles, and the cohomology algebra of \(X/G\) is the same as case (i) when \(n\leq m<l\) and \(l=m+n\).
Finally, we consider \(n=m=l\). We must have \(d_{n+1}(1\otimes a)=d_{n+1}(1\otimes c)=t^{n+1}\otimes 1\). So, we get \(E_{n+2}^{*,*}=E_{\infty}^{*,*}\). The elements \(t\otimes 1,1\otimes b,1\otimes(a+c)\) and \(1\otimes b(a+c)\) are permanent cocycles, and cohomology algebra is the same as in case (i) when \(n=m=l\).
**Case (iv)**\(d_{r_{2}}(1\otimes b)\neq 0\) & \(d_{r_{3}}(1\otimes c)\neq 0\).
In this case, we must have \(n=m\leq l\). First, we consider \(n=m<l\). Clearly, \(l<2n\) or \(l\geq 3n\) are not possible. Now, suppose that \(2n<l<3n\) and \(d_{l-2n+1}\) is nontrivial. So, we get \(d_{l-2n+1}(1\otimes c)=t^{l-2n+1}\otimes ab\) and \(d_{r}=0\) for \(l-2n+1<r\leq n\). As \(d_{n+1}(1\otimes a)=d_{n+1}(1\otimes b)=t^{n+1}\otimes 1\), we have \(d_{n+1}(1\otimes ab)=t^{n+1}\otimes(a+b)\). Thus, we get \(0=d_{n+1}\{(t^{l-2n}\otimes ab)(t\otimes 1)\}=t^{l-n+1}\otimes(a+b)\), which is not possible. So, we must have \(l=2n\). We get \(d_{n+1}(1\otimes a)=d_{n+1}(1\otimes b)=t^{n+1}\otimes 1\) & \(d_{n+1}(1\otimes ab)=d_{n+1}(1\otimes c)=t^{n+1}\otimes(a+b)\). Consequently, \(E_{n+2}^{*,*}=E_{\infty}^{*,*}\). The elements \(t\otimes 1,1\otimes(a+b),1\otimes(ab+c)\) and \(1\otimes(ac+bc)\) are permanent cocycles, and the cohomology algebra of \(X/G\) is the same as in case (i) when \(n=m<l=2n\).
Next, we consider \(n=m=l\). We have \(d_{n+1}(1\otimes a)=d_{n+1}(1\otimes b)=d_{n+1}(1\otimes c)=t^{n+1}\otimes 1\). So, we get \(E_{n+2}^{*,*}=E_{\infty}^{*,*}\). The elements \(t\otimes 1,1\otimes(a+b),1\otimes(a+c)\) and \(1\otimes(ab+ac+bc)\) are permanent cocycles, and the cohomology algebra of \(X/G\) is the same as in case (i) when \(n=m=l\).
**Theorem 3.7**.: Let \(G=\mathbb{Z}_{2}\) act freely on a finitistic space \(X\sim_{2}\mathbb{S}^{n}\times\mathbb{S}^{m}\times\mathbb{S}^{l}\), where \(n\leq m\leq l\). If \(d_{r_{1}}(1\otimes a)=0\) and \(d_{r_{2}}(1\otimes b)\neq 0\), then \(H^{*}(X/G)\) is isomorphic to one of the following graded commutative algebras:
1. \(\mathbb{Z}_{2}[x,y,w,z]/<Q(x),I_{j},z^{2},x^{m-n+1}y,x^{m-n+1}z>_{1\leq j\leq 5}\), where deg \(x=1\), deg \(y=n\), deg \(w=l\) & deg \(z=n+l\); \(I_{1}=y^{2}+a_{1}x^{2n}+a_{2}x^{2n-l}w+a_{3}x^{n}y,I_{2}=w^{2}+a_{4}x^{l}w+a_{5 }x^{l-n}z+a_{6}x^{2l},I_{3}=yw+a_{7}z+a_{8}x^{n+l}+a_{9}x^{n}w,I_{4}=yz+a_{10}x ^{n}z+a_{11}x^{2n}w+a_{12}x^{2n+l}\) & \(I_{5}=wz+a_{13}x^{n+m}w\), \(a_{i}\in\mathbb{Z}_{2}\), \(1\leq i\leq 13\); \(a_{2}=0\) if \(l>2n\), \(a_{3}=a_{10}=0\) if \(m<2n\) & \(a_{5}=0\) if \(m<l\); \(Q(x)=x^{n+m+j^{\prime}+1}\), \(j^{\prime}=0\) & \(l\). If \(j^{\prime}=0\), then \(a_{4}=0\) if \(l>n+m\), \(a_{8}=a_{13}=0\) if \(m<l\) and \(a_{6}=a_{12}=0\). If \(j^{\prime}=l\), then \(a_{9}=0\) if \(m<l\), and \(a_{i}=0\) for \(i=4,11,13\).
2. \(\mathbb{Z}_{2}[x,y,w,z]/<x^{m+1},I_{j},z^{2}>_{1\leq j\leq 5}\), where deg \(x\)=1, deg \(y=n\), deg \(w=l\) & deg \(z=n+l\); \(I_{1}=y^{2}+a_{1}x^{n}y+a_{2}x^{2n-l}w+a_{3}x^{2n}+a_{4}z,I_{2}=w^{2}+a_{5}x^{l- n}z+a_{6}x^{m}w+a_{7}x^{n}y,I_{3}=yw+a_{8}z+a_{9}x^{n}w+a_{10}x^{m}y,I_{4}=yz+a_{11}x^{n}z+a_{ 12}x^{2n}w\) & \(I_{5}=wz+a_{13}x^{m}z\), \(a_{i}\in\mathbb{Z}_{2}\)
\(1\leq i\leq 13\); \(a_{2}=0\) if \(l>2n\), \(a_{3}=0\) if \(m<2n\), \(a_{4}=a_{7}=0\) if \(n<m\) or \(m<l\), \(a_{6}=a_{10}=0\) if \(m<l\) and \(a_{5}=0\) if \(l>m+n,a_{12}=0\) if \(m>2n\).
3. \(\mathbb{Z}_{2}[x,y,w,z]/<Q(x),I_{j},z^{2},wz,x^{m-n+1}y,x^{m-n+1}z,a_{0}x^{l-m-n +1}w>_{1\leq j\leq 4}\), where \(\deg\)\(x=1\), \(\deg\)\(y=n\), \(\deg\)\(w=n+m\) & \(\deg\)\(z=n+l\); \(I_{1}=y^{2}+a_{1}x^{2n}+a_{2}x^{n}y,I_{2}=w^{2}+a_{3}x^{n+m}w+a_{4}x^{2n+2m}+a_ {5}x^{2m+n-l}z,I_{3}=yw+a_{6}x^{n}w+a_{7}x^{2n+m}+a_{8}x^{n+m-l}z\) & \(I_{4}=yz+a_{9}x^{n+l-m}w+a_{10}x^{n}z+a_{11}x^{2n+l}\), \(a_{i}\in\mathbb{Z}_{2},0\leq i\leq 11\); \(a_{2}=0\) if \(m<2n\); \(Q(x)=x^{l+j^{\prime}+1},j^{\prime}=0\) or \(n+m\). If \(j^{\prime}=0\), then \(a_{1}=0\) if \(l<2n\), \(a_{3}=0\) if \(l<n+m\) or \(m=l\), \(a_{4}=0\) if \(l<2n+2m\) or \(m=l,a_{5}=0\) if \(l>m+2n\) or \(m=l,a_{7}=0\) if \(l<2n+m\), \(a_{8}=0\) if \(l>m+n\), \(a_{10}=0\) if \(m>2n\) and \(a_{0}=a_{11}=0\). If \(j^{\prime}=n+m\), then \(a_{10}=0\) if \(m<2n\), \(a_{3}=0\) if \(l<2n+2m\), \(a_{6}=0\) if \(l<2n+m\), and \(a_{5}=a_{8}=a_{9}=0\) & \(a_{0}=1\).
4. \(\mathbb{Z}_{2}[x,y,w,z]/<x^{m+1},I_{j},w^{2},z^{2},wz,x^{l-n+1}y,x^{l-n+1}w>_{1 \leq j\leq 3}\), where \(\deg\)\(x=1\), \(\deg\)\(y=n\), \(\deg\)\(w=n+m\) & \(\deg\)\(z=n+l\); \(I_{1}=y^{2}+a_{1}x^{2n}+a_{2}x^{n}y+a_{3}w,I_{2}=yw+a_{4}x^{n}w+a_{5}x^{m+n-l}z\) & \(I_{3}=yz+a_{6}x^{n}z+a_{7}x^{l+n-m}w\), \(a_{i}\in\mathbb{Z}_{2},1\leq i\leq 7\); \(a_{1}=a_{7}=0\) if \(m<2n\), \(a_{2}=a_{4}=0\) if \(l<2n\), \(a_{3}=0\) if \(n<m\).
Proof.: If \(d_{r_{1}}(1\otimes a)=0\) & \(d_{r_{2}}(1\otimes b)\neq 0\), then either \(r_{2}=m-n+1\) or \(r_{2}=m+1\). In this theorem, we consider two cases: (i) \(d_{r_{3}}(1\otimes c)=0\) and (ii) \(d_{r_{3}}(1\otimes c)\neq 0\).
**Case (i):**\(d_{r_{3}}(1\otimes c)=0\).
First, suppose that \(r_{2}=m-n+1\). We must have \(n<m\leq l\), and \(d_{m-n+1}(1\otimes b)=t^{m-n+1}\otimes a\). So, \(d_{m-n+1}(1\otimes bc)=t^{m-n+1}\otimes ac\). Assume that \(d_{m+n-l+1}\) is nontrivial. Then, we have \(d_{m+n-l+1}(1\otimes ab)=t^{n+m-l+1}\otimes c\). As \(G\) acts freely on \(X\), we must have \(d_{m+n+l+1}(1\otimes abc)=t^{n+m+l+1}\otimes 1\). Thus, \(E_{n+m+l+2}^{*,*}=E_{\infty}^{*,*}\) and \(E_{\infty}^{p,q}\cong\mathbb{Z}_{2}\) for \(0\leq p\leq n+m-l\), \(q=l;0\leq p\leq m-n\), \(q=n,n+l;0\leq p\leq n+m+l\), \(q=0\), and zero otherwise. For \(n<m<l\), the cohomology groups \(H^{k}(X_{G})\) are given by
\[\begin{cases}\mathbb{Z}_{2}&0\leq k<n;m+j<k<l+j,j=0,n;m+l<k\leq m+n+l,\\ \mathbb{Z}_{2}\oplus\mathbb{Z}_{2}&l\leq k\leq n+m;n+j\leq k\leq m+j,j=0,l,\\ 0&\text{otherwise},\end{cases}\]
and, for \(n<m=l\), we have
\[H^{k}(X_{G})=\begin{cases}\mathbb{Z}_{2}&0\leq k<n;2m<k\leq 2m+n,\\ \mathbb{Z}_{2}\oplus\mathbb{Z}_{2}&n\leq k<m;m<k<n+m;n+m<k\leq 2m,\\ \mathbb{Z}_{2}\oplus\mathbb{Z}_{2}\oplus\mathbb{Z}_{2}&k=m,n+m,\\ 0&\text{otherwise}.\end{cases}\]
The permanent cocycles \(t\otimes 1,1\otimes a\), \(1\otimes c\) and \(1\otimes ac\) of \(E_{2}^{*,*}\) determine the elements \(x\in E_{\infty}^{1,0},u\in E_{\infty}^{0,n}\), \(v\in E_{\infty}^{0,l}\) and \(s\in E_{\infty}^{0,n+l}\), respectively. The total complex \(\mathrm{Tot}E_{\infty}^{*,*}\) is given by
\[\mathbb{Z}_{2}[x,u,v,s]/<x^{n+m+l+1},u^{2}+\gamma_{1}v,v^{2},s^{2},uv+\gamma_{ 2}s,us,vs,x^{m-n+1}u,x^{m-n+1}s,x^{m+n-l+1}v>,\]
where \(\deg\,x=1\), \(\deg\,u=n\), \(\deg\,v=l\) & \(\deg\,s=n+l\) and \(\gamma_{1},\gamma_{2}\in\mathbb{Z}_{2}\), \(\gamma_{1}=0\) if \(l\neq 2n\). Let \(y\in H^{n}(X_{G}),w\in H^{l}(X_{G})\) and \(z\in H^{n+l}(X_{G})\) such that \(i^{*}(y)=a,i^{*}(w)=c\) and \(i^{*}(z)=ac\), respectively. Clearly, \(I_{1}=y^{2}+b_{1}x^{2n}+b_{2}x^{2n-l}w+b_{3}x^{n}y=0\), \(I_{2}=w^{2}+b_{4}x^{2l}+b_{5}x^{m-n}z=0\), \(I_{3}=yw+b_{6}z+b_{7}x^{n+l}+b_{8}x^{n}w=0\), and \(I_{4}=yz+b_{9}x^{n}z+b_{10}x^{2n+l}\), \(b_{i}\in\mathbb{Z}_{2},1\leq i\leq 10\); \(b_{2}=0\) if \(l>2n,b_{3}=0\) if \(m<2n\), \(b_{5}=b_{8}=0\) if \(m<l\), and \(b_{9}=0\) if \(m<2n\). Then, the cohomology ring of the orbit space \(X/G\) is given by
\[\mathbb{Z}_{2}[x,y,w,z]/<x^{n+m+l+1},I_{j},z^{2},wz,x^{m-n+1}y,x^{m-n+1}z,x^{m+ n-l+1}w>_{1\leq j\leq 4.}\]
where \(\deg\,x=1\), \(\deg\,y=n\), \(\deg\,w=l\) and \(\deg\,z=n+l\). This realizes possibility (1) when \(j^{\prime}=l\).
If \(d_{m+n-l+1}\) is trivial, then \(d_{n+m+1}\) must be nontrivial. Therefore, we have \(d_{n+m+1}(1\otimes ab)=t^{n+m+1}\otimes 1\). Consequently, \(d_{n+m+1}(1\otimes abc)=t^{n+m+1}\otimes c\). Thus, \(E_{n+m+2}^{*,*}=E_{\infty}^{*,*}\), and hence \(E_{\infty}^{p,q}\cong\mathbb{Z}_{2}\) for \(0\leq p\leq n+m\), \(q=0,l;0\leq p\leq m-n\), \(q=n,n+l\), and zero otherwise. For \(n<m<l\leq m+n\), the cohomology groups \(H^{k}(X_{G})\) are given by
\[\begin{cases}\mathbb{Z}_{2}&0\leq k<n;m+j<k<l+j,j=0,n;m+l<k\leq m+n+l,\\ \mathbb{Z}_{2}\oplus\mathbb{Z}_{2}&n+j\leq k\leq m+j,j=0,l;l\leq k\leq n+m,\\ 0&\text{otherwise},\end{cases}\]
and, for \(l>m+n\), we have
\[H^{k}(X_{G})=\begin{cases}\mathbb{Z}_{2}&0\leq k<n;m+j<k\leq n+m+j,j=0,l;l \leq k<n+l,\\ \mathbb{Z}_{2}\oplus\mathbb{Z}_{2}&n+j\leq k\leq m+j,j=0,l,\\ 0&\text{otherwise}.\end{cases}\]
For \(n<m=l\), the cohomology groups are given by
\[H^{k}(X_{G})=\begin{cases}\mathbb{Z}_{2}&0\leq k<n;2m<k\leq 2m+n,\\ \mathbb{Z}_{2}\oplus\mathbb{Z}_{2}&n\leq k<m;m<k<n+m;n+m<k\leq 2m,\\ \mathbb{Z}_{2}\oplus\mathbb{Z}_{2}\oplus\mathbb{Z}_{2}&k=m,n+m,\\ 0&\text{otherwise}.\end{cases}\]
The permanent cocycles \(t\otimes 1,1\otimes a\), \(1\otimes c\) and \(1\otimes ac\) of \(E_{2}^{*,*}\) determine the elements \(x\in E_{\infty}^{1,0},u\in E_{\infty}^{0,n}\), \(v\in E_{\infty}^{0,l}\), and \(s\in E_{\infty}^{0,n+l}\), respectively. Thus, the total complex \(\mathrm{Tot}E_{\infty}^{*,*}\) is given by
\[\mathbb{Z}_{2}[x,u,v,s]/<x^{n+m+1},u^{2}+\gamma_{1}v,v^{2},s^{2},uv+\gamma_{2}s, us,vs,x^{m-n+1}u,x^{m-n+1}s>,\]
where deg \(x=1\), deg \(u=n\), deg \(v=l\)\(\&\) deg \(s=n+l\) and \(\gamma_{1},\gamma_{2}\in\mathbb{Z}_{2},\gamma_{1}=0\) if \(l\neq 2n\). Let \(y\in H^{n}(X_{G}),w\in H^{l}(X_{G})\) and \(z\in H^{n+l}(X_{G})\) such that \(i^{*}(y)=a,i^{*}(w)=c\) and \(i^{*}(z)=ac\), respectively. Clearly, we have \(I_{1}=y^{2}+b_{1}x^{2n}+b_{2}x^{2n-l}w+b_{3}x^{n}y=0\), \(I_{2}=w^{2}+b_{4}x^{l}w+b_{5}x^{l-n}z=0\), \(I_{3}=yw+b_{6}z+b_{7}x^{n+m}+b_{8}x^{n}w=0\), \(I_{4}=yz+b_{9}x^{n}z+b_{10}x^{2n}w=0\) and \(I_{5}=wz+b_{11}x^{n+m}w=0\), \(b_{i}\in\mathbb{Z}_{2},1\leq i\leq 11\); \(b_{2}=0\) if \(l>2n,b_{3}=0\) if \(m>2n\), \(b_{4}=0\) if \(l>n+m\), \(b_{5}=b_{7}=b_{11}=0\) if \(m<l\), and \(b_{9}=0\) if \(m<2n\). Thus, the cohomology ring of the orbit space \(X/G\) is given by
\[\mathbb{Z}_{2}[x,y,w,z]/<x^{n+m+1},I_{j},z^{2},x^{m-n+1}y,x^{m-n+1}z>_{1\leq j \leq 5},\]
where deg \(x=1\), deg \(y=n\), deg \(w=l\) and deg \(z=n+l\). This realizes possibility (1) when \(j^{\prime}=0\).
Next, we consider \(r_{2}=m+1\). We have \(d_{m+1}(1\otimes b)=t^{m+1}\otimes 1\). So, we get \(d_{m+1}(1\otimes ab)=t^{m+1}\otimes a\), \(d_{m+1}(1\otimes bc)=t^{m+1}\otimes c\) and \(d_{m+1}(1\otimes abc)=t^{n+1}\otimes ac\). Clearly, \(d_{r}=0\) for all \(r>m+1\). Thus, we get \(E_{m+2}^{*,*}=E_{\infty}^{*,*}\).
If \(n\leq m<l\), then \(E_{\infty}^{p,q}\cong\mathbb{Z}_{2}\) for \(0\leq p\leq m\), \(q=0,n,l,n+l\), and zero otherwise.
For \(l\leq m+n\), the cohomology groups \(H^{k}(X_{G})\) are given by
\[\begin{cases}\mathbb{Z}_{2}&0\leq k<n;m+j<k<l+j,j=0,n;m+l<k\leq m+n+l,\\ \mathbb{Z}_{2}\oplus\mathbb{Z}_{2}&n+j\leq k\leq m+j,j=0,l;l\leq k\leq n+m,\\ 0&\text{otherwise}.\end{cases}\]
and, for \(l>m+n\), we have
\[H^{k}(X_{G})=\begin{cases}\mathbb{Z}_{2}&j\leq k<n+j,j=0,l;m+j<k\leq m+n+j;j=0,l,\\ \mathbb{Z}_{2}\oplus\mathbb{Z}_{2}&n+j\leq k\leq m+j,j=0,l,\\ 0&\text{otherwise}.\end{cases}\]
The permanent cocycles \(t\otimes 1,1\otimes a\), \(1\otimes c\) and \(1\otimes ac\) of \(E_{2}^{*,*}\) determine the elements \(x\in E_{\infty}^{1,0},u\in E_{\infty}^{0,n}\), \(v\in E_{\infty}^{0,l}\) and \(s\in E_{\infty}^{0,n+l}\), respectively. Thus, the total complex is given by
\[\text{Tot }E_{\infty}^{*,*}=\mathbb{Z}_{2}[x,u,v,s]/<x^{m+1},u^{2}+\gamma_{1}v,v ^{2},s^{2},uv+\gamma_{2}s,us,vs>,\]
where deg \(x=1\), deg \(u=n\), deg \(v=l\), deg \(s=n+l\), and \(\gamma_{1},\gamma_{2}\in\mathbb{Z}_{2}\), \(\gamma_{1}=0\) if \(l\neq 2n\). Let \(y\in H^{n}(X_{G}),w\in H^{l}(X_{G})\) and \(z\in H^{n+l}(X_{G})\) such that \(i^{*}(y)=a,i^{*}(w)=c\), and \(i^{*}(z)=ac\), respectively. For \(n\leq m<l\), we have \(I_{1}=y^{2}+b_{1}x^{2n-l}w+b_{2}x^{n}y+b_{3}x^{2n}=0\), \(I_{2}=w^{2}+b_{4}x^{l-n}z=0\), \(I_{3}=yw+b_{5}z+b_{6}x^{n}w=0\), and \(I_{4}=yz+b_{7}x^{n}z+b_{8}x^{2n}w=0\)
\(b_{i}\in\mathbb{Z}_{2},0\leq i\leq 8\); \(b_{1}=0\) if \(l>2n\), \(b_{3}=0\) if \(m<2n\), \(b_{4}=0\) if \(l>m+n\) and \(b_{8}=0\) if \(m>2n\). Thus, the cohomology ring of the orbit space \(X/G\) is given by
\[\mathbb{Z}_{2}[x,y,w,z]/<x^{m+1},I_{j},z^{2},wz>_{1\leq j\leq 4},\]
where \(\deg\,\,x=1\), \(\deg\,\,y=n\), \(\deg\,\,w=l\), and \(\deg\,\,z=n+l\). This realizes possibility (2) by taking \(a_{i}=0\) for \(i=4,6,7,10,13\).
If \(n<m=l\), then \(E_{\infty}^{p,q}=\mathbb{Z}_{2}\) for \(0\leq p\leq m\), \(q=0,n,m,n+m\) and zero otherwise. Further, if \(n=m=l\), then \(E_{\infty}^{p,q}=\mathbb{Z}_{2}\) for \(0\leq p\leq n\), \(q=0,2n\); \(E_{\infty}^{p,q}=\mathbb{Z}_{2}\oplus\mathbb{Z}_{2}\) for \(0\leq p\leq n\), \(q=n\), and zero otherwise. For \(n<m=l\), the cohomology groups are given by
\[H^{k}(X_{G})=\begin{cases}\mathbb{Z}_{2}&0\leq k<n;2m<k\leq 2m+n,\\ \mathbb{Z}_{2}\oplus\mathbb{Z}_{2}&n\leq k<m;m<k<m+n;m+n<k\leq 2m,\\ \mathbb{Z}_{2}\oplus\mathbb{Z}_{2}\oplus\mathbb{Z}_{2}&k=m,m+n,\\ 0&\text{otherwise}.\end{cases}\]
and, for \(n=m=l\), we have
\[H^{k}(X_{G})=\begin{cases}\mathbb{Z}_{2}&0\leq k<n;2n<k\leq 3n,\\ \mathbb{Z}_{2}\oplus\mathbb{Z}_{2}&n<k<2n,\\ \mathbb{Z}_{2}\oplus\mathbb{Z}_{2}\oplus\mathbb{Z}_{2}&k=n,2n\\ 0&\text{otherwise}.\end{cases}\]
It is clear that for \(n<m=l\), the total complex is the same as in case when \(n\leq m<l\). For \(n=m=l\), the total complex is given by
\[\text{Tot}\,\,E_{\infty}^{*,*}=\mathbb{Z}_{2}[x,u,v,s]/<x^{n+1},u^{2}+\gamma_ {1}s,v^{2}+\gamma_{2}s,s^{2},uv+\gamma_{3}s,us,vs>,\]
where \(\deg\,\,x=1\), \(\deg\,\,u=\deg\,\,v=m\), \(\deg\,\,s=2m\), and \(\gamma_{1},\gamma_{2},\gamma_{3}\in\mathbb{Z}_{2}\). For \(n\leq m=l\), we have \(I_{1}=y^{2}+b_{1}x^{2n-m}w+b_{2}x^{n}y+b_{3}x^{2n}+b_{4}z=0\), \(I_{2}=w^{2}+b_{5}x^{m-n}z+b_{6}x^{m}w+b_{7}x^{n}y=0\), \(I_{3}=yw+b_{8}z+b_{9}x^{n}w+b_{10}x^{m}y=0,I_{4}=yz+b_{11}x^{n}z=0\) & \(I_{5}=wz+b_{12}x^{m}z=0,b_{i}\in\mathbb{Z}_{2},1\leq i\leq 12\); \(b_{1}=0\) if \(m>2n\), \(b_{3}=0\) if \(m<2n\) and \(b_{4}=b_{7}=0\) if \(n<m\). Thus, the cohomology ring of the orbit space \(X/G\) is given by
\[\mathbb{Z}_{2}[x,y,w,z]/<x^{m+1},I_{j},z^{2}>_{1\leq j\leq 5},\]
where \(\deg\,\,x=1\), \(\deg\,\,y=n\), \(\deg\,\,w=m,\&\deg\,\,z=m+n\). This realizes possibility (2).
**Case (ii):**\(d_{r_{3}}(1\otimes c)\neq 0\).
First, suppose that \(r_{2}=m-n+1\). Then, we must have \(n<m\leq l\), and \(d_{m-n+1}(1\otimes b)=t^{m-n+1}\otimes a\). Consequently, \(d_{m-n+1}(1\otimes bc)=t^{m-n+1}\otimes ac\). In this case, we have \(r_{3}=l-m-n+1\) or \(l+1\).
If \(r_{3}=l+1\), then \(d_{l+1}(1\otimes c)=t^{l+1}\otimes 1\) and \(d_{l+1}(1\otimes abc)=t^{l+1}\otimes ab\). Thus, \(E_{l+2}^{*,*}=E_{\infty}^{*,*}\).
If \(n<m<l\), then \(E_{\infty}^{p,q}\cong\mathbb{Z}_{2}\), for \(0\leq p\leq m-n,q=n,n+l;0\leq p\leq l,q=0,n+m\), and zero otherwise.
For \(l<m+n\), the cohomology groups \(H^{k}(X_{G})\) are given by
\[\begin{cases}\mathbb{Z}_{2}&0\leq k<n;m<k\leq l;m+n\leq k<n+l;m+l<k\leq m+n+l,\\ \mathbb{Z}_{2}\oplus\mathbb{Z}_{2}&n+j\leq k\leq m+j,j=0,l,\\ 0&\text{otherwise},\end{cases}\]
and, for \(l\geq m+n\), we have
\[H^{k}(X_{G})=\begin{cases}\mathbb{Z}_{2}&0\leq k<n;j<k<n+j,j=m,l;m+l<k\leq n+m+ l,\\ \mathbb{Z}_{2}\oplus\mathbb{Z}_{2}&n+j\leq k\leq m+j,j=0,l;m+n\leq k\leq l,\\ 0&\text{otherwise}.\end{cases}\]
If \(n<m=l\), then \(E_{\infty}^{p,q}\cong\mathbb{Z}_{2}\), for \(0\leq p\leq m,q=0;0\leq p\leq m-n,q=n;m-n<p\leq m,q=n+m\), and \(E_{\infty}^{p,q}\cong\mathbb{Z}_{2}\oplus\mathbb{Z}_{2}\), for \(0\leq p\leq m-n,q=n+m\), and zero otherwise. Thus, the cohomology groups of \(X_{G}\) are given by
\[H^{k}(X_{G})=\begin{cases}\mathbb{Z}_{2}&0\leq k<n;2m<k\leq 2m+n,\\ \mathbb{Z}_{2}\oplus\mathbb{Z}_{2}&n+j\leq k\leq m+j,j=0,m,\\ 0&\text{otherwise},\end{cases}\]
The permanent cocycles \(t\otimes 1,1\otimes a\), \(1\otimes ab\) and \(1\otimes ac\) of \(E_{2}^{*,*}\) determine the elements \(x\in E_{\infty}^{1,0},u\in E_{\infty}^{0,n}\), \(v\in E_{\infty}^{0,n+m}\) and \(s\in E_{\infty}^{0,n+l}\), respectively. Thus, for \(n<m\leq l\), the total complex is given by
\[\text{Tot}\ E_{\infty}^{*,*}=\mathbb{Z}_{2}[x,u,v,s]/<x^{l+1},u^{2},v^{2}+ \gamma_{1}s,s^{2},uv+\gamma_{2}s,us,vs,x^{m-n+1}u,x^{m-n+1}s>,\]
where \(\text{deg }x=1\), \(\text{deg }u=n\), \(\text{deg }v=n+m\), \(\text{deg }s=n+l\) and \(\gamma_{1},\gamma_{2}\in\mathbb{Z}_{2}\); \(\gamma_{1}=0\) if \(l\neq n+2m\) and \(\gamma_{2}=0\) if \(l\neq n+m\). Let \(y\in H^{n}(X_{G}),w\in H^{n+m}(X_{G})\) and \(z\in H^{n+l}(X_{G})\) such that \(i^{*}(y)=a,i^{*}(w)=c\), and \(i^{*}(z)=ac\), respectively. Clearly, we have \(I_{1}=y^{2}+b_{1}x^{2n}+b_{2}x^{n}y=0\), \(I_{2}=w^{2}+b_{3}x^{n+m}w+b_{4}x^{2n+2m}+b_{5}x^{2m+n-l}z=0\), \(I_{3}=yw+b_{6}x^{n}w+b_{7}x^{2n+m}+b_{8}x^{n+m-l}z=0\), \(I_{4}=yz+b_{9}x^{n+l-m}w+b_{10}x^{n}z=0,b_{i}\in\mathbb{Z}_{2},1\leq i\leq 10\); \(b_{1}=0\) if \(l<2n,b_{2}=0\) if \(m<2n\); \(b_{3}=0\) if \(l<n+m\) or \(m=l,b_{4}=0\) if \(l<2n+2m\) or \(m=l\), \(b_{5}=0\) if \(l>m+2n\) or \(m=l\), \(b_{7}=0\) if \(l<2n+m,b_{8}=0\) if \(l>n+m\) and \(b_{10}=0\) if \(m>2n\). Thus, the cohomology ring of the orbit space \(X/G\) is given by
\[\mathbb{Z}_{2}[x,y,w,z]/<x^{l+1},I_{j},z^{2},wz,x^{m-n+1}y,x^{m-n+1}z>_{1\leq j \leq 4},\]
where \(\text{deg }x=1\), \(\text{deg }y=n\), \(\text{deg }w=n+m\) and \(\text{deg }z=n+l\). This realizes possibility (3) when \(j^{\prime}=0\).
If \(r_{3}=l-m-n+1\), then we must have \(l>n+m\) and \(d_{l-n-m+1}(1\otimes c)=t^{l-n-m+1}\otimes ab\).
As \(G\) acts freely on \(X,\) we get \(d_{n+m+l+1}(1\otimes abc)=t^{n+m+l+1}\otimes 1.\) Thus, \(E_{n+m+l+2}^{*,*}=E_{\infty}^{*,*}.\) We get \(E_{\infty}^{p,q}\cong\mathbb{Z}_{2},\) for \(0\leq p\leq n+m+l,q=0;0\leq p\leq m-n,q=n,n+l;0\leq p\leq l-m-n,q=n+m,\) and zero otherwise. Consequently, the cohomology groups are given by
\[H^{k}(X_{G})=\begin{cases}\mathbb{Z}_{2}&0\leq k<n;j<k<n+j,j=m,l;m+l<k\leq n+m+l, \\ \mathbb{Z}_{2}\oplus\mathbb{Z}_{2}&n+j\leq k\leq m+j,j=0,l;m+n\leq k\leq l,\\ 0&\text{otherwise}.\end{cases}\]
The permanent cocycles \(t\otimes 1,1\otimes a,\)\(1\otimes ab\) and \(1\otimes ac\) of \(E_{2}^{*,*}\) determine the elements \(x\in E_{\infty}^{1,0},u\in E_{\infty}^{0,n},\)\(v\in E_{\infty}^{0,n+m}\) and \(s\in E_{\infty}^{0,n+l},\) respectively. Then, the total complex Tot \(E_{\infty}^{*,*}\) is given by
\[\mathbb{Z}_{2}[x,u,v,s]/<x^{n+m+l+1},u^{2},v^{2}+\gamma_{1}s,s^{2},uv,us,vs,x^ {m-n+1}u,x^{m-n+1}s,x^{l-m-n+1}v>,\]
where deg \(x=1,\) deg \(u=n,\) deg \(v=n+m,\) deg \(s=n+l\) and \(\gamma_{1}\in\mathbb{Z}_{2},\)\(\gamma_{1}=0\) if \(l\neq n+2m.\) Let \(y\in H^{n}(X_{G}),w\in H^{n+m}(X_{G})\) and \(z\in H^{n+l}(X_{G})\) such that \(i^{*}(y)=a,i^{*}(w)=c,\) and \(i^{*}(z)=ac,\) respectively. Clearly, \(I_{1}=y^{2}+b_{1}x^{2n}+b_{2}x^{n}y=0\),\(I_{2}=w^{2}+b_{3}x^{2n+2m}+b_{4}x^{n+m}w=0,\)\(I_{3}=yw+b_{5}x^{2n+m}+b_{6}x^{n}w=0,\)\(I_{4}=yz+b_{7}x^{2n+l}+b_{8}x^{n}z,b_{i}\in\mathbb{Z}_{2},1\leq i\leq 8;\)\(b_{2}=0\) if \(m<2n;\)\(b_{4}=0\) if \(l<2n+2m,\)\(b_{6}=0\) if \(l<2n+m,\) and \(b_{8}=0\) if \(m<2n.\) So, the cohomology ring \(H^{*}(X_{G})\) is given by
\[\mathbb{Z}_{2}[x,y,w,z]/<x^{n+m+l+1},I_{j},z^{2},wz,x^{m-n+1}y,x^{m-n+1}z,x^{l -m-n+1}w>_{1\leq j\leq 4}\]
where deg \(x=1,\) deg \(y=n,\) deg \(w=n+m\) and deg \(x=n+l.\) This realizes possibility (3) when \(j^{\prime}=m+n.\)
Finally, suppose that \(r_{2}=m+1.\) Clearly, \(r_{3}\neq l-m+1.\) Further, if \(r_{3}=l-m-n+1,\) then we get \(0=d_{m+1}\{(t^{l-m-n}\otimes ab)(t\otimes 1)\}=t^{l-n+2}\otimes a,\) which is not possible. So, \(r_{3}=l-n+1\) or \(l+1.\)
First, we consider \(r_{3}=l-n+1.\) If \(m<l-n,\) then \(d_{r_{3}}(1\otimes c)=0,\) which contradicts our hypothesis. So, we get \(l-n\leq m.\) As \(d_{l-n+1}\) is nontrivial, we get \(d_{l-n+1}(1\otimes c)=t^{l-n+1}\otimes a\) and \(d_{l-n+1}(1\otimes bc)=t^{l-n+1}\otimes ab.\) Also, we have \(d_{m+1}(1\otimes b)=t^{m+1}\otimes 1\) & \(d_{m+1}(1\otimes abc)=t^{m+1}\otimes ac.\) Thus, \(E_{m+2}^{*,*}=E_{\infty}^{*,*}.\) If \(n\leq m<l\) and \(l-n<m,\) then \(E_{\infty}^{p,q}\cong\mathbb{Z}_{2},\) for \(0\leq p\leq m,q=0,n+l;0\leq p\leq l-n,q=n,n+m,\) and zero otherwise. Thus, the cohomology groups \(H^{k}(X_{G})\) are given by
\[\begin{cases}\mathbb{Z}_{2}&0\leq k<n;m<k\leq l;m+n\leq k<n+l;m+l<k\leq m+n+l, \\ \mathbb{Z}_{2}\oplus\mathbb{Z}_{2}&n+j\leq k\leq m+j,j=0,l,\\ 0&\text{otherwise}.\end{cases}\]
The permanent cocycles \(t\otimes 1,1\otimes a\), \(1\otimes ab\) and \(1\otimes ac\) of \(E_{2}^{*,*}\) determine the elements \(x\in E_{\infty}^{1,0}\), \(u\in E_{\infty}^{0,n}\), \(v\in E_{\infty}^{0,n+m}\) and \(s\in E_{\infty}^{0,n+l}\), respectively. Then, the total complex Tot \(E_{\infty}^{*,*}\) is given by
\[\mathbb{Z}_{2}[x,u,v,s]/<x^{m+1},u^{2}+\gamma_{1}v,v^{2},s^{2},uv,us,vs,x^{l-n+1 }u,x^{l-n+1}v>,\]
where deg \(x=1\), deg \(u=n\), deg \(v=n+m\), deg \(s=n+l\) and \(\gamma_{1}\in\mathbb{Z}_{2}\), \(\gamma_{1}=0\) if \(n<m\). Let \(y\in H^{n}(X_{G}),w\in H^{n+m}(X_{G})\) and \(z\in H^{n+l}(X_{G})\) such that \(i^{*}(y)=a,i^{*}(w)=c\), and \(i^{*}(z)=ac\), respectively. We have \(I_{1}=y^{2}+b_{1}x^{2n}+b_{2}x^{n}y+b_{3}w=0,I_{2}=yw+b_{4}x^{n}w+b_{5}x^{m+n- l}z=0\), and \(I_{3}=yz+b_{6}x^{n}z+b_{7}x^{l+n-m}w=0\), \(b_{i}\in\mathbb{Z}_{2},1\leq i\leq 7\); \(b_{1}=b_{7}=0\) if \(m<2n\), \(b_{3}=0\) if \(n<m\) and \(b_{2}=b_{4}=0\) if \(l<2n\). Therefore, the cohomology ring \(H^{*}(X_{G})\) is given by
\[\mathbb{Z}_{2}[x,y,w,z]/<x^{m+1},I_{j},w^{2},z^{2},wz,x^{l-n+1}y,x^{l-n+1}w>_{ 1\leq j\leq 3}\]
where deg \(x=1\), deg \(y=n\), deg \(w=n+m\), and deg \(z=n+l\). This realizes possibility (4).
Next, if \(n\leq m<l\) and \(l-n=m\), then the cohomology groups and cohomology algebra are the same as in case (i) when \(r_{2}=m+1\) and \(l=m+n\).
If \(n<m=l\) and \(d_{l-n+1}(1\otimes c)\neq 0\), then \(l-n=m-n\) and cohomology groups and cohomology algebra are same as in the case (ii) when \(n<m=l\).
Next, if \(n<m=l\) and \(d_{l-n+1}(1\otimes c)=0\), then we must have \(r_{2}=r_{3}=m+1\) and hence, \(d_{m+1}(1\otimes b)=d_{m+1}(1\otimes c)=t^{m+1}\otimes 1\). The cohomology groups and cohomology algebra are same as in the case (i) when \(r_{2}=m+1\) and \(n<m=l\).
Finally, if \(n=m=l\), then we must have \(r_{2}=r_{3}=n+1\) and hence, \(d_{n+1}(1\otimes b)=d_{n+1}(1\otimes c)=t^{n+1}\otimes 1\). The cohomology groups and cohomology algebra are the same as in the case (i) when \(n=m=l\).
**Theorem 3.8**.: Let \(G=\mathbb{Z}_{2}\) act freely on a finitistic space \(X\sim_{2}\mathbb{S}^{n}\times\mathbb{S}^{m}\times\mathbb{S}^{l}\), where \(n\leq m\leq l\). If \(d_{r_{1}}(1\otimes a)=d_{r_{2}}(1\otimes b)=0\) and \(d_{r_{3}}(1\otimes c)\neq 0\), then the cohomology ring of the orbit space \(X/G\) is isomorphic to one of the following graded commutative algebras:
1. \(\mathbb{Z}_{2}[x,y,w,z]/I\), where \(I\) is homogeneous ideal given by: \(I\,=<\,Q(x),I_{j},x^{l-m-n+1}z,c_{0}x^{l+n-m+1}w,c_{1}x^{l+m-n+1}y,c_{2}x^{l +1}y,c_{3}x^{l+1}w\,>_{1\leq j\leq 6}\), where deg \(x=1\), deg \(y=n\), deg \(w=m\) & deg \(z=n+m\) and \(I_{1}=y^{2}+a_{1}x^{2n}+a_{2}x^{n}y+a_{3}x^{2n-m}w+a_{4}z,I_{2}=w^{2}+a_{5}x^{2m }+a_{6}x^{2m-n}y+a_{7}x^{m}w+a_{8}x^{m-n}z,I_{3}=z^{2}\) + \(a_{9}x^{2n+2m}\) + \(a_{10}x^{n+2m}y+a_{11}x^{2n+m}w+a_{12}x^{n+m}z,I_{4}=yw+a_{13}x^{n+m}+a_{14}x^{ m}y+a_{15}x^{n}w+a_{16}z,I_{5}=yz+a_{17}x^{2n+m}+a_{18}x^{n+m}y+a_{19}x^{2n}w+a_{20}x^{n }z\) & \(I_{6}=wz+a_{21}x^{2m+n}+a_{22}x^{2m}y+a_{23}x^{n+m}w+a_{24}x^{m}z,a_{i}\in \mathbb{Z}_{2},1\leq i\leq 24\); \(a_{4}=0\)
if \(n<m,a_{8}=0\) if \(l<2m\), \(a_{12}=0\) if \(l<2m+2n\), \(a_{20}=0\) if \(l<2n+m\), and \(a_{24}=0\) if \(l<n+2m\); \(Q(x)=x^{l+1+j^{\prime}},j^{\prime}=n+m,m\) or \(n\). If \(j^{\prime}=n+m\), then either \(\{c_{0}=c_{1}=1\) & \(c_{2}=c_{3}=0\) with \(a_{7}=0\) if \(n+l<2m,a_{10}=0\) if \(l<2n+m,a_{11}=0\) if \(l<n+2m\) & \(a_{23}=0\) if \(l<2m\}\) or \(\{c_{0}=c_{1}=0\) & \(c_{2}=c_{3}=1\) with \(a_{10}=0\) if \(l<2n+m,a_{11}=0\) if \(l<n+2m\) & \(a_{22}=0\) if \(l<2m\}\). If \(j^{\prime}=m\), then \(c_{0}=1\) & \(c_{1}=c_{2}=c_{3}=0\) with \(a_{7}=0\) if \(n+l<2m,a_{9}=0\) if \(l<2m+m,a_{11}=0\) if \(l<n+2m\) & \(a_{23}=0\) if \(l<2m\). If \(j^{\prime}=n\), then \(c_{1}=1\) & \(c_{0}=c_{2}=c_{3}=0\) with \(a_{5}=0\) if \(n+l<2m,a_{9}=0\) if \(l<2m+n,a_{10}=0\) if \(l<2n+m\) & \(a_{21}=0\) if \(l<2m\).
2. \(\mathbb{Z}_{2}[x,y,w,z]/<Q(x),I_{j},c_{0}x^{m+l-n+1}y,x^{l-m+1}z,x^{l-m+1}w>_{1 \leq j\leq 6}\), where deg \(x=1\), deg \(y=n\), deg \(w=m\) & deg \(z=n+m\) and \({I_{j}}^{\prime}s,1\leq j\leq 6\) are same as in possibility (1), with \(a_{3}=0\) if \(l<2n,a_{4}=0\) if \(n<m,a_{7}=a_{24}=0\) if \(l<2m,a_{8}=0\) if \(n+l<2m,a_{11}=0\) if \(l<2n+2m,a_{12}=a_{23}=0\) if \(l<n+2m,a_{15}=a_{20}=0\) if \(l<m+n\) & \(a_{19}=0\) if \(l<2n+m\), and \(Q(x)=x^{m+l+j^{\prime}+1},j^{\prime}=0\) or \(n\). If \(j^{\prime}=0\), then \(c_{0}=0\) with \(a_{9}=0\) if \(l<2n+m,a_{10}=a_{21}=0\) if \(l<n+m\) & \(a_{17}=0\) if \(l<2n\). If \(j^{\prime}=n\), then \(c_{0}=1\) with \(a_{9}=a_{22}=0\) if \(l<n+m,a_{10}=0\) if \(l<2n+m\) & \(a_{18}=0\) if \(l<2n+m\).
3. \(\mathbb{Z}_{2}[x,y,w,z]/<Q(x),I_{j},c_{0}x^{n+l-m+1}w,x^{l-n+1}y,x^{l-n+1}z>_{1 \leq j\leq 6}\), where deg \(x=1\), deg \(y=n\), deg \(w=m\) & deg \(z=n+m\) and \({I_{j}}^{\prime}s,1\leq j\leq 6\) are same as in possibility (1), with \(a_{2}=a_{20}=0\) if \(l<2n,a_{4}=0\) if \(n<m,a_{6}=0\) if \(l<2m,a_{10}=0\) if \(l<2n+2m,a_{14}=a_{24}=0\) if \(l<n+m\) & \(a_{18}=0\) if \(l<2n+m\) and \(Q(x)=x^{n+1+j^{\prime}},j^{\prime}=0\) or \(m\). If \(j^{\prime}=0\), then \(c_{0}=0\) with \(a_{5}=0\) if \(n+l<2m,a_{9}=a_{12}=0\) if \(l<2m+n,a_{11}=a_{17}=0\) if \(l<n+m,a_{21}=0\) if \(l<2m\) & \(a_{22}=0\) if \(l<2n+m,a_{21}=0\) if \(l<2m\) & \(a_{23}=0\) if \(l<2m\).
4. \(\mathbb{Z}_{2}[x,y,w,z]/<x^{l+1},I_{j}>_{1\leq j\leq 6}\), where deg \(x=1\), deg \(y=n\), deg \(w=m\) & deg \(z=n+m\) and \({I_{j}}^{\prime}s,1\leq j\leq 6\) are same as in possibility (1), with \(a_{1}=a_{19}=0\) if \(l<2n,a_{4}=0\) if \(n<m,a_{5}=a_{22}=0\) if \(l<2m,a_{6}=0\) if \(n+l<2m,a_{9}=0\) if \(l<2n+2m,a_{10}=a_{21}=0\) if \(l<2m+n,a_{11}=a_{17}=0\) if \(l<2n+m\) & \(a_{12}=a_{13}=a_{18}=a_{23}=0\) if \(l<n+m\).
Proof.: If \(d_{r_{1}}(1\otimes a)=d_{r_{2}}(1\otimes b)=0\) and \(d_{r_{3}}(1\otimes c)\neq 0\), then we have following four cases: (i) \(r_{3}=l-m-n+1\), (ii) \(r_{3}=l-m+1\), (iii) \(r_{3}=l-n+1\), and (iv) \(r_{3}=l+1\).
**Case (i):**\(r_{3}=l-m-n+1\).
Clearly, \(n\leq m<l\) and \(n+m<l\). We have \(d_{l-m-n+1}(1\otimes c)=t^{l-m-n+1}\otimes ab\).
First, we assume that \(d_{n+l-m+1}\) is nontrivial. So, we have \(d_{n+l-m+1}(1\otimes ac)=t^{n+l-m+1}\otimes b\). Now, we have either \(d_{m+l-n+1}(1\otimes bc)=0\) or \(d_{m+l-n+1}(1\otimes bc)=t^{m+l-n+1}\otimes a\).
Let \(d_{m+l-n+1}(1\otimes bc)=t^{m+l-n+1}\otimes a\). As \(G\) acts freely on \(X\), we must have \(d_{n+m+l+1}(1\otimes abc)=t^{n+m+l+1}\otimes 1\). Thus, \(E_{n+m+l+2}^{*,*}=E_{\infty}^{*,*}\). For \(n<m<l\), we have \(E_{\infty}^{p,q}\cong\mathbb{Z}_{2}\), \(0\leq p\leq n+m+l,q=0;0\leq p\leq m+l-n,q=n;0\leq p\leq n+l-m,q=m\) and \(0\leq p\leq l-m-n,q=n+m\), and zero otherwise. For \(n=m<l\), we have \(E_{\infty}^{p,q}\cong\mathbb{Z}_{2}\), \(0\leq p\leq 2n+l,q=0\) and \(0\leq p\leq l-2n,q=2n;E_{\infty}^{p,q}\cong\mathbb{Z}_{2}\oplus\mathbb{Z}_{2}\), \(0\leq p\leq l,q=n\), and zero otherwise.
For \(n<m<l\), the cohomology groups are given by
\[H^{k}(X_{G})=\begin{cases}\mathbb{Z}_{2}&0\leq k<n;m+l<k\leq m+n+l,\\ \mathbb{Z}_{2}\oplus\mathbb{Z}_{2}&n\leq k<m;n+l<k\leq m+l,\\ \mathbb{Z}_{2}\oplus\mathbb{Z}_{2}\oplus\mathbb{Z}_{2}&m\leq k<m+n;l<k\leq n+l, \\ \mathbb{Z}_{2}\oplus\mathbb{Z}_{2}\oplus\mathbb{Z}_{2}\oplus\mathbb{Z}_{2}&m+n \leq k\leq l,\\ 0&\text{otherwise},\end{cases}\]
and, for \(n=m<l\), we have
\[H^{k}(X_{G})=\begin{cases}\mathbb{Z}_{2}&0\leq k<n;n+l<k\leq 2n+l,\\ \mathbb{Z}_{2}\oplus\mathbb{Z}_{2}\oplus\mathbb{Z}_{2}&n\leq k<2n;l<k\leq n+l, \\ \mathbb{Z}_{2}\oplus\mathbb{Z}_{2}\oplus\mathbb{Z}_{2}\oplus\mathbb{Z}_{2}&2n \leq k\leq l,\\ 0&\text{otherwise}.\end{cases}\]
The permanent cocycles \(t\otimes 1,1\otimes a\), \(1\otimes b\) and \(1\otimes ab\) of \(E_{2}^{*,*}\) determine the elements \(x\in E_{\infty}^{1,0}\), \(u\in E_{\infty}^{0,n}\), \(v\in E_{\infty}^{0,m}\) and \(s\in E_{\infty}^{0,n+m}\), respectively. Thus, the total complex
\(E_{\infty}^{*,*}=\mathbb{Z}_{2}[x,u,v,s]/I\), where ideal \(I\) is given by
\(<x^{n+m+l+1},u^{2}+\gamma_{1}v+\gamma_{2}s,v^{2}+\gamma_{3}s,s^{2},uv+\gamma_ {4}s,us,vs,x^{l+m-n+1}u,x^{l+n-m+1}v,x^{l-m-n+1}s>\)
where \(\deg\,x=1\), \(\deg\,u=n\), \(\deg\,v=m\), \(\deg\,s=n+m\), and \(\gamma_{i}\in\mathbb{Z}_{2},1\leq i\leq 4\);
\(\gamma_{1}=0\) if \(m\neq 2n\), and \(\gamma_{2}=\gamma_{3}=0\) if \(n<m\). Let \(y\in H^{n}(X_{G}),w\in H^{m}(X_{G})\) and \(z\in H^{n+m}(X_{G})\) such that \(i^{*}(y)=a,i^{*}(w)=b\), and \(i^{*}(z)=ab\), respectively. Clearly, \(I_{1}=y^{2}+b_{1}x^{2n}+b_{2}x^{n}y+b_{3}x^{2n-m}w+b_{4}z=0\), \(I_{2}=w^{2}+b_{5}x^{2m}+b_{6}x^{2m-n}y+b_{7}x^{m}w+b_{8}x^{m-n}z=0\), \(I_{3}=z^{2}+b_{9}x^{2n+2m}+b_{10}x^{n+2m}y+b_{11}x^{2n+m}w+b_{12}x^{n+m}z=0\), \(I_{4}=yw+b_{13}x^{n+m}+b_{14}x^{m}y+b_{15}x^{n}w+b_{16}z=0\), \(I_{5}=yz+b_{17}x^{2n+m}+b_{18}x^{n+m}y+b_{19}x^{2n}w+b_{20}x^{n}z=0\) and \(I_{6}=wz+b_{21}x^{2m+n}+b_{22}x^{2m}y+b_{23}x^{n+m}w+b_{24}x^{m}z=0\), where \(b_{i}\in\mathbb{Z}_{2},1\leq i\leq 24\);
\(b_{4}=0\) if \(n<m,b_{7}=0\) if \(n+l<2m\), \(b_{8}=b_{23}=0\) if \(l<2m,b_{10}=b_{20}=0\) if \(l<2n+m,b_{11}=b_{24}=0\) if \(l<n+2m\), and \(b_{12}=0\) if \(l<2n+2m\). Thus, the cohomology ring \(H^{*}(X_{G})\) is given by
\[\mathbb{Z}_{2}[x,y,w,z]/<x^{n+m+l+1},I_{j},x^{l-m-n+1}z,x^{l+n-m+1}w,x^{l+m-n+1}y>_{1\leq j\leq 6},\]
where deg \(x=1\), deg \(y=n\), deg \(w=m\), and deg \(z=n+m\). This realizes possibility (1) when \(j^{\prime}=n+m\) with \(c_{0}=c_{1}=1\) and \(c_{2}=c_{3}=0\).
Now, let \(d_{m+l-n+1}(1\otimes bc)=0\). Then, we must have \(d_{m+l+1}(1\otimes bc)=t^{m+l+1}\otimes 1\) and \(d_{m+l+1}(1\otimes abc)=t^{m+l+1}\otimes a\). Thus, \(E_{m+l+2}^{*,*}=E_{\infty}^{*,*}\). For \(n<m<l\), we have \(E_{\infty}^{p,q}\cong\mathbb{Z}_{2}\), \(0\leq p\leq m+l,q=0,n;0\leq p\leq n+l-m,q=m\) and \(0\leq p\leq l-m-n,q=n+m\), and zero otherwise. For \(n=m<l\), we have \(E_{\infty}^{p,q}\cong\mathbb{Z}_{2}\), \(0\leq p\leq n+l,q=0;0\leq p\leq l-2n,q=2n;l<p\leq n+l,q=n;E_{\infty}^{p,q}\cong\mathbb{Z}_{2}\oplus\mathbb{Z}_{2}\), \(0\leq p\leq l,q=n\), and zero otherwise. Note that the cohomology groups are the same as above. The total complex Tot \(E_{\infty}^{*,*}=\mathbb{Z}_{2}[x,u,v,s]/I\), where \(I\) is an ideal given by
\[<x^{m+l+1},u^{2}+\gamma_{1}v+\gamma_{2}s,v^{2}+\gamma_{3}s,s^{2},uv+\gamma_{4 }s,us,vs,x^{l+n-m+1}v,x^{l-m-n+1}s>,\]
where deg \(x=1\), deg \(u=n\), deg \(v=m\) & deg \(s=n+m\), and \(\gamma_{i}\in\mathbb{Z}_{2},1\leq i\leq 4\); \(\gamma_{1}=0\) if \(m\neq 2n\), and \(\gamma_{2}=\gamma_{3}=0\) if \(n<m\). The ideals \({I_{j}}^{\prime}s,1\leq j\leq 6\) are also same as above with conditions: \(b_{4}=0\) if \(n<m\), \(b_{7}=0\) if \(m>2n\), \(b_{8}=b_{23}=0\) if \(l<2m\), \(b_{9}=0\) if \(l<2n+m\), \(b_{11}=0\) if \(l<n+2m\), \(b_{12}=0\) if \(l<2n+2m\), \(b_{20}=0\) if \(l<2n+m\), and \(b_{24}=0\) if \(l<n+2m\). Thus, the cohomology ring \(H^{*}(X_{G})\) is given by
\[\mathbb{Z}_{2}[x,y,w,z]/<x^{m+l+1},I_{j},x^{l-m-n+1}z,x^{l+n-m+1}w>_{1\leq j\leq 6},\]
where deg \(x=1\), deg \(y=n\), deg \(w=m\), and deg \(z=n+m\). This realizes possibility (1) when \(j^{\prime}=m\).
Now, assume that \(d_{n+l-m+1}\) is trivial. We have either \(d_{l+1}(1\otimes ac)=0\) or \(d_{l+1}(1\otimes ac)=t^{l+1}\otimes a\).
Let \(d_{l+1}(1\otimes ac)=t^{l+1}\otimes a\). Then, \(d_{l+1}(1\otimes bc)=t^{l+1}\otimes b\), and \(d_{n+m+l+1}(1\otimes abc)=t^{n+m+l+1}\otimes 1\). Thus, \(E_{n+m+l+2}^{*,*}=E_{\infty}^{*,*}\). For \(n<m<l\), we have \(E_{\infty}^{p,q}\cong\mathbb{Z}_{2}\), \(0\leq p\leq n+m+l,q=0;0\leq p\leq l,q=n,m\) and \(0\leq p\leq l-m-n,q=n+m\), and zero otherwise. For \(n=m<l\), we have \(E_{\infty}^{p,q}\cong\mathbb{Z}_{2}\), \(0\leq p\leq 2n+l,q=0;0\leq p\leq l-2n,q=2n;E_{\infty}^{p,q}\cong\mathbb{Z}_{2} \oplus\mathbb{Z}_{2}\), \(0\leq p\leq l,q=n\), and zero otherwise. The cohomology groups are same as above. The total complex \(\text{Tot}E_{\infty}^{*,*}=\mathbb{Z}_{2}[x,u,v,s]/I\), where \(I\) is an ideal given by
\[<x^{n+m+l+1},u^{2}+\gamma_{1}v+\gamma_{2}s,v^{2}+\gamma_{3}s,s^{2},uv+\gamma_{ 4}s,us,vs,x^{l+1}u,x^{l+1}v,x^{l-m-n+1}s>,\]
where deg \(x=1\), deg \(u=n\), deg \(v=m\) & deg \(s=n+m\), and \(\gamma_{i}\in\mathbb{Z}_{2},1\leq i\leq 4\); \(\gamma_{1}=0\) if \(m\neq 2n\), and \(\gamma_{2}=\gamma_{3}=0\) if \(n<m\). The ideals \(I_{j}{}^{\prime}s;1\leq j\leq 6\) are same as above with conditions: \(b_{4}=0\) if \(n<m,b_{8}=0\) if \(l<2m,b_{10}=b_{24}=0\) if \(l<n+2m\), \(b_{11}=b_{20}=0\) if \(l<2n+m\), \(b_{12}=0\) if \(l<2n+2m\), and \(b_{22}=0\) if \(l<2m\). So, the cohomology ring \(H^{*}(X_{G})\) is given by
\[\mathbb{Z}_{2}[x,y,w,z]/<x^{n+m+l+1},I_{j},x^{l-m-n+1}z,x^{l+1}y,x^{l+1}w>_{1\leq j\leq 6}\]
where deg \(x=1\), deg \(y=n\), deg \(w=m\), and deg \(z=n+m\). This realizes possibility (1) when \(j^{\prime}=n+m\) with \(c_{0}=c_{1}=0\) and \(c_{2}=c_{3}=1\).
Now, let \(d_{l+1}(1\otimes ac)=0\). Then, we must have \(d_{m+l-n+1}(1\otimes bc)=t^{m+l-n+1}\otimes a\). Consequently, we get \(d_{n+l+1}(1\otimes ac)=t^{n+l+1}\otimes 1\) and \(d_{n+l+1}(1\otimes abc)=t^{n+l+1}\otimes b\). Thus, \(E_{n+l+2}^{*,*}=E_{\infty}^{*,*}\). For \(n\leq m<l\), we have \(E_{\infty}^{p,q}\cong\mathbb{Z}_{2}\), \(0\leq p\leq n+l,q=0,m;0\leq p\leq m+l-n,q=n\); \(0\leq p\leq l-m-n,q=n+m\), and zero otherwise. The cohomology groups are same as above. The total complex \(\mathrm{Tot}E_{\infty}^{*,*}\) is given by
\[\mathbb{Z}_{2}[x,u,v,s]/<x^{n+l+1},u^{2}+\gamma_{1}v+\gamma_{2}s,v^{2}+\gamma _{3}s,s^{2},uv+\gamma_{4}s,us,vs,x^{l+m-n+1}u,x^{l-m-n+1}s>\]
where deg \(x=1\), deg \(u=n\), deg \(v=m\), deg \(s=n+m\), and \(\gamma_{i}\in\mathbb{Z}_{2},1\leq i\leq 4\); \(\gamma_{1}=0\) if \(m\neq 2n\), and \(\gamma_{2}=\gamma_{3}=0\) if \(m\neq n\). The ideals \(I_{j}{}^{\prime}s,1\leq j\leq 6\) are same as above with conditions: \(b_{4}=0\) if \(n<m,b_{5}=0\) if \(n+l<2m,b_{8}=b_{21}=0\) if \(l<2m,b_{9}=b_{24}=0\) if \(l<n+2m\), \(b_{10}=0\) if \(l<2n+m\), \(b_{12}=0\) if \(l<2n+2m\), and \(b_{20}=0\) if \(l<2n+m\). Thus, the cohomology ring \(H^{*}(X_{G})\) is given by
\[\mathbb{Z}_{2}[x,y,w,z]/<x^{n+l+1},I_{j},x^{l-m-n+1}z,x^{m+l-n+1}y>_{1\leq j\leq 6}\]
where deg \(x=1\), deg \(y=n\), deg \(w=m\), deg \(z=n+m\). This realizes possibility (1) when \(j^{\prime}=n\).
**Case (ii):**\(r_{3}=l-m+1\).
Clearly, \(n\leq m<l\). We have \(d_{l-m+1}(1\otimes c)=t^{l-m+1}\otimes b\) and \(d_{l-m+1}(1\otimes ac)=t^{l-m+1}\otimes ab\). Now, we have either \(d_{m+l-n+1}(1\otimes bc)=0\) or \(d_{m+l-n+1}(1\otimes bc)=t^{m+l-n+1}\otimes a\).
First, let \(d_{m+l-n+1}(1\otimes bc)=t^{m+l-n+1}\otimes a\). It is clear that \(d_{n+m+l+1}(1\otimes abc)=t^{n+m+l+1}\otimes 1\). Thus, \(E_{n+m+l+2}^{*,*}=E_{\infty}^{*,*}\). For \(n<m<l\), we have \(E_{\infty}^{p,q}\cong\mathbb{Z}_{2}\), \(0\leq p\leq n+m+l,q=0;0\leq p\leq m+l-n,q=n\); \(0\leq p\leq l-m,q=m,n+m\), and zero otherwise. For \(n=m<l\), we have \(E_{\infty}^{p,q}\cong\mathbb{Z}_{2}\), \(0\leq p\leq 2n+l,q=0;0\leq p\leq l-n,q=2n,l-n<p\leq l,q=n;E_{\infty}^{p,q}\cong \mathbb{Z}_{2}\oplus\mathbb{Z}_{2}\), \(0\leq p\leq l-n,q=n\), and zero otherwise.
For \(n<m<l<n+m\), the cohomology groups are given by
\[H^{k}(X_{G})=\begin{cases}\mathbb{Z}_{2}&0\leq k<n;m+l<k\leq m+n+l,\\ \mathbb{Z}_{2}\oplus\mathbb{Z}_{2}&n\leq k<m;l<k<n+m;n+l<k\leq m+l,\\ \mathbb{Z}_{2}\oplus\mathbb{Z}_{2}\oplus\mathbb{Z}_{2}&m+j\leq k\leq l+j,j=0,n, \\ 0&\text{otherwise},\end{cases}\]
and, for \(n=m<l<2n\), we have
\[H^{k}(X_{G})=\begin{cases}\mathbb{Z}_{2}&0\leq k<n;n+l<k\leq 2n+l,\\ \mathbb{Z}_{2}\oplus\mathbb{Z}_{2}&l<k<2n,\\ \mathbb{Z}_{2}\oplus\mathbb{Z}_{2}\oplus\mathbb{Z}_{2}&n+j\leq k\leq l+j,j=0,n, \\ 0&\text{otherwise}.\end{cases}\]
For \(l\geq n+m\), the cohomology groups are similar to case (i). The total complex \(\text{Tot}E_{\infty}^{*,*}=\mathbb{Z}_{2}[x,u,v,s]/I\), where ideal \(I\) is given by
\[<x^{n+m+l+1},u^{2}+\gamma_{1}v+\gamma_{2}s,v^{2}+\gamma_{3}s,s^{2},uv+\gamma_ {4}s,us,vs,x^{l+m-n+1}u,x^{l-m+1}v,x^{l-m+1}s>,\]
where \(\deg\,x=1\), \(\deg\,u=n\), \(\deg\,v=m\), \(\deg\,s=n+m\), and \(\gamma_{i}\in\mathbb{Z}_{2}\), \(1\leq i\leq 4\); \(\gamma_{1}=0\) if \(m\neq 2n\), and \(\gamma_{2}=\gamma_{3}=0\) if \(n<m\). The ideals \(I_{j}^{\,\prime}s,1\leq j\leq 6\) are same as in case (i) with conditions: \(b_{3}=b_{18}=0\) if \(l<2n\), \(b_{4}=0\) if \(n<m,b_{7}=b_{24}=0\) if \(l<2m,b_{8}=0\) if \(n+l<2m,b_{9}=b_{15}=b_{20}=b_{22}=0\) if \(l<n+m\), \(b_{10}=b_{19}=0\) if \(l<2n+m,b_{11}=0\) if \(l<2n+2m\), and \(b_{12}=b_{23}=0\) if \(l<n+2m\). Thus, the cohomology ring \(H^{*}(X_{G})\) is given by
\[\mathbb{Z}_{2}[x,y,w,z]/<x^{n+m+l+1},I_{j},x^{m+l-n+1}y,x^{l-m+1}w,x^{l-m+1}z>_{1\leq j\leq 6}\]
where \(\deg\,x=1\), \(\deg\,y=n\), \(\deg\,w=m\), and \(\deg\,z=n+m\). This realizes possibility (2) when \(j^{\prime}=n\).
Now, let \(d_{m+l-n+1}(1\otimes bc)=0\). Then, we must have \(d_{m+l+1}(1\otimes bc)=t^{m+l+1}\otimes 1\) and \(d_{m+l+1}(1\otimes abc)=t^{m+l+1}\otimes a\). Thus, \(E_{m+l+2}^{*,*}=E_{\infty}^{*,*}\). For \(n<m<l\), we have \(E_{\infty}^{p,q}\cong\mathbb{Z}_{2}\), if \(0\leq p\leq m+l,q=0,n;0\leq p\leq l-m,q=m,n+m\), and zero otherwise. For \(n=m<l\), we have \(E_{\infty}^{p,q}\cong\mathbb{Z}_{2}\), \(0\leq p\leq n+l,q=0;l-n<p\leq n+l,q=n;0\leq p\leq l-n,q=n+m;E_{\infty}^{p,q} \cong\mathbb{Z}_{2}\oplus\mathbb{Z}_{2}\), \(0\leq p\leq l-n,q=n\), and zero otherwise. For \(l<n+m\), the cohomology groups are same as case (ii) when \(d_{m+l-n+1}\) is nontrivial, and for \(l\geq n+m\), the cohomology groups are similar to case (i). Thus, the total complex Tot \(E_{\infty}^{*,*}=\mathbb{Z}_{2}[x,u,v,s]/I\), where ideal \(I\) is given by
\[<x^{m+l+1},u^{2}+\gamma_{1}v+\gamma_{2}s,v^{2}+\gamma_{3}s,s^{2},uv+\gamma_{4 }s,us,vs,x^{l-m+1}v,x^{l-m+1}s>,\]
where \(\deg\,x=1\), \(\deg\,\,u=n\), \(\deg\,\,v=m\), \(\deg\,\,s=n+m\), and \(\gamma_{i}\in\mathbb{Z}_{2}\), \(1\leq i\leq 4\); \(\gamma_{1}=0\) if \(m\neq 2n\), and \(\gamma_{2}=\gamma_{3}=0\) if \(n<m\). The ideals \({I_{j}}^{\prime}s,1\leq j\leq 6\) are same as in case (i) with conditions: \(b_{3}=b_{17}=0\) if \(l<2n,b_{4}=0\) if \(n<m,b_{7}=b_{24}=0\) if \(l<2m,b_{8}=0\) if \(n+l<2m,b_{9}=b_{19}=0\) if \(l<2n+m,b_{10}=b_{15}=b_{20}=b_{21}=0\) if \(l<m+n,b_{11}=0\) if \(l<2n+2m\), \(b_{12}=b_{23}=0\) if \(l<2m+n\), and \(b_{17}=0\) if \(l<2n\). Thus, the cohomology ring \(H^{*}(X_{G})\) is given by
\[\mathbb{Z}_{2}[x,y,w,z]/<x^{m+l+1},I_{j},x^{l-m+1}w,x^{l-m+1}z>_{1\leq j\leq 6},\]
where \(\deg\,\,x=1\), \(\deg\,\,y=n\), \(\deg\,\,w=m\) and \(\deg\,\,z=n+m\). This realizes possibility (2) when \(j^{\prime}=0\).
**Case (iii):**\(r_{3}=l-n+1\).
Clearly, \(n<l\). We have \(d_{l-n+1}(1\otimes c)=t^{l-n+1}\otimes a\) and \(d_{l-n+1}(1\otimes bc)=t^{l-n+1}\otimes ab\). We have either \(d_{n+l-m+1}(1\otimes ac)=0\) or \(d_{n+l-m+1}(1\otimes ac)=t^{n+l-m+1}\otimes b\).
First, let \(d_{n+l-m+1}(1\otimes ac)=t^{n+l-m+1}\otimes b\). Then, we must have \(d_{n+m+l+1}(1\otimes abc)=t^{n+m+l+1}\otimes 1\). Thus, \(E_{n+m+l+2}^{*,*}=E_{\infty}^{*,*}\). For \(n<m\leq l\), we have \(E_{\infty}^{p,q}\cong\mathbb{Z}_{2}\) if \(0\leq p\leq n+m+l,q=0;0\leq p\leq l-n,q=n,n+m;0\leq p\leq n+l-m,q=m\), and zero otherwise. For \(n=m<l\) we have \(E_{\infty}^{p,q}\cong\mathbb{Z}_{2}\) if \(0\leq p\leq 2n+l,q=0;l-n<p\leq l,q=n;0\leq p\leq l-n,q=2n;E_{\infty}^{p,q}\cong \mathbb{Z}_{2}\oplus\mathbb{Z}_{2}\) if \(0\leq p\leq l-n,q=n\), and zero otherwise. For \(l\geq n+m\), the cohomology groups are similar to case (i), and for \(l<n+m\), the cohomology groups are same as in case (ii). Thus, the total complex \(\text{Tot}E_{\infty}^{*,*}=\mathbb{Z}_{2}[x,u,v,s]/I\), where \(I\) is given by
\[<x^{n+m+l+1},u^{2}+\gamma_{1}v+\gamma_{2}s,v^{2}+\gamma_{3}s,s^{2},uv+\gamma_ {4}s,us,vs,x^{l-n+1}u,x^{n+l-m+1}v,x^{l-n+1}s>,\]
where \(\deg\,\,x=1\), \(\deg\,\,u=n\), \(\deg\,\,v=m\), \(\deg\,\,s=n+m\), and \(\gamma_{i}\in\mathbb{Z}_{2}\), \(1\leq i\leq 4\); \(\gamma_{1}=0\) if \(m\neq 2n\), and \(\gamma_{2}=\gamma_{3}=0\) if \(n<m\). Clearly, the ideals \(I_{j},1\leq j\leq 6\) are same as in case (i) with conditions: \(b_{2}=b_{20}=0\) if \(l<2n\), \(b_{4}=0\) if \(n<m,b_{6}=b_{23}=0\) if \(l<2m,b_{7}=0\) if \(n+l<2m,b_{9}=b_{14}=b_{19}=b_{24}=0\) if \(l<n+m,b_{10}=0\) if \(l<2n+2m\), \(b_{11}=b_{22}=0\) if \(l<2m+n\) and \(b_{12}=b_{18}=0\) if \(l<2n+m\). Thus, the cohomology ring \(H^{*}(X_{G})\) is given by
\[\mathbb{Z}_{2}[x,y,w,z]/<x^{n+m+l+1},I_{j},x^{l-n+1}y,x^{n+l-m+1}w,x^{l-n+1}z>_{1\leq j\leq 6}\]
where \(\deg\,\,x=1\), \(\deg\,\,y=n\), \(\deg\,\,w=m\), and \(\deg\,\,z=n+m\). This realizes possibility (3) when \(j^{\prime}=m\).
Now, let \(d_{n+l-m+1}(1\otimes ac)=0\). Then, we must have \(d_{n+l+1}(1\otimes ac)=t^{n+l+1}\otimes 1\) and \(d_{n+l+1}(1\otimes abc)=t^{n+l+1}\otimes b\). Thus, \(E_{n+l+2}^{*,*}=E_{\infty}^{*,*}\). For \(n<m\leq l\), we have \(E_{\infty}^{p,q}\cong\mathbb{Z}_{2}\), if \(0\leq p\leq n+l,q=0,m;0\leq p\leq l-n,q=n,n+m\), and zero otherwise. For \(n=m<l\)
we have \(E_{\infty}^{p,q}\cong\mathbb{Z}_{2}\), if \(0\leq p\leq n+l,q=0;l-n<p\leq n+l,q=n;0\leq p\leq l-n,q=2n;E_{\infty}^{p,q}\cong \mathbb{Z}_{2}\oplus\mathbb{Z}_{2}\) if \(0\leq p\leq l-n,q=n\), and zero otherwise. For \(l\geq n+m\), the cohomology groups are similar to case (i), and for \(l<n+m\), the cohomology groups are same as case (ii). Thus, the total complex Tot \(E_{\infty}^{*,*}=\mathbb{Z}_{2}[x,u,v,s]/I\), where ideal \(I\) is given by
\[<x^{n+l+1},u^{2}+\gamma_{1}v+\gamma_{2}s,v^{2}+\gamma_{3}s,s^{2},uv+\gamma_{4} s,us,vs,x^{l-n+1}u,x^{l-n+1}s>,\]
where deg \(x=1\), deg \(u=n\), deg \(v=m\), deg \(s=n+m\), and \(\gamma_{i}\in\mathbb{Z}_{2}\), \(1\leq i\leq 4\); \(\gamma_{1}=0\) if \(m\neq 2n\), and \(\gamma_{2}=\gamma_{3}=0\) if \(n<m\). The ideals \(I_{j}{}^{\prime}s;1\leq j\leq 6\) are same as in case (i) with conditions: \(b_{2}=b_{20}=0\) if \(l<2n\), \(b_{4}=0\) if \(n<m\), \(b_{5}=0\) if \(n+l<2m\), \(b_{6}=b_{21}=0\) if \(l<2m\), \(b_{9}=b_{12}=0\) if \(l<2m+n\), \(b_{10}=0\) if \(l<2n+2m,b_{11}=b_{14}=b_{17}=b_{24}=0\) if \(l<n+m\) and \(b_{18}=b_{22}=0\) if \(l<2n+m\). Thus, the cohomology ring \(H^{*}(X_{G})\) is given by
\[\mathbb{Z}_{2}[x,y,w,z]/<x^{n+l+1},I_{j},x^{l-n+1}y,x^{l-n+1}z>_{1\leq j\leq 6},\]
where deg \(x=1\), deg \(y=n\), deg \(w=m\) and deg \(z=n+m\). This realizes possibility (3) when \(j^{\prime}=0\).
**Case (iv):**\(r_{3}=l+1\).
We have \(d_{l+1}(1\otimes c)=t^{l+1}\otimes 1\). Consequently, \(d_{l+1}(1\otimes ac)=t^{l+1}\otimes a\), \(d_{l+1}(1\otimes bc)=t^{l+1}\otimes b\) and \(d_{l+1}(1\otimes abc)=t^{l+1}\otimes ab\). Thus, \(E_{l+2}^{*,*}=E_{\infty}^{*,*}\). For \(n<m\leq l\), we have \(E_{\infty}^{p,q}\cong\mathbb{Z}_{2}\), where \(0\leq p\leq l,q=0,n,m,n+m\), and zero otherwise. For \(n=m\leq l\), we have \(E_{\infty}^{p,q}\cong\mathbb{Z}_{2}\), \(0\leq p\leq l,q=0,2n;E_{\infty}^{p,q}\cong\mathbb{Z}_{2}\oplus\mathbb{Z}_{2}\), \(0\leq p\leq l,q=n\), and zero otherwise. For \(l\geq n+m\), the cohomology groups are similar to case (i), and for \(l<n+m\), the cohomology groups are same as in case (ii). The total complex Tot \(E_{\infty}^{*,*}\)\(=\mathbb{Z}_{2}[x,u,v,s]/I\), where ideal \(I\) is given by
\[<x^{l+1},u^{2}+\gamma_{1}v+\gamma_{2}s,v^{2}+\gamma_{3}s,s^{2},uv+\gamma_{4} s,us,vs>,\]
where deg \(x=1\), deg \(u=n\), deg \(v=m\), deg \(s=n+m\), and \(\gamma_{i}\in\mathbb{Z}_{2}\), \(1\leq i\leq 4\); \(\gamma_{1}=0\) if \(m\neq 2n\), and \(\gamma_{2}=\gamma_{3}=0\) if \(n<m\). The ideals \(I_{j}{}^{\prime}s;1\leq j\leq 6\) are same as in case (i) with conditions: \(b_{1}=b_{19}=0\) if \(l<2n\), \(b_{4}=0\) if \(n<m,b_{5}=b_{22}=0\) if \(l<2m,b_{6}=0\) if \(n+l<2m,b_{9}=0\) if \(l<2n+2m\), \(b_{10}=b_{21}=0\) if \(l<2m+n\), \(b_{11}=b_{17}=0\) if \(l<2n+m\), and \(b_{12}=b_{13}=b_{18}=b_{23}=0\) if \(l<n+m\). Thus, the cohomology ring \(H^{*}(X_{G})\) is given by
\[\mathbb{Z}_{2}[x,y,w,z]/<x^{l+1},I_{j}>_{1\leq j\leq 6},\]
where deg \(x=1\), deg \(y=n\), deg \(w=m\) and deg \(z=n+m\). This realizes possibility (4).
**Example 3.9**.: An example of case (1) of Theorem 3.6 can be realized by considering diagonal action of \(G=\mathbb{Z}_{2}\) on \(\mathbb{S}^{n}\times\mathbb{S}^{m}\times\mathbb{S}^{l}\), where \(\mathbb{Z}_{2}\) acts freely on \(\mathbb{S}^{n}\) and trivially on both \(\mathbb{S}^{m}\) and \(\mathbb{S}^{l}\). Then, \(X/G\sim_{2}\mathbb{RP}^{n}\times\mathbb{S}^{m}\times\mathbb{S}^{l}\). This realizes case (1) by taking \(a_{7}=1\) and \(a_{i}=0\) for \(i\neq 7\). Similarly, case (2) of Theorem 3.7, with \(a_{8}=1\) and \(a_{i}=0\) for \(i\neq 8\), and case (4) of Theorem 3.8, with \(a_{16}=1\) and \(a_{i}=0\) for \(i\neq 16\), can be realized.
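For orientation, the mod \(2\) cohomology of the orbit space in this example can be written down directly from the Künneth formula:

\[H^{*}(\mathbb{RP}^{n}\times\mathbb{S}^{m}\times\mathbb{S}^{l};\mathbb{Z}_{2})\cong\mathbb{Z}_{2}[x,u,v]/<x^{n+1},u^{2},v^{2}>,\]

where \(\deg x=1\), \(\deg u=m\) and \(\deg v=l\); this is the ring recovered by each of the coefficient choices stated above.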
## 4. Some Applications on \(\mathbb{Z}_{2}\)-Equivariant Maps
In this section, we derive Borsuk-Ulam type results for free involutions on \(X\sim_{2}\mathbb{S}^{n}\times\mathbb{S}^{m}\times\mathbb{S}^{l},1\leq n\leq m\leq l\). We establish the nonexistence of \(\mathbb{Z}_{2}\)-equivariant maps between \(X\) and \(\mathbb{S}^{k}\), where \(\mathbb{S}^{k}\) is equipped with the antipodal action.
Recall that [5] the index (respectively, co-index) of a \(G\)-space \(X\) is the greatest integer \(k\) (respectively, the lowest integer \(k\)) such that there exists a \(G\)-equivariant map \(\mathbb{S}^{k}\to X\) (respectively, \(X\to\mathbb{S}^{k}\)).
By the theorems proved in Section 3, we get that the largest integer \(s\) for which \(w^{s}\neq 0\) is one of \(n,m,l,n+m,2n+l,n+l,m+l\) or \(n+m+l\), where \(w\in H^{1}(X/G)\) is the characteristic class of the principal \(G\)-bundle \(G\hookrightarrow X\to X/G\). We know that \(\text{index}(X)\leq s\) [5]. Thus, we have the following result:
**Proposition 4.1**.: Let \(G=\mathbb{Z}_{2}\) act freely on \(X\sim_{2}\mathbb{S}^{n}\times\mathbb{S}^{m}\times\mathbb{S}^{l}\), \(n\leq m\leq l\). Then there does not exist a \(G\)-equivariant map from \(\mathbb{S}^{d}\) to \(X\) for \(d>h\), where \(h\) is one of \(n,m,l,n+m,2n+l,n+l,m+l\) or \(n+m+l\).
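For instance, if a given free involution realizes possibility (4) of Theorem 3.8, then in \(H^{*}(X/G)\) we have
\[w^{l}\neq 0=w^{l+1},\]
where \(w\) is the class denoted \(x\) in that presentation; hence \(s=l\) and Proposition 4.1 excludes \(G\)-equivariant maps \(\mathbb{S}^{d}\to X\) for every \(d>l\). The other admissible values of \(h\) arise from the remaining presentations in the same way.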
Recall that Volovikov's index \(i(X)\) is the smallest integer \(r\geq 2\) such that \(d_{r}:E_{r}^{k-r,r-1}\to E_{r}^{k,0}\) is nontrivial for some \(k\), in the Leray-Serre spectral sequence of the Borel fibration \(X\stackrel{i}{\hookrightarrow}X_{G}\xrightarrow{\pi}B_{G}\) [23]. Again, by the theorems proved in Section 3, we get that \(i(X)\) is one of \(n,m,l,m-n,l-m,l-n\) or \(l-m-n\). By taking \(Y=\mathbb{S}^{k}\) in Theorem 1.1 of [4], we have
**Theorem 4.2**.: Let \(G=\mathbb{Z}_{2}\) act freely on a finitistic space \(X\sim_{2}\mathbb{S}^{n}\times\mathbb{S}^{m}\times\mathbb{S}^{l}\), \(n\leq m\leq l\). Then, there is no \(G\)-equivariant map \(f:X\to\mathbb{S}^{k}\) if \(1\leq k<i(X)-1\), where \(i(X)\) is one of \(n,m,l,m-n,l-m,l-n\) or \(l-m-n\).
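For concreteness, take \(n=2\), \(m=3\) and \(l=7\); the possible values of \(i(X)\) listed above are then \(1,2,3,4,5\) and \(7\). If a particular free involution on \(X\sim_{2}\mathbb{S}^{2}\times\mathbb{S}^{3}\times\mathbb{S}^{7}\) happens to have \(i(X)=l=7\), Theorem 4.2 rules out \(\mathbb{Z}_{2}\)-equivariant maps \(f:X\to\mathbb{S}^{k}\) for \(1\leq k\leq 5\), complementing the restriction in the opposite direction given by Proposition 4.1.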
|
2310.11223 | Model-based Estimation of AV-nodal Refractory Period and Conduction
Delay Trends from ECG | Atrial fibrillation (AF) is the most common arrhythmia, associated with
significant burdens to patients and the healthcare system. The atrioventricular
(AV) node plays a vital role in regulating heart rate during AF, but is often
insufficient in regards to maintaining a healthy heart rate. Thus, the AV node
properties are modified using rate-control drugs. Hence, quantifying individual
differences in diurnal and short-term variability of AV-nodal function could
aid in personalized treatment selection.
This study presents a novel methodology for estimating the refractory period
(RP) and conduction delay (CD) trends and their uncertainty in the two pathways
of the AV node during 24 hours using non-invasive data. This was achieved using
a network model together with a problem-specific genetic algorithm and an
approximate Bayesian computation algorithm. Diurnal and short-term variability
in the estimated RP and CD was quantified by the difference between the daytime
and nighttime estimates and by the Kolmogorov-Smirnov distance between adjacent
10-minute segments in the 24-hour trends.
Holter ECGs from 51 patients with permanent AF during baseline were analyzed,
and the predictive power of variations in RP and CD on the resulting heart rate
reduction after treatment with four rate control drugs was investigated.
Diurnal variability yielded no correlation to treatment outcome, and no
prediction of drug outcome was possible using the machine learning tools.
However, a correlation between the short-term variability for the RP and CD in
the fast pathway and resulting heart rate reduction during treatment with
metoprolol ($\rho=0.48, p<0.005$ in RP, $\rho=0.35, p<0.05$ in CD) were found.
The proposed methodology enables non-invasive estimation of the AV node
properties during 24 hours, which may have the potential to assist in treatment
selection. | Mattias Karlsson, Pyotr G Platonov, Sara R. Ulimoen, Frida Sandberg, Mikael Wallman | 2023-10-17T12:55:59Z | http://arxiv.org/abs/2310.11223v1 | Model-based Estimation of AV-nodal Refractory Period and Conduction Delay Trends from ECG (Preprint)
###### Abstract
Atrial fibrillation (AF) is the most common arrhythmia, associated with significant burdens to patients and the healthcare system. The atrioventricular (AV) node plays a vital role in regulating heart rate during AF, but is often insufficient in regards to maintaining a healthy heart rate. Thus, the AV node properties are modified using rate-control drugs. Hence, quantifying individual differences in diurnal and short-term variability of AV-nodal function could aid in personalized treatment selection.
This study presents a novel methodology for estimating the refractory period (RP) and conduction delay (CD) trends and their uncertainty in the two pathways of the AV node during 24 hours using non-invasive data. This was achieved using a network model together with a problem-specific genetic algorithm and an approximate Bayesian computation algorithm. Diurnal and short-term variability in the estimated RP and CD was quantified by the difference between the daytime and nighttime estimates and by the Kolmogorov-Smirnov distance between adjacent 10-minute segments in the 24-hour trends.
Holter ECGs from 51 patients with permanent AF during baseline were analyzed, and the predictive power of variations in RP and CD on the resulting heart rate reduction after treatment with four rate control drugs was investigated. Diurnal variability yielded no correlation to treatment outcome, and no prediction of drug outcome was possible using the machine learning tools. However, a correlation between the short-term variability for the RP and CD in the fast pathway and resulting heart rate reduction during treatment with metoprolol (\(\rho=0.48,p<0.005\) in RP, \(\rho=0.35,p<0.05\) in CD) were found.
The proposed methodology enables non-invasive estimation of the AV node properties during 24 hours, which may have the potential to assist in treatment selection.
AV node model, Atrial fibrillation, Atrioventricular node, Mathematical modeling, Genetic algorithm, Approximate Bayesian computation, ECG, Rate control drugs
## I Introduction
Atrial fibrillation (AF) is the most common sustained cardiac arrhythmia and a significant burden for patients and the healthcare system [2]. The prevalence of AF is currently estimated to be between 2 and 4% worldwide [3]. In addition, the number of AF cases in the European Union is estimated to increase by 89% between 2016 and 2060 [4]. Atrial fibrillation is characterized by disorganized electrical activity in the atria, leading to rapid and irregular contraction, and is associated with an increased risk of mortality, predominantly due to heart failure or stroke [5].
The atrioventricular (AV) node acts as the only electrical connection between the atria and ventricles and partly protects the ventricles from the rapid and irregular electrical activity in the atria during AF. It can be functionally divided into two pathways, the fast pathway (FP) and the slow pathway (SP), interconnected at the Bundle of His [6]. The AV node either blocks an incoming impulse, based on its refractory period (RP), or sends it through with a delay, based on its conduction delay (CD). The AV node is thus the most essential part in regulating the heart rate during AF, and the RP and CD are the two most important properties of the AV node, deciding its filtering capability.
The AV node during permanent AF is in many cases insufficient for maintaining a healthy heart rate. Therefore, the AV node properties are often modified by treatment with rate control drugs, with \(\beta\)-blockers and calcium channel blockers recommended as first-line treatment [2]. Common \(\beta\)-blockers for AF treatment are metoprolol and carvedilol, which block the \(\beta 1\) receptors in the heart in order to reduce the effect of the sympathetic nervous system on the heart [7]. Common calcium channel blockers are verapamil and diltiazem, which prevent the L-type calcium channels in the cardiac myocytes from opening in order to reduce conduction in the AV node [8]. However, due to the significant and poorly understood individual variability, the choice of drug is currently made empirically for each patient [2]. This could lead to a prolonged time until successful treatment, and possibly result in a suboptimal final choice of drug. Since the two recommended first-line treatments have different physiological effects on the AV node, assessing the patient-specific properties of the AV node has the potential to assist in treatment selection. Specifically, we hypothesize that \(\beta\)-blockers would exhibit an increased effect (i.e., a greater heart rate reduction) when variations in the AV node properties are prominent, since \(\beta\)-blockers reduce the influence of the sympathetic nervous system.
The AV-node has previously been studied using several mathematical models based on invasive data from humans and animals [9, 10, 11, 12, 13, 14, 15, 16]. However, in order for a model to be clinically applicable on an individual level, the model parameters should ideally be identifiable from non-invasive data, such as the ECG. A statistical model of the AV node with dual pathway physiology using the RR interval series and the atrial fibrillatory rate (AFR) for model fitting has been proposed [17, 18, 19]. However, the model lumps RP and CD together, limiting its interpretability.
We have previously proposed a network model of the AV node [20] together with a framework for continuously estimating twelve model parameters describing the RP and CD in the two pathways from 24-hour Holter ECG [1]. Although promising, the characterization of the AV node was still limited by the number of model parameters and their intrinsic complex dependencies, where a large change in the model parameters could result in a very small change in the RP or CD, thus, making their interpretation a non-trivial task. For a modeling approach to gain acceptance in a clinical context, the outcome should be readily interpretable by medical professionals; a fact that has become especially relevant with the increasing use of advanced modeling and machine learning techniques [21, 22]. Additionally, in [1], a version of Sobol's method was applied to quantify uncertainty in the parameter estimates. However, these uncertainty estimates were not directly interpretable as probabilities and could thus only be used as a relative measure between the model parameters, between patients, or between different times of the day. When the extent of the uncertainty is unknown, uncertain estimates have the potential to mislead decision-making processes or further analysis of the trends. A proper quantification of the uncertainty is thus advantageous in order to fully understand the estimates.
In the present study, we propose a novel methodology for estimating the RP and CD of both pathways of the AV node and the associated uncertainty continuously over 24 hours. The methodology comprises a genetic algorithm (GA) for initial model parameter estimation and an approximate Bayesian computation (ABC) algorithm to refine the estimates, together with a simulation approach to map model parameters to RP and CD in order to increase interpretability. In addition to refining the estimates, the ABC algorithm provides samples from the Bayesian posterior distribution of the AV node properties, hereafter denoted the posterior, enabling proper quantification of the uncertainty of the estimated properties. We employ these novel tools in an exploratory manner to analyze Holter ECGs from 51 patients during baseline in combination with their respective drug responses to identify potential markers for differences in drug response. Specifically, we analyze the correlation between diurnal and short-term variability and drug outcomes, as well as train several machine learning models to predict drug outcomes.
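Since the variability measures and the outcome correlation referred to above recur throughout the paper, a minimal sketch may help fix ideas. The following Python fragment is only an illustration and not the study's implementation: the per-segment sample arrays, the averaging of adjacent-segment distances, the use of medians for the day/night contrast, and the exact \(\Delta HR\) convention are all assumptions.

```python
import numpy as np
from scipy.stats import ks_2samp, spearmanr

def short_term_variability(segment_samples):
    """Mean Kolmogorov-Smirnov distance between adjacent 10-minute segments.

    segment_samples: list of 1-D arrays, one array of estimated RP (or CD)
    samples per segment of the 24-hour trend.
    """
    d = [ks_2samp(segment_samples[i], segment_samples[i + 1]).statistic
         for i in range(len(segment_samples) - 1)]
    return float(np.mean(d))

def diurnal_variability(day_samples, night_samples):
    """Daytime-minus-nighttime difference of the pooled estimates (medians here)."""
    return float(np.median(np.concatenate(day_samples))
                 - np.median(np.concatenate(night_samples)))

def delta_hr(rr_baseline, rr_drug):
    """Relative change in the 24-hour average heart rate (averaging convention assumed)."""
    hr = lambda rr: 60.0 / float(np.mean(rr))  # RR intervals in seconds
    return (hr(rr_drug) - hr(rr_baseline)) / hr(rr_baseline)

def correlate_with_outcome(variability_per_patient, hr_change_per_patient):
    """Spearman correlation between a variability measure and the drug response."""
    rho, p = spearmanr(variability_per_patient, hr_change_per_patient)
    return rho, p
```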
## II Materials and Methods
The overall method for assessing the RP and CD of the two pathways in the AV node for each patient (\(pat\)) can be divided into four stages, as shown in Figure 1. Firstly, 24-hour Holter ECGs are processed to extract RR interval series and AFR trends, divided into ten-minute segments (\(s\)) with a 50% overlap, as described in Sections II-A and II-B. Secondly, the parameters for the network model of the AV node, described in Section II-C, are fitted to the RR interval series and AFR in each segment using a problem-specific dynamic GA as described in Section II-D1. The GA-derived estimates are subsequently used as inputs to an ABC algorithm to refine the estimates and estimate the posterior of the model parameters, as described in Section II-D2. These model parameter estimates are finally used to simulate data with the model while tracking the RP and CD used for the two pathways, as described in Section II-D3. This results in a distribution of the RP and CD in the FP and the SP for each ten-minute segment. Finally, the possibility to predict treatment outcomes using the estimated distributions is evaluated, as described in Section II-E.
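A compact structural sketch of this per-segment flow is given below; all function arguments are placeholders injected by the caller, and none of the names refer to the published code.

```python
def estimate_rp_cd_trends(segments, fit_ga, refine_abc, simulate_rp_cd):
    """Stages 2-4 of Figure 1 applied to each ten-minute segment (illustrative only).

    segments: iterable of (rr, afr) pairs from stage 1 (Sections II-A and II-B).
    fit_ga, refine_abc, simulate_rp_cd: callables standing in for Sections II-D1 to II-D3.
    """
    trends = []
    for rr, afr in segments:
        theta0 = fit_ga(rr, afr)                  # initial parameter estimate (GA)
        posterior = refine_abc(rr, afr, theta0)   # posterior samples of model parameters
        trends.append(simulate_rp_cd(posterior))  # RP/CD distributions for FP and SP
    return trends
```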
### _ECG Data_
Data from the Rate Control in Atrial Fibrillation (RATAF) study, a randomized, investigator-blind, crossover study, approved by the regional ethics committee and the Norwegian Medicines Agency and conducted in accordance with the Helsinki Declaration, is analyzed in this study [23]. Specifically, 24-hour ambulatory ECGs from 60 patients (mean age 71 \(\pm\) 9 years, 18 women) with permanent AF and no heart failure or symptomatic ischemic heart disease, recorded during baseline, are used for the estimation of patient-specific AV node properties. In addition to the baseline ECG, the relative change in the 24-hour average heart rate (\(\Delta HR\)) for treatment with the two
Fig. 1: A schematic overview of the methodology, from ECG to estimations of the RP and CD. Previous study refers to [1].
calcium channel blockers verapamil and diltiazem and the two \(\beta\)-blockers metoprolol and carvedilol are used to evaluate the therapeutic implications of the estimated AV node properties. The calculation of \(\Delta HR\) is based on the RR interval series extracted from the ECG, as explained in Section II-B.
### _ECG Processing_
The RR interval series is extracted from the ECG for each patient and divided into ten-minute segments with a 50% overlap (\(\mathbf{RR}(pat,s)\)), where RR intervals following and preceding QRS-complexes with deviating morphology are excluded from the series [24]. Segments with excessive noise can lead to a large number of undetected beats and thus an unrealistically low heart rate. Hence, each ten-minute segment is divided into minute-long non-overlapping intervals, and the whole ten-minute segment is excluded from further analysis if any one-minute interval has fewer than 20 detected beats. Patients with RR interval series with a total duration shorter than 12 h are excluded from further analysis. The RR interval series corresponding to the four rate control drugs are calculated equivalently.
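As a rough illustration of this exclusion rule, the sketch below flags a ten-minute segment for exclusion when any of its one-minute intervals contains fewer than 20 detected beats; the function name, argument names, and the assumption that beat times are given in seconds as a NumPy array are ours.

```
import numpy as np

def exclude_noisy_segment(beat_times_s, segment_start_s, segment_len_s=600, min_beats_per_min=20):
    """Return True if the ten-minute segment should be excluded.

    The segment is split into non-overlapping one-minute intervals; if any
    interval contains fewer than `min_beats_per_min` detected beats, the
    whole segment is discarded (assumed to be dominated by noise).
    """
    for minute in range(segment_len_s // 60):
        lo = segment_start_s + 60 * minute
        hi = lo + 60
        n_beats = np.sum((beat_times_s >= lo) & (beat_times_s < hi))
        if n_beats < min_beats_per_min:
            return True
    return False
```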
Spatiotemporal QRST cancellation is employed to extract the f-waves from the ECG [25]. Subsequently, the fundamental frequency of the extracted f-waves is tracked using a hidden Markov model-based method to extract an AFR trend for each patient with a resolution of one minute [26]. For time segments where the AFR could not be obtained due to excessive noise, but the RR interval series could, the AFR is set to the closest observed AFR value.
### _Network Model of the AV Node_
Our network model of the AV node, introduced in [20], describes the AV node as two pathways (the SP and the FP) comprising 10 nodes each. These two pathways are connected by a coupling node, as illustrated in Figure 2. Each pathway node corresponds physiologically to a localized section of the respective pathway, and the coupling node corresponds to the Purkinje fibers and Bundle of His.
Atrial impulses are modeled by a Poisson process with mean arrival rate \(\lambda\). The impulses are assumed to reach the first nodes of SP and FP simultaneously. Each network node can be either in a refractory state or in a non-refractory state. A node in its refractory state will block incoming impulses, and a node in its non-refractory state will transmit an incoming impulse to all adjacent nodes with an added conduction delay before entering its refractory state. The RP (\(R_{i}(n)\)) and CD (\(D_{i}(n)\)) for node \(i\) are updated for each incoming impulse \(n\) according to Equations 1, 2, and 3,
\[R_{i}(n)=R_{min}+\Delta R(1-e^{-\tilde{t}_{i}(n)/\tau_{R}}) \tag{1}\]
\[D_{i}(n)=D_{min}+\Delta De^{-\tilde{t}_{i}(n)/\tau_{D}}, \tag{2}\]
\[\tilde{t}_{i}(n)=t_{i}(n)-(t_{i}(n-1)+R_{i}(n-1)), \tag{3}\]
where \(\tilde{t}_{i}(n)\) is the diastolic interval preceding impulse \(n\) and \(t_{i}(n)\) is the arrival time of impulse \(n\) at node \(i\). When \(\tilde{t}_{i}(n)<0\), the node is in its refractory state and will block incoming impulses. All parameters are fixed for each pathway, resulting in three model parameters for the RP in the FP (\(R_{min}^{FP}\), \(\Delta R^{FP}\), \(\tau_{R}^{FP}\)); three model parameters for the CD in the FP (\(D_{min}^{FP}\), \(\Delta D^{FP}\), \(\tau_{D}^{FP}\)); three model parameters for the RP in the SP (\(R_{min}^{SP}\), \(\Delta R^{SP}\), \(\tau_{R}^{SP}\)); and three model parameters for the CD in the SP (\(D_{min}^{SP}\), \(\Delta D^{SP}\), \(\tau_{D}^{SP}\)). These twelve model parameters constitute the model parameter vector \(\mathbf{\theta}\). In addition, the RP in the coupling node is fixed to the mean of the ten shortest RR intervals in the data, and its CD is fixed at 60 ms [20].
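The following minimal sketch transcribes Equations 1-3 for a single node; variable names, the returned blocked flag, and the assumption that all times are in milliseconds are our own choices, not part of the published implementation.

```
import numpy as np

def update_node(t_n, t_prev, R_prev, params):
    """Refractory period (RP) and conduction delay (CD) of one node at impulse n.

    params = (R_min, dR, tau_R, D_min, dD, tau_D); all times in ms.
    Returns (R_n, D_n, blocked) following Equations 1-3.
    """
    R_min, dR, tau_R, D_min, dD, tau_D = params
    t_diast = t_n - (t_prev + R_prev)          # Eq. 3: diastolic interval
    if t_diast < 0:                            # node still refractory -> impulse blocked, RP/CD unchanged
        return R_prev, None, True
    R_n = R_min + dR * (1.0 - np.exp(-t_diast / tau_R))   # Eq. 1
    D_n = D_min + dD * np.exp(-t_diast / tau_D)           # Eq. 2
    return R_n, D_n, False
```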
### _Parameter Estimation_
For each ten-minute segment, the mean arrival rate for the Poisson process \(\lambda\) is estimated as the mean of the AFR trend (\(\hat{\lambda}(pat,s)\)), and the model parameters \(\hat{\mathbf{\theta}}(pat,s)\) are estimated using a GA together with an ABC algorithm.
An error function (\(\epsilon\)) based on the Poincare plot, i.e., a scatter plot of successive pairs of RR intervals, is used to quantify the difference between \(\mathbf{RR}(pat,s)\) and a simulated RR interval series (\(\widetilde{\mathbf{RR}}\)). The successive pairs of RR intervals for \(\mathbf{RR}(pat,s)\) and \(\widetilde{\mathbf{RR}}\) are placed in two-dimensional bins covering the interval between 250 and 1800 ms in steps of
Fig. 2: A schematic representation of the network model where the yellow node represents the coupling node, the red nodes the SP, the green nodes the FP, and arrows the direction for impulse conduction. For readability, only a subset of the 21 nodes is shown [20].
50 ms, resulting in \(K\) = 961 bins, which we refer to as the Poincare histogram. The error function, based on the work presented in [20], is computed according to Equation 4,
\[\epsilon=\frac{1}{K}\sum_{k=1}^{K}\frac{\left(x_{k}-\frac{1}{t_{norm}}\tilde{x}_{ k}\right)^{2}}{\sqrt{x_{k}}}, \tag{4}\]
where \(x_{k}\) and \(\tilde{x}_{k}\) are the numbers of RR intervals in the \(k\)-th bin of \(\mathbf{RR}(pat,s)\) and \(\widetilde{\mathbf{RR}}\), respectively. Additionally, \(t_{norm}\) acts as a normalizing constant and is calculated as the duration of \(\widetilde{\mathbf{RR}}\) divided by the duration of \(\mathbf{RR}(pat,s)\).
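A possible implementation of the Poincare histogram and the error function of Equation 4 is sketched below; the function names are ours, and how empty bins are handled is not specified in the text, so skipping bins with \(x_{k}=0\) is our assumption.

```
import numpy as np

def poincare_histogram(rr_ms, lo=250, hi=1800, step=50):
    """2-D histogram of successive RR-interval pairs (the Poincare histogram)."""
    edges = np.arange(lo, hi + step, step)
    h, _, _ = np.histogram2d(rr_ms[:-1], rr_ms[1:], bins=[edges, edges])
    return h.ravel()                                 # K = 31 * 31 = 961 bins

def poincare_error(rr_obs_ms, rr_sim_ms):
    """Error epsilon of Equation 4 between observed and simulated RR series."""
    x = poincare_histogram(np.asarray(rr_obs_ms, dtype=float))
    x_sim = poincare_histogram(np.asarray(rr_sim_ms, dtype=float))
    t_norm = np.sum(rr_sim_ms) / np.sum(rr_obs_ms)   # duration ratio (simulated / observed)
    mask = x > 0                                     # assumption: empty observed bins are skipped
    return np.sum((x[mask] - x_sim[mask] / t_norm) ** 2 / np.sqrt(x[mask])) / x.size
```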
#### II-D1 Genetic Algorithm
A problem-specific dynamic GA based on the work presented in [1] is used to get an initial estimate of \(\mathbf{\theta}(pat,s)\) in every segment. This results in an estimate denoted as \(\mathbf{\hat{\theta}}_{m}^{GA}\)\((pat,s)\), where \(m\) denotes the \(m\)-th fittest individual in the population after completion of the GA, i.e. the individual with the \(m\)-th lowest \(\epsilon\). The hyper-parameters in the algorithm are tuned during the optimization using the difference between the Poincare histograms in pairs of consecutive segments (\(\Delta P\)) [1]. This difference is calculated using Equation 4 with \(x_{k}\) and \(\tilde{x}_{k}\) as the number of RR intervals in each bin of the current segment and the following one, respectively.
The GA uses a population of 300 individuals, where each individual is a model parameter vector \(\mathbf{\theta}\). The algorithm uses tournament selection, a two-point crossover, and creep mutation. To avoid premature convergence and to increase performance, immigration through replacement of the least-fit individuals in the population is performed, following the work in [1]. Furthermore, \(\Delta P\) is used to determine the number of generations that the GA runs before moving to the next data segment, between two and seven. The initialization of individuals is done using Latin hypercube sampling within the ranges given in Table I. These values also act as boundaries for the model parameters in the GA. For further details about the algorithm, see [1].
#### II-D2 Approximate Bayesian Computation
To estimate the posterior \(p(\mathbf{\theta}|\mathbf{RR}(pat,s),\hat{\lambda}(pat,s))\), an approximate Bayesian computation population Monte Carlo sampling (ABC PMC) algorithm is used [27]. The pseudo-code for the problem-specific ABC PMC is shown in Algorithm 1. The ABC PMC uses a set of \(N_{p}=100\) particles to estimate the posterior in each RR segment independently, which are updated iteratively for eight iterations (\(j\)). Each particle corresponds to a model parameter vector, denoted \(\mathbf{\hat{\theta}}_{v,j}^{ABC}\), where \(v\) corresponds to the \(v\)-th particle for the \(j\)-th iteration. The algorithm is sped up by utilizing the results from the GA to create the initial population. To construct the initial population, twenty particles are drawn from five different normal distributions, \(\mathcal{N}(\mathbf{\hat{\theta}}_{1}^{GA},\mathbf{\Sigma}_{init})\), \(\mathcal{N}(\mathbf{\hat{\theta}}_{2}^{GA},\mathbf{\Sigma}_{init})\), \(\mathcal{N}(\mathbf{\hat{\theta}}_{3}^{GA},\mathbf{\Sigma}_{init})\), \(\mathcal{N}(\mathbf{\hat{\theta}}_{4}^{GA},\mathbf{\Sigma}_{init})\), and \(\mathcal{N}(\mathbf{\hat{\theta}}_{5}^{GA},\mathbf{\Sigma}_{init})\), where the covariance matrix \(\mathbf{\Sigma}_{init}=\text{Cov}(\mathbf{\hat{\theta}}_{1:25}^{GA})\) and \(1:25\) denotes \([1,2,...,25]\) for convenience. During each iteration, each particle has a probability of being chosen based on an assigned weight, computed according to Equation 5[28]
\[\mathbf{w}_{v,j}=\big{(}\sum_{k=1}^{N_{p}}\mathbf{w}_{k,j-1}\mathcal{N}(\mathbf{\hat{ \theta}}_{k,j-1}^{ABC}|\mathbf{\hat{\theta}}_{v,j}^{ABC},\mathbf{\Sigma}_{j-1})\big{)} ^{-1}, \tag{5}\]
where \(\mathbf{w}_{v,j}\) is the weight for the \(v\)-th particle in the \(j\)-th iteration and \(\mathcal{N}(\mathbf{\hat{\theta}}_{k,j-1}^{ABC}|\mathbf{\hat{\theta}}_{v,j}^{ABC},\mathbf{\Sigma}_{j-1})\) is the probability of \(\mathbf{\hat{\theta}}_{k,j-1}^{ABC}\) given the normal distribution with mean \(\mathbf{\hat{\theta}}_{v,j}^{ABC}\) and covariance \(\mathbf{\Sigma}_{j-1}\), where \(\mathbf{\Sigma}_{j}=2\text{Cov}(\mathbf{\hat{\theta}}_{1:N_{p},j}^{ABC})\). Furthermore, the chosen particle (\(\mathbf{\theta}^{*}\)) is perturbed to create a proposal particle (\(\mathbf{\theta}^{**}\)) using a transition kernel set as \(\mathcal{N}(0,\mathbf{\Sigma}_{j})\) [28]. The model is used to simulate data using \(\mathbf{\theta}^{**}\) to calculate an associated proposal error (\(\epsilon^{**}\)) according to Equation 4. If \(\epsilon^{**}\) is lower than a set threshold (\(T_{j}\)), \(\mathbf{\theta}^{**}\) is accepted and used in the next iteration of the algorithm; if not, a new particle is chosen and perturbed to create a new proposal particle. Note that the boundaries for the ABC PMC algorithm are more inclusive than those for the GA to accommodate the full width of the estimated posteriors, as shown in Table I. A proposal particle outside the boundaries is always rejected. The next iteration starts when \(N_{p}\) new proposal particles have been accepted, and \(\mathbf{w}_{v,j}\), \(T_{j}\), and \(\mathbf{\Sigma}_{j}\) are then updated. The threshold is set based on the results from the GA: \(T_{1}=\epsilon(\hat{\mathbf{\theta}}_{10}^{GA}(pat,s))\), \(T_{2}=\epsilon(\hat{\mathbf{\theta}}_{8}^{GA}(pat,s))\), \(T_{3}=\epsilon(\hat{\mathbf{\theta}}_{5}^{GA}(pat,s))\), \(T_{4}=\epsilon(\hat{\mathbf{\theta}}_{3}^{GA}(pat,s))\), and \(T_{5:8}=\epsilon(\hat{\mathbf{\theta}}_{1}^{GA}(pat,s))\), i.e., the \(\epsilon\) values obtained by the corresponding GA individuals. Hence, after the eighth iteration, the \(\epsilon\) for all particles is lower than the \(\epsilon\) for the fittest individual found by the GA. Thus, the final population is assumed to be \(N_{p}\) samples from \(p(\mathbf{\theta}|\mathbf{RR}(pat,s),\hat{\lambda}(pat,s))\).
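A small sketch of the weight update of Equation 5 follows, assuming the particles of each iteration are stored as NumPy arrays; the function name is ours, and the final normalization of the weights to sum to one is our addition (the weights are only used as sampling probabilities, so the scaling is irrelevant).

```
import numpy as np
from scipy.stats import multivariate_normal

def abc_pmc_weights(theta_new, theta_prev, w_prev, cov_prev):
    """Importance weights of Equation 5 for one ABC PMC iteration.

    theta_new:  (N_p, d) accepted particles of iteration j
    theta_prev: (N_p, d) particles of iteration j-1 with weights w_prev
    cov_prev:   (d, d) transition-kernel covariance of iteration j-1
    """
    n_p = theta_new.shape[0]
    w = np.empty(n_p)
    for v in range(n_p):
        # density of every previous particle under the kernel centred at particle v
        dens = multivariate_normal.pdf(theta_prev, mean=theta_new[v], cov=cov_prev)
        w[v] = 1.0 / np.sum(w_prev * dens)
    return w / np.sum(w)     # normalised so the weights can be used as sampling probabilities
```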
#### II-D3 Parameter Reduction
The posterior estimate of the parameter vector \(\mathbf{\theta}(pat,s)\) is obtained using the resulting \(N_{p}\) samples (\(\mathbf{\hat{\theta}}_{1:N_{p},8}^{ABC}(pat,s)\)) from the ABC PMC algorithm. Each sample in \(\mathbf{\hat{\theta}}_{1:N_{p},8}^{ABC}(pat,s)\) is utilized within the model together with the associated \(\hat{\lambda}(pat,s)\) to simulate a ten-minute RR interval series. For each simulation, \(R_{i}(n)\) and \(D_{i}(n)\) are stored for each activation \(n\) in each pathway node \(i\) and used as the sample distribution of the RP and CD for the SP and the FP, respectively. The samples from these four distributions, denoted as \(\mathbf{\hat{\Phi}}(pat,s)=[\mathbf{R}^{FP}(pat,s),\mathbf{R}^{SP}(pat,s),\mathbf{D}^{FP}(pat,s),\mathbf{D}^{SP}(pat,s)]\), serve as a translation from the twelve model parameters \(\mathbf{\hat{\theta}}\) to four more interpretable AV node properties \(\mathbf{\hat{\Phi}}\), taking into account not only the model parameters but also the mean AFR associated with the current RR-segment.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \hline Parameters & \(R_{min}^{FP},R_{min}^{SP}\) & \(\Delta R^{FP},\Delta R^{SP}\) & \(D_{min}^{FP},D_{min}^{SP}\) & \(\Delta D^{FP},\Delta D^{SP}\) & \(\tau_{R}^{FP},\tau_{R}^{SP},\tau_{D}^{FP},\tau_{D}^{SP}\) \\ \hline GA (ms) & [100, 1000] & [0, 1000] & [2, 50] & [0, 100] & [25, 500] \\ ABC (ms) & [30, 1300] & [0, 1300] & [0.1, 80] & [0, 130] & [10, 700] \\ \hline \end{tabular}
\end{table} TABLE I: Parameter ranges for the GA and the ABC PMC algorithm.
To quantify these distributions, their corresponding empirical probability density functions are computed using the MATLAB function ksdensity (MATLAB R2022b) with default bandwidth. From the empirical probability density functions, the maxima are obtained, denoted \(\hat{\mathbf{\phi}}_{max}(pat,s)=[R_{max}^{FP}(pat,s),R_{max}^{SP}(pat,s),D_{max}^{FP}(pat,s),D_{max}^{SP}(pat,s)]\).
In addition, the 5th percentile and the 95th percentile are obtained from \(\hat{\mathbf{\Phi}}(pat,s)\), denoted \(\hat{\mathbf{\phi}}_{5}(pat,s)=[R_{5}^{FP}(pat,s),R_{5}^{SP}(pat,s),D_{5}^{FP}(pat,s),D_{5}^{SP}(pat,s)]\) and \(\hat{\mathbf{\phi}}_{95}(pat,s)=[R_{95}^{FP}(pat,s),R_{95}^{SP}(pat,s),D_{95}^{FP}(pat,s),D_{95}^{SP}(pat,s)]\), respectively. Furthermore, the number of impulses traveling through the FP and SP (\(N_{FP}\) and \(N_{SP}\), respectively) is stored, and the ratio is denoted as
\[SP_{ratio}(pat,s)=\frac{N_{SP}(pat,s)}{N_{FP}(pat,s)+N_{SP}(pat,s)}. \tag{6}\]
The patient-specific diurnal variability (\(\Delta DV\)) in the AV node properties is quantified by the average value of \(\hat{\mathbf{\phi}}_{max}\) during daytime (9:00 A.M. to 9:00 P.M.) divided by the average value of \(\hat{\mathbf{\phi}}_{max}\) during nighttime (2:00 A.M. to 6:00 A.M.). In addition, the patient-specific short-term variability in the AV node properties is quantified by the average Kolmogorov-Smirnov distance (\(\overline{\Delta KS}\)) between consecutive segments of \(\hat{\mathbf{\Phi}}\) during the full 24 hours (8:00 A.M. to 8:00 A.M.). The Kolmogorov-Smirnov distance represents the maximal separation between the empirical cumulative distribution functions of consecutive segments [29].
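The two variability measures could be computed roughly as sketched below, assuming one array of samples per segment and a per-segment hour stamp; the function names and the input layout are our assumptions.

```
import numpy as np
from scipy.stats import ks_2samp

def diurnal_variability(phi_max, hours):
    """Delta-DV: mean of phi_max during 09-21 h divided by the mean during 02-06 h."""
    phi_max, hours = np.asarray(phi_max, dtype=float), np.asarray(hours, dtype=float)
    day = (hours >= 9) & (hours < 21)
    night = (hours >= 2) & (hours < 6)
    return np.mean(phi_max[day]) / np.mean(phi_max[night])

def mean_ks_distance(segment_samples):
    """Average Kolmogorov-Smirnov distance between consecutive segments.

    segment_samples: list of 1-D arrays, one sample distribution (e.g. R^FP) per segment.
    """
    d = [ks_2samp(a, b).statistic
         for a, b in zip(segment_samples[:-1], segment_samples[1:])]
    return float(np.mean(d))
```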
### _Prediction of Treatment Outcome_
The predictive power of the estimates \(\hat{\mathbf{\Phi}}\), \(\hat{\mathbf{\phi}}_{5}\), \(\hat{\mathbf{\phi}}_{95}\), \(\hat{\mathbf{\phi}}_{max}\), and \(SP_{ratio}\) in relation to \(\Delta HR\) for the different rate control drugs is evaluated in three ways: by analyzing the correlation between the diurnal and short-term variability and \(\Delta HR\); by training a feature-based regression model on statistical properties of the trends to predict \(\Delta HR\); and by training a convolutional neural network on the trends to predict \(\Delta HR\).
To quantify the correlation between diurnal and short-term variability in the AV node properties and \(\Delta HR\) after treatment with the four rate control drugs, Spearman's rank correlation is used. Due to the exploratory nature of the study, no hypothesis test is performed and hence no correction of p-values is applied [30, 31].
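A minimal sketch of this correlation analysis, using SciPy and assuming one value per patient for both quantities, is given below; the function name and the exclusion of patients with missing drug data are our assumptions.

```
import numpy as np
from scipy.stats import spearmanr

def variability_vs_drug_response(variability, delta_hr):
    """Spearman rank correlation between a per-patient variability measure
    (e.g. the mean KS distance of R^FP) and the heart-rate reduction of one drug."""
    variability = np.asarray(variability, dtype=float)
    delta_hr = np.asarray(delta_hr, dtype=float)
    keep = ~np.isnan(delta_hr)          # patients lacking data for this drug are skipped
    return spearmanr(variability[keep], delta_hr[keep])
```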
Three different feature-based regression models (linear regression, random forest [32], and k-nearest neighbor [33]) are trained on 66 statistical properties of the trends. These statistical properties are: the mean \(\pm\) std of the four AV node properties \(\hat{\mathbf{\phi}}_{max}\) during daytime (8 properties), during nighttime (8 properties), and the full 24-hour (8 properties); the mean \(\pm\) std of the 90% credibility region - calculated as the difference between \(\hat{\mathbf{\phi}}_{5}\) and \(\hat{\mathbf{\phi}}_{95}\) - during daytime (8 properties),
```
At iteration \(j=1\), set the initial population:
  Set a counter \(c=1\)
  for \(1\leq u\leq 5\) do
    for \(1\leq l\leq N_{p}/5\) do
      Set \(\hat{\mathbf{\theta}}_{c,1}^{ABC}\leftarrow\mathcal{N}(\hat{\mathbf{\theta}}_{u}^{GA},\mathbf{\Sigma}_{init})\)
      Set initial weight \(\mathbf{w}_{c,1}\leftarrow\frac{1}{N_{p}}\)
      Update counter \(c=c+1\)
    endfor
  endfor
  Set the initial covariance for the transition kernel \(\mathbf{\Sigma}_{1}\leftarrow 2\text{Cov}(\hat{\mathbf{\theta}}_{1:N_{p},1}^{ABC})\)
At iterations \(j>1\):
  for \(2\leq j\leq 8\) do
    for \(1\leq v\leq N_{p}\) do
      Set \(\epsilon^{**}=\) inf
      while \(\epsilon^{**}>T_{j}\) do
        Sample one particle from the previous iteration, \(\mathbf{\theta}^{*}\sim\hat{\mathbf{\theta}}_{1:N_{p},j-1}^{ABC}\), with probability \(\mathbf{w}_{1:N_{p},j-1}\)
        Perturb \(\mathbf{\theta}^{*}\) by sampling \(\mathbf{\theta}^{**}\sim\mathcal{N}(\mathbf{\theta}^{*},\mathbf{\Sigma}_{j-1})\)
        Simulate data \(\widetilde{\mathbf{RR}}\) from \(\mathbf{\theta}^{**}\): \(\widetilde{\mathbf{RR}}\sim\text{Model}(\mathbf{\theta}^{**},\hat{\lambda})\)
        Calculate \(\epsilon^{**}\) from Equation 4 using \(\widetilde{\mathbf{RR}}\) and \(\mathbf{RR}\)
      endwhile
      Set \(\hat{\mathbf{\theta}}_{v,j}^{ABC}\leftarrow\mathbf{\theta}^{**}\)
      Update the weight \(\mathbf{w}_{v,j}\leftarrow\big(\sum_{k=1}^{N_{p}}\mathbf{w}_{k,j-1}\mathcal{N}(\hat{\mathbf{\theta}}_{k,j-1}^{ABC}|\hat{\mathbf{\theta}}_{v,j}^{ABC},\mathbf{\Sigma}_{j-1})\big)^{-1}\) (Equation 5)
    endfor
    Update the covariance for the transition kernel \(\mathbf{\Sigma}_{j}\leftarrow 2\text{Cov}(\hat{\mathbf{\theta}}_{1:N_{p},j}^{ABC})\)
  endfor
```
**Algorithm 1** Calculate \(p(\mathbf{\theta}|\mathbf{RR},\hat{\lambda})\), given \(\mathbf{RR}\), \(\hat{\lambda}\), the model \(\widetilde{\mathbf{RR}}\sim\) Model(\(\mathbf{\theta}\), \(\hat{\lambda}\)), the threshold \(T_{j}\), and the initial estimates \(\hat{\mathbf{\theta}}^{GA}\). The indices \((pat,s)\) are omitted to avoid redundancy.
nighttime (8 properties), and the full 24-hour (8 properties); the mean \(\pm\) std of the \(SP_{ratio}\) during daytime (2 properties), nighttime (2 properties), and the full 24-hour (2 properties); \(\Delta DV\) in the four AV node properties (4 properties); the short-term variability in the four AV node properties (4 properties); as well as the age, gender, weight, and height of the patient.
Deep learning approaches have achieved the current state-of-the-art performance for time-series classification and regression [34]. Hence, the prediction of \(\Delta HR\) for the different rate control drugs is evaluated using the time series for \(\boldsymbol{\hat{\phi}}_{5}\), \(\boldsymbol{\hat{\phi}}_{95}\), \(\boldsymbol{\hat{\phi}}_{max}\), \(SP_{ratio}\), AFR, and the RR interval series as input to three convolutional neural networks with different architectures, based on only fully connected layers [35], the ResNet architecture [35], and the Inception architecture [36], respectively. To incorporate the age, gender, weight, and height of the patients, the last fully connected layer of the networks is modified to also include these properties as input neurons. The networks are trained using the tsai library [37], with the Adam solver [38] and the Huber loss [39]. Leave-one-out cross-validation is used, so that the network is trained on data from all but one patient and tested on the left-out patient. The average mean square error (MSE) of the predicted and true \(\Delta HR\) for the whole population is calculated and compared between approaches.
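The leave-one-out evaluation could be organized as in the sketch below; scikit-learn's LinearRegression is used here only as a stand-in for the models described above, and the function name and feature-matrix layout are our assumptions.

```
import numpy as np
from sklearn.linear_model import LinearRegression

def loo_mse(features, delta_hr, make_model=LinearRegression):
    """Leave-one-out MSE of predicted vs. true Delta-HR for one drug.

    features: (n_patients, n_features) matrix of the statistical properties
    delta_hr: (n_patients,) true relative heart-rate reductions
    """
    features = np.asarray(features, dtype=float)
    delta_hr = np.asarray(delta_hr, dtype=float)
    errors = []
    for i in range(len(delta_hr)):                   # train on all patients but one
        train = np.arange(len(delta_hr)) != i
        model = make_model().fit(features[train], delta_hr[train])
        pred = model.predict(features[[i]])[0]
        errors.append((pred - delta_hr[i]) ** 2)
    return float(np.mean(errors))                    # benchmark against np.var(delta_hr)
```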
## III Results
As described in Section II-A, this study is based on a population of 60 patients. However, due to excessive noise, some patients are excluded from analysis, as described in Section II-B, resulting in a total of 51 patients. In addition, excessive noise in the ECG during treatment with the four rate control drugs leads to missing values for \(\Delta HR\) for some patients. Thus, of the remaining 51 patients at baseline, two lack data for verapamil, three lack data for diltiazem, two lack data for metoprolol, none lack data for carvedilol, and one lacks data for both verapamil and metoprolol. The mean \(\pm\) standard deviation of \(\Delta HR\) in the population is \(19\%\pm 23\%\) for verapamil; \(24\%\pm 18\%\) for diltiazem; \(17\%\pm 18\%\) for metoprolol; and \(11\%\pm 6\%\) for carvedilol.
### _Parameter Trends_
The 24-hour trends of \(\boldsymbol{\hat{\phi}}_{max}(pat,s)\), \(\boldsymbol{\hat{\phi}}_{5}(pat,s)\), and \(\boldsymbol{\hat{\phi}}_{95}(pat,s)\) for two patients, denoted A and B, are presented in Figures 3 and 4. Figure 3 shows a low short-term variability in the RP and CD in both pathways for patient A (\(\overline{\Delta KS}=[0.27,0.19,0.24,0.33]\) for \(R^{FP}\), \(R^{SP}\), \(D^{FP}\), and \(D^{SP}\)), whereas patient B in Figure 4 has a larger short-term variability (\(\overline{\Delta KS}=[0.41,0.55,0.40,0.40]\)). Conduction mainly occurs through the SP in both patients, as indicated by an \(SP_{ratio}\) over 0.5, which results in a wider credibility region in the \(R^{FP}\) compared to the \(R^{SP}\). However, for patient B, there are segments where the FP is more prevalent, e.g. between 5 PM and 6 PM. In these segments, the RP and CD have a very low variability, indicating a stationary behavior of the AV node. A notable shift in RP occurs at 8 AM for patient A, probably as a response to waking up from sleep, resulting in a clear change in autonomic regulation. No notable difference between the average \(R^{FP}\), \(R^{SP}\), and \(D^{FP}\) during daytime and during nighttime could be seen for patient A, with a slight difference in \(D^{SP}\) (\(\Delta DV=[0.80,0.81,0.99,1.39]\)). For patient B, only \(D^{FP}\) showed a notable difference (\(\Delta DV=[0.81,0.92,2.60,1.19]\)).
Similar observations can be made for the whole population, as displayed in Table II, which includes the mean and standard deviation of \(\boldsymbol{\hat{\phi}}_{max}(pat,s)\), the 90% credibility region, and \(\overline{\Delta KS}\), during daytime, nighttime, and during 24 hours, as well as \(\Delta DV\), for the RP and CD in the FP and the SP for all patients. For convenience, the total CD, calculated by multiplying the CD for one node by ten, is listed. From Table II, it is evident that the RP on average is higher and the CD is lower during nighttime compared to daytime, probably linked
Fig. 3: The estimated RP (top) and CD (middle) for \(\boldsymbol{\hat{\phi}}_{max}(pat,s)\) (dotted) as well as \(\boldsymbol{\hat{\phi}}_{5}(pat,s)\) and \(\boldsymbol{\hat{\phi}}_{95}(pat,s)\) (filled) for the FP (blue) and SP (red), as well as the SP ratio (bottom), are shown for patient A, marked with a black circle in Figure 6.
to the lower heart rate during sleep and/or circadian autonomic variations. Figure 5 illustrates the population average trends of \(\boldsymbol{\hat{\phi}}_{max}(pat,s)\), \(\boldsymbol{\hat{\phi}}_{\mathrm{5}}(pat,s)\), and \(\boldsymbol{\hat{\phi}}_{\mathrm{95}}(pat,s)\). To reduce the influence of outliers, only segments containing data from over 20% of the population are shown. A distinct separation between RP and CD of the two pathways exists, indicating different functionality. Additionally, the credibility region for the \(R^{FP}\) is larger compared to the \(R^{SP}\). Moreover, the credibility region for \(D^{FP}\), in proportion to its mean value, is larger than that of \(D^{SP}\). The differences in credibility regions between FP and SP reflect the \(SP_{ratio}\), which is 0.78 \(\pm\) 0.11 (mean \(\pm\) std) during the day, 0.79 \(\pm\) 0.12 during the night, and 0.78 \(\pm\) 0.10 during the full 24-hour, indicating that the SP is dominant on average.
### _Prediction of Treatment Outcome_
Spearman's rank correlation between the patient-specific \(\Delta DV\), as described in Section II-E, and \(\Delta HR\) showed no clear correlation (all \(p\geq 0.05\)) for any combination of drug and AV node property. Hence, no relationship between diurnal variability and drug outcome was found.
The Spearman correlation between the patient-specific short-term variability, quantified by \(\overline{\Delta KS}\), and \(\Delta HR\) showed no clear correlation (all \(p\geq 0.05\)) for the RP and CD in the SP. A moderate correlation was however found between \(\overline{\Delta KS}\) and \(\Delta HR\) for \(R^{FP}\) for the \(\beta\)-blocker metoprolol (\(\rho=0.47,p=0.0011\)) and for \(D^{FP}\) for metoprolol (\(\rho=0.35,p=0.017\)). Figure 6 shows the individual \(\overline{\Delta KS}\) plotted against \(\Delta HR\) and their linear relation for all four drugs, with the left panel showing \(R^{FP}\) and the right panel showing \(D^{FP}\). Interestingly, a similar relation between \(\overline{\Delta KS}\) and \(\Delta HR\) is not present for the other \(\beta\)-blocker, carvedilol.
The ability to predict \(\Delta HR\) using machine learning approaches is evaluated by the average MSE between the predicted and true \(\Delta HR\) for the four drugs using the leave-one-out validation method. The average MSE is benchmarked against the population variance of \(\Delta HR\) for the four drugs. Hence, if the average MSE is larger than the population variance of 0.0071%, the population mean yields a more accurate predictor. Using the feature-based regression models, as described in Section II-E, resulted in an average MSE of 0.0073% for the linear regression, an average MSE of 0.0074% for the random forest, and an average MSE of 0.074% for the k-nearest neighbor. In addition, using the convolutional neural network resulted in an average MSE of 0.0073% for the fully connected architecture, an average MSE of 0.0079% for the ResNet architecture, and an average MSE of 0.0074% for the Inception architecture. Overall, all the machine-learning approaches resulted in an average MSE higher than 0.0071% and thus in a poor fit to unseen data.
## IV Discussion
A mathematical model with an associated framework for patient-specific estimation and proper uncertainty quantification of the RP and CD in the FP and SP of the AV node using only non-invasive data has been proposed.
Individual estimation of trends and variability in AV node properties using non-invasive data has the potential to increase the patient-specific understanding of the AV node during AF, which in turn can be used to enhance informatics approaches for the next generation of personalized medicine. The two most dominant properties of the AV node, the RP and CD, together with the ratio of impulses conducted through the different pathways, have the potential to increase the understanding of the AV node and its function during AF.
Due to the physiological differences between the effect of \(\beta\)-blockers and calcium channel blockers, where \(\beta\)-blockers reduce the effect of the sympathetic nervous system, our hypothesis was that \(\beta\)-blockers could exhibit an increased effect when variations in the AV node properties are prominent since this would indicate a larger influence of the autonomic nervous system. The population-averaged trends (Figure 5)
Fig. 4: The estimated RP (top) and CD (middle) for \(\boldsymbol{\hat{\phi}}_{max}(pat,s)\) (dotted) as well as \(\boldsymbol{\hat{\phi}}_{\mathrm{5}}(pat,s)\) and \(\boldsymbol{\hat{\phi}}_{\mathrm{95}}(pat,s)\) (filled) for the FP (blue) and SP (red), together with the SP ratio (bottom), are shown for patient B, marked with a red circle in Figure 6.
show an increase in RP and a slight decrease in CD during nighttime compared to daytime, suggesting that the decreased sympathetic activity during nighttime affects the RP and CD. However, no correlation was found between diurnal variations in AV properties and reduction in heart rate during treatment with \(\beta\)-blockers.
However, a potential association between the short-term variability and the treatment outcome with metoprolol was found. The findings depicted in Figure 6 demonstrate a moderate correlation between \(\overline{\Delta KS}\) and the change in heart rate (\(\Delta HR\)) in the RP and CD for the FP for metoprolol, but not for any other drugs. The lack of correlation between \(\Delta HR\) after treatment with carvedilol (also a \(\beta\)-blocker) and \(\overline{\Delta KS}\) could potentially be attributed to its modest overall effect observed in the RATAF study, likely stemming from its rapid elimination as acknowledged in [40]. Additional studies are needed to confirm the possible relationship, suggested by this analysis, between short-term variation in the RP and CD of the FP and the outcome of treatment with metoprolol.
The possibility to predict \(\Delta HR\) for the different rate control drugs used in this study was evaluated using three feature-based regression models and three different architectures of a convolutional neural network (Section III-B). With a resulting average MSE higher than the variance of \(\Delta HR\) for the population, it appears impossible to predict \(\Delta HR\) with any certainty in the present data set. Either there is not enough information relevant for predicting the heart rate reduction after drug treatment in the AV node property trends - possibly due to the 10-minute resolution, limiting the information about autonomic regulation - or the data set size of 51 patients is too small given the inter-individual variability present in the measurements.
Prior iterations of the model and framework focused on estimating the model parameter trends rather than the patient-specific property trends of the AV node [1]. This approach imposed limitations on the interpretability of the results, since the interpretation of the model parameters in terms of common cardiology terminology such as RP and CD is not straightforward. In contrast, the current work introduces a novel methodology that enables the estimation of the RP and CD for each ECG segment individually, facilitating a more comprehensible and interpretable analysis. The ability to derive such estimates is vital as it allows for effective communication of the analysis results. Furthermore, this advancement in methodology opens up new avenues for gaining a deeper understanding of the AV node and its diurnal and short-term variations.
The estimation of the posterior by obtaining a range of plausible values, as opposed to relying on a point estimate of the AV node properties, offers notable advantages. For example, the credibility region for \(R^{FP}\) in Figure 4 is very broad during most segments at nighttime, reflecting a high uncertainty. In scenarios where the extent of the uncertainty is unknown, these uncertain estimates have the potential to influence decision-making processes or further analysis of the trends. As a result, the usefulness and reliability of these estimates may be decreased, emphasizing the need for a posterior estimation approach.
It has previously been shown that a GA can estimate the time-varying network model parameters during 24 hours [1]. However, in order to estimate the posterior distribution, the ABC approach was necessary. The ABC approach has in recent years been used for the personalization of the electrophysiological properties in cardiac models [41]. Although ABC approaches are generally computationally expensive [27], starting in a promising area of the model parameter space, derived from the GA results, reduced the computation time by a factor of around 50 (data not shown). The GA was also used to decide on a reasonable threshold level for the ABC PMC algorithm, which is not straightforward since imperfections in the model make certain RR series more challenging to replicate
Fig. 5: The average RP (top) and CD (middle) for \(\boldsymbol{\hat{\phi}}_{max}(pat,s)\) (dotted) as well as \(\boldsymbol{\hat{\phi}}_{5}(pat,s)\) and \(\boldsymbol{\hat{\phi}}_{95}(pat,s)\) (filled) for the FP (blue) and SP (red), together with the mean (black, dotted) and standard deviation (black, filled) of the SP ratio (bottom).
than others, resulting in a higher average \(\epsilon\). Hence, an \(\epsilon\) value corresponding to a good fit for one RR interval series could correspond to a poor fit for another, making thresholds very data-dependent. Using the GA to find the threshold levels ensures a reasonable threshold level specified for each data segment.
### _Study Limitations and Future Perspectives_
The estimated RP and CD have not been validated against intracardiac measurements, since obtaining such measurements during AF - if at all possible - would be very difficult and time-consuming. The average RP and CD for the two pathways can however be compared with invasive electrophysiological measurements of the AV node from two patients with paroxysmal supraventricular tachycardia and evidence of dual AV nodal conduction found in the literature [42]. The two patients had an RP in the FP of 820 ms and 495 ms; an RP in the SP of 540 ms and 414 ms; a CD in the FP of 125 ms and 150 ms; and a CD in the SP of 500 ms and 300 ms. Comparing these values to the daytime estimates seen in Table II, it is evident that the measured values for the RP and CD in both pathways are within the range of our estimated values. It should be noted that the measured functional RP values come from an S1-S2 protocol during sinus rhythm, thus the comparison is not trivial. The functional RP is the smallest AA-interval preceding a conducted impulse. It is however still dependent on the previous pacing frequency, which is not well-defined during AF. Nevertheless, since AF leads to high frequencies, the RP should be reasonably close to the functional RP. In addition, the estimated CD from our model and framework shown in Table II corresponds to the peak of the probability density function of all CDs in each pathway multiplied by 10. Hence, it differs slightly from the measured CD, since it also captures CDs corresponding to impulses that are blocked within the node.
In this study, short-time variability was estimated as the difference between adjacent 10-minute intervals. However, limiting the short-time variability to ten minutes also limits the information about the autonomic nervous system - which is known to operate on a higher resolution - to a ten-minute resolution. Hence, improving the time resolution of the analysis has the possibility to increase the information extracted by the model and framework, which could improve the results. Furthermore, to extract even more information about the impact of the autonomic nervous system on the AV node, an extension of the model has been proposed in [43]. A similar framework to the one presented in this work could be employed for that model to estimate model parameters and simulate the RP and CD. This could further refine the estimates and thus the information about the AV node.
Moreover, analyzing the RP and CD trends for all the patients, a high inter-individual variability with a wide range of diurnal and short-time variability could be seen, likely due to the inherent individual differences. This, in combination with the relatively low number of patients (51), indicates that the results in this paper should be verified in a larger study.
## V Conclusion
We have proposed a novel framework for estimating patient-specific 24-hour trends of the RP and CD in the FP and SP of the AV node by mapping estimated model parameters. These estimates include the full posterior of the RP and CD and could be estimated using only non-invasive data. Additionally, a correlation between short-term variability in both the RP and CD for the FP and drug-induced changes to the heart rate was found. The individual estimates of AV node properties offer patient-specific trends in RP and CD, which may have the potential to assist in treatment selection.
Fig. 6: Scatter plot of the 24-hour \(\Delta HR\) and \(\overline{\Delta KS}\) for the \(R^{FP}\) (left) and \(D^{FP}\) (right) for the four drugs, with patient A (as shown in Figure 3) marked with black and patient B (as shown in Figure 4) marked with red.
## VI Conflict of Interest Statement
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
## VII Author Contributions
MK, FS, and MW contributed to the design and conception of the study. SU performed the clinical study. FS was responsible for estimating the RR interval series and AFR trends from the ECG. MK wrote the manuscript, designed the genetic algorithm, the approximate Bayesian computation algorithm, and the model reduction, with advice, suggestions, and supervision from FS and MW. SU and PP analyzed and interpreted the results from a medical viewpoint. FS and MW supervised the project and reviewed the manuscript during the writing process. All authors contributed to the manuscript revision, read, and approved the submitted version.
## VIII Funding
This work was supported by the Swedish Foundation for Strategic Research (Grant FID18-0023), the Swedish Research Council (Grant VR2019-04272), and the Crafoord Foundation (Grant 20200605).
## IX Data Availability Statement
The estimated AV node properties \(\mathbf{\hat{\Phi}}(pat,s)\) supporting the conclusions for this article will be available from MK upon request. The measured data are owned by Vestre Viken Hospital Trust, and requests for access can be made to SU. The code for the model together with a user example can be found at https://github.com/FraunhoferChalmersCentre/AV-node-model.
|
2301.07300 | Two New Upper Bounds for the Maximum k-plex Problem | A k-plex in a graph is a vertex set where each vertex is non-adjacent to at
most k vertices (including itself) in this set, and the Maximum k-plex Problem
(MKP) is to find the largest k-plex in the graph. As a practical NP-hard
problem, MKP has many important real-world applications, such as the analysis
of various complex networks. Branch-and-bound (BnB) algorithms are a type of
well-studied and effective exact algorithms for MKP. Recent BnB MKP algorithms
involve two kinds of upper bounds based on graph coloring and partition,
respectively, that work in different perspectives and thus are complementary
with each other. In this paper, we first propose a new coloring-based upper
bound, termed Relaxed Graph Color Bound (RelaxGCB), that significantly improves
the previous coloring-based upper bound. We further propose another new upper
bound, termed RelaxPUB, that incorporates RelaxGCB and a partition-based upper
bound in a novel way, making use of their complementarity. We apply RelaxGCB
and RelaxPUB to state-of-the-art BnB MKP algorithms and produce eight new
algorithms. Extensive experiments using diverse k values on hundreds of
instances based on dense and massive sparse graphs demonstrate the excellent
performance and robustness of our proposed methods. | Jiongzhi Zheng, Mingming Jin, Kun He | 2023-01-18T04:31:40Z | http://arxiv.org/abs/2301.07300v3 | # Relaxed Graph Color Bound for the Maximum \(k\)-plex Problem
###### Abstract
As a relaxation of the clique, a \(k\)-plex of a graph is a vertex set that each vertex is not connected with at most \(k\) vertices of this set. Given an undirected graph, the Maximum \(k\)-plex Problem (M\(k\)P) aims to find its largest \(k\)-plex. Branch and bound algorithms are a type of well-studied and effective method for exact M\(k\)P solving, whose performance depends heavily on the quality of the upper bounds. In this paper, we investigate the relaxation properties of \(k\)-plex and propose an effective upper bound called Relaxed Graph color Bound (RGB) for the M\(k\)P. To describe and calculate RGB, we propose a new quasi-independent set structure that focuses on the number of conflict vertices. We combine RGB with two of the state-of-the-art branch and bound M\(k\)P algorithms, Maplex and KpLeX. Extensive experiments on real-world benchmarks, DIMACS benchmarks, and random graphs show the excellent performance of our proposed method over the state-of-the-art algorithms.
## 1 Introduction
Given an undirected graph \(G=(V,E)\), a clique in the graph is a set of vertices that are pairwise connected, and a \(k\)-plex [10] is a set of vertices \(S\subseteq V\) that each vertex \(v\in S\) is not connected with at most \(k\) vertices in \(S\). The Maximum Clique Problem (MCP) is to find the largest clique in \(G\), and the Maximum \(k\)-plex Problem (M\(k\)P) is to find the largest \(k\)-plex in \(G\). Obviously, \(k\)-plex is a relaxation clique structure, i.e., a clique is a 1-plex, and the M\(k\)P is a relaxation problem of the MCP.
The MCP is a famous and fundamental NP-hard problem, and the clique model has been widely investigated in the past decades. Applications of clique and the MCP involve many real-world domains, such as social networks [13], data mining [14], bioinformatics [15], and wireless networks [11]. However, in many real-world applications, such as social networks [10], community detection [12, 13, 14], and biological networks [15], dense subgraphs do not need to be restrictive cliques but allow missing a few connections. To address these situations, investigating relaxation clique structures like \(k\)-plex is important and necessary. Thus, studies related to \(k\)-plex have sustainably grown recently [1, 1, 1, 1, 16], and this paper mainly focuses on the exact branch and bound (BnB) methods for the M\(k\)P.
Another related structure is the independent set. An independent set in a graph is a set of vertices where any two vertices are non-connected. The independent set and clique structures are closely related, because solving the MCP in a graph \(G\) is equivalent to finding the maximum independent set in its complementary graph \(\overline{G}\), and a clique can contain at most one vertex of each independent set. Note that graph coloring is a general strategy to partition the vertices in a graph into independent sets. Based on these properties, many BnB methods use graph coloring to calculate the upper bounds when solving the MCP [12, 1, 13, 14] or its relaxation problems such as the M\(k\)P [15] and the maximum \(k\)-defective clique problem [1].
Among these methods, Zhou et al. [2] propose a graph color bound for the M\(k\)P. The main idea is that during the BnB process, an independent set \(I\) consisting of the candidate vertices with respect to the current growing partial solution \(S\) can provide at most \(min\{|I|,k\}\) new vertices for \(S\). We observe that, however, such a bound is not tight enough due to the relaxation properties of \(k\)-plex. On the one hand, since \(k\)-plex is a relaxation structure of clique, a set \(I\) that can provide at most \(min\{|I|,k\}\) new vertices for \(S\) does not need to be an independent set, but can be a quasi-independent set with at most \(k\) conflict vertices (i.e., vertices connected with edges). On the other hand, since the candidate vertices might not be connected with some vertices in \(S\), an independent set \(I\) consisting of the candidate vertices actually can not provide \(k\) new vertices for \(S\) sometimes even when \(|I|>k\).
In this paper, we investigate the above properties of \(k\)-plex and propose several new upper bounds. We propose a new quasi-independent set structure, called \(r\)-Vertices Quasi-Independent Set (\(r\)VQIS), that allows at most \(r\) conflict vertices to help describe the upper bounds. Related studies about
the quasi-independent set structure mainly focus on the number or density of connected edges [1, 1, 22, 13]. To our knowledge, this is the first time a quasi-independent set structure that focuses on conflict vertices has been studied, and this paper shows that the \(r\)VQIS structure fits well with the M\(k\)P. Experimental results show that all the proposed new upper bounds outperform the graph color bound [1]. Among them, the Relaxed graph Color Bound (RGB) combines the advantages of the others and exhibits the best performance.
Recently, many efficient BnB algorithms for the M\(k\)P [1, 22, 23, 14] have been proposed with many effective reduction rules and upper bounds. Among them, the Maplex [1] and KpLeX [14] algorithms are the two most recent and best-performing ones. Maplex proposes the second-order reduction technique and the graph color bound mentioned above. KpLeX proposes an effective upper bound that is calculated by partitioning the candidate vertices according to the connections between them and the vertices in the current partial solution, which is the current best-performing upper bound for the M\(k\)P.
We apply our proposed upper bound to Maplex and KpLeX by replacing the graph color bound in Maplex with RGB and adding RGB into KpLeX as an additional bound. Experimental results show that Maplex can be significantly improved by RGB, and RGB shows excellent complementarity with the effective upper bound proposed in KpLeX, especially for the M\(k\)P instances with larger \(k\) values (such as \(k\geq 10\)), i.e., instances that are more relaxed.
The main contributions of this work are as follows:
* We investigate the relaxation properties of \(k\)-plex and then propose several new upper bounds. Each of them can improve the existing graph color bound. The best one, RGB, can significantly improve Maplex and shows excellent complementarity with the best-performing upper bound proposed in KpLeX, helping KpLeX reduce branches by tens of times in solving some instances.
* We suggest some ways to utilize the relaxation properties of \(k\)-plex. Such methods and ideas may attract more research in the field of solving MCP relaxation problems, such as the M\(k\)P, the maximum \(k\)-defective clique problem, the maximum quasi-clique problem, etc.
* We propose a new quasi-independent set structure, called \(r\)VQIS, that restricts the number of conflict vertices not exceeding a given integer \(r\). We show that \(r\)VQIS fits well with the M\(k\)P. The proposed new quasi-independent set structure may also attract future studies.
## 2 Preliminaries
This section first introduces the brief process of the general BnB algorithms for the M\(k\)P and some common concepts used in the algorithms, and then introduces some related definitions that will be used in the rest of this paper.
### General Branch and Bound Algorithms
During the course of a general BnB algorithm for the M\(k\)P, a lower bound \(lb\) for the size of the maximum \(k\)-plex is maintained. The value of \(lb\) is usually initialized to be the size of the \(k\)-plex obtained by some heuristic algorithms [14, 15], and updated once a larger \(k\)-plex is found. A general BnB algorithm for the M\(k\)P usually contains the preprocessing stage and the BnB stage. In the preprocessing stage, the algorithm uses some reduction rules [22, 14, 15] to remove vertices that are impossible to belong to the \(k\)-plexes with sizes larger than \(lb\) in the original graph.
In the BnB stage, the algorithm actually traverses the depth-first search tree to find the optimal solution. During the process, the algorithm always maintains two sets of vertices, the current growing partial solution \(S\), and its corresponding candidate set \(C\) that contains vertices that might be added into \(S\) to obtain a larger \(k\)-plex. Once the algorithm selects the branching vertex \(v\) to be added into \(S\) from \(C\), it will calculate an upper bound \(ub\) for the size of the maximum \(k\)-plex that can be extended from \(S\cup\{v\}\). The branch of vertex \(v\) will be pruned if \(ub\leq lb\). Obviously, a tight upper bound can help the algorithm prune more branches and reduce the calculation time.
### Related Definitions
Given an undirected graph \(G=(V,E)\), where \(V\) is a set of \(n\) vertices and \(E\) a set of \(m\) edges, the density of \(G\) is \(2m/(n(n-1))\), \(N(G,v)\) represents the set of neighbors (i.e., connected vertices) of vertex \(v\) in \(G\). Given an instance \(G=(V,E)\) and a growing partial solution \(S\subseteq V\), we define \(\omega_{k}(G,S)\) as the size of the maximum \(k\)-plex in \(G\) that includes all vertices in \(S\), and \(\delta(S,v)=|S\backslash N(G,v)|\) as the number of non-neighbors in \(S\) of vertex \(v\). Moreover, given a vertex set \(P\subseteq V\), we define \(\Delta_{min}(S,P)=\delta(S,p)\), where \(p=\operatorname*{arg\,min}_{v\in P}\delta(S,v)\). That is, \(\Delta_{min}(S,P)\) represents the minimum number of non-neighbors in \(S\) for vertices in \(P\). Similarly, we define \(\Delta_{max}(S,P)\) as the maximum number of non-neighbors in \(S\) for vertices in \(P\).
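To make these definitions concrete, a small sketch follows, assuming the graph is given as a dictionary adj mapping each vertex to its set of neighbors; the function names are ours.

```
def delta(adj, S, v):
    """delta(S, v): number of vertices of S that are not neighbors of v
    (v itself is counted when v belongs to S and has no self-loop)."""
    return sum(1 for u in S if u not in adj[v])

def delta_min(adj, S, P):
    """Minimum number of non-neighbors in S over all vertices of P."""
    return min(delta(adj, S, v) for v in P)

def delta_max(adj, S, P):
    """Maximum number of non-neighbors in S over all vertices of P."""
    return max(delta(adj, S, v) for v in P)
```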
## 3 Relaxed Graph Color Bound
This section proposes several new upper bounds that improve the graph color bound [15] by considering the relaxation properties of \(k\)-plex. To define these upper bounds, we propose a new quasi-independent set structure, called \(r\)-Vertices Quasi-Independent Set (\(r\)VQIS). Among these proposed upper bounds, the Relaxed Graph color Bound (RGB) combines the advantages of the others, and we propose a novel Relaxed Graph color Algorithm (RGA) to calculate RGB. In the following, we first present the definition of the \(r\)VQIS structure, then review the graph color bound and introduce the proposed new upper bounds, and finally introduce the proposed algorithm, RGA, for calculating RGB.
Our proposed \(r\)VQIS structure fits well with the M\(k\)P and is important to our proposed new upper bounds. The \(r\)VQIS structure focuses on the number of conflict vertices, i.e., vertices that are connected with edges. It restricts the number of
conflict vertices not being larger than the given integer \(r\). The formal definition of \(r\)VQIS is as follows.
**Definition 1** (\(r\)-Vertices Quasi-Independent Set).: _Given an undirected graph \(G=(V,E)\) and a positive integer \(r\), a set of vertices \(I\subseteq V\) is an \(r\)-Vertices Quasi-Independent Set if \(\sum_{v\in I}\left[|N(G,v)\cap I|>0\right]\leq r\), where \([\cdot]\) is the Iverson bracket, which returns 1 if \(\cdot\) is true, and otherwise 0._
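A direct check of Definition 1 could look as follows, using the same adj convention as in the sketch above; the function name is ours.

```
def is_r_vqis(adj, I, r):
    """Check Definition 1: I is an r-VQIS if at most r of its vertices
    have at least one neighbor inside I (conflict vertices)."""
    conflict = sum(1 for v in I if any(u in adj[v] for u in I if u != v))
    return conflict <= r
```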
### New Upper Bounds
Before introducing our proposed new upper bounds, we first review the graph color bound [20], which could help better understand our improvements. Given a growing partial solution \(S\) and its corresponding candidate vertex set \(C\), the graph color bound claims that an independent set \(I\subseteq C\) can provide at most \(min\{|I|,k\}\) new vertices for \(S\). We argue that the graph color bound is not tight enough and can be improved by the following propositions.
**Proposition 1** (\(k\)VQIS-Bound).: _Given an undirected graph \(G=(V,E)\), a growing partial solution \(S\subseteq V\) and its corresponding candidate vertex set \(C\subseteq V\backslash S\), if \(C\) can be partitioned into \(c\) disjoint \(k\)VQISs, \(I_{1},\cdots,I_{c}\), then \(|S|+\sum_{i=1}^{c}min\{|I_{i}|,k\}\) is an upper bound of \(\omega_{k}(G,S)\)._
Proof.: For a \(k\)VQIS \(I_{i}\in\{I_{1},\cdots,I_{c}\}\), we only need to discuss the situation that \(|I_{i}|>k\) (the same for the following proofs). Suppose \(I_{i}\) provides \(k+1\) new vertices for \(S\); then there must be a selected vertex with \(k+1\) non-neighbors (including itself) in \(I_{i}\), which violates the restriction of \(k\)-plex. Thus, \(I_{i}\) can provide at most \(k\) new vertices for \(S\).
**Proposition 2** (\(\Delta_{min}\)-Bound).: _Given an undirected graph \(G=(V,E)\), a growing partial solution \(S\subseteq V\) and its corresponding candidate vertex set \(C\subseteq V\backslash S\), if \(C\) can be partitioned into \(c\) disjoint independent sets \(I_{1},\cdots,I_{c}\), then \(|S|+\sum_{i=1}^{c}min\{|I_{i}|,k-\Delta_{min}(S,I_{i})\}\) is an upper bound of \(\omega_{k}(G,S)\)._
Proof.: Each independent set \(I_{i}\in\{I_{1},\cdots,I_{c}\}\) can only provide at most \(k-\Delta_{min}(S,I_{i})\) new vertices for \(S\), because each vertex \(v\in I_{i}\) has at least \(\Delta_{min}(S,I_{i})\) non-neighbors in \(S\).
The \(\Delta_{min}\)-Bound can be obtained easily on top of the graph color bound. However, it is still not tight in some cases, and can be improved by look-ahead. For example, suppose \(I\subseteq C\) is an independent set and \(v\) is a vertex in \(I\), then an upper bound of the maximum number of new vertices that \(I\) can provide for \(S\) is \(min\{|I|,k-\Delta_{min}(S,I),1+k-\Delta_{min}(S,I\backslash\{v\})\}\). Since look-ahead is time-consuming, we propose another proposition as follows to obtain a tighter bound, which is based on the value of \(\Delta_{max}(S,I)\).
**Proposition 3** (\(\Delta_{max}\)-Bound).: _Given an undirected graph \(G=(V,E)\), a growing partial solution \(S\subseteq V\) and its corresponding candidate vertex set \(C\subseteq V\backslash S\), if \(C\) can be partitioned into \(c\) disjoint independent sets \(I_{1},\cdots,I_{c}\) such that each independent set \(I_{i}\) has at most \(k-\Delta_{max}(S,I_{i})\) vertices with less than \(\Delta_{max}(S,I_{i})\) non-neighbors in \(S\), then \(|S|+\sum_{i=1}^{c}min\{|I_{i}|,k-\Delta_{max}(S,I_{i})\}\) is an upper bound of \(\omega_{k}(G,S)\)._
Proof.: For an independent set \(I_{i}\in\{I_{1},\cdots,I_{c}\}\), suppose \(I_{i}\) provides \(k-\Delta_{max}(S,I_{i})+1\) new vertices for \(S\). Then, there must be a selected vertex \(v\) with \(\Delta_{max}(S,I_{i})\) non-neighbors in \(S\) and \(k-\Delta_{max}(S,I_{i})+1\) non-neighbors (including itself) in \(I_{i}\), which violates the restriction of \(k\)-plex. Thus, \(I_{i}\) can provide at most \(k-\Delta_{max}(S,I_{i})\) new vertices for \(S\).
Moreover, the \(\Delta_{max}\)-Bound can be further improved by relaxing the independent set structure to the \(k\)VQIS structure, which results in the proposed RGB.
**Proposition 4** (Rgb).: _Given an undirected graph \(G=(V,E)\), a growing partial solution \(S\subseteq V\) and its corresponding candidate vertex set \(C\subseteq V\backslash S\), if \(C\) can be partitioned into \(c\) disjoint \(k\)VQISs, \(I_{1},\cdots,I_{c}\), such that each \(k\)VQIS \(I_{i}\) has at most \(k-\Delta_{max}(S,I_{i})\) vertices with less than \(\Delta_{max}(S,I_{i})\) non-neighbors in \(S\), and each conflict vertex in \(I_{i}\) has less than \(\Delta_{max}(S,I_{i})\) non-neighbors in \(S\), then \(|S|+\sum_{i=1}^{c}min\{|I_{i}|,k-\Delta_{max}(S,I_{i})\}\) is an upper bound of \(\omega_{k}(G,S)\)._
Actually, RGB only adds some relaxations to the \(\Delta_{max}\)-Bound, i.e., relaxing the vertices in \(I_{i}\) with less than \(\Delta_{max}(S,I_{i})\) non-neighbors in \(S\) to conflict vertices. Proposition 4 can be proved easily by referring to the proof of Proposition 3.
To show how these bounds are calculated and compare their performance, we provide a simple 4-plex instance \(G\) in Figure 1. The growing 4-plex \(S\) contains vertices \(\{v_{1},v_{2},v_{3},v_{4}\}\), and the candidate set \(C\) contains vertices \(\{v_{5},v_{6},v_{7},v_{8},v_{9}\}\). Maplex [20] will partition the vertices of \(C\) into two colors (independent sets), such as \(I_{1}=\{v_{5},v_{7},v_{8},v_{9}\}\) and \(I_{2}=\{v_{6}\}\), and the graph color bound of \(\omega_{4}(G,S)\) is \(|S|+\sum_{i=1}^{2}min\{|I_{i}|,4\}=9\). Since the candidate set \(C\) is actually a 4VQIS, the \(k\)VQIS-Bound of \(\omega_{4}(G,S)\) is \(|S|+min\{|C|,4\}=8\). The \(\Delta_{min}\)-Bound of \(\omega_{4}(G,S)\) is \(|S|+\sum_{i=1}^{2}min\{|I_{i}|,4-\Delta_{min}(S,I_{i})\}=8\). Since independent sets \(I_{1}\) and \(I_{2}\) satisfy the conditions in Proposition 3, the \(\Delta_{max}\)-Bound of \(\omega_{4}(G,S)\) is \(|S|+\sum_{i=1}^{2}min\{|I_{i}|,4-\Delta_{max}(S,I_{i})\}=7\). Since 4VQIS \(C\) satisfies the conditions in Proposition 4, the RGB of \(\omega_{4}(G,S)\) is \(|S|+min\{|C|,4-\Delta_{max}(S,C)\}=6\).
### Relaxed Graph Color Algorithm
Given the current partial solution \(S\) and its corresponding candidate set \(C\), we propose a novel Relaxed Graph color
Figure 1: An example for comparing the upper bounds.
Algorithm (RGA) to partition \(C\) into \(k\)VQISs that satisfy the conditions in Proposition 4, so as to calculate the upper bound, RGB, of \(\omega_{k}(G,S)\). The procedure of RGA is shown in Algorithm 1. The algorithm first partitions \(C\) into \(k\) subsets such that the candidate vertices with the same number of non-neighbors in \(S\) belong to the same subset (line 2). In this way, the equation \(\delta(S,v)=\Delta_{min}(S,C_{i})=\Delta_{max}(S,C_{i})=i\) holds for each subset \(C_{i}\) and each vertex \(v\in C_{i}\).
```
Input: A graph \(G=(V,E)\), the current partial solution \(S\), the candidate set \(C\), the value \(k\)
Output: RGB of \(\omega_{k}(G,S)\)
1  Initialize the upper bound \(ub\leftarrow|S|\);
2  Partition \(C\) into \(k\) subsets \(C_{0},\cdots,C_{k-1}\) such that \(C_{i}=\{v\in C|\delta(S,v)=i\}\), \(i\in\{0,\cdots,k-1\}\);
3  for \(i\gets k-1:0\) do
4      if \(C_{i}=\emptyset\) then continue;
5      Partition vertices of \(C_{i}\) into \(c\) independent sets \(I_{1},\cdots,I_{c}\) by a greedy coloring heuristic;
6      for \(j\gets 1:c\) do
7          \(C^{\prime}\leftarrow\bigcup_{l=0}^{i-1}C_{l}\), \(P\leftarrow\emptyset\);
8          for \(l\gets 1:|C^{\prime}|\) do
9              \(v\leftarrow\) the \(l\)-th vertex in \(C^{\prime}\);
10             if \(N(G,v)\cap I_{j}=\emptyset\) then
11                 \(P\leftarrow P\cup\{v\}\);
12                 \(C_{\delta(S,v)}\leftarrow C_{\delta(S,v)}\backslash\{v\}\);
13                 if \(|P|=k-i\) then break;
14         \(ub\gets ub+min\{|I_{j}\cup P|,k-i\}\);
15 return \(ub\);
```
**Algorithm 1**Relaxed Graph Color Algorithm
After that, RGA traverses the subsets from \(C_{k-1}\) to \(C_{0}\) (line 3), and partitions the vertices in \(C_{i}\) into \(c\) disjoint independent sets \(I_{1},\cdots,I_{c}\) by the greedy coloring heuristic [14, 15] (line 5), which colors a vertex sequence by assigning each vertex the feasible color with the smallest index. An upper bound of the maximum number of vertices that each independent set \(I_{j}\subseteq C_{i}\) can provide for \(S\) is \(k-i\). Obviously, the smaller the value of \(i\), the larger the upper bound, i.e., \(k-i\). Therefore, to avoid a large upper bound caused by the vertices with fewer non-neighbors in \(S\), RGA tries to insert some (at most \(k-i\)) conflict vertices with fewer non-neighbors in \(S\) into each independent set \(I_{j}\subseteq C_{i}\) (lines 7-12). The set of inserted vertices is recorded as \(P\) in Algorithm 1. Finally, set \(I_{j}\cup P\) is actually a \(k\)VQIS that satisfies the conditions in Proposition 4. The upper bound \(ub\) will be updated once a \(k\)VQIS is fixed (line 14).
In summary, RGA suggests a way to utilize the relaxation properties of \(k\)-plex to obtain a tight upper bound. It tries to combine vertices with fewer non-neighbors in \(S\) and vertices with more non-neighbors in \(S\) in a \(k\)VQIS \(I\), and use the vertices with more non-neighbors in \(S\) to determine the upper bound. In this way, the influences of vertices with fewer non-neighbors in \(S\) on the upper bound can be eliminated, and the quality of the upper bound can be guaranteed.
The main time-consuming procedures in RGA include the partition of \(C\) (line 2), the greedy coloring process (line 5), and the intersection operation between \(N(G,v)\) and \(I_{j}\) (line 10). Their time complexities are \(O(|C|)\), \(O(|C_{i}|^{2})\), and \(O(|V|)\) (in practice much smaller than \(O(|V|)\) thanks to the bitset encoding method [16, 15]), respectively.
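To make the procedure concrete, below is a minimal Python sketch of RGA that mirrors Algorithm 1 line by line. It is purely illustrative: the graph is assumed to be given as an adjacency-set dictionary `adj` over integer vertex ids, `delta(S, v)` is assumed to return the number of non-neighbors of `v` in `S`, and names such as `rga_upper_bound` and `greedy_color` are ours, not the authors' (their implementation is in C++ with bitset encodings).

```python
def greedy_color(vertices, adj):
    """Greedily partition `vertices` into independent sets (color classes),
    assigning each vertex the first class containing none of its neighbors."""
    classes = []
    for v in vertices:
        for cls in classes:
            if not (adj[v] & cls):          # v has no neighbor in this class
                cls.add(v)
                break
        else:
            classes.append({v})
    return classes

def rga_upper_bound(adj, S, C, k, delta):
    """Relaxed Graph color Bound (RGB) for omega_k(G, S), following Algorithm 1."""
    ub = len(S)                                            # line 1
    subsets = [set() for _ in range(k)]                    # line 2: C_i = {v : delta(S, v) = i}
    for v in C:
        subsets[delta(S, v)].add(v)
    for i in range(k - 1, -1, -1):                         # line 3
        if not subsets[i]:                                 # line 4
            continue
        for I_j in greedy_color(sorted(subsets[i]), adj):  # lines 5-6
            C_prime = set().union(*subsets[:i])            # line 7
            P = set()
            for v in list(C_prime):                        # lines 8-13
                if not (adj[v] & I_j):                     # line 10: v has no neighbor in I_j
                    P.add(v)
                    subsets[delta(S, v)].discard(v)        # line 12
                if len(P) == k - i:                        # line 13
                    break
            ub += min(len(I_j | P), k - i)                 # line 14
    return ub                                              # line 15
```

Note that indexing `subsets` by `delta(S, v)` is valid because every candidate vertex must satisfy \(\delta(S,v)\leq k-1\) to remain feasible.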
## 4 Experimental Results
This section presents experimental results to evaluate the performance of the proposed new upper bounds, mainly RGB. We select two of the state-of-the-art BnB M\(k\)P algorithms, Maplex\({}^{1}\) [15] and KpLeX\({}^{2}\) [16], as the baseline algorithms, since they are more efficient than other M\(k\)P methods [1, 14, 17] as reported in the literature. We replace the graph color bound in Maplex with RGB and denote the resulting algorithm as Maplex-RGB. We add RGB into KpLeX as an additional bound and denote the resulting algorithm as KpLeX-RGB. The source codes of Maplex-RGB and KpLeX-RGB are available at [https://github.com/JHL-HUST/RGB](https://github.com/JHL-HUST/RGB).
Footnote 1: [https://github.com/ini111/Maplex](https://github.com/ini111/Maplex)
All the algorithms in the experiments were implemented in C++ and run on a server with an Intel Xeon E5-2650 v3 2.30 GHz CPU, running the Ubuntu 16.04 Linux operating system. In the following, we first present the comparison of Maplex-RGB and KpLeX-RGB with Maplex and KpLeX in two ways. The first is to compare the four algorithms using small values of \(k\), i.e., \(k\in[2,6]\) as in [16], to show the general performance of RGB. The second is to compare KpLeX-RGB and KpLeX using larger values of \(k\), i.e., \(k\in[7,15]\), to further show the complementarity of RGB and the upper bound proposed in KpLeX. Then, we present the ablation study that analyzes the performance of the other proposed upper bounds described in Section 3.1.
### Evaluation of the General Performance
We compare the Maplex-RGB, KpLeX-RGB, Maplex, and KpLeX algorithms in solving the same benchmarks used in [16], including the 2nd DIMACS benchmark\({}^{3}\) that contains 80 graphs with up to 4,000 vertices and densities ranging from 0.03 to 0.99, the Real-world benchmark\({}^{4}\) that contains 139 real-world sparse graphs from the Network Data Repository [14], the 10th DIMACS benchmark\({}^{5}\) that contains 82 graphs with up to \(2\times 10^{7}\) vertices, and 120 randomly generated graphs with 1,000 vertices and densities ranging from 0.05 to 0.3 (we generate 20 graphs for each density \(p=[0.05,0.1,0.15,0.2,0.25,0.3]\) by connecting any two vertices with probability \(p\)). For each graph, we generate five M\(k\)P instances with different \(k\) values, i.e., \(k\in[2,6]\). Due to the difference between our machine and the machine used in [16], we set the cut-off time to 7,200 seconds and the maximum memory to 64 GB for each instance.
The comparison results of the algorithms on the four benchmarks are summarized in Figure 2, which shows the number of M\(k\)P instances in each benchmark solved by each algorithm within the cut-off time. The results show that Maplex-RGB significantly outperforms Maplex in solving instances of all four benchmarks. With the help of our proposed RGB, Maplex can yield better performance than the state-of-the-art KpLeX algorithm in solving instances of the 2nd DIMACS and Real-world benchmarks. Moreover, KpLeX-RGB shows worse performance than KpLeX in solving instances of the Real-world benchmark and slightly better performance than KpLeX in solving instances of the other three benchmarks. Since KpLeX-RGB only adds RGB as an additional upper bound, the number of branches KpLeX-RGB needs to solve an instance is almost always smaller than that of KpLeX. However, computing the additional RGB also takes time. Therefore, RGB sometimes helps KpLeX prune many branches and solve instances more efficiently, while at other times the extra bound computation slows the search down, which results in the complementary performance of KpLeX-RGB and KpLeX. In the next subsection, we will further analyze the complementarity of KpLeX-RGB and KpLeX under large values of \(k\).
### Further Comparing KpLeX-RGB and KpLeX
This subsection mainly compares KpLeX-RGB with KpLeX under large values of \(k\), i.e., \(k\in[7,15]\), to further evaluate the performance of RGB and its complementarity with the upper bound proposed in KpLeX. To our knowledge, this is the first time M\(k\)P algorithms have been evaluated with \(k\geq 7\).
The M\(k\)P instances used in this subsection include the 2nd DIMACS benchmark, the Real-world benchmark, and the 10th DIMACS benchmark introduced in Section 4.1. Moreover, since the randomly generated graphs with 1,000 vertices and \(k>6\) are too hard for the algorithms (as the results in Figure 2(d) show), we further generate 60 small random graphs with 100 or 200 vertices and densities ranging from 0.05 to 0.3 as in [22] (we generate 5 graphs for each number of vertices \(|V|=[100,200]\) and each density \(p=[0.05,0.1,0.15,0.2,0.25,0.3]\) by connecting any two vertices with probability \(p\)).
To make a comprehensive and clear comparison between KpLeX-RGB and KpLeX, we present three groups of results. The first group compares their general performance on the four tested benchmarks in Figure 3. The results show that KpLeX-RGB shows better performance than KpLeX in solving 2nd DIMACS instances and small random graphs, shows comparable performance to KpLeX in solving 10th DIMACS instances, and shows worse performance than KpLeX in solving Real-world instances, indicating again the complementarity of RGB and the upper bound proposed in KpLeX.

Figure 2: Comparison of Maplex-RGB, KpLeX-RGB, Maplex, and KpLeX on four tested benchmarks.
The second group presents the cactus plots of KpLeX-RGB and KpLeX in solving the small random graphs in Figure 4. In each cactus plot, the \(x\) axis is the running time (in seconds), and the \(y\) axis is the number of solved instances. We present the detailed results in solving the small random graphs because their scale is stable, and the results are continuous and clear. Due to the limited space, we only present the cactus plots with \(k=[9,10,11,12]\) that are most representative. The results show the significantly better performance of KpLeX-RGB over KpLeX in solving these random graphs under large values of \(k\).
In the last group, we select some instances with \(k=[8,10,12,14]\) from each benchmark to compare KpLeX-RGB and KpLeX in Table 1. The results are reported as the running time in seconds to solve each instance (column _Time_) and the number of branches of the entire computation (column _Branches_). The symbol '-' means the algorithm cannot solve the instance within 7,200 seconds. From the results, we can observe that the proposed RGB helps KpLeX reduce many branches when solving these instances. In particular, KpLeX-RGB shows excellent performance on graph san400-0-5-1, solving its instances within one second, while KpLeX cannot solve them within 7,200 seconds. Moreover, KpLeX-RGB explores more than 40 times fewer branches than KpLeX on graph inf-great-britain_osm with \(k=14\), which is a significant reduction. However, the improvement of KpLeX-RGB over KpLeX in running time is not as pronounced as that in the number of branches, because, as an additional upper bound, RGB prunes many branches but also costs additional computing time.
In summary, the three groups of experiments indicate the strong performance of RGB and its complementarity with the upper bound proposed in KpLeX. When solving some 2nd DIMACS instances and small random instances, RGB helps KpLeX prune many branches and solve the instances faster. When solving some Real-world instances, RGB might reduce the algorithm's efficiency. We tried to characterize which kinds of instances RGB handles well and which it does not, but did not find any regular pattern. We conjecture that RGB works best on instances whose subgraphs (i.e., candidate vertex sets) can be colored efficiently by RGA (Algorithm 1) with few colors, and we believe that a more efficient and effective coloring algorithm could further improve the performance of RGB, which might attract future studies.
| Instance | \(k\) | KpLeX-RGB Time (s) | KpLeX-RGB Branches | KpLeX Time (s) | KpLeX Branches |
| --- | --- | --- | --- | --- | --- |
| san400-0-5-1 | 8 | 0.30 | 3.31E+03 | - | - |
| | 10 | 0.11 | 3.29E+02 | - | - |
| | 12 | 0.12 | 3.15E+02 | - | - |
| | 14 | 0.11 | 3.01E+02 | - | - |
| socfb-Penn94 | 8 | 29.38 | 3.04E+06 | 20.70 | 4.90E+06 |
| | 10 | 85.39 | 9.37E+06 | 143.12 | 3.80E+07 |
| | 12 | 17.10 | 1.64E+06 | 107.53 | 2.79E+07 |
| | 14 | 79.04 | 9.06E+06 | 1143.2 | 2.85E+07 |
| inf-great-britain_osm | 8 | 6.18 | 1.18E+05 | 6.33 | 2.41E+06 |
| | 10 | 12.06 | 7.72E+06 | 35.35 | 2.21E+08 |
| | 12 | 51.85 | 6.22E+07 | 359.35 | 2.59E+09 |
| | 14 | 1439.01 | 1.94E+09 | - | - |
| Rand_100_005_5 | 8 | 0.57 | 4.51E+05 | 2.14 | 1.06E+07 |
| | 10 | 1.18 | 1.27E+06 | 2.67 | 1.44E+07 |
| | 12 | 116.66 | 1.19E+08 | 466.05 | 2.41E+09 |
| | 14 | 2317.59 | 2.11E+09 | - | - |

Table 1: Comparison of KpLeX-RGB and KpLeX on four graphs from different benchmarks, including the 2nd DIMACS graph san400-0-5-1, the Real-world graph socfb-Penn94, the 10th DIMACS graph inf-great-britain_osm, and the small random graph Rand_100_005_5.
Figure 4: Cactus plots of KpLeX-RGB and KpLeX on small random graphs with \(k=[9,10,11,12]\).
Figure 3: Comparison of KpLeX-RGB and KpLeX on four tested benchmarks.
Moreover, since RGB mainly benefits from considering the relaxation properties of \(k\)-plex, it shows better performance in solving instances with larger \(k\) values, i.e., instances that are more relaxed.
### Ablation study
Finally, we compare Maplex and several variants that replace its graph color bound with our proposed new bounds described in Section 3.1, to evaluate the performance of the proposed upper bounds and the effect of each component in the proposed RGB. We only perform ablation studies based on Maplex, not KpLeX, because using the proposed new upper bounds as additional bounds in KpLeX might not clearly distinguish the performance of each variant, while using them to replace the graph color bound in Maplex can.
We denote Maplex-\(\Delta\)min, Maplex-\(k\)VQIS, and Maplex-\(\Delta\)max as the variants of Maplex that replace its graph color bound with the \(\Delta_{min}\)-Bound, \(k\)VQIS-Bound, and \(\Delta_{max}\)-Bound, respectively. Maplex-\(\Delta\)min can be obtained by simply adjusting the value of upper bounds calculated by the graph color bound in Maplex. Maplex-\(k\)VQIS can be obtained by changing \(k-i\) to \(k\) in lines 13-14 of Algorithm 1 in Maplex-RGB. Maplex-\(\Delta\)max can be obtained by removing lines 8-13 of Algorithm 1 in Maplex-RGB.
We compare Maplex-RGB, Maplex-\(\Delta\)max, Maplex-\(k\)VQIS, Maplex-\(\Delta\)min, and Maplex in solving the same benchmarks described in Section 4.1. The results are shown in Figure 5. We can observe that Maplex can be improved by any of our proposed new upper bounds. The algorithms in descending order of performance are Maplex-RGB, Maplex-\(\Delta\)max, Maplex-\(k\)VQIS, Maplex-\(\Delta\)min, and Maplex, which is also roughly consistent with the example shown in Figure 1. Moreover, the results of Maplex-RGB, Maplex-\(\Delta\)max, and Maplex-\(k\)VQIS indicate that RGB combines the advantages of the \(k\)VQIS-Bound and the \(\Delta_{max}\)-Bound.
## 5 Conclusion
This paper investigates the relaxation properties of the \(k\)-plex structure and proposes several new upper bounds that improve the graph color bound for the NP-hard problem of M\(k\)P. To describe and calculate the upper bounds, we propose a new quasi-independent set structure, \(k\)VQIS, that restricts the maximum number of conflict vertices. Among the proposed bounds, the Relaxed graph Color Bound (RGB) combines the advantages of the others. We evaluate the performance by combining RGB with two of the state-of-the-art M\(k\)P exact algorithms, Maplex and KpLeX. Extensive experiments show that RGB significantly improves Maplex and exhibits excellent complementarity with the upper bound proposed in KpLeX. In particular, when solving some instances, RGB can help KpLeX reduce branches by tens of times and solve the problems more efficiently.
Figure 5: Comparison of Maplex-RGB, Maplex-\(\Delta\)max, Maplex-\(k\)VQIS, Maplex-\(\Delta\)min, and Maplex on four tested benchmarks.
We believe the proposed upper bounds suggest new ways to utilize the relaxation properties of \(k\)-plex. In future work, we will continue to investigate how to better exploit these relaxation properties, both for \(k\)-plex and for other clique relaxation models such as the \(k\)-defective clique and the quasi-clique.
|
2310.19694 | Convolutional State Space Models for Long-Range Spatiotemporal Modeling | Effectively modeling long spatiotemporal sequences is challenging due to the
need to model complex spatial correlations and long-range temporal dependencies
simultaneously. ConvLSTMs attempt to address this by updating tensor-valued
states with recurrent neural networks, but their sequential computation makes
them slow to train. In contrast, Transformers can process an entire
spatiotemporal sequence, compressed into tokens, in parallel. However, the cost
of attention scales quadratically in length, limiting their scalability to
longer sequences. Here, we address the challenges of prior methods and
introduce convolutional state space models (ConvSSM) that combine the tensor
modeling ideas of ConvLSTM with the long sequence modeling approaches of state
space methods such as S4 and S5. First, we demonstrate how parallel scans can
be applied to convolutional recurrences to achieve subquadratic parallelization
and fast autoregressive generation. We then establish an equivalence between
the dynamics of ConvSSMs and SSMs, which motivates parameterization and
initialization strategies for modeling long-range dependencies. The result is
ConvS5, an efficient ConvSSM variant for long-range spatiotemporal modeling.
ConvS5 significantly outperforms Transformers and ConvLSTM on a long horizon
Moving-MNIST experiment while training 3X faster than ConvLSTM and generating
samples 400X faster than Transformers. In addition, ConvS5 matches or exceeds
the performance of state-of-the-art methods on challenging DMLab, Minecraft and
Habitat prediction benchmarks and enables new directions for modeling long
spatiotemporal sequences. | Jimmy T. H. Smith, Shalini De Mello, Jan Kautz, Scott W. Linderman, Wonmin Byeon | 2023-10-30T16:11:06Z | http://arxiv.org/abs/2310.19694v1 | # Convolutional State Space Models for Long-Range Spatiotemporal Modeling
###### Abstract
Effectively modeling long spatiotemporal sequences is challenging due to the need to model complex spatial correlations and long-range temporal dependencies simultaneously. ConvLSTMs attempt to address this by updating tensor-valued states with recurrent neural networks, but their sequential computation makes them slow to train. In contrast, Transformers can process an entire spatiotemporal sequence, compressed into tokens, in parallel. However, the cost of attention scales quadratically in length, limiting their scalability to longer sequences. Here, we address the challenges of prior methods and introduce convolutional state space models (ConvSSM)1 that combine the tensor modeling ideas of ConvLSTM with the long sequence modeling approaches of state space methods such as S4 and S5. First, we demonstrate how parallel scans can be applied to convolutional recurrences to achieve subquadratic parallelization and fast autoregressive generation. We then establish an equivalence between the dynamics of ConvSSMs and SSMs, which motivates parameterization and initialization strategies for modeling long-range dependencies. The result is ConvS5, an efficient ConvSSM variant for long-range spatiotemporal modeling. ConvS5 significantly outperforms Transformers and ConvLSTM on a long horizon Moving-MNIST experiment while training \(3\times\) faster than ConvLSTM and generating samples \(400\times\) faster than Transformers. In addition, ConvS5 matches or exceeds the performance of state-of-the-art methods on challenging DMLab, Minecraft and Habitat prediction benchmarks and enables new directions for modeling long spatiotemporal sequences.
Footnote 1: Implementation available at: [https://github.com/NVlabs/ConvSSM](https://github.com/NVlabs/ConvSSM).
## 1 Introduction
Developing methods that efficiently and effectively model long-range spatiotemporal dependencies is a challenging problem in machine learning. Whether predicting future video frames [1; 2], modeling traffic patterns [3; 4], or forecasting weather [5; 6], deep spatiotemporal modeling requires simultaneously capturing local spatial structure and long-range temporal dependencies. Although there has been progress in deep generative modeling of complex spatiotemporal data [7; 8; 9; 10; 11; 12], most prior work has only considered short sequences of 20-50 timesteps due to the costs of processing long spatiotemporal sequences. Recent work has begun considering sequences of hundreds to thousands of timesteps [13; 14; 15; 16]. As hardware and data collection of long spatiotemporal sequences continue to improve, new modeling approaches are required that scale efficiently with sequence length and effectively capture long-range dependencies.
Convolutional recurrent networks (ConvRNNs) such as ConvLSTM [17] and ConvGRU [18] are common approaches for spatiotemporal modeling. These methods encode the spatial information using tensor-structured states. The states are updated with recurrent neural network (RNN) equations that use convolutions instead of the matrix-vector multiplications in standard RNNs (e.g., LSTM/GRUs [21, 22]). This approach allows the RNN states to reflect the spatial structure of the data while simultaneously capturing temporal dynamics. ConvRNNs inherit both the benefits and the weaknesses of RNNs: they allow fast, stateful autoregressive generation and an unbounded context window, but they are slow to train due to their inherently sequential structure and can suffer from the vanishing/exploding gradient problem [23].
Transformer-based methods [9, 13, 14, 24, 25, 26, 27] operate on an entire sequence in parallel, avoiding these training challenges. Transformers typically require sophisticated compression schemes [28, 29, 30] to reduce the spatiotemporal sequence into tokens. Moreover, Transformers use an attention mechanism that has a bounded context window and whose computational complexity scales quadratically in sequence length for training and inference [31, 32]. More efficient Transformer methods improve on the complexity of attention [33, 34, 35, 36, 37, 38, 39], but these methods can fail on sequences with long-range dependencies [40, 13]. Some approaches combine Transformers with specialized training frameworks to address the attention costs [13]. However, recent work in deep state space models (SSMs) [19, 41, 42, 20, 43], like S4 [19] and S5 [20], has sought to overcome attention's quadratic complexity while maintaining the parallelizability and performance of attention and the statefulness of RNNs. These SSM layers have proven to be effective in various domains such as speech [44], images [45] and video classification [45, 46]; reinforcement learning [47, 48]; forecasting [49] and language modeling [50, 51, 52, 53].
Inspired by modeling ideas from ConvRNNs and SSMs, we introduce _convolutional state space models_ (ConvSSMs), which have a tensor-structured state like ConvRNNs but a continuous-time parameterization and linear state updates like SSM layers. See Figure 1. However, there are challenges to make this approach scalable and effective for modeling long-range spatiotemporal data. In this paper, we address these challenges and provide a rigorous framework that ensures both computational efficiency and modeling performance for spatiotemporal sequence modeling. First, we discuss computational efficiency and parallelization of ConvSSMs across the sequence for scalable training and fast inference. We show how to parallelize linear convolutional recurrences using a binary associative operator and demonstrate how this can be exploited to use parallel scans for subquadratic parallelization across the spatiotemporal sequence. We discuss both theoretical and practical considerations (Section 3.2) required to make this feasible and efficient. Next, we address how to capture long-range spatiotemporal dependencies. We develop a connection between the dynamics of SSMs and ConvSSMs (Section 3.3) and leverage this, in Section 3.4, to introduce a parameterization and initialization design that can capture long-range spatiotemporal dependencies.

Figure 1: ConvRNNs [17, 18] (left) model spatiotemporal sequences using tensor-valued states, \(\mathbf{\mathcal{X}}_{k}\), and a nonlinear RNN update, \(\mathbf{G}()\), that uses convolutions instead of matrix-vector multiplications. A position-wise nonlinear function, \(\mathbf{h}()\), transforms the states into the output sequence. Deep SSMs [19, 20] (center) model vector-valued input sequences using a discretized linear SSM. The linear dynamics can be exploited to parallelize computations across the sequence and capture long-range dependencies. We introduce ConvSSMs (right) that model spatiotemporal data using tensor states, like ConvRNNs, and linear dynamics, like SSMs. We also introduce an efficient ConvSSM variant, ConvS5, that can be parallelized across the sequence with parallel scans, has fast autoregressive generation, and captures long-range dependencies.
As a result, we introduce _ConvS5_, a new spatiotemporal layer that is an efficient ConvSSM variant. It is parallelizable and overcomes difficulties during training (e.g., vanishing/exploding gradient problems) that traditional ConvRNN approaches experience. ConvS5 does not require compressing frames into tokens and provides an unbounded context. It also provides fast (constant time and memory per step) autoregressive generation compared to Transformers. ConvS5 significantly outperforms Transformers and ConvLSTM on a challenging long horizon Moving-MNIST [54] experiment requiring methods to train on 600 frames and generate up to 1,200 frames. In addition, ConvS5 trains \(3\times\) faster than ConvLSTM on this task and generates samples \(400\times\) faster than the Transformer. Finally, we show that ConvS5 matches or exceeds the performance of various state-of-the-art methods on challenging DMLab, Minecraft, and Habitat long-range video prediction benchmarks [13].
## 2 Background
This section provides the background necessary for ConvSSMs and ConvS5, introduced in Section 3.
### Convolutional Recurrent Networks
Given a sequence of inputs \(\mathbf{u}_{1:L}\in\mathbb{R}^{L\times U}\), an RNN updates its state, \(\mathbf{x}_{k}\in\mathbb{R}^{P}\), using the state update equation \(\mathbf{x}_{k}=\mathbf{F}(\mathbf{x}_{k-1},\mathbf{u}_{\mathbf{k}})\), where \(\mathbf{F}()\) is a nonlinear function. For example, a vanilla RNN can be represented (ignoring the bias term) as
\[\mathbf{x}_{k}=\tanh(\mathbf{A}\mathbf{x}_{k-1}+\mathbf{B}\mathbf{u}_{\mathbf{ k}}) \tag{1}\]
with state matrix \(\mathbf{A}\in\mathbb{R}^{P\times P}\), input matrix \(\mathbf{B}\in\mathbb{R}^{P\times U}\) and \(\tanh()\) applied elementwise. Other RNNs such as LSTM [21] and GRU [22] utilize more intricate formulations of \(\mathbf{F}()\).
_Convolutional recurrent neural networks_[17; 18] (ConvRNNs) are designed to model spatiotemporal sequences by replacing the vector-valued states and inputs of traditional RNNs with tensors and substituting matrix-vector multiplications with convolutions. Given a length \(L\) sequence of frames, \(\mathcal{U}_{1:L}\in\mathbb{R}^{L\times H^{\prime}\times W^{\prime}\times U}\), with height \(H^{\prime}\), width \(W^{\prime}\) and \(U\) features, a ConvRNN updates its state, \(\mathcal{X}_{k}\in\mathbb{R}^{H\times W\times P}\), with a state update equation \(\mathcal{X}_{k}=\mathbf{G}(\mathcal{X}_{k-1},\mathcal{U}_{k})\), where \(\mathbf{G}()\) is a nonlinear function. Analogous to (1), we can express the state update equation for a vanilla ConvRNN as
\[\mathcal{X}_{k}=\tanh(\mathcal{A}*\mathcal{X}_{k-1}+\mathcal{B}*\mathcal{U}_{ k}), \tag{2}\]
where \(*\) is a spatial convolution operator with state kernel \(\mathcal{A}\in\mathbb{R}^{P\times P\times k_{A}\times k_{A}}\) (using an [output features, input features, kernel height, kernel width] convention), input kernel \(\mathcal{B}\in\mathbb{R}^{P\times U\times k_{B}\times k_{B}}\) and \(\tanh()\) is applied elementwise. More complex updates such as ConvLSTM [17] and ConvGRU [18] are commonly used by making similar changes to the LSTM and GRU equations, respectively.
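As a concrete illustration of (2), the following assumed JAX sketch performs a single vanilla ConvRNN state update with 'same' padding and the [output features, input features, kernel height, kernel width] kernel convention used above. It is not the ConvLSTM/ConvGRU cells used in practice, and all names and shapes are illustrative.

```python
import jax
import jax.numpy as jnp

def conv_rnn_step(A, B, X_prev, U_k):
    """One vanilla ConvRNN update, Eq. (2): X_k = tanh(A * X_{k-1} + B * U_k),
    where * is a 'same'-padded spatial convolution.
    A: (P, P, kA, kA) state kernel, B: (P, U, kB, kB) input kernel,
    X_prev: (H, W, P) previous state, U_k: (H, W, U) current input frame."""
    def conv(kernel, x):
        # lax convolutions expect a batch dimension; add it and strip it afterwards.
        return jax.lax.conv_general_dilated(
            x[None], kernel, window_strides=(1, 1), padding="SAME",
            dimension_numbers=("NHWC", "OIHW", "NHWC"))[0]
    return jnp.tanh(conv(A, X_prev) + conv(B, U_k))

# Illustrative shapes: 8x8 frames, U = 3 input features, P = 16 state features.
key_a, key_b = jax.random.split(jax.random.PRNGKey(0))
A = 0.01 * jax.random.normal(key_a, (16, 16, 3, 3))
B = 0.01 * jax.random.normal(key_b, (16, 3, 3, 3))
X_next = conv_rnn_step(A, B, jnp.zeros((8, 8, 16)), jnp.ones((8, 8, 3)))  # (8, 8, 16)
```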
### Deep State Space Models
This section briefly introduces deep SSMs such as S4 [19] and S5 [20] designed for modeling long sequences. The ConvS5 approach we introduce in Section 3 extends these ideas to the spatiotemporal domain.
**Linear State Space Models.** Given a continuous input signal \(\mathbf{u}(t)\in\mathbb{R}^{U}\), a latent state \(\mathbf{x}(t)\in\mathbb{R}^{P}\) and an output signal \(\mathbf{y}(t)\in\mathbb{R}^{M}\), a continuous-time, linear SSM is defined using a differential equation:
\[\mathbf{x}^{\prime}(t)=\mathbf{A}\mathbf{x}(t)+\mathbf{B}\mathbf{u}(t), \qquad\mathbf{y}(t)=\mathbf{C}\mathbf{x}(t)+\mathbf{D}\mathbf{u}(t), \tag{3}\]
and is parameterized by a state matrix \(\mathbf{A}\in\mathbb{R}^{P\times P}\), an input matrix \(\mathbf{B}\in\mathbb{R}^{P\times U}\), an output matrix \(\mathbf{C}\in\mathbb{R}^{M\times P}\) and a feedthrough matrix \(\mathbf{D}\in\mathbb{R}^{M\times U}\). Given a sequence, \(\mathbf{u}_{1:L}\in\mathbb{R}^{L\times U}\), the SSM can be discretized to define a discrete-time SSM
\[\mathbf{x}_{k}=\overline{\mathbf{A}}\mathbf{x}_{k-1}+\overline{\mathbf{B}} \mathbf{u}_{k},\qquad\mathbf{y}_{k}=\mathbf{C}\mathbf{x}_{k}+\mathbf{D} \mathbf{u}_{k}, \tag{4}\]
where the discrete-time parameters are a function of the continuous-time parameters and a timescale parameter, \(\Delta\). We define \(\overline{\mathbf{A}}=\mathrm{DISCRETIZE}_{\mathrm{A}}(\mathbf{A},\Delta)\) and \(\overline{\mathbf{B}}=\mathrm{DISCRETIZE}_{\mathrm{B}}(\mathbf{A},\mathbf{B},\Delta)\) where \(\mathrm{DISCRETIZE}()\) is a discretization method such as Euler, bilinear or zero-order hold [55].
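As an illustration of the DISCRETIZE step, the following sketch implements the zero-order hold and bilinear rules for a general dense state matrix with NumPy/SciPy. This is an assumed minimal example; S4/S5 apply these rules to structured (e.g., diagonal) \(\mathbf{A}\) so that they are cheap to compute.

```python
import numpy as np
from scipy.linalg import expm

def discretize_zoh(A, B, delta):
    """Zero-order hold: A_bar = exp(delta A), B_bar = A^{-1}(A_bar - I) B."""
    A_bar = expm(delta * A)
    B_bar = np.linalg.solve(A, (A_bar - np.eye(A.shape[0])) @ B)
    return A_bar, B_bar

def discretize_bilinear(A, B, delta):
    """Bilinear (Tustin) rule: A_bar = (I - d/2 A)^{-1}(I + d/2 A), B_bar = (I - d/2 A)^{-1} d B."""
    I = np.eye(A.shape[0])
    M = np.linalg.inv(I - (delta / 2.0) * A)
    return M @ (I + (delta / 2.0) * A), M @ (delta * B)
```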
**S4 and S5.** Gu et al. [19] introduced the _structured state space sequence_ (S4) layer to efficiently model long sequences. An S4 layer uses many continuous-time linear SSMs, an explicit discretization step with learnable timescale parameters, and position-wise nonlinear activation functions applied to the SSM outputs. Smith et al. [20] showed that with several architecture changes, the approach could be simplified and made more flexible by just using one SSM as in (3) and utilizing parallel scans. SSM layers, such as S4 and S5, take advantage of the fact that linear dynamics can be parallelized with subquadratic complexity in the sequence length. They can also be run sequentially as stateful RNNs for fast autoregressive generation. While a single SSM layer such as S4 or S5 has only linear dynamics, the nonlinear activations applied to the SSM outputs allow representing nonlinear systems by stacking multiple SSM layers [56; 57; 58].
**SSM Parameterization and Initialization.** Parameterization and initialization are crucial aspects that allow deep SSMs to capture long-range dependencies more effectively than prior attempts at linear RNNs [59; 60; 61]. The general setup includes continuous-time SSM parameters, explicit discretization with learnable timescale parameters, and state matrix initialization using structured matrices inspired by the HiPPO framework [62]. Prior research emphasizes the significance of these choices in achieving high performance on challenging long-range tasks [19; 20; 56; 57]. Recent work [57] has studied these parameterizations/initializations in more detail and provides insight into this setup's favorable initial eigenvalue distributions and normalization effects.
### Parallel Scans
We briefly introduce parallel scans, as used by S5, since they are important for parallelizing the ConvS5 method we introduce in Section 3. See Blelloch [63] for a more detailed review. A scan operation, given a binary associative operator \(\bullet\) (i.e. \((a\bullet b)\bullet c=a\bullet(b\bullet c)\)) and a sequence of \(L\) elements \([a_{1},a_{2},...,a_{L}]\), yields the sequence: \([a_{1},\ (a_{1}\bullet a_{2}),\...,\ (a_{1}\bullet a_{2}\bullet\dots \bullet a_{L})]\).
Parallel scans use the fact that associative operators can be computed in any order. A parallel scan can be defined for the linear recurrence of the state update in (4) by forming the initial scan tuples \(c_{k}=(c_{k,a},c_{k,b}):=(\overline{\mathbf{A}},\ \ \overline{\mathbf{B}} \mathbf{u}_{k})\) and utilizing a binary associative operator that takes two tuples \(q_{i},q_{j}\) (either the initial tuples \(c_{i},c_{j}\) or intermediate tuples) and produces a new tuple of the same type, \(q_{i}\bullet q_{j}:=(q_{j,a}\odot q_{i,a},\ q_{j,a}\otimes q_{i,b}+q_{j,b})\), where \(\odot\) is matrix-matrix multiplication and \(\otimes\) is matrix-vector multiplication. Given sufficient processors, the parallel scan computes the linear recurrence of (4) in \(O(\log L)\) sequential steps (i.e., depth or span) [63].
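For concreteness, the assumed sketch below uses this binary operator with `jax.lax.associative_scan` to compute the state recurrence of (4) for a dense \(\overline{\mathbf{A}}\) and zero initial state. S5 itself uses a diagonalized \(\overline{\mathbf{A}}\), which turns the matrix-matrix products into elementwise operations; the function and variable names here are illustrative.

```python
import jax
import jax.numpy as jnp

def scan_linear_ssm(A_bar, B_bar, u):
    """Compute x_k = A_bar x_{k-1} + B_bar u_k for k = 1..L (x_0 = 0) with a parallel scan.
    A_bar: (P, P), B_bar: (P, U), u: (L, U). Returns the states x_{1:L} with shape (L, P)."""
    L = u.shape[0]
    # Initial scan elements c_k = (A_bar, B_bar u_k).
    elems = (jnp.broadcast_to(A_bar, (L,) + A_bar.shape), u @ B_bar.T)

    def binop(q_i, q_j):
        # (q_{j,a} . q_{i,a},  q_{j,a} q_{i,b} + q_{j,b}); the leading axis is the scan axis.
        a_i, b_i = q_i
        a_j, b_j = q_j
        return a_j @ a_i, jnp.einsum("...pq,...q->...p", a_j, b_i) + b_j

    _, xs = jax.lax.associative_scan(binop, elems)
    return xs
```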
## 3 Method
This section introduces convolutional state space models (ConvSSMs). We show how ConvSSMs can be parallelized with parallel scans. We then connect the dynamics of ConvSSMs to SSMs to motivate parameterization. Finally, we use these insights to introduce an efficient ConvSSM variant, ConvS5.
### Convolutional State Space Models
Consider a continuous tensor-valued input \(\boldsymbol{\mathcal{U}}(t)\in\mathbb{R}^{H^{\prime}\times W^{\prime}\times U}\) with height \(H^{\prime}\), width \(W^{\prime}\), and number of input features \(U\). We will define a continuous-time, linear convolutional state space model (_ConvSSM_) with state \(\boldsymbol{\mathcal{X}}(t)\in\mathbb{R}^{H\times W\times P}\), derivative \(\boldsymbol{\mathcal{X}}^{\prime}(t)\in\mathbb{R}^{H\times W\times P}\) and output \(\boldsymbol{\mathcal{Y}}(t)\in\mathbb{R}^{H\times W\times U}\), using a differential equation:
\[\boldsymbol{\mathcal{X}}^{\prime}(t) =\boldsymbol{\mathcal{A}}*\boldsymbol{\mathcal{X}}(t)+\boldsymbol {\mathcal{B}}*\boldsymbol{\mathcal{U}}(t) \tag{5}\] \[\boldsymbol{\mathcal{Y}}(t) =\boldsymbol{\mathcal{C}}*\boldsymbol{\mathcal{X}}(t)+\boldsymbol {\mathcal{D}}*\boldsymbol{\mathcal{U}}(t) \tag{6}\]
where \(*\) denotes the convolution operator, \(\boldsymbol{\mathcal{A}}\in\mathbb{R}^{P\times P\times k_{A}\times k_{A}}\) is the state kernel, \(\boldsymbol{\mathcal{B}}\in\mathbb{R}^{P\times U\times k_{B}\times k_{B}}\) is the input kernel, \(\boldsymbol{\mathcal{C}}\in\mathbb{R}^{U\times P\times k_{C}\times k_{C}}\) is the output kernel, and \(\boldsymbol{\mathcal{D}}\in\mathbb{R}^{U\times U\times k_{D}\times k_{D}}\) is the feedthrough kernel. For simplicity, we pad the convolution to ensure the same spatial resolution, \(H\times W\), is maintained in the states and outputs. Similarly, given a sequence of \(L\) inputs, \(\boldsymbol{\mathcal{U}}_{1:L}\in\mathbb{R}^{L\times H^{\prime}\times W^{\prime }\times U}\), we define a discrete-time convolutional state space model as
\[\boldsymbol{\mathcal{X}}_{k} =\overline{\boldsymbol{\mathcal{A}}}*\boldsymbol{\mathcal{X}}_{k-1 }+\overline{\boldsymbol{\mathcal{B}}}*\boldsymbol{\mathcal{U}}_{k} \tag{7}\] \[\boldsymbol{\mathcal{Y}}_{k} =\boldsymbol{\mathcal{C}}*\boldsymbol{\mathcal{X}}_{k}+\boldsymbol {\mathcal{D}}*\boldsymbol{\mathcal{U}}_{k} \tag{8}\]
where \(\overline{\boldsymbol{\mathcal{A}}}\in\mathbb{R}^{P\times P\times k_{A}\times k _{A}}\) and \(\overline{\boldsymbol{\mathcal{B}}}\in\mathbb{R}^{P\times U\times k_{B}\times k _{B}}\) denote that these kernels are in discrete-time.
### Parallelizing Convolutional Recurrences
ConvS5 leverages parallel scans to efficiently parallelize the recurrence in (7). As discussed in Section 2.3, this requires a binary associative operator. Given that convolutions are associative, we show:
**Proposition 1**.: _Consider a convolutional recurrence as in (7) and define initial parallel scan elements \(c_{k}=(c_{k,a},c_{k,b}):=(\overline{\mathbf{\mathcal{A}}},\overline{\mathbf{\mathcal{B} }}*\mathbf{\mathcal{U}}_{k})\). The binary operator, defined below, is associative._
\[q_{i}\bullet q_{j}:=(q_{j,a}\circ q_{i,a},\ q_{j,a}*q_{i,b}\ +\ q_{j,b}), \tag{9}\]
_where \(\circ\) denotes convolution of two kernels, \(*\) denotes convolution and \(+\) is elementwise addition._
Proof.: See Appendix A.1.
Therefore, in theory, we can use this binary operator with a parallel scan to compute the recurrence in (7). However, the binary operator, \(\bullet\), requires convolving two \(k_{A}\times k_{A}\) resolution state kernels together. To maintain equivalence with the sequential scan, the resulting kernel must have resolution \((2k_{A}-1)\times(2k_{A}-1)\). This implies that the state kernel will grow during the parallel scan computations for general kernels with a resolution greater than \(1\times 1\). This allows the receptive field to grow in the time direction, a useful feature for capturing spatiotemporal context. However, this kernel growth is computationally infeasible for long sequences.
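The kernel growth can be checked directly: composing two convolutions is equivalent, away from boundary effects introduced by padding, to a single convolution with the full 2-D convolution of the two kernels, whose support grows from \(k_{A}\times k_{A}\) to \((2k_{A}-1)\times(2k_{A}-1)\). A small assumed NumPy/SciPy check:

```python
import numpy as np
from scipy.signal import convolve2d

kA = 3
A1 = np.random.randn(kA, kA)
A2 = np.random.randn(kA, kA)

# Kernel of the composed map "convolve with A1, then with A2".
A12 = convolve2d(A1, A2)                 # full 2-D convolution of the two kernels
print(A12.shape)                         # (5, 5) == (2*kA - 1, 2*kA - 1)

# Sanity check on an impulse far from the boundary, so 'same' padding does not interfere.
x = np.zeros((15, 15)); x[7, 7] = 1.0
lhs = convolve2d(convolve2d(x, A1, mode="same"), A2, mode="same")
rhs = convolve2d(x, A12, mode="same")
print(np.allclose(lhs, rhs))             # True
```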
We address this challenge by taking further inspiration from deep SSMs. These methods opt for simple but computationally advantageous operations in the time direction (linear dynamics) and utilize more complex operations (nonlinear activations) in the depth direction of the model. These nonlinear activations allow a stack of SSM layers with linear dynamics to represent nonlinear systems. Analogously, we choose to use \(1\times 1\) state kernels and perform pointwise state convolutions for the convolutional recurrence of (7). When we stack multiple layers of these ConvSSMs, the receptive field grows in the depth direction of the network and allows the stack of layers to capture the spatiotemporal context [64]. Computationally, we now have a construction that can be parallelized with subquadratic complexity with respect to the sequence length.
**Proposition 2**.: _Given the effective inputs \(\overline{\mathbf{\mathcal{B}}}*\mathbf{\mathcal{U}}_{1:L}\in\mathbb{R}^{L\times H \times W\times P}\) and a pointwise state kernel \(\mathbf{\mathcal{A}}\in\mathbb{R}^{P\times P\times 1\times 1}\), the computational cost of computing the convolutional recurrence in Equation 7 with a parallel scan is \(\mathcal{O}\big{(}L(P^{3}+P^{2}HW)\big{)}\)._
Proof.: See Appendix A.2.
Further, the ConvS5 implementation introduced below admits a diagonalized parameterization that reduces this cost to \(\mathcal{O}(LPHW)\). See Section 3.4 and Appendix B for more details.
Figure 2: The dynamics of a ConvSSM with pointwise state kernel (top) can be equivalently viewed as the dynamics of an SSM (bottom). See Proposition 3. Each ConvSSM state pixel evolves according to an SSM state update with shared state matrix, \(\mathbf{A}_{\mathrm{SSM}}\), and input matrix, \(\mathbf{B}_{\mathrm{SSM}}\), that can be formed by reshaping the ConvSSM's state kernel and input kernel. This allows leveraging parameterization insights from deep SSMs [19; 41; 42; 20; 57] to equip ConvS5 to model long-range dependencies.
### Connection to State Space Models
Since the convolutions in (5-6) and (7-8) are linear operations, they can be described equivalently as matrix-vector multiplications by flattening the input and state tensors into vectors and using large, circulant matrices consisting of the kernel elements [65]. Thus, any ConvSSM can be described as a large SSM with a circulant dynamics matrix. However, we show here that the use of pointwise state kernels, as described in the previous section, provides an alternative SSM equivalence, which lends a special structure that can leverage the deep SSM parameterization/initialization ideas discussed in Section 2.2 for modeling long-range dependencies. We show that each pixel of the state, \(\mathbf{\mathcal{X}}(t)_{i,j}\in\mathbb{R}^{P}\), can be equivalently described as evolving according to a differential equation with a shared state matrix, \(\mathbf{A}_{\mathrm{SSM}}\), and input matrix, \(\mathbf{B}_{\mathrm{SSM}}\). See Figure 2.
**Proposition 3**.: _Consider a ConvSSM state update as in (5) with pointwise state kernel \(\mathbf{\mathcal{A}}\in\mathbb{R}^{P\times P\times 1\times 1}\), input kernel \(\mathbf{\mathcal{B}}\in\mathbb{R}^{P\times U\times k_{B}\times k_{B}}\), and input \(\mathbf{\mathcal{U}}(t)\in\mathbb{R}^{H^{\prime}\times W^{\prime}\times U}\). Let \(\mathbf{\mathcal{U}}_{\mathrm{im2col}}(t)\in\mathbb{R}^{H\times W\times Uk_{B}^{2}}\) be the reshaped result of applying the Image to Column (im2col) [66, 67] operation on the input \(\mathbf{\mathcal{U}}(t)\). Then the dynamics of each state pixel of (5), \(\mathbf{\mathcal{X}}(t)_{i,j}\in\mathbb{R}^{P}\), evolve according to the following differential equation_
\[\mathbf{\mathcal{X}}^{\prime}(t)_{i,j}=\mathbf{A}_{\mathrm{SSM}}\mathbf{\mathcal{X}}(t)_{i,j}+\mathbf{B}_{\mathrm{SSM}}\mathbf{\mathcal{U}}_{\mathrm{im2col}}(t)_{i,j} \tag{10}\]
_where the state matrix, \(\mathbf{A}_{\mathrm{SSM}}\in\mathbb{R}^{P\times P}\), and input matrix, \(\mathbf{B}_{\mathrm{SSM}}\in\mathbb{R}^{P\times(Uk_{B}^{2})}\), can be formed by reshaping the state kernel, \(\mathbf{\mathcal{A}}\), and input kernel, \(\mathbf{\mathcal{B}}\), respectively._
Proof.: See Appendix A.3.
Thus, to endow these SSMs with the same favorable long-range dynamical properties as S4/S5 methods, we initialize \(\mathbf{A}_{\mathrm{SSM}}\) with a HiPPO [62] inspired matrix and discretize with a learnable timescale parameter to obtain \(\overline{\mathbf{A}}_{\mathrm{SSM}}\) and \(\overline{\mathbf{B}}_{\mathrm{SSM}}\). Due to the equivalence of Proposition 3, we then reshape these matrices into the discrete ConvSSM state and input kernels of (7) to give the convolutional recurrence the same advantageous dynamical properties. We note that if the input, output and dynamics kernel widths are set to \(1\times 1\), then the ConvSSM formulation is equivalent to "convolving" an SSM across each individual sequence of pixels in the spatiotemporal sequence (this also has connections to the temporal component of S4ND [45] when applied to videos). However, inspired by ConvRNNs, we observed improved performance when leveraging the more general convolutional structure the ConvSSM allows and increasing the input/output kernel sizes to allow local spatial features to be mixed in the dynamical system. See ablations discussed in Section 5.3.
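The state-term part of this equivalence is easy to verify numerically: a \(1\times 1\) convolution with the state kernel is exactly a shared matrix multiply applied independently to every state pixel. A small assumed JAX check (all names are illustrative):

```python
import jax
import jax.numpy as jnp

P, H, W = 4, 6, 6
key_a, key_x = jax.random.split(jax.random.PRNGKey(0))
A_kernel = jax.random.normal(key_a, (P, P, 1, 1))   # pointwise ConvSSM state kernel
A_ssm = A_kernel[:, :, 0, 0]                        # reshaped (P, P) SSM state matrix
X = jax.random.normal(key_x, (H, W, P))             # state tensor

# 1x1 spatial convolution of the state with the state kernel, as in Eqs. (5)/(7).
conv_out = jax.lax.conv_general_dilated(
    X[None], A_kernel, window_strides=(1, 1), padding="SAME",
    dimension_numbers=("NHWC", "OIHW", "NHWC"))[0]

# Shared per-pixel SSM update from Proposition 3: every pixel uses the same A_ssm.
pixelwise = jnp.einsum("pq,hwq->hwp", A_ssm, X)

print(jnp.allclose(conv_out, pixelwise, atol=1e-5))  # True
```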
### Efficient ConvSSM for Long-Range Dependencies: ConvS5
Here, we introduce _ConvS5_, which combines ideas of parallelization of convolutional recurrences (Section 3.2) and the SSM connection (Section 3.3). ConvS5 is a ConvSSM that leverages parallel scans and deep SSM parameterization/initialization schemes. Given Proposition 3, we implicitly parameterize a pointwise state kernel, \(\mathbf{\mathcal{A}}\in\mathbb{R}^{P\times P\times 1\times 1}\), and input kernel, \(\mathbf{\mathcal{B}}\in\mathbb{R}^{P\times U\times k_{B}\times k_{B}}\), in (5) using SSM parameters as used by S5 [20], \(\mathbf{A}_{\mathrm{S5}}\in\mathbb{R}^{P\times P}\) and \(\mathbf{B}_{\mathrm{S5}}\in\mathbb{R}^{P\times(Uk_{B}^{2})}\). We discretize these S5 SSM parameters as discussed in Section 2.2 to give
\[\overline{\mathbf{A}}_{\mathrm{S5}}=\mathrm{DISCRETIZE}_{\mathrm{A}}(\mathbf{A}_{\mathrm{S5}},\Delta),\qquad\overline{\mathbf{B}}_{\mathrm{S5}}=\mathrm{DISCRETIZE}_{\mathrm{B}}(\mathbf{A}_{\mathrm{S5}},\mathbf{B}_{\mathrm{S5}},\Delta), \tag{11}\]
and then reshape to give the ConvS5 state update kernels:
\[\overline{\mathbf{A}}_{\mathrm{S5}}\in\mathbb{R}^{P\times P}\xrightarrow{\mathrm{reshape}}\overline{\mathbf{\mathcal{A}}}_{\mathrm{ConvS5}}\in\mathbb{R}^{P\times P\times 1\times 1} \tag{12}\]
\[\overline{\mathbf{B}}_{\mathrm{S5}}\in\mathbb{R}^{P\times(Uk_{B}^{2})}\xrightarrow{\mathrm{reshape}}\overline{\mathbf{\mathcal{B}}}_{\mathrm{ConvS5}}\in\mathbb{R}^{P\times U\times k_{B}\times k_{B}}. \tag{13}\]
We then run the discretized ConvSSM system of (7- 8), using parallel scans to compute the recurrence. In practice, this setup allows us to parameterize ConvS5 using a diagonalized parameterization [41, 42, 20] which reduces the cost of applying the parallel scan in Proposition 2 to \(\mathcal{O}(LPHW)\). See Appendix B for a more detailed discussion of parameterization, initialization and discretization.
We define a ConvS5 layer as the combination of ConvS5 with a nonlinear function applied to the ConvS5 outputs. For example, for the experiments in this paper, we use ResNet[68] blocks for the nonlinear activations between layers. However, this is modular, and other choices such as ConvNext [69] or S4ND [45] blocks could easily be used as well. Finally, many ConvS5 layers can be stacked to form a deep spatiotemporal sequence model.
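Putting the pieces together, the following is an assumed high-level sketch of one ConvS5 layer forward pass, written with a dense \(\mathbf{A}_{\mathrm{S5}}\), zero-order hold discretization and a plain GELU output nonlinearity for readability. The released implementation instead uses a diagonalized parameterization, HiPPO-based initialization, learnable timescales and ResNet blocks between layers, so every name and simplification here is illustrative rather than the authors' code.

```python
import jax
import jax.numpy as jnp

def conv2d_seq(x_seq, kernel):
    """'Same'-padded spatial convolution applied frame-wise.
    x_seq: (L, H, W, Cin), kernel: (Cout, Cin, kh, kw) -> (L, H, W, Cout)."""
    return jax.lax.conv_general_dilated(
        x_seq, kernel, window_strides=(1, 1), padding="SAME",
        dimension_numbers=("NHWC", "OIHW", "NHWC"))

def conv_s5_layer(params, u_seq, delta):
    """One ConvS5 layer applied to a spatiotemporal sequence u_seq: (L, H, W, U)."""
    A_s5, B_kernel, C_kernel, D_kernel = params  # (P,P), (P,U,kB,kB), (U,P,kC,kC), (U,U,kD,kD)
    P = A_s5.shape[0]

    # Discretize with zero-order hold, then "reshape" into the ConvS5 kernels (Eqs. 11-13).
    A_bar = jax.scipy.linalg.expm(delta * A_s5)          # discretized 1x1 state kernel, (P, P)
    B_bar_kernel = jnp.einsum("pq,quhw->puhw",
                              jnp.linalg.solve(A_s5, A_bar - jnp.eye(P)), B_kernel)

    # Effective inputs B_bar * u_k for every frame, then a parallel scan over time:
    # with a 1x1 state kernel the recurrence is a shared per-pixel matrix recurrence.
    Bu = conv2d_seq(u_seq, B_bar_kernel)                 # (L, H, W, P)
    elems = (jnp.broadcast_to(A_bar, (u_seq.shape[0], P, P)), Bu)

    def binop(q_i, q_j):
        a_i, b_i = q_i
        a_j, b_j = q_j
        return a_j @ a_i, jnp.einsum("lpq,lhwq->lhwp", a_j, b_i) + b_j

    _, X = jax.lax.associative_scan(binop, elems)        # states, (L, H, W, P)

    # Output convolutions of Eq. (8), followed by a position-wise nonlinearity
    # (the paper uses ResNet blocks here; a plain GELU stands in for them).
    Y = conv2d_seq(X, C_kernel) + conv2d_seq(u_seq, D_kernel)
    return jax.nn.gelu(Y)
```

Stacking several such layers, with nonlinear blocks between them, yields the deep spatiotemporal sequence model described above.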
### ConvS5 Properties
Refer to Table 1 for a comparison of computational complexity for Transformers, ConvRNNs and ConvS5. The parallel scans allow ConvS5 to be parallelized across the sequence like a Transformer but with cost only scaling linearly with the sequence length. In addition, ConvS5 is a stateful model like ConvRNNs, allowing fast autoregressive generation, extrapolation to different sequence lengths, and an unbounded context window. The connection with SSMs such as S5 derived in Section 3.3 allows for precisely specifying the initial dynamics to enable the modeling of long-range dependencies and to realize benefits from the unbounded context. Finally, the parallel scan of ConvS5 can allow leveraging the continuous-time parameterization to process irregularly sampled sequences in parallel. S5 achieves this by providing suitable spacing to the discretization operation [20], a procedure that ConvS5 can also use.
We have proposed a general ConvSSM structure that can be easily adapted to future innovations in the deep SSM literature. ConvS5's parallel scan could be used to endow ConvS5 with time-varying dynamics such as the input-dependent dynamics shown to be beneficial in the Liquid-S4 [43] work. Multiple works [51, 52, 50, 53] proposed adding multiplicative gating to allow SSM-based methods to overcome some weaknesses on language modeling. Similar ideas could be useful to ConvSSMs for reasoning over long spatiotemporal sequences.
## 4 Related Work
This work is most closely related to the ConvRNNs and deep SSMs already discussed. We note here that ConvRNNs have been used in numerous domains, including biomedical, robotics, traffic modeling and weather forecasting [70, 71, 72, 2, 3, 73, 74, 75, 76, 77, 4]. In addition, many variants have been proposed [78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89].
## 5 Experiments
In Section 5.1, we evaluate ConvS5 on a long-horizon Moving-MNIST generation task. In Section 5.2, we evaluate on the long-range 3D environment benchmarks proposed in Yan et al. [13]. Finally, in Section 5.3, we discuss ablations that highlight the importance of ConvS5's parameterization.
### Long Horizon Moving-MNIST Generation
There are few existing benchmarks for training on and generating long spatiotemporal sequences. We develop a long-horizon Moving-MNIST [54] prediction task that requires training on 300-600 frames and accurately generating up to 1200 frames. This allows for a direct comparison of ConvS5, ConvRNNs and Transformers as well as an efficient attention alternative (Performer [33]) and CW-VAE [16], a temporally hierarchical RNN based method. We first train all models on 300 frames and then evaluate by conditioning on 100 frames before generating 1200. We repeat the evaluation after training on 600 frames. See Appendix D for more experiment details. We present the results after generating 800 and 1200 frames in Table 2. See Appendix C for randomly selected sample trajectories. ConvS5 achieves the best overall performance. When only trained on 300 frames, ConvLSTM and ConvS5 perform similarly when generating 1200 frames, and both outperform the Transformer. All methods benefit from training on the longer 600-frame sequence. However, the longer training length allows ConvS5 to significantly outperform the other methods across the metrics when generating 1200 frames.
In Table 2-bottom we revisit the theoretical properties of Table 1 and compare the empirical computational costs of the Transformer, ConvLSTM and ConvS5 on the 600 frame Moving-MNIST task. Although this specific ConvS5 configuration requires a few more FLOPs due to the convolution computations, ConvS5 is parallelizable during training (unlike ConvLSTM) and has fast autoregressive generation (unlike Transformer) -- training 3x faster than ConvLSTM and generating samples 400x faster than Transformers.
### Long-range 3D Environment Benchmarks
Yan et al. [13] introduced a challenging video prediction benchmark specifically designed to contain long-range dependencies. This is one of the first comprehensive benchmarks for long-range spatiotemporal modeling and consists of 300 frame videos of agents randomly traversing 3D environments in DMLab [99], Minecraft [100], and Habitat [101]. See Appendix C for more experimental details and Appendix E for more details on each dataset.

**Trained on 300 frames**

| Method | FVD ↓ (100→800) | PSNR ↑ | SSIM ↑ | LPIPS ↓ | FVD ↓ (100→1200) | PSNR ↑ | SSIM ↑ | LPIPS ↓ |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Transformer [24] | 159 | 12.6 | 0.609 | 0.287 | 265 | 12.4 | 0.591 | 0.321 |
| Performer [33] | 234 | 13.4 | 0.652 | 0.379 | 275 | 13.2 | 0.592 | 0.393 |
| CW-VAE [16] | _104_ | 12.4 | 0.592 | 0.277 | **117** | 12.3 | 0.585 | 0.286 |
| ConvLSTM [17] | _128_ | _15.0_ | _0.737_ | _0.169_ | _187_ | _14.1_ | **0.706** | **0.203** |
| ConvS5 | **72** | **16.0** | **0.761** | **0.156** | _187_ | **14.5** | 0.678 | 0.230 |

**Trained on 600 frames**

| Method | FVD ↓ (100→800) | PSNR ↑ | SSIM ↑ | LPIPS ↓ | FVD ↓ (100→1200) | PSNR ↑ | SSIM ↑ | LPIPS ↓ |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Transformer | **42** | 13.7 | 0.672 | 0.207 | _91_ | 13.1 | 0.631 | 0.252 |
| Performer | 93 | 12.4 | 0.616 | 0.274 | 243 | 12.2 | 0.608 | 0.312 |
| CW-VAE | 94 | 12.5 | 0.598 | 0.269 | 107 | 12.3 | 0.590 | 0.280 |
| ConvLSTM | 91 | _15.5_ | _0.757_ | _0.149_ | 137 | _14.6_ | _0.727_ | _0.180_ |
| ConvS5 | _47_ | **16.4** | **0.788** | **0.134** | **71** | **15.6** | **0.763** | **0.162** |

| Method | GFLOPS ↓ | Train Step Time (s) ↓ | Train Cost (V100 days) ↓ | Sample Throughput (frames/s) ↑ |
| --- | --- | --- | --- | --- |
| Transformer | **70** | **0.77 (1.0×)** | **50** | 0.21 (1.0×) |
| ConvLSTM | **65** | 3.0 (3.9×) | 150 | **117 (557×)** |
| ConvS5 | 97 | 0.93 (1.2×) | **50** | 90 (429×) |

Table 2: Quantitative evaluation on the Moving-MNIST dataset [54]. **Top**: To evaluate, we condition on 100 frames, and then show results after generating 800 and 1200 frames. An expanded Table 6 is included in Appendix C with more results, error bars and ablations. Bold scores indicate the best performance and italic scores indicate the second best performance. **Bottom**: Computational cost comparison for the 600 frame task. Compare to Table 1.
We train models using the same \(16\times 16\) vector-quantized (VQ) codes from the pretrained VQ-GANs [30] used for TECO and the other baselines in Yan et al. [13]. In addition to ConvS5 and the existing baselines, we also train a Transformer (without the TECO framework), Performer and S5. The S5 baseline serves as an ablation on ConvS5's convolutional tensor-structured approach. Finally, since TECO is essentially a training framework (specialized for Transformers), we also use ConvS5 and S5 layers as a drop-in replacement for the Transformer in TECO. Therefore, we refer to the original version of TECO as _TECO-Transformer_, the ConvS5 version as _TECO-ConvS5_ and the S5 version as _TECO-S5_. See Appendix D for detailed information on training procedures, architectures, and hyperparameters.
**DMLab.** The results for DMLab are presented in Table 3. Of the methods trained without the TECO framework in the top section of Table 3, ConvS5 outperforms all baselines, including RNN [90; 16], efficient attention [39; 33] and diffusion [15] approaches. ConvS5 also has much faster autoregressive generation than the Transformer. ConvS5 significantly outperforms S5 on all metrics, pointing to the value of the convolutional structure of ConvS5.
For the models trained with the TECO framework, we see that TECO-ConvS5 achieves essentially the same FVD and LPIPS as TECO-Transformer, while significantly improving PSNR and SSIM. Note the sample speed comparisons are less dramatic in this setting since the MaskGit [98] sampling procedure is relatively slow. Still, the sample throughput of TECO-ConvS5 and TECO-S5 remains constant, while TECO-Transformer's throughput decreases with sequence length.
**Minecraft and Habitat.** Table 4 presents the results on the Minecraft and Habitat benchmarks. On Minecraft, TECO-ConvS5 achieves the best FVD and performs comparably to TECO-Transformer on the other metrics, outperforming all other baselines. On Habitat, TECO-ConvS5 is the only method to achieve a comparable FVD to TECO-Transformer, while outperforming it on PSNR and SSIM.
### ConvS5 ablations
In Table 5 we present ablations on the convolutional structure of ConvS5. We compare different input and output kernel sizes for the ConvSSM and also compare the default ResNet activations to a channel mixing GLU [102] activation. Where possible, when reducing the sizes of the ConvSSM kernels, we redistribute parameters to the ResNet kernels or the GLU sizes to compare similar parameter counts. The results suggest more convolutional structure improves performance.
| Method | FVD ↓ | PSNR ↑ | SSIM ↑ | LPIPS ↓ | Sample Throughput (frames/s) ↑ |
| --- | --- | --- | --- | --- | --- |
| FitVid* [90] | 176 | 12.0 | 0.356 | 0.491 | - |
| CW-VAE* [16] | 125 | 12.6 | 0.372 | 0.465 | - |
| Perceiver AR* [39] | 96 | 11.2 | 0.304 | 0.487 | - |
| Latent FDM* [15] | 181 | 17.8 | 0.588 | 0.222 | - |
| Transformer [24] | 97 | 19.9 | 0.619 | 0.123 | 9 (1.0×) |
| Performer [33] | 80 | 17.3 | 0.513 | 0.205 | 7 (0.8×) |
| S5 [20] | 221 | 19.3 | 0.641 | 0.162 | _28_ (3.1×) |
| ConvS5 | **66** | **23.2** | **0.769** | **0.079** | **56 (6.2×)** |
| TECO-Transformer* [13] | **28** | 22.4 | 0.709 | 0.155 | 16 (1.8×) |
| TECO-Transformer (our run) | **28** | 21.6 | 0.696 | **0.082** | 16 (1.8×) |
| TECO-S5 | 35 | 20.1 | 0.687 | 0.143 | **21 (2.3×)** |
| TECO-ConvS5 | _31_ | **23.8** | **0.803** | 0.085 | _18_ (2.0×) |

Table 3: Quantitative evaluation on the DMLab long-range benchmark [13]. Results from Yan et al. [13] are indicated with *. Methods trained using the TECO [13] training framework are at the bottom of the table. TECO methods are slower to sample due to the MaskGit [98] procedure. The expanded Table 8 in Appendix C includes error bars and ablations.
We also perform ablations to evaluate the importance of ConvS5's deep SSM-inspired parameterization/initialization. We evaluate the performance of a ConvSSM with randomly initialized state kernel on both Moving-Mnist and DMLab. See Table 6 and Table 8 in Appendix C. We observe a degradation in performance in all settings for this ablation. This reflects prior results for deep SSMs [56; 19; 20; 57] and highlights the importance of the connection developed in Section 3.3.
## 6 Discussion
This work introduces ConvS5, a new spatiotemporal modeling layer that combines the fast, stateful autoregressive generation of ConvRNNs with the ability to be parallelized across the sequence like Transformers. Its computational cost scales linearly with the sequence length, providing better scaling for longer spatiotemporal sequences. ConvS5 also leverages insights from deep SSMs to model long-range dependencies effectively.
We note that despite ConvS5's sub-quadratic scaling, it did not show significant training speedups over Transformers for sequence lengths of 300-600 frames. (See detailed run-time comparisons in Appendix C.) We expect ConvS5 to excel in training efficiency when applied to much longer spatiotemporal sequences, where the quadratic scaling of Transformers dominates. We hope this work inspires the creation of longer spatiotemporal datasets and benchmarks. At the current sequence lengths, future optimizations of the parallel scan implementations in common deep learning frameworks will be helpful. In addition, the ResNet blocks used as the activations between ConvS5 layers could be replaced with efficient activations such as sparse convolutions [103] or S4ND [45].
An interesting future direction is to further utilize ConvS5's continuous-time parameterization. Deep SSMs show strong performance when trained at one resolution and evaluated on another [19; 41; 42; 20; 43; 45]. In addition, S5 can leverage its parallel scan to effectively model irregularly sampled sequences [20]. ConvS5 can be used for such applications in the spatiotemporal domain [104; 93; 95]. Finally, ConvS5 is modular and flexible. We have demonstrated that ConvS5 works well as a drop-in replacement in the TECO [13] training framework specifically developed for Transformers. Due to its favorable properties, we expect ConvS5 to also serve as a building block of new approaches for modeling much longer spatiotemporal sequences and multimodal applications.
\begin{table}
\begin{tabular}{l c c c c c c c c} \hline \hline & \multicolumn{4}{c}{Minecraft} & \multicolumn{4}{c}{Habitat} \\ Method & FVD \(\downarrow\) & PSNR \(\uparrow\) & SSIM \(\uparrow\) & LPIPS \(\downarrow\) & FVD \(\downarrow\) & PSNR \(\uparrow\) & SSIM \(\uparrow\) & LPIPS \(\downarrow\) \\ \hline FitVid* & \(956\) & \(13.0\) & \(0.343\) & \(0.519\) & - & - & - & - \\ CW-VAE* & \(397\) & \(13.4\) & \(0.338\) & \(0.441\) & - & - & - & - \\ Perceiver AR* & \(76\) & \(13.2\) & \(0.323\) & \(0.441\) & \(164\) & \(12.8\) & \(\mathbf{0.405}\) & \(0.676\) \\ Latent FDM* & \(167\) & \(13.4\) & \(0.349\) & \(0.429\) & \(433\) & \(12.5\) & \(0.311\) & \(\mathbf{0.582}\) \\ TECO-Transformer* & \(116\) & \(\mathbf{15.4}\) & \(\mathbf{0.381}\) & \(\mathbf{0.340}\) & \(\mathbf{76}\) & \(12.8\) & \(0.363\) & \(0.604\) \\ TECO-ConvS5 & \(\mathbf{71}\) & \(14.8\) & \(0.374\) & \(0.355\) & \(95\) & \(\mathbf{12.9}\) & \(0.390\) & \(0.632\) \\ \hline \hline \end{tabular}
\end{table}
Table 4: Quantitative evaluation on the Minecraft and Habitat long-range benchmarks [13]. Results from Yan et al. [13] are indicated with \(*\). See expanded Table 10 with error bars in Appendix C.
\begin{table}
\begin{tabular}{l c c c c c c c} \hline \hline & \multicolumn{4}{c}{DMLab} \\ conv. & \(\boldsymbol{\mathcal{B}}\) kernel & \(\boldsymbol{\mathcal{C}}\) kernel & nonlinearity & FVD \(\downarrow\) & PSNR \(\uparrow\) & SSIM \(\uparrow\) & LPIPS \(\downarrow\) \\ \hline x & - & - & GLU & \(221\) & \(19.3\) & \(0.641\) & \(0.162\) \\ \hline o & \(1\times 1\) & \(1\times 1\) & GLU & \(187\) & \(21.0\) & \(0.689\) & \(0.112\) \\ o & \(1\times 1\) & \(5\times 5\) & GLU & \(89\) & \(21.5\) & \(0.713\) & \(0.106\) \\ o & \(3\times 3\) & \(3\times 3\) & GLU & \(96\) & \(22.7\) & \(0.762\) & \(0.088\) \\ \hline o & \(1\times 1\) & \(1\times 1\) & ResNet & \(81\) & \(23.0\) & \(0.767\) & \(0.083\) \\ o & \(1\times 1\) & \(3\times 3\) & ResNet & \(68\) & \(22.8\) & \(0.756\) & \(0.085\) \\ o & \(3\times 3\) & \(3\times 3\) & ResNet & \(\mathbf{67}\) & \(\mathbf{23.2}\) & \(\mathbf{0.769}\) & \(\mathbf{0.079}\) \\ \hline \hline \end{tabular}
\end{table}
Table 5: Ablations of ConvS5 convolutional structure for DMLab long-range benchmark dataset [13]. More convolutional structure improves overall performance. See expanded Table 9 in Appendix. |
2308.03486 | Improving Mass Detection in Mammography Images: A Study of Weakly
Supervised Learning and Class Activation Map Methods | In recent years, weakly supervised models have aided in mass detection using
mammography images, decreasing the need for pixel-level annotations. However,
most existing models in the literature rely on Class Activation Maps (CAM) as
the activation method, overlooking the potential benefits of exploring other
activation techniques. This work presents a study that explores and compares
different activation maps in conjunction with state-of-the-art methods for
weakly supervised training in mammography images. Specifically, we investigate
CAM, GradCAM, GradCAM++, XGradCAM, and LayerCAM methods within the framework of
the GMIC model for mass detection in mammography images. The evaluation is
conducted on the VinDr-Mammo dataset, utilizing the metrics Accuracy, True
Positive Rate (TPR), False Negative Rate (FNR), and False Positive Per Image
(FPPI). Results show that using different strategies of activation maps during
training and test stages leads to an improvement of the model. With this
strategy, we improve the results of the GMIC method, decreasing the FPPI value
and increasing TPR. | Vicente Sampaio, Filipe R. Cordeiro | 2023-08-07T11:28:36Z | http://arxiv.org/abs/2308.03486v1 | Improving Mass Detection in Mammography Images: A Study of Weakly Supervised Learning and Class Activation Map Methods
###### Abstract
In recent years, weakly supervised models have aided in mass detection using mammography images, decreasing the need for pixel-level annotations. However, most existing models in the literature rely on Class Activation Maps (CAM) as the activation method, overlooking the potential benefits of exploring other activation techniques. This work presents a study that explores and compares different activation maps in conjunction with state-of-the-art methods for weakly supervised training in mammography images. Specifically, we investigate CAM, GradCAM, GradCAM++, XGradCAM, and LayerCAM methods within the framework of the GMIC model for mass detection in mammography images. The evaluation is conducted on the VinDr-Mammo dataset, utilizing the metrics Accuracy, True Positive Rate (TPR), False Negative Rate (FNR), and False Positive Per Image (FPPI). Results show that using different strategies of activation maps during training and test stages leads to an improvement of the model. With this strategy, we improve the results of the GMIC method, decreasing the FPPI value and increasing TPR.
## I Introduction
Breast cancer has emerged as the most prevalent cancer affecting women globally. It accounts for a substantial number of cancer-related fatalities, responsible for approximately 15.5% of all cancer deaths [1]. Early detection is pivotal in improving treatment outcomes as interventions become more challenging in advanced stages [2]. However, the interpretation of digital mammography images poses significant challenges, even for experienced radiologists, due to various factors, including image quality, radiologist expertise, tissue variations, and lesion characteristics [3]. To address these challenges and enhance diagnostic accuracy, integrating computer-aided diagnosis (CAD) tools for lesion detection has been recommended to assist radiologists in identifying lesions and defining their boundaries, providing an additional tool to physicians and improving the accuracy of breast cancer diagnosis in mammography images.
Computational approaches leveraging Convolutional Neural Networks (CNNs) have achieved remarkable success in various medical image classification and segmentation tasks [4, 5]. In cancer diagnosis applications, achieving image interpretability is crucial, and it is accomplished through the localization of key regions in the image that determines the output class assigned by the model [6], thereby assisting medical professionals in making accurate diagnoses. Networks such as U-Net [7] and Faster-RCNN [8] have been widely employed for segmentation and detection tasks, with annotations indicating lesion regions and their corresponding classification (benign or malignant). Despite significant advancements in semantic segmentation techniques for medical images, current approaches heavily rely on large training datasets with high-quality annotations to ensure efficient model training [9]. However, acquiring such datasets poses significant challenges in the medical domain, as lesion annotations necessitate expert knowledge and meticulous annotation of lesion locations in mammograms, making the process labour-intensive and cost-prohibitive [10].
Given these challenges, the area of weakly supervised learning has been widely studied in recent years [11, 12], exploring strategies to extract information from data with scarce or weak annotations [13]. Although there are different levels of weakly supervised learning, in this study, we consider a weakly annotated database as one in which the images have annotation only regarding the image class (normal or with lesion) but that does not have annotation regarding the location or contour of the lesion. This approach facilitates the training of convolutional networks, making the construction and training of models in mammography images more cost-effective and feasible, as it reduces reliance on specialist annotations for lesion localization. Despite advances in weakly supervised learning, this is still an open problem, and new studies aim to improve the results compared to the strongly supervised methods.
Within weakly supervised training methods, the Class Activation Map (CAM) [14] technique has been widely employed for detecting lesions in digital mammography images. However, new activation map-based methods have been proposed and have not been explored in the context of lesion
detection within mammography images. This work proposes leveraging weakly supervised learning to study state-of-the-art class activation maps for enhanced lesion detection. Specifically, we compare the effectiveness of CAM, GradCAM [15], GradCAM++ [16], XGradCAM [17], and LayerCAM [18] methods. Activation maps are evaluated using the state-of-the-art Globally-Aware Multiple Instance Classifier (GMIC) [6] model and the VinDr-Mammo dataset [19]. The main contributions of this study are outlined as follows:
* Exploration of the impact of utilizing different activation map methods for weakly supervised learning in digital mammography images;
* Analysis of lesion detection models on the VinDr-Mammo dataset;
* Improvement of the GMIC model using different activation maps for training and testing.
## II Related works
In recent years, several works have been proposed in weakly supervised learning for detecting anomalies in digital mammography images [6]. Among the main models used for weakly supervised detection in mammography images, the CAM method has been extensively employed to identify regions of interest.
Shen et al. [6] propose the GMIC model, which uses a convolutional neural network model incorporating local and global image features. First, this model uses a low-capacity network across the image to identify the most informative regions. Then, a higher-capacity network collects details from the selected regions. Finally, a fusion module aggregates global and local information to make a prediction. The model is trained only with class information of the image, and the regions of interest are obtained using the CAM method.
Liu et al. propose the GLAM method, which builds upon the GMIC model by incorporating refined segmentation using only image-level annotation. The key concept behind GLAM is the selection of informative regions (patches), followed by performing segmentation specifically on these selected regions. Similar to other approaches, GLAM also employs the CAM method for identifying regions of interest.
Liang et al. [20] propose using a CAM activation map to replace old attention models. Additionally, a self-training strategy is utilized, involving the observation of outputs from intermediate layers of the model. Bakalo et al. [21] adopt a sliding window approach, leveraging a pre-trained VGG network to identify regions of interest for the targeted problem class. While this approach performs well on smaller images, its computational cost escalates significantly when dealing with large databases comprising high-resolution images and deep model training.
Zhu et al. [22] tackle region of interest detection by generating a reduced feature map through convolution and max pooling layers. Multiple instance learning (MIL) [23] is then employed for image class identification.
Beyond the medical imaging domain, several activation map generation methods have been proposed for weakly supervised learning [15, 16, 17, 18]. However, the methods applied in digital mammography have been limited to CAM evaluation. Our work analyzes different CAM-based methods proposed in the literature in recent years, showing that the activation map is an important optimization factor in the weakly supervised learning process.
## III Materials and Methods
### _Activation Map Methods_
Weakly supervised object detection (WSOD) aims to identify the region containing an object in an image based solely on the image class without pixel-level supervision. Activation map-based methods are commonly employed in WSOD approaches to generate bounding-boxes regions by identifying values above a defined threshold [24]. The resulting region is then resized to match the original image size.
Saliency map methods have been proposed in the literature as an approach to elucidate the relationship between the observed region in the model and the class present in the image [6]. These methods contribute to the interpretability of proposed models and address weakly supervised learning challenges. Saliency methods based on activation, such as CAM, rely on observing the activation of the final layer of the model to identify the regions responsible for the activation of each class. Activation-based methods have been proposed in medical image classification tasks to assist in the interpretability of the models used [25, 26]. Only the CAM model has been investigated in the context of weakly supervised learning applied to lesion detection in mammography images. However, other approaches have been proposed in the literature in recent years and are analyzed in this work.
Let \(f\) be a convolutional neural network with a classifier, and \(c\) denotes the class of interest. Given an image \(x\) and a convolutional layer \(l_{i}\), where \(i\) is the \(i\)-th convolutional layer of \(f\), the class activation map (CAM) of \(x\) with respect to \(c\) is defined as the linear combination of the activation map \(l_{i}\), as shown below [27]:
\[CAM_{c}(x)=ReLU\left(\sum_{k=1}^{N_{l}}\alpha_{k}A_{k}\right), \tag{1}\]
where \(N_{l}\) represents the number of channels in the convolutional layer \(l_{i}\), \(A_{k}\) is the \(k\)-th activation channel, and \(\alpha_{k}\) is the weight indicating the importance of the activation channel to class \(c\). The ReLU activation function is applied to consider only the features that positively influence the target class. The activation map is usually resized to the same size as the input image for CAM-based approaches. Thus, the region of interest can be identified by multiplying the activation map with the input image. In convolutional networks with a global average pooling layer, the values of \(\alpha_{k}\) correspond to the weights of the final classification layer [14]. Figure 1 illustrates the process of obtaining the activation map.
The Grad-CAM method [28] determines the coefficient of the activation map by calculating the average gradients across all activation neurons in the map. The Grad-CAM++
method [16] is a modified version of Grad-CAM that focuses on the positive influences of neurons, considering second-order derivatives. The XGradCAM method [17] is also based on Grad-CAM but scales the gradients using normalized activations. The LayerCAM method [18] combines activation maps from different layers. According to the authors, the initial layers better capture detailed information about object location, while the deeper layers detect the location of the objects of interest.
### _Training_
To conduct this study, we employed the GMIC network [6], a state-of-the-art method for weakly supervised object detection (WSOD) in mammography images. The GMIC model utilizes a global feature extraction module employing CAM to identify regions of interest. These regions are cropped and used as input to a local module. A local feature extraction model extracts the feature vector for each region obtained. Finally, the model is trained by combining the local and global features. Figure 2 shows the operation of the GMIC model.
The GMIC loss function is defined as follows [6]:
\[L(y,\hat{y})=\sum_{c}\text{BCE}(y^{c},\hat{y}^{c}_{local})+\text{BCE}(y^{c},\hat{y}^{c}_{global})+\text{BCE}(y^{c},\hat{y}^{c}_{fusion})+\beta L_{reg}(A^{c}), \tag{2}\]
where BCE represents the binary cross-entropy, \(y^{c}\) denotes the expected output for class \(c\), \(\hat{y}^{c}_{local}\), represents the observed output for the local model, \(\hat{y}^{c}_{global}\) corresponds to the observed output for the global model, \(\hat{y}^{c}_{fusion}\) signifies the observed output for the global model after the fusion of local and global features, \(\beta\) is a regularization coefficient that employs the activation map \(A^{c}\) according to the \(L_{reg}\) function. The regularization function \(L_{reg}\) is defined as \(L_{reg}=\sum_{i,j}|A^{c}_{i,j}|\), where \(i\) and \(j\) represent the rows and columns of the activation map.
### _Experimental Environment_
For model evaluation, we utilized the VinDr-Mammo database [19], which is publicly available. This database comprises 5000 mammogram exams, each containing four associated images, including two views (mediolateral oblique and craniocaudal) for each breast. The images in the database were acquired using the full-field digital mammography (FFDM) technique. The VinDr-Mammo dataset provides information on the anomaly class, such as mass, calcification, asymmetry, and corresponding locations. In our work, we solely used the location information for model validation. During training, only the image class was considered. Specifically, we focused on two classes: "normal" and "mass". The "normal" class signifies that the image does not contain any mass or lesion, while the "mass" class indicates the presence of a lesion potentially associated with a tumour. The training set consisted of 1978 images, and the test set comprised 474 images. Both sets were balanced in terms of class distribution. Figure 3 presents example images from the dataset.
### _Implementation_
The images from the VinDr-Mammo dataset were resized to a resolution of \(2944\times 1920\). Basic data augmentation techniques were applied to augment the training set, including horizontal flipping, random cropping, and normalization, following the approach used in [6]. The training and testing sets were divided based on the dataset's predefined split, selecting the images containing "mass" and "normal" classes.
To train the GMIC model, a pre-trained model from the NYU Breast Cancer Screening dataset [29] was utilized, and
Fig. 1: CAM Activation Map. Image adapted from [14].
Fig. 3: Example images from the VinDr-Mammo dataset. CC-D and CC-E labels refer to craniocaudal views of the right and left breast, respectively. MLO-D and MLO-E correspond to mediolateral oblique views of the left and right breast, respectively. The images are sourced from [19].
Fig. 2: GMIC model. Image adapted from [6].
transfer learning was employed on the VinDr-Mammo dataset. The GMIC model architecture incorporated a ResNet-22 [30] for the global model and a ResNet-18 [30] for the local model, as described in [6]. The training process involved 50 epochs, using a \(\beta\) value of 3.26 and a batch size of 6. The remaining model parameters followed the original values specified by the authors. The code implementation was developed in Python, utilizing the authors' provided source code available on GitHub as the foundation for our work.
For the activation map models GradCAM, GradCAM++, XGradCAM, and LayerCAM, we based our implementation on the code available at [31]. Additionally, we utilized the original code developed in [6] for the CAM model.
### _Evaluation Metrics_
For model evaluation, we employed several metrics to assess the performance of the proposed approach. These metrics included accuracy, Area Under the ROC Curve (AUC), True Positive Rate (TPR), True Negative Rate (TNR), and False Positive per Image (FPPI).
In the classification task, we utilized AUC, TPR, and TNR, commonly used metrics in the literature [32, 33]. TPR represents the ratio of correctly classified positive samples, while TNR represents the correct classification of negative samples. The TPR and TNR metrics are defined by equations 3 and 4, respectively.
\[TPR=\frac{\text{TP}}{\text{TP}+\text{FN}}, \tag{3}\]
\[TNR=\frac{\text{TN}}{\text{TN}+\text{FP}}, \tag{4}\]
where TP, FN, TN, and FP represent the true positives, false negatives, true negatives, and false positives, respectively. In the classification analysis, a true positive occurs when the image class is "mass", and the model correctly predicts it.
We used TPR and FPPI metrics for the detection analysis, commonly used in the literature [34, 35]. In the detection model, a predicted location is considered a true positive if the intersection over union (IoU) between the predicted region and the ground truth region is greater than 0.3. The FPPI metric measures the average number of false positive detections per image. Maximizing the TPR rate while minimizing the FPPI rate is the desired outcome.
## IV Results
Two scenarios were considered to evaluate activation maps using the GMIC model. In the first scenario, the original GMIC model was trained using the CAM method to obtain regions of interest during the training phase. However, different activation map methods were analyzed during the test phase to infer the location of the regions of interest. The activation map method was changed during both the training and test phases in the second scenario. The same training and test sets from the VinDr-Mammo database were used for both scenarios. The obtained metric values after training the original GMIC model are presented in Table I.
The entire model training was performed using only information from the image class (i.e. normal or with mass). This analysis was done to verify the quality of the model's classification. An accuracy of 80% indicates that the model can correctly classify most images. This is the first analysis of a weakly supervised model for the VinDr-Mammo dataset.
CAM, GradCAM, GradCAM++, XGradCAM, and LayerCAM models were used to perform lesion region detection. Only the test images containing masses were analyzed to evaluate the detection quality. Figure 4 shows the segmentations obtained by each method for two test images. The first column shows the original image, with the ground truth location marked in green. Columns 2-6 refer to the CAM, GradCAM, GradCAM++, XGradCAM, and LayerCAM models.
Figure 4 shows that while all methods can identify the region of interest associated with the lesion, the CAM method tends to generate more false positives, encompassing a larger area of segmented regions. The GradCAM method, on the other hand, produces a much smaller region, occasionally underestimating the size of larger lesions. Although the GradCAM, GradCAM++, and XGradCAM methods yield similar results in Figure 4, a greater distinction between the analyzed methods is observed when considering the entire test set.
Table II shows the TPR@FPPI results, which indicate the TPR rate at a specific FPPI value. The highest TPR values obtained were considered for these metrics. Different training and testing scenarios were analysed in Table II. The models defined in the rows GMIC (CAM), GMIC (GradCAM++), and GMIC (XGradCAM) represent the results of the GMIC model using the CAM, GradCAM++, and XGradCAM methods during training, respectively. The values in the columns represent the activation map methods used during the inference process in testing. The original GMIC model corresponds to the combination GMIC(CAM)-CAM. Examining the first row, we observe that the original GMIC(CAM)-CAM model achieves the highest TPR rate but with a high FPPI value. Replacing the activation map method can reduce the FPPI rate without significantly decreasing the TPR, as seen when substituting CAM with XGradCAM. Different results in the testing phase are obtained when training GMIC using alternative activation map methods to locate regions of interest. For this analysis, the GMIC (XGradCAM)-GradCAM++ combination yielded the best results, demonstrating a better combination of TPR and FPPI than the other methods.
A noteworthy observation from this study is that employing different methods in the training and testing phases can yield improved results compared to using a single model for both stages. We speculate that it is more crucial to have lower
\begin{table}
\begin{tabular}{c|c|c|c|c} \hline \hline Model & Accuracy & AUC & TPR & TNR \\ \hline GMIC & 80.12 & 87.22 & 71.88 & 88.52 \\ \hline \hline \end{tabular}
\end{table} TABLE I: Results of the GMIC model trained using the VinDr-Mammo database.
false positive detection rates during the training phase, thus enhancing the model's reliability in extracting feature regions. However, a method that generates a larger prediction region during the testing phase leads to a higher TPR value.
Additionally, we enhanced the performance of the GMIC model by replacing the CAM method with XGradCAM during training and using the GradCAM++ method during testing. This substitution reduced the FPPI rate from 1.55 to 0.88 while increasing the TPR rate.
## V Conclusion
This work investigated the impact of different activation map methods on detecting lesions in digital mammography using weakly supervised learning. The results highlighted the significant influence of activation map strategies on the true positive and false positive rates per image, indicating the importance of selecting an appropriate method for lesion detection.
One key finding was that employing different activation maps during the training and testing phases yielded improved inference performance compared to using the same method throughout. By replacing the CAM method with XGradCAM during model training and utilizing GradCAM++ during the testing phase, we reduced the False Positive per Image (FPPI) rate while increasing the model's True Positive Rate (TPR). This modification enhanced the model's ability to detect and localize lesions in mammography images.
For future research, we plan to explore the use of the detections obtained from weakly supervised learning to train the model in a supervised manner. By incorporating this additional information, we aim to refine further and enhance the performance of the detection model. Additionally, we intend to investigate noisy annotation techniques to address incorrect detections during the training process, thereby improving the robustness and reliability of the model's predictions.
## VI Acknowledgment
We gratefully acknowledge the financial support of the Brazilian agency Fundacao de Amparo a Ciencia e Tecnologia do Estado de Pernambuco (FACEPE) under projects No. APQ-1046-1.03/21 and BIC-0067-1.03/22.
|
2303.08879 | A Quadratic Speedup in the Optimization of Noisy Quantum Optical
Circuits | Linear optical quantum circuits with photon number resolving (PNR) detectors
are used for both Gaussian Boson Sampling (GBS) and for the preparation of
non-Gaussian states such as Gottesman-Kitaev-Preskill (GKP), cat and NOON
states. They are crucial in many schemes of quantum computing and quantum
metrology. Classically optimizing circuits with PNR detectors is challenging
due to their exponentially large Hilbert space, and quadratically more
challenging in the presence of decoherence as state vectors are replaced by
density matrices. To tackle this problem, we introduce a family of algorithms
that calculate detection probabilities, conditional states (as well as their
gradients with respect to circuit parametrizations) with a complexity that is
comparable to the noiseless case. As a consequence we can simulate and optimize
circuits with twice the number of modes as we could before, using the same
resources. More precisely, for an $M$-mode noisy circuit with detected modes
$D$ and undetected modes $U$, the complexity of our algorithm is $O(M^2
\prod_{i\in U} C_i^2 \prod_{i\in D} C_i)$, rather than $O(M^2 \prod_{i \in
D\cup U} C_i^2)$, where $C_i$ is the Fock cutoff of mode $i$. As a particular
case, our approach offers a full quadratic speedup for calculating detection
probabilities, as in that case all modes are detected. Finally, these
algorithms are implemented and ready to use in the open-source photonic
optimization library MrMustard. | Robbe De Prins, Yuan Yao, Anuj Apte, Filippo M. Miatto | 2023-03-15T18:51:36Z | http://arxiv.org/abs/2303.08879v3 | # A quadratic speedup in the optimization of noisy quantum optical circuits
###### Abstract
Linear optical quantum circuits with photon number resolving (PNR) detectors are used for both Gaussian Boson Sampling (GBS) and for the preparation of non-Gaussian states such as Gottesman-Kitaev-Preskill (GKP), cat and NOON states. They are crucial in many schemes of quantum computing and quantum metrology. Classically optimizing circuits with PNR detectors is challenging due to their exponentially large Hilbert space, and quadratically more challenging in the presence of decoherence as state vectors are replaced by density matrices. To tackle this problem, we introduce a family of algorithms that calculate detection probabilities, conditional states (as well as their gradients with respect to circuit parametrizations) with a complexity that is comparable to the noiseless case. As a consequence we can simulate and optimize circuits with twice the number of modes as we could before, using the same resources. More precisely, for an \(M\)-mode noisy circuit with detected modes \(D\) and undetected modes \(U\), the complexity of our algorithm is \(O(M^{2}\prod_{i\in U}C_{i}^{2}\prod_{i\in D}C_{i})\), rather than \(O(M^{2}\prod_{i\in D\cup U}C_{i}^{2})\), where \(C_{i}\) is the Fock cutoff of mode \(i\). As a particular case, our approach offers a full quadratic speedup for calculating detection probabilities, as in that case all modes are detected. Finally, these algorithms are implemented and ready to use in the open-source photonic optimization library MrMustard [1].
## 1 Introduction
Linear optical quantum circuits with photon number resolving (PNR) detectors are studied because of two main reasons. First of all, they are used to perform Gaussian Boson Sampling (GBS). In GBS, squeezed states are sent through an interferometer and subsequently detected by PNR detectors. An example of such a circuit is depicted in Fig. 1(a).
GBS is a leading approach in pursuing quantum advantage [2, 3]. Moreover, several quantum algorithms based on GBS have been introduced [4, 5, 6, 7, 8, 9, 10, 11, 12], some of which rely on the ability to train the circuit parameters.
The second (and arguably more useful) application for circuits with PNR detectors is the generation of conditional non-Gaussian states. Examples of such states include Gottesman-Kitaev-Preskill (GKP) states, cat states, bosonic-code states, weak cubic phase states, ON states and NOON states [13, 14, 15, 16, 17, 18, 19, 20]. These states are used in a wide range of applications, such as generating bosonic error correction codes, providing resource states for the implementation of non-Gaussian gates and quantum metrology. We emphasize the particular interest of GKP states [21] as they are one of the leading candidates for qubits in optical quantum computation [22]. Fig. 1(b) depicts a circuit that can be used to generate non-Gaussian states.
Figure 1: Examples of linear optical quantum circuits with PNR detectors. Vacuum states are squeezed and sent through an interferometer. A subset of the modes is measured with PNR detectors.
Depending on the PNR detection pattern, a certain state is generated. The probability distribution of all conditional states is governed by the circuit parameters. By training these parameters, we can increase both the probability of generating certain non-Gaussian states of interest and their quality.
In this work we address these simulation and optimization tasks using the framework that we introduced in our previous work [23, 24]. This framework allows one to recursively calculate elements of the matrix representation of Gaussian operators in Fock space. Here, it provides us with the matrix elements that define the detection probabilities or the amplitudes of conditional states. Moreover, we can recursively calculate the gradients of these elements with respect to a circuit parametrization, which allows us to find the parameters that minimize a certain cost function using gradient descent.
In realistic settings, decoherence effects such as photon loss affect the output of quantum circuits. Consequently, we need to be able to include these effects into our simulations if we want them to be faithful and useful. This motivates us to carry out simulations using density matrices. Normally, swapping state vectors for density matrices would make tasks quadratically more demanding in terms of both memory and runtime. We will show that we can almost completely get around this quadratic increase by introducing an algorithm that allows us to apply the recurrence relations fewer times while still including the amplitudes of interest. The resulting algorithm works for circuits with in principle an arbitrary number of PNR detectors. We will show that the complexity of our algorithm is comparable to the complexity of the lossless case, as long as the number of detected modes is a large fraction of the total number of modes.
The paper is structured as follows. In Section 2 we recall our simulation and optimization framework [23, 24] and apply it to lossless circuits with PNR detectors (i.e. using state vectors). In Section 3 we extend the framework to density matrices. We do this for GBS circuits (such as Fig. 1(a)) in Section 3.1 and for conditional state generator circuits (such as Fig. 1(b)) in Section 3.2. In Section 4 we discuss the complexity of our algorithms. Section 4.1 gives numerical results for the memory requirements and speed. Section 4.2 gives a comparison with the state-of-the-art classical GBS simulation method.
## 2 Circuit optimization framework revisited
### Representing Gaussian operators in Fock space
In Reference [23], it was shown that quantum optical circuits can be simulated by using a recurrence relation that calculates elements of the matrix representation of Gaussian operators (i.e. pure Gaussian states, mixed Gaussian states, Gaussian unitary transformations or Gaussian channels) in Fock space. We will denote such a matrix representation by \(\mathbf{\mathcal{G}}\) and call its elements the 'Fock amplitudes' of a Gaussian operator.
As we are interested in calculating detection probabilities and possible conditional states here, we will consider \(\mathbf{\mathcal{G}}\) to be the matrix representation of the multi-mode Gaussian state before the detectors. In other words, \(\mathbf{\mathcal{G}}\) is either a state vector or density matrix in Fock space. We represent \(\mathbf{\mathcal{G}}\) as a multidimensional array and refer to its total number of dimensions (i.e. indices) as \(D\). Hence, a general Fock amplitude can be written as \(\mathcal{G}_{\mathbf{k}}\), where \(\mathbf{k}\) is an integer vector of length \(D\). We will refer to \(\mathbf{k}\) as a 'Fock index' of \(\mathbf{\mathcal{G}}\). If \(\mathbf{\mathcal{G}}\) is a state vector, we use the convention that every element of \(\mathbf{k}\) corresponds to an optical mode. If \(\mathbf{\mathcal{G}}\) is a density matrix, every _pair_ of consecutive elements in \(\mathbf{k}\) corresponds to an optical mode. For example, \(\mathbf{k}=[m,n,p,q]\) is a general Fock index for a density matrix on 2 modes, where the indices \(m,n\) and \(p,q\) respectively correspond with the first and second mode. The expression for the Fock amplitudes using Dirac notation is \(\mathcal{G}_{\mathbf{k}}=\mathcal{G}_{mnpq}=\left\langle m,p\right|\mathcal{G} \left|n,q\right\rangle\). For a general number of modes \(M\), it follows that:
\[D=\begin{cases}M,&\text{if $\mathbf{\mathcal{G}}$ is a state vector},\\ 2M,&\text{if $\mathbf{\mathcal{G}}$ is a density matrix}.\end{cases} \tag{1}\]
Fock amplitudes can now be calculated using the following recurrence relation:
\[\mathcal{G}_{\mathbf{k}+\mathbf{1}_{i}}=\frac{1}{\sqrt{k_{i}+1}}\left(\mathcal{G}_{ \mathbf{k}}b_{i}+\sum_{l=1}^{D}\sqrt{k_{l}}\,\mathcal{G}_{\mathbf{k}-\mathbf{1}_{l}}A_{il }\right), \tag{2}\]
where \(\mathbf{1}_{i}\) is a vector of all zeroes except for a single 1 in the i\({}^{\text{th}}\) entry. Note that Fock indices that contain at least one negative value correspond to a zero Fock amplitude, as negative photon numbers are nonphysical. Hence, the sum over \(l\) may contain less than \(D\) terms.
The matrix \(\mathbf{A}\) and vector \(\mathbf{b}\) in Eq. (2) are complex-valued parameters (of size \(D\times D\) and \(D\) respectively) that are easily acquired for a specific circuit as they derive from the parameters of the Gaussian representation. If \(\mathbf{\mathcal{G}}\) is a density matrix \(\mathbf{\rho}\) we recall the results derived in Reference [24] that relate \(\mathbf{A}_{\mathbf{\rho}}\) and \(\mathbf{b}_{\mathbf{\rho}}\) to its complex (i.e. in the \(a/a^{\dagger}\) basis) covariance matrix \(\mathbf{\sigma}\) and displacement vector \(\mathbf{\mu}\):
\[\mathbf{A}_{\mathbf{\rho}} =\mathbf{P}_{M}\mathbf{\sigma}_{-}\mathbf{\sigma}_{+}^{-1}, \tag{3}\] \[\mathbf{b}_{\mathbf{\rho}} =\left(\mathbf{\sigma}_{+}^{-1}\mathbf{\mu}\right)^{*}=\mathbf{P}_{M}\mathbf{ \sigma}_{+}^{-1}\mathbf{\mu}, \tag{4}\]
where \(\mathbf{\sigma}_{\pm}=\mathbf{\sigma}\pm\frac{1}{2}\mathbb{1}_{2M}\) and \(\mathbf{P}_{M}=\left[\begin{array}{cc}\mathbf{0}_{M}&\mathbb{1}_{M}\\ \mathbb{1}_{M}&\mathbf{0}_{M}\end{array}\right]\).
If \(\mathbf{\mathcal{G}}\) is a state vector \(\mathbf{\psi}\), then \(\mathbf{A}_{\mathbf{\psi}}\) and \(\mathbf{b}_{\mathbf{\psi}}\) can be
obtained from:
\[\mathbf{A}_{\mathbf{\rho}} =\mathbf{A}^{*}_{\mathbf{\psi}}\oplus\mathbf{A}_{\mathbf{\psi}}, \tag{5}\] \[\mathbf{b}_{\mathbf{\rho}} =\mathbf{b}^{*}_{\mathbf{\psi}}\oplus\mathbf{b}_{\mathbf{\psi}}. \tag{6}\]
Let us now define the 'weight' of a Fock index \(\mathbf{k}\) as:
\[w=\sum_{i=1}^{D}k_{i}. \tag{7}\]
We see that Eq. (2) allows us to write \(D\) Fock amplitudes of weight \(w+1\) as linear combinations of a single Fock amplitude of weight \(w\) and \(D\) Fock amplitudes of weight \(w-1\). In order to refer to these different roles, we call 'read' the group of amplitudes of weight \(w-1\) and 'write' the group of amplitudes of weight \(w+1\) (to refer to the fact that \(D\) amplitudes need to be read from memory so that \(D\) new ones can be written to memory), and we refer to the single amplitude of weight \(w\) as the 'pivot'. Fig. 2 gives a schematic representation of Eq. (2) for the case where \(\mathbf{\mathcal{G}}\) is 1-dimensional (i.e. for a state vector on one mode) and 2-dimensional (i.e. for a state vector on two modes or a density matrix on one mode). In this figure, the amplitudes marked in blue (write) are written as linear combinations of the orange ones (read+pivot). In general, a Fock index \(\mathbf{k}\) marks a position in a \(D\)-dimensional 'Fock lattice'. Eq. (2) can thus be interpreted as a relation between \(2D+1\) amplitudes that we can draw as a cross (or hypercross for higher dimensions). We can repeatedly reposition the hypercross in \(\mathbf{\mathcal{G}}\) to calculate new Fock amplitudes under the condition that we already computed the read and pivot amplitudes.
### State vector simulations
Let us now consider how we can apply the recurrence relation (that is, how we can move around the hypercross) to obtain the probabilities of PNR outcomes or the amplitudes of conditional states using the state vector formalism in a noiseless, lossless circuit. As the number of possible measurement results \(\mathbf{n}=[n_{1},n_{2},...,n_{M}]\) (\(n_{i}\in[0,1,...,\infty]\)) is in principle infinite, we limit ourselves to calculating the most probable ones such that the required resources for our simulation remain finite. We will consider the Fock amplitudes \(\mathcal{G}_{\mathbf{k}}\) for all \(\mathbf{k}\) of length \(M\) that satisfy the following boundary conditions:
\[\mathbf{0}\leq\mathbf{k}<\textit{cutoffs}. \tag{8}\]
Here, \(\textit{cutoffs}=[C_{1},C_{2},C_{3},...]\) is the set of upper bounds for the photon numbers in all modes. We assume that they are chosen such that the probability of detecting \(C_{i}\) or more photons in mode \(i\) is negligible.
Note that Fock amplitude \(\mathcal{G}_{\mathbf{0}}\) (where \(\mathbf{0}=[0,0,...,0]\)) is the vacuum component of \(\mathbf{\mathcal{G}}\). If \(\mathbf{\mathcal{G}}\) is a density matrix \(\mathbf{\rho}\), it can be computed as:
\[\rho_{\mathbf{0}}=\frac{\exp\left[-\frac{1}{2}\mathbf{\overline{\mu}}^{\dagger}\mathbf{\sigma}_{+}^{-1}\mathbf{\overline{\mu}}\right]}{\sqrt{\det\left(\mathbf{\sigma}_{+}\right)}}, \tag{9}\]
If \(\mathbf{\mathcal{G}}\) is a state vector \(\mathbf{\psi}\), ignoring a global phase, it holds that \(\psi_{\mathbf{0}}=\sqrt{\rho_{\mathbf{0}}}\).
Starting from \(\mathbf{\mathcal{G}}_{\mathbf{0}}\), we can calculate all of the amplitudes by applying Eq. (2). We start by placing the pivot of our hypercross at \(\mathbf{0}\) (for which \(w=0\)) and write amplitudes for which \(w=1\). Next, we apply all pivots for which \(w=1\) and write amplitudes for which \(w=2\). By repeatedly increasing \(w\) and applying all pivots of that weight, we can calculate the required amplitudes. As the amplitudes we write have a higher weight than the amplitudes we read, we know that the right amplitudes are always calculated before we need to read them.
Fig. 3 shows an intermediate step of this process for circuits that consist of one and two modes. In this figure, the cutoff values of all modes are chosen to be 7. Dark grey cells depict amplitudes that have already been used as pivots. Light grey cells are amplitudes that have been calculated, but have not yet been used as pivots. At the end of the process all cells in the figure will be calculated.
Figure 2: Schematic representation of how Eq. (2) can be used to calculate the Fock amplitudes \(\mathcal{G}_{\mathbf{k}}\) of a Gaussian state. Every Fock index \(\mathbf{k}\) marks a position in the Fock lattice. Every blue node can be written as a linear combination of the orange nodes.
Note that this strategy to calculate Fock amplitudes allows for two types of parallelization. First, given a specific pivot, we can parallelize the calculations of different elements in the 'write' group. Second, since we order pivots according to increasing weight, we can also apply pivots of the same weight simultaneously.
### Alternative cutoff conditions
The boundary conditions of Eq. (8) are useful for simulating circuits for which we know the maximum number of photons that a PNR detectors can measure. The cutoff in the undetected modes can be chosen separately, depending on the required accuracy for calculating the conditional state. However, the recurrence relation also allows one to consider other cutoff conditions.
A first useful example occurs when we want to place an upper bound on the total number of photons that is present in all modes. As the total number operator \(\hat{\mathbf{n}}=\sum_{i=1}^{M}\hat{n}_{i}\) commutes with the multi-mode Fock Hamiltonian [25], such an upper bound defines a cutoff on the energy levels of the multi-mode Gaussian state before the detectors. More formally, we can replace Eq. (8) by:
\[0\leq w(\mathbf{k})<w_{\text{max}}\, \tag{10}\]
which can be related to an upper bound for the total number of photons \(N_{\text{max}}\) in the circuit:
\[w_{\text{max}}=\begin{cases}N_{\text{max}},&\text{if $\mathbf{\mathcal{G}}$ is a state vector},\\ 2N_{\text{max}},&\text{if $\mathbf{\mathcal{G}}$ is a density matrix}.\end{cases} \tag{11}\]
Note that for Eq. (10) the number of amplitudes \(\mathcal{G}_{\mathbf{k}}\) that have the same weight increases binomially with \(w\). For Eq. (8), this number of amplitudes first increases with \(w\), after which it reaches a maximum and decreases. Indeed, once \(w\geq\min(\textit{cutoffs})\), the right inequality of Eq. (8) starts to exclude general Fock indices of weight \(w\). Eventually, when \(w\) is raised all the way to \(\sum_{i=1}^{D}(C_{i}-1)\) the number of allowed indices has decreased back to \(1\).
Another possible cutoff condition is given by the total sum of the probabilities of PNR outcomes. After each iteration (in which we apply all pivots of weight \(w\)), we can evaluate this sum and check whether it is sufficiently close to \(1\) to stop the process.
### Circuits without displacement gates
In Reference [24] we showed how to compute the parameters \(\mathbf{A}\), \(\mathbf{b}\) and \(\mathcal{G}_{\mathbf{0}}\) that define a Gaussian operator. More specifically for Gaussian states, we showed how \(\mathbf{A}\), \(\mathbf{b}\) and \(\mathcal{G}_{\mathbf{0}}\) can be calculated from the covariance matrix and means vector. Moreover, it can be shown that for a state with zero displacement vector we have \(\mathbf{b}=\mathbf{0}\). Note that this applies to the states before the detectors in Fig. 1 as these circuits do not contain displacement gates.
In the case that there is no displacement, we can substitute \(\mathbf{b}=\mathbf{0}\) in Eq. (2) such that our recurrence relation turns into:
\[\mathcal{G}_{\mathbf{k}+\mathbf{1}_{i}}=\frac{1}{\sqrt{k_{i}+1}}\sum_{l=1}^{D}\sqrt{k_{l}}\,\mathcal{G}_{\mathbf{k}-\mathbf{1}_{l}}A_{il}. \tag{12}\]
We find that the only Fock amplitudes that differ from zero are the ones which have a Fock index \(\mathbf{k}\) with even weight. For state vectors, we can alter the strategy described in Fig. 3 by only considering pivots that have odd weight. This leads to the checkered pattern of Fig. 4, where we still apply pivots in order of increasing weight. Note that we now fill the array twice as fast because we only need to compute half of the amplitudes.
### Gradients
In this section, we present how the framework above allows not only to _simulate_ but also to _optimize_ circuits. Given a loss function \(L\) that depends on the probabilities of the PNR outcomes (and the conditionally generated states), we need to calculate the gradient of \(L\) with respect to the circuit parametrization, which in turn requires the partial derivatives of the Fock amplitudes with respect to \(\mathbf{b}\) and \(\mathbf{A}\). Differentiating Eq. (2) yields the corresponding recurrence relations:
Figure 3: Intermediate step of state vector simulations for circuits consisting of 1 and 2 modes. Fock amplitudes \(\mathcal{G}_{\mathbf{k}}\) of the output state vectors are computed recursively. We start at \(\mathcal{G}_{\mathbf{0}}\) and apply pivots in order of increasing weight until all amplitudes are calculated. At this intermediate step, dark grey cells have been used as pivots. Light grey cells have been written and will be used as pivots in the next step. Animated versions of these figures are included in the Supplementary Materials.
\[\frac{\partial\mathcal{G}_{\mathbf{k}+\mathbf{1}_{i}}}{\partial b_{m}}=\frac{1 }{\sqrt{k_{i}+1}}\left(\frac{\partial\mathcal{G}_{\mathbf{k}}}{\partial b_{m}}b_{i}+ \mathcal{G}_{\mathbf{k}}\delta_{im}+\sum_{l=1}^{D}\sqrt{k_{l}}\,\frac{\partial \mathcal{G}_{\mathbf{k}-\mathbf{1}_{l}}}{\partial b_{m}}A_{il}\right)\, \tag{16}\] \[\frac{\partial\mathcal{G}_{\mathbf{k}+\mathbf{1}_{i}}}{\partial A_{mn}}= \frac{1}{\sqrt{k_{i}+1}}\left(\frac{\partial\mathcal{G}_{\mathbf{k}}}{\partial A_ {mn}}b_{i}+\sum_{l=1}^{D}\sqrt{k_{l}}\left[\frac{\partial\mathcal{G}_{\mathbf{k}- \mathbf{1}_{l}}}{\partial A_{mn}}A_{il}+\mathcal{G}_{\mathbf{k}-\mathbf{1}_{l}}\delta_{im} \delta_{ln}\right]\right)\, \tag{17}\]
where \(\delta_{jk}\) is the Kronecker delta function. Since both Eq. (16) and Eq. (17) are structured in a similar way as Eq. (2), we can implement all three equations simultaneously. We do so by taking a single walk through the Fock lattice, that is, by performing a single iteration over the Fock indices \(\mathbf{k}\). We still differentiate between the different types of Fock indices'read' (\(\mathbf{k}-\mathbf{1}_{l}\)), 'pivot' (\(\mathbf{k}\)) and 'write' (\(\mathbf{k}+\mathbf{1}_{i}\)), but instead of only manipulating amplitudes \(\mathcal{G}_{\mathbf{k}}\), we now also process their partial derivatives with respect to \(b_{m}\) and \(A_{mn}\). Note that every \(\mathbf{k}\) now corresponds with one Fock amplitude \(\mathcal{G}_{\mathbf{k}}\), \(D\) gradients \(\partial\mathcal{G}_{\mathbf{k}}/\partial b_{m}\) and \(D^{2}\) gradients \(\partial\mathcal{G}_{\mathbf{k}}/\partial A_{mn}\), such that both the memory and time usage of an _optimization_ are a factor \(1+D+D^{2}\) higher than those of a _simulation_.
## 3 Extension to density matrix simulations
### Algorithm for Gaussian Boson Sampling
Consider a circuit of which all \(M\) modes are detected (such as the one in Fig. 1(a)). To capture mixed states (such as can arise in the presence of photon loss) density matrices must be used in place of state vectors. For simplicity, let us assume that the photon number cutoff in each mode is equal to \(C\). To calculate
Figure 4: Intermediate step of state vector simulations for circuits that do not contain displacement gates (consisting of \(\mathbf{1}\) and \(\mathbf{2}\) modes). Fock amplitudes \(\mathcal{G}_{\mathbf{k}}\) are calculated recursively as in Fig. 3, but now they are zero when \(\sum_{i}k_{i}\) is odd. Consequently, pivots (i.e. the central nodes of the hypercross) do not need to be read. At this intermediate step, dark grey cells have been used as pivots but only for placing the cross (their value remains zero), while light grey cells have been actually written.
the probabilities of the \(C^{M}\) possible PNR detection patterns, one could start by following the procedure described in Section 2.2 to calculate all \(C^{2M}\) Fock amplitudes of the multi-mode state before the detector. The probability of observing a certain photon number pattern \(\mathbf{n}=[n_{1},n_{2},...,n_{M}]\) at the detectors is then given by:
\[p(\mathbf{n})=\mathcal{G}_{n_{1}n_{1}n_{2}n_{2}...n_{M}n_{M}}. \tag{18}\]
However, as we are only interested in the \(C^{M}\) diagonal amplitudes, we can construct a more efficient algorithm that selectively applies the recurrence relation in the Fock lattice. This way we prevent the calculation of irrelevant amplitudes as much as possible. After choosing an adequate set of pivot positions, we can apply them in order of increasing weight.
#### 3.1.1 Single mode
Let us first consider the case where we have a single mode. Here the Fock lattice only has two dimensions (i.e. \(\mathbf{k}=[m,n]\)) and we can use the hypercross of Fig. 2(b). For now, also consider the case where the circuit under consideration does not contain displacement gates. As explained in Section 2.4, this implies that the inner 'pivot' node of the hypercross does not need to be read. Fig. 5(a) visualizes how Eq. (12) can be applied in order to calculate the required diagonal amplitudes. We have chosen all pivots of the type \([a+1,a]\) that satisfy \([0,0]\leq[a+1,a]<[C_{1},C_{1}]\). Note that we could have equivalently chosen pivots of the type \([a,a+1]\) instead. We apply the pivots in order of increasing weight, i.e. from the top left to the bottom right. As these pivots only read amplitudes that are previously written by other pivots, the total set of pivots can be said to be 'self-sufficient'.
Fig. 5(b) shows the case where the circuit under consideration does contain displacement gates. Now, we also have to read the value of the pivot node in order to apply the hypercross. These values (at positions \([a+1,a]\)) can be provided by introducing extra pivots of the type \([a,a]\). In their turn, the off-diagonal pivots provide the amplitude values of the diagonal pivots. In other words, the total set of the diagonal and off-diagonal pivots is self-sufficient here.
#### 3.1.2 Two modes
We now consider density matrix simulations of GBS circuits with two modes, such that Eq. (2) can be represented by a four dimensional hypercross. However, we still choose to visualize both the hypercross and \(\mathbf{\mathcal{G}}\) in two dimensions via the Kronecker product. Below, we explain in more detail how such a representation is constructed. The hypercross itself is shown in Fig. 6. Fig. 7 visualizes how this hypercross can be applied to get the diagonal Fock amplitudes in the case where \(\textit{cutoffs}=[4,4]\).
We write \(\mathbf{k}=[m,n,p,q]\), where \([m,n]\) and \([p,q]\) are the indices corresponding to the first and second mode respectively. Note now that if \([p,q]\) would be fixed, we are left with a 2D matrix that is only indexed by \([m,n]\), such that it can be visualized in a similar way as Fig. 5. We now combine all such matrices (for all possible values of \(p\) and \(q\)) in a block matrix. This leads to a 2D 'nested representation'. If \(M>2\), we can recursively apply this process for different index pairs (i.e. constructing block matrices of block matrices), such that we always end up with a 2D image. Note that pivots are no longer applied from top left to bottom right in this representation, as this would not correspond with the order of increasing weight.
We have to make sure that amplitudes are written before they are read. In other words, the total set of pivots used in Fig. 7 has to be self-sufficient. We can check that this is true by first considering the pivots of the type \([a,a,b,b]\) and \([a+1,a,b,b]\) (i.e. the diagonal cells in Fig. 7 and the cells under those). This set of pivots is _almost_ self-sufficient: within each \(C_{1}\times C_{1}\) block that lies on the diagonal of Fig. 7 (i.e. within each block containing amplitudes of the type \([m,n,b,b]\)), _almost_ all of these pivots get their required 'read' and 'pivot' amplitudes from the 'write' amplitudes from another pivot in those blocks. The only amplitudes that are missing to complete the self-sufficiency are the amplitudes of the type \([0,0,b,b]\) (marked as \(\star\)). These last amplitudes act like 'seed amplitudes' in the diagonal \(C_{1}\times C_{1}\) blocks, similar to how \(\mathcal{G}_{0}\) acts as a seed in Fig. 5. These missing amplitudes can be obtained from the remaining pivots outside of the diagonal \(C_{1}\times C_{1}\) blocks: \([0,0,b{+}1,b]\) (marked as \(\star\)). These last pivots 'bridge' the gaps between different diagonal \(C_{1}\times C_{1}\) blocks by providing the necessary increments of \(k_{i}\) for \(i\in\{3,4,5,...,2M\}\).
#### 3.1.3 General number of modes
The pivot placement strategy of Figs. 5 and 7 can be generalized to a larger number of modes. The strategy for 3 modes is visualized in Appendix A.
Algorithm 1 shows how a GBS circuit with an _arbitrary_ number of modes can be simulated in the density matrix formalism. Lines 1 to 5 are used to apply the diagonal pivots \(diag=[a,a,b,b,c,c,...]\) in order of increasing weight. Note again that these pivots are also diagonal in the nested representation, while the order in which we apply them is not necessarily from top left to bottom right (see for example the animated version of Fig. 7 in the Supplementary Materials). In order to apply the diagonal pivots, a variable \(S\) is increased stepwise, starting from 0. Each time, we apply all diagonal pivots that satisfy both \(a+b+c+...=S\) and the boundary conditions of Eq. (8).
Lines 6 to 10 are used to apply the off-diagonal pivots \(diag+\mathbf{1}_{2K-1}\), where \(K\in\{1,2,...,M\}\) (i.e. \([a{+}1,a,b,b,c,c,...]\), \([a,a,b{+}1,b,c,c,...]\), \([a,a,b,b,c{+}1,c,...]\), etc.). For \(K{=}1\), the off-diagonal pivots lie in the diagonal \(C_{1}\times C_{1}\) blocks. For \(K>1\), the off-diagonal pivots are 'bridge pivots' that provide the 'source amplitudes' \([0,0,b,b,c,c,...]\). Note that because of line 8, the number of off-diagonal pivots decreases with \(K\) (see both Fig. 7 and Appendix A for reference).
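A minimal enumeration sketch of this pivot schedule is given below. It only generates pivot positions (the numerical update of Eq. (2) is not performed), the cutoff values are hypothetical, and the exact cutoff test of line 8 is an assumption: consistent with the \([0,0,b{+}1,b]\) pivots of Fig. 7, the bridge pivots of type \(K\) are taken to have their first \(K-1\) index pairs equal to zero.

```python
from itertools import product

cutoffs = [4, 4, 4]                        # local cutoffs C_i (hypothetical)
M = len(cutoffs)

def diagonal_pivots():
    """Diagonal pivots [a, a, b, b, ...] in order of increasing weight."""
    for S in range(sum(c - 1 for c in cutoffs) + 1):
        for half in product(*(range(c) for c in cutoffs)):
            if sum(half) == S:
                yield tuple(x for a in half for x in (a, a))

def off_diagonal_pivots(diag):
    """Off-diagonal pivots diag + 1_{2K-1} (assumed form of the line-8 test)."""
    for K in range(1, M + 1):
        if any(diag[2 * i] != 0 for i in range(K - 1)):
            continue                        # bridge pivots: first K-1 pairs zero
        piv = list(diag)
        piv[2 * K - 2] += 1                 # increment component 2K-1 (1-indexed)
        if piv[2 * K - 2] < cutoffs[K - 1]:
            yield tuple(piv)

n_diag = sum(1 for _ in diagonal_pivots())
n_off = sum(len(list(off_diagonal_pivots(d))) for d in diagonal_pivots())
print(n_diag, n_off)    # both scale like the product of the cutoffs
```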
In Appendix B, we show that both the total number of pivots and the total number of written amplitudes that appear in Algorithm 1 scale like \(\prod_{i=1}^{M}C_{i}\), which simplifies to \(C^{M}\) if the cutoffs on all modes are equal.
Note that if the local cutoff conditions of Eq. (8) are replaced by the global cutoff condition of Eq. (10), then the sum of line 1 runs to \(N_{\text{max}}=\frac{1}{2}w_{\text{max}}\) instead, while the cutoff conditions in lines 2, 4 and 8 drop out. As shown in Appendix B, the scaling of the algorithm then changes to \((w_{\text{max}})^{M}\).
#### 3.1.4 Compact storage of the Fock amplitudes
In Appendix B, we show that all amplitudes that are written in Algorithm 1 can be parameterized as _diag_ + _offset_, where _diag_ is a diagonal position in the Fock lattice and _offset_ is an offset vector that only comes in a select number of types. This parametrization helps to store the amplitudes in a unique and compact manner. However, in the case that we detect all modes, we are only interested in the \(\prod_{i=1}^{M}C_{i}\) diagonal amplitudes. The off-diagonal amplitudes do not need long-term storage in memory. It can be shown that all off-diagonal amplitudes are included in the 'read' group of a pivot exactly once. Thus, we can remove off-diagonal 'read' amplitudes from memory once they have been used. We only have to store a buffer of off-diagonal amplitudes that correspond with a select number of weight values. In addition to the animated versions of Figs. 5, 7 and 11, we also include animations in the Supplementary Materials that apply this 'buffer strategy'.
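The essence of the buffer strategy can be captured in a few lines. The storage layout below (a plain dictionary keyed by the full index, with off-diagonal entries popped on first read) is an illustrative simplification, not the _diag_ + _offset_ parametrization of Appendix B.

```python
diagonal_store = {}    # long-term storage of the detection probabilities
offdiag_buffer = {}    # short-lived off-diagonal amplitudes

def write(idx, value):
    if idx[0::2] == idx[1::2]:        # diagonal indices [a, a, b, b, ...]
        diagonal_store[idx] = value
    else:
        offdiag_buffer[idx] = value

def read(idx):
    if idx in diagonal_store:
        return diagonal_store[idx]
    return offdiag_buffer.pop(idx)    # each off-diagonal amplitude is read once
```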
For a circuit consisting of 4 modes (such as the one in Fig. 1(a)), Fig. 8 shows how the number of stored amplitudes evolves as we apply more pivots. We have chosen the photon number cutoff to be 10 in all modes. In contrast to the strategy without a buffer (blue curve), the buffer strategy (orange curve) reaches its maximum before the algorithm ends. This results from the fact that the number of pivots that have an equal weight reaches a maximum at \(w=\sum_{i=1}^{M}C_{i}\) when we apply the local boundary conditions of Eq. (8). For reference, Fig. 8 also shows a horizontal dashed line at \(\prod_{i=1}^{M}C_{i}=C^{M}=10^{4}\). Note that after completing Algorithm 1 using the buffer strategy, all off-diagonal amplitudes are removed, such that the orange curve coincides with the dashed curve.
### Algorithm for conditional state generation
Let us now consider circuits where all but one mode are detected, such as the one of Fig. 1(b). Our results can readily be generalized to an arbitrary number of undetected modes. Our goal is now to calculate the distribution of states that are generated conditionally on the PNR detection results. As a first example, we consider a circuit with two modes and one detector, such that we can use the nested representation of Fig. 6. In this representation, the targeted distribution is defined by the Fock amplitudes \(\mathcal{G}_{mnpq}\) in the diagonal \(C_{1}\times C_{1}\) blocks. Each detection outcome corresponds with one such \(C_{1}\times C_{1}\) block, which is the unnormalized density matrix of the conditional state.

Figure 5: Visualisation of how Eq. (2) can be applied to density matrices in order to calculate the detection probabilities \(|\mathcal{G}_{n_{1}n_{1}}|^{2}\) of a single-mode circuit. Pivots (dark grey) are applied from top left to bottom right, i.e. in order of increasing weight. Light grey cells are non-pivot amplitudes that are written. White cells do not have to be written, which improves on the naive idea of applying pivots in all cells (as in Fig. 3(b)). In Fig. (a), the pivots are not read, as Eq. (2) there simplifies to Eq. (12). We chose to upper bound the photon number by 10 in this example. Animated versions of these figures are included in the Supplementary Materials.
The targeted blocks can be calculated using the two-step process presented in Fig. 9. First, we calculate all Fock amplitudes in the upper left \(C_{1}\times C_{1}\) block, which is the density matrix corresponding with detecting zero photons. For this first step, we can use the hypercross of Fig. 6 where we choose only to _increment_ indices \(m\) and \(n\) (not \(p\) and \(q\)). Note that we also do not have to _decrement_ \(p\) and \(q\), as these amplitudes would correspond with negative photon numbers. For the second step of our simulation process, we do have to _decrement_ all indices, but this time we choose only to _increment_ indices \(p\) and \(q\) (not \(m\) and \(n\)). Moreover, we choose to apply pivots in blocks of size \(C_{1}\times C_{1}\). By doing so, we can apply a coarse-grained version of Algorithm 1 _as if_ the circuit under consideration has \(M-1\) modes. In this example, \(M=2\) such that we apply a coarse-grained version of Fig. 5(b). Within each \(C_{1}\times C_{1}\) block of pivots, the individual pivots still need to be applied according to increasing weight, similar to Fig. 3(b).

Figure 6: Schematic representation of Eq. (2) where \(\mathcal{G}\) is 4-dimensional. The Fock amplitudes \(\mathcal{G}_{mnpq}\) are represented via the Kronecker product: all \(C_{1}\times C_{1}\) blocks corresponding to different values of \(p\) and \(q\) are combined in a block matrix.
This simulation process for state generator circuits can be generalized to an arbitrary number of modes \(M\). Algorithm 2 considers all cases where we have \(1\) undetected mode and \(M-1\) detected modes. The extension to an arbitrary number of undetected modes is straightforward. A similar two step process is followed as in Fig. 9. Note that step 2 of Algorithm 2 is indeed a coarse-grained version of Algorithm 1 as we apply \(C_{1}\times C_{1}\) blocks of pivots. That is, we apply pivots \([m,n,a,a,b,b,c,c,...]\) for \(m,n\in\{0,1,...,C_{1}-1\}\) where \(a,b,c,...\) follow from Algorithm 1 after substituting \(M\) by \(M-1\) and _cutoffs_ by \([C_{2},C_{3},C_{4},...]\). As Algorithm 1 scales as \(\prod_{i=1}^{M}C_{i}\), it is clear from the above that Algorithm 2 scales as \(C_{1}^{2}\prod_{i=2}^{M}C_{i}\). In the case where we choose all modes to have the same cutoff \(C\), these scaling factors are \(C^{M}\) and \(C^{M+1}\) respectively.
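The coarse-graining of Algorithm 2 can be sketched as follows: block positions over the detected modes follow the diagonal enumeration of Algorithm 1, and each block is expanded into \(C_{1}\times C_{1}\) individual pivots ordered by increasing weight. Cutoff values are hypothetical and only the pivot schedule is produced, not the numerical update.

```python
from itertools import product

C1 = 5                                  # cutoff of the undetected mode
detected_cutoffs = [4, 4]               # cutoffs of the detected modes

def block_positions():
    """Diagonal positions [a, a, b, b, ...] over the detected modes."""
    for S in range(sum(c - 1 for c in detected_cutoffs) + 1):
        for half in product(*(range(c) for c in detected_cutoffs)):
            if sum(half) == S:
                yield tuple(x for a in half for x in (a, a))

schedule = []
for block in block_positions():
    # within each block, pivots are applied in order of increasing weight
    for m, n in sorted(product(range(C1), repeat=2), key=sum):
        schedule.append((m, n) + block)   # full pivot [m, n, a, a, b, b, ...]

print(len(schedule))    # scales as C1^2 times the product of the detected cutoffs
```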
## 4 Complexity
In the case where we use state vectors, Section 2.2 explains how pivots can be applied to calculate all Fock amplitudes that satisfy the cutoff conditions of Eq. (8). The total number of pivots then scales as \(O(\prod_{i=1}^{M}C_{i})\). In the case where we simulate a GBS circuit using density matrices, we apply Algorithm 1. In Appendix B, we show that the total number of pivots that are used in this algorithm also scales as \(O(\prod_{i=1}^{M}C_{i})\).
As is clear from Eq. (2), the complexity of applying a single pivot is given by \(D^{2}\). (Note that Eq. (2) can be rewritten as the sum of a vector and a matrix-vector multiplication by rescaling \(\mathcal{G}_{\mathbf{k}-\mathbf{1}_{i}}\) and \(\mathcal{G}_{\mathbf{k}+\mathbf{1}_{i}}\) with \(\sqrt{k_{i}}\) and \(\sqrt{k_{i}+1}\) respectively.) From Eq. (1) it follows that, using either state vectors or density matrices, our algorithms for GBS simulation scale like \(O(M^{2}\prod_{i=1}^{M}C_{i})\). As is clear from Section 3.2, for the generation of single-mode conditional states, this complexity changes to \(O(M^{2}C_{1}^{2}\prod_{i=2}^{M}C_{i})\). Algorithm 2 can readily be extended to account for a general number of undetected modes. By doing so, the complexity changes to:
\[O(M^{2}\prod_{i\in I_{U}}C_{i}^{2}\prod_{i\in I_{D}}C_{i}), \tag{19}\]
where \(I_{U}\) and \(I_{D}\) are the sets of indices \(i\) that respectively correspond to undetected and detected modes.
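A small helper that evaluates this scaling for a given split into detected and undetected modes (purely illustrative; the mode indices and cutoff values are arbitrary examples):

```python
def complexity(cutoffs, undetected):
    """Scaling of Eq. (19): M^2 * prod(C_i^2, undetected) * prod(C_i, detected)."""
    M = len(cutoffs)
    scale = M ** 2
    for i, C in enumerate(cutoffs):
        scale *= C ** 2 if i in undetected else C
    return scale

print(complexity([10, 10, 10, 10], undetected=set()))   # all detected: M^2 * C^M
print(complexity([10, 10, 10, 10], undetected={0}))     # one undetected: M^2 * C^(M+1)
```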
In the remainder of this work, we first demonstrate how this scaling behaviour can be observed for circuits with 4 modes. Afterwards, the results for GBS circuits are compared to the state-of-the-art classical simulation method.
Figure 8: The number of stored amplitudes when using a buffer of off-diagonal amplitudes (orange curve) and without using such a buffer (blue curve) as a function of the number of applied pivots. After all pivots are applied, the buffer is left empty such that the orange curve reaches a value of \(C^{M}\) (dashed curve). This figure is made using 4 modes, all with a photon number cutoff of 10. Both the real and imaginary parts of the amplitudes are stored as 64-bit-precision floating-point numbers.
Figure 7: Visualisation of how Eq. (2) (i.e. the hypercross from Fig. 6) can be applied to density matrices in order to calculate the detection probabilities \(\mathcal{G}_{n_{1}n_{1}n_{2}n_{2}}\) of a two-mode circuit. The photon number in both modes is upper bounded by 4. Dark grey cells represent pivots. Light grey cells represent non-pivot amplitudes that are written. The pivots marked as \(\star\) write to the pivots marked as \(\star\). Similar to \(\mathcal{G}_{\mathbf{0}}\), these last pivots (\(\star\)) act as ‘seed’ amplitudes in their \(C_{1}\times C_{1}\) blocks. An animated version of this figure is included in the Supplementary Materials.
### Memory usage and simulation time
Fig. 10 visualizes the memory usage and simulation time for a circuit with 4 modes (such as the circuits in Fig. 1). We have chosen the photon number cutoff \(C\) to be equal for all modes. As both the memory usage and simulation time scale with the number of applications of Eq. (2) (i.e. the number of pivots), the trends in Figs. 10(a) and 10(b) are similar.
When using state vectors, we calculate \(C^{M}\) amplitudes to simulate a circuit, regardless of the number of PNR detectors (green line in Fig. 10(a)). When using density matrices, this number would increase to \(C^{2M}\) (orange line in Fig. 10(a)) if we naively applied the strategy of Section 2.2. When all modes in the circuit are measured, Algorithm 1 reduces the memory requirements from the orange curve to the solid blue curve. This last curve corresponds with the number of written amplitudes given in Appendix B.3. It can be lowered further to the dashed blue curve when the buffer strategy of Section 3.1.4 is applied. Note that the memory usage at a cutoff value of 10 corresponds with the maximum of the orange curve in Fig. 8. From the slopes of these curves we verify that the complexity of Algorithm 1 is equal to the complexity of a state vector simulation, i.e. \(C^{M}\), as was discussed in Section 3.1.3. When all but one mode of the circuit are detected, we can use Algorithm 2 to improve on the naive strategy without selective pivot placement. As discussed in Section 3.2, the complexity of this last algorithm is \(C^{M+1}\).

Figure 9: Visualisation of Algorithm 2 for a circuit of two modes, one of which is detected. Dark grey cells represent pivots. Light grey cells represent non-pivot amplitudes that are written. White cells represent amplitudes that are not written. For the pivots \(\mathcal{G}_{3300}\) (in Fig. a) and \(\mathcal{G}_{2222}\) (in Fig. b) the hypercross is shown, where we only keep two of its blue nodes. For both modes, we chose a photon number cutoff of 5. Animated versions of these figures are included in the Supplementary Materials.
When calculating both the required amplitudes (Eq. (2)) and gradients (Eqs. (16) and (17)) to optimize the circuit, we know from Section 2.5 that we can implement all three equations by taking a single walk through the Fock lattice. As a result, the memory usage of an _optimization_ is a factor \(1+D+D^{2}\) higher than the memory usage of a _simulation_ (where \(D=M\) for state vectors and \(D=2M\) for density matrices). When we would calculate both amplitudes and gradients for Fig. 10 (where \(M=4\)), this means that the orange, red and blue curves would shift up on the log scale by a factor of \(1+2M+4M^{2}=73\), while the factor for the green curve would be \(1+M+M^{2}=21\).
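These two overhead factors follow directly from \(1+D+D^{2}\):

```python
# Quick check of the quoted memory-overhead factors for M = 4 modes.
M = 4
D_dm, D_sv = 2 * M, M              # density matrices vs. state vectors
print(1 + D_dm + D_dm ** 2)        # 73: factor for the density-matrix curves
print(1 + D_sv + D_sv ** 2)        # 21: factor for the state-vector curve
```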
### Comparison with the state-of-the-art GBS algorithm
In this section we focus our attention on the case where all modes are detected. This provides us with a useful reference point for our algorithms, since classical GBS algorithms are well studied [3, 26, 27, 28].
When using state vectors, the state-of-the-art classical GBS algorithm [28] obtains the probability of a single detection pattern \(\mathbf{n}=[n_{1},n_{2},...,n_{M}]\) with a complexity that is upper bounded by \(N^{3}2^{N/2}\) (where \(N=\sum_{i}n_{i}\)) and lower bounded by \(N^{3}\prod_{i=1}^{M}\sqrt{n_{i}+1}\). This algorithm is primarily used to _generate samples_ from a GBS circuit, i.e. to draw a pattern \(\mathbf{n}\) from its measurement probability distribution. A popular method for this is 'chain rule sampling', where the photon number in each mode is sampled sequentially, conditioned on the photon numbers in the previous modes. This method only requires the calculation of the conditional probability distributions of the modes instead of the total joint probability distribution.
Instead of _sampling_ from a GBS circuit, here we obtain its _joint probability distribution_ by calculating the probabilities of all detection outcomes up to a certain photon number cutoff. This is useful to study quantum algorithms based on GBS [4, 29]. Naively, one could apply the algorithm of Reference [28] to all detection patterns up to a certain photon number cutoff. Assuming all probabilities can be obtained at the lower bound of the complexity, we get:
\[\sum_{\mathbf{n}=\mathbf{0}}^{\textit{cutoffs}}N^{3}\prod_{i=1}^{M}\sqrt{n_{i}+1}. \tag{20}\]
In Appendix C it is shown that this is higher than the complexity of our algorithm, which is \(M^{2}\prod_{i=1}^{M}C_{i}\). Note that to obtain \(p(\mathbf{n})\) using Algorithm 1, we need to substitute \(C_{i}\) by \(n_{i}+1\).
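The comparison can be made concrete with a small numerical check that evaluates the summed lower bound of Eq. (20) against \(M^{2}\prod_{i}C_{i}\); the cutoff values below are arbitrary examples.

```python
from itertools import product
from math import prod, sqrt

cutoffs = [4, 4, 4, 4]
M = len(cutoffs)

naive = 0.0                                  # Eq. (20): per-pattern lower bound, summed
for n in product(*(range(c) for c in cutoffs)):
    N = sum(n)
    naive += N ** 3 * prod(sqrt(ni + 1) for ni in n)

ours = M ** 2 * prod(cutoffs)                # scaling of Algorithm 1
print(naive, ours)                           # the summed bound exceeds M^2 * prod(C_i)
```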
Reference [28] however also provides a way to obtain all probabilities \(p\big{(}[n^{\prime},n_{2},...,n_{M}]\big{)}\) (where \(n^{\prime}\in[0,1,...,C{-}1]\) and all other \(n_{i}\) are fixed) at once, with the same complexity as obtaining only \(p([C{-}1,n_{2},...,n_{M}])\). Nonetheless, Appendix C shows that fixing \(n_{1}\) to \(C_{1}-1\) in Eq. (20) still results in a complexity that is higher than \(M^{2}\prod_{i=1}^{M}C_{i}\). Currently, the algorithm in Reference [28] is not extended to include more than one 'batched' mode, and hence our algorithm is faster at obtaining the total joint probability distribution of a GBS circuit. However, if an extension to multiple batched modes were to be made, it might improve on our algorithm when using state vectors. This forms an interesting open research question.
In the case of density matrix simulations, the complexity of our algorithm (\(M^{2}\prod_{i=1}^{M}C_{i}\)) remains unaltered, while Reference [28] presents a complexity of \(N^{3}\prod_{i=1}^{M}(n_{i}+1)\). Note that, although this last expression is quadratically higher than the _lower bound_ of their algorithm for state vectors, it denotes the _actual_ complexity to calculate a single probability \(p(\mathbf{n})\). It follows that for \(N^{3}>M^{2}\) (e.g. when \(C_{i}>1\), \(\forall\,i\in[1,2,...,M]\)), our algorithm scales better, while it also produces the probabilities of all detection patterns with lower photon numbers. Consequently, two regimes can be defined for density matrix simulations. If \(N^{3}>M^{2}\), a possible extension of Reference [28] to multiple batched modes would not improve on our algorithm. For \(N^{3}<M^{2}\) this question remains open for further study.
## 5 Conclusions
We have presented an exact procedure to obtain the detection probabilities and conditional states of noisy linear optical quantum circuits with PNR detectors. For a circuit with \(M\) modes, we propose an algorithm for which both the memory requirements and the runtime have a complexity of \(\mathcal{O}(M^{2}\prod_{i=1}^{M}C_{i})\), where \(C_{i}\) is the photon number cutoff of mode \(i\). This constitutes a quadratic improvement over previous approaches.
The reduction in complexity applies to the measured modes, even when the goal is to compute marginal states. Moreover, our methods can easily be adapted to obtain the gradients of the detection probabilities and conditional states with respect to a circuit parametrization.
These methods are included in the open-source library MrMustard [1]. They are written in pure Python using Numpy and are sped up using the just-in-time compiling capabilities of Numba. This paves the way to making realistic simulations and optimizations of circuits with PNR detectors. We expect our methods to accelerate the research on both GBS based algorithms and conditional state generation, with a particular emphasis on GKP state generation.
## Acknowledgements
Special thanks to Rachel S. Chadwick, Sebastian Duque Mesa, Peter Bienstman and Guy Van der Sande for the valuable discussions. The work of Anuj Apte is supported by Yoichiro Nambu Graduate Fellowship courtesy of Department of Physics, University of Chicago.
|
2301.01942 | Compact and scalable polarimetric self-coherent receiver using
dielectric metasurface | The polarimetric self-coherent system using a direct-detection-based
Stokes-vector receiver (SVR) is a promising technology to meet both the cost
and capacity requirements of the short-reach optical interconnects. However,
conventional SVRs require a number of optical components to detect the state of
polarization at high speed, resulting in substantially more complicated
receiver configurations compared with the current
intensity-modulation-direct-detection (IMDD) counterparts. Here, we demonstrate
a simple and compact polarimetric self-coherent receiver based on a thin
dielectric metasurface and a photodetector array (PDA). With a single
1.05-$\mu$m-thick metasurface device fabricated on a compact silicon-on-quartz
chip, we implement functionalities of all the necessary passive components: a
1$\times$3 splitter, three polarization beam splitters with different
polarization bases, and six focusing lenses. Combined with a high-speed PDA, we
demonstrate self-coherent transmission of 20-GBd 16-ary quadrature amplitude
modulation (16QAM) and 50-GBd quadrature phase-shift keying (QPSK) signals over
a 25-km single-mode fiber. Owing to the surface-normal configuration, it can
easily be scaled to receive spatially multiplexed channels from a multicore
fiber or a fiber bundle, enabling compact and low-cost receiver modules for the
future highly parallelized self-coherent systems. | Go Soma, Yoshiro Nomoto, Toshimasa Umezawa, Yuki Yoshida, Yoshiaki Nakano, Takuo Tanemura | 2023-01-05T07:31:09Z | http://arxiv.org/abs/2301.01942v1 | # Compact and scalable polarimetric self-coherent receiver using dielectric metasurface
###### Abstract
The polarimetric self-coherent system using a direct-detection-based Stokes-vector receiver (SVR) is a promising technology to meet both the cost and capacity requirements of the short-reach optical interconnects. However, conventional SVRs require a number of optical components to detect the state of polarization at high speed, resulting in substantially more complicated receiver configurations compared with the current intensity-modulation-direct-detection (IMDD) counterparts. Here, we demonstrate a simple and compact polarimetric self-coherent receiver based on a thin dielectric metasurface and a photodetector array (PDA). With a single 1.05-\(\upmu\)m-thick metasurface device fabricated on a compact silicon-on-quartz chip, we implement functionalities of all the necessary passive components: a 1\(\times\)3 splitter, three polarization beam splitters with different polarization bases, and six focusing lenses. Combined with a high-speed PDA, we demonstrate self-coherent transmission of 20-GBd 16-ary quadrature amplitude modulation (16QAM) and 50-GBd quadrature phase-shift keying (QPSK) signals over a 25-km single-mode fiber. Owing to the surface-normal configuration, it can easily be scaled to receive spatially multiplexed channels from a multicore fiber or a fiber bundle, enabling compact and low-cost receiver modules for the future highly parallelized self-coherent systems.
## 1 Introduction
Rapid spread of cloud computing, high-vision video streaming, and 5G mobile services has led to a steady increase in information traffic in the datacenter interconnects and access networks [1]. While intensity-modulation direct-detection (IMDD) formats such as 4-level pulse amplitude modulation (PAM4) are employed in the current short-reach optical links, scaling these IMDD-based transceivers beyond Tb/s is challenging due to the limited spectral efficiency and severe signal distortion caused by the chromatic dispersion of fibers. On the other hand, the digital coherent systems used in metro and long-haul networks can easily expand the capacity by utilizing the full four-dimensional signal space of light and complete compensation of linear impairments through digital signal processing (DSP). However, substantially higher cost, complexity, and power consumption of coherent transceivers have hindered their deployment in short-reach optical interconnects and access networks.
To address these issues, the self-coherent transmission scheme has emerged as a promising approach that bridges the gap between the conventional IMDD and coherent systems [2-7]. In this scheme, a continuous-wave (CW) tone is transmitted together with a high-capacity coherent signal, which are mixed at a direct-detection-based receiver to recover the complex optical field of the signal. Unlike the full coherent systems, this scheme eliminates the need for a local oscillator (LO) laser at the receiver side as well as the stringent requirement of using wavelength-tuned narrow-linewidth laser sources, suggesting that substantially low-cost broad-linewidth uncooled lasers can be used [4]. In addition, since the impacts of laser phase noise and frequency offsets are mitigated, the computational cost of DSP can be reduced significantly [7, 8]. The self-coherent systems thus enable low-cost, low-power-consumption, yet high-capacity data transmission, required in the future datacenter interconnects and access networks.
Among several variations of implementing self-coherent systems, the polarimetric scheme using a Stokes-vector receiver (SVR) [9-11] has an advantage in terms of simplicity. In this scheme, the coherent signal is transmitted on a single polarization state, together with a CW tone on the orthogonal polarization state. By retrieving the Stokes parameters \(\mathbf{S}=[S_{1},S_{2},S_{3}]\) at the receiver side, the in-phase-and-quadrature (IQ) signal is demodulated through the DSP after compensating for the effects of polarization rotation, chromatic dispersion, and other signal distortions. To date, a number of high-speed polarimetric self-coherent transmission experiments have been reported, where the SVRs were implemented using off-the-shelf discrete components [3, 10-12]. Toward practical use, integrated waveguide-based SVRs were also realized on Si [13, 14] and InP [15-18]. More recently, surface-normal SVRs were demonstrated using nanophotonic circuits [19, 20] and liquid crystal gratings [21] with external photodetectors (PDs). Compared with the conventional low-cost IMDD receivers, however, these devices still suffer from a large fiber-to-chip coupling loss and/or need for external lenses to focus light to PDs.
In this paper, we demonstrate high-speed polarimetric self-coherent signal detection using a compact surface-normal SVR, composed of a metasurface-based polarization-sorting device and a high-speed two-dimensional photodetector array (2D-PDA). A metasurface is a two-dimensional array of subwavelength structures that can locally change the intensity, phase, and polarization of input light [22]. Unlike the previous works on metasurface-based polarimeters for imaging and sensing applications [23-27], our device enables efficient coupling of a self-coherent optical signal from a single-mode fiber (SMF) and lens-less focusing to six high-speed PDs. More specifically, by superimposing three types of meta-atom arrays, it implements the functionalities of all the necessary passive components, namely a 1\(\times\)3 splitter, three polarization beam splitters (PBSs) with different polarization bases, and six lenses, inside a single ultrathin device. Combined with an InP/InGaAs-based 2D-PDA chip, we demonstrate penalty-free transmission of polarimetric self-coherent signals over a 25-km SMF in various formats such as 20-GBd 16-ary quadrature amplitude modulation (16QAM) and 50-GBd quadrature phase-shift keying (QPSK). Owing to the surface-normal configuration with the embedded focusing functionality, highly efficient lens-free coupling to the 2D-PDA is achieved. The demonstrated SVR, therefore, has a comparable complexity as a conventional low-cost IMDD receiver that fits in a compact receiver optical subassembly (ROSA). Moreover, it can readily be extended to receive spatially multiplexed channels from a multicore fiber (MCF) or a fiber bundle, which are expected in the future \(>\)Tb/s highly parallelized optical interconnects [28-31].
## 2 Device concept
The schematic of the proposed surface-normal SVR is illustrated in Fig. 1(a). The light from an SMF is incident to a thin metasurface-based polarization-sorting device, which is designed to provide the same functionality as a conventional polarimeter shown in the inset. Namely, it splits the light into three paths, resolves each of them to the orthogonal components in three different polarization bases, and focuses them to six PDs integrated on a 2D-PDA chip. Unlike previously demonstrated metasurface-based polarimeters [23-27], our proposed SVR implements the 1\(\times\)3 splitter and six metalenses as well to enable direct coupling from an SMF to a high-speed 2D-PDA. As a result, the entire device can fit inside a compact ROSA module, comparable to the current IMDD receivers. Moreover, owing to the surface-normal configuration, this scheme can easily be scaled to receive multiple spatial channels without increasing the number of components by simply replacing the input SMF to a MCF or a fiber bundle and using a larger-scale PDA [32] as shown in Fig. 1(b).
To enable three operations in parallel using a single metasurface layer, we adopt the spatial multiplexing method [33, 34]; three independently designed meta-atom arrays are superimposed as shown by MA1 (red), MA2 (blue), and MA3 (green) in Fig. 1(c). The phase profile \(\varphi(x,y)\) of MA1 is designed to focus the \(x\)-polarized component of light to PD\({}_{\rm x}\) and the \(y\)-polarized component to PD\({}_{\rm y}\) at the focal plane as shown in the inset. Similarly, MA2 and MA3 function as PBSs with embedded metalenses for the \(\pm 45^{\circ}\) polarization basis (a/b) and the right/left-handed circular (RHC/LHC) polarization basis (r/l), respectively, and focus respective components to PD\({}_{\rm a,b}\) and PD\({}_{\rm r,l}\). The Stokes vector \(\mathbf{S}\equiv(S_{1},S_{2},S_{3})^{T}\) can then be derived by taking the difference of the photocurrent signals as \(S_{1}~{}=~{}I_{\rm x}~{}-~{}I_{\rm y}\), \(S_{2}~{}=~{}I_{\rm a}-I_{\rm b}\), and \(S_{3}~{}=~{}I_{\rm r}-I_{\rm l}\), where \(I_{\rm p}\) is the photocurrent at PD\({}_{\rm p}\). We should note that this scheme with three balanced PDs without polarizers offers the maximum receiver sensitivity among various SVR configurations [35] and is advantageous compared with the previous demonstrations that employ a non-optimal polarization basis [19-21].
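For reference, the Stokes-vector retrieval from the six photocurrents is trivial to express in code; the optional normalization to a unit vector (useful for plotting on the Poincaré sphere) is an added convenience, not part of the recipe above.

```python
import numpy as np

def stokes_from_currents(Ix, Iy, Ia, Ib, Ir, Il, unit=False):
    """S1 = Ix - Iy, S2 = Ia - Ib, S3 = Ir - Il (Section 2)."""
    S = np.array([Ix - Iy, Ia - Ib, Ir - Il], dtype=float)
    if unit and np.linalg.norm(S) > 0:
        S = S / np.linalg.norm(S)
    return S

# x-polarized input: only PD_x sees the full power of its branch
print(stokes_from_currents(1.0, 0.0, 0.5, 0.5, 0.5, 0.5, unit=True))  # -> [1. 0. 0.]
```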
Figure 1: Surface-normal SVR based on superimposed meta-atom arrays. (a) Schematic illustration of the receiver module. A single metasurface device implements all the necessary passive optical components of the equivalent circuit as shown in the inset. MS: metasurface. PDA: photodetector array. IC: integrated circuit. HWP: half-wave plate. QWP: quarter-wave plate. PBS: polarization beam splitter. (b) Scalable configuration to receive multiple input channels from a MCF or fiber bundle. (c) Functionality and configuration of the designed metasurface. The incident light from the SMF is split into three paths and focused to six PDs located at different positions according to the input state of polarization. The superimposed meta-atom arrays (MA1, 2, and 3) operate as PBSs and metalenses for \(x/y\) linear, \(\pm 45^{\circ}\) linear, and RHC/LHC polarization bases, respectively.
## 3 Metasurface design and fabrication
As the dielectric metasurface, we employ 1050-nm-high elliptical Si nanoposts on a quartz layer. The phase of the transmitted light and its polarization dependence can be controlled by changing the lengths of two principal axes (\(D_{u},D_{v}\)) and the in-plane rotation angle \(\theta\) of each nanopost as defined in Fig. 2(a) [22]. Here, in each meta-atom array, MA1-3, we adopt the triangular lattice with a sub-wavelength lattice constant of \(\Lambda=700\sqrt{3}\) nm, so that the non-zero-order diffraction is prohibited. Then, three meta-atom arrays are superimposed by shifting their positions by \(a=700\) nm to form the overall metasurface, as shown in Fig. 1(c).
First, we set \(\theta\) to 0 and simulate the transmission characteristics of uniform nanopost array for the \(x\)- and \(y\)-polarized light at a wavelength of 1550 nm by the rigorous coupled-wave analysis (RCWA) method [36]. From the simulated results, we first derive \(t_{u}(D_{u},D_{v})\) and \(t_{v}(D_{u},D_{v})\), which denote the complex transmittance for the \(x\)- and \(y\)-polarized light as a function of \(D_{u}\) and \(D_{v}\). Then, we derive the required \((D_{u},D_{v})\) that provides a phase shift of \((\varphi_{u},\varphi_{v})\) for each polarization component. The results are plotted in Fig. 2(b) (see Section S1 of Supplement 1 for details). The amplitude of transmittance for each case is also shown in Fig. 2(c). We can confirm that by setting the dimensions of the ellipse appropriately, arbitrary phase shifts for \(x\)- and \(y\)-polarized components can be achieved with high transmittance.
By rotating the elliptical nanoposts by \(\theta\) as shown in Fig. 2(a), such birefringence can be applied to any linear polarization basis oriented at an arbitrary angle [37]. We should note that the phase shifts and amplitudes of transmission are nearly insensitive to \(\theta\)[22] and similar results as shown in Fig. 2(b) and 2(c) are obtained for all \(\theta\). This is because the light is strongly confined inside each Si nanopost, so that the optical coupling among neighboring meta-atoms has only minor influence on the transmission.
We can also provide arbitrary phase shifts to orthogonal circular-polarization states by using the geometric phase shift of meta-atoms [38]. First, we judiciously select \(D_{u}\) and \(D_{v}\) to satisfy \(\varphi_{v}=\varphi_{u}+\pi\), so that each nanopost operates as a half-wave plate. In this case, input RHC and LHC states are converted to LHC and RHC, respectively. In addition, their phases after transmission are written as \((\varphi_{r},\varphi_{l})=(\varphi_{u}+2\theta,\varphi_{u}-2\theta)\) (see Section S2 of Supplement 1 for the derivation). Therefore, \(D_{u}\) and \(D_{v}\) of each nanopost are selected to obtain desired \(\varphi_{u}\) (=\((\varphi_{r}+\varphi_{l})/2\)) while satisfying the condition \(\varphi_{v}=\varphi_{u}+\pi\). The angle \(\theta\) is also determined to be \((\varphi_{r}-\varphi_{l})/4\).
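The mapping from a target pair of circular-basis phase shifts to the nanopost parameters therefore reduces to a few lines; looking up \((D_{u},D_{v})\) from \((\varphi_{u},\varphi_{v})\) would use the RCWA-derived maps of Fig. 2(b), which are not reproduced here.

```python
import numpy as np

def nanopost_parameters(phi_r, phi_l):
    """Half-wave-plate nanopost imposing phases (phi_r, phi_l) on RHC/LHC light."""
    phi_u = 0.5 * (phi_r + phi_l)     # phi_u = (phi_r + phi_l) / 2
    phi_v = phi_u + np.pi             # half-wave-plate condition
    theta = 0.25 * (phi_r - phi_l)    # in-plane rotation angle
    return phi_u, phi_v, theta
```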
To realize the function of a metalens, each meta-atom array needs to impart a spatially dependent phase profile given as [39]
\[\varphi(x,y)=-\tfrac{2\pi}{\lambda}(\sqrt{(x-x_{0})^{2}+(y-y_{0})^{2}+f^{2}}-f), \tag{1}\]
where \((x_{0},y_{0})\) is the in-plane position of the focal point, \(f\) is the focal length, and \(\lambda\) is the operating wavelength. In this work, we set \(\lambda=1550\) nm, \(f=10\) mm, and the diameter of the entire metasurface area to be 2 mm, corresponding to the numerical aperture (NA) of \(\sim\)0.10. The six focal points are arranged on a regular hexagon with a spacing of 60 \(\upmu\)m, which are matched to the positions of the high-speed 2D-PDA used in our self-coherent experiments. Under these conditions, the phase profiles required for MA1, 2, and 3 are determined as shown in Fig. 2(d). Note that a rather large (2 mm) metasurface is used in this work due to the limitation in reducing the focal length \(f\) in the current optical setup. In a fully packaged module as shown in Fig. 1(a), we can readily shrink the entire area of the metasurface to a few tens of micrometers by reducing \(f\) and designing the geometrical parameters of each nanopost to satisfy the required phase profiles given by Eq. (1).
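As a sketch, the hyperbolic phase profile of Eq. (1) can be evaluated on a grid with the design values of this work; the grid sampling and the example focal-point offset below are illustrative, not the fabricated layout.

```python
import numpy as np

lam = 1.55e-6      # operating wavelength (m)
f = 10e-3          # focal length (m)

def lens_phase(x, y, x0, y0):
    """Eq. (1): phase required to focus light to (x0, y0) at distance f."""
    return -2 * np.pi / lam * (np.sqrt((x - x0) ** 2 + (y - y0) ** 2 + f ** 2) - f)

xs = np.linspace(-1e-3, 1e-3, 1001)                        # 2-mm-diameter aperture
X, Y = np.meshgrid(xs, xs)
phase = np.mod(lens_phase(X, Y, 30e-6, 0.0), 2 * np.pi)    # hypothetical focal point
```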
The designed metasurface was fabricated using a silicon-on-quartz (SOQ) substrate with a 1050-nm-thick Si layer. The nanopost patterns were defined by electron-beam lithography with ZEP520A resist. Then, the patterns were transferred to the Si layer by inductively-coupled-plasma reactive-ion etching (ICP-RIE) using SF\({}_{6}\), C\({}_{4}\)F\({}_{8}\), and O\({}_{2}\). An optical microscope image and scanning electron microscopy (SEM) images of the fabricated metasurface are shown in Fig. 2(e)-(g).
## 4 Static characterization of the fabricated metasurface
We first characterized the fabricated metasurface by observing the intensity distribution at the focal plane for various input states of polarization (SOPs). The experimental setup is shown in Fig. 3(a). A CW light with a wavelength of 1550 nm was incident on the metasurface. The SOP was modified by rotating a half-wave plate (HWP) and a quarter-wave plate (QWP). The image at the focal plane was magnified 50 times by a 4-f lens system and captured by an InGaAs camera. From the detected intensity values at the six focal positions, the Stokes vector was retrieved as described in Section 2. To enable quantitative measurement of the focused power, we inserted a flip mirror and detected the total power by a bucket PD after spatially filtering the focused beam at each target position using an iris.
Figure 3(b) shows the observed intensity distributions when the input Stokes vector is set to (\(\pm 1\), 0, 0), (0, \(\pm 1\), 0), and (0, 0, \(\pm 1\)). We can confirm that the incident light is focused to the six well-defined points by transmitting through the metasurface. Moreover, its intensity distribution changes with the SOP; \(x/y\) linear, \(\pm 45^{\circ}\) linear, and RHC/LHC components of light are focused to the designed positions as expected. Figure 3(c) shows the retrieved Stokes vectors on the Poincaré sphere. The average error \(\langle|\Delta\mathbf{S}|\rangle\) is as small as 0.028. Figure 3(d) shows the measured focusing efficiencies to the six positions. Subtracting the 4.8-dB intrinsic loss due to the 1\(\times\)3 splitter [see Fig. 1(a)], the excess loss is around 6.1 dB, whereas the crosstalk to the orthogonal PD position is suppressed by 13-20 dB. While this excess loss is already comparable to the coupling and propagation losses of the previously reported waveguide-based SVRs [13, 14, 15, 16, 17, 18], we expect further improvement by applying anti-reflection coating at the silica surface, improving the fabrication processes to minimize the errors, and by adopting advanced algorithms in designing the metasurface that take into account the nonzero interactions between adjacent meta-atoms [40, 41].

Figure 2: Metasurface design and fabrication. (a) Schematic of a periodic array of Si nano-posts placed at the vertices of a triangular lattice with a lattice constant \(a\) of 700 nm. The transmission of \(x\)- and \(y\)-polarized light is simulated for various axes lengths (\(D_{u}\), \(D_{v}\)) of the elliptical posts. (b) Required (\(D_{u}\), \(D_{v}\)) to obtain phase shifts (\(\varphi_{u}\), \(\varphi_{v}\)) for \(x\)- and \(y\)-polarized light. For ease of fabrication, the ranges of \(D_{u}\) and \(D_{v}\) are limited from 100 nm to 650 nm. (c) The amplitude of transmittance for each case in (b) as a function of (\(\varphi_{u}\), \(\varphi_{v}\)). (d) Required phase profiles for MA1, 2, and 3. (e) Optical microscope image and (f, g) SEM images of the fabricated device. In (g), the image is false-colored to distinguish MA1, 2, and 3.
## 5 Self-coherent signal transmission experiment
We then performed the polarimetric self-coherent signal transmission experiment using the fabricated metasurface. The experimental setup is shown in Fig. 4. We employed a 19-pixel 2D-PDA with InP/InGaAs-based p-i-n structure [42], from which six PDs were used as shown in Fig. 4(b). Each PD had a diameter of 30 \(\upmu\)m, the measured bandwidth above 10 GHz, and the responsivity of 0.3 A/W. The 2D-PDA chip was packaged with the radio-frequency (RF) coaxial connectors connected to each PD. The 2D-PDA was placed at the focal distance of 10 mm from the metasurface as shown in Fig. 4(c). This distance was merely limited by the current setup and should be reduced to a sub-millimeter scale in a practical fully packaged module, which can be comparable to current IMDD receiver modules.
A CW light at a wavelength of 1550 nm was generated from a tunable laser source (TLS) and split into two ports, which served as the signal and the pilot tone ports. At the signal port, a LiNbO\({}_{3}\) IQ modulator was used to generate a high-speed coherent optical signal. The Nyquist filter was applied to the driving electrical signals from an arbitrary waveform generator (AWG). The modulated optical signal was then combined with the pilot tone by a polarization beam combiner (PBC). The optical power of the pilot tone was adjusted by a variable optical attenuator (VOA), so that their powers were nearly balanced. The self-coherent signal was then transmitted over a 25-km SMF. At the receiver side, the optical signal-to-noise ratio (OSNR) was controlled using another VOA, followed by an erbium-doped fiber amplifier (EDFA) and an optical bandpass filter (OBPF). The electrical signals from the six PDs of the PDA were amplified by differential RF amplifiers and then captured by a real-time oscilloscope (OSC). At a baudrate beyond 20 GBd, we could not use the balanced PD (B-PD) configuration due to the residual skew inside the PDA module. In these cases, we employed four single-ended PDs (S-PDs), where the electrical signals from PD\({}_{\text{x}}\), PD\({}_{\text{y}}\), PD\({}_{\text{a}}\), and PD\({}_{\text{i}}\) were independently captured by a four-channel real-time oscilloscope, so that the skew could be calibrated during DSP. By comparing the results using two configurations, the use of four S-PDs was validated (see Section S3 of Supplement 1 for details). To equalize and reconstruct the original IQ signal, we employed offline DSP with the 2\(\times\)3 and 2\(\times\)4 real-valued multi-input-multi-output (MIMO) equalizers [43, 44] for three-B-PD and four-S-PD configurations, respectively.

Figure 3: Experimental characterization of the fabricated metasurface. (a) Schematic of the optical setup. The flip mirror is used to switch between capturing the intensity distribution at the focal plane and measuring the power of each focused beam. TLS: tunable laser source. PC: polarization controller. VOA: variable optical attenuator. FC: fiber collimator. Pol.: polarizer. HWP: half-wave plate. QWP: quarter-wave plate. MS: metasurface. M: flip mirror. PD: photodetector. (b) Measured intensity distributions at the focal plane for different input SOPs. (c) Retrieved and input Stokes vectors on the Poincaré sphere. (d) Measured focusing efficiency to each PD. The intrinsic loss due to splitting into three paths is shown by a green line. The input polarization is labeled on the top of each bar.
Figures 5(a)-(c) show the BER curves and the constellations for 15-GBd 16QAM signals, measured using the three-B-PD configuration. We can confirm that BERs well below the hard-decision forward error correction (HD-FEC) threshold are obtained with a negligible penalty even after 25-km transmission. Figures 5(d)-(f) show the results for 20-GBd 16QAM and 50-GBd QPSK signals, measured by the four-S-PD configuration. Once again, BERs below the HD-FEC threshold are obtained. Finally, Fig. 6 shows the measured BER curves and constellation diagrams of 15-GBd 16QAM signal at 1540-nm and 1565-nm wavelengths, demonstrating the wideband operation of our designed metasurface. While the baudrate in this work was limited by the bandwidth of the 2D-PDA, beyond-100-GBd transmission should be possible by using higher-speed surface-normal PDs with bandwidth exceeding 50 GHz [45, 46].
Figure 4: Self-coherent transmission experiment using the fabricated metasurface and 2D-PDA. (a) Experimental setup. AWG: arbitrary waveform generator. PBC: polarization beam combiner. EDFA: erbium-doped fiber amplifier. OBPF: optical bandpass filter. Osc: oscilloscope. In the insets, three-B-PD and four-S-PD configurations are depicted. (b) Optical microscope image of the fabricated 19-pixel 2D-PDA. The six circled PDs were used in this experiment. (c) Photograph of the receiver.
## 6 Conclusion
We have proposed and demonstrated a surface-normal SVR using a dielectric metasurface and 2D-PDA for high-speed polarimetric self-coherent systems. Three independently designed meta-atom arrays based on Si nanoposts were superimposed onto a single thin metasurface layer to implement both the polarization-sorting and focusing functions simultaneously. Using a compact metasurface chip fabricated on a SOQ substrate, we demonstrated 25-km transmission of 20-GBd 16QAM and 50-GBd QPSK self-coherent signals. The operating baudrate was merely limited by the 2D-PDA, so that higher-capacity transmission should be possible by using a PDA with a broader bandwidth. Owing to the unique surface-normal configuration with the embedded lens array functionality, a compact receiver module with size and complexity comparable to the conventional IMDD receivers can be realized. Moreover, it can easily be extended to receive spatially multiplexed channels by simply replacing the SMF with a MCF and employing a larger-scale integrated PDA technology [32]. This work would, therefore, pave the way toward realizing cost-effective receivers for the future \(>\)Tb/s spatially multiplexed optical interconnects.

Figure 5: Experimental results of self-coherent signal transmission at a wavelength of 1550 nm. (a)-(c) Measured BER curves and constellation diagrams of 15-GBd 16QAM signals before (b2b) and after 25-km transmission using the three-B-PD configuration. (d)-(f) Measured BER curves and constellation diagrams of 20-GBd 16QAM and 50-GBd QPSK signals after 25-km transmission using the four-S-PD configuration.

Figure 6: Experimental results of self-coherent signal transmission at wavelengths of (a) 1540 nm and (b) 1565 nm. (a, b) Measured BER curves of 15-GBd 16QAM signals before (b2b) and after 25-km transmission using the three-B-PD configuration. The insets represent the retrieved constellation diagrams.
## Funding.
National Institute of Information and Communications Technology (NICT).
## Acknowledgments
This work was obtained from the commissioned research 03601 by National Institute of Information and Communications Technology (NICT), Japan. Portions of this work were presented at the Optical Fiber Communications Conference (OFC) in 2022, M4J.5. A part of the device fabrication was conducted at the cleanroom facilities of d.lab in the University of Tokyo, supported by MEXT Nanotechnology Platform, Japan. The authors also thank all the technical staff at Advanced ICT device laboratory in NICT for supporting the PDA device fabrication. G.S. acknowledges the financial support from Optics and Advanced Laser Science by Innovative Funds for Students (OASIS) and World-leading Innovative Graduate Study Program - Quantum Science and Technology Fellowship Program (WINGS-QSTEP).
|
2307.03203 | 'Frequency-modulated' pulsed Bell setup avoids post-selection | Excepting event-ready setups, Bell experiments require post-selection of data
to define coincidences. From the fundamental point of view, post-selection is a
true 'logical loophole'. From the practical point of view, it implies a
numerically heavy and time consuming task. In Quantum Key Distribution (QKD),
it opens vulnerability in case of a hostile adversary. The core of the problem
is to synchronize independent clocks during long observation runs. A pulsed
source gets rid of clocks' drift, but there is still the problem of identifying
the same pulse in each remote station. We use a frequency modulated pulsed
source to achieve it. This immediately defines the condition of valid
coincidences in a manner that is unaffected by the drift between the clocks. It
allows finding the set of entangled pairs avoiding post-selection and in a way
that is found to be optimal. It is also robust against a hostile adversary in
the case of QKD. | Mónica Agüero, Alejandro Hnilo, Marcelo Kovalsky, Myriam Nonaka | 2023-07-05T18:09:51Z | http://arxiv.org/abs/2307.03203v1 | # "Frequency-modulated" pulsed Bell setup avoids post-selection.
###### Abstract
Excepting event-ready setups, Bell experiments require post-selection of data to define coincidences. From the fundamental point of view, post-selection is a true "logical loophole". From the practical point of view, it implies a numerically heavy and time consuming task. In Quantum Key Distribution (QKD), it opens vulnerability in case of a hostile adversary. The core of the problem is to synchronize independent clocks during long observation runs. A pulsed source gets rid of clocks' drift, but there is still the problem of identifying the same pulse in each remote station. We use a frequency modulated pulsed source to achieve it. This immediately defines the condition of valid coincidences in a manner that is unaffected by the drift between the clocks. It allows finding the set of entangled pairs avoiding post-selection and in a way that is found to be optimal. It is also robust against a hostile adversary in the case of QKD.
Bell experiments, Data post-selection, Quantum Key Distribution security.
## 1 Introduction.
Entangled states of photons are essential in experimental tests of Quantum Mechanics (QM), in many processes involving quantum information, and in the practical application known as device-independent Quantum Key Distribution (QKD) [1]. However, filtering out the data to be included in the set of entangled pairs is not a trivial task, especially if the photons' detections are recorded in spatially distant stations. Having distant stations of observation is of interest in many cases, and is unavoidable in QKD. In these cases the observers get two lists (one for each station) of time values corresponding to photon detections, as measured by a local clock. The task is finding which time values, among all the recorded ones, correspond to coincident detections. This is known as _post-selection_. In order to perform it, the first step is to choose a time window value \(T_{w}\). Time values in the lists of stations A and B such that \(\mid\)t\({}_{\rm A}\)-t\({}_{\rm B}\)\(\mid\leq T_{w}\) belong to the set of coincident detections. But different distances, cable lengths and time response of instruments and detectors must be taken into account. The second step then is to add some _delay_ time \(d\) to one of the lists.
The way to determine the values of \(T_{w}\) and \(d\) is a combination of educated guess and iteration. One starts with a "large" value of \(T_{w}\) and counts the total number of coincidences \(N_{c}\) for different values of \(d\). One scans a "reasonable" range of \(d\) estimated from the distances, cable lengths and instruments' response times. If the histogram of \(N_{c}\) vs \(d\) has a well-defined maximum, this provides the first value of \(d\). Then one can shorten \(T_{w}\) and fine-tune \(d\) in an iterative process. Each step in the iteration means a sort of convolution of one of the lists with the other. If the original time lists are long (as it usually happens) and the final value of \(T_{w}\) is short (as is usually the aim), post-selection is a numerically heavy and time consuming task. At the end, one gets a single peak of \(N_{c}\) of width given by the shortest \(T_{w}\) chosen. The set of selected coincidences is defined by the data under this peak.
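For concreteness, a brute-force version of this delay scan might look as follows (a sketch; variable names and the nearest-neighbour matching rule are illustrative):

```python
import numpy as np

def coincidences(tA, tB, d, Tw):
    """Count Alice detections with a partner in (tB + d) within +/- Tw."""
    tBs = np.sort(tB + d)
    idx = np.searchsorted(tBs, tA)
    count = 0
    for t, i in zip(tA, idx):
        left = abs(t - tBs[i - 1]) if i > 0 else np.inf
        right = abs(t - tBs[i]) if i < len(tBs) else np.inf
        if min(left, right) <= Tw:
            count += 1
    return count

def delay_histogram(tA, tB, delays, Tw):
    """The N_c vs d histogram scanned at each iteration step."""
    return np.array([coincidences(tA, tB, d, Tw) for d in delays])
```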
Nevertheless, the iteration is not guaranteed to converge. The structure of the lists of time values can be intricate. More than one peak may appear at some point in the iterative process. Often the plots of \(N_{c}\) vs \(d\) have wide "platforms" around the main peak and display secondary peaks at large values of \(d\). In the case the iterative process does not converge to a satisfactory histogram, the usual interpretation is that something went wrong in that recording run and the data are discarded. If everything goes right instead, the data under the main peak allow calculating some parameter (say, \(S_{\rm CHSH}\)) violating Bell's inequality. This means the set of data corresponds to an entangled state. Data in the secondary peaks sometimes violate Bell's inequality too (although to a lesser extent than the ones in the main peak) and sometimes not. These features are believed to be caused by drift between the clocks in the remote stations. In one recorded case at least, the drift produced an effect important enough to allow an eavesdropper to break part of the key (if the set of data had been used for QKD) [2]. The Global Positioning System (GPS) is often used to synchronize the remote clocks.
Post-selection is also problematic from the point of view of the foundations of QM. According to the standard QM description [3, 4], the pump field is written as a superposition of plane continuous monochromatic waves, so that the output state is given by an integral over these monochromatic components (eq.1).
This supports the intuitive picture (which underlies the procedure of post-selection) that entangled photons are like "bullets" that propagate from the source to the stations. Recall that, in the general case, it is not possible to define a wave function for the photon [5].
Note that a _logical_ loophole arises: QM is assumed valid in order to derive eq.1. This equation and the involved bandwidth values validate the post-selection procedure. Data selected according to this procedure are used to test the validity of QM through the violation of Bell's inequality. But QM was assumed valid at start. J.S.Bell was aware of this loophole, that's why his early experimental proposals included an "event-ready" signal indicating that an entangled pair was emitted by the source [6], making post-selection unnecessary. Some event-ready Bell's experiments have been in fact performed [7-9]. They involve both photons and matter and the quantum phenomenon of entanglement swapping, which may lead to another logical loophole [10]. Anyway, they are very complex setups, difficult to use at the current state of technology.
In summary: from the foundational point of view, experiments free of the post-selection procedure are desirable. From the practical point of view, avoiding the sometimes ambiguous and always time consuming calculations of post-selection is also desirable. Using a pulsed source is a simple solution. Such a source is not "event ready", because it cannot certify an entangled state has been emitted. But it can certify when it has _not_ been emitted. This approach has been used to successfully close the so called time-coincidence loophole [11]. A pulsed source also gets rid of clocks' drift through "logical synchronization" [12]. However, the problem of identifying the same pulse in both stations still remains. In this paper, we introduce the method of modulating the frequency of the pulsed source to fully circumvent post-selection. This method also dispenses with the GPS.
## 2 Setup.
The setup is sketched in Figure 1. Biphotons at 810 nm in the Bell state \(|\varphi^{\prime}\rangle\) are produced in the standard configuration using two crossed BBO-I crystals and walk-off compensating crystals, pumped by a diode laser at 405 nm. This laser emits square pulses of 1 \(\upmu\)s duration at a repetition rate of 500 KHz (50% duty cycle). These numbers are chosen so that the probability \(p\) of detecting one photon during one pulse is \(p\)\(<\)\(<\)1. This condition is necessary to limit the number of accidental coincidences in the pulsed regime [13]. The entangled photons are inserted into single-mode optical fibers. Polarization is observed with fiber optic analyzers. Their relatively poor contrast (1:100) limits the value of \(\mathrm{S_{CHSH}}\) that can be achieved to 2.77. Silicon avalanche photodiodes detect single photons; time values of photon detection and trigger signal are stored in time-to-digital converters (TDCs). They have 10 ps nominal time resolution, but accuracy is reduced to \(\approx\)2 ns because of detectors' jitter. After pumping the crystals, the laser beam is sent to a beam-splitter and illuminates two fast photodiodes. The resulting electrical signals are sent through 38m of coaxial cable to each station. Pulse shape distortion in this cable length is checked to be negligible. Two photodiodes are used (instead of just one) to avoid spurious echoes in the long cables. Three input channels are hence used in each TDC: two for the "1" and "0" outputs of the polarizer, and one for the trigger signal indicating the start of each pump pulse.
The optical fibers (21m long) and coaxial cables are currently coiled; they are placed in preparation for moving the stations to distant positions. Note there is no link to synchronize the clocks other than the trigger signals sent through the coaxial cables. A "Mother" computer controls the function generator that pulses the laser and the servo motors that adjust the angle settings. She also instructs remotely, through a TCP/IP communication via a local network, the "sons" computers in each station to open, name and close the data files recorded in each experimental run.

Figure 1: Sketch of the setup. FG: programmable function generator, L1,L2: focusing lenses, HWP1 and QWP: half and quarter waveplates at 405nm, BBO2: crossed BBO-I crystals, PD: fast photo-diodes, they record samples of the pumping pulse and send trigger signals to the TDCs through coaxial cables (38m long each), BBO: walk-off compensating crystals, HWP2: half-waveplates at 810nm to set the observation angles, MR: Rotating servo motors, F: filters at 810nm, \(\Delta\lambda\)=10nm, PAF: fiberports f = 7.5mm, SMF: single-mode fiber coils, FPC: birefringence compensators ("bat ears"), FP: fiber polarization analyzers, SPCM: single photon detectors, TDC: time-to-digital converters. Stations are separated by 42m through the optical fibers and 2.64m in straight line. The setup is designed to allow the stations to be moved to remote places.
## 3 Tested method and results.
As said, synchronization between the clocks at the remote stations is a main issue. The pulsed regime eliminates the problem of the clocks' drift, for synchronization is refreshed with each pulse slope, and the drift is negligible during the typical pulse's duration. But, although Mother orders both sons to start recording data simultaneously, fluctuating differences between the time values each son actually starts to record are observed to be as large as 10 ms, which means a difference of \(\approx\)5\(\times\)10\({}^{4}\) pumping pulses. In these conditions, the only way to identify the same pulse (among the 5\(\times\)10\({}^{4}\) ones) in both stations is by counting coincidences, that is, to perform the unwanted post-selection procedure. We find a simple alternative by modulating the frequency of the pump pulses, as explained next.
Before a recording run starts, the pulse frequency is set to 490 kHz (i.e., slightly different from the value used during actual recording). When everything is ready to start the run, Mother orders the generator to switch to 500 kHz. This produces a sharp step in the pulse separation without affecting the pulse shape. That step is easily identified in the trigger files of each TDC (see Figure 2). This determines the "first pulse in the run" in both stations and allows numbering the following pulses. Once the run (10 to 30 s in real time) has ended, Mother switches the pulsing frequency back to 490 kHz and orders the sons to save the recorded files and to prepare for the next run.
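The step can be located directly in the recorded trigger time tags. The following is a minimal sketch of how the "first pulse in the run" could be identified in each station from the jump in the inter-trigger interval; it is not the authors' code, and the variable names and the picosecond time unit are assumptions.

```python
# Locate the 490 kHz -> 500 kHz frequency step in a station's trigger file.
import numpy as np

def find_first_pulse(trigger_times_ps, pre_period_ps=2_041_000, run_period_ps=2_000_000):
    """Return the index of the first trigger recorded at the run frequency (500 kHz)."""
    intervals = np.diff(trigger_times_ps)            # inter-trigger separations
    threshold = (pre_period_ps + run_period_ps) / 2  # halfway between ~2.041 us and 2.0 us
    first_run_interval = np.argmax(intervals < threshold)
    return first_run_interval + 1                    # trigger that starts the run

# Synthetic example: 1000 pre-run triggers followed by the run itself.
tags = np.cumsum([2_041_000] * 1000 + [2_000_000] * 5000)
print(find_first_pulse(tags))  # -> 1000 (the first trigger separated by the 2.0 us run period)
```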
In the run in Fig. 2, the time difference between the moments Alice and Bob start measuring is found from the difference between the locally measured pulse numbers where the step occurs (478113 - 473343) multiplied by the period (2 \(\upmu\)s), that is, 9.54 ms. This time is different in each run, but it is easily found in this way. As \(p\ll 1\), if a detection occurs in pulses with the same number in each station, then a coincidence is immediately found. This criterion is unambiguous, fast and reliable. Further filtering could be done by using \(T_{w}\) shorter than the pulse duration, but it is found unnecessary (see below).
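A minimal sketch of this criterion is given below; again it is an assumption-laden illustration rather than the authors' code. Each detection is tagged with the number of the pump pulse it falls in, counted from the locally identified first pulse of the run, and coincidences are simply detections sharing the same relative pulse number.

```python
# Coincidence identification by pulse number, with no time-window post-selection.
import numpy as np

def pulse_numbers(detection_times_ps, trigger_times_ps, first_pulse_idx):
    """Assign to each detection the relative number of the pump pulse it belongs to."""
    idx = np.searchsorted(trigger_times_ps, detection_times_ps, side="right") - 1
    return idx - first_pulse_idx       # the step pulse becomes pulse 0 in both stations

def coincidences_by_pulse_number(alice_pulse_nums, bob_pulse_nums):
    """With p << 1 there is at most ~one detection per pulse per station, so the
    detections falling in pulses with equal relative numbers are the coincidences."""
    return np.intersect1d(alice_pulse_nums, bob_pulse_nums)  # its size is N_c
```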
In our case, a single switch event at start suffices to number all pulses in the run. In runs longer than 30s, the TDCs may fail to record one or more trigger signals during the run. In this case, the function generator can be easily programmed to modulate the pulsing frequency (say, by introducing a slow chirp). This allows identifying the missing trigger signals and restoring pulse numbering.
As an illustration, a typical histogram obtained during post-selection is displayed in Figure 3a. The relatively broad main peak and the secondary peaks are caused by the drift between the independent clocks. There are no ambiguities in this case; even so, obtaining this histogram requires a heavy numerical task. In Figure 3b, the result of carrying post-selection further down to \(T_{w}\)= 2 ns is shown. Note the scarce statistics and the difficulty in defining the main peak. The value of S\({}_{\text{CHSH}}\) calculated with \(T_{w}\)= 2 ns is hence meaningless. With these data, S\({}_{\text{CHSH}}\) must be calculated with \(T_{w}\)\(\geq\) 100 ns.
In the method we test in this paper, drawing histograms like the ones in Fig. 3 is unnecessary. In Figure 4, instead, we show the histogram of the time distances between the detections (summed up during the complete run) observed in Alice and Bob _in the pulses with the same numbering_. As can be seen, filtering with \(T_{w}\) shorter than the pulse duration is redundant.
The original (experimentally recorded) raw data files are the same as in Figs. 3 and 4. They correspond to setting \(\alpha\)=0, \(\beta\)=0, and "1" outputs in both stations in Fig. 1; recording time \(\approx\)10 s, number of pulses 4,511,169; \(N^{\alpha}_{\text{singles}}\) = 115,861; \(N^{\beta}_{\text{singles}}\) = 108,874
Figure 3: Histograms of \(N_{c}\) vs \(d\) obtained by post-selection, (a) \(T_{w}\) = 100 ns, (b) \(T_{w}\) = 2 ns.
Figure 2: Illustration of the method to identify the same pulse in each station. The period of the train of pulses is shown as observed in the Alice (Bob) station in red (blue). As the step is produced by the same event, numbering the following pulses is immediate. No post-selection is necessary.
(\(\Rightarrow p\approx 0.024\ll\)1 as required). For this file \(N_{c}=\) 2,614 by using post-selection with \(T_{w}\)=100 ns (Fig. 3a) and \(N_{c}\) = 8,794 by using our method with \(T_{w}=\) 2 ns (Fig. 4). Note the huge difference from Fig. 3b in the number of coincidences (and hence, in the efficiency).
Similar files recorded with additional settings allow calculating the level of entanglement. Always using the same raw experimental files, the coincidences obtained by post-selection lead to S\({}_{\text{CHSH}}=2.14\pm 0.11\) (\(T_{w}=100\) ns). The ones obtained by our method lead to S\({}_{\text{CHSH}}=2.78\)\(\pm\) 0.05 (\(T_{w}=2\) ns), reaching the limit that is possible with our fiber polarization analyzers. Therefore, in addition to the faster and unambiguous processing of data, the entanglement of the selected set is higher.
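For completeness, a minimal sketch of how S\({}_{\text{CHSH}}\) could be computed from the coincidence counts is given below; the setting labels and the data layout are assumptions, and the counts themselves come from either selection method.

```python
# CHSH value from coincidence counts N_c at the four analyzer-setting pairs.
def correlation(counts):
    """counts maps the pair of polarizer outputs ('1'/'0' at Alice, Bob) to N_c."""
    n_pp, n_pm = counts[("1", "1")], counts[("1", "0")]
    n_mp, n_mm = counts[("0", "1")], counts[("0", "0")]
    return (n_pp + n_mm - n_pm - n_mp) / (n_pp + n_pm + n_mp + n_mm)

def s_chsh(counts_ab, counts_abp, counts_apb, counts_apbp):
    """Standard combination S = E(a,b) - E(a,b') + E(a',b) + E(a',b')."""
    return (correlation(counts_ab) - correlation(counts_abp)
            + correlation(counts_apb) + correlation(counts_apbp))
```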
## 4 Summary and conclusions.
This paper essentially deals with the problem of synchronization between remote clocks in a Bell setup. This is of interest for experiments on QM foundations and for QKD. A customary solution is using the signals provided by the GPS. But, in the case of QKD, it has the drawback that the GPS can be jammed or destroyed by a hostile adversary, making communication unreliable or impossible.
The tested method dispenses with the existence of the GPS. Its key features are the pulsed regime and the modulation of the repetition rate. The first feature gets rid of the clocks' drift; the second one allows pulse numbering even if one or more pulses fail to be detected by the TDCs. Pulse numbering is found to suffice to determine coincidences; further filtering is redundant.
In conclusion, the tested method allows finding the set of entangled pairs between remote stations in a fast, unambiguous and efficient way. Besides, it is free of the logical loophole of post-selection. We believe the tested method to be of wide interest in both pure and applied experimental research in Quantum Information.
## Acknowledgments.
This work received support from the grants N62909-18-1-2021 Office of Naval Research Global (USA), and PUE 229-2018-0100018CO CONICET (Argentina).
|
2305.15218 | Multi-modal Machine Learning for Vehicle Rating Predictions Using Image,
Text, and Parametric Data | Accurate vehicle rating prediction can facilitate designing and configuring
good vehicles. This prediction allows vehicle designers and manufacturers to
optimize and improve their designs in a timely manner, enhance their product
performance, and effectively attract consumers. However, most of the existing
data-driven methods rely on data from a single mode, e.g., text, image, or
parametric data, which results in a limited and incomplete exploration of the
available information. These methods lack comprehensive analyses and
exploration of data from multiple modes, which probably leads to inaccurate
conclusions and hinders progress in this field. To overcome this limitation, we
propose a multi-modal learning model for more comprehensive and accurate
vehicle rating predictions. Specifically, the model simultaneously learns
features from the parametric specifications, text descriptions, and images of
vehicles to predict five vehicle rating scores, including the total score,
critics score, performance score, safety score, and interior score. We compare
the multi-modal learning model to the corresponding unimodal models and find
that the multi-modal model's explanatory power is 4% - 12% higher than that of
the unimodal models. On this basis, we conduct sensitivity analyses using SHAP
to interpret our model and provide design and optimization directions to
designers and manufacturers. Our study underscores the importance of the
data-driven multi-modal learning approach for vehicle design, evaluation, and
optimization. We have made the code publicly available at
http://decode.mit.edu/projects/vehicleratings/. | Hanqi Su, Binyang Song, Faez Ahmed | 2023-05-24T14:58:49Z | http://arxiv.org/abs/2305.15218v2 | # Multi-modal Machine Learning for Vehicle Rating Predictions Using Image, Text, and Parametric Data
###### Abstract
Accurate vehicle rating prediction can facilitate designing and configuring good vehicles. This prediction allows vehicle designers and manufacturers to optimize and improve their designs in a timely manner, enhance their product performance, and effectively attract consumers. However, most of the existing data-driven methods rely on data from a single mode, e.g., text, image, or parametric data, which results in a limited and incomplete exploration of the available information. These methods lack comprehensive analyses and exploration of data from multiple modes, which probably leads to inaccurate conclusions and hinders progress in this field. To overcome this limitation, we propose a multi-modal learning model for more comprehensive and accurate vehicle rating predictions. Specifically, the model simultaneously learns features from the parametric specifications, text descriptions, and images of vehicles to predict five vehicle rating scores, including the total score, critics score, performance score, safety score, and interior score. We compare the multi-modal learning model to the corresponding unimodal models and find that the multi-modal model's explanatory power is 4% - 12% higher than that of the unimodal models. On this basis, we conduct sensitivity analyses using SHAP to interpret our model and provide design and optimization directions to designers and manufacturers. Our study underscores the importance of the data-driven multi-modal learning approach for vehicle design, evaluation, and optimization. We have made the code publicly available at [http://decode.mit.edu/projects/vehicleratings/](http://decode.mit.edu/projects/vehicleratings/).
Keywords: Multi-modal Learning, Machine Learning, Vehicle Rating Prediction, Model Interpretability, Sensitivity Analysis
## 1 Introduction
From the earliest years of their invention, vehicles have stood as a major contributing factor to both everyday consumer life and global economic development. Since the availability of the internet, most consumers research vehicle evaluation scores online and see them as important references for their vehicle purchasing decisions [1]. Vehicle evaluation is likewise at the heart of vehicle design, optimization, and improvement. Effective and efficient vehicle evaluation is essential for designers and manufacturers to enhance the appeal of their new models. Extant research has shown promise in exploiting machine learning (ML) and artificial intelligence for vehicle price prediction [2, 3, 4], vehicle sales prediction [5], vehicle purchase criteria [6], vehicle evaluation [7], and insurance services[8]. When evaluating a vehicle, consumers typically analyze multiple data types, such as images, 3D models, parametric specifications, and text reviews.
Consider a typical vehicle purchasing journey. Initially, a potential vehicle buyer determines the need to purchase a vehicle, which leads them to explore various automotive websites, such as US News, to evaluate numerous vehicle options. In order to make a well-informed choice, they might scrutinize the vehicle's exterior and interior images to assess its design and features. They might even engage with 3D models, when accessible, for a more detailed understanding of the vehicle's attributes.
Additionally, the buyer might review parametric data to measure the vehicle's specifications against others in its category, focusing on elements such as engine capacity, fuel efficiency, safety features, and cost. Reading reviews and written summaries about the vehicle's performance, reliability, and user experience also aids them in ensuring it suits their requirements. Renowned entities, like US News, often rank or rate different vehicles, which can significantly influence the buyer's decision. Through the amalgamation of this varied information, buyers are able to make knowledgeable purchase decisions. Subsequently, they might visit a vehicle dealership to inspect and test drive their chosen vehicle. Upon assessing the vehicle's performance, comfort, and additional features, buyers determine whether it's the right fit for them. If satisfied, they return to the dealership to discuss the price and finalize the purchase. It's crucial to acknowledge that individuals tend to consider multiple data modalities when interacting with designs. However, the majority of current machine learning algorithms are focused on a single modality, typically images, which limits their perspective and hence their practicality. This single-dimensional approach inevitably results in oversimplified conclusions and findings.
This paper endeavors to bridge this gap by tackling the research question: How does multi-modal information about a vehicle influence its ratings? This question is approached utilizing a multi-modal learning method and interpretability models. The application of artificial intelligence and multi-modal deep learning to evaluate and analyze vehicles is relatively unexplored, predominantly due to the substantial requirement for labeled multi-modal data to train deep neural networks. To remedy this shortfall, we also collected a novel multi-modal dataset that includes parametric specifications, images, and textual descriptions of vehicles, all labeled with various vehicle assessment scores.
On this basis, we develop and validate a multi-modal learning model to predict the rating scores of vehicles more comprehensively and accurately. We show that multi-modal learning can exploit the features learned from different types of data and capture the interactions between them to achieve better performance than unimodal learning. Our contributions include the following:
1. We propose the development of individual unimodal ML models that independently learn from parametric specifications, images, and text descriptions of vehicles. These models aim to predict five distinct vehicle rating scores, namely the total score, critics score, performance score, safety score, and interior score.
2. We introduce a multi-modal learning model capable of concurrently learning from parametric, image, and text data to predict vehicle rating scores. Our findings indicate that this multi-modal learning model markedly outperforms the unimodal models.
3. We assess the relative informativeness of different data modes. Our analysis suggests that parametric data is the most informative for predicting all rating scores, and in most instances, text descriptions offer more predictive power than images.
4. We demonstrate that the sensitivity analyses using SHAP are capable of interpreting our models and providing more detailed design, optimization, and improvement directions to designers and companies.
The rest of this article is organized as follows: Section 2 reviews the approaches to the relevant components of the proposed model. Section 3 introduces the source and composition of the data used in this paper, the data processing module, and both the unimodal and multi-modal machine learning models. Section 4 reports and discusses the performances of the unimodal and multi-modal machine learning models, interprets the models through sensitivity analyses, and summarizes the limitations of this study. Section 5 concludes this paper by highlighting its findings and contributions.
## 2 Background
Good vehicle evaluation often requires the analysis of multi-modal data, often involving vehicle parametric specifications, text descriptions, and images. In this section, we first discuss why vehicle evaluation is important. We then review relevant methods for embedding parametric, text, and image data, and investigate prior research on ML techniques for multi-modal data.
### Why are vehicle evaluations important?
A few websites provide vehicle reviews and ratings, such as J.D. Power1, US News2, Motor Trend3, Edmunds4, and Kelley Blue Book5. Among these, US News is one of the most popular websites, appearing as the number one search result for the query "vehicle rating" on search engines such as Google, Bing, and DuckDuckGo.
Footnote 1: [https://www.jdpower.com/cars/rankings](https://www.jdpower.com/cars/rankings)
Footnote 2: [https://cars.usnews.com/cars-trucks/rankings](https://cars.usnews.com/cars-trucks/rankings)
Footnote 3: [https://www.motortrend.com/cars/](https://www.motortrend.com/cars/)
Footnote 4: [https://www.edmunds.com/new-car-ratings/](https://www.edmunds.com/new-car-ratings/)
Footnote 5: [https://www.kbb.com/cars/](https://www.kbb.com/cars/)
US News vehicle ratings are highly influential and are widely followed by consumers who are in the market for a new vehicle. When a vehicle receives high ratings, it can receive increasing consumer interest, ultimately resulting in more sales. US News vehicle ratings consider various factors such as safety, reliability, performance, and interior features. These ratings are based on objective data and evaluations from automotive experts, which can provide consumers with a valuable reference for making informed decisions when purchasing a vehicle. Consumers may use these ratings as a guide when comparing different models and brands and may be more likely to consider a vehicle that has received high ratings. Similarly, vehicle dealerships may use these ratings in their advertising and marketing efforts to attract customers to their inventory.
Vehicle manufacturers can use US News vehicle ratings to improve their new vehicle designs in several ways:
1. Identify Areas for Improvement: By looking at the rating scores for factors like safety, reliability, performance, and interior features, vehicle manufacturers can use these ratings to identify areas where their new vehicles are falling short and improve their designs. US News vehicle ratings take into account consumer needs and preferences. By using the ratings to inform their new vehicle designs, vehicle manufacturers can create vehicles that can better meet the needs and preferences of their target customers.
2. Benchmark Against Competitors: Vehicle manufacturers can use vehicle ratings to see how their new vehicle designs compare to those of their competitors. This can help them identify areas where they need to improve to remain competitive in the market.
3. Incorporate Best Practices: Vehicle manufacturers can analyze the highest-ranked vehicles in their category to discover and incorporate best practices into their new vehicle designs. This can help them improve their ratings in future years.
In summary, by using vehicle ratings to inform their new vehicle designs, vehicle manufacturers can create vehicles that better meet the needs of their customers, are more competitive in the market, and ultimately achieve higher ratings in future years.
**What are the benefits of predicting vehicle ratings using machine learning?** Predicting vehicle ratings using ML can be incredibly useful for several reasons. By analyzing a vast amount
of data, ML algorithms can identify patterns and correlations across different vehicle data, that may not be immediately apparent to humans. This can help vehicle manufacturers gain valuable insights into the features and characteristics contributing to high ratings. By predicting ratings, vehicle manufacturers can identify areas for improvement in their products and make adjustments to enhance their performance in these areas. Predicting ratings can also help vehicle manufacturers remain competitive in the market by identifying trends and preferences among consumers, allowing them to create products that better meet the needs of their target audience.
Predicting vehicle ratings can also inform marketing and advertising strategies by highlighting the features that are most important to consumers. Additionally, it can help vehicle manufacturers identify areas for improvement in their vehicle designs and assess their performance relative to their competitors. By tracking their progress over time and setting internal targets for improvement, vehicle manufacturers can use predicted vehicle ratings as a benchmark to inform their product development and competitive strategies. Ultimately, predicting vehicle ratings using ML can help vehicle manufacturers create better products, improve their marketing and advertising strategies, and gain competitive advantages in the market. Next, we discuss different modalities of data in which vehicle information is typically captured.
### Representing Engineering Data in Different Modalities
**Parametric data** Engineering product specifications are often provided in the form of tables in a structured way. Parametric data is one of the most commonly used forms of data, consisting of samples (rows) that share the same feature set (columns), which has been used in many applications [9]. Compared with image or text data, parametric data is mostly heterogeneous, consisting of continuous-valued and categorical-valued attributes. Parametric data features dense values but sparse classification. Although parametric data modeling has been explored intensively using traditional ML methods in the past decades, such as linear regression [10], the Gaussian process [11], and gradient-boosted decision trees (GBDT) [12], deep neural networks can learn parametric data in a gradient-based way and allow for the integration of parametric data with other data modalities for multi-modal learning. Typically, parametric data can be learned by simple neural networks, such as multi-layer perceptrons (MLPs). Prior studies have reported that regularization can improve the performance of MLPs in learning parametric data [13]. Deep learning techniques like attention mechanisms [14] and transformer [15] architectures have also been applied to parametric data learning and have shown good prospects.
**Image data** With the recent advances of deep learning in computer vision, convolutional neural networks (CNNs) have made breakthroughs in image recognition [16], image classification [17], image segmentation [18], image generation [19], and other applications. Therefore, we focus on CNNs for image learning. A few pre-trained image embedding modules are commonly used for image learning tasks, including AlexNet [20], VGGNet [21], ResNet [22], and Inception [23]. Although current image learning for prediction tasks mostly focuses on classification and recognition, this study particularly focuses on the prediction of vehicle rating scores, which is essentially a regression problem. Different from classification problems, the features learned by the image embedding modules are not used to predict a categorical class through the Softmax activation function but are employed to predict continuous values (i.e., vehicle rating scores) through the Rectified Linear Unit (ReLU) activation function.
**Text data** In addition, natural language processing (NLP) has made significant strides toward automatic comprehension of text data. A few neural network language models (NNLMs) [24] first appeared to learn massive text data. Then, deep recurrent neural networks (RNNs) [25] brought NLP to the next level with their strengths in learning sequential data. Their variants, such as long short-term memory (LSTM) [26] or gated recurrent unit (GRU) [27], were proposed to resolve the vanishing and exploding gradient problems of RNNs. Recently, with the advent of the transformer models [15], large Transformer-based language models have gradually gained prominence in fulfilling various NLP tasks. These models have many advantages over the previous NNLMs and RNN-based models: taking entire sequences as input, they can understand the context of each word in a sequence more comprehensively; transformers can process and train more data in less time and utilize the embedded self-attention mechanism to enhance learning. Deep learning models, such as the generative pre-trained transformers (GPT) models [28; 29] proposed and constantly updated by the OpenAI team, and the bidirectional encoder representations from transformers (BERT) model [30], and its variants [31; 32], significantly improve NLP tasks. In this study, we use a BERT model to encode text data and predict different rating scores.
### Multi-modal Learning
On the vehicle rating websites, each vehicle is represented in multiple data modes. Capturing the complementarity and alignment of multi-modal data can lead to a better understanding and more accurate evaluation of a vehicle. Multi-modal learning models that can learn vehicle features simultaneously from the multi-modal information are required to predict vehicle rating scores using such information. In multi-modal learning, the unimodal models are often pre-trained to learn features from each data modality first. On this basis, the multi-modal model can be constructed by fusing the features learned by multiple unimodal models for the downstream tasks. In this paper, we focus on employing multi-modal learning to learn vehicle images, text descriptions, and parametric specifications to predict vehicle rating scores.
Obtaining multi-modal latent representations with effective information fusion lies at the heart of multi-modal learning for this prediction task. Joint representations and coordinated representations are the common options to represent multi-modal data [33]. Joint representation is better at capturing complementary information from different modalities compared to coordinated representations [33; 34], making it more suitable for prediction tasks [35]. Fusing the information from multiple modalities effectively is critical to learn informative joint representations. Operation-based methods, bilinear pooling methods, and
attention-based methods are commonly used for information fusion in multi-modal learning [33]. The operation-based methods integrate features learned from unimodal data using simple operations, such as concatenation [35, 36, 37, 38], averaging [39], element-wise multiplication [40], (weighted) summation [37, 38], linear combination [37], and majority voting [41]. Bilinear pooling fusion integrates features learned from unimodal data by calculating their outer product or Kronecker product [42, 43]. This approach can capture the high-order multiplicative interactions among all modalities, leading to more expressive and predictive multi-modal representations for fine-grained recognition [42, 44]. In comparison, the attention-based methods can model dependencies between two data modalities dynamically and assign higher weights to the elements more relevant to the other modality [15, 45]. The integration of the features learned from different modalities can be joined at early or late stages. It is easier to learn the interactions between different data modalities when the features are joined at early stages. However, early joining results in higher-dimensional joint representations, which need more computational resources to train [33].
In recent years, multi-modal learning has been explored for a variety of tasks, such as cross-modal synthesis [46, 47, 48, 49], multi-modal prediction [50], and cross-modal information retrieval [51, 52]. However, it is still underexplored in the engineering domain. Recently, Yuan, Mation, and Moghaddam [53] proposed a multi-modal learning model to capture features from images and text for shoe evaluation. Li et al. [54] developed a multi-modal target embedding variational autoencoder model for 2D silhouette-to-3D shape translation. Song et al. [55] developed an attention-enhanced multi-modal learning model that learns design sketches and text descriptions simultaneously for design metric prediction.
### Machine Learning Model Interpretability
In the realm of engineering, there is a growing emphasis not only on the effectiveness and predictive capabilities of ML models but also on their interpretability [56, 57, 58], which indicates if the reasoning process behind the model predictions can be easily comprehended by humans. The greater the interpretability of a model, the more readily people can understand and trust its predictions.
The rapid advancement of deep learning models has facilitated diverse model-independent explanation techniques. For instance, permutation feature importance [59, 60, 61] and Shapley Additive exPlanations (SHAP) [62, 63, 64] have seen widespread applications. Fisher et al. devised the model class reliance (MCR) approach to facilitate the comprehension of Variable Importance (VI) for unknown models [59]. Within Engineering Design, Ahmed et al. [61] employed a feature permutation-based technique to interpret the predictions of a graph neural network in predicting product relationships. They found factors such as car make, body type, and segment were important for determining co-consideration relationships. Mukund Sundararajan and Amir Najmi [63] proposed Baseline Shapley (BShap), a technique to explore differences in Shapley value attribution across multiple operations. Shrikumar et al. [62] developed Deep Learning Important FeaTures (DeepLIFT), a method that analyzes the contribution of neurons to input in backpropagation networks to ascertain feature importance. And DeepExplainer, an implementation of Deep SHAP, was developed based on SHAP [64] and DeepLIFT [62]. Additionally, Integrated Gradients [65], in conjunction with SHAP [64] and SmoothGrad [66], has given rise to the GradientExplainer, which is a variant of the SHAP Explainer. This variant enables the interpretation of image or text model outputs. In our study, we deploy a SHAP-based approach [64] to interpret the outputs of the image, text, and parametric models.
## 3 Data and Method
This section introduces the different types of data we use and the multi-modal learning models for vehicle rating score prediction in this research. This prediction problem is viewed as a regression task. The multi-modal learning model can be divided into five modules, as shown in Figure 1. The first module is a pre-processing module to prepare the original data (parametric vehicle specifications, images, and text descriptions). The processed data are the input to the respective unimodal models. The second, third, and fourth modules are the unimodal models capturing features from the parametric, image, or text data, respectively. After the three unimodal models are pre-trained, they can be combined to construct the multi-modal learning model. The rest of this section will separately introduce the data used in this study and each module of the proposed multi-modal learning model.
### Data
The data for developing the multi-modal learning model comes from U.S.News6. The website provides detailed information on vehicles from different categories, such as sedans, trucks, vans, and sport utility vehicles. The available information covers expert reviews, photos, prices, specifications, performances, rating scores, and so on. In this study, the rating scores, including the overall score, the performance score, the interior score, the critics score, and the safety score, are used as the labels of each vehicle described by the other information. Among them, the performance score reflects the vehicle's performance in terms of acceleration, braking, ride quality, handling, and other qualitative performance metrics. The interior score is regarding vehicle interior manufacturing quality, interior comfort, decoration and features, cargo space, and styling. The critics score represents the reviewer's degree of recommendation and their overall tone regarding the vehicle. The safety score is based on two factors: the number of advanced accident-avoidance technologies and crash test results from the National Highway Traffic Safety Administration and the Insurance Institute for Highway Safety. The overall score for each vehicle is the weighted average of the other four component scores and a few other factors, which are not available on the US News website and are not considered in our study. The rating scores range from 0 to 10, with 10 being the best.
Footnote 6: [https://cars.usnews.com/cars-trucks](https://cars.usnews.com/cars-trucks)
The goal of our multi-modal regression model is to predict the five scores for new vehicle designs, given their specifications, images, and text descriptions. The text description provides an overall review of the vehicle, including its advantages and disadvantages, and changes to the vehicle model compared to its last
version. The image data are the exterior and interior photos of the vehicle. The parametric data conveys detailed specification information of the vehicle, such as body style, dimensions, and other mechanical, safety, and interior features. Our dataset covers 4,517 different vehicle models from different categories from 2007 to 2022. However, some relevant information is missing on the US News website for 1,946 vehicle models, which are excluded from this study. Accordingly, the data of 2,571 vehicles is used to develop the multi-modal learning model.
### Data Processing
The raw data from the US News website contains three types of data: parametric specifications, text descriptions, and image data. The parametric specifications consist of five categories: general information, exterior information, interior information, mechanical information, and safety information. Each category covers multiple subcategories, as listed in Table 1. Notably, the subcategories further comprise varying numbers of features, resulting in a total of 302 features for each vehicle. Some of these features are numeric, while others are categorical. When preprocessing the data, we normalize all numeric features to [0,1] and use one-hot encoding to represent the categorical features.
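The parametric preprocessing described above can be sketched as follows; the column names in the example are placeholders, not the exact field names of the dataset.

```python
# Scale numeric specification features to [0, 1] and one-hot encode categorical ones.
import pandas as pd
from sklearn.preprocessing import MinMaxScaler

def preprocess_parametric(df, numeric_cols, categorical_cols):
    scaled = pd.DataFrame(MinMaxScaler().fit_transform(df[numeric_cols]),
                          columns=numeric_cols, index=df.index)
    one_hot = pd.get_dummies(df[categorical_cols].astype(str))
    return pd.concat([scaled, one_hot], axis=1)   # ~302 features per vehicle after encoding

# e.g. preprocess_parametric(specs, ["MSRP", "MPG City"], ["Brand", "Drivetrain"])
```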
The image data consists of exterior and interior photos. Among a large number of exterior and interior photos, we select the four most representative exterior or interior photos as the input to the image model. Specifically, the selected exterior photos include photos taken from four fixed views: angular front, front, rear, and side. The original size of these photos is \(776\times 776\). First, we resize the original images to \(224\times 224\). Second, we remove part of the white background at the periphery of the exterior photos to further reduce the size of the images. Third, we integrate the four resized exterior photos into a single exterior image with a size of \(448\times 290\), as shown in Figure 2. The selected interior photos cover the major interior components, including the dashboard, front seat, rear seat, and steering wheel. While their original size is \(776\times 517\), we resize the interior photos to \(224\times 150\) proportionally and integrate them to produce a single interior photo with a size of \(448\times 300\).
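A minimal sketch of this tiling step is shown below; the exact crop margins and the per-tile size are assumptions, and the four resized views are simply pasted into a 2 x 2 grid on a white canvas of the final composite size.

```python
# Combine four exterior views into a single composite image for the image model.
from PIL import Image

def make_composite(paths, tile_size=(224, 145), out_size=(448, 290)):
    """paths: four photo paths, e.g. [angular front, front, rear, side]."""
    canvas = Image.new("RGB", out_size, "white")
    for i, p in enumerate(paths):
        img = Image.open(p).convert("RGB").resize(tile_size)   # resize each view
        x, y = (i % 2) * tile_size[0], (i // 2) * tile_size[1]
        canvas.paste(img, (x, y))                              # place in a 2 x 2 grid
    return canvas
```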
For text data, the information of different features is formatted as "The name of this feature: content", and the information on different features is concatenated successively into a single
\begin{table}
\begin{tabular}{|c|c|} \hline Category & Subcategory \\ \hline \multirow{6}{*}{General Information} & Years \\ & Brand \\ & Drivetrain \\ & Manufacturer Suggested Retail Price (MSRP) \\ & Mile Per Gallon (MPG) City \\ & Mile Per Gallon (MPG) Highway \\ \hline \multirow{3}{*}{Exterior Information} & Exterior Body Style \\ & Exterior Dimensions \\ & Exterior Measurements \\ \hline \multirow{6}{*}{Interior Information} & Interior Convenience \& Comfort \\ & Interior Dimensions \\ & Interior Entertainment \\ & Interior Heating Cooling \\ & Interior Navigation \& Communication \\ & Interior Seats \\ \hline \multirow{3}{*}{Mechanical Information} & Mechanical Transmission \\ & Mechanical Fuel \\ & Engine \& Performance \\ \hline \multirow{3}{*}{Safety Information} & Safety Airbags \\ & Safety Brakes \\ & Safety Features \\ \hline \end{tabular}
\end{table}
Table 1: Parametric specification information.
Figure 1: The outline of the proposed multi-modal learning model for predicting vehicle rating scores.
machine-readable string. The maximum, minimum, and average word lengths for the text descriptions are 224, 32, and 74, respectively.
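A minimal sketch of this text formatting is shown below; the field names in the example are illustrative, not the exact names used in the dataset.

```python
# Render each text feature as "name: content" and concatenate into one string per vehicle.
def build_description(features):
    """features: dict mapping feature names to their textual content."""
    return " ".join(f"{name}: {content}" for name, content in features.items())

# e.g. build_description({"Pros": "...", "Cons": "...", "What's new": "..."})
```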
### Models
In this subsection, we first introduce the three unimodal models for embedding the parametric data, image data, and text data, respectively. Then, we describe how the multi-modal learning model is constructed based on these three unimodal models.
**Unimodal Model** Figure 3 illustrates the architectures of the unimodal models that respectively learn the parametric, image, and text data. All these unimodal models adopt the ReLU activation function for the regression task in this study. The final output for each unimodal model is the predicted value of the rating score.
(1) **Parametric model**: Firstly, we construct an MLP model to learn the parametric data. To find the optimal neural network architecture and hyper-parameters, we conduct a set of pilot experiments. This process leads us to a simple neural network architecture containing two hidden layers, as depicted in Figure 3-A. The number of neurons in the first hidden layer is equal to the dimension of the input data (302), and that of the second hidden layer is 100. We add a dropout layer after the first hidden layer with a dropout rate of 0.25 or 0.3 for predicting different rating scores.
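A minimal PyTorch sketch of this parametric branch is shown below; splitting it into an embedding part and a prediction head is a convenience for the later fusion step rather than part of the description above, and the default dropout rate is only one of the two values mentioned.

```python
# Parametric branch: 302 -> 302 -> dropout -> 100 -> 1, with ReLU activations.
import torch.nn as nn

class ParametricModel(nn.Module):
    def __init__(self, in_dim=302, dropout=0.25):
        super().__init__()
        self.embed = nn.Sequential(
            nn.Linear(in_dim, in_dim), nn.ReLU(), nn.Dropout(dropout),
            nn.Linear(in_dim, 100), nn.ReLU(),
        )
        self.head = nn.Sequential(nn.Linear(100, 1), nn.ReLU())  # rating scores are non-negative

    def forward(self, x):
        return self.head(self.embed(x))
```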
(2) **Text model**: Secondly, the text model adopts a pre-trained transformer-based BERT [30] text embedding module. We use the pooled output from the BERT model as the final embedding of the input text with a dimension of 512. During pre-training, the BERT embedding module is trained on large text databases (e.g., Wikipedia) for multiple tasks. The adoption of the pre-trained BERT model allows for effective knowledge transfer from the large external text dataset to our target text descriptions when we fine-tune the model with our dataset for the regression task. Following the BERT embedding layer, a dropout layer with a dropout rate of 0.1 and a dense layer with 100 neurons are attached before the final output layer, as shown in Figure 3-B. We unfreeze all layers to train the text model.
(3) **Image model**: Thirdly, we construct a CNN-based model to learn the vehicle images. We experiment with multiple CNN models, including ResNet [22], Inception [23], and VGG16 [21], during our pilot experiments and get similar performances from them. VGG16 [21] is selected for this study because it takes less time to train. The output from the VGG16 embedding module exhibits a dimension of \(9\times 14\times 512\). Following the image embedding module, we add a self-attention mechanism, as visualized in Figure 3-C. It reshapes the output to \(126\times 512\), which is seen as a set of 126 latent features with a dimension of 512. The self-attention mechanism employs a latent dimension of 32 in this study. Since the input to the image model integrates four exterior or interior vehicle photos, which complement or align with each other to different degrees, the self-attention mechanism is expected to facilitate capturing the interactions between different regions of the input images. It employs the dot-product attention proposed in "Attention Is All You Need" [15]. After that, a flatten layer, a dense layer with 1024 neurons, a dropout layer with a dropout rate of 0.2, another dense layer with 100 neurons, and a final output layer are attached sequentially.
For the image model, to enhance our evaluation of vehicle features, we use interior photos to evaluate the interior score and use exterior images to evaluate the other rating scores.
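A minimal PyTorch sketch of this image branch is given below; the separate query/key/value projections are one standard way to realize scaled dot-product self-attention with a latent dimension of 32, and the pretrained-weight choice is an assumption.

```python
# Image branch: VGG16 backbone, self-attention over 126 spatial features, dense layers.
import torch
import torch.nn as nn
import torchvision.models as models

class ImageModel(nn.Module):
    def __init__(self, d_feat=512, d_latent=32):
        super().__init__()
        self.backbone = models.vgg16(weights="IMAGENET1K_V1").features  # conv layers only
        self.q, self.k = nn.Linear(d_feat, d_latent), nn.Linear(d_feat, d_latent)
        self.v = nn.Linear(d_feat, d_feat)
        self.mlp = nn.Sequential(
            nn.Flatten(), nn.Linear(126 * d_feat, 1024), nn.ReLU(), nn.Dropout(0.2),
            nn.Linear(1024, 100), nn.ReLU(),
        )
        self.head = nn.Sequential(nn.Linear(100, 1), nn.ReLU())

    def embed(self, x):                        # x: (batch, 3, 290, 448) composite image
        f = self.backbone(x)                   # -> (batch, 512, 9, 14)
        f = f.flatten(2).transpose(1, 2)       # -> (batch, 126, 512)
        att = torch.softmax(self.q(f) @ self.k(f).transpose(1, 2) / 32 ** 0.5, dim=-1)
        return self.mlp(att @ self.v(f))       # scaled dot-product self-attention, then MLP

    def forward(self, x):
        return self.head(self.embed(x))
```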
**Multi-modal Learning Model** After the three unimodal models are trained, we integrate them to construct the multi-modal learning model. To facilitate the learning of the interactions between the three data modalities, we do not directly integrate the unimodal models by concatenating their final outputs. Instead, we concatenate the final embedding of the input parametric, text, and image data from the corresponding unimodal models as the final joint representation of the multi-modal input data. The final output of the multi-modal learning model is calculated from the joint representation through a dense layer with the ReLU activation function. The architecture of the multi-modal model is shown in Figure 4. In comparison, we also construct three bi-modal models that respectively combine two of the three unimodal models. These different combinations give us four multi-modal learning models. For the sake of simplicity, we refer to the bi-modal learning model combining parametric and text data as \(Par\_Text-MML\) model, the bi-modal learning model combining parametric and image data as \(Par\_Img-MML\) model, the bi-modal learning model combining the image and text data as \(Img\_Text-MML\) model. The
Figure 3: The architectures of three unimodal models.
Figure 2: Examples of vehicle exterior and interior photos.
multi-modal model combining the parametric, text, and image data is called \(Par\_Text\_Img-MML\) model in this paper hereafter. When training the multi-modal models, we initialize the multi-modal learning models with the pre-trained weights from the unimodal models and fine-tune the weights jointly to learn the interactions between the three data modalities for better vehicle rating score prediction.
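A minimal PyTorch sketch of this fusion is given below. Each pre-trained branch is assumed to expose an embed(...) method returning its last hidden (here 100-dimensional) representation; exactly which layer serves as the "final embedding" of each branch is an assumption of this sketch.

```python
# Multi-modal model: concatenate the branch embeddings and map them to the score.
import torch
import torch.nn as nn

class MultiModalModel(nn.Module):
    def __init__(self, par_branch, text_branch, img_branch, joint_dim=300):
        super().__init__()
        self.par, self.text, self.img = par_branch, text_branch, img_branch  # pre-trained
        self.head = nn.Sequential(nn.Linear(joint_dim, 1), nn.ReLU())        # ReLU dense output

    def forward(self, x_par, x_text, x_img):
        joint = torch.cat([self.par.embed(x_par),
                           self.text.embed(x_text),
                           self.img.embed(x_img)], dim=-1)   # joint multi-modal representation
        return self.head(joint)  # all weights are fine-tuned jointly from the unimodal initialization
```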
## 4 Results and Discussion
In this section, we compare the performances of the three unimodal models and the four multi-modal learning models to verify the effectiveness of the proposed multi-modal learning model. Specifically, the performance of each model is assessed in terms of the explanatory power for the variances of the vehicle rating scores, which is known as the determination coefficient (i.e., \(R^{2}\) value) in statistics. In regression, the degree of fit improves as the \(R^{2}\) value increases. To train and test the model, the 2,571 vehicles in our dataset are divided into the training set, validation set, and test set following the ratio of 0.8:0.1:0.1. In the process of data split, we ensure that the stratified distribution of the rating scores within each set is consistent with that of the entire dataset. We observe that the distribution of different rating scores could be very different, so we generate a unique data split for each of the five rating scores. All the models use the same data split to predict the same rating score for easy comparison. During training, different unimodal and multi-modal learning models are trained with the same batch size of 32 and the initial learning rates ranging from 0.001 to 0.00005, which are selected through a series of pilot experiments. We also apply different decay rates ranging from 0.0 to \(e^{-0.015}\) to schedule the learning rates during training different models. The training process is ended if the validation loss does not decrease for 20 consecutive epochs. In order to demonstrate the stability of the model and test the statistical significance of the differences between different models, we repeat each experiment 10 times.
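A minimal sketch of the stratified split and of the evaluation metric is shown below; since the rating scores are continuous, stratifying on quantile bins is an assumption about how the stratification could be realized.

```python
# Stratified 0.8/0.1/0.1 split and R^2 evaluation.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

def stratified_split(X, y, n_bins=10, seed=0):
    edges = np.quantile(y, np.linspace(0, 1, n_bins + 1)[1:-1])
    bins = np.digitize(y, edges)                       # bin label used for stratification
    X_tr, X_tmp, y_tr, y_tmp, _, b_tmp = train_test_split(
        X, y, bins, test_size=0.2, stratify=bins, random_state=seed)
    X_val, X_te, y_val, y_te = train_test_split(
        X_tmp, y_tmp, test_size=0.5, stratify=b_tmp, random_state=seed)
    return (X_tr, y_tr), (X_val, y_val), (X_te, y_te)

# After training a model: r2_score(y_te, predictions) is the explanatory power compared below.
```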
### Performance of Unimodal Models
The three unimodal models show varying performances in predicting different rating scores, as shown in Figure 5. The parametric model best predicts all rating scores. Its \(R^{2}\) values are at least 0.04 higher than that of the corresponding image and text models for predicting all rating scores. Moreover, in most cases, the text model outperforms the image model. That is, the parametric data is most informative while the image data is least informative in predicting these rating scores. The information conveyed by these different types of data may explain the differences in model performance. The parametric data intuitively shows the detailed specifications of a vehicle, including general information, exterior information, interior information, mechanical information, and safety information of the vehicle, which summarizes the vehicle's major characteristics. The compact representation may make it easier for the model to capture the key features, leading to better predictions. In comparison, the text data describes the advantages and disadvantages of a vehicle compared to other vehicles or its previous version, which is also valuable for rating the vehicle. The images of a vehicle show its aesthetic features and body design, which influence customers' affection for it and its aerodynamic performance. Since exterior design is not considered by the five rating scores directly, the information conveyed by the images might be less informative for predicting these scores.
Moreover, we find that the three unimodal models are relatively less effective in predicting the total score compared to the other scores. The total score is an overarching evaluation of a vehicle, which is the weighted average of the other four rating scores and several other indicators. The prediction of such an overarching score needs more complicated and comprehensive information from multiple perspectives, which is more challenging for the unimodal models to learn from a single data modality. Therefore, the unimodal models may struggle to learn enough features during training and thus do not predict the overall score as well as the other four rating scores. In addition, the dataset used in this study is small, which cannot provide sufficient information to train these large models. We observe that it is easy to overfit these models and the training is terminated early
Figure 4: The architecture of the multi-modal learning model.
Figure 5: The performances of the unimodal models. The columns show the average \(R^{2}\) values from the 10 repeated experiments with the bars indicating one standard error. We observe that parametric models have higher \(R^{2}\) across different metrics.
before better model weights can be learned, which may lead to insufficient final predictions. Among all five rating scores, the parametric model exhibits the highest \(R^{2}\) value in predicting the safety score, and the \(R^{2}\) values of the three unimodal models differ greatly. The \(R^{2}\) value of the parametric model is higher than that of the worst model (image) by 0.34. One potential reason is that its evaluation is partly based on the advanced accident avoidance technologies implemented by a vehicle, which are described clearly in the parametric data. In comparison, although the vehicle body design reflected by the images affects the safety of the vehicle, the material of the vehicle body is unknown from the images and the importance of body design has been weakened by the incorporation of these technologies in recent years.
### Effect of Multi-modal Learning
**Multi-modal learning models outperform the unimodal models.** For predicting all rating scores, the average \(R^{2}\) values of the multi-modal learning models are significantly higher than those of the corresponding unimodal models, as shown in Figure 6. The results suggest that compared to the unimodal models, the joint learning of multi-modal data enables the multi-modal learning models to leverage the complementary features learned from different modalities to better predict the rating scores. Moreover, the \(Par\_Text\_Img-MML\) model also performs better than the three bi-modal learning models that integrate two types of data for predicting all rating scores except for the total score. The \(Par\_Text-MML\) model slightly outperforms the \(Par\_Text\_Img-MML\) model for predicting the total score. This may seem counterintuitive, as adding one more mode should logically allow the model to learn more information and, thus, likely make better predictions. However, as discussed above, the evaluation of the total score relies on more complicated, interrelated, and comprehensive information. This is more challenging for the models to learn. The features learned from the three data modalities are fused through simple concatenation in this paper, which may not be able to capture the complex interactions among the modalities for better total score prediction when the image data is involved. Another possible reason is that the dataset used in this study is not large enough to support learning the complex cross-modal interactions for predicting the total score.
**The effect of multi-modal learning varies in predicting different rating scores.** The effect of multi-modal learning is most substantial in predicting the total score. The best multi-modal model (\(Par\_Text-MML\)) outperforms the best unimodal model (parametric) by 0.12. The characteristics of the total score imply that its evaluation relies more on a comprehensive understanding of a vehicle. Accordingly, the multi-modal features learned by the multi-modal models help significantly in this regard. The effect of multi-modal learning is least obvious in predicting the safety score. The best \(Par\_Text\_Img-MML\) model only improves the \(R^{2}\) value by 0.02 compared with the best unimodal model, which is the parametric model. As mentioned above, the parametric data clearly describe the advanced accident avoidance technologies implemented by a vehicle, which inform the evaluation of the safety score. Since the \(R^{2}\) values achieved by the image model and the text model are much lower than those of the parametric model, the incorporation of the text and image data only slightly complements the parametric information for this evaluation. In general, the \(Par\_Text\_Img-MML\) model exhibits similar or slightly better performances compared to the best bi-modal learning models for predicting all rating scores. The simple information fusion mechanism and the small size of the dataset in this study may help explain this.
Figure 6: The comparison in performance among the unimodal and multi-modal learning models. The columns show the average \(R^{2}\) values with one standard deviation bar. We observe that the multi-modal model using three modalities outperforms unimodal or bi-modal models.
### Implications for Engineering Design
As demonstrated in the last subsection, our multi-modal learning models can predict vehicle rating scores accurately using vehicle parametric specifications, text descriptions, and images. However, a more detailed interpretation of the outputs from the models is needed to inform designers and companies about potential directions for optimizing an inferior design or advertising a superior design of a vehicle. For this purpose, we utilize the SHAP [64] method to interpret the outputs from the image, text, and parametric models. Through backward gradient-based sensitivity analysis, the output SHAP values indicate the influence of each element of the input data on the final prediction made by a model. A higher absolute SHAP score suggests a higher influence. Therefore, the SHAP method can help us interpret how a deep learning model makes its decision.
We first conduct SHAP analysis for the parametric model. As we mentioned before, the parametric data conveys rich information across five feature categories as listed in Table 1. The SHAP analysis can help us identify the most informative and influential vehicle feature categories from the parametric data. We run SHAP analysis for the parametric models that predict the five rating scores, respectively. Figure 7 illustrates the average absolute SHAP values of the five feature categories across all vehicles in the test set for predicting different scores. These values indicate the extents to which different feature categories affect the model's predictions. The findings indicate that the impact of each category varies, with the interior information category having the greatest influence on all score predictions and the exterior information category having the least impact on them.
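A minimal sketch of this aggregation is shown below. The choice of shap.DeepExplainer, the feature-to-category map, and summing the per-feature mean absolute SHAP values within each category are assumptions of the sketch rather than a statement of the exact pipeline used here.

```python
# Aggregate per-feature SHAP values of the parametric model into category-level importances.
import numpy as np
import shap

def category_importance(model, background, X_test, feature_to_category):
    explainer = shap.DeepExplainer(model, background)   # gradient-based Deep SHAP
    sv = explainer.shap_values(X_test)
    sv = sv[0] if isinstance(sv, list) else sv          # single-output regression
    per_feature = np.abs(sv).mean(axis=0)               # mean |SHAP| per input feature
    importance = {}
    for j, cat in enumerate(feature_to_category):        # e.g. "Interior Information"
        importance[cat] = importance.get(cat, 0.0) + per_feature[j]
    return importance
```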
On this basis, we further analyze the influences of the 21 feature subcategories on the model predictions, and Table 2 shows the 21 subcategories used to predict the total score. The findings are in line with our expectations and quite interesting. When buying a vehicle, customers tend to base their decisions on the vehicle brand and the appearance of the vehicle's body. For instance, some people prefer Toyota sedans, while others may prefer Subaru SUVs. This indicates that the features like "Brands" and "Exterior Body Style" can significantly influence the prediction. Furthermore, the comfort and convenience of driving a vehicle are crucial since the most straightforward feeling that drivers and passengers may have for a vehicle is how comfortable and convenient it is to drive or ride in it. That is why most of the interior information subcategories have prominent impacts on rating predictions. Additionally, people value the safety and performance of a vehicle since if the vehicle's safety and performance are not up to par, people are less likely to trust and purchase it. Accordingly, "Safety Features", "Engine & Performance" and "Mechanical Transmission" features are also important. Designers and companies need to focus on these feature subcategories to improve or advertise their designs.
In addition to examining the average absolute values, we also analyze the variation of the average SHAP values for the 302 individual features over time. Although the SHAP values of the majority of the features do not show noticeable trends (e.g., sharp fluctuations or little changes), a subset of the features exhibits clear increasing or decreasing trends, as shown in Figure 8. As electronic and information technologies continue to advance, these technologies have been enhancing the driving experience and promoting driving safety. For example, the "Heated Steering Wheel" prevents hand stiffness during long driving hours in winter, and "Keyless Start" eliminates the need for manual key insertion by pressing a button inside the vehicle, or turning a knob, making the process more convenient. Furthermore, the "Hands-Free Liftgate" automatically opens and closes the liftgate. Other technologies, such as "Back-UP Camera," "Lane Keeping Assist," and "Lane Departure Warning" help improve driving safety on the road. These features have experienced an increase in their SHAP values over time, highlighting their growing positive influence on the model's prediction. On the other hand, the SHAP values of a few others show the opposite pattern, such as "auxiliary power outlet", "regular unleaded (fuel)", and "high-intensity discharge (HID) headlights". These features play increasingly negative roles in affecting the predictions, which means having these features may result in lower rating scores as time passes by. For example, "auxiliary Pwr Outlet," also known as the "car cigarette lighter," is an outdated feature that has been excluded by many new vehicle models. By analyzing the original data, we observe that most vehicles manufactured before 2014 were equipped with cigarette lighters, but very few vehicles had them after that year. Similarly, "HID headlights" were once popular for their high brightness and long service life compared to traditional halogen bulbs. However, due to their expensive manufacturing costs and slow response time to peak brightness, they have been gradually replaced by LED headlights that offer lower power consumption, longer lifespan, and faster response times. Consequently, HID headlights are disappearing from the market, which aligns with the observed changes in their SHAP values.
We then employ the SHAP method to analyze the informativeness of different image regions for predicting different rating scores. Recall that we use the interior images to predict the interior score and the exterior images to predict the other four scores. The mean absolute SHAP values of the interior image regions are displayed in Table 3, while Figure 9 showcases the mean absolute SHAP values of the exterior image regions for predicting the other four rating scores. For predicting the interior score, the dashboard and steering wheel regions present higher mean absolute SHAP
Figure 7: Mean Absolute SHAP values of the five feature categories of the parametric data with one standard error. We observe that the interior information category exhibits the largest SHAP values while the exterior information category holds minimal significance.
values than the front and rear seat regions, suggesting that the model may primarily rely on features in these regions to predict the interior score. Notably, the SHAP values of the front and rear views in the exterior images are higher than those of the other two views in predicting most of the rating scores. The results indicate the critical role of the front and rear views in predicting the other scores. This indicates that when purchasing a vehicle, people prioritize the front and back views, as they are the most visible when driving. Well-designed front and rear portions of a vehicle can also potentially reduce safety risks.
To gain further insights into the performance of different features in each region, we use the SHAP method to analyze the image data of individual samples. We exemplify the SHAP values of two representative images for predicting the total and interior scores in Figure 10 and Figure 11, respectively. The red regions positively influence the predictions, while the blue regions negatively influence the predictions. The color intensity indicates the extent of the influence.
Figure 10 showcases the SHAP values of the exterior image regions of the 2020 GMC Terrain for predicting the total score. We find that in the front view and angular front view, the regions on the front wheels, the vehicle brand logo, the fog lamps, the front bumper, and the front fenders have positive influences on the total score prediction of this vehicle. Similarly, in the rear view, the regions on the rear wheels, taillights, and bumpers also play a bigger role in predicting the total score. This is reasonable. For example, during night driving, turning on the taillight can alert the following vehicle to maintain a safe distance, and a well-designed
\begin{table}
\begin{tabular}{l c c} \hline \hline
**Subcategory** & **Mean Absolute Shap Values** & **Sample Features** \\ \hline Interior Convenience \& Comfort & \(9.11\cdot 10^{-2}\) \\ Brand & \(3.93\cdot 10^{-2}\) \\ Interior Seats & \(2.59\cdot 10^{-2}\) \\ Interior Entertainment & \(2.58\cdot 10^{-2}\) \\ Engine \& Performance & \(2.17\cdot 10^{-2}\) \\ Safety Features & \(2.06\cdot 10^{-2}\) \\ Mechanical Transmission & \(2.01\cdot 10^{-2}\) \\ Years & \(1.78\cdot 10^{-2}\) \\ Exterior Body Style & \(1.25\cdot 10^{-2}\) \\ Interior Heating Cooling & \(7.44\cdot 10^{-3}\) \\ Safety Airbags & \(7.21\cdot 10^{-3}\) \\ Interior Navigation \& Communication & \(7.15\cdot 10^{-3}\) \\ Safety Brakes & \(6.48\cdot 10^{-3}\) \\ Drivetrain & \(6.21\cdot 10^{-3}\) \\ Mechanical Fuel & \(3.99\cdot 10^{-3}\) \\ Exterior Dimensions & \(3.43\cdot 10^{-3}\) \\ Interior Dimensions & \(3.30\cdot 10^{-3}\) \\ Manufacturer Suggested Retail Price(MSRP) & \(1.46\cdot 10^{-3}\) \\ Exterior Measurements & \(1.08\cdot 10^{-3}\) \\ Mile Per Gallon (MPG) City & \(5.25\cdot 10^{-4}\) \\ Mile Per Gallon (MPG) Highway & \(4.53\cdot 10^{-4}\) \\ \hline \hline \end{tabular}
\end{table}
Table 2: The feature subcategories with the corresponding SHAP Values and a few sample features. We observe that Interior convenience and brand are found most important in ratings prediction.
Figure 8: The SHAP values of 12 example features for predicting the total score over time. We observe that the importance of some features such as back-up cameras has increased in the last decade.
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline Score & Dashboard & Steering Wheel & Front Seat & Rear Seat \\ \hline Interior & 1.159 & 1.011 & 0.448 & 0.389 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Mean absolute SHAP values of the interior image regions for predicting the interior score. The SHAP values of the dashboard and steering wheel highlight their high importance in predicting the interior score
bumper can offer better protection in case of accidents. This will undoubtedly have a positive impact on the overall rating of the vehicle. Additionally, our SHAP analyses for predicting the other scores show similar trends. However, it is important to note that different vehicles may have distinct components that contribute to different score predictions, thus necessitating a case-by-case analysis by designers and engineers.
Figure 11 displays the SHAP values of the interior image regions of the 2020 Acura RLX for predicting the interior score. The instrument panel, the gearshift, the dashboard, the steering wheel, the steering wheel controls, and the front and rear seats are likely to have positive impacts on the interior score prediction of this vehicle. Among these, the steering wheel and dashboard have the most substantial impacts, as they are among the most used interior components. We believe that people can only truly experience the comfort and convenience of the front and rear seats when they are in the vehicle, rather than from a picture, so their effect may be weaker than that of the steering wheel. Overall, considering these details and features can lead to a better interior score and increase the vehicle's appeal to potential consumers.
Lastly, we use the SHAP method to analyze the informativeness of different text segments for predicting different rating scores. The results are displayed in Figure 12. Our findings reveal that the review of a vehicle and its advantages and disadvantages significantly influence the model predictions. In general, the review segment of the text has the largest influence on the rating score predictions. If this segment provides a negative evaluation of the vehicle, often signaled by words like "bottom", "however", or "but", the corresponding SHAP value is mostly negative, indicating a negative impact on the rating prediction. Otherwise, words like "top" indicate a positive evaluation and tend to have positive impacts on the predictions. Moreover, the brand and year of the vehicle mentioned in the text also have relatively important impacts on the model predictions. Additionally, the pros segment usually has a positive SHAP value, while the cons segment has a negative SHAP value, as expected. The "New Change" segment of the text indicates if there are any new changes in the vehicle compared to the previous year. If there are positive changes, the corresponding SHAP value is usually positive.
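The segment-level reading of Figure 12 follows from summing token-level SHAP values over the tokens belonging to each segment of the description (review, pros, cons, new changes). A minimal sketch, assuming token attributions and a token-to-segment assignment are available (names are illustrative):

```python
from collections import defaultdict

def segment_shap(token_shap, token_segments):
    """Net SHAP contribution of each text segment for one document.

    token_shap: iterable of per-token SHAP values.
    token_segments: iterable of segment labels ('review', 'pros', 'cons',
    'new_changes'), one per token.
    """
    totals = defaultdict(float)
    for value, segment in zip(token_shap, token_segments):
        totals[segment] += value
    return dict(totals)
```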
Figure 13 is an instance of SHAP analysis applied to an
Figure 11: SHAP values of the interior image regions of the 2020 Acura RLX for the interior score prediction: the SHAP values on the right with the corresponding interior image on the left.
Figure 10: SHAP values of the exterior image regions of the 2020 GMC Terrain for total score prediction: the SHAP values on the right with the corresponding exterior image on the left.
Figure 9: Mean absolute SHAP values of the exterior image regions for predicting the total, critics, performance, and safety scores with one standard deviation bar. The SHAP values of the front view and rear view play a bigger role in predicting these scores.
individual vehicle - the 2020 Mazda - for predicting its total score. The light blue colors assigned to "2020" and "Mazda" indicate slightly negative effects on the prediction. The positive SHAP value of the word "top" confirms the esteemed position held by the vehicle in the US News evaluation system, positively influencing the prediction. This positive reputation may sway potential buyers towards considering this vehicle over others within this system. In addition, the vehicle has several advantages such as a "premium cabin," "pleasant ride," and "thrilling handling," all of which are positive aspects of the vehicle and present positive SHAP values. These aspects probably lead to a positive perception of the vehicle and attract potential buyers. However, the vehicle also has some drawbacks, such as "subpar cargo space" and a "cramped third row," indicating that it may not be the best option for those looking for more space. These downsides have negative impacts on the total score forecast, as indicated by their negative SHAP values. New changes such as "standard heated front seats" and "Mazda i-active sense suite of safety features made standard," are positive changes and exhibit positive SHAP values. Engineers and designers need to analyze their own designs case by case.
To inform the design and optimization directions of individual vehicles, brands, or other aspects, designers and engineers need to extract a suitable sub-dataset from the entire dataset, retrain the prediction model, and carry out SHAP analyses and discussions accordingly. This approach can improve the rating score prediction for a particular type of vehicle and improve the interpretability of the model.
### Limitations and Future Work
This section summarizes the limitations of this study and future research directions for using multi-modal learning models to promote engineering design. First, a major limitation of this study is that our dataset is much smaller than the datasets typically used for training large deep learning models, which may not provide sufficient information for the multi-modal learning models to learn the complex interactions between different data modalities. It is hard to harness the full potential of multi-modal learning with small datasets. We observe that the US News website only provides information regarding the vehicles on the US market, leading to the exclusion of some vehicle brands from China, India, and other countries in this study. We will work to expand this dataset by including more vehicle brands and completing the information on the vehicles with missing data items in the future. Second, we use the simple concatenation mechanism to fuse information from different data modalities in this paper, which may lead to less effective information fusion compared to more advanced information fusion mechanisms, such as attention-based or transformer-based information fusion. In future work, we will explore advanced techniques to fuse features learned from the parametric, text, and image data. Additionally, not all information available from the US News website is leveraged in this study. For example, we only select four exterior photos and four interior photos from a much larger photo collection and use a small part of each text description from the website for the rating score prediction. A more comprehensive understanding of a vehicle and a better rating score prediction may be achieved by incorporating all available information into the machine learning models. In the future, we need to explore more effective and efficient deep-learning models to manage richer data.
## 5 Conclusion
In this research, we have developed and validated a multi-modal learning model aimed at predicting five different vehicle rating scores--total score, critics score, performance score, safety score, and interior score. These predictions are facilitated using the parametric specifications, text descriptions, and images of vehicles. As the foundation of the multi-modal learning model, we developed three unimodal models to independently extract features from parametric, text, and image data. Based on this, we compared the efficacy of the multi-modal learning model against its unimodal equivalents. Our research has led to three significant discoveries: 1. Parametric data proves to be the most informative in predicting all the scores, with the text model surpassing the image model in most instances for predicting the rating scores. 2. The multi-modal learning model, which concurrently learns from parametric, text, and image data, outperforms all the unimodal models. This suggests that multi-modal data learning captures a richer array of information than learning from a single data mode for the task of prediction. 3. The sensitivity analyses conducted via SHAP can offer invaluable insights for interpreting predictions and provide crucial design, optimization, and improvement guidance to designers and engineers. Furthermore, the proposed multi-modal learning methodology can be extrapolated to a broader range of application scenarios, potentially providing fresh insights and inspiration for designers.
|
2308.02940 | Topological Estimation of Number of Sources in Linear Monocomponent
Mixtures | Estimation of the number of sources in a linear mixture is a critical
preprocessing step in the separation and analysis of the sources for many
applications. Historically, statistical methods, such as the minimum
description length and Akaike information criterion, have been used to estimate
the number of sources based on the autocorrelation matrix of the received
mixture. In this paper, we introduce an alternative, topology-based method to
compute the number of source signals present in a linear mixture for the class
of constant-amplitude, monocomponent source signals. As a proof-of-concept, we
include an example of three such source signals that overlap at multiple points
in time and frequency, which the method correctly identifies from a set of
eight redundant measurements. These preliminary results are promising and
encourage further investigation into applications of topological data analysis
to signal processing problems. | Sean Kennedy, Murali Tummala, John McEachen | 2023-08-05T18:49:55Z | http://arxiv.org/abs/2308.02940v1 | # Topological Estimation of Number of Sources
###### Abstract
Estimation of the number of sources in a linear mixture is a critical preprocessing step in the separation and analysis of the sources for many applications. Historically, statistical methods, such as the minimum description length and Akaike information criterion, have been used to estimate the number of sources based on the autocorrelation matrix of the received mixture. In this paper, we introduce an alternative, topology-based method to compute the number of source signals present in a linear mixture for the class of constant-amplitude, monocomponent source signals. As a proof-of-concept, we include an example of three such source signals that overlap at multiple points in time and frequency, which the method correctly identifies from a set of eight redundant measurements. These preliminary results are promising and encourage further investigation into applications of topological data analysis to signal processing problems.
Number of sources, persistent homology, embedding, monocomponent, array
## I Introduction
The objective of Blind Source Separation (BSS), also called blind signal separation or source separation, is to separate a group of source signals from a mixture without detailed knowledge of the sources or mixing process itself. BSS has applicability to many radar, communication, and imaging scenarios. Numerous techniques have been developed for BSS of mixed signals [1]. However, in most cases, the separation techniques assume that the number of sources equals the number of observations or that the number of sources is known in advance [1, 2, 3]. Thus, the ability to estimate or determine the number of sources to be unmixed is a critical preprocessing step in practical implementation.
The problem of estimating the number of sources in a linear mixture has been studied in the literature [4, 5, 6, 7]. The two most popular methods used for this estimation are information-theoretic measures based on the statistics of the mixture's autocorrelation matrix: the Minimum Description Length (MDL) and Akaike Information Criterion (AIC) estimators [5, 6]. Both have been shown to work well under the assumption of temporally and spatially white noise and Gaussian-distributed random sources. However, they have also been shown to be non-robust when real-world data deviates from these source and noise models [6]. While numerous modifications and enhancements to these methods have been proposed, e.g., [5, 6, 7], the fundamental underpinning of these methods remains rooted in the analysis of eigenvalues and eigenvectors of the sample autocorrelation matrix with the assumption that there exist fewer sources than mixed measurements.
In this paper, we introduce an approach that departs from the statistical inference model for estimating the number of sources. Instead, we frame the problem in terms of topology and show that existing Topological Data Analysis (TDA) tools can be used to estimate the number of independent sources under certain scenarios. The method is mathematically motivated and developed in Section II. In Section III, we provide a simple validation of the method for three nonstationary sources. Conclusions and a discussion of future research goals are provided in Section IV.
## II Method Development and Discussion
In practical terms, the method consists of only three steps: (1) embed an observed signal as a manifold in higher dimensional space, (2) compute the Betti number sequence [8] of the manifold using TDA, and (3) match the Betti number sequence to a known reference sequence. As shown in Section III, implementation of these steps is straightforward with appropriate software. Thus, the primary contribution of this paper is the analysis of the mathematical mechanisms by which the method achieves valid results. In Subsection II-A we provide the basic topological theory behind the method. In Subsection II-B we apply this theory to the case of a linear mixture as might be encountered in a radar, sonar, or multi-receiver communication array. In Subsection II-C, we discuss the primary constraint the measured data must meet in order for the method to work, and provide a potential avenue to circumvent the constraint in practice. In Subsection II-D, we briefly describe the tool used to compute the Betti number sequence of a given data set, and how to use this sequence to estimate the number of sources in the mixture.
### _Topological Analysis of Monocomponent Mixtures_
We begin by analyzing the mixing problem through the lens of topology. First, let \(x(t)=[x_{1}(t),x_{2}(t),...,x_{n}(t)]^{\top}\) be the vector of \(n\) independent sources. If we consider each \(x_{i}(t)\) as the motion of a point along an orthogonal basis vector of \(\mathbb{R}^{n}\), then \(x(t)\) can be interpreted as a parametric path through \(\mathbb{R}^{n}\). We restrict each \(x_{i}(t)\) to be a continuous
signal of the form \(x_{i}(t)=A_{i}\text{cos}(\alpha_{i}(t))\), where \(A_{i}\) is the constant amplitude of \(x_{i}(t)\), and \(\alpha_{i}(t)\) is a continuous function of time encoding the instantaneous frequency and phase of \(x_{i}(t)\). Since each \(\alpha_{i}\) could include a constant phase term in the interval \([-\pi,\pi]\), we can say that these sinusoidal sources are "cosines" without any loss of generality. Sources of this type are often referred to as constant-amplitude "monocomponent signals" in the literature and are frequently encountered in radar (e.g., chirps) and telecommunication (e.g., continuous phase frequency modulated signals) applications.
Constant-amplitude monocomponent signals are of special interest since they can be embedded as topological circles in \(\mathbb{R}^{2}\)[9]. To see this, consider that the Hilbert transform of \(x_{i}(t)\) is immediately obtained as \(\widetilde{x}_{i}(t)=A_{i}\text{sin}(\alpha_{i}(t))\), where \(A_{i}\) and \(\alpha_{i}\) are unchanged from their \(x_{i}(t)\) counterparts [10]. Then, by simple trigonometric identity, the expression \(x_{i}^{2}(t)+\widetilde{x}_{i}^{2}(t)\) equals the constant \(A_{i}^{2}\) for all times \(t\), which is the definition of a circle in the plane [9]. Considering now each component \(x_{i}(t)\) and its Hilbert transform \(\widetilde{x}_{i}(t)\) as the motion of a point along mutually orthogonal axes in \(\mathbb{R}^{2n}\), we find that the trajectory of this point (i.e., the _phase portrait_ of \(x(t)\)) forms a path on an \(n\)-torus since \((\mathbb{S}^{1})^{n}=\mathbb{T}^{n}\)[11]. As discussed in [11], the path itself may not actually become dense on the torus but form a torus knot instead when the ratio of instantaneous frequencies of individual components of \(x(t)\) are rational throughout the observation window. This condition is unlikely to occur for incoherent signals with nonstationary frequencies, so we set aside this outlier case and assume that the path becomes dense on the \(\mathbb{T}^{n}\) manifold.
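This constant-radius property is easy to verify numerically. The following sketch (an illustration with SciPy, not part of the reference implementation) builds a constant-amplitude linear chirp, forms its analytic signal, and checks that \(x^{2}(t)+\widetilde{x}^{2}(t)\) stays essentially constant away from the window edges:

```python
import numpy as np
from scipy.signal import chirp, hilbert

fs = 1_000_000                       # 1 MHz sampling
t = np.arange(0, 0.03, 1 / fs)       # 30 ms observation window
A = 0.7
x = A * chirp(t, f0=50e3, t1=0.03, f1=200e3)   # monocomponent, constant amplitude

analytic = hilbert(x)                # x(t) + j*x_tilde(t)
radius_sq = x**2 + np.imag(analytic)**2

# trim 10% at each end to suppress edge effects of the discrete Hilbert transform
core = radius_sq[len(t) // 10 : -len(t) // 10]
print(core.min(), core.max())        # both remain close to A**2 = 0.49
```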
By the Kunneth formula [12], the Betti number sequence for an \(n\)-torus is given by the coefficients of the Poincare polynomial \((1+q)^{n}\). This sequence is a topological invariant of the \(n\)-torus, i.e., it is invariant under homeomorphisms and embeddings [8]. The existence of this topological invariant allows us to compute the number of sources in a mixture of monocomponent signals as follows. Let \(y(t)\in\mathbb{R}^{m}\) be the vector composed of \(m\) independent observations of mixtures of \(x(t)\), and let \(z(t)\in\mathbb{R}^{2n}\) be defined as \([x_{1}(t),\widetilde{x}_{1}(t),x_{2}(t),\widetilde{x}_{2}(t),...,x_{n}(t),\widetilde{x}_{n}(t)]^{\top}\). Let \(w(t)\in\mathbb{R}^{2m}\) be a vector defined by \(w(t)=f(z(t))\), where \(f\) is an arbitrary function mapping \(\mathbb{R}^{2n}\rightarrow\mathbb{R}^{2m}\). When \(f\) is a homeomorphism (when \(m=n\)) or an embedding (when \(m>n\)), the Betti number sequences of the phase portraits of \(w(t)\) and \(z(t)\) will be equal. So, if we can find a suitable method to embed the observed vector \(y(t)\) into \(\mathbb{R}^{2m}\) as \(w(t)\) such that \(f\) exists and is a homeomorphism or embedding, then we can simply compute the Betti number sequence of the phase portrait of \(w(t)\) to recover the Betti number sequence of the phase portrait of \(z(t)\). Subsection II-B provides such a method for the scenario of a generic receiver array. Once computed, if we find that the Betti number sequence of \(z(t)\) matches the coefficients of the polynomial \((1+q)^{n}\), we know that there exist \(n\) monocomponent sources in the mixture. In summary, our estimation strategy is: (1) embed an \(\mathbb{R}^{m}\) signal mixture into \(\mathbb{R}^{2m}\), (2) compute the Betti number sequence, and (3) compare the sequence to the coefficients of \((1+q)^{n}\).
### _Embedding Observations into \(\mathbb{R}^{2m}\)_
In order to identify a suitable embedding of \(y(t)\) into \(\mathbb{R}^{2m}\) as \(w(t)\), we impose a constraint on the mixing process itself: that each observation vector \(y_{i}(t)\) is given by:
\[y_{i}(t)=\sum_{j=1}^{n}B_{i,j}\,\text{cos}(\alpha_{j}(t)+\phi_{i,j}) \tag{1}\]
for some values \(B_{i,j}>0\) and \(\phi_{i,j}\in[-\pi,\pi]\). In other words, each observation vector \(y_{i}(t)\) is a sum of all of the sources of \(x(t)\), each modified by a relative magnitude and phase at each measurement. This model was chosen as a simplification of the case of a receiver array with multiple, directional elements receiving mixtures of the source signals, as might be encountered in radar, sonar, or communications applications. The directionality of the receiver elements along with the physical spacing of the source signals induces differences in magnitude among the components of each mixture. Likewise, the physical spacing of the receiver's antenna elements induces a small time delay between the reception of different sources, which, if small, can be approximated by a phase-shift [13].
An effective way to ensure that that the map \(f:\mathbb{R}^{2n}\rightarrow\mathbb{R}^{2m}\) (\(m\geq n\)) exists and is a homeomorphism (or embedding) is to let \(f\) be defined by a matrix \(T\) such that \(w(t)=Tz(t)\). Then, \(T\) is a linear map, which is guaranteed to be a homeomorphism (or embedding) if \(T\) has full column rank of \(2n\). Letting \(B_{i,j}=R_{i,j}\cdot A_{i}\) for some \(R_{i,j}\), and using the trigonometric identity for angle sums, we rewrite \(y_{i}(t)\) as:
\[y_{i}(t)=\sum_{j=1}^{n}R_{i,j}\,\text{cos}(\phi_{i,j})\,x_{j}(t)-R_{i,j}\, \text{sin}(\phi_{i,j})\,\widetilde{x}_{j}(t) \tag{2}\]
where \(\widetilde{x}_{j}(t)\) is the Hilbert transform of \(x_{j}(t)\). Based on the form of Eq. 2, we can represent each component of \(y(t)\) as a linear combination of the components of \(z(t)\): the coefficient terms \(R_{i,j}\,\text{cos}(\phi_{i,j})\) and \(R_{i,j}\,\text{sin}(\phi_{i,j})\) form the entries of \(T\). By taking the Hilbert transform of each \(y_{i}(t)\), we can immediately obtain a second set of \(m\) observations given by:
\[\widetilde{y}_{i}(t)=\sum_{j=1}^{n}R_{i,j}\,\text{sin}(\phi_{i,j})\,x_{j}(t)+R _{i,j}\,\text{cos}(\phi_{i,j})\,\widetilde{x}_{j}(t) \tag{3}\]
where each \(R_{i,j}\) and \(\phi_{i,j}\) are unchanged between \(y_{i}(t)\) and its Hilbert transform \(\widetilde{y}_{i}(t)\). This second set of \(m\) observations is likewise a linear combination of the components of \(z(t)\), providing the additional \(m\) rows of \(T\). Therefore, \(w(t)\) is defined as \([y_{i}(t),\widetilde{y}_{i}(t)]^{\top}\), and \(w(t)=Tz(t)\). We note that in practice the embedding observation vector \(\widetilde{y}_{i}(t)\) can be obtained either through direct measurement (e.g., analog phase shifter circuitry), or through digital analysis via the discrete Hilbert transform.
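In discrete time, the embedding just described reduces to a few lines of array manipulation. A minimal sketch, assuming the \(m\) observations are the rows of a real-valued array (an illustration, not the MATLAB code used later):

```python
import numpy as np
from scipy.signal import hilbert

def hilbert_embed(Y, trim_frac=0.1):
    """Embed m real observations into a 2m-dimensional point cloud.

    Y: (m, n_samples) array of observation signals y_i[n].
    Returns an (n_kept, 2m) array whose rows are samples of w[n].
    """
    analytic = hilbert(Y, axis=1)               # y_i[n] + j*y_tilde_i[n]
    n = Y.shape[1]
    lo, hi = int(trim_frac * n), n - int(trim_frac * n)
    W = np.concatenate([analytic.real, analytic.imag], axis=0)   # shape (2m, n)
    return W[:, lo:hi].T                        # trim window edges, one point per sample
```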
A useful result of requiring that the map \(f:\mathbb{R}^{2n}\rightarrow\mathbb{R}^{2m}\)\((m\geq n)\) be a linear transformation \(T\) is that if \(T\) exists and is full rank, then there also must exist a pseudo-inverse of \(T\) that recovers the source vectors from the observation vectors. Therefore when the method is successful, in addition to the mere assertion that there are \(n\) sources in the mixture, we can
further assert that the \(n\) sources are _able to be unmixed_ by finding the appropriate pseudoinverse, e.g., via a least squares fit of transformed data to the \(n\)-dimensional square torus. This is a powerful insight that will be explored in future work.
Another potential benefit of this method is that the only constraint on \(T\) is that \(T\) be full rank, and so exact knowledge of the entries of \(T\) (i.e., each \(B_{i,j}\) and \(\phi_{i,j}\)) is unnecessary. In practical terms, this means that prior knowledge of the array geometry, incidence angle, propagation velocity, or wave frequencies are generally not required for the method to work. This could allow the method to be used on mobile, distributed, and/or dynamic arrays of receiver elements.
### _Defining Independence in the Observations_
As explained in Section II-B, \(T\) must have full column rank under this "Hilbert embedding" for our topological feature recovery strategy to work, so we now discuss the conditions under which this is true. Label each row of \(T\) corresponding to the equation \(y_{i}(t)\) as \(r_{i}(t)\) and each row of \(T\) corresponding to the equation \(\widetilde{y}_{i}(t)\) as \(\widetilde{r}_{i}(t)\). Then, each pair of rows \([r_{i}(t),\widetilde{r}_{i}(t)]^{\top}\) forms \(n\) blocks of size \(2\times 2\) of the form:
\[\begin{bmatrix}R_{i,j}\cos(\phi_{i,j})&-R_{i,j}\sin(\phi_{i,j})\\ R_{i,j}\sin(\phi_{i,j})&R_{i,j}\cos(\phi_{i,j})\end{bmatrix} \tag{4}\]
where the terms have the same meanings as in Eqs. 2 and 3. This is the familiar form of a matrix in \(\mathbb{C}^{m\times n}\) as implemented in \(\mathbb{R}^{2m\times 2n}\). As such, we construct a dual of the matrix \(T\), called \(U\), where each complex term of \(U\) replaces each \(2\times 2\) block from Eq. 4 with \(U_{i,j}=R_{i,j}\angle\phi_{i,j}\) (in phasor notation). Under this construction, we can say that \(T\) has full column rank of \(2n\) if and only if \(U\) has full column rank \(n\). Labeling each row of \(U\) as \(u_{i}\) for \(i\in[1,m]\), we note that each \(u_{i}\) corresponds to the coefficients of the complex observation vector \(v_{i}(t)=y_{i}(t)+\sqrt{-1}\,\widetilde{y}_{i}(t)\). Then, for \(U\) to possess rank \(n\), there must exist a subset of \(n\) rows chosen from \(u_{i}\) that are linearly independent. Since the effect of multiplying the complex vector \(v_{k}(t)\) by a complex constant \(c_{k}=A_{k}\angle\theta_{k}\) (\(A_{k}\in\mathbb{R}\), \(\theta_{k}\in[-\pi,\pi]\)) is to multiply the magnitudes of the associated \(y_{i}(t)\) and \(\widetilde{y}_{i}(t)\) vectors by \(A_{k}\) and add \(\theta_{k}\) to the phases of \(y_{i}(t)\) and \(\widetilde{y}_{i}(t)\), we can informally say that a set of observations \(y(t)\) are "independent" if no element \(y_{i}(t)\) is a linear combination (over \(\mathbb{R}\)) of any phase-shifted versions of the others. Consequently, it is the relative ratios in the magnitudes and relative differences in the phases of each component among the \(y_{i}(t)\) observations that determine the rank of \(T\).
We suspect that the condition for a full rank \(T\) will hold in most practical instances where \(m\gg n\) (e.g., phased array with many elements) as only \(n\) of the \(m\) observation vectors must be "independent" in the meaning given above. However, in cases where there are not expected to be \(n\) independent observation vectors available (e.g., \(m<n\)), it may be possible to derive additional independent observation vectors through the use of a Time Delay Embedding (TDE), as discussed in [11]. A full discussion of the theory, usage and drawbacks of TDEs as embedding functions is beyond the scope of this paper; the interested reader is referred to references [9, 11, 14, 15]. With respect to monocomponent sources, TDEs have the potential to induce relative phase shifts among the source components in a given mixture [9], and are therefore a potential source for additional independent measurement vectors when there are fewer array elements than sources present in the mixture.
### _Topological Computation via Persistent Homology_
As discussed in Sections II-A and II-B, once we have obtained a properly embedded analysis signal \(w(t)\), our next step is to compute the Betti number sequence corresponding to the topology of the phase portrait of \(w(t)\). The TDA tool we choose to perform this computation with is Persistent Homology (PH) [16], which has been used to study the topology of datasets with many real-world applications [17, 18, 19, 20]. We direct the unfamiliar reader to reference [21] for an introduction to the theory and computation of PH. Many PH computation packages have been developed in recent years [22]; in Section III, we use the JavaPlex software to compute PH due to its simple integration into MATLAB [23].
Treating the PH computation as a "black box" processing step, we can estimate the Betti number sequence of the input manifold through simple analysis of the output "barcode" plot [23]. For the purposes of this paper, we determine the Betti number sequence by simply counting the number of features in the barcode plot that persist for at least half of the PH computation interval in each dimension. We use this sequence to compute the number of sources by finding an integer value \(n\) for which our sequence matches the coefficients of the polynomial \((1+q)^{n}\), as discussed in Section II-A. This integer value, \(n\), is the output of the method, providing the estimate of the number of sources in the mixture.
In practice, we need not actually search through all possible \(n\) to find a match. As the coefficients of \((1+q)^{n}\) follow from the binomial theorem, and \(\binom{n}{1}=n\), we let our initial "guess" of \(n\) equal the second Betti number (corresponding to homology group \(H_{1}\)) determined from the barcode plot. We then verify that the remainder of Betti numbers from the sequence match the remaining coefficients of \((1+q)^{n}\). If they all match, we have confidence in our computation of the topology of \(w(t)\) and thus \(z(t)\), so we assert that there are \(n\) sources to be unmixed. If they do not match, then the Betti numbers do not conform to those of a torus \(\mathbb{T}^{n}\), as there are no other possible values of \(n\) which could produce the obtained Betti number sequence. We assume in the non-matching case that our computation has failed and we cannot accurately estimate the number of sources; this can occur, for example, when the mixing matrix \(T\) is either singular or otherwise badly conditioned.
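The counting and matching rules above translate directly into code. A minimal sketch (library-agnostic; the barcode format is assumed to be a list of (birth, death) pairs per dimension):

```python
from math import comb

def betti_from_barcodes(barcodes, max_filtration):
    """Count bars persisting for at least half of the filtration interval.

    barcodes: list indexed by dimension; each entry is a list of (birth, death)
    pairs, where death may be float('inf') for essential classes.
    """
    threshold = 0.5 * max_filtration
    return [sum(1 for birth, death in bars
                if min(death, max_filtration) - birth >= threshold)
            for bars in barcodes]

def estimate_num_sources(betti):
    """Match the Betti sequence against the coefficients of (1+q)**n."""
    if len(betti) < 2:
        return None
    n = betti[1]                                   # initial guess: second Betti number
    expected = [comb(n, k) for k in range(len(betti))]
    return n if betti == expected else None        # None signals a failed computation

# e.g. estimate_num_sources([1, 3, 3, 1, 0, 0, 0, 0, 0]) returns 3
```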
## III Example Computation with Three Monocomponent Sources
We now present a proof-of-concept demonstration of the method using three synthetic digital signals in MATLAB. While unknown to the receiver, the received signals are a
mixture of three independent monocomponent sources with nonstationary frequency characteristics as could be encountered in a radar scenario. The first source is a "barrage jammer" waveform whose instantaneous frequency continuously sweeps the Nyquist range, while the other two are "chirp" signals that begin and end at disparate instantaneous frequencies. Plots of the spectrograms (i.e., short-time Fourier transforms) of the three source signals, as well as an example of a noisy mixture of the three sources, are provided in Figure 1. These signals were chosen somewhat arbitrarily as an example of wideband, nonstationary signals that overlap at irregular points in the time and frequency domains and are therefore inseparable through classical linear filtering. The specific parameters (e.g., frequency sweep ranges, initial phase, etc.) for these chosen signals are irrelevant; our topological method is general enough that virtually any form of monocomponent signals is viable under this method. The authors have conducted many additional trials using alternate monocomponent signal types with results similar to those presented herein.
Assuming that the receiver array contains eight elements (eight being an arbitrary number substantially larger than the expected three sources), we generate eight random values of \(R_{i,j}\in[0.75,1.25]\) and \(\phi_{i,j}\in[-\pi,\pi]\) and create the eight observation signals, \(y_{i}[n]\), according to a discretized version of Eq. 1. Each observation vector contains 30000 samples, equivalent to a 1-MHz sampling frequency observed over 30 milliseconds. Additive white Gaussian noise (AWGN) is added separately to each observation signal with a signal-to-noise ratio (SNR) randomly selected from 15 to 25 dB. A variable SNR on each measurement is used to provide some evidence of robustness in the presence of nonspatially white noise. As discussed in Section II-B, we compute the Hilbert transform of each of our observed \(y_{i}[n]\) to obtain \(\widetilde{y}_{i}[n]\). As a final preparatory step, we remove the first and last 10% of samples from each \(y_{i}[n]\) and \(\widetilde{y}_{i}[n]\) to minimize the windowing effects of computing the Hilbert transform of a finite-duration signal. These trimmed signals are combined into the 16-dimensional input signal \(w[n]\) via concatenation.
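For concreteness, the observation-generation step can be sketched as follows (a NumPy re-implementation of the setup just described, not the original MATLAB script; the chirp-like phase functions merely stand in for the jammer and chirp waveforms of Figure 1):

```python
import numpy as np

rng = np.random.default_rng(0)
fs, duration, m, n_src = 1e6, 30e-3, 8, 3
t = np.arange(0, duration, 1 / fs)

# placeholder instantaneous-phase functions alpha_j(t) for the three sources
f_start = np.array([10e3, 120e3, 300e3])
f_end = np.array([480e3, 400e3, 60e3])
alpha = 2 * np.pi * (f_start[:, None] * t
                     + 0.5 * (f_end - f_start)[:, None] * t**2 / duration)

R = rng.uniform(0.75, 1.25, size=(m, n_src))           # relative magnitudes
phi = rng.uniform(-np.pi, np.pi, size=(m, n_src))      # relative phases
Y_clean = np.einsum('ij,ijn->in', R, np.cos(alpha[None, :, :] + phi[:, :, None]))

# independent AWGN per element, SNR drawn from 15-25 dB
snr_db = rng.uniform(15, 25, size=m)
noise_var = (Y_clean**2).mean(axis=1) / 10**(snr_db / 10)
Y = Y_clean + rng.normal(scale=np.sqrt(noise_var)[:, None], size=Y_clean.shape)
```

Passing `Y` through a Hilbert embedding of the kind sketched in Section II-B then yields the 16-dimensional point cloud.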
We then provide the 16-dimensional signal \(w[n]\) as the point-cloud input to JavaPlex. We use the Witness Complex construction with 150 landmark points chosen using the sequential min-max procedure, up to a maximum filtration distance of 0.24 [23]. The results are provided in barcode format in Figure 2, showing persistent Betti numbers of \(\{1,3,3,1,0,0,0,0,0\}\). As up to eight sources could be detected with this method, the PH was computed over dimensions 0-8; however, there were no features found in dimensions 4-8, so these empty results are omitted from Figure 2 to save space. It is trivial to identify that the computed Betti number sequence of \(\{1,3,3,1\}\) matches the coefficients of the polynomial \((1+q)^{3}=1+3q+3q^{2}+q^{3}\), thus the method computes that there are exactly 3 sources making up the mixed observation vectors, as was desired.
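The sequential min-max landmark selection used here is itself a simple greedy procedure; a NumPy sketch is shown below for illustration (the actual computation was carried out with JavaPlex from MATLAB):

```python
import numpy as np

def maxmin_landmarks(points, n_landmarks, seed=0):
    """Greedy sequential min-max selection of landmark indices.

    Each new landmark maximizes its distance to the landmarks chosen so far.
    points: (N, d) point cloud.
    """
    rng = np.random.default_rng(seed)
    idx = [int(rng.integers(len(points)))]        # random initial landmark
    dist = np.linalg.norm(points - points[idx[0]], axis=1)
    for _ in range(n_landmarks - 1):
        nxt = int(np.argmax(dist))                # farthest point from the current set
        idx.append(nxt)
        dist = np.minimum(dist, np.linalg.norm(points - points[nxt], axis=1))
    return np.asarray(idx)
```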
For comparison purposes, we also computed the outputs of the MDL and AIC estimators as defined in [6] on the same data set. Surprisingly, these estimators improperly estimate that the mixtures consist of 7 independent sources. Similar results are obtained when the parameters of the mixtures are changed and different noise levels are generated. As discussed in [5], this error in the MDL and AIC estimators is likely due to variations in AWGN power among the observed signals. While far from conclusive, this difference in outcomes hints that the topological method may provide additional robustness over the statistical methods in certain scenarios, and thus the method warrants additional research.
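For reference, these baseline estimators are typically computed in the standard Wax-Kailath form from the eigenvalues of the sample autocorrelation matrix. A sketch follows; the exact definitions in [6] may differ in detail, so this is indicative only:

```python
import numpy as np

def mdl_aic_order(X):
    """Estimate the number of sources from m sensors over T snapshots.

    X: (m, T) array of (possibly complex) observations.
    Returns (k_mdl, k_aic).
    """
    m, T = X.shape
    R = X @ X.conj().T / T                         # sample autocorrelation matrix
    lam = np.sort(np.linalg.eigvalsh(R).real)[::-1]
    mdl, aic = [], []
    for k in range(m):
        tail = lam[k:]
        log_ratio = np.log(tail).mean() - np.log(tail.mean())   # ln(geometric/arithmetic mean)
        mdl.append(-T * (m - k) * log_ratio + 0.5 * k * (2 * m - k) * np.log(T))
        aic.append(-2 * T * (m - k) * log_ratio + 2 * k * (2 * m - k))
    return int(np.argmin(mdl)), int(np.argmin(aic))
```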
Fig. 1: Short-time Fourier Transform (STFT) Spectrograms of clean source signals, along with one example of a mixture of all three sources with AWGN at 20 dB SNR. As can be seen, all three signals are highly nonstationary causing overlap in time and frequency at various points.
Fig. 2: Barcodes corresponding to the PH computation in dimensions 0-3. The Betti numbers of \(\{1,3,3,1\}\) are found by counting the number of lines in each dimension which persist over a majority of the filtration interval.
## IV Conclusions and Future Work
In this paper, we introduced a topology-based method to estimate the number of source signals present in a linear, monocomponent mixture. In particular, we showed how the analytic form of a mixture of \(n\) monocomponent signals generates a phase space which is homeomorphic to the \(n\)-torus, so by using TDA tools to recover the topological features of the mixture, we can determine the number of signals making up the mixture. While this method is only directly applicable to constant-amplitude monocomponent source signals in its current form, these signals are nonetheless commonplace in wireless communications and radar scenarios. Therefore, the method could have substantial real-world applicability without significant modification.
Our provided demonstration of the method is merely a proof-of-concept, as there are many aspects to the method that require further research in order to refine. In particular, the TDA tool of PH is known to suffer from high computational complexity and sensitivity to noisy outliers in the data set [23]. These issues are being addressed by other researchers in the literature [22], but the nuances are complex. Accordingly, we omitted such discussions in this paper as they distract from the presentation of the underlying theory of the technique. Since we consider the PH computation as a "black box" for computing the Betti number sequence of the underlying mixture, any improvements in the robustness, accuracy, or efficiency of the PH computation can likewise be leveraged by our method. We suspect additional preprocessing of the observation signals will also play a role in improving the method; these techniques will be explored in future work.
As briefly demonstrated in Section III, the method can potentially outperform the standard MDL and AIC methods in some specific scenarios, such as when the received noise is not spatially white. While this outcome encourages further research into the method, a more robust investigation of this method is needed to adequately compare performance with the existing statistical methods. Such a comparison requires additional control over the scenario model, any preprocessing and optimization steps, and variations in the numbers of sources, signal types, measurements, and noise types/levels. Due to the large number of experimental parameters and alternate techniques to test, we have opted to save such an investigation for future publication, and present only the theory and proof-of-concept of our method here.
More generally, we believe the novelty of the approach is of sufficient interest: eschewing the usual tools of statistical analysis to instead analyze signals on the basis of shape. As topological data analysis is a rapidly evolving field, we believe that additional practical algorithms and tools will emerge to tackle current problems in many fields such as signal processing, control theory, and communications. As such, we intend to not only improve on this particular application, but also leverage the topological approach taken to search for novel solutions to other types of problems. We strongly encourage other researchers to join us in this endeavour.
|
2310.09487 | Proving Rho Meson be a Dynamical Gauge Boson of Hidden Local Symmetry | The rho meson has long been successfully identified with a dynamical gauge
boson of the Hidden Local Symmetry (HLS) $H_{\rm local}$ in the nonlinear sigma
model $G/H$ gauge equivalent to the model having the symmetry $G_{\rm
global}\times H_{\rm local}$, with $G= [SU(2)_L \times SU(2)_R]\simeq O(4),
H=SU(2)_{V}\simeq O(3)$, however under a hitherto unproven assumption that its
kinetic term is dynamically generated, together with an ad hoc choice of the
auxiliary field parameter "$a=2$". We prove this assumption, thereby solving
the long-standing mystery: The rho meson kinetic term is generated simply by
the large $N$ limit of the Grassmannian model $G/H=O(N)/[O(N-3)\times O(3)] $
gauge equivalent to $O(N)_{\rm global}\times [O(N-3)\times O(3)]_{\rm local}$,
extrapolated to $N=4$, $O(4)_{\rm global}\times O(3)_{\rm local}$, with all the
phenomenologically successful "$a=2$ results" i.e., $\rho$-universality, KSRF
relation and the Vector Meson Dominance, realized {\it independently of the
parameter "$a$".} This in turn establishes validity of the large $N$ dynamics
at quantitative level directly by the experiments. The relevant cutoff reads
$\Lambda \simeq 4 \pi F_\pi$ for $N=4$, which is regarded as a matching scale
of the HLS as a "magnetic dual" to QCD. Skyrmion is stabilized by such a
dynamically generated rho meson without recourse to the underlying QCD, further
signal of the duality. The unbroken phase with massless rho meson may be
realized as a novel chiral restored hadronic phase in hot/dense QCD. | Koichi Yamawaki | 2023-10-14T04:24:56Z | http://arxiv.org/abs/2310.09487v1 | # Proving Rho Meson be a Dynamical Gauge Boson of Hidden Local Symmetry
###### Abstract
The rho meson has long been successfully identified with a dynamical gauge boson of the Hidden Local Symmetry (HLS) \(H_{\rm local}\) in the nonlinear sigma model \(G/H\) gauge equivalent to the model having the symmetry \(G_{\rm global}\times H_{\rm local}\), with \(G=[SU(2)_{L}\times SU(2)_{R}]\simeq O(4),H=SU(2)_{V}\simeq O(3)\), however under a hitherto unproven assumption that its kinetic term is dynamically generated, together with an ad hoc choice of the auxiliary field parameter "\(a=2\)". We prove this assumption, thereby solving the long-standing mystery: The rho meson kinetic term is generated simply by the large \(N\) limit of the Grassmannian model \(G/H=O(N)/[O(N-3)\times O(3)]\) gauge equivalent to \(O(N)_{\rm global}\times[O(N-3)\times O(3)]_{\rm local}\), extrapolated to \(N=4\), \(O(4)_{\rm global}\times O(3)_{\rm local}\), with all the phenomenologically successful "\(a=2\) results" i.e., \(\rho\)-universality, KSRF relation and the Vector Meson Dominance, realized _independently of the parameter "\(a\)"_. This in turn establishes validity of the large \(N\) dynamics at quantitative level directly by the experiments. The relevant cutoff reads \(\Lambda\simeq 4\pi F_{\kappa}\) for \(N=4\), which is regarded as a matching scale of the HLS as a "magnetic dual" to QCD. Skyrmion is stabilized by such a dynamically generated rho meson without recourse to the underlying QCD, further signal of the duality. The unbroken phase with massless rho meson may be realized as a novel chiral restored hadronic phase in hot/dense QCD.
## I Introduction
Since its proposal [1; 2; 3; 4; 5] (for reviews see Ref.[6; 7; 8]), identifying the rho meson as a dynamical gauge boson of the Hidden Local Symmetry (HLS) \(H_{\rm local}\) has been widely accepted in the model having the symmetry \(G_{\rm global}\times H_{\rm local}\), with \(G=[SU(2)_{L}\times SU(2)_{R}]\simeq O(4)\) and \(H=SU(2)_{V}\simeq O(3)\), where its Lagrangian consists of two independent invariants \({\cal L}_{\rm HLS}={\cal L}_{A}+a{\cal L}_{V}\), with \(a\) arbitrary parameter. This is gauge equivalent to the nonlinear sigma model, \({\cal L}_{\rm CCWZ}\), \(\hat{a}\) la Callan-Coleman-Wess-Zumino (CCWZ) [9; 10] based on the manifold \(G/H\): In the absence of the kinetic term of the HLS gauge boson it is merely an auxiliary field such that \({\cal L}_{V}=0\), and \({\cal L}_{\rm HLS}={\cal L}_{A}={\cal L}_{\rm CCWZ}\) after gauge fixing.
Once we _assume_, however, that its kinetic term, \({\cal L}_{\rm kinetic}\), is generated at quantum level by the dynamics of the nonlinear sigma model itself, thereby put by hand to the Lagrangian, \({\cal L}_{\rm HLS}\Rightarrow{\cal L}_{A}+a{\cal L}_{V}+{\cal L}_{\rm kinetic}\), novel physics come out [1; 2; 3; 4; 5; 6; 7; 8]: All the successful phenomenological results, such as the universality of the rho meson coupling (\(\rho\) universality), the Kawarabayashi-Suzuki-Ryazzudin-Fayyazudin (KSRF) relation, the Vector Meson Dominance (VMD), are derived for a _particular parameter choice_\(a=2\) in the resultant Lagrangian (_at tree level_), in such a way that \(a{\cal L}_{V}\) becomes the HLS gauge-invariant mass terms of the \(\rho\) meson which contains \(\rho\) mass, \(\rho\) couplings, additional \(\pi\) self-couplings, etc..
For all the phenomenological success of the HLS model of the rho meson, however, the basic assumption of the dynamical origin of the kinetic term and the particular parameter choice \(a=2\) has never been proved within the dynamics of HLS model itself. #1
Footnote #2: Preliminary results of this paper were given as the supplementary ones in Ref.[11] which is mainly addressed to a subject on a possible dynamical gauge boson of HLS within the Higgs sector of the Standard Model, different from the present one, that in the QCD, but the details of the relevant calculations in the present paper may be found in Ref.[11].
In this paper #2 we resolve this long-standing mystery as simply a _consequence of the nonperturbative dynamics of the large \(N\) limit of the nonlinear sigma model_ based on the Grassmannian manifold \(G/H=\)\(O(N)/[O(N-p)\times O(p)\), with \(p=3=\) fixed, which is reduced to the relevant case \(G/H=O(4)/O(3)\simeq SU(2)_{L}\times SU(2)_{R}/SU(2)_{V}\) for the
extrapolation \(N\to 4\), with the rho meson being the dynamical gauge boson of \(O(3)_{\rm local}\simeq[SU(2)_{V}]_{\rm local}\).
It is in fact well known that the HLS gauge bosons in many nonlinear sigma models, such as the \(CP^{N-1}\) model with \(G/H=U(N)/[U(N-1)\times U(1)]\simeq SU(N)/[SU(N-1)\times U(1)]\) gauge equivalent to the model \(SU(N)_{\rm global}\times U(1)_{\rm local}\), do acquire kinetic term for \(U(1)_{\rm local}\) gauge boson at quantum level in the large \(N\) limit [6; 7; 12; 13; 14; 15; 16; 17; 18; 19; 20]. It was further shown (in the context irrelevant to the rho meson physics, though) that the HLS gauge bosons \(O(p)_{\rm local}\) and \(U(p)_{\rm local}\) in the Grassmannian models \(G/H=O(N)/[O(N-p)\times O(p)]\)[21] and \(G/H=\)\(U(N)/[U(N-p)\times U(p)]\), respectively [21; 22] are dynamically generated in the large \(N\) limit. However it was shown only in a specific parameterization "covariant derivative" type which is just _a particular \(a=2\)_ choice from the onset (see [11] and Eq.(10)).
Here we show that _not just the generation of the \(O(p)_{\rm local}\) gauge boson \(\rho_{\mu}\) but also all the successful "\(a=2\) results" are direct consequences of the pure dynamics at quantum level of the large \(N\) limit for arbitrary value_ of \(a\), thereby resolving the long-standing mystery of the rho meson simply on the firm dynamical base. This in turn provides yet another experimental verification of the large \(N\) reliability, this time even quantitatively, not just qualitatively #3
Footnote #3: It is known that the large \(N\) results remain qualitatively true even for the smallest value \(N=2\) in the \(CP^{N-1}\) model, as checked by the equivalent \(O(3)\) model exactly solvable in 2 dimensions. [16].
## II Grassmannian \(N\) extension
Let us define the generic HLS base [2; 6] of an \(N\times N\) real matrix field \(\xi(x)=\xi(\hat{\rho}^{(p)})\cdot\xi(\hat{\rho}^{(N-p)})\cdot\xi(\pi)\), which transforms under \(G_{\rm global}\times H_{\rm local}=O(N)_{\rm global}\times[O(N-p)\times O(p )]_{\rm local}\) as \(\xi(x)\to h(x)\cdot\xi(x)\cdot g^{-1}\) with \(h(x)\in[O(N-p)\times O(p)]_{\rm local}\,,g\in O(N)_{\rm global}\), where \(\xi(\pi)=e^{i\pi_{a}(x)X_{a}/f_{\pi}}\) is the CCWZ base for \(G/H=O(N)/[O(N-p)\times O(p)]\), with \(f_{\pi}\) being the (_bare/tree-level_) decay constant of the NG boson \(\pi\), while \(\xi(\check{\rho}^{(p)})=e^{i\check{\rho}^{(p)}(x)/f_{\pi}^{(p)}}\) and \(\xi(\check{\rho}^{(N-p)})=e^{i\check{\rho}^{(N-p)}(x)/f_{\pi}^{(N-p)}}\), with \(\check{\rho}^{(p)}=\check{\rho}^{(p)}_{a}S^{(p)}_{a}\) and \(\check{\rho}^{(N-p)}=\check{\rho}^{(N-p)}_{a}S^{(N-p)}_{a}\) being the would-be NG bosons to be absorbed into the HLS gauge bosons \(\rho^{(p)}_{\mu}\) and \(\rho^{(N-p)}_{\mu}\) of \(O(p)_{\rm local}\) and \(O(N-p)_{\rm local}\), with the (_bare/tree-level_) decay constant \(f_{\rho}^{(p)}\) and \(f_{\rho}^{(N-p)}\), respectively:
\[\xi(x)=\xi(\pi)\quad\left({\rm unitary\ gauge},\ \check{\rho}^{(N-p)}(x)=\check{ \rho}^{(p)}(x)=0\right). \tag{1}\]
Here the generators read \(X_{a}\in{\cal G}-{\cal H}\), \(S^{(p)}_{a}\in{\cal H}^{(p)}={\cal O}(p)\), \(S^{(N-p)}_{a}\in{\cal H}^{N-p}={\cal O}(N-p)\), with \({\rm tr}(T_{a}T_{b})=2\delta_{ab}\), \({\rm tr}(S_{a}X_{b})=0\), \(T_{a}=\{S_{a},X_{a}\}=-T_{a}^{t}\).
To study the large \(N\) limit in the Grassmannian models including the \(CP^{N-1}\) model it is customary to parameterize the HLS base as:#4
Footnote #4: \(p\times N\) degrees of freedom of \(\phi_{i\beta}\) consist of \(p\times(N-p)\) of \(\pi\), \(p\times(p-1)/2\) of \(\check{\rho}\), and \(p\times(p+1)/2\) of the constraints.
\[\xi(x)_{\alpha\beta} = \sqrt{\frac{G}{N}}\left(\begin{array}{c}\phi_{i,\beta}(x)\\ \Phi_{k,\beta}(x)\end{array}\right),\ \alpha=(i,k),\ \beta=(j,l),\,i,j=1,\cdots,p\ \ ;k,l=p+1,\cdots N\,, \tag{2}\] \[\xi^{t}\cdot\xi = \frac{G}{N}\left(\phi^{t}\phi+\Phi^{t}\Phi\right)=\openone,\quad \frac{G}{N}\equiv\frac{1}{f_{\pi}{}^{2}},\] \[\xi\cdot\xi^{t} = \frac{G}{N}\left(\begin{array}{cc}\phi\phi^{t}&\phi\Phi^{t}\\ \Phi\phi^{t}&\Phi\Phi^{t}\end{array}\right)=\left(\begin{array}{cc}\openone_{p\times p}&0\\ 0&\openone_{(N-p)\times(N-p)}\end{array}\right)=\openone, \tag{3}\]
where \(G\equiv N/f_{\pi}{}^{2}\) is the (bare) coupling constant to be fixed in the large \(N\) limit (s.t. \(f_{\pi}{}^{2}={\cal O}(N)\)).
The covariantized Maurer-Cartan one-form reads:
\[\hat{\alpha}_{\mu}\ \equiv\ \frac{1}{i}D_{\mu}\xi\cdot\xi^{t}=\frac{G}{iN}\left( \begin{array}{c}\partial_{\mu}\phi-i\rho^{(p)}_{\mu}\phi\\ \partial_{\mu}\Phi-i\rho^{(N-p)}_{\mu}\Phi\end{array}\right)\cdot\left(\phi^{t} \ \Phi^{t}\right)=\hat{\alpha}_{\mu,\perp}+\hat{\alpha}_{\mu,||}, \tag{4}\]
where \(\hat{\alpha}_{\mu,\perp}\equiv\frac{1}{2}{\rm tr}\left(\hat{\alpha}_{\mu}X^{a} \right)X^{a}\), \(\hat{\alpha}_{\mu,||}\equiv\frac{1}{2}{\rm tr}\left(\hat{\alpha}_{\mu}S^{a} \right)S^{a}\) are
\[\hat{\alpha}_{\mu,\perp}=\alpha_{\mu,\perp}=\left(\begin{array}{cc}0&\frac{G} {iN}\partial_{\mu}\phi\cdot\Phi^{t}\\ \frac{G}{iN}\partial_{\mu}\Phi\cdot\phi^{t}&0\end{array}\right),\quad\hat{\alpha}_{ \mu,||}=\left(\begin{array}{cc}\frac{G}{iN}\partial_{\mu}\phi\cdot\phi^{t}-\rho ^{(p)}_{\mu}&0\\ 0&\frac{G}{iN}\partial_{\mu}\Phi\cdot\Phi^{t}-\rho^{(N-p)}_{\mu}\end{array}\right),\]
all transforming homogeneously as
\[\left(\hat{\alpha}_{\mu,\perp},\hat{\alpha}_{\mu,||}\right)\ \to\ h(x)\cdot\left(\hat{\alpha}_{\mu,\perp},\hat{\alpha}_{\mu,||} \right)\cdot h^{-1}(x)\,, \tag{5}\]
with \(h(x)\in{\cal H}\) for \(H=\left[O(N-p)\times O(p)\right]_{\rm local}\).
Thus the HLS Lagrangian consists of three independent invariants at the lowest derivative: [11]
\[{\cal L}^{(N,p)} = {\cal L}_{A}+a^{(p)}{\cal L}_{V}^{(p)}+a^{(N-p)}{\cal L}_{V}^{(N-p) }\,, \tag{6}\]
where
\[{\cal L}_{A} = \frac{{f_{\pi}}^{2}}{4}{\rm tr}\left(\hat{\alpha}_{\mu,\perp}^{2}\right)=-\frac{G}{2N}{\rm tr}\left(\phi^{t}\partial_{\mu}\phi\cdot\Phi^{t}\partial^{\mu}\Phi\right)\] \[= \frac{1}{2}{\rm tr}\left(\partial_{\mu}\phi\partial^{\mu}\phi^{t}+\frac{G}{N}\left(\phi\partial_{\mu}\phi^{t}\right)^{2}\right)\] \[\mbox{(unitary gauge)} \longrightarrow {\cal L}_{\rm CCWZ}=\frac{{f_{\pi}}^{2}}{4}{\rm tr}\left(\alpha_{\mu,\perp}^{2}(\pi)\right)=\frac{1}{2}\left(\partial_{\mu}\pi_{a}\right)^{2}+\cdots\,, \tag{7}\]
with (unitary-gauge) \(\alpha_{\mu,\perp}\rightarrow\alpha_{\mu,\perp}(\pi)=\partial_{\mu}\xi(\pi) \cdot\xi^{t}(\pi)=\partial_{\mu}\xi(\pi)\cdot\xi^{\dagger}(\pi)\), and
\[a^{(p)}{\cal L}_{V}^{(p)} = \frac{a^{(p)}{f_{\pi}}^{2}}{4}{\rm tr}\left(\left[\hat{\alpha}_{ \mu,||}^{(p)}\right]^{2}\right)=\frac{1}{2}{\rm tr}\left[\frac{a^{(p)}}{2} \cdot\frac{N}{G}\left(\rho_{\mu}^{(p)}-i\frac{G}{N}\phi\partial_{\mu}\phi^{t} \right)^{2}\right] \tag{8}\] \[= \frac{(f_{\rho}^{(p)})^{2}}{4}{\rm tr}\left[\rho_{\mu}^{(p)}- \frac{\partial_{\mu}\check{\rho}^{(p)}}{f_{\rho}^{(p)}}-\frac{i\left[\partial_ {\mu}\pi,\pi\right]}{2{f_{\pi}}^{2}}+\cdots\right]^{2},\]
where we should impose a bare/tree relation between the two decay constants,
\[(f_{\rho}^{(p)})^{2}=a^{(p)}{f_{\pi}}^{2}=a^{(p)}\frac{N}{G}\,, \tag{9}\]
to normalize the kinetic term of the would-be NG boson \(\check{\rho}^{(p)}\) to the canonical form, and similarly for \(a^{(N-p)}{\cal L}_{V}^{(N-p)}\). In the unitary gauge, \(\check{\rho}^{(p)}=0\), Eq.(8) reads the mass term of \(\rho_{\mu}^{(p)}\) as usual in the HLS formalism, and so does \(a^{(N-p)}{\cal L}_{V}^{(N-p)}\) the mass term of \(\rho_{\mu}^{(N-p)}\).
Here we note that in contrast to \(O(p)_{\rm local}\) gauge boson, the kinetic term for the \(O(N-p)_{\rm local}\) gauge boson, carrying index running \(1,\cdots,N-p\) thus subject to all the planar diagram contributions in the large \(N\) limit, is _not_ dynamically generated #5 and stays as an auxiliary field (i.e., \({\cal L}_{V}^{(N-p)}=0\)) as was the case in the previous calculations for \(CP^{N-1}\) and Grassmannian models. #6
Footnote #5: \(O(N-p)_{\rm local}\) does not exist for \(N=4,p=3\) anyway.
Footnote #6: The \(SU(N-1)_{\rm local}\) gauge boson in \(CP^{N-1}\) model with \(G/H=SU(N)/[SU(N-1)\times U(1)]\), which carries the index running through \(1,\cdots,N-1\), is not dynamically generated in the large \(N\) limit, in contrast to the \(U(1)_{\rm local}\) part [6; 7; 12; 13; 14; 15; 16; 17; 18; 19; 20]. The same is true for \(G/H=O(N)/[O(N-p)\times O(p)]\) and \(G/H=U(N)/[U(N-p)\times U(p)]\), with \(O(N-p)_{\rm local}\) and \(U(N-p)_{\rm local}\), respectively [21; 22]. Similarly, a popular \(N\) extension \(G/H=O(N)/O(N-1)\) gauge equivalent to the model \(O(N)_{\rm global}\times O(N-1)_{\rm local}\) has no dynamical gauge boson for \(O(N-1)_{\rm local}\) and is irrelevant to the rho meson.
Footnote #7: In the broken phase this is simply equivalent to the constraint Eq.(3), while in the unbroken phase the multiplier is only a correct description. See later discussions.
Then without loss of generality the starting Lagrangian, Eqs.(6)-(8), is simplified as \(a^{(p)}{\cal L}_{V}^{(p)}\equiv a{\cal L}_{V}\), \(a^{(N-p)}{\cal L}_{V}^{(N-p)}=0\), \(\rho_{\mu}\equiv\rho_{\mu}^{(p)}\), \(\hat{\rho}=\check{\rho}^{(p)}\), \(f_{\rho}^{2}\)\(\equiv(f_{\rho}^{(p)})^{2}=af_{\pi}^{2}\), etc.:
\[{\cal L}={\cal L}_{A}+a{\cal L}_{V}=\frac{1}{2}{\rm tr}\left[\partial_{\mu}\phi\partial^{\mu}\phi^{t}+\frac{1}{2}\cdot\frac{aN}{G}\rho_{\mu}^{2}-ia\rho^{\mu}\phi\partial_{\mu}\phi^{t}\right]+\frac{1}{2}{\rm tr}\left[\left(1-\frac{a}{2}\right)\frac{G}{N}\left(\phi\partial_{\mu}\phi^{t}\right)^{2}-\eta\left(\phi\phi^{t}-\frac{N}{G}\openone\right)\right]\,, \tag{10}\]
where \({\rm tr}\) and \(\openone\) should read \({\rm tr}_{p\times p}\) and \(\openone_{p\times p}\), respectively, and \((\rho_{\mu})_{ij}=\rho_{\mu}^{a}(S^{a})_{ij}\) with \({\rm tr}(S^{a}S^{b})=2\delta^{ab}\), and the \(p\times p\) matrix Lagrange multiplier \(\eta_{i,j}(x)\) is used for the constraint Eq.(3) #7 as in the standard large \(N\) arguments of \(CP^{N-1}\)[6; 7; 12; 13; 14; 15; 16; 17; 18; 19; 20] and other Grassmannian models [21; 22]. For \(N=4,p=3\) Eq.(10) with \(O(4)_{\rm global}\times O(3)_{\rm local}\) is identical to the standard HLS Lagrangian [1; 2; 1; 3; 14; 15; 16; 17; 18; 19; 20] for the rho meson with \([SU(2)_{L}\times SU(2)_{R}]_{\rm global}\times[SU(2)_{V}]_{\rm local}\). It is now clear [11] that Eq.(10) coincides with that of the conventional "covariant derivative type" Lagrangian [21] for a particular choice \(a=2\), with \(\phi\phi^{t}=(N/G)\openone\) (see Eq.(3)).
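As a cross-check of Eq.(10), expanding the square in Eq.(8) under the trace gives
\[\frac{1}{2}{\rm tr}\left[\frac{a}{2}\cdot\frac{N}{G}\left(\rho_{\mu}-i\frac{G}{N}\phi\partial_{\mu}\phi^{t}\right)^{2}\right]=\frac{1}{2}{\rm tr}\left[\frac{a}{2}\cdot\frac{N}{G}\rho_{\mu}^{2}-ia\rho^{\mu}\phi\partial_{\mu}\phi^{t}-\frac{a}{2}\cdot\frac{G}{N}\left(\phi\partial_{\mu}\phi^{t}\right)^{2}\right]\,,\]
so that adding \({\cal L}_{A}\) of Eq.(7) indeed reproduces Eq.(10), with the \(\left(\phi\partial_{\mu}\phi^{t}\right)^{2}\) terms combining into the coefficient \(\left(1-\frac{a}{2}\right)\frac{G}{N}\) and the \(\eta\) term implementing the constraint Eq.(3).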
From Eq.(10) the effective potential in the large \(N\) limit for \(\langle\mathbf{\phi}_{i,\beta}(x)\rangle=\sqrt{N}v(\delta_{i,j},0)\) (we took \(v\neq 0\) real, i.e., the unitary gauge \(\tilde{\rho}(x)=0\)) and \(\langle\eta_{i,j}(x)\rangle=\eta\,\delta_{i,j}\), takes the form (in \(D\) dimensions):
\[\frac{V_{\rm eff}\left(v,\eta\right)}{Np/2}=\eta\left(v^{2}-\frac{1}{G}\right)+ \int\frac{d^{D}k}{i(2\pi)^{D}}\ln\left(k^{2}-\eta\right), \tag{11}\]
where the (\(a\)-dependent) 1-PI contributions are sub-leading in the large \(N\) limit 10 and therefore the result is _independent of the parameter \(a\)_, in precisely the same form as that of the conventional "covariant derivative" parameterization of \(CP^{N-1}\) and the Grassmannian models corresponding to \(a=2\)[6; 7; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22], and hence yields the same gap equation:
Footnote 10: This observation is due to H. Ohki.
Footnote 11: The cutoff \(\Lambda\) can be removed for \(2\leq D<4\) (the theory is renormalizable), introducing the renormalized coupling at renormalization point \(\mu\) as \(1/G^{(R)}(\mu)\equiv 1/G-\int\frac{d^{D}k}{i(2\pi)^{D}}\frac{1}{\mu^{2}-k^{2}} \equiv\mu^{D-2}/g^{(R)}(\mu)\), \(1/G^{(R)}_{\rm crit}\equiv\int\frac{d^{D}k}{i(2\pi)^{D}}\left(\frac{1}{-k^{2} }-\frac{1}{\mu^{2}-k^{2}}\right)=\frac{\Gamma(2-D/2)}{(D/2-1)}\cdot\frac{\mu^ {D-2}}{(4\pi)^{D/2}}\equiv\mu^{D-2}/g^{(R)}_{\rm crit}\), s.t., \(1/G-1/G_{\rm crit}=\mu^{D-2}\left(1/g^{(R)}(\mu)-1/g^{(R)}_{\rm crit}\right)\) in the gap equation. The renormalized coupling \(g^{(R)}(\mu)\) has an ultraviolet fixed point at \(g^{(R)}_{\rm crit}\), \(0\leq g^{(R)}_{\rm crit}=(4\pi)^{D/2}\left(D/2-1\right)/\Gamma(2-D/2)<\infty \left(2\leq D<4\right)\), with the beta function \(\beta(g^{(R)}(\mu))=\mu\partial g^{(R)}(\mu)/\partial\mu=-(D-2)g^{(R)}(\mu) [g^{(R)}(\mu)-g^{(R)}_{\rm crit}]/g^{(R)}_{\rm crit}\). While for \(D=4\) the theory is not renormalizable, \(1/g^{(R)}_{\rm crit}\sim\Gamma(2-D/2)/(4\pi)^{2}\big{|}_{D\to 4}\sim\ln(\Lambda^{2}/\mu^{2})/(4 \pi)^{2}\), with the remaining log divergence identified in the cutoff notation.
\[\frac{1}{G}=v^{2}+\int\frac{d^{D}k}{i(2\pi)^{D}}\,\frac{1}{\eta-k^{2}}\,, \tag{12}\]

where

\[\frac{1}{G_{\rm crit}}\equiv\int\frac{d^{D}k}{i(2\pi)^{D}}\,\frac{1}{-k^{2}}=\frac{1}{\left(\frac{D}{2}-1\right)\Gamma(\frac{D}{2})}\frac{\Lambda^{D-2}}{(4\pi)^{\frac{D}{2}}},\qquad v^{2}_{\eta}\equiv\int\frac{d^{D}k}{i(2\pi)^{D}}\left(\frac{1}{-k^{2}}-\frac{1}{\eta-k^{2}}\right)=\frac{\Gamma(2-\frac{D}{2})}{\frac{D}{2}-1}\cdot\frac{\eta^{\frac{D}{2}-1}}{(4\pi)^{\frac{D}{2}}}.\]
The gap equation implies, as usual, a _second order phase transition_ between two phases: the (weak coupling) phase with the symmetry spontaneously broken, which is the same as at the classical level, and the (strong coupling) phase with the symmetry unbroken, which is a new phase arising at the quantum level:
\[\begin{split}&\text{(i)}\quad G<G_{\rm crit}:\ v\neq 0\,,\ \eta=0\ \text{(broken phase)}\\ & v^{2}=\frac{1}{G}-\frac{1}{G_{\rm crit}}>0\,,\\ &\text{(ii)}\quad G>G_{\rm crit}:\ v=0\,,\ \eta\neq 0\ \text{(unbroken phase)}\\ & v^{2}_{\eta}=\frac{1}{G_{\rm crit}}-\frac{1}{G}>0\,,\end{split} \tag{13}\]
with the phase transition point \(v=\eta=0\). We may define a full decay constant \(F_{\pi}\) at quantum level in the large \(N\) limit:
\[F^{2}_{\pi}\equiv Nv^{2}=N\left(\frac{1}{G}-\frac{1}{G_{\rm crit}}\right)={f_{ \pi}}^{2}-\frac{N}{\left(\frac{D}{2}-1\right)\Gamma(\frac{D}{2})}\frac{\Lambda ^{D-2}}{(4\pi)^{\frac{D}{2}}}\quad\to{f_{\pi}}^{2}-N\frac{\Lambda^{2}}{(4\pi) ^{2}}\ \left(D\to 4\right)\,. \tag{15}\]
This indicates that, when approaching the critical point from the broken phase, \(F^{2}_{\pi}\to 0\) due to the power divergence of \(1/G_{\rm crit}\) (quadratic divergence for \(D=4\)), similarly to the "Wilsonian matching" of the HLS model with the underlying QCD at the UV scale \(\Lambda\)[7].
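To make the phase structure in Eq.(13) concrete, the following sketch (an illustrative numerical check, not part of the original analysis) evaluates the gap equation in \(D=3\), where the integrals above reduce to the closed forms \(1/G_{\rm crit}=\Lambda/(2\pi^{2})\) and \(v_{\eta}^{2}=\sqrt{\eta}/(4\pi)\); the coupling and cutoff values below are arbitrary.

```python
import numpy as np

def gap_equation_D3(G, Lam):
    """Phase structure of the large-N gap equation in D=3 (all integrals finite).

    Closed forms following from the definitions above:
        1/G_crit = Lambda / (2 pi^2),    v_eta^2 = sqrt(eta) / (4 pi).
    Returns (phase, v^2, eta).
    """
    inv_G_crit = Lam / (2 * np.pi ** 2)   # critical coupling: G_crit = 2 pi^2 / Lambda
    inv_G = 1.0 / G
    if inv_G > inv_G_crit:                # weak coupling: broken phase, eta = 0
        return "broken", inv_G - inv_G_crit, 0.0
    v_eta2 = inv_G_crit - inv_G           # strong coupling: unbroken phase, v = 0
    return "unbroken", 0.0, (4 * np.pi * v_eta2) ** 2

# arbitrary example values: cutoff Lambda = 1, so that G_crit = 2 pi^2 ~ 19.7
for G in (10.0, 40.0):
    print(G, gap_equation_D3(G, Lam=1.0))
```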
## III Dynamical generation of rho meson
Now the (amputated) two-point function of \(\rho_{\mu}\) in the large \(N\) limit takes the form:
\[\Gamma^{(\rho)}_{\mu\nu}(q)=\left(\frac{a}{2}\right)\left(\frac{N}{G }\right)g_{\mu\nu}+\left(\frac{a}{2}\right)^{2}B_{\mu\lambda}(q)\cdot C^{ \lambda}_{\nu}(q),\] \[C_{\mu\nu}(q)=g_{\mu\nu}+\left(\frac{a}{2}-1\right)\frac{G}{N}B_ {\mu\lambda}(q)\cdot C^{\lambda}_{\nu}(q)\,, \tag{16}\]
where the four-\(\phi\) vertex \(\left(\frac{a}{2}-1\right)\frac{G}{N}\) in our Lagrangian Eq.(10) (second line) gives rise to an infinite sum of the bubble graph contribution \(B_{\mu\nu}(q)\);
\[\frac{1}{N}B_{\mu\nu}(q)=\frac{1}{2}\int\frac{d^{D}k}{i(2\pi)^{D}}\frac{(2k+q)_{\mu}(2k+q)_{\nu}}{(k^{2}-\eta)\left((k+q)^{2}-\eta\right)}=q^{2}f(q^{2},\eta)\cdot\left(g_{\mu\nu}-\frac{q_{\mu}q_{\nu}}{q^{2}}\right)+\left(v^{2}-\frac{1}{G}\right)\cdot g_{\mu\nu}\,, \tag{17}\]
with
\[f(q^{2},\eta)\equiv-\frac{\Gamma(2-\frac{D}{2})}{2\left(4\pi\right)^{\frac{D} {2}}\Gamma(2)}\int_{0}^{1}dx\frac{\left(1-2x\right)^{2}}{\left[x\left(1-x \right)q^{2}+\eta\right]^{2-\frac{D}{2}}}\,,\]
which reads for \(D\to 4\) (\(\epsilon\equiv 2-D/2\to 0\) and \(1/A^{\epsilon}\simeq 1-\epsilon\ln A\)):
\[f(q^{2},0) = -\frac{1}{2}\cdot\frac{1}{3\left(4\pi\right)^{2}}\cdot\left[\ln \left(\frac{\Lambda^{2}}{q^{2}}\right)+\frac{8}{3}\right]\,,\] \[f(0,\eta) = -\frac{1}{2}\cdot\frac{1}{3\left(4\pi\right)^{2}}\cdot\left[\ln \left(\frac{\Lambda^{2}}{\eta}\right)\right], \tag{18}\]
where we have used the gap equation Eq.(12) and identified \(\Gamma(\epsilon)\simeq 1/\epsilon\rightarrow\ln\Lambda^{2}\).#10
Footnote #10: The finite part common to both phases is included in the definition of the cutoff \(\Lambda\), while the part \(+8/3\) is an extra one in the broken phase \(\eta\equiv 0\), similarly to that in the \(CP^{N-1}\)[20].
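The two elementary ingredients behind Eq.(18), namely the Feynman-parameter integral \(\int_{0}^{1}(1-2x)^{2}dx=1/3\) and the expansion \(1/A^{\epsilon}\simeq 1-\epsilon\ln A\), can be verified with a couple of lines of sympy (an illustrative check only, not part of the derivation):

```python
import sympy as sp

x, A, eps = sp.symbols('x A epsilon', positive=True)

# Feynman-parameter integral that produces the overall 1/3 in Eq.(18)
print(sp.integrate((1 - 2*x)**2, (x, 0, 1)))      # -> 1/3

# expansion used for D -> 4, i.e. eps = 2 - D/2 -> 0
print(sp.series(A**(-eps), eps, 0, 2))            # -> 1 - epsilon*log(A) + O(epsilon**2)
```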
### \(a=2\) case
Note that for \(a=2\), we have \(C_{\mu\nu}=g_{\mu\nu}\) in Eq.(16), which yields \(\Gamma^{(\rho)}_{\mu\nu}(q)\):
\[\frac{\Gamma^{(\rho)}_{\mu\nu}(q)}{N}=\left(\frac{1}{G}\right)g_{\mu\nu}+\frac {B_{\mu\lambda}(q)}{N}\cdot g^{\lambda}_{\nu}(q)=\left(q^{2}f(q^{2},\eta)+v^{2 }\right)\cdot\left(g_{\mu\nu}-\frac{q_{\mu}q_{\nu}}{q^{2}}\right)+v^{2}\cdot \frac{q_{\mu}q_{\nu}}{q^{2}}\,, \tag{19}\]
the well-known form of one-loop dominance in the large \(N\) limit in the conventional "covariant derivative" parameterization for \(CP^{N-1}\) model and other Grassmannian models [12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22].
For the broken phase \(v\neq 0\), \(\eta=0\), this is readily inverted to yield the \(\rho_{\mu}\) propagator for \(a=2\): \(\langle\rho_{\mu}\rho_{\nu}\rangle(q)\equiv\langle\rho_{\mu}^{ij}\rho_{\nu}^{ ij}\rangle(q)=2\langle\rho_{\mu}^{a}\rho_{\nu}^{a}\rangle(q)\):
\[\langle\rho_{\mu}\rho_{\nu}\rangle(q) = -\Gamma^{(\rho)}_{\mu\nu}(q)^{-1}=\frac{1}{N}\frac{-f^{-1}(q^{2},0 )}{q^{2}+f^{-1}(q^{2},0)v^{2}}\left(g_{\mu\nu}-\frac{q_{\mu}q_{\nu}}{q^{2}} \right)-\frac{1}{N}\frac{1}{v^{2}}\frac{q_{\mu}q_{\nu}}{q^{2}}\] \[= \frac{1}{N}\frac{-f^{-1}(q^{2},0)}{q^{2}+f^{-1}(q^{2},0)v^{2}} \left(g_{\mu\nu}-\frac{q_{\mu}q_{\nu}}{-f^{-1}(q^{2},0)v^{2}}\right)=2\Delta_{ \mu\nu}(q),\] \[\Delta_{\mu\nu}(q) \equiv \frac{g_{{}_{\rm BLS}}^{2}(q^{2})}{q^{2}-M_{\rho}^{2}(q^{2})} \left(g_{\mu\nu}-\frac{q_{\mu}q_{\nu}}{M_{\rho}^{2}(q^{2})}\right),\] \[M_{\rho}^{2}(q^{2})\equiv-f^{-1}(q^{2},0)v^{2} = g_{{}_{\rm BLS}}^{2}(q^{2})\cdot 2F_{\pi}^{2},\quad g_{{}_{\rm BLS}}^{-2}(q^{2}) \equiv-2Nf(q^{2},0)=\frac{N}{3(4\pi)^{2}}\left[\ln\frac{\Lambda^{2}}{q^{2}}+ \frac{8}{3}\right], \tag{20}\]
which is the form of the unitary gauge (we took the unitary gauge \(\hat{\rho}(x)=0\) with \(v\neq 0\) real), with the physical pole position and the on-shell HLS coupling given as \(q^{2}=M_{\rho}^{2}=-f^{-1}(M_{\rho}^{2},0)v^{2}=g_{\mbox{\tiny{HLS}}}^{2}(M_{\rho}^{2})\cdot 2F_{\pi}^{2}\) and \(g_{\mbox{\tiny{HLS}}}^{2}\equiv g_{\mbox{\tiny{HLS}}}^{2}(M_{\rho}^{2})\), respectively. The relation implies that the rho meson mass is generated by the Higgs mechanism
\[M_{\rho}^{2}=g_{\mbox{\tiny{HLS}}}^{2}\cdot F_{\rho}^{2}\,,\quad F_{\rho}^{2}= 2\cdot F_{\pi}^{2}\,, \tag{21}\]
where \(F_{\rho}\) is the decay constant of the would-be NG boson \(\hat{\rho}\) (absorbed into the rho meson in the unitary gauge) at quantum level, which is to be compared with the tree-level relation Eq.(9), with \(a=a^{(p)}=2\). The \(q^{2}\) dependence of \(M_{\rho}^{2}(q^{2})\) and \(g_{\mbox{\tiny{HLS}}}^{2}(q^{2})\) may be regarded as the running mass and the (_asymptotically non-free/infrared free_) running coupling. The resultant rho meson mass relation \(M_{\rho}^{2}=-f^{-1}(M_{\rho}^{2},0)v^{2}=g_{\mbox{\tiny{HLS}}}^{2}(M_{\rho}^{ 2})\cdot 2F_{\pi}^{2}\) is independent of \(N\) and can be extrapolated into \(N\to 4\) with \(p=3\) for the actual rho meson.
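As an illustration of the self-consistent pole condition, the sketch below solves \(q^{2}=M_{\rho}^{2}(q^{2})=2F_{\pi}^{2}\,g_{\rm HLS}^{2}(q^{2})\) by fixed-point iteration; it is a numerical illustration only, using \(N=4\), \(F_{\pi}=92\) MeV and the cutoff value \(\Lambda\simeq 1.1\) GeV quoted later in the text.

```python
import math

N, F_pi, Lam = 4, 0.092, 1.10    # GeV; Lambda as estimated below in the text

def g2_HLS(q2):
    """Running HLS coupling of Eq.(20): 1/g^2(q^2) = N/(3(4 pi)^2) * [ln(Lambda^2/q^2) + 8/3]."""
    return 1.0 / (N / (3 * (4 * math.pi) ** 2) * (math.log(Lam ** 2 / q2) + 8.0 / 3.0))

# on-shell condition q^2 = M_rho^2(q^2) = 2 F_pi^2 g^2(q^2), solved by fixed-point iteration
q2 = 0.5
for _ in range(200):
    q2 = 2 * F_pi ** 2 * g2_HLS(q2)
print(math.sqrt(q2))             # ~0.77 GeV, i.e. the physical rho mass pole
```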
We thus _establish the dynamical generation of the rho meson as the HLS gauge boson for \(a=2\)_[11] in exactly the same way as in the \(CP^{N-1}\) model and other Grassmannian models.
In the unbroken phase, \(v=0,\eta\neq 0\), on the other hand, \(\Gamma_{\mu\nu}^{(\rho)}(q)\) in Eq.(19) is transverse, implying the HLS is an _unbroken gauge symmetry_. Though not invertible as it stands, it is of course inverted by fixing the gauge as usual, to get the _massless_ propagator \(\langle\rho_{\mu}\rho_{\nu}\rangle(q)=g_{\mbox{\tiny{HLS}}}^{2}(q^{2},\eta) \cdot\frac{g_{\mu\nu}}{q^{2}}+\) gauge term, with \(g_{\mbox{\tiny{HLS}}}^{-2}(q^{2},\eta)\equiv-2Nf(q^{2},\eta)\simeq-2Nf(0,\eta )\equiv g_{\mbox{\tiny{HLS}}}^{-2}(\eta)\) which is analytic at \(q^{2}=0\). \(\eta\)-dependence may be regarded as the running of the coupling, asymptotically non-free/infrared free, \(g_{\mbox{\tiny{HLS}}}^{2}(\eta)\to 0\,(\eta\to 0)\), the same as that in broken phase, see Eq.(18). #11 The situation is also the same as the \(CP^{N-1}\) and the Grassmannian models. Note also that the _massless rho meson is stable_, since it does not decay into the pions which are no longer the NG bosons and have non-zero mass degenerate with \(\hat{\rho}\) (no longer the would-be NG boson) and other degrees of freedom of \(\phi_{i,\beta}\) (corresponding to the 6 constraints in the broken phase, in addition to the 3 \(\pi\)'s and 3 \(\hat{\rho}\)'s for \(N=4,p=3\)), \(M_{\pi}^{2}=M_{\hat{\rho}}^{2}=\cdots=\eta\neq 0\). Note that the phase transition is of the second order with \(v=\eta=0\) and all the spectra are decoupled (free) massless particles: \(M_{\rho}^{2}=M_{\hat{\rho}}^{2}=M_{\pi}^{2}=0\) at the phase transition point \(G=G_{\rm crit}\) (conformal).
Footnote #11: _Without gauge symmetry_ (\(a=0\)), \(\langle\alpha_{\mu,||}\,\alpha_{\nu,||}\rangle(q)\) is _ill-defined in the unbroken phase_\(v=0\), where the factor \(g_{\mu\lambda}+\frac{G}{N}B_{\mu\lambda}\) is pure transverse and _not invertible_, in accord with the Weinberg-Witten theorem [25] on the absence of massless spin \(J\geq 1\) particles in the positive definite Hilbert space (no gauge symmetry).
### Case for arbitrary value of \(a\)
Since we have established the dynamical generation of the rho meson for \(a=2\), the next question is whether the conclusion depends on the specific value \(a=2\). Here we show that the result is independent of \(a\).
For the generic case for arbitrary \(a\), the large \(N\) dominant diagrams are not just the one-loop but do include _an infinite sum of the bubble diagrams coming from the extra four-vertex_\(\left(\frac{a}{2}-1\right)\frac{G}{N}\) as in Eq.(16). \(C_{\mu\nu}\) in Eq.(16) is solved straightforwardly though tediously (see Ref. [11] for details): From Eqs.(16) and (17) we have
\[\frac{a}{2}C_{\mu\nu}(q) = \frac{a}{2}\left[g_{\mu\nu}+\left(1-\frac{a}{2}\right)\frac{G}{N} B_{\mu\nu}(q)\right]^{-1}\] \[= \left[1-\left(1-\frac{2}{a}\right)G\left(v^{2}+q^{2}f\right) \right]^{-1}\left(g_{\mu\nu}-\frac{q_{\mu}q_{\nu}}{q^{2}}\right)+\left[1-\left( 1-\frac{2}{a}\right)Gv^{2}\right]^{-1}\frac{q_{\mu}q_{\nu}}{q^{2}}\,,\] \[\frac{\Gamma_{\mu\nu}^{(\rho)}(q)}{N} = \frac{2}{G}\left(1-\frac{2}{a}\right)^{-1}\left[\frac{a}{2}C_{ \mu\nu}-g_{\mu\nu}\right]\,, \tag{22}\] \[= \left[\frac{f^{-1}}{q^{2}+v^{2}f^{-1}}-\left(1-\frac{2}{a}\right) G\right]^{-1}\cdot\left(g_{\mu\nu}-\frac{q_{\mu}q_{\nu}}{q^{2}}\right)+\left[ \frac{1}{v^{2}}-\left(1-\frac{2}{a}\right)G\right]^{-1}\cdot\frac{q_{\mu}q_{ \nu}}{q^{2}},\quad f\equiv f(q^{2},\eta).\]
This of course is reduced to Eq.(19) for \(a=2\).
We finally arrive at the _dynamically generated propagating HLS gauge boson for any \(a\)_, whose propagator in the broken phase takes the same unitary-gauge form as that for \(a=2\), except for the contact term (to be discussed later): [11]
\[\langle\rho_{\mu}\rho_{\nu}\rangle(q) = -\Gamma^{(\rho)}_{\mu\nu}(q)^{-1}=\left[\frac{-f^{-1}}{q^{2}+v^{2}f^ {-1}}+\left(1-\frac{2}{a}\right)G\right]\left(g_{\mu\nu}-\frac{q_{\mu}q_{\nu}}{ q^{2}}\right)-\left[\frac{1}{v^{2}}-\left(1-\frac{2}{a}\right)G\right]\frac{q_{\mu}q_{ \nu}}{q^{2}} \tag{23}\] \[= \left(1-\frac{2}{a}\right)\,\frac{G}{N}\,g_{\mu\nu}+2\Delta_{\mu \nu}(q).\]
which is reduced to Eq.(20) for \(a=2\). Again the mass relation from \(\Delta_{\mu\nu}\) in the last line of Eq.(20) is independent of \(N\) and thus safely extrapolated to the realistic rho meson, \(N\to 4\) with \(p=3\) (fixed).
Here the physical pole position \(q^{2}=M_{\rho}^{2}=g_{{}_{\rm HLS}}^{2}(M_{\rho}^{2})\cdot 2F_{\pi}^{2}\) and the on-shell HLS coupling \(g_{{}_{\rm HLS}}^{2}\equiv g_{{}_{\rm HLS}}^{2}(M_{\rho}^{2})\) are both _independent of \(a\)_;
\[M_{\rho}^{2}=g_{{}_{\rm HLS}}^{2}\cdot 2F_{\pi}^{2}\,,\quad F_{\rho}^{2}=2 \cdot F_{\pi}^{2}\,,\quad(a-{\rm independent})\,, \tag{24}\]
which is the same as Eq.(21) but now it is an \(a\)-_independent_ result, in contrast to that of the bare quantities at tree level Eq.(9): \(f_{\rho}^{2}={af_{\pi}}^{2}\). This relation is reminiscent of the KSRF II relation, \(M_{\rho}^{2}=2g_{\rho\pi\pi}^{2}F_{\pi}^{2}\), where \(g_{\rho\pi\pi}\) is the \(\rho\pi\pi\) coupling. In fact, in the next section we will show \(g_{\rho\pi\pi}=g_{{}_{\rm HLS}}\) ("rho-universality") independently of \(a\), and thus derive the KSRF II relation (as well as KSRF I) independently of \(a\).
Note that the \(a\)_-dependence is exactly cancelled in the physical part_\(\Delta_{\mu\nu}(q)\) as it should be, since \(a\) is actually a redundant parameter for the auxiliary field \(\rho_{\mu}\). While the \(a-\) dependence remains in the _unphysical contact term_\(-\frac{2G}{aN}g_{\mu\nu}\) which corresponds to the tree \(\rho_{\mu}\) "propagator" with tree mass \(\frac{aN}{2G}\), it is an artifact in using the auxiliary field \(\rho_{\mu}\) for the composite field \(\alpha_{\mu,||}=i\frac{G}{N}\phi\partial_{\mu}\phi^{t}\) whose two-point function is independent of \(a\) and exists even for \(a=0\) (without HLS!) _in the broken phase_. (They satisfy an exact relation via Ward-Takahashi identity, \(\langle\rho_{\mu}\rho_{\nu}\rangle(q)=\langle\alpha_{\mu,||}\,\alpha_{\nu,||} \rangle(q)-\frac{2G}{aN}g_{\mu\nu}\)[11]).
Moreover, the whole contact term is cancelled in the \(\pi\pi\) scattering. The \(\pi\pi\) scattering amplitude \(T_{\mu\nu}(q)\) is given as \(2T_{\mu\nu}(q)\!\!=\!\frac{-G}{N}g_{\mu\nu}\!\!+\!\langle\alpha_{\mu,||}\, \alpha_{\nu,||}\rangle(q)\), where the first term is from the tree vertex, while the second term is only from the loop contributions (bubble sum) dominant in the large \(N\) limit, \(\langle\alpha_{\mu,||}\,\alpha_{\nu,||}\rangle(q)=\big{(}i\frac{G}{N}\big{)}^{2 }\langle\phi\partial_{\mu}\phi^{t}\ \phi\partial_{\nu}\phi^{t}\rangle(q)=\big{(}i\frac{G}{N}\big{)}^{2} \left[B_{\mu\nu}(q)+B_{\mu\lambda}(q)\cdot\left(-\frac{G}{N}\right)B_{\lambda} ^{\lambda}(q)+\cdots\right]\,=\,\big{(}i\frac{G}{N}\big{)}^{2}\,B_{\mu}^{ \lambda}(q)\,\,\left[g_{\lambda\nu}+\frac{N}{G}\langle\alpha_{\mu,||}\,\alpha_ {\nu,||}\rangle(q)\right]=\big{(}g_{\mu\lambda}+\frac{G}{N}B_{\mu\lambda}\big{)} ^{-1}\big{(}i\frac{G}{N}\big{)}^{2}\,B_{\lambda}^{\lambda}(q)=\frac{G}{N}g_{ \mu\nu}+2\Delta_{\mu\nu}(q)\), with \(B_{\mu\nu}(q)\) given in Eq.(17).#12 Then the contact term \(\frac{G}{N}g_{\mu\nu}\) is precisely cancelled in \(T_{\mu\nu}\), namely the _VMD for arbitrary value of \(a\)_. This is compared with the conventional HLS approach where the VMD for \(\pi\pi\) scattering is realized only for \(a=4/3\) (not \(a=2!!\)) [7].
Footnote #12: The four-\(\phi\) vertex (both for tree and loop) here is different from Eq.(16) for the \(\rho_{\mu}\) case: \((\frac{a}{2}-1)\frac{G}{N}+(-\frac{a}{2})\frac{G}{N}=-\frac{G}{N}\) (the additional second term is from the tree rho contribution \((-\frac{ia}{2})(\frac{a}{2}\frac{N}{G})^{-1}(-\frac{ia}{2})\)), the same as that for \(a=0\) (original nonlinear sigma model without HLS) as it should be independent of the auxiliary field.
As seen from Eq.(20), the HLS coupling depends on the cutoff \(\Lambda\) as it should, since the nonlinear sigma model is a non-renormalizable model for \(D=4\) (see [20] for other formulation). From Eq.(20), with \(N=4\) and \(q^{2}=M_{\rho}^{2}\simeq(770\,{\rm MeV})^{2}\), \(F_{\pi}\simeq 92\,{\rm MeV}\), we have \(\Lambda=e^{-4/3}\cdot M_{\rho}\cdot e^{12\pi^{2}F_{\pi}^{2}/M_{\rho}^{2}}\simeq 1.1{\rm GeV}\simeq 4\pi F_{\pi}\), roughly the validity scale of the chiral perturbation theory. As an asymptotically non-free theory the kinetic term vanishes \(1/g_{{}_{\rm HLS}}^{2}(q^{2}=\mu^{2})=-2Nf(\mu^{2},0)\to 0\) (\(\mu^{2}\to\tilde{\Lambda}^{2}\)) at the Landau pole \(\mu=\tilde{\Lambda}=e^{4/3}\Lambda\simeq 4.2\) GeV \(\gg\Lambda\gg M_{\rho}\), where the \(\rho_{\mu}\) returns to an auxiliary field as a static composite of \(\pi\), the situation sometimes referred to as "compositeness condition" [23] advocated in a reformulation of the top quark condensate model [24]. In this viewpoint the HLS gauge bosons as bound states of \(\pi\)'s develop the kinetic term as we integrate the higher frequency modes in the large \(N\) limit from \(\Lambda^{2}\) down to the scale \(\mu^{2}\) in the sense of the Wilsonian renormalization group [7].
This also implies \(g_{{}_{\rm HLS}}^{2}(q^{2}=\mu^{2})\to 0\) (\(\mu^{2}/\tilde{\Lambda}^{2}\to 0\)) at approaching the phase transition point \(F_{\pi}^{2}=Nv^{2}\to 0\,(G\to G_{\rm cr}-)\). Thus the rho meson in the broken phase, with \(M_{\rho}\) close enough to the phase transition point, \(M_{\rho}/\tilde{\Lambda},M_{\rho}/\Lambda\to 0\), is to be identified with a gauge boson. #13 The result \(g_{{}_{\rm HLS}}^{2}\to 0\) and \(M_{\rho}^{2}\to M_{\pi}^{2}(\equiv 0)\) near the phase transition point in the broken phase is similar to the Vector Manifestation (Ref. [7] and references cited therein), _both not precisely on the phase transition point_ where \(\rho\) and \(\pi\) are just decoupled massless free particles \(M_{\rho}^{2}=M_{\pi}^{2}=0\). The latter is based on the one-loop "Wilsonian Matching" with QCD at \(\Lambda\) where the kinetic term is given with the parameter \(a=a(\Lambda^{2})\simeq 1\), which then runs down as \(a(\mu^{2})=F_{\rho}^{2}(\mu^{2})/F_{\pi}^{2}(\mu^{2})\sim 1\,(M_{\rho}^{2}<\mu^{2}< \Lambda^{2})\) (with \(\rho\) loop) and further
down to \(\pi\) on shell \(a(0)=F_{\rho}^{2}(M_{\rho}^{2})/F_{\pi}^{2}(M_{\pi}^{2}=0)=2\) (with the \(\rho\) loop decoupled for \(F_{\pi}^{2}\) in \(\mu^{2}<M_{\rho}^{2}\)), in contrast to the present case which is for any \(a\) at all orders in the large \(N\) limit without \(\rho\) loop at all.
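The numerical estimates quoted above (\(\Lambda\simeq 1.1\) GeV \(\simeq 4\pi F_{\pi}\) and the Landau pole \(\tilde{\Lambda}\simeq 4.2\) GeV) can be reproduced directly (a plain arithmetic check, values in GeV):

```python
import math

N, F_pi, M_rho = 4, 0.092, 0.770                 # GeV

Lam    = math.exp(-4 / 3) * M_rho * math.exp(12 * math.pi ** 2 * F_pi ** 2 / M_rho ** 2)
Landau = math.exp(4 / 3) * Lam                   # scale where 1/g_HLS^2(mu^2) vanishes
print(round(Lam, 3), round(4 * math.pi * F_pi, 3), round(Landau, 2))   # 1.101 1.156 4.18
```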
We then have the effective action with kinetic term of rho meson \(\rho_{\mu}\) and/or the composite field \(\alpha_{\mu,||}\):
\[{\cal L}_{\rm kinetic}^{\rm eff}=-\frac{1}{4g_{{}_{\rm HLS}}^{2}}\cdot\frac {1}{2}{\rm tr}\rho_{\mu\nu}^{2}=-\frac{1}{4g_{{}_{\rm HLS}}^{2}}\cdot\frac{1}{ 2}{\rm tr}\alpha_{\mu\nu,||}^{2}, \tag{25}\]
where \(\alpha_{\mu\nu,||}\equiv\partial_{\mu}\alpha_{\nu,||}-\partial_{\nu}\alpha_{\mu,||}-i\left[\alpha_{\mu,||},\alpha_{\nu,||}\right]\), with \(f_{\pi}^{2}=N/G\Rightarrow F_{\pi}^{2}=Nv^{2}\). For \(N=4,p=3\) Eq.(25) is precisely the Skyrme term, \(\frac{1}{32e^{2}}{\rm tr}_{SU(2)}[L_{\mu},L_{\nu}]^{2}\), with \(e^{2}=g_{{}_{\rm HLS}}^{2}\), in the \(SU(2)\) basis, where we have \(\alpha_{\mu\nu,||}=i\left[\alpha_{\mu,\perp},\alpha_{\nu,\perp}\right]\) and \(L_{\mu}\equiv\partial_{\mu}U\cdot U^{\dagger}\), \(U=\xi^{2}(\pi)=e^{i\pi^{a}\tau^{a}/F_{\pi}}\)[26].
## IV Successful "\(a=2\)" relations realized for any \(a\)
Now we derive all the phenomenologically successful relations for the rho meson independently of \(a\).
The large \(N\) Green function for \(\rho\pi\pi\) is given as a bubble sum, which takes the \(a-\)_independent form of VMD_, \(\langle\rho_{\mu}(q)\phi(k)\phi(k+q)\rangle\)\(=\langle\alpha_{\mu,||}(q)\,\phi(k)\,\phi(q+k)\rangle=2\Delta_{\mu\nu}(q)\cdot(q+2k)^{\nu }\)\({}^{\#14}\), where the first equality is by the Ward-Takahashi identity \({}^{\#15}\). We may introduce "renormalized" field \(\rho_{\mu}^{(R)}\equiv g_{{}_{\rm HLS}}^{-1}(q^{2})\cdot\rho_{\mu}\) by rescaling the "kinetic term" to the canonical one, i.e., \(\Delta_{\mu\nu}^{(R)}(q)\equiv g_{{}_{\rm HLS}}^{-2}(q^{2})\cdot\Delta_{\mu \nu}(q)\) and \(\langle\rho_{\mu}^{(R)}\phi\phi\rangle=\langle\alpha_{\mu,||}^{(R)}\phi\phi \rangle\)\(=2g_{{}_{\rm HLS}}(q^{2})\cdot\Delta_{\mu\nu}^{(R)}(q)\cdot(q+2k)^{\nu}\), which is compared with the definition of \(g_{\rho\pi\pi}(q^{2})\), \(\langle\rho_{\mu}^{(R)}\phi\phi\rangle\equiv 2g_{\rho\pi\pi}(q^{2})\cdot \Delta_{\mu\nu}^{(R)}\left(q\right)\cdot(q+2k)^{\nu}\), resulting in the \(\rho\) universality _independently of \(a\)_:
\[g_{\rho\pi\pi}(q^{2})=g_{{}_{\rm HLS}}(q^{2})\quad(\rho\ {\rm universality})\,. \tag{26}\]
It then leads to the KSRF relations (generalized for \(\forall q^{2}\)) _independently of a \({}^{\#16}\)_:
\[g_{\rho}(q^{2}) = M_{\rho}(q^{2})F_{\rho}=2g_{\rho\pi\pi}(q^{2})F_{\pi}^{2}\ \ ({\rm KSRF\,I})\,, \tag{27}\] \[M_{\rho}^{2}(q^{2}) = 2g_{\rho\pi\pi}^{2}(q^{2})F_{\pi}^{2}\ \ ({\rm KSRF\,II})\,, \tag{28}\]
with \(\langle 0|J_{\mu}^{\rm em}|\rho^{(R)}(q^{2})\rangle\equiv g_{\rho}(q^{2}) \epsilon_{\mu}(q)=M_{\rho}(q^{2})F_{\rho}\epsilon_{\mu}(q)\).
The VMD for the electromagnetic form factor \(F_{{}_{\rm B\pi\pi}}(q^{2})\) also follows \(a\)-independently, similarly to the VMD in the \(\pi\pi\) scattering. Here the photon field \({\cal B}_{\mu}\) is introduced by gauging \(H_{\rm global}\), \(D_{\mu}\phi\Rightarrow\partial_{\mu}\phi-i\rho_{\mu}\phi+i\phi{\cal B}_{\mu}\). It has contributions from the \({\cal B}_{\mu}-\rho_{\mu}\) mixing and from the "direct coupling" to \(\alpha_{\mu,||}\) (with the tree contact term cancelled by the bubble sum as in the \(\pi\pi\) scattering), both coupled to the identical VMD Green functions \(\langle\rho_{\mu}^{(R)}\phi\phi\rangle=\langle\alpha_{\mu,||}^{(R)}\phi\phi\rangle\) in a linear combination to cancel the \(a\) dependence\({}^{\#17}\):
\[F_{{}_{\rm B\pi\pi}}(q^{2})=\frac{M_{\rho}^{2}(q^{2})}{M_{\rho}^{2}(q^{2})-q^{2} },\quad F_{{}_{\rm B\pi\pi}}(0)=1. \tag{29}\]
Thus the _VMD is realized, independently of \(a\)_. \({}^{\#18}\)
## V Conclusion and discussions
To conclude we have proved that the rho meson is a dynamical gauge boson of the HLS \(O(3)_{\rm local}\simeq[SU(2)_{V}]_{\rm local}\) by the large \(N\) dynamics of the model \(G/H=O(N)/[O(N-3)\times O(3)]\), with all the successful "\(a=2\) results" being realized purely dynamically _independently of \(N\)_ for _any value of \(a\)_, thus safely extrapolated to \(N=4\), \(O(4)/O(3)\simeq O(4)_{\rm global}\times O(3)_{\rm local}\simeq[SU(2)_{L}\times SU(2)_{R}]_{\rm global }\times[SU(2)_{V}]_{\rm local}\).
The "\(a=2\) results" originally obtained for the particular choice \(a=2\)[1; 2; 3; 4; 5] are now seen to be artifacts of the combined use of the \(a\)-dependent _tree-level_ rho meson mass term and the _ad hoc added kinetic term_ which was _assumed_ to be generated at quantum level _without affecting the pole structure_ of the dynamically generated propagator. As we demonstrated, the tree-level parameter is no longer the true one of the pole at quantum level once the kinetic term is generated: the pole position (and residue as well) of the full propagator is shifted from the tree-level one in such a way that the \(a\)-dependence is totally cancelled out. Indeed, the parameter \(a\)_is a redundant parameter for the auxiliary field_\(\rho_{\mu}\) and is irrelevant to the physical results at quantum level, as it should be for the correct calculations. The results of the present paper reveal that this is indeed the case in the large \(N\) limit.
Further implications of the results are as follows. Once the rho kinetic term is generated, Eq.(25), it stabilizes the Skyrmion without an ad hoc Skyrme term, and hence the nonlinear sigma model in the large \(N\) limit perfectly describes, via HLS, the low energy QCD for \(\pi,\rho,N\) at the scale \(\lesssim\Lambda\simeq 4\pi f_{\pi}\) without explicit recourse to QCD itself.
The dynamically generated kinetic term, with the induced gauge coupling \(g^{2}_{\rm HLS}(q^{2})\) being asymptotically non-free/infrared free in both broken and unbroken phases, has a cutoff \(\Lambda\simeq 4\pi f_{\pi}\gg M_{\rho}\) (and Landau pole \(\tilde{\Lambda}\)), so that the rho meson is sitting _near the second order phase transition point_ as a composite HLS _gauge boson_ to be matched with the underlying QCD. This implies [11] that the large \(N\) dynamics reveals the HLS as a "magnetic gauge theory" (infrared free in both phases) dual to the underlying QCD as the "electric gauge theory" [7; 31; 32; 33], similarly to the Seiberg duality in the SUSY QCD [34].
If the HLS as the unbroken magnetic gauge theory is realized, say in hot/dense QCD, we would have a new possibility for the chiral symmetry restored hadronic phase having massless rho meson and massive \(\pi,\tilde{\rho}\)[11], which is contrasted with \(M_{\rho}^{2}\to M_{\pi}^{2}(\equiv 0)\) near the phase transition point in the broken phase (not precisely on the phase transition point) similarly to the "Vector Manifestation" as described in the text.
It was frequently emphasized that the large \(N\) results are valid even for small \(N\), at least qualitatively, as mentioned in footnote #3. The result of the present paper is yet another proof of this statement, and even more so, quantitatively and not just qualitatively, in perfect agreement with the experimental facts of the rho meson.
This further implies the existence of dynamical HLS bosons in other systems described by the large \(N\) Grassmannian models. A notable case is the Standard Model (SM) Higgs Lagrangian, which, re-parameterized [35] as a scale-invariant version of the model \(G/H=O(4)/O(3)\simeq O(4)_{\rm global}\times O(3)_{\rm local}\), is precisely the same as the rho meson case, except for an extra mode, the pseudo-dilaton (SM Higgs boson), which makes the model (approximately) scale-invariant (having no indices running through \(N\), it is irrelevant to the SM rho physics in the large \(N\) limit) [11]. This justifies the basic assumption [36] that there exists a rho meson-like vector boson within the SM ("SM rho") which stabilizes a skyrmion ("SM skyrmion") as a candidate for the dark matter existing even within the SM.
###### Acknowledgements.
We would like to thank T. Kugo, who provided invaluable help with the preliminary results in Ref.[11]. Thanks also go to H. Ohki for valuable discussions and comments. Special thanks go to Mannque Rho for useful questions and for inviting this contribution to the special issue of "Symmetry".
|
2302.08588 | MM Algorithms to Estimate Parameters in Continuous-time Markov Chains | Continuous-time Markov chains (CTMCs) are a popular modeling formalism that
constitutes the underlying semantics for real-time probabilistic systems such
as queuing networks, stochastic process algebras, and calculi for systems
biology. Prism and Storm are popular model checking tools that provide a number
of powerful analysis techniques for CTMCs. These tools accept models expressed
as the parallel composition of a number of modules interacting with each other.
The outcome of the analysis is strongly dependent on the parameter values used
in the model which govern the timing and probability of events of the resulting
CTMC. However, for some applications, parameter values have to be empirically
estimated from partially-observable executions. In this work, we address the
problem of estimating parameter values of CTMCs expressed as Prism models from
a number of partially-observable executions. We introduce the class of parametric
CTMCs -- CTMCs where transition rates are polynomial functions over a set of
parameters -- as an abstraction of CTMCs covering a large class of Prism
models. Then, building on a theory of algorithms known by the initials MM, for
minorization-maximization, we present iterative maximum likelihood estimation
algorithms for parametric CTMCs covering two learning scenarios: when both
state-labels and dwell times are observable, or just state-labels are. We
conclude by illustrating the use of our technique in a simple but non-trivial
case study: the analysis of the spread of COVID-19 in presence of lockdown
countermeasures. | Giovanni Bacci, Anna Ingólfsdóttir, Kim G. Larsen, Raphaël Reynouard | 2023-02-16T21:25:27Z | http://arxiv.org/abs/2302.08588v1 | # MM Algorithms to Estimate Parameters in Continuous-time Markov Chains
###### Abstract
Continuous-time Markov chains (CTMCs) are a popular modeling formalism that constitutes the underlying semantics for real-time probabilistic systems such as queuing networks, stochastic process algebras, and calculi for systems biology. Prism and Storm are popular model checking tools that provide a number of powerful analysis techniques for CTMCs. These tools accept models expressed as the parallel composition of a number of modules interacting with each other.
The outcome of the analysis is strongly dependent on the parameter values used in the model which govern the timing and probability of events of the resulting CTMC. However, for some applications, parameter values have to be empirically estimated from partially-observable executions.
In this work, we address the problem of estimating parameter values of CTMCs expressed as Prism models from a number of partially-observable executions. We introduce the class of parametric CTMCs --CTMCs where transition rates are polynomial functions over a set of parameters-- as an abstraction of CTMCs covering a large class of Prism models. Then, building on a theory of algorithms known by the initials MM, for minorization-maximization, we present iterative maximum likelihood estimation algorithms for parametric CTMCs covering two learning scenarios: when both state-labels and dwell times are observable, or just state-labels are. We conclude by illustrating the use of our technique in a simple but non-trivial case study: the analysis of the spread of COVID-19 in the presence of lockdown countermeasures.
We conclude by illustrating the use of our technique in a simple but non-trivial case study: the analysis of the spread of COVID-19 in presence of lockdown countermeasures.
full version hosted on arXiv.

## 1 Introduction
Model checking tools such as Prism[22] and Storm[9] provide access to a number of powerful analysis techniques for CTMCs. Both tools accept models written in the Prism language, a state-based language based on [1] that represents synchronous and asynchronous components in a uniform framework that supports compositional design. For example, consider the two semantically equivalent Prism models depicted in Fig. 1 implementing a variant of the Susceptible-Infected-Recovered (SIR) model proposed in [27] to describe the spread of disease in presence of lockdown restrictions. The model depicted to the left consists of a single module, whereas the one to the right implements a compositional design where modules interact by synchronizing on two actions: infection and recovery.
Both models distinguish between three types of individuals: susceptible, infected, and recovered. Susceptible individuals become infected through contact with another infected person and can recover without outside interference. The SIR model is parametric in beta, gamma, and plock. beta is the _infection coefficient_, describing the probability of infection after the contact of a susceptible individual with an infected one; gamma is the _recovery coefficient_, describing the rate of recovery of an infected individual (in other words, \(1/\texttt{gamma}\) is the time one individual requires to recover); and \(\texttt{plock}\in[0,1]\) is used to scale down the infection coefficient modeling restrictions to reduce the spread of disease.
Clearly, the outcome of the analysis of the above SIR model is strongly dependent on the parameter values used in each module, as they govern the timing and probability of events of the CTMC describing its semantics. However, in some application domains, parameter values have to be empirically evaluated from a number of partially-observable executions of the model. To the best of our knowledge, neither Prism nor Storm provide integrated support for this task, leaving the burden of estimating parameter values to the modeler. A paradigmatic example is the modeling pipeline described in [27], where the parameters of the SIR model in Fig. 1 are estimated based on a definition of the model as ODEs, and later used in an approximation of the original SIR model designed to reduce the state space of the SIR model in Fig. 1 (left). Such modeling pipelines require high technical skills, are error-prone, and are time-consuming, thus limiting the applicability and the user base of model checking tools.

Figure 1: (Left) SIR model with lockdown from [27], (Right) Semantically equivalent formulation of the model to the left where susceptible, infected, and recovered individuals are modeled as distinct modules interacting with each other via synchronization.
In this work, we address the problem of estimating parameter values of CTMCs expressed as Prism models from a number of partially-observable executions. The expressive power of the Prism language brings two technical challenges: (i) the classic state-space explosion problem due to modular specification, and (ii) the fact that the transition rates of the CTMCs result from the algebraic composition of the rates of different (parallel) modules which are themselves defined as arithmetic expressions over the parameters (_cf._ Fig. 1). We address the second aspect of the problem by considering a class of _parametric_ CTMCs, which are CTMCs where transition rates are polynomial functions over a fixed set of parameters. In this respect, parametric CTMCs have the advantage of covering a rich subclass of Prism models and of being closed under the operation of parallel composition implemented by the Prism language.
Following the standard approach, we pursue the maximum likelihood estimate (MLE), i.e., we look for the parameter values that achieve the maximum joint likelihood of the observed execution sequences. However, given the non-convex nature of the likelihood surface, computing the global maximum that defines the MLE is computationally intractable [31].
To deal with this issue we employ a theoretical iterative optimization principle known as the MM algorithm [24, 23]. The well-known EM algorithm [10] is an instance of the MM optimization framework, which is a versatile tool for constructing optimization algorithms. MM algorithms are typically easy to design, numerically stable, and in some cases amenable to accelerations [18, 33]. The versatility of the MM principle lies in the fact that it is built upon a simple theory of inequalities, from which optimization procedures can be derived; in fact, deriving such procedures is often much easier than deriving a corresponding EM algorithm, which relies on choosing appropriate missing data structures, i.e., latent variables. Like the EM algorithm, the MM principle yields iterative procedures for maximum likelihood estimation which increase the likelihood at each iteration and converge to some local optimum.
The main technical contribution of the paper consists in laying out MM techniques for devising novel iterative maximum likelihood estimation algorithms for parametric CTMCs covering two learning scenarios.
In the first scenario, we assume that state labels and dwell times are observable variables while state variables are hidden. The learning procedure devised for this case is a generalization of the Baum-Welch algorithm [28] --an EM algorithm that estimates transition probabilities in hidden Markov models-- to parametric CTMCs.
In the second scenario, state labels are observable while state variables and dwell time variables are hidden. In contrast with the first case, the objective function that defines the MLE achieves the same value on all the CTMCs sharing the same embedded Markov chain. Thus, a standard adaptation of the Baum-Welch algorithm to this case would not lead to a procedure able to learn the continuous-time aspects of the observed system. Nevertheless, by making an analogy between the way transitions "compete" with each other in race conditions and the Bradley-Terry model of ranking [6], we successfully extend the solution devised for the first scenario with techniques used by Lange, Hunter, and Yang in [25] for finding rank estimates in the Bradley-Terry model. We provide experimental evidence that, when the model has sufficiently many constant transition rates, our solution effectively converges to the true parameter values of the model by hinging on the rate values that are known in the model. Note that this condition is easily fulfilled when one of the components is fully observable. A typical example is the model of a microcontroller component running within a partially observable physical environment. Other examples may arise from website analysis
for reviewing a website's performance w.r.t. user experience.
We demonstrate the effectiveness of our estimation procedure on a case study taken from [27]: the analysis of the spread of COVID-19 in presence of lockdown countermeasures. In particular, we showcase how our technique can be used to simplify modeling pipelines that involve a number of modifications of the model --possibly introducing approximations--and the re-estimation of its parameters.
### Related Work
In [15, 14] Georgoulas et al. employ probabilistic programming to implement a variant of Bio-PEPA [8] called ProPPA. ProPPA is a stochastic process algebra with inference capabilities that allows some rates to be assigned a prior distribution, capturing the modeler's belief about the likely values of the rates. Using probabilistic inference, the ProPPA model may be combined with the observations to derive updated probability distributions over rates.
Before ProPPA, Geisweiller proposed EMPEPA [13], an EM algorithm that estimates the rate values inside a PEPA model.
A closely related work is [32] where they learn continuous-time hidden Markov models to do performance evaluation. There, observations are regarded as (discrete-time) periodic observations with fixed period \(\Delta\). The learning method works in two steps: first, they employ the Baum-Welch algorithm [28] to estimate the transition probability matrix of a hidden Markov model, then they obtain the infinitesimal generator of the CTMC from the learned transition probability matrix. In contrast with [32], we are able to derive a simpler procedure that directly extends the Baum-Welch algorithm to parametric CTMCs.
In [30], Sen et al. present an algorithm based on the state merging paradigm of Alergia [7] to learn a CTMC from timed observations. In contrast with our work, [30] does not perform parameter estimation over structured models, but learns an unstructured (transition-labeled) CTMC.
Another related line of research is parameter synthesis of Markov models [19]. In contrast with our work, parameter synthesis revolves around the problem of finding (some or all) parameter instantiations of the model that satisfy a given logical specification.
## 2 Preliminaries and Notation
We denote by \(\mathbb{R}\), \(\mathbb{Q}\), and \(\mathbb{N}\) respectively the sets of real, rational, and natural numbers, and by \(\Sigma^{n}\), \(\Sigma^{*}\), and \(\Sigma^{\omega}\) respectively the sets of words of length \(n\in\mathbb{N}\), of finite length, and of infinite length, built over the finite alphabet \(\Sigma\).
We use \(\mathcal{D}(\Omega)\) to denote the set of discrete probability distributions on \(\Omega\), i.e., functions \(\mu\colon\Omega\to[0,1]\) such that \(\mu(\Omega)=1\), where \(\mu(E)=\sum_{x\in E}\mu(x)\) for \(E\subseteq\Omega\). For a proposition \(p\), we write \([\![p]\!]\) for the Iverson bracket of \(p\), i.e., \([\![p]\!]=1\) if \(p\) is true, otherwise \(0\).
A labelled continuous-time Markov chain (CTMC) is defined as follows.
A labelled CTMC is a tuple \(\mathcal{M}=(S,R,\pi,\ell)\) where \(S\) is a finite set of states, \(R\colon S\times S\to\mathbb{R}_{\geq 0}\) is the transition rate function, \(\pi\in\mathcal{D}(S)\) the initial distribution of states, and \(\ell\colon S\to L\) is a labelling function which assigns to each state an observable label \(\ell(s)\).
The transition rate function assigns rates \(r=R(s,s^{\prime})\) to each pair of states \(s,s^{\prime}\in S\) which are to be seen as transitions of the form \(s\xrightarrow{r}s^{\prime}\). A transition \(s\xrightarrow{r}s^{\prime}\) can only occur if \(r>0\). In this case, the probability that this transition is triggered within \(\tau\in\mathbb{R}_{>0}\) time-units is \(1-e^{-r\,\tau}\). When, from a state \(s\), there is more than one outgoing transition with positive rate, we are in the presence of a _race condition_. In this case, the first transition to be triggered
determines which label is observed as well as the next state of the CTMC. According to these dynamics, the time spent in state \(s\) before any transition occurs, called _dwell time_, is exponentially distributed with parameter \(E(s)=\sum_{s^{\prime}\in S}R(s,s^{\prime})\), called _exit-rate_ of \(s\). A state \(s\) is called _absorbing_ if \(E(s)=0\), that is, \(s\) has no outgoing transition. Accordingly, when the CTMC ends in an absorbing state it will remain in the same state indefinitely. The probability that the transition \(s\xrightarrow{r}s^{\prime}\) is triggered from \(s\) is \(r/E(s)\) and is independent from the time at which it occurs. Accordingly, from the CTMC \(\mathcal{M}\), we construct a (labelled) discrete-time Markov chain \(\mathit{emb}(\mathcal{M})=(S,P,\pi,\ell)\) with transition probability function \(P\colon S\times S\to[0,1]\) defined as
\[P(s,s^{\prime})=\begin{cases}R(s,s^{\prime})/E(s)&\text{if }E(s)\neq 0\\ 1&\text{if }E(s)=0\text{ and }s=s^{\prime}\\ 0&\text{otherwise}\end{cases}\]
A CTMC can be equivalently described as a tuple \((S,\rightarrow,s_{0},\ell)\) where \(\rightarrow\subseteq S\times\mathbb{R}_{\geq 0}\times S\) is a transition _relation_. The transition rate function \(R\) induced by \(\rightarrow\) is obtained as \(R(s,s^{\prime})=\sum\{r\mid s\xrightarrow{r}s^{\prime}\}\) for arbitrary \(s,s^{\prime}\in S\).
An _infinite path_ of a CTMC \(\mathcal{M}\) is a sequence \(s_{0}\tau_{0}s_{1}\tau_{1}s_{2}\tau_{2}\cdots\in(S\times\mathbb{R}_{>0})^{\omega}\) where \(R(s_{i},s_{i+1})>0\) for all \(i\in\mathbb{N}\). A _finite path_ is a sequence \(s_{0}\tau_{0}\cdots s_{k-1}\tau_{k-1}s_{k}\) where \(R(s_{i},s_{i+1})>0\) and \(\tau_{i}\in\mathbb{R}_{>0}\) for all \(i\in\{0,\ldots,k-1\}\) and \(s_{k}\) is absorbing. The meaning of a path is that the system started in state \(s_{0}\), where it stayed for time \(\tau_{0}\), then transitioned to state \(s_{1}\) where it stayed for time \(\tau_{1}\), and so on. For a finite path the system eventually reaches an absorbing state \(s_{k}\), where it remains. We denote by \(\mathbf{Path}_{\mathcal{M}}\) the set of all (infinite and finite) paths of \(\mathcal{M}\). The formal definition of the probability space over \(\mathbf{Path}_{\mathcal{M}}\) induced by \(\mathcal{M}\) can be given by following the classical cylinder set construction (see e.g., [4, 20]).
Finally, we define the random variables \(S_{i}\), \(L_{i}\), and \(T_{i}\) (\(i\in\mathbb{N}\)) that respectively indicate the \(i\)-th state, its label, and \(i\)-th dwell time of a path.
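For illustration, the race-condition dynamics and the embedded chain \(\mathit{emb}(\mathcal{M})\) described above can be sketched in a few lines of Python; the three-state chain below is an arbitrary toy example, not a model from this paper.

```python
import random

# a small labelled CTMC: R[s][s'] is the rate of s -> s', ell maps states to labels
R   = {0: {1: 2.0, 2: 1.0}, 1: {2: 3.0}, 2: {}}          # state 2 is absorbing
ell = {0: 'a', 1: 'b', 2: 'c'}
E   = {s: sum(R[s].values()) for s in R}                 # exit rates E(s)

def embedded_P(s, t):
    """Transition probabilities of the embedded DTMC emb(M)."""
    if E[s] == 0:
        return 1.0 if s == t else 0.0
    return R[s].get(t, 0.0) / E[s]

def sample_path(s0=0):
    """Sample a finite path s0 tau0 s1 tau1 ... until an absorbing state is reached."""
    path, s = [], s0
    while E[s] > 0:
        tau = random.expovariate(E[s])                    # dwell time ~ Exp(E(s))
        nxt = random.choices(list(R[s]), weights=list(R[s].values()))[0]
        path.extend([s, tau])
        s = nxt
    path.append(s)
    return path

print([embedded_P(0, t) for t in (0, 1, 2)])              # [0.0, 2/3, 1/3]
print(sample_path())
```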
### The MM Algorithm
The MM algorithm is an iterative optimisation method. The acronym MM has a double interpretation: in minimization problems, the first M stands for majorize and the second for minorize; dually, in maximization problems, the first M stands for minorize and the second for maximize. In this paper we only focus on maximizing an objective function \(f(\mathbf{x})\), hence we tailor the presentation of the general principles of the MM framework to maximization problems. The MM algorithm is based on the concept of _surrogate function_. A surrogate function \(g(\mathbf{x}\mid\mathbf{x}_{m})\) is said to _minorize_ a function \(f(\mathbf{x})\) at \(\mathbf{x}_{m}\) if
\[f(\mathbf{x}_{m})=g(\mathbf{x}_{m}\mid\mathbf{x}_{m})\,, \tag{1}\] \[f(\mathbf{x})\geq g(\mathbf{x}\mid\mathbf{x}_{m})\quad\text{for all } \mathbf{x}\neq\mathbf{x}_{m}\,. \tag{2}\]
In the maximization variant of the MM algorithm, we maximize the surrogate minorizing function \(g(\mathbf{x}\mid\mathbf{x}_{m})\) rather than the actual function \(f(\mathbf{x})\). If \(\mathbf{x}_{m+1}\) denotes the maximizer of the surrogate \(g(\mathbf{x}\mid\mathbf{x}_{m})\), then we can show that the next iterate \(\mathbf{x}_{m+1}\) forces \(f(\mathbf{x})\) uphill. Indeed, the inequalities
\[f(\mathbf{x}_{m})=g(\mathbf{x}_{m}\mid\mathbf{x}_{m})\leq g(\mathbf{x}_{m+1 }\mid\mathbf{x}_{m})\leq f(\mathbf{x}_{m+1})\]
follow directly from the definition of \(\mathbf{x}_{m+1}\) and the axioms (1) and (2).
The art in devising an MM algorithm revolves around intelligent choices of minorizing functions. This work relies on three inequalities. The first basic minorization builds upon Jensen's inequality. For \(x_{i}>0\), \(y_{i}>0\) (\(i=1\ldots n\)),
\[\ln\left(\sum_{i=1}^{n}x_{i}\right)\geq\sum_{i=1}^{n}\frac{y_{i}}{\sum_{j=1}^{n }y_{j}}\ln\left(\frac{\sum_{j=1}^{n}y_{j}}{y_{i}}x_{i}\right) \tag{3}\]
Note that the above inequality becomes an equality whenever \(x_{i}=y_{i}\) for all \(i=1\ldots n\). Remarkably, the EM algorithm [10] is a special case of the MM algorithm which revolves around the above basic minorization when additionally the values \(x_{i}\) and \(y_{i}\) describe a probability distribution, i.e., \(\sum_{i=1}^{n}x_{i}=1\) and \(\sum_{i=1}^{n}y_{i}=1\).
Our second basic minorization derives from the strict concavity of the logarithm function, which implies for \(x,y>0\) that
\[-\ln x\geq 1-\ln y-x/y \tag{4}\]
with equality if and only if \(x=y\). Note that the above inequality restates the supporting hyperplane property of the convex function \(-\ln x\).
The third basic minorization [23, §8.3] derives from the generalized arithmetic-geometric mean inequality which implies, for positive \(x_{i}\), \(y_{i}\), and \(\alpha_{i}\) and \(\alpha=\sum_{i=1}^{n}\alpha_{i}\), that
\[-\prod_{i=1}^{n}x_{i}^{\alpha_{i}}\geq-\left(\prod_{i=1}^{n}y_{i}^{\alpha_{i}} \right)\sum_{i=1}^{n}\frac{\alpha_{i}}{\alpha}\left(\frac{x_{i}}{y_{i}} \right)^{\alpha}\,. \tag{5}\]
Note again that equality holds when all \(x_{i}=y_{i}\).
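The three basic minorizations (3)-(5) are easy to verify numerically; the sketch below evaluates both sides at random positive points (an illustrative check only).

```python
import numpy as np

rng = np.random.default_rng(0)
x, y, alpha = rng.uniform(0.1, 5.0, size=(3, 4))          # random positive vectors

# (3) Jensen-type minorization of ln(sum_i x_i)
lhs3 = np.log(x.sum())
rhs3 = np.sum(y / y.sum() * np.log(y.sum() / y * x))

# (4) supporting-hyperplane minorization of -ln x (scalars)
lhs4 = -np.log(x[0])
rhs4 = 1 - np.log(y[0]) - x[0] / y[0]

# (5) arithmetic-geometric mean minorization of -prod_i x_i^alpha_i
a = alpha.sum()
lhs5 = -np.prod(x ** alpha)
rhs5 = -np.prod(y ** alpha) * np.sum(alpha / a * (x / y) ** a)

print(lhs3 >= rhs3, lhs4 >= rhs4, lhs5 >= rhs5)            # True True True
```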
Because piecemeal composition of minorization works well, our derivations apply the above basic minorizations to strategic parts of the objective function, leaving other parts untouched. Finally, another aspect that can simplify the derivation of MM algorithms comes from the fact that the iterative maximization procedure hinges on finding \(\mathbf{x}_{m+1}=\operatorname*{arg\,max}_{\mathbf{x}}g(\mathbf{x}\mid \mathbf{x}_{m})\). Therefore, we can equivalently use any other surrogate function \(g^{\prime}(\mathbf{x}\mid\mathbf{x}_{m})\) satisfying \(\operatorname*{arg\,max}_{\mathbf{x}}g(\mathbf{x}\mid\mathbf{x}_{m})= \operatorname*{arg\,max}_{\mathbf{x}}g^{\prime}(\mathbf{x}\mid\mathbf{x}_{m})\). This is for instance the case when \(g(\mathbf{x}\mid\mathbf{x}_{m})\) and \(g^{\prime}(\mathbf{x}\mid\mathbf{x}_{m})\) are equal up to some (irrelevant) constant \(c\), that is \(g(\mathbf{x}\mid\mathbf{x}_{m})=g^{\prime}(\mathbf{x}\mid\mathbf{x}_{m})+c\).
## 3 Parametric Continuous-time Markov chains
As mentioned in the introduction, the Prism language offers constructs for the modular design of CTMCs within a uniform framework that represents synchronous and asynchronous module interaction. For example, consider the Prism models depicted in Fig. 1. The behavior of each module is described by a set of commands which take the form \([\texttt{action}]\;\texttt{guard}\;\rightarrow\texttt{rate}\texttt{: update}\) representing a set of transitions of the module. The guard is a predicate over the state variables in the model. The update and the rate describe a transition that the module can make if the guard is true. The command optionally includes an action used to force two or more modules to make transitions simultaneously (i.e., to synchronize). For example, in the model to the right in Fig. 1, in state \((50,20,5)\) (i.e., \(s=50\), \(i=20\), and \(r=5\)), the composed model can move to state \((49,21,5)\) by synchronizing over the action infection. The rate of this transition is equal to the product of the individual rates of each module participating in an infection transition, which in this case amounts to \(0.01\cdot\texttt{beta}\cdot\texttt{plock}\). Commands that do not have an action represent asynchronous transitions that can be taken independently (i.e., asynchronously) from other modules.
By default, all modules are combined following standard parallel composition in the sense of the parallel operator from the Communicating Sequential Processes algebra (CSP), that is, modules synchronize over all their common actions. The Prism language also offers other CSP-based operators to specify the way in which modules are composed in parallel.
Therefore, a parametric representation of a CTMC described by a Prism model shall consider _transition rate expressions_ which are closed under finite sums and finite products: sums deal with commands with overlapping guards and updates, while products take into account synchronization.
Let \(\mathbf{x}=(x_{1},\ldots,x_{n})\) be a vector of parameters. We write \(\mathcal{E}\) for the set of polynomial maps \(f\colon\mathbb{R}_{\geq 0}^{n}\to\mathbb{R}_{\geq 0}\) of the form \(f(\mathbf{x})=\sum_{i=1}^{m}b_{i}\prod_{j=1}^{n}x_{j}^{a_{ij}}\), where \(b_{i}\in\mathbb{R}_{\geq 0}\) and \(a_{ij}\in\mathbb{N}\) for \(i\in\{1,\ldots,m\}\) and \(j\in\{1,\ldots,n\}\). Note that \(\mathcal{E}\) is a commutative semiring satisfying the requirements established above for transition rate expressions.
We are now ready to introduce the notion of _parametric_ continuous-time Markov chain.
A parametric CTMC is a tuple \(\mathcal{P}=(S,R,s_{0},\ell)\) where \(S\), \(s_{0}\), and \(\ell\) are defined as for CTMCs, and \(R\colon S\times S\to\mathcal{E}\) is a parametric transition rate function.
Intuitively, a parametric CTMC \(\mathcal{P}=(S,R,s_{0},\ell)\) defines a family of CTMCs arising by plugging in concrete values for the parameters \(\mathbf{x}\). Given a parameter evaluation \(\mathbf{v}\in\mathbb{R}_{\geq 0}^{n}\), we denote by \(\mathcal{P}(\mathbf{v})\) the CTMC associated with \(\mathbf{v}\), and \(R(\mathbf{v})\) for its rate transition function. Note that by construction \(R(\mathbf{v})(s,s^{\prime})\geq 0\) for all \(s,s^{\prime}\in S\), therefore \(\mathcal{P}(\mathbf{v})\) is a proper CTMC.
As for CTMCs, parametric transition rate functions can be equivalently described by means of a transition relation \(\to\subseteq S\times\mathcal{E}\times S\), where the parametric transition rate from \(s\) to \(s^{\prime}\) is \(R(s,s^{\prime})(\mathbf{x})=\sum\{f(\mathbf{x})\mid s\stackrel{{ f}}{{\longrightarrow}}s^{\prime}\}\).
Consider the SIR model in Fig. 1 with parameters beta, gamma, and plock. The semantics of this model is a parametric CTMC with states \(S=\{(s,i,r)\mid s,i,r\in\{0,\ldots,10^{5}\}\}\) and initial state \((99936,48,16)\). For example, the initial state has two outgoing transitions: one that goes to \((99935,49,16)\) with rate \(48.96815\cdot\texttt{beta}\cdot\texttt{plock}\), and the other that goes to \((99935,48,17)\) with rate \(49\cdot\texttt{gamma}\cdot\texttt{plock}\).
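For illustration, the parametric rates of these two transitions, and the instantiation \(\mathcal{P}(\mathbf{v})\) for a concrete valuation \(\mathbf{v}\), can be written symbolically; the parameter values used below are arbitrary.

```python
import sympy as sp

beta, gamma, plock = sp.symbols('beta gamma plock', nonnegative=True)

# parametric transition rate function R of the initial state (99936, 48, 16)
R = {
    ((99936, 48, 16), (99935, 49, 16)): 48.96815 * beta * plock,   # infection
    ((99936, 48, 16), (99935, 48, 17)): 49 * gamma * plock,        # recovery
}

# rate expressions are closed under + and *; e.g. synchronisation multiplies module rates
sync_rate = sp.expand((0.01 * beta) * plock)

# instantiating the parametric CTMC with a parameter valuation v gives an ordinary CTMC P(v)
v   = {beta: 0.12, gamma: 0.05, plock: 0.5}
R_v = {tr: float(expr.subs(v)) for tr, expr in R.items()}
print(sync_rate, R_v)
```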
One relevant aspect of the class of parametric CTMCs is the fact that it is closed under parallel composition in the sense described above. As a consequence, the study of parameter estimation of Prism models from observed data can be conveniently addressed as maximum likelihood estimation for parametric CTMCs.
## 4 Learning Parameters from Observed Sample Data
In this section we present two algorithms to estimate the parameters of parametric CTMC \(\mathcal{P}\) from a collection of i.i.d. observation sequences \(\mathcal{O}=\mathbf{o}_{1},\ldots,\mathbf{o}_{J}\). The two algorithms consider two different types of observations: timed and non-timed. A _timed observation_\(\ell_{0:k},\tau_{0:k-1}\) is a finite sequence \(\ell_{0}\tau_{0}\cdots\tau_{k-1}\ell_{k}\) representing consecutive dwell times and state labels observed during a random execution of \(\mathcal{M}\). Similarly, a _non-timed observation_\(\ell_{0:k}\) represents a sequence of consecutive state labels observed during a random execution of \(\mathcal{M}\). Both algorithms follow a maximum likelihood approach: the parameters \(\mathbf{x}\) are estimated to maximize the joint likelihood \(\mathcal{L}(\mathcal{P}(\mathbf{x})|\mathcal{O})\) of the observed data. When \(\mathcal{P}\) and \(\mathcal{O}\) are clear from the context, we simply write \(\mathcal{L}(\mathbf{x})\) for the joint likelihood.
Hereafter we present a solution to the maximum likelihood estimation problem building on an optimization framework known by the name MM algorithm [23, 24]. In this line, our algorithms start with an initial hypothesis \(\mathbf{x}_{0}\) and iteratively improve the current hypothesis
\(\mathbf{x}_{m}\), in the sense that the likelihood associated with the next hypothesis \(\mathbf{x}_{m+1}\) enjoys the inequality \(\mathcal{L}(\mathbf{x}_{m})\leq\mathcal{L}(\mathbf{x}_{m+1})\). The procedure terminates when the improvement does not exceed a fixed threshold \(\epsilon\), namely when \(\mathcal{L}(\mathbf{x}_{m})-\mathcal{L}(\mathbf{x}_{m-1})\leq\epsilon\).
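Schematically, both algorithms follow the same loop. The sketch below is a generic skeleton of this iteration, with the likelihood and the surrogate-maximization step left abstract (they are supplied by the concrete learning scenario).

```python
def mm_estimate(loglik, argmax_surrogate, x0, eps=1e-6, max_iter=1000):
    """Generic MM loop: repeatedly maximize the surrogate built at the current
    hypothesis until the (log-)likelihood improvement drops below eps."""
    x, prev = x0, float('-inf')
    for _ in range(max_iter):
        cur = loglik(x)
        if cur - prev <= eps:
            break
        prev, x = cur, argmax_surrogate(x)   # x_{m+1} = argmax_x g(x | x_m)
    return x
```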
### Learning from Timed Observations
Assume we have \(J\) i.i.d. timed observation sequences \(\mathcal{O}=\mathbf{o}_{1},\ldots,\mathbf{o}_{J}\) where \(\mathbf{o}_{j}=\ell_{0:k_{j}}^{j},\tau_{0:k_{j}-1}^{j}\) (\(j=1\ldots J\)). We want to estimate a valuation of the parameters \(\mathbf{x}\) of \(\mathcal{P}\) that maximises the joint likelihood function \(\mathcal{L}(\mathbf{x})=\prod_{j=1}^{J}l(\mathbf{o}_{j}|\mathcal{P}(\mathbf{x}))\), where the likelihood of an observation \(\mathbf{o}=\ell_{0:k},\tau_{0:k-1}\) for a generic CTMC \(\mathcal{M}\) is
\[l(\mathbf{o}|\mathcal{M})=\sum_{s_{0:k}}l(S_{0:k}=s_{0:k},L_{0:k}=\ell_{0:k},T_{0:k-1}=\tau_{0:k-1}|\mathcal{M})\] \[\quad=\sum_{s_{0:k}}p[S_{0:k}=s_{0:k},L_{0:k}=\ell_{0:k}|\mathcal{M}]\cdot l(S_{0:k}=s_{0:k},T_{0:k-1}=\tau_{0:k-1}|\mathcal{M})\] \[\quad=\sum_{s_{0:k}}[\![\ell(s_{0:k})=\ell_{0:k}]\!]\,\big(\prod_{i=0}^{k-1}R(s_{i},s_{i+1})/E(s_{i})\big)\,\big(\prod_{i=0}^{k-1}E(s_{i})\,e^{-E(s_{i})\tau_{i}}\big)\] \[\quad=\sum_{s_{0:k}}[\![\ell(s_{0:k})=\ell_{0:k}]\!]\prod_{i=0}^{k-1}R(s_{i},s_{i+1})\cdot e^{-E(s_{i})\tau_{i}}\,. \tag{6}\]
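Eq.(6) can be evaluated directly by enumerating state sequences. The sketch below does so for a small CTMC; the chain, its labels, and the initial state distribution (which Eq.(6) leaves implicit) are arbitrary choices made for the example.

```python
import math
from itertools import product

# a small labelled CTMC (arbitrary example)
S   = [0, 1, 2]
R   = {(0, 1): 1.5, (0, 2): 0.5, (1, 0): 1.0, (1, 2): 2.0}   # missing pairs have rate 0
ell = {0: 'a', 1: 'b', 2: 'c'}
pi  = {0: 1.0, 1: 0.0, 2: 0.0}                                # initial distribution (assumed)
E   = {s: sum(r for (u, _), r in R.items() if u == s) for s in S}

def likelihood(labels, dwell):
    """l(o|M) of a timed observation o = labels, dwell as in Eq.(6), summed over state paths."""
    k = len(labels) - 1
    total = 0.0
    for path in product(S, repeat=k + 1):
        if any(ell[s] != l for s, l in zip(path, labels)):
            continue
        w = pi[path[0]]
        for i in range(k):
            w *= R.get((path[i], path[i + 1]), 0.0) * math.exp(-E[path[i]] * dwell[i])
        total += w
    return total

print(likelihood(['a', 'b', 'c'], [0.3, 0.7]))
```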
Before presenting an MM algorithm to solve the MLE problem above, we find it convenient to introduce some notation. Let \(\mathcal{P}=(S,\rightarrow,s_{0},\ell)\); we write \(f_{\rho}\) for the rate map of a transition \(\rho\in\rightarrow\), and \(s\rightarrow\cdot\) for the set of transitions departing from \(s\in S\).
Without loss of generality, we assume that the rate function \(f_{\rho}\) of a transition is either a constant map, i.e., \(f_{\rho}(\mathbf{x})=c_{\rho}\) for some \(c_{\rho}\geq 0\), or a map of the form \(f_{\rho}(\mathbf{x})=c_{\rho}\prod_{i=1}^{n}x_{i}^{a_{\rho i}}\) for some \(c_{\rho}>0\) and \(a_{\rho i}>0\) for some \(i\in\{1,\ldots,n\}\); we write \(a_{\rho}\) for \(\sum_{i=1}^{n}a_{\rho i}\). We denote by \(\xrightarrow{c}\) the subset of transitions with constant rate function and \(\xrightarrow{\mathbf{x}}\) for the remaining transitions.
To maximize \(\mathcal{L}(\mathbf{x})\) we propose to employ an MM algorithm based on the following surrogate function \(g(\mathbf{x}|\mathbf{x}_{m})=\sum_{i=1}^{n}g(x_{i}|\mathbf{x}_{m})\) where
\[g(x_{i}|\mathbf{x}_{m})=\sum_{\rho\in\xrightarrow{\mathbf{x}}}\xi_{\rho}a_{ \rho i}\ln x_{i}-\sum_{s}\sum_{\rho\in s\xrightarrow{\mathbf{x}}}\frac{f_{ \rho}(\mathbf{x}_{m})a_{\rho i}\gamma_{s}}{a_{\rho}(x_{mi})^{a_{\rho}}}x_{i}^{a _{\rho}} \tag{7}\]
Here \(\gamma_{s}=\sum_{j=1}^{J}\sum_{t=0}^{k_{j}-1}\gamma_{s}^{j}(t)\tau_{t}^{j}\) and \(\xi_{\rho}=\sum_{j=1}^{J}\sum_{t=0}^{k_{j}-1}\xi_{\rho}^{j}(t)\), where \(\gamma_{s}^{j}(t)\) denotes the likelihood that, having observed \(\mathbf{o}_{j}\) on a random execution of \(\mathcal{P}(\mathbf{x}_{m})\), the state \(S_{t}\) is \(s\); and \(\xi_{\rho}^{j}(t)\) is the likelihood that, for such a random execution, the transition performed from state \(S_{t}\) is \(\rho\).
The following theorem states that the surrogate function \(g(\mathbf{x}|\mathbf{x}_{m})\) is a minorizer of the log-likelihood relative to the observed dataset \(\mathcal{O}\).
The surrogate function \(g(\mathbf{x}|\mathbf{x}_{m})\) minorizes \(\ln\mathcal{L}(\mathbf{x})\) at \(\mathbf{x}_{m}\) up to an irrelevant constant.
By Theorem 3.1 and the fact that the logarithm is an increasing function, we obtain that the parameter valuation that achieves the maximum of \(g(\mathbf{x}|\mathbf{x}_{m})\) improves the current hypothesis \(\mathbf{x}_{m}\) relative to likelihood function \(\mathcal{L}(\mathbf{x})\).
Let \(\mathbf{x}_{m+1}=\arg\max_{\mathbf{x}}g(\mathbf{x}|\mathbf{x}_{m})\), then \(\mathcal{L}(\mathbf{x}_{m})\leq\mathcal{L}(\mathbf{x}_{m+1})\).
The surrogate function \(g(\mathbf{x}|\mathbf{x}_{m})\) is easier to maximize than \(\mathcal{L}(\mathbf{x})\) because its parameters are separated. Indeed, maximization of \(g(\mathbf{x}|\mathbf{x}_{m})\) is done by point-wise maximization of each univariate function \(g(x_{i}|\mathbf{x}_{m})\). This has two main advantages: first, it is easier to handle high-dimensional problems [23, 24]; second, one can choose to fix the value of some parameters, say those with indices in \(I\subset\{1\ldots n\}\), and the maximization of \(g(\mathbf{x}|\mathbf{x}_{m})\) can then be performed by maximizing \(g(x_{i}|\mathbf{x}_{m})\) for each \(i\notin I\).
The maxima of \(g(x_{i}|\mathbf{x}_{m})\) are found among the _non-negative_ roots1 of the polynomial function \(P_{i}\colon\mathbb{R}\to\mathbb{R}\)
Footnote 1: Note that \(P_{i}\) always admits non-negative roots. Indeed, \(P_{i}(0)\leq 0\) and \(P_{i}(M)>0\) for \(M>0\) sufficiently large. Therefore, by the intermediate value theorem, there exists \(y_{0}\in[0,M)\) such that \(P_{i}(y_{0})=0\).
\[P_{i}(y)=\sum_{s}\sum_{\rho\in s\xrightarrow{\ast}}\,\frac{f_{\rho}(\mathbf{x} _{m})a_{\rho i}\gamma_{s}}{(x_{mi})^{a_{\rho}}}y^{a_{\rho}}-\sum_{\rho\in \xrightarrow{\ast}}\xi_{\rho}a_{\rho i} \tag{8}\]
There are some cases when (8) admits a closed-form solution. For instance, when the parameter index \(i\) satisfies the property \(\forall\rho\in\xrightarrow{\ast}.a_{\rho i}>0\implies a_{\rho}=C\) for some constant \(C\in\mathbb{N}\), then maximization of \(g(x_{i}|\mathbf{x}_{m})\) leads to the following update
\[x_{(m+1)i}=\left[\frac{(x_{mi})^{C}\sum_{\rho\in\xrightarrow{\ast}}\xi_{\rho}a_{\rho i}}{\sum_{s}\sum_{\rho\in s\xrightarrow{\ast}}f_{\rho}(\mathbf{x}_{m})a_{\rho i}\gamma_{s}}\right]^{1/C}\]
A classic situation in which the above condition is fulfilled occurs when, for all transitions \(\rho\) in which \(x_{i}\) appears (i.e., \(a_{\rho i}>0\)), the transition rate is \(f_{\rho}(\mathbf{x})=c_{\rho}x_{i}\) (i.e., \(a_{\rho i}=a_{\rho}=1\)). In that case, the above equation simplifies to
\[x_{(m+1)i}=\frac{\sum_{\rho\in\xrightarrow{\ast}}\xi_{\rho}}{\sum_{s}\sum_{ \rho\in s\xrightarrow{\ast}}c_{\rho}\gamma_{s}}\]
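Assuming the `Transition` sketch introduced earlier, the closed-form update can be written as below; `xi[r]` and `gamma[s]` stand for the aggregated coefficients \(\xi_{\rho}\) and \(\gamma_{s}\), and restricting the sums to transitions with \(a_{\rho i}>0\) keeps the sketch applicable when other parameters are present as well.

```python
def closed_form_update(i, C, transitions, xi, gamma, x_m):
    """MM update for a parameter x_i when every transition mentioning x_i has total degree C."""
    num = sum(xi[r] * t.a[i] for r, t in enumerate(transitions) if t.a[i] > 0)
    den = sum(t.rate(x_m) * t.a[i] * gamma[t.source] for t in transitions if t.a[i] > 0)
    return (x_m[i] ** C * num / den) ** (1.0 / C)
```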
For example, the parametric CTMC associated with the SIR models in Fig. 1 satisfies the former property for all parameters, because all transition rates are expressions either of the form \(c\cdot\texttt{plock}\cdot\texttt{beta}\) or of the form \(c\cdot\texttt{plock}\cdot\texttt{gamma}\) for some constant \(c>0\). Furthermore, if we fix the value of the parameter plock, the remaining parameters satisfy the latter property. In Section 6, we will take advantage of this fact in our calculations.
To complete the picture, we show how to compute the coefficients \(\gamma_{s}^{j}(t)\) and \(\xi_{\rho}^{j}(t)\). To this end, we employ standard forward and backward procedures. We define the forward function \(\alpha_{s}^{j}(t)\) and the backward function \(\beta_{s}^{j}(t)\) respectively as
\[\alpha_{s}^{j}(t) =l(L_{0:t}=\ell_{0:t}^{j},T_{0:t}=\tau_{0:t}^{j},S_{t}=s|\mathcal{ P}(\mathbf{x}_{m}))\,,\text{ and}\] \[\beta_{s}^{j}(t) =l(L_{t+1:k_{j}}=\ell_{t+1:k_{j}}^{j},T_{t+1:k_{j}-1}=\tau_{t+1:k _{j}-1}^{j}|S_{t}=s,\mathcal{P}(\mathbf{x}_{m}))\,.\]
These can be computed using dynamic programming according to the following recurrences: let \(\mathcal{P}(\mathbf{x}_{m})=(S,R,s_{0},\ell)\), then
\[\alpha_{s}^{j}(t) =\begin{cases}\llbracket s=s_{0}\rrbracket\,\omega_{s}^{j}(t)& \text{if }t=0\\ \omega_{s}^{j}(t)\sum_{s^{\prime}\in S}\frac{R(s^{\prime},s)}{E(s^{\prime})} \,\alpha_{s^{\prime}}^{j}(t-1)&\text{if }0<t\leq k_{j}\end{cases} \tag{9}\] \[\beta_{s}^{j}(t) =\begin{cases}1&\text{if }t=k_{j}\\ \sum_{s^{\prime}\in S}\frac{R(s,s^{\prime})}{E(s)}\,\beta_{s^{\prime}}^{j}(t +1)\,\omega_{s^{\prime}}^{j}(t+1)&\text{if }0\leq t<k_{j}\end{cases} \tag{10}\]
where
\[\omega_{s}^{j}(t)=\begin{cases}\llbracket\ell(s)=\ell_{t}^{j}\rrbracket E(s)e^{-E(s)\tau_{t}^{j}}&\text{if }0\leq t<k_{j},\\ \llbracket\ell(s)=\ell_{t}^{j}\rrbracket&\text{if }t=k_{j}.\end{cases} \tag{11}\]
Finally, for \(s\in S\) and \(\rho=(s\xrightarrow{f_{\rho}}s^{\prime})\), \(\gamma_{s}^{j}(t)\) and \(\xi_{\rho}^{j}(t)\) are related to the forward and backward functions as follows
\[\gamma_{s}^{j}(t) =\frac{\alpha_{s}^{j}(t)\,\beta_{s}^{j}(t)}{\sum_{s^{\prime}\in S }\alpha_{s^{\prime}}^{j}(t)\,\beta_{s^{\prime}}^{j}(t)}\,, \xi_{\rho}^{j}(t) =\frac{\alpha_{s}^{j}(t)f_{\rho}(\mathbf{x}_{m})\,\omega_{s^{ \prime}}^{j}(t+1)\,\beta_{s^{\prime}}^{j}(t+1)}{E(s)\sum_{s^{\prime \prime}\in S}\alpha_{s^{\prime\prime}}^{j}(t)\,\beta_{s^{\prime\prime}}^{j}(t) }\,. \tag{12}\]
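A direct (unoptimized) transcription of the recurrences (9)-(12) for a single timed observation could look as follows; numerical safeguards such as scaling, which a production implementation would need for long observation sequences, are omitted here.

```python
import numpy as np

def forward_backward_timed(R, labels, s0, obs_labels, obs_times):
    """One forward-backward pass (eqs. (9)-(12)) for a single timed observation.
    R: |S|x|S| rate matrix of P(x_m); labels[s]: label of state s; s0: initial state;
    obs_labels: l_0..l_k; obs_times: tau_0..tau_{k-1}."""
    S, k = R.shape[0], len(obs_labels) - 1
    E = R.sum(axis=1)                                          # exit rates E(s)

    def omega(s, t):                                           # eq. (11)
        if labels[s] != obs_labels[t]:
            return 0.0
        return E[s] * np.exp(-E[s] * obs_times[t]) if t < k else 1.0

    alpha = np.zeros((k + 1, S))
    beta = np.zeros((k + 1, S))
    alpha[0] = [(s == s0) * omega(s, 0) for s in range(S)]
    for t in range(1, k + 1):                                  # eq. (9)
        for s in range(S):
            alpha[t, s] = omega(s, t) * sum(R[sp, s] / E[sp] * alpha[t - 1, sp]
                                            for sp in range(S) if E[sp] > 0)
    beta[k] = 1.0
    for t in range(k - 1, -1, -1):                             # eq. (10)
        for s in range(S):
            if E[s] > 0:
                beta[t, s] = sum(R[s, sp] / E[s] * beta[t + 1, sp] * omega(sp, t + 1)
                                 for sp in range(S))

    norm = (alpha * beta).sum(axis=1)                          # per-t normalizer (observation likelihood)
    gamma = alpha * beta / norm[:, None]                       # gamma_s(t), eq. (12)

    def xi(s, sp, t):                                          # xi_rho(t) for rho = s -> sp, eq. (12)
        return alpha[t, s] * R[s, sp] * omega(sp, t + 1) * beta[t + 1, sp] / (E[s] * norm[t])

    return gamma, xi
```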
### Learning from Non-timed Observations
Let us now assume we have collected \(J\) i.i.d. non-timed observation sequences \(\mathcal{O}=\mathbf{o}_{1},\ldots,\mathbf{o}_{J}\) where \(\mathbf{o}_{j}=\ell_{0:k_{j}}^{j}\) (\(j=1\ldots J\)). As done before, we want to maximize the joint likelihood function \(\mathcal{L}(\mathbf{x})=\prod_{j=1}^{J}l(\mathbf{o}_{j}|\mathcal{P}(\mathbf{x}))\), where the likelihood of an arbitrary non-timed observation \(\mathbf{o}=\ell_{0:k}\) relative to the CTMC \(\mathcal{M}\) is
\[l(\ell_{0:k}|\mathcal{M}) =\sum_{s_{0:k}}P[S_{0:k}=s_{0:k},L_{0:k}=\ell_{0:k}|\mathcal{M}] \tag{13}\] \[=\sum_{s_{0:k}}[\![\ell(s_{0:k})=\ell_{0:k}]\!]\prod_{i=0}^{k-1}R(s_{i},s_{i+1})/E(s_{i})\,. \tag{14}\]
Looking at the formula above, it is clear that whenever two CTMCs \(\mathcal{M}_{1}\) and \(\mathcal{M}_{2}\) have the same embedded Markov chain, they will also have the same likelihood value, i.e., \(\mathcal{L}(\mathcal{M}_{1}|\mathcal{O})=\mathcal{L}(\mathcal{M}_{2}|\mathcal{O})\). The fact that dwell time variables are not observable leaves us with an MLE objective that does not fully capture the continuous-time aspects of the model under estimation.
A similar problem shows up also in the Bradley-Terry model of ranking [6]. This model is intuitively understood via a sports analogy. Given a set of teams where each team \(i\) is assigned a rank parameter \(r_{i}>0\), and assuming that ties are not possible, team \(i\) beats team \(j\) with probability \(r_{i}/(r_{i}+r_{j})\). If this outcome occurs \(c_{ij}\) times during a tournament, then the probability of the whole tournament is \(L(\mathbf{r})=\prod_{i,j}(r_{i}/(r_{i}+r_{j}))^{c_{ij}}\), assuming that games are independent of one another. Clearly, \(L(\mathbf{r})=L(c\,\mathbf{r})\) for any \(c>0\). Under mild assumptions, the function \(L(\mathbf{r})\) admits a unique maximum when the value of one rank, say \(r_{1}\), is fixed a priori.
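The scale invariance \(L(\mathbf{r})=L(c\,\mathbf{r})\) is easy to observe numerically, as in the toy computation below (the team ranks and win counts are made up for illustration).

```python
import numpy as np

def tournament_likelihood(ranks, wins):
    """wins[i][j]: number of times team i beat team j; ranks[i]: rank parameter r_i > 0."""
    L = 1.0
    for i in range(len(ranks)):
        for j in range(len(ranks)):
            if i != j and wins[i][j] > 0:
                L *= (ranks[i] / (ranks[i] + ranks[j])) ** wins[i][j]
    return L

r = np.array([1.0, 2.0, 0.5])
wins = [[0, 3, 1], [2, 0, 4], [1, 1, 0]]
print(tournament_likelihood(r, wins))        # identical to ...
print(tournament_likelihood(10 * r, wins))   # ... the likelihood of the rescaled ranks
```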
Back to our problem, we claim that the race conditions among transitions can be interpreted under the Bradley-Terry model of ranking. As a consequence, when the number of parametric transitions is sufficiently small relative to that of constant transitions, the estimation of the unknown transition rates can hinge on the value of the transition rates that are fixed, leading the algorithm to converge to the real parameter values.
For the non-timed maximum likelihood estimation problem we devise an MM algorithm based on the surrogate function \(h(\mathbf{x}|\mathbf{x}_{m})=\sum_{i=1}^{n}h(x_{i}|\mathbf{x}_{m})\) for
\[h(x_{i}|\mathbf{x}_{m})=\sum_{\rho\in\xrightarrow{\mathbf{x}}}\hat{\xi}_{\rho }a_{\rho i}\ln x_{i}-\sum_{s}\sum_{\rho\in s\xrightarrow{\mathbf{x}}}\frac{f_ {\rho}(\mathbf{x}_{m})\,a_{\rho i}\,\hat{\gamma}_{s}}{E_{m}(s)\,a_{\rho}\,x_{ mi}^{a_{\rho}}}x_{i}^{a_{\rho}} \tag{15}\]
where \(E_{m}(s)\) denotes the exit rate of the state \(s\) in \(\mathcal{P}(\mathbf{x}_{m})\), \(\hat{\gamma}_{s}=\sum_{j=1}^{J}\sum_{t=0}^{k_{j}-1}\hat{\gamma}_{s}^{j}(t)\), and \(\hat{\xi}_{\rho}=\sum_{j=1}^{J}\sum_{t=0}^{k_{j}-1}\hat{\xi}_{\rho}^{j}(t)\).
This time, the coefficients \(\hat{\gamma}_{s}^{j}(t)\) and \(\hat{\xi}_{\rho}^{j}(t)\) denote respectively the probability that, having observed \(\mathbf{o}_{j}\) in a random execution of \(\mathcal{P}(\mathbf{x}_{m})\), the state \(S_{t}\) is \(s\), and the probability that the transition performed in state \(S_{t}\) is \(\rho\). \(\hat{\gamma}_{s}^{j}(t)\) and \(\hat{\xi}_{\rho}^{j}(t)\) can be computed using the same dynamic programming procedure described in Section 4.1 by replacing each occurrence of \(\omega_{s}^{j}(t)\) with \(\hat{\omega}_{s}^{j}(t)=[\![\ell(s)=\ell_{t}^{j}]\!]\).
The following theorem states that the surrogate function \(h(\mathbf{x}|\mathbf{x}_{m})\) is a minorizer of the log-likelihood relative to the observed (non-timed) dataset \(\mathcal{O}\).
The surrogate function \(h(\mathbf{x}|\mathbf{x}_{m})\) minorizes \(\ln\mathcal{L}(\mathbf{x})\) at \(\mathbf{x}_{m}\) up to an irrelevant constant.
Proof.: (sketch) To ease the presentation, we assume that the parametric CTMC \(\mathcal{P}\) under study has at most one transition between each pair of states. Starting from the log-likelihood function \(\ln\mathcal{L}(\mathbf{x})\), we proceed with the following minorization steps [2]
\[\ln\mathcal{L}(\mathbf{x})=\sum_{j=1}^{J}\ln l(\mathbf{o}_{j}|\mathcal{P}(\mathbf{x}))=\sum_{j=1}^{J}\ln\sum_{s_{0:k_{j}}}P[s_{0:k_{j}},\mathbf{o}_{j}|\mathcal{P}(\mathbf{x})]\] (by (13)) \[\geq\sum_{j=1}^{J}\sum_{s_{0:k_{j}}}P[s_{0:k_{j}}|\mathbf{o}_{j},\mathcal{P}(\mathbf{x}_{m})]\ln\left(\frac{P[s_{0:k_{j}},\mathbf{o}_{j}|\mathcal{P}(\mathbf{x})]}{P[s_{0:k_{j}}|\mathbf{o}_{j},\mathcal{P}(\mathbf{x}_{m})]}\right)\] (by (3)) \[\cong\sum_{j=1}^{J}\sum_{t=0}^{k_{j}-1}\sum_{s_{0:k_{j}}}P[s_{0:k_{j}}|\mathbf{o}_{j},\mathcal{P}(\mathbf{x}_{m})]\big{(}\ln R(s_{t},s_{t+1})-\ln E(s_{t})\big{)}\] (by (14)) \[\cong\sum_{\rho\in\xrightarrow{\ast}}\hat{\xi}_{\rho}\ln f_{\rho}(\mathbf{x})+\sum_{s}\hat{\gamma}_{s}(-\ln E(s))\] (up-to const) \[\geq\sum_{i=1}^{n}\sum_{\rho\in\xrightarrow{\ast}}\hat{\xi}_{\rho}a_{\rho i}\ln x_{i}+\sum_{s}\hat{\gamma}_{s}\left(-\frac{E(s)}{E_{m}(s)}\right)\] (by (4), up-to const) \[\geq\sum_{i=1}^{n}\left[\sum_{\rho\in\xrightarrow{\ast}}\hat{\xi}_{\rho}a_{\rho i}\ln x_{i}-\sum_{s}\sum_{\rho\in s\xrightarrow{\ast}}\frac{\hat{\gamma}_{s}f_{\rho}(\mathbf{x}_{m})a_{\rho i}}{E_{m}(s)a_{\rho}x_{mi}^{a_{\rho}}}x_{i}^{a_{\rho}}\right]\] (\(\triangle\)) \[=h(\mathbf{x}|\mathbf{x}_{m})\] (by (15))
where the step (\(\triangle\)) is justified by the minorization of \(-E(s)\) obtained via (5) as follows
\[-E(s)\cong\sum_{\rho\in s\xrightarrow{\ast}}c_{\rho}\left(-\prod_{i=1}^{n}x_{ i}^{a_{\rho i}}\right)\geq\sum_{\rho\in s\xrightarrow{\ast}}-f_{\rho}(\mathbf{x}_{m}) \sum_{i=1}^{n}\frac{a_{\rho i}}{a_{\rho}}\left(\frac{x_{i}}{x_{mi}}\right)^{a_{ \rho}}.\]
Hence, there exists a constant \(C>0\) such that \(h(\mathbf{x}|\mathbf{x}_{m})+C\) minorizes \(\ln\mathcal{L}(\mathbf{x})\) at \(\mathbf{x}_{m}\).
Notably, in the proof of Theorem 3 we employ the minorization (4) used in [25] for finding rankings in the Bradley-Terry model.
As an immediate corollary of Theorem 3, we have that the parameter valuations that maximize \(h(\mathbf{x}|\mathbf{x}_{m})\) improve the current hypothesis \(\mathbf{x}_{m}\) with respect to the ML objective.
Let \(\mathbf{x}_{m+1}=\arg\max_{\mathbf{x}}h(\mathbf{x}|\mathbf{x}_{m})\), then \(\mathcal{L}(\mathbf{x}_{m})\leq\mathcal{L}(\mathbf{x}_{m+1})\).
As before, maximization of \(h(\mathbf{x}|\mathbf{x}_{m})\) is achieved by point-wise maximization of \(h(x_{i}|\mathbf{x}_{m})\). The maxima of \(h(x_{i}|\mathbf{x}_{m})\) are found among the _non-negative_ roots of the polynomial function
\[Q_{i}(y)=\sum_{s}\sum_{\rho\in s\xrightarrow{\ast}}\frac{f_{\rho}(\mathbf{x}_{m})a_{\rho i}\hat{\gamma}_{s}}{E_{m}(s)(x_{mi})^{a_{\rho}}}y^{a_{\rho}}-\sum_{\rho\in\xrightarrow{\ast}}\hat{\xi}_{\rho}a_{\rho i} \tag{16}\]
By arguments similar to those explained in Remark 3, Equation (16) may admit a closed-form solution.
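When no closed form is available, the update can be computed by collecting the coefficients of \(Q_{i}\) and examining its non-negative roots. The sketch below reuses the `Transition` representation introduced earlier and assumes integer total degrees \(a_{\rho}\); `xi_hat`, `gamma_hat` and `E_m` stand for the coefficients \(\hat{\xi}_{\rho}\), \(\hat{\gamma}_{s}\) and the exit rates of \(\mathcal{P}(\mathbf{x}_{m})\).

```python
import numpy as np

def nontimed_update(i, transitions, xi_hat, gamma_hat, E_m, x_m):
    """Maximize h(x_i | x_m) over the non-negative roots of Q_i (eq. (16))."""
    degs = [int(t.a_total) for t in transitions if t.a[i] > 0]
    coeffs = np.zeros(max(degs) + 1)                 # coeffs[d] multiplies y**d
    for r, t in enumerate(transitions):
        if t.a[i] > 0:
            d = int(t.a_total)
            coeffs[d] += t.rate(x_m) * t.a[i] * gamma_hat[t.source] / (E_m[t.source] * x_m[i] ** d)
            coeffs[0] -= xi_hat[r] * t.a[i]
    roots = np.roots(coeffs[::-1])                   # numpy expects highest-degree coefficient first
    candidates = [y.real for y in roots if abs(y.imag) < 1e-9 and y.real > 0]

    def h(y):                                        # the univariate surrogate h(x_i | x_m), eq. (15)
        val = 0.0
        for r, t in enumerate(transitions):
            if t.a[i] > 0:
                d = int(t.a_total)
                val += xi_hat[r] * t.a[i] * np.log(y)
                val -= t.rate(x_m) * t.a[i] * gamma_hat[t.source] * y ** d / (E_m[t.source] * d * x_m[i] ** d)
        return val

    return max(candidates, key=h)
```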
## 5 Experimental evaluation
We implemented the algorithms from Section 4 as an extension of the Jajapy Python library [29], which has the advantage of being compatible with Prism models. In this section, we present an empirical evaluation of the efficiency of our algorithms as well as the quality of their outcome. To this end, we use the tandem queueing network model from [16] (_cf._ Fig. 2) as a benchmark for our evaluation.
The experiments have been designed according to the following setup. We assume that the state of serverC is fully observable (i.e., its state variables sc and ph are observable), as well as the size c of the queue and the value of lambda. In contrast, we assume that the state of serverM is not observable.
Each experiment consists in estimating the value of the parameters mu1a, mu1b, mu2, and kappa from a training set consisting of 100 observation sequences of length 30, generated by simulating the Prism model depicted in Fig. 2. We perform this experiment both using timed and non-timed observations, by increasing the size c of the queue until the running time of the estimation exceeds a time-out set to 1 hour. We repeat each experiment 10 times by randomly re-sampling the initial values of each unknown parameter \(x_{i}\) in the interval [0.1, 5.0]. We annotate the running time as well as the relative error \(\delta_{i}\) for each parameter \(x_{i}\), calculated according to the formula \(\delta_{i}=|e_{i}-r_{i}|/|r_{i}|\), where \(e_{i}\) and \(r_{i}\) are respectively the estimated value and the real value of \(x_{i}\).
Table 1 reports the results for some selected experiments. The second and third columns provide respectively the number of states and transitions of the parametric CTMC resulting from the choice of c; the fourth column reports the average running time; while the fifth (resp. sixth) column details the average \(L_{1}\)-norm (resp. \(L_{\infty}\)-norm) of the vector \(\delta=(\delta_{i})\), calculated as \(\|\delta\|_{1}=\sum_{i}|\delta_{i}|\) (resp. \(\|\delta\|_{\infty}=\max_{i}|\delta_{i}|\)).
Fig. 3 reports the results of all the experiments in a graphical format where measurements are presented together with their respective error bars.
Figure 2: Prism model for the tandem queueing network from [16].
We observe that the running time is quadratic in the number of states (equivalently, linear in the size \(|S|+|\rightarrow|\) of the model) both for timed and non-timed observations. However, for non-timed observations, the variance of the measured running times tends to grow with the size of the model. In this respect, we observed that large models required more iterations than small models to converge. Nevertheless, all experiments required at most 20 iterations.
As one may expect, the variance of the measured relative errors is larger in the experiments performed with non-timed observations, and the quality of the estimation is better when employing timed observations. Notably, for timed observations, the quality of the estimation remained stable even though the size of the model increased relative to the size of the training set. This may be explained by the fact that the parameters occur in many transitions.
## 6 Case Study: SIR modeling of pandemic
In this section, we take as a case study the modeling pipeline proposed by Milazzo [27] for the analysis and simulation in Prism of the spread of COVID-19 in the presence of lockdown countermeasures. The modeling pipeline includes: (i) parameter estimation from real data based on a modified SIR model described by means of a system of Ordinary Differential Equations; (ii) translation of the modified SIR model into a CTMC expressed as a Prism model; and (iii) stochastic simulation and model checking with Prism.
In particular, the Prism model devised in step (ii) is exactly the model depicted in Fig. 1 (left). However, to perform the analysis, Milazzo had to apply "a couple of modeling tricks (variable pruning and upper bounds) that allowed state space of the model constructed by Prism to be reduced by several orders of magnitude. The introduction of upper bounds to the values of variables actually introduces a small approximation in the model, that is negligible in practically relevant cases" [27]. We argue that these kinds of modeling tricks are not uncommon in formal verification, but they require the modeler to ensure that the parameter values estimated for the original model are still valid in the approximated one.
In this section, we showcase the use of our algorithm to semi-automatize this task. Specifically, we generate two training sets by simulating the SIR model in Fig. 1 using Prism and, based on that, we re-estimate beta, gamma, and plock on an approximated version of the model (Fig. 4) which is amenable to analysis in Prism.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline \multirow{2}{*}{c} & \multirow{2}{*}{\(|S|\)} & \multirow{2}{*}{\(|\rightarrow|\)} & \multicolumn{2}{c|}{Running time (s)} & \multicolumn{2}{c|}{\(\|\delta\|_{1}\)} & \multicolumn{2}{c|}{\(\|\delta\|_{\infty}\)} \\ \cline{4-9} & & & Timed & Non-timed & Timed & Non-timed & Timed & Non-timed \\ \hline \hline
4 & 45 & 123 & 4.336 & 15.346 & 0.226 & 0.251 & 0.13 & 0.13 \\ \hline
6 & 91 & 269 & 13.219 & 38.661 & 0.399 & 0.509 & 0.173 & 0.329 \\ \hline
8 & 153 & 471 & 37.42 & 90.952 & 0.322 & 0.387 & 0.183 & 0.187 \\ \hline
10 & 231 & 729 & 76.078 & 170.044 & 0.359 & 0.346 & 0.17 & 0.191 \\ \hline
12 & 325 & 1043 & 160.694 & 276.383 & 0.343 & 0.616 & 0.165 & 0.289 \\ \hline
14 & 435 & 1413 & 264.978 & 623.057 & 0.373 & 0.263 & 0.195 & 0.117 \\ \hline
16 & 561 & 1839 & 458.766 & 774.642 & 0.406 & 0.427 & 0.245 & 0.192 \\ \hline
18 & 703 & 2321 & 871.39 & 1134.037 & 0.249 & 0.783 & 0.14 & 0.49 \\ \hline
20 & 861 & 2859 & 1425.65 & 1225.539 & 0.416 & 0.987 & 0.281 & 0.519 \\ \hline
22 & 1035 & 3453 & 2031.587 & 1297.383 & 0.546 & 1.013 & 0.278 & 0.602 \\ \hline
24 & 1225 & 4103 & 2675.794 & 1924.074 & 0.441 & 1.892 & 0.281 & 1.599 \\ \hline \end{tabular}
\end{table}
Table 1: Comparison of the performance of the estimation for timed and non-timed observations on the tandem queueing network with different size of the queue.
The first training set represents the spread of the disease without lockdown (i.e., \(\mathtt{plock}=1\)), while the second one is obtained by fixing the value of \(\mathtt{plock}\) estimated in [27] (i.e., \(\mathtt{plock}=0.472081\)). In line with the data set used in [27], both training sets consist of one timed observation reporting the number of infected individuals for a period of 30 days.
The estimation of the parameters \(\mathtt{beta}\), \(\mathtt{gamma}\) and \(\mathtt{plock}\) is performed on the model depicted in Fig. 4. As in [27], we use an approximated version of the original SIR model (_cf._ Fig. 1) obtained by employing a few modeling tricks: variable pruning, setting upper bounds on the state variable \(\mathtt{i}\), and re-scaling the variable \(\mathtt{r}\) to the interval \([0,\mathtt{nb\_r}-1]\). These modeling tricks have the effect of reducing the state space of the underlying CTMC, speeding up in this way both parameter estimation and the subsequent model analysis.
We perform the estimation in two steps. First, we estimate the values of \(\mathtt{beta}\) and \(\mathtt{gamma}\) on the first training set with \(\mathtt{plock}\) set to 1. Then, we estimate the value of \(\mathtt{plock}\) on the second training set with \(\mathtt{beta}\) and \(\mathtt{gamma}\) set to the values estimated in the first step.
Each step was repeated 10 times by randomly re-sampling the initial values of each unknown parameter in the interval \([0,1]\). Table 2 reports the average estimated values and absolute errors relative to each parameter. The average running time3 of each execution of the algorithm was 89.94 seconds.
Footnote 3: Experiments were performed on a Linux machine with an AMD-Ryzen 9 3900X 12-Core processor and 32 GB of RAM.
Figure 3: Comparison of the performance of the estimation for timed and non-timed observations on the tandem queueing network with different sizes of the queue.
Our results confirm Milazzo's claim that the introduction of upper bounds to the values of state variables produces a small approximation in the model. Notably, we were able to achieve an accurate estimation of all the parameters from training sets consisting of a single partially-observable execution of the original SIR model. As observed in Section 5, this may be due to the fact that each parameter occurs in many transitions.
This case study demonstrates that our estimation procedure can be effectively used to simplify modeling pipelines that involve successive modifications of the model and the re-estimation of its parameter values. As with the model checking problem, our technique also requires the modeler to take the size of the model into account.
## 7 Conclusion and Future Work
We presented novel methods to estimate parameter values of CTMCs expressed as Prism models from timed and non-timed partially-observable executions. We demonstrated, through the use of a case study, that our solution is a concrete aid in applications involving modeling and analysis, especially when the model under study requires successive adaptations which may lead to approximations that require re-estimation of the parameters of the model.
Notably, all the algorithms presented in this paper were devised following simple optimization principles borrowed from the MM optimization framework.
We suggest that similar techniques can be applied to other modeling languages (e.g., Markov automata [11, 12]) and to metric-based approximate minimization [2, 5]. An interesting future direction of research consists in extending our techniques to non-deterministic stochastic models by integrating the active learning strategies presented in [3].
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline
**Parameter** & **Expected Value** & **Estimated Value** & **Absolute Error** \\ \hline \hline beta & 0.122128 & 0.135541 & 0.013413 \\ gamma & 0.127283 & 0.128495 & 0.001212 \\ plock & 0.472081 & 0.437500 & 0.034581 \\ \hline \end{tabular}
\end{table}
Table 2: Parameter estimation on the approximated SIR model.
Figure 4: Approximated SIR model. |
2307.15461 | Defocus Blur Synthesis and Deblurring via Interpolation and
Extrapolation in Latent Space | Though modern microscopes have an autofocusing system to ensure optimal
focus, out-of-focus images can still occur when cells within the medium are not
all in the same focal plane, affecting the image quality for medical diagnosis
and analysis of diseases. We propose a method that can deblur images as well as
synthesize defocus blur. We train autoencoders with implicit and explicit
regularization techniques to enforce linearity relations among the
representations of different blur levels in the latent space. This allows for
the exploration of different blur levels of an object by linearly
interpolating/extrapolating the latent representations of images taken at
different focal planes. Compared to existing works, we use a simple
architecture to synthesize images with flexible blur levels, leveraging the
linear latent space. Our regularized autoencoders can effectively mimic blur
and deblur, increasing data variety as a data augmentation technique and
improving the quality of microscopic images, which would be beneficial for
further processing and analysis. | Ioana Mazilu, Shunxin Wang, Sven Dummer, Raymond Veldhuis, Christoph Brune, Nicola Strisciuglio | 2023-07-28T10:27:28Z | http://arxiv.org/abs/2307.15461v1 | # Defocus Blur Synthesis and Deblurring via Interpolation and Extrapolation in Latent Space
###### Abstract
Though modern microscopes have an autofocusing system to ensure optimal focus, out-of-focus images can still occur when cells within the medium are not all in the same focal plane, affecting the image quality for medical diagnosis and analysis of diseases. We propose a method that can deblur images as well as synthesize defocus blur. We train autoencoders with implicit and explicit regularization techniques to enforce linearity relations among the representations of different blur levels in the latent space. This allows for the exploration of different blur levels of an object by linearly interpolating/extrapolating the latent representations of images taken at different focal planes. Compared to existing works, we use a simple architecture to synthesize images with flexible blur levels, leveraging the linear latent space. Our regularized autoencoders can effectively mimic blur and deblur, increasing data variety as a data augmentation technique and improving the quality of microscopic images, which would be beneficial for further processing and analysis. The code is available at [https://github.com/nis-research/linear-latent-blur](https://github.com/nis-research/linear-latent-blur).
Keywords:Microscope images Deblurring Defocus blur synthesis Regularized autoencoders.
## 1 Introduction
Computer vision models have become increasingly popular in biomedical image processing, particularly with the advancement of deep learning techniques, leading to improved performance for tasks like cell segmentation and disease classification [8, 12, 15]. However, image quality greatly impacts the performance of computer vision models. In the biomedical field, low-quality microscopy images can compromise image analysis and diagnosis.
For instance, high-resolution cell images can be obtained using a confocal microscope. An autofocus component helps find the optimal focal plane for capturing a cell slide [4]. However, this task is often complicated by out-of-focus light, as not all cells are on the same focal plane and have thick structures. Thus, some cell images show less sharp regions due to out-of-focus areas, complicating the automated biomedical analysis [2].
Several deep-learning deblurring solutions have emerged in recent years to tackle this problem. They can be categorized into two groups: blur kernel estimation followed by deblurring [13], and kernel-free approaches [11, 16, 6, 17]. Quan et al. [13] proposed a non-blind deblurring network based on a scale-recurrent attention module. In [16] and [1], the authors used multiscale U-net architectures for deblurring and image super-resolution tasks. These methods rely on local or global residual connections, which are useful for recovering information that may be lost through downsampling, as well as for optimizing the training process [10]. The authors of [5] proposed a defocus map estimation model. The defocus map can be used to compute the pixel-wise blur level for blur enhancement and blur kernel estimation for deblurring. Jiang et al. [3] tackled multi-cause blur and proposed methods to recover sharp images from either motion or defocus blur. Zhang et al. [17] reported state-of-the-art results for deblurring microscopic images using a CycleGAN-based model, which learns a reversible mapping between sharp and out-of-focus images. However, these methods entail high computational costs owing to their complex architectures, and lack the flexibility to remove blur from images with defocus levels different from those seen during training.
In this paper, we propose a generative model that uses an autoencoder for both blur synthesis and deblurring. The unknown relation between latent representations of blur levels obtained with a vanilla autoencoder does not allow traversals of the latent space to generate images with lower or higher blur levels. We thus design training constraints that enforce a certain structure in the latent space, such as a linearity relation. We use a regular autoencoder as the baseline model and apply implicit and explicit regularization to enforce linearity among the image representations of a cell slide captured at different focal planes, such as those shown in Fig. 1. The autoencoders are trained to synthesize defocus blur. Leveraging the enforced linearity, we can synthesize a blurry image by linearly interpolating the latent representations of two images of the same cell slide with different levels of blur. Further, the linear relation among blur levels enables synthesizing a sharper image by extrapolating representations of blurry images from the same cell slide.
Our contributions are: 1) A model with a simple network architecture that serves as a versatile solution for both defocus blur synthesis and deblurring. 2)
Figure 1: A nuclei-labeled cell slide captured at five focal lengths. A z-stack level (ranging from \(z_{0}\) to \(z_{16}\)) indicates the level of blur. We enforce linearity in the latent space among image representations of different blur levels of one slide.
Adaptability to different blur levels enabling the recovery of in-focus images, even when the blur level of the reference images is unknown.
## 2 Proposed method
### Imposing linearity onto latent space
We hypothesize that a linear relation among latent representations of images with different blur levels taken from one cell slide allows us to generate images with flexible levels of blur. As shown in Fig. 2, given a triplet of images \(\{x_{a}\), \(x_{b}\), \(x_{c}\}\) captured at different focal lengths from the same cell slide, with an increasing blur level, we impose that their image representations follow the linear relationship in the latent space as:
\[z_{b}^{\prime}=\alpha\cdot z_{a}+(1-\alpha)\cdot z_{c}, \tag{1}\]
where \(z_{i}=\mathcal{E}(x_{i})\) is the representation of the image \(x_{i}\) computed with an encoder network \(\mathcal{E}\), \(z_{b}^{\prime}\) is the latent representation interpolated from \(z_{a}\) and \(z_{c}\) and corresponds to image \(x_{b}\), and \(\alpha\) is the interpolation parameter to control the level of blur. As \(\alpha\) increases from 0 to 1, the level of blur decreases.
With the enforced linearity, we can synthesize the less blurry image \(\tilde{x}_{a}^{\prime}\), associated with \(x_{a}\), by extrapolating the latent representation \(z_{a}^{\prime}\) from the latent representations \(z_{b}\) and \(z_{c}\) of two images \(x_{b}\) and \(x_{c}\) (\(x_{b}\) has a lower level of blur than that of \(x_{c}\)), as shown below:
\[\tilde{x}_{a}^{\prime}=\mathcal{D}(z_{a}^{\prime})=\mathcal{D}(\frac{1}{\alpha }\cdot z_{b}-\frac{1-\alpha}{\alpha}\cdot z_{c}), \tag{2}\]
where \(\mathcal{D}\) is a decoder network and \(z_{a}^{\prime}\) is the extrapolated representation from \(z_{b}\) and \(z_{c}\). To achieve this, we train an autoencoder to reconstruct \(x_{a}\) and \(x_{c}\) from \(z_{a}\) and \(z_{c}\), and \(x_{b}\) from \(z_{b}^{\prime}\). Extrapolation of latent representations is applied only in the test phase, and the linearity in the latent space directly affects the performance of deblurring. We apply indirect and direct regularization in the latent space to investigate how each affects image quality.
Figure 2: Given a triplet of inputs \(\{x_{a}\), \(x_{b}\), \(x_{c}\}\), we generate their corresponding reconstructions \(\tilde{x}_{a}\), \(\tilde{x}_{b}\), \(\tilde{x}_{c}\) and a synthetic blurry image \(\tilde{x}_{b}^{\prime}\) based on the linearly interpolated representation \(z_{b}^{\prime}\). \(\mathcal{E}\) and \(\mathcal{D}\) are an encoder and a decoder network.
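Given any trained encoder \(\mathcal{E}\) and decoder \(\mathcal{D}\), equations (1) and (2) translate directly into the following sketch; `encoder` and `decoder` are placeholders for the trained networks.

```python
def interpolate_blur(encoder, decoder, x_a, x_c, alpha):
    """Synthesize an intermediate blur level via eq. (1): z_b' = alpha*z_a + (1-alpha)*z_c."""
    z_a, z_c = encoder(x_a), encoder(x_c)
    return decoder(alpha * z_a + (1 - alpha) * z_c)

def extrapolate_sharp(encoder, decoder, x_b, x_c, alpha):
    """Recover a sharper image via eq. (2): z_a' = (1/alpha)*z_b - ((1-alpha)/alpha)*z_c."""
    z_b, z_c = encoder(x_b), encoder(x_c)
    return decoder(z_b / alpha - (1 - alpha) / alpha * z_c)
```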
With indirect regularization, linearity is not directly embedded in the latent space; rather, it is induced by reconstructing the intermediate image \(x_{b}\) from the interpolated representation of \(z_{a}\) and \(z_{c}\), without using \(x_{b}\) as input. The objective function is:
\[\mathcal{L}_{i}=\frac{1}{2}\cdot[\mathcal{L}_{rct}(x_{a},\mathcal{D}(z_{a}))+ \mathcal{L}_{rct}(x_{c},\mathcal{D}(z_{c}))]+\mathcal{L}_{rct}(x_{b},\mathcal{ D}(z_{b}^{\prime})), \tag{3}\]
where the first term is the sum of the \(L_{1}\) reconstruction losses of \(x_{a}\) and \(x_{c}\) using their corresponding learned latent representations, and the second term is the \(L_{1}\) reconstruction loss of the \(x_{b}\) decoding from the interpolated representation \(z_{b}^{\prime}\).
Direct regularization adds a constraint that minimizes directly the \(L_{1}\) distance between the interpolated latent representation \(z_{b}^{\prime}\) and the associated representation \(z_{b}\) of image \(x_{b}\), thus the objective function is:
\[\mathcal{L}_{d}=\mathcal{L}_{i}+||z_{b}-z_{b}^{\prime}||_{1}. \tag{4}\]
The indirect regularization may result in a latent space where interpolated latent representations are decoded into images visually similar to the real data, without forcing the non-interpolated latent codes to be linearly dependent [14]. The direct regularization explicitly ensures linearity in the latent space.
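A possible PyTorch rendering of the two objectives (3) and (4) is sketched below; `encoder` and `decoder` are the networks of Fig. 2, and the triplet is assumed to be ordered by increasing blur level.

```python
import torch.nn.functional as F

def indirect_loss(encoder, decoder, x_a, x_b, x_c, alpha):
    """L_i of eq. (3): reconstruct x_a, x_c from their own codes and x_b from the interpolation."""
    z_a, z_c = encoder(x_a), encoder(x_c)
    z_b_interp = alpha * z_a + (1 - alpha) * z_c
    rec = 0.5 * (F.l1_loss(decoder(z_a), x_a) + F.l1_loss(decoder(z_c), x_c))
    return rec + F.l1_loss(decoder(z_b_interp), x_b)

def direct_loss(encoder, decoder, x_a, x_b, x_c, alpha):
    """L_d of eq. (4): L_i plus an L1 penalty tying z_b to the interpolated code."""
    z_a, z_b, z_c = encoder(x_a), encoder(x_b), encoder(x_c)
    z_b_interp = alpha * z_a + (1 - alpha) * z_c
    rec = 0.5 * (F.l1_loss(decoder(z_a), x_a) + F.l1_loss(decoder(z_c), x_c))
    return rec + F.l1_loss(decoder(z_b_interp), x_b) + F.l1_loss(z_b, z_b_interp)
```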
### Evaluation metrics
The goal of this study is to model a latent space with a linear constraint, such that we can exploit interpolation and extrapolation to reconstruct images with flexible levels of blur. We evaluate the geometric properties of the latent space and image quality for blur synthesis and deblurring.
Linearity in latent space.We quantify the degree of linear dependence among image representations based on two geometric properties. First, given three consecutive latent representations in terms of blur level, we measure their linearity based on the cosine similarity between the distance vectors obtained from each pair of neighbouring representations \(z_{n}\) and \(z_{n+1}\).
We call this the Linear Dependence Score (LDS):
\[\text{LDS}=\frac{1}{N-2}\sum_{n=1}^{N-2}\frac{(z_{n-1}-z_{n})\cdot(z_{n}-z_{n+ 1})}{||z_{n-1}-z_{n}||_{2}\cdot||z_{n}-z_{n+1}||_{2}}, \tag{5}\]
where N is the number of blur levels in the dataset, and \(z_{n}\) is the latent representation of an image at blur level \(n\). LDS ranges from -1 to 1, with higher values indicating a higher degree of compliance with the expected geometric property.
Second, since we traverse the latent space of a cell slide in fixed steps of \(\frac{N}{\alpha}\), we assess whether the distance between neighbouring latent representations is equal between a pair of interpolated and a pair of non-interpolated image representations. To measure this property, we propose a metric called Average Pairwise Distance (APD):
\[\text{APD}=\frac{1}{N-1}\sum_{n=0}^{N-2}\frac{|d(z_{n},z_{n+1})-d(z_{n}^{\prime },z_{n+1}^{\prime})|}{|d(z_{0},z_{N-1})|}, \tag{6}\]
where \(z_{n}\) and \(z_{n+1}\) are latent representations of two consecutive images in terms of blur level. The score is normalized by the distance between the representations of the lowest and highest blur levels. APD ranges from 0 to 1 and a lower value indicates that the latent space approaches the desired structure. Moreover, visual inspection of the latent space is done by mapping the latent representations to a 2D space via PCA.
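Both scores are straightforward to compute from the stacked latent codes of one cell slide, as in the sketch below; `Z` holds the encoder outputs ordered by blur level and `Z_interp` the corresponding interpolated codes.

```python
import numpy as np

def lds(Z):
    """Linear Dependence Score (eq. (5)). Z: array of shape (N, d), one latent code per blur level."""
    diffs = Z[:-1] - Z[1:]                            # (z_{n-1} - z_n) for consecutive levels
    cos = [(diffs[n] @ diffs[n + 1]) / (np.linalg.norm(diffs[n]) * np.linalg.norm(diffs[n + 1]))
           for n in range(len(diffs) - 1)]
    return float(np.mean(cos))

def apd(Z, Z_interp):
    """Average Pairwise Distance (eq. (6)). Z_interp: interpolated codes aligned with Z."""
    total = np.linalg.norm(Z[0] - Z[-1])
    gaps = [abs(np.linalg.norm(Z[n] - Z[n + 1]) - np.linalg.norm(Z_interp[n] - Z_interp[n + 1]))
            for n in range(len(Z) - 1)]
    return float(np.mean(gaps) / total)
```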
Image quality.We evaluate image quality using a commonly used metric, Peak Signal-to-Noise-Ratio (PSNR),
\[\text{PSNR}_{I}^{R}=20\cdot\log_{10}\frac{\max(I)}{\sqrt{\frac{1}{mn}\sum_{i=0 }^{m-1}\sum_{j=0}^{n-1}(I(i,j)-R(i,j))^{2}}}. \tag{7}\]
This measures the similarity between the images \(I\) and \(R\). For instance, \(\text{PSNR}_{grd}^{extr_{d}}\) compares the deblurred image using an extrapolated latent representation to the corresponding ground truth sharp image. \(\text{PSNR}_{grd}^{b}\) compares the reconstructed blurry image with the ground truth blurry image.
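For reference, a minimal implementation of (7) is given below; it assumes both images share the same shape and intensity range.

```python
import numpy as np

def psnr(img, ref):
    """PSNR of eq. (7) between an image I and a reference R."""
    mse = np.mean((img.astype(np.float64) - ref.astype(np.float64)) ** 2)
    return 20.0 * np.log10(img.max() / np.sqrt(mse))
```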
## 3 Experiments and results
### Dataset
We use the BBBC006v1 collection obtained from the Broad Bioimage Benchmark Collection [7], which contains 384 cell slides stained with two markers to label the nuclei and structure of cells respectively. The sets of nuclei and cell structure images are noted as w1 and w2 sets. Each cell slide is captured at 34 focal lengths. In total, there are \(384\times 2\times 34\) images. We only use the images captured above the optimal focal plane (z-stack=16) with even z-stack levels for both training and testing (z-stack\(\leq\)16). We split it into training, validation, and testing sets, in a 7:1:2 ratio. All z-stack levels corresponding to one cell slide are assigned to the same set. We use triplets of one slide captured at different focal lengths as input for the models. For the training phase, we use the triplets: (\(z_{a}\), \(z_{b}\), \(z_{c}\)) where \(2b=a+c\) and \(a\), \(b\) and \(c\) are even z-stack levels in the dataset. We only use even z-stack levels since changes between two consecutive blur levels (an even and an odd z-stack level) do not exhibit significant variation in the data.
### Architecture and training
As the baseline model, we design an autoencoder with a simple architecture that nevertheless achieves adequate image reconstruction. It has five convolutional layers in the encoder and six transposed convolutional layers in the decoder. Each convolutional layer consists of a two-strided convolution with a kernel size of \(3\times 3\), followed by batch normalization and Leaky ReLU activation. In the decoder, the structure is symmetrical, with transposed convolutions replacing convolution operations. The last layer is a convolutional layer with kernel size \(3\times 3\), followed by a Sigmoid activation. The encoder layers have 64, 128, 256, 512, and 1024 filters. The models are trained for 40 epochs, with batch size 40. We use the Adam optimizer with learning rate \(10^{-4}\). We generate 10 crops of size 128\(\times\)128 from each image (4 corner crops, 1 center crop, and their corresponding horizontally-flipped versions). Using the same architecture, the regularized models are trained with the proposed regularizations. We train models separately on the w1 and w2 sets, due to the difference in their data distributions.
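A minimal PyTorch sketch in the spirit of this architecture is shown below; the single-channel input, padding choices and the exact layout of the decoder are assumptions made for illustration and do not claim to reproduce the trained model.

```python
import torch.nn as nn

def conv_block(c_in, c_out):
    # two-strided 3x3 convolution + batch norm + Leaky ReLU
    return nn.Sequential(nn.Conv2d(c_in, c_out, 3, stride=2, padding=1),
                         nn.BatchNorm2d(c_out), nn.LeakyReLU(0.2))

def deconv_block(c_in, c_out):
    # two-strided 3x3 transposed convolution + batch norm + Leaky ReLU
    return nn.Sequential(nn.ConvTranspose2d(c_in, c_out, 3, stride=2, padding=1, output_padding=1),
                         nn.BatchNorm2d(c_out), nn.LeakyReLU(0.2))

class BlurAutoencoder(nn.Module):
    def __init__(self, channels=(64, 128, 256, 512, 1024)):
        super().__init__()
        enc, c_prev = [], 1                     # assumed single-channel microscopy input
        for c in channels:
            enc.append(conv_block(c_prev, c))
            c_prev = c
        self.encoder = nn.Sequential(*enc)
        dec = []
        for c in reversed(channels[:-1]):
            dec.append(deconv_block(c_prev, c))
            c_prev = c
        dec += [nn.ConvTranspose2d(c_prev, 32, 3, stride=2, padding=1, output_padding=1),
                nn.Conv2d(32, 1, 3, padding=1), nn.Sigmoid()]
        self.decoder = nn.Sequential(*dec)

    def forward(self, x):                       # x: (B, 1, 128, 128)
        return self.decoder(self.encoder(x))
```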
### Results
Linearity in the latent space.We show the 2D projections of latent representations of a set of images (from the same cell slide but captured at different focal lengths) in Fig. 3. The direct regularization forces the representations to be more clustered and arranged along a line. With indirect regularization, the distribution in the latent space of the representation of images with an increasing blur level is almost the same as that of the baseline model. We report the results on the linearity of the learned latent representations in Table 1, for the models trained on w1 and w2 sets, respectively. Direct regularization leads to substantial changes in the structure of the latent space. With direct regularization, interpolated or extrapolated latent representations lie closer in the latent space to their associated representations generated by the encoder. This means that images decoded from interpolated representations can be more similar to the images reconstructed from the latent representations of the real images, compared to those obtained with the other two models.
\begin{table}
\begin{tabular}{l|l|c c c|c c c} \hline \multirow{2}{*}{**Experiment**} & \multirow{2}{*}{**MetricModel**} & **Baseline** & **Indirect** & **Direct** & **Baseline** & **Indirect** & **Direct** \\ \cline{3-8} & & **w1 set** & & & & **w2 set** & \\ \hline \multirow{2}{*}{Blur synthesis} & PSNR\({}_{b}^{interp}\uparrow\) & **31.97** & 31.77 & 31.60 & 32.35 & 32.48 & **33.05** \\ \cline{2-8} & PSNR\({}_{pred}^{interp}\uparrow\) & 29.09 & **31.03** & 28.30 & **29.78** & 29.46 & 29.02 \\ \hline \multirow{2}{*}{Deblurring} & PSNR\({}_{d}^{extra}\uparrow\) & **23.89** & 23.02 & 23.52 & **22.47** & 22.22 & 22.03 \\ \cline{2-8} & PSNR\({}_{pred}^{stride}\uparrow\) & 22.55 & **22.60** & 21.98 & 24.00 & 23.49 & **24.46** \\ \hline \end{tabular}
\end{table}
Table 2: Results of the baseline model, and the directly- and indirectly-regularized models, on the w1 and w2 sets. Blur synthesis and deblurring are evaluated. The arrows indicate whether a lower or a higher score is better. The best scores are highlighted.
Figure 4: (a) Synthesized blur for a nuclei-labeled slide using the baseline, indirectly and directly regularized models. Each row contains images with the blur level transitioning from z-stack 0 (left) to z-stack 16 (right), (b) Zoom-in view of the area within the frame in (a) highlights the blending effect by the baseline, (c) Example of image deblurring using 3 models. Synthetic sharp images are obtained through linear extrapolation between representations of two slides with z-stack levels 0 and 2, using different values for \(\alpha\).
Blur synthesis.In Table 2, we report the comparison of the quality of images reconstructed using interpolated representations against reconstructed images using the representations associated with ground truth blurry images (PSNR\({}_{b}^{interp_{b}}\)) and against ground truth blurry images (PSNR\({}_{grd}^{interp_{b}}\)). We show the synthesized blurry images in Fig. 4a. With the baseline model, reconstructions from linear traversals of the latent space between two points result in visually similar images compared with the ground truth blurry images. However, the reconstructed images show a blending effect between the two source images, rather than a reliable estimation of defocus blur effect, as shown in Fig. 4b. The visual quality of the synthetic blur improves with the addition of regularization, which helps to reduce the blending effect.
Deblurring.We report the results of the quality of the deblurred images in Table 2. We show examples of deblurred images of a slide in w1 set by the baseline and regularized models in Fig. 4c. Using two blurry images, we can generate a sharper image. For the w1 set, the indirectly regularized autoencoder outperforms the baseline model when we compare the deblurred images with the reconstructed sharp images (see PSNR\({}_{d}^{extr_{d}}\)). For the w2 set, direct regularization performs the best. We observe that there is a trade-off between the desired geometric property and the image quality when applying direct regularization. With a better regularized latent space, the reconstructed image fidelity decreases slightly, while allowing to reconstruct and generate new images with different levels of blur using linear interpolation and extrapolation of the latent representations, respectively. To account for the clustering induced by the direct regularization, we also generate synthetic sharp images with an adjusted value for \(\alpha\) (\(\alpha=0.05\)). We notice that this set of images shows slightly more sharpness compared to those using \(\alpha=0.125\). This indicates that the levels of blur are indeed encoded along the linear direction in the latent space.
Figure 5: Effect of the blur level in the input images on the synthetic sharp image, when the optimal interpolation parameter \(\alpha\) is known, using the indirectly-regularized model.
With our regularized model, even when the sharp image is generated from two images with high levels of blur, a considerable level of detail is recovered. Fig. 5 shows how the blur level of input images affects the deblurring process. We fix one image at z-stack 0 and vary the other one from z-stack 2 to z-stack 14. These results are in line with those from a similar study [9], where the level of detail recovered in the deblurred images decreases with an increase in the focal plane at which slides are captured.
## 4 Discussion and future work
Our results suggest the feasibility of blur synthesis and deblurring through linear interpolation and extrapolation in the latent space. Imposing linearity onto the latent space enables us to control the level of blur in an image by interpolating or extrapolating representations. With a simple architecture, we achieve a versatile solution for blur synthesis and deblurring, while other works are usually limited to one application.
The linear latent space enables the recovery of in-focus images, even when the blur level of the two reference images is unknown. One can dynamically adjust the value of \(\alpha\) until reaching the optimal point. Besides, given a single blurry image as input, we can generate a second blurry image on top of it with a blur kernel, to obtain a deblurred in-focus image.
From the curvilinear trajectory demonstrated in the 2D projections of the latent representations, we conjecture that there may be two directions in the latent space, one corresponding to blur levels and the other corresponding to image content. We suggest future work on disentangled representation learning, i.e., learning representations in which blur level and image content are disentangled. This may allow for more precise reconstructions of deblurred images.
## 5 Conclusions
In this paper, we investigated the feasibility of models for both defocus blur synthesis and deblurring, based on linear interpolation and extrapolation in latent space. We enforce linearity among the representations of images of the same cell slide with different levels of blur, by indirect and direct regularization in the latent space. Therefore, linearly interpolating or extrapolating the representations of two differently blurred images (from the same cell slide) results in a meaningful representation that maps to an image with another level of blur. Our results show that the regularized models perform well on both blur synthesis and deblurring. The direct regularization results in a more linear latent space compared to a regular autoencoder, enabling a more precise mapping between extrapolated representations and their non-extrapolated versions.
**Acknowledgement** This work was supported by the SEARCH project, UT Theme Call 2020, Faculty of Electrical Engineering, Mathematics and Computer Science, University of Twente. |
2310.08799 | Optical ladder operators in the Glauber-Fock oscillator array | In this study, we investigate the stationary states of the Glauber-Fock
oscillator waveguide array. We begin by transforming the associated Hamiltonian
into the form of a quantum harmonic oscillator Hamiltonian, allowing the
implementation of a supersymmetric (SUSY) approach. By considering the simplest
case for the intertwining operator, the optical ladder operators are
straightforwardly constructed and shown to map eigensolutions into
eigensolutions of the corresponding Hamiltonian operator, in pretty much the
same manner as it is done for the quantum harmonic oscillator case. The ladder
of the corresponding (eigen) supermodes is then easily established. | I. Bocanegra, L. Hernández-Sánchez, I. Ramos-Prieto, F. Soto-Eguibar, H. M. Moya-Cessa | 2023-10-13T01:04:34Z | http://arxiv.org/abs/2310.08799v1 | # Optical ladder operators in the Glauber-Fock oscillator array
###### Abstract
In this study, we investigate the stationary states of the Glauber-Fock oscillator waveguide array. We begin by transforming the associated Hamiltonian into the form of a quantum harmonic oscillator Hamiltonian, allowing the implementation of a supersymmetric (SUSY) approach. By considering the simplest case for the intertwining operator, the optical ladder operators are straightforwardly constructed and shown to map eigensolutions into eigensolutions of the corresponding Hamiltonian operator, in pretty much the same manner as it is done for the quantum harmonic oscillator case. The ladder of the corresponding (eigen) supermodes is then easily established.
## I Introduction
From the point of view of Supersymmetric (SUSY) quantum mechanics [1; 2; 3; 4; 5; 6], if there exists an operator \(B\), _intertwining_ the Hamiltonians \(H\) and \(\tilde{H}\), this is, satisfying
\[BH=\tilde{H}B, \tag{1}\]
then the solutions of the Schrodinger equation associated with \(\tilde{H}\) can be obtained from those corresponding to \(H\) and vice versa: the operator \(B\) is then called the intertwiner. Actually, the intertwining relation (1) is often encountered in connection with the so-called _Darboux transformation_[7; 8; 9; 10; 11; 12] and the _factorization method_[13; 14; 15; 16; 17; 18; 19; 20; 21; 22; 23]. Consequently, supersymmetry, the Darboux transformation and the factorization method are sometimes treated as equivalent [8] (and are actually equivalent under certain conditions).
The relation (1) has been exploited to construct (families of) exactly solvable Hermitian [19; 20; 17] and non-Hermitian Hamiltonians [23; 5; 6; 22; 6], in the time-independent [2; 8; 11; 21] as well as time-dependent cases [12; 4; 13; 14]. In particular, in the stationary regime, if \(B=a\) (\(B=a^{\dagger}\)), with \(a\) (\(a^{\dagger}\)) the annihilation (creation) operator of the harmonic oscillator, and \(H=n\), with \(n=a^{\dagger}a\) the usual number operator, then \(\tilde{H}=n+1\) (\(\tilde{H}=n-1\)) [24]. This is, indeed, the basis for the construction of the solution of the eigenvalue equation of the quantum harmonic oscillator developed early by Schrodinger [13; 14], and made popular by Dirac in his book [15].
Besides, due to the formal equivalence between the Schrodinger equation and the paraxial Helmholtz equation [25] (also between the stationary Schrodinger equation and the Helmholtz equation), optical waveguides are suitable devices to observe, study and test quantum phenomena [26]. Therefore, either isolated waveguides or waveguide arrays (called optical lattices), are susceptible to be transformed by means of Darboux or SUSY transformations [27; 28; 29; 30; 31; 32; 33; 34; 35], in the Hermitian [27; 28; 29; 30; 32] and non-Hermitian [31; 34; 35; 22; 23] regimes. Specifically, an optical lattice associated with a Hamiltonian of the type of the quantum harmonic oscillator can be "intertwined" with itself, giving rise to the ladder of eigenstates (supermodes [36; 37]).
In the present work we study the stationary regime of a very particular (semi-infinite) waveguide array, referred to as the _Glauber-Fock_ (oscillator) _array_[38; 39] (also see [36; 40; 41]), and characterized by a non-uniform distance between adjacent waveguides, as well as a gradient of refractive index increasing with the waveguide site \(k\), \(k=0,1,\dots\) (see Fig. 1). The corresponding Hamiltonian \(H\) can be taken to the form of the quantum harmonic oscillator Hamiltonian, by means of a rather elementary transformation [36]. Then, by considering the simplest case for the intertwiner \(B=a\), the operators mapping eigensolutions into eigensolutions of \(H\) are straightforward to obtain, and the corresponding ladder of supermodes is easily constructed.
With that in mind, the objective of this paper is twofold. On the one hand, the optical ladder operators associated with \(H\) are constructed in a formal way, and interpreted as switchers between the stationary (eigen) supermodes of the Glauber-Fock oscillator array. On the other hand, we set the precedent for further constructions of (Hermitian and non-Hermitian) waveguide arrays that can be obtained by considering more general intertwining operators \(B\) (in the stationary as well as the time-dependent regimes).
The general outline is as follows: in section II the system under study is introduced, as well as the transformation to turn the corresponding Hamiltonian into the form of the quantum harmonic oscillator Hamiltonian. In section III, the generic SUSY approach is presented, for an arbitrary intertwiner operator \(B\). In turn, in section IV, the foundations presented in section III are implemented for the simplest case of the intertwining operator. This results in the optical ladder operators for the stationary eigen supermodes of the Glauber-Fock oscillator array. Finally, in section V the main conclusions are drawn.
## II Glauber-Fock oscillator array
In the Glauber-Fock oscillator array [38; 39], the amplitude of the electric field propagating in the \(k\)-th waveguide, \(k=0,1\dots\), and here denoted \(c_{k}(z)\), is ruled by
\[\mathrm{i}\,\dot{c}_{k}+\omega kc_{k}+g\left(\sqrt{k}c_{k-1}+\sqrt{k+1}c_{k+1} \right)=0, \tag{2}\]
where \(\omega\in\mathbb{R}\) is the propagation constant of the first waveguide, and \(g\in\mathbb{R}\) is the coupling between the first two waveguides; \(g\) is proportional to \(e^{-\Delta x}\), with \(\Delta x\) the distance between the first two sites (see Fig. 1). Equation (2) can be alternatively written as
\[\mathrm{i}\,\partial_{z}\left|\psi(z)\right\rangle=H\left|\psi(z)\right\rangle, \tag{3}\]
where
\[\left|\psi(z)\right\rangle=\sum_{k=0}^{\infty}c_{k}(z)\left|k\right\rangle, \tag{4}\]
with \(c_{j}=\langle j|\psi(z)\rangle\) for \(j=0,1,\dots\). The set \(\left\{\left|k\right\rangle\right\}_{k=0,1,\dots}\) represents the Fock basis, which spans the Hilbert space \(\mathcal{H}\). The Hamiltonian in (3) is given by
\[H=-\omega n-g(a^{\dagger}+a). \tag{5}\]
The state \(\left|\psi(z)\right\rangle\) in (4) contains the information of the total electric field in the array for each \(z\). As the Hamiltonian \(H\) is time-independent, the mathematical solution of (3), is simply
\[\left|\psi(z)\right\rangle=e^{-\mathrm{i}\,Hz}\left|\psi(0)\right\rangle. \tag{6}\]
Here we are particularly interested in the initial condition
\[\left|\psi(0)\right\rangle=\left|\psi_{\ell}\right\rangle,\qquad\ell=0,1,\dots, \tag{7}\]
with \(\left|\psi_{\ell}\right\rangle\) satisfying the eigenvalue equation
\[H\left|\psi_{\ell}\right\rangle=E_{\ell}\left|\psi_{\ell}\right\rangle, \tag{8}\]
and such that the solution (6) is stationary, namely,
\[\left|\psi(z)\right\rangle=e^{-\mathrm{i}\,Hz}\left|\psi_{\ell}\right\rangle =e^{-\mathrm{i}\,E_{\ell}z}\left|\psi_{\ell}\right\rangle. \tag{9}\]
By making the transformations
\[\left|\psi(z)\right\rangle=D^{\dagger}\left(\frac{g}{\omega}\right)\left|w(z) \right\rangle,\qquad\left|\psi_{\ell}\right\rangle=D^{\dagger}\left(\frac{g}{ \omega}\right)\left|w_{\ell}\right\rangle, \tag{10}\]
where \(D(\xi)=\exp(\xi a^{\dagger}-\xi^{*}a)\), \(\xi\in\mathbb{C}\), is the Glauber displacement operator [24], equation (8) and the solution (9) turn, respectively, into
\[\bar{H}\left|w_{\ell}\right\rangle=E_{\ell}\left|w_{\ell}\right\rangle,\qquad \left|w(z)\right\rangle=e^{-\mathrm{i}\,E_{\ell}z}\left|w_{\ell}\right\rangle, \tag{11}\]
where
\[\bar{H}=DHD^{\dagger}=-\omega n+\frac{g^{2}}{\omega}, \tag{12}\]
is diagonal. The shortcut notation \(D=D\left(\frac{g}{\omega}\right)\), \(D^{\dagger}=D^{\dagger}\left(\frac{g}{\omega}\right)\) is used from now on. The Hamiltonian (12) has the basic form of the quantum harmonic oscillator Hamiltonian, therefore it is susceptible to be supersymmetrically transformed. Next, we set the foundations for a generic SUSY transformation.
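The transformation (12) can be verified numerically in a truncated Fock space, for instance with the QuTiP library as sketched below; the truncation dimension and the values of \(\omega\) and \(g\) are arbitrary choices for illustration.

```python
import numpy as np
import qutip as qt

# Numerical check of eq. (12): D H D^dag should equal -omega*n + g^2/omega.
N_dim, omega, g = 60, 1.0, 0.3
a = qt.destroy(N_dim)
n = a.dag() * a
H = -omega * n - g * (a.dag() + a)                  # eq. (5)
D = qt.displace(N_dim, g / omega)
H_bar = D * H * D.dag()                             # eq. (12)
target = -omega * n + (g ** 2 / omega) * qt.qeye(N_dim)
# agreement is good away from the truncation edge of the Fock space
print(np.max(np.abs((H_bar - target).full()[:40, :40])))
```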
## III Generic SUSY transformation
If one considers an arbitrary operator \(B\) intertwining \(\bar{H}\) in (12) with some other Hamiltonian \(\tilde{H}\), this is
\[B\bar{H}=\tilde{H}B, \tag{13}\]
the equalities in (11) are transformed as
\[\tilde{H}\left|\phi_{\ell}\right\rangle=E_{\ell}\left|\phi_{\ell}\right\rangle,\qquad\left|\phi(z)\right\rangle=e^{-\mathrm{i}\,E_{\ell}z}\left|\phi_{\ell} \right\rangle, \tag{14}\]
where we have defined
\[\left|\phi(z)\right\rangle=\kappa B\left|w(z)\right\rangle,\qquad\left|\phi_{ \ell}\right\rangle=\kappa B\left|w_{\ell}\right\rangle, \tag{15}\]
with \(\kappa\in\mathbb{C}\) a normalization constant. Therefore, from the first equality in (12), (13) becomes the intertwining relation between \(H\) and \(\tilde{H}\):
\[AH=\tilde{H}A,\qquad A=BD, \tag{16}\]
and finally, the solution \(\left|\phi_{\ell}\right\rangle\) in the second expression of (15) can be written in terms of the solution \(\left|\psi_{\ell}\right\rangle\) of the eigenvalue equation (8), as
\[\left|\phi_{\ell}\right\rangle=\kappa A\left|\psi_{\ell}\right\rangle. \tag{17}\]
In turn, the stationary evolution is given by the second expression in (14).
Therefore, in order to perform the supersymmetric transformation, the solutions of the eigenvalue equation (8) must be known. The derivation of the eigenvalues and eigenvectors of (8), \(E_{\ell}\) and \(\left|\psi_{\ell}\right\rangle\), respectively, is given in Appendix A.
In what follows, the simplest case for the intertwiner \(B\) is considered, the Hamiltonian \(H\) is "intertwined" with itself, giving rise to the straightforward construction of optical ladder operators connecting solutions of (8), as well as the corresponding ladder of eigen supermodes.
Figure 1: In the Glauber-Fock oscillator array, the waveguides are separated in a non-uniform way, \(\Delta x\) is the distance between the first two waveguides. As the site \(k\) of the waveguide increases, the waveguides become closer and closer, as shown. In addition, as \(k\) grows the refractive index of each waveguide increases as well. Such gradient in refractive index is schematically shown with darker tones of red as \(k\to\infty\).
## IV Construction of the ladder operators
In this section, the case \(B=a\) (\(B=a^{\dagger}\)) is studied, as we are interested in the construction of the ladder of (eigen) supermodes associated with the Hamiltonian (5). Nevertheless, the present discussion is intended to lay the groundwork for more general choices of the intertwiner \(B\), as explained in Section III. From (12), it is easy to prove that
\[a\bar{H}=(\bar{H}-\omega)a,\qquad a^{\dagger}\bar{H}=(\bar{H}+\omega)a^{ \dagger}. \tag{18}\]
From the point of view of SUSY quantum mechanics [compare (13) with both expressions in (18)], this means that \(a\) (\(a^{\dagger}\)) intertwines \(\bar{H}\) with itself (see Ref. [16]). From both expressions in (18), and by using the first equality in (12), we obtain, respectively
\[(D^{\dagger}aD)H=(H-\omega)(D^{\dagger}aD) \tag{19}\]
and
\[(D^{\dagger}a^{\dagger}D)H=(H+\omega)(D^{\dagger}a^{\dagger}D), \tag{20}\]
where \(D^{\dagger}aD=a+\frac{g}{\omega}\) and \(D^{\dagger}a^{\dagger}D=a^{\dagger}+\frac{g}{\omega}\) are the optical ladder operators connecting the stationary (eigen) supermodes of \(H\). By applying the operator \(D^{\dagger}aD\) from the left to equation (8), we obtain
\[H\left[D^{\dagger}aD\left|\psi_{\ell}\right\rangle\right]=(E_{\ell}+\omega) \left[D^{\dagger}aD\left|\psi_{\ell}\right\rangle\right], \tag{21}\]
where
\[(D^{\dagger}aD)\left|\psi_{\ell}\right\rangle=e^{\mathrm{i}\omega z}\sqrt{\ell }\left|\psi_{\ell-1}\right\rangle. \tag{22}\]
Similarly, by acting with \(D^{\dagger}a^{\dagger}D\) on the left of (8), we obtain
\[H\left[D^{\dagger}a^{\dagger}D\left|\psi_{\ell}\right\rangle\right]=(E_{\ell} -\omega)\left[D^{\dagger}a^{\dagger}D\left|\psi_{\ell}\right\rangle\right], \tag{23}\]
where
\[(D^{\dagger}a^{\dagger}D)\left|\psi_{\ell}\right\rangle=e^{-\mathrm{i}\, \omega z}\sqrt{\ell+1}\left|\psi_{\ell+1}\right\rangle. \tag{24}\]
From expressions (22) and (24) it can be seen that the operators \(D^{\dagger}aD\) and \(D^{\dagger}a^{\dagger}D\) are indeed ladder operators, switching between eigensolutions of (8), and therefore preserving the stationary evolution dictated in (9). The ladder of supermodes can be straightforwardly constructed. Similar to the quantum harmonic oscillator case, the \(\ell\)-th eigenstate (supermode) of the ladder can be written in terms of the lowest state \(\left|\psi_{0}\right\rangle\) as
\[\left|\psi_{\ell}\right\rangle=\frac{e^{\mathrm{i}\,\omega\ell z}}{\sqrt{\ell!}}D^{\dagger}(a^{\dagger})^{\ell}D\left|\psi_{0}\right\rangle. \tag{25}\]
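As an illustration, the short sketch below (again with assumed truncation and parameter values) numerically verifies that \(D^{\dagger}aD\) lowers an eigen supermode by one step, cf. Eqs. (21)-(22) evaluated at \(z=0\).

```python
# Check that D^dagger a D acts as a lowering operator on the eigen supermodes (z = 0).
import numpy as np
from scipy.linalg import expm

dim, omega, g = 80, 1.3, 0.29                   # illustrative values
a = np.diag(np.sqrt(np.arange(1, dim)), k=1)
ad = a.conj().T
H = -omega * (ad @ a) - g * (ad + a)
D = expm((g / omega) * (ad - a))

evals, evecs = np.linalg.eigh(H)
order = np.argsort(-evals)                      # E_l = -omega*l + g^2/omega decreases with l
psi = evecs[:, order]                           # psi[:, l] approximates |psi_l> (up to a sign)

lowering = D.conj().T @ a @ D                   # optical annihilation operator
out = lowering @ psi[:, 4]                      # expected: sqrt(4) |psi_3> up to a sign
print(np.isclose(abs(psi[:, 3].conj() @ out), 2.0, atol=1e-6))
```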
Figure 2 shows the effect of the optical annihilation and creation operators (22) and (24), respectively, for some specific values of the parameters. By departing from the normalized eigen supermode \(\left|\psi_{4}\right\rangle\) in Fig. 2 (middle), the (normalized) state \(\left|\psi_{3}\right\rangle\) in Fig. 2 (up) is obtained by applying the operator (22). In turn, again departing from \(\left|\psi_{4}\right\rangle\), the normalized eigen supermode \(\left|\psi_{5}\right\rangle\) in Fig. 2 (down) is reached through the operator (24).
In addition, Fig. 3 (up) shows the normalized distributions \(c_{k}\) corresponding to the three eigenstates (\(\ell=3,4,5\)) shown in Fig. 2. The green, red and blue lines correspond, respectively, to \(\left|\psi_{3}\right\rangle\) [Fig. 2 (up)], \(\left|\psi_{4}\right\rangle\) [Fig. 2 (middle)] and \(\left|\psi_{5}\right\rangle\) [Fig. 2 (down)]. It can be observed that, as \(\ell\) increases, the distribution \(c_{k}\) becomes smaller and broader, in agreement with Fig. 2. In turn, Fig. 3 (down) shows \(c_{k}\) for a slightly greater coupling \(g=0.36\). It is seen that, as the coupling \(g\) grows, the distributions become broader as well. This can be particularly appreciated in the \(\left|\psi_{5}\right\rangle\) supermode (blue) in Fig. 3 (down). In both cases the solid lines correspond to the analytical results while the stars correspond to the numerical solutions.
Figure 2: Effect of the optical ladder operators (22) and (24) on a given initial eigen supermode of the Glauber-Fock oscillator array. The chosen parameters are: \(\omega=1.3\), \(g=0.29\). By departing from the normalized eigen supermode \(\left|\psi_{4}\right\rangle\) (middle), the (also normalized) state \(\left|\psi_{3}\right\rangle\) (up) can be obtained by means of the optical annihilation operator (22), while the normalized state \(\left|\psi_{5}\right\rangle\) (down) is obtained through the optical creation operator (24).
## V Conclusions
The optical ladder operators for the Glauber-Fock oscillator waveguide array were constructed in the stationary regime. They switch between eigen supermodes, as expected. In particular, when the coupling \(g\) is relatively small, the supermodes are well localized and the ladder operators effectively shift the excitation from one waveguide to a neighbouring one, as shown in Figure 2. Therefore, the ladder of eigen supermodes is straightforwardly constructed by departing from the lowest state, in much the same manner as in the quantum harmonic oscillator case.
In addition, as the ladder operators were constructed by departing from a supersymmetric (SUSY) approach, a different choice of the intertwiner might lead to new configurations of waveguide arrays. These might possibly be associated with non-Hermitian or even with time-dependent Hamiltonians. Work in these directions is already in progress.
## Appendix A Solution of the initial eigenvalue equation
By using Eq. (3.1.12) of Ref. [24], from (6) and (12), it is straightforward to obtain
\[\left|\psi(z)\right\rangle=D^{\dagger}\left(\frac{g}{\omega}\right)e^{\mathrm{ i}\,z(\omega n-\frac{g^{2}}{\omega})}D\left(\frac{g}{\omega}\right)\left| \psi(0)\right\rangle. \tag{30}\]
By inspection it is seen that, by choosing \(\left|\psi(0)\right\rangle=D^{\dagger}\left(\frac{g}{\omega}\right)\left| \ell\right\rangle\), with \(\left|\ell\right\rangle\) a Fock state satisfying
\[n\left|\ell\right\rangle=\ell\left|\ell\right\rangle, \tag{31}\]
the stationary solution \(\left|\psi(z)\right\rangle=\left|\psi_{\ell}\right\rangle\), with
\[\left|\psi_{\ell}\right\rangle=e^{\mathrm{i}\,z(\omega\ell-\frac{g^{2}}{\omega})}D^{\dagger}\left(\frac{g}{\omega}\right)\left|\ell\right\rangle, \tag{32}\]
is obtained. In turn, by acting with \(H\) on the state (32), the corresponding spectrum
\[E_{\ell}=-\omega\ell+\frac{g^{2}}{\omega}, \tag{33}\]
is reached.
###### Acknowledgements.
I. Bocanegra acknowledges CONAHCyT (Mexico) for financial support through the postdoctoral fellowship 711878 and projects A1-S-24569 and CF19-304307. He is also grateful to IPN (Mexico) for supplementary economical support through the project SIP20232237. L. Hernandez Sanchez also thanks the Instituto Nacional de Astrofisica, Optica y Electronica (INAOE) and the Consejo Nacional de Humanidades, Ciencias y Tecnologias (CONAHCyT) for the PhD scholarship awarded (No. CVU: 736710).
|
2303.06854 | Robust Contrastive Language-Image Pre-training against Data Poisoning
and Backdoor Attacks | Contrastive vision-language representation learning has achieved
state-of-the-art performance for zero-shot classification, by learning from
millions of image-caption pairs crawled from the internet. However, the massive
data that powers large multimodal models such as CLIP, makes them extremely
vulnerable to various types of targeted data poisoning and backdoor attacks.
Despite this vulnerability, robust contrastive vision-language pre-training
against such attacks has remained unaddressed. In this work, we propose ROCLIP,
the first effective method for robust pre-training multimodal vision-language
models against targeted data poisoning and backdoor attacks. ROCLIP effectively
breaks the association between poisoned image-caption pairs by considering a
relatively large and varying pool of random captions, and matching every image
with the text that is most similar to it in the pool instead of its own
caption, every few epochs.It also leverages image and text augmentations to
further strengthen the defense and improve the performance of the model. Our
extensive experiments show that ROCLIP renders state-of-the-art targeted data
poisoning and backdoor attacks ineffective during pre-training CLIP models. In
particular, ROCLIP decreases the success rate for targeted data poisoning
attacks from 93.75% to 12.5% and that of backdoor attacks down to 0%, while
improving the model's linear probe performance by 10% and maintains a similar
zero shot performance compared to CLIP. By increasing the frequency of
matching, ROCLIP is able to defend strong attacks, which add up to 1% poisoned
examples to the data, and successfully maintain a low attack success rate of
12.5%, while trading off the performance on some tasks. | Wenhan Yang, Jingdong Gao, Baharan Mirzasoleiman | 2023-03-13T04:49:46Z | http://arxiv.org/abs/2303.06854v2 | # Robust Contrastive Language-Image Pretraining against Adversarial Attacks
###### Abstract
Contrastive vision-language representation learning has achieved state-of-the-art performance for zero-shot classification, by learning from millions of image-caption pairs crawled from the internet. However, the massive data that powers large multimodal models such as CLIP, makes them extremely vulnerable to various types of adversarial attacks, including targeted and backdoor data poisoning attacks. Despite this vulnerability, robust contrastive vision-language pretraining against adversarial attacks has remained unaddressed. In this work, we propose RoCLIP, the first effective method for robust pretraining and fine-tuning multimodal vision-language models. RoCLIP effectively breaks the association between poisoned image-caption pairs by considering a pool of random examples, and (1) matching every image with the text that is most similar to its caption in the pool, and (2) matching every caption with the image that is most similar to its image in the pool. Our extensive experiments show that our method renders state-of-the-art targeted data poisoning and backdoor attacks ineffective during pre-training or fine-tuning of CLIP. In particular, RoCLIP decreases the poison and backdoor attack success rates down to 0% during pre-training and 1%-4% during fine-tuning, and effectively improves the model's performance.
Machine Learning, ICML
## 1 Introduction
Recent large-scale vision-language models pre-trained on millions of image-caption pairs crawled from the internet have gained unprecedented success. This is evident from their impressive zero-shot transferability to downstream tasks, where natural language is used to describe visual concepts (Radford et al., 2021; Jia et al., 2021). Contrastive pre-trained vision-language models such as CLIP (Radford et al., 2021) and ALIGN (Jia et al., 2021) are trained using a multimodal contrastive loss which pulls the representations of every image-caption pair together while pushing those of different pairs apart. This alleviates the need for expensive labeling of training examples, and enables scaling up the training data to millions of examples. However, the massive data that powers such large models also makes them extremely vulnerable to various types of adversarial attacks (Carlini and Terzis, 2021; Yang et al., 2022).
In particular, targeted data poisoning attacks on multimodal models add mismatched image-caption pairs to the pre-training data, to change the prediction of particular images at test time. Similarly, backdoor attacks overlay a small patch on a subset of training data to cause the model to misclassify test images with the same patch. Notably, poisoning just 0.0001% of the pre-training examples can lead to the success of a targeted poisoning attack. Similarly, poisoning 0.01% of pre-training examples can make a backdoor attack successful (Carlini and Terzis, 2021). Compared to clean-label data poisoning and backdoor attacks in the supervised settings which require poisoning on average 1% of training data (Turner et al., 2018; Geiping et al., 2021), attacking multimodal contrastive models requires orders of magnitude fewer poisoned examples. Interestingly, the larger the model, the more vulnerable it is to adversarial attacks (Carlini and Terzis, 2021).
Despite this vulnerability, robust pre-training of multimodal vision-language models has remained unaddressed. Recent work of Yang et al. (2022) studied poison identification during fine-tuning CLIP, by using another trusted pre-trained CLIP to remove dissimilar image-caption pairs. This approach, however, assumes the knowledge of the similarity distribution of the clean and poisoned image-caption pairs, to be able to distinguish them based on a threshold. However, the similarity distribution of poisoned pairs is not available in practice, especially when the exact form of the attack is unknown. The similarity distribution of a backdoor attack is different from a poisoning attack, and specific settings like the patch size of the backdoor can also influence the similarity of poisoned pairs, making it impossible to determine the similarity distribution before training. Importantly, this approach is not applicable to pre-training the model on poisoned data, which is much more vulnerable to adversarial attacks than fine-tuning on a smaller dataset.
In this work, we propose the first effective method, namely RoCLIP, for robust training of multimodal vision-language models such as CLIP, against adversarial attacks. Our approach is based on the following key observation: while the
similarity between the image-caption pairs of clean examples increases rapidly during the training, similarity between poisoned image-caption pairs grows slowly. As a result, poisoned images and captions are not close to the groups of similar images and captions in the representation space, during the training. To break the association between poisoned image-caption pairs, our main idea is to keep a relatively large and varying pool of randomly selected image-caption pairs. Then, we (1) match every image with the text that is most similar to its caption in the pool, and (2) match every caption with the image that is most similar to its image in the pool. This effectively prevents the attack by breaking the association between poisoned image-caption pairs. Leveraging image and text augmentations, we can effectively defend the model and even improve its performance significantly.
Our extensive experiments show that our method renders state-of-the-art targeted data poisoning and backdoor attacks ineffective during pre-training or fine-tuning. In addition, our method leads to an increase of linear probe accuracy by up to 14% and zero-shot accuracy by up to 10 %. We note that RoCLIP is the only effective defense method against state-of-the-art attacks that can efficiently scale to pre-training large-scale vision-language models such as CLIP.
## 2 Related Work
**Contrastive Representation Learning.** Contrastive learning was originally proposed for self supervised representation learning from unimodal data. Self-supervised contrastive learning works by maximizing agreement between differently augmented views of the same example and minimizing agreement between differently augmented views of different examples (Chen et al., 2020; Chen and He, 2021; He et al., 2020). Several works improved the performance of contrastive-learning on downstream tasks by imposing additional constraints to remove redundancy between components of the representation vectors and prevent collapse of the representations (Bardes et al., 2021; Zbontar et al., 2021), or using nearest-neighbor as positive pairs in the contrastive loss (Dwibedi et al., 2021; Van Gansbeke et al., 2021).
**Contrastive Language-Image Pretraining.** Multimodal vision-language models like CLIP (Radford et al., 2021) and ALIGN (Jia et al., 2021) are pre-trained on 400M/1B image-text pairs, by maximizing the agreement between representations of matched image-caption pairs and minimizing those of non-matched pairs. A recent line of work aims at improving the data efficiency and quality of CLIP representations, by leveraging image and text augmentations. DeCLIP (Li et al., 2021) improves data-efficiency of CLIP by maximizing the similarity between two augmented image features using SimSiam (Chen and He, 2021), two augmented text features using Masked Language Modeling (MLM) (Devlin et al., 2018), and matching augmented image features with their augmented text pairs and other similar text features. SLIP (Mu et al., 2022) improves the performance by maximizing the agreement between two augmented image features using SimCLR (Chen et al., 2020), and matching the augmented image features with their text pair. CyCLIP (Goel et al., 2022) improves the representations by symmetrization of the similarity between the two mismatched image-text pairs, as well as the similarity between the image-image pair and the text-text pair. Finally, FILIP (Yao et al., 2021) uses transformer-based encoders for both modalities to learn more fine-grained features.
**Adversarial Attacks on CLIP.** Contrastive pretrained language-image models are extremely vulnerable to various types of data poisoning attacks (Carlini and Terzis, 2021). In particular, targeted data poisoning attacks fool the model into misclassifying a particular test example as an adversarial label. Backdoor attacks overlay a small patch on a subset of training data, and cause the model to misclassify test images with the same patch. CLIP has also been shown to be vulnerable to data poisoning attacks during fine-tuning (Yang et al., 2022b). Despite this vulnerability, designing effective defenses has remained unaddressed. Yang et al. (2022b) proposed a pre-processing and a post-processing defense for fine-tuning. The pre-processing requires a clean pre-trained CLIP to remove examples with low cosine similarity between images and their corresponding text representations. This requires knowledge of the similarity distribution between clean and poisoned examples, which is not available, in particular when the exact form of the attack is not known. The post-processing fine-tunes the poisoned model on another clean dataset of the same scale as the fine-tuning data. This is clearly not applicable to pre-training, due to the very high data and computational requirements. In contrast, we propose a highly effective defense method that can be applied during pre-training or fine-tuning CLIP, without the need for pre- or post-processing.
**Defense Strategies in Supervised Setting.** Defense against data poisoning attacks has been extensively studied in the supervised settings. Supervised defenses can be divided into data sanitization and robust training. Data sanitization eliminates anomalies that fall outside a spherical radius in feature space (Steinhardt et al., 2017), activation space (Chen et al., 2019), spectrum of feature covariance matrix (Turner et al., 2018), gradients (Yang et al., 2022a) or based on nearest neighbors (Peri et al., 2019). Such methods do not scale to multimodal pre-training on millions of examples. Robust training relies on strong data augmentation (Borgania et al., 2021), randomized smoothing (Weber et al., 2020), model ensembling (Levine and Feizi, 2020), bounding gradients (Hong et al., 2020), adding noise (Liu et al.), or adversarial training (Madry et al., 2018; Tao et al., 2021). Such methods are not applicable to multimodal models like CLIP,
and are often prohibitively slow, or drastically degrade the performance, even in the supervised setting.
## 3 Preliminary
### Contrastive Language-Image Pre-training (CLIP)
CLIP is trained on millions of image-caption pairs scraped from the web. Formally, we consider a dataset \(\mathcal{D}\subseteq\mathcal{I}\times\mathcal{T}\) consisting of pairs \((\mathbf{x}_{j}^{I},\mathbf{x}_{j}^{T})\) where \(\mathbf{x}_{j}^{I}\in\mathcal{I}\) is a raw image and \(\mathbf{x}_{j}^{T}\in\mathcal{T}\) is a text caption. The CLIP architecture consists of an image encoder \(f_{I}:\mathcal{I}\rightarrow\mathbb{R}^{d}\) that encodes the raw image \(\mathbf{x}_{i}^{I}\) into an embedding vector \(\tilde{\mathbf{z}}_{i}^{I}\), and a text encoder \(f_{T}:\mathcal{T}\rightarrow\mathbb{R}^{d}\) that encodes the raw text \(\mathbf{x}_{i}^{T}\) into an embedding vector \(\tilde{\mathbf{z}}_{i}^{T}\) of the same dimension. The projected image and text embeddings \(\mathbf{z}_{i}^{I},\mathbf{z}_{i}^{T}\) are then obtained by passing the encoded image and text \(\tilde{\mathbf{z}}_{i}^{I},\tilde{\mathbf{z}}_{i}^{T}\) through their corresponding projection heads. The projected representations are normalized to have unit \(\ell_{2}\)-norm. Finally, the InfoNCE loss (Oord et al., 2018) is applied to pull the projected embeddings of every image and its corresponding caption together while pushing apart the projected embeddings of the image from other captions in the same mini-batch. Formally, for a mini-batch of \(N\) image-caption pairs \(\{(\mathbf{x}_{j}^{I},\mathbf{x}_{j}^{T})\}_{j=1}^{N}\), and their projected embeddings \(\{(\mathbf{z}_{j}^{I},\mathbf{z}_{j}^{T})\}_{j=1}^{N}\), the CLIP loss is defined as:
\[\mathcal{L}_{\text{CLIP}}= -\frac{1}{2N}\sum_{j=1}^{N}\log\left[\frac{\exp\left(\left\langle \mathbf{z}_{j}^{I},\mathbf{z}_{j}^{T}\right\rangle/\tau\right)}{\sum_{k=1}^{N}\exp \left(\left\langle\mathbf{z}_{j}^{I},\mathbf{z}_{k}^{T}\right\rangle/\tau\right)}\right]\] \[-\frac{1}{2N}\sum_{k=1}^{N}\log\left[\frac{\exp\left(\left\langle \mathbf{z}_{k}^{I},\mathbf{z}_{k}^{T}\right\rangle/\tau\right)}{\sum_{j=1}^{N}\exp \left(\left\langle\mathbf{z}_{j}^{I},\mathbf{z}_{k}^{T}\right\rangle/\tau\right)} \right], \tag{1}\]
where \(\left\langle.,.\right\rangle\) represents the inner product, and \(\tau\) is a trainable temperature parameter. We evaluate CLIP pre-trained with our method using both zero-shot and linear probe methods, as discussed below.
Zero-shot classification. Pre-trained language-image models such as CLIP enable zero-shot transfer of the model to downstream tasks, i.e., classifying test images by labels not seen at training time. To do so, the downstream labels can be transformed into suitable captions using the provided engineered prompt templates, e.g., "A photo of a {label}". Then, the cosine similarity of the test image to each caption is computed, and the model predicts the label with the highest image-caption similarity.
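For illustration, zero-shot prediction can be sketched as follows; the encoder interface and the prompt template below are assumptions about a generic CLIP-style model rather than the exact implementation used here.

```python
# Sketch of zero-shot classification with engineered prompt templates.
import torch

@torch.no_grad()
def zero_shot_predict(model, image, class_names, template="A photo of a {}"):
    captions = [template.format(c) for c in class_names]
    txt = model.encode_text(captions)                # (C, d), assumed interface
    img = model.encode_image(image.unsqueeze(0))     # (1, d), assumed interface
    txt = txt / txt.norm(dim=-1, keepdim=True)
    img = img / img.norm(dim=-1, keepdim=True)
    sims = (img @ txt.t()).squeeze(0)                # cosine similarity per class
    return class_names[int(sims.argmax())]
```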
Linear probe. For a labeled image dataset, CLIP image representations can also be evaluated by training a linear classifier on the image representations obtained from the pre-trained CLIP image encoder and the corresponding labels.
### Poisoning and Backdoor Attacks
Let \(\mathcal{D}=\{(\mathbf{x}_{i}^{I},\mathbf{x}_{i}^{T})\}_{i=1}^{n}\) be the set of all training examples. Poisoning attacks (Biggio et al., 2012) inject a small subset of poisoned examples \(\mathcal{D}_{p}\) into the original training dataset \(\mathcal{D}\), such that when the model is trained on the poisoned training data \(\{\mathcal{D}\cup\mathcal{D}_{p}\}\), its predictions on particular test examples are changed to the adversarial label \(y_{adv}\). At the same time, the poisoned model performs normally on other test examples. In this work, we consider both targeted poisoning and backdoor attacks, as we discuss next.
Figure 1: Illustration of RoCLIP. (a) Our method keeps a pool of random varying examples during the training. Then, instead of matching the original image and captions, it matches every image to its most similar augmented caption in the pool, and matches every caption to its most similar augmented image in the pool. (b) Effect of using the NN pool during the pre-training. Blue shows the ratio of clean images that are matched to their category in the pool. Orange shows the fraction of poisoned images that are matched to their poisoned caption category in the pool. Our method can effectively break the association between poisoned image-caption pairs, without significant harm to the performance on clean examples.
**Targeted image attacks.** In a targeted attack, the adversary aims to change the prediction of one particular test example \(\mathbf{x}_{t}^{I}\) to the adversarial label \(y_{adv}\). Targeted poisoning attacks can be crafted following Carlini and Terzis (2021), by constructing a caption set \(\mathcal{T}_{adv}\) of potential text descriptions related to the label \(y_{adv}\), and creating poisons by assigning captions in \(\mathcal{T}_{adv}\) to every target \(\mathbf{x}_{t}^{I}\), i.e., \(\mathcal{D}_{p}=\{(\mathbf{x}_{t}^{I},\mathbf{x}_{c}^{T}):\mathbf{x}_{c}^{T}\in\mathcal{T}_{adv}\}\). For constructing the caption set \(\mathcal{T}_{adv}\), one can search the training dataset for all sequences that contain the label string and use these sequences as the caption set. Alternatively, one can use the set of 80 different "prompt-engineered" text descriptions provided by CLIP for classification, using a subset or repeating them as necessary. The number of captions in \(\mathcal{T}_{adv}\) determines the number of generated poisons per target. To evade automated cleaning algorithms (e.g., removing duplicated images), tiny Gaussian noise can be added to the image, or the captions can be modified by substituting or adding words, without degrading the attack success rate. A diverse caption set ensures that the image encoder is poisoned instead of the projection layers.
**Targeted class attacks (fine-tuning).** This attack is similar to the targeted poisoning attack, with the difference that instead of a particular \(\mathbf{x}_{t}^{I}\), the adversary aims to change the prediction of an entire target class \(\mathcal{I}_{t}\) to the adversarial label \(y_{adv}\) during fine-tuning (Yang et al., 2022). Note that labels of the training examples are available and can be replaced in engineered prompts for fine-tuning CLIP. Having an adversarial caption set \(\mathcal{T}_{adv}\) as explained above, the poisons are made by assigning captions in \(\mathcal{T}_{adv}\) to multiple images in the target class \(\mathbf{x}_{i}^{I}\in\mathcal{I}_{t}\), i.e., \(\mathcal{D}_{p}=\{(\mathbf{x}_{i}^{I},\mathbf{x}_{c}^{T}):\mathbf{x}_{c}^{T}\in\mathcal{T}_{adv},\mathbf{x}_{i}^{I}\in\mathcal{I}_{t}\}\).
**Backdoor attacks.** In a backdoor attack, the adversary attaches a small trigger patch to the poisoned images that are paired with adversarial captions \(\mathcal{T}_{adv}\) related to \(y_{adv}\). In doing so, _all_ the test images with the trigger patch will be misclassified as \(y_{adv}\). Similar to the targeted image attack, instead of using a particular \(\mathbf{x}_{t}^{I}\), we use different images \(\mathbf{x}_{i}^{I}\in\mathcal{I}\), and add the trigger patch to them. Specifically, we define \(\mathcal{D}_{p}=\{(\mathbf{x}_{i}^{I}\oplus\text{patch},\mathbf{x}_{c}^{T}):\mathbf{x}_{c}^{T}\in\mathcal{T}_{adv},\mathbf{x}_{i}^{I}\in\mathcal{I}\}\). The caption set can be constructed in a similar manner to targeted attacks using captions found in the training data.
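For illustration, a backdoor poison set \(\mathcal{D}_{p}\) could be assembled roughly as follows; the 16x16 top-left patch placement follows the experimental setup described later, while the function and variable names are hypothetical.

```python
# Sketch of assembling backdoor poisons: patched images paired with adversarial captions.
import random

def make_backdoor_poisons(images, adv_captions, patch, n_poison):
    """images: list of HxWx3 arrays; adv_captions: captions describing y_adv."""
    poisons = []
    for img in random.sample(images, n_poison):
        img = img.copy()
        img[:16, :16] = patch              # overlay the trigger in the top-left corner
        poisons.append((img, random.choice(adv_captions)))
    return poisons
```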
In general, while injecting poisoned examples in curated datasets used for supervised learning might be difficult, such poisons can be easily injected in uncurated datasets used by large multimodal models. This makes such models highly vulnerable to adversarial attacks.
## 4 Robust Training of Multimodal Models
In this section, we first study the effect of data poisoning attacks on multimodal models. Then, we present our method for robust training of such models against data poisoning attacks.
Figure 2: (a) Similarity between image-caption pairs of clean images vs. 5 poisoned examples of a truck image poisoned with deer captions, during pre-training. Poisoned examples have a relatively lower similarity between their image and caption representations compared to clean examples. Our defense effectively breaks the association between the poisoned image-caption pairs and reduce their similarity score to nearly 0 (green line). (b) Poisoned images and captions are matched to images or captions of other categories. But, clean image-caption pairs find pairs of the same categories as their nearest neighbors.
### Effect of Data Poisoning Attacks on CLIP
By minimizing the CLIP contrastive loss in Eq. (1), the model changes such that every image representation \(\mathbf{z}_{j}^{I}\) moves towards its caption representation \(\mathbf{z}_{j}^{T}\) and gets far away from other (dissimilar) caption representations \(\mathbf{z}_{k}^{T}\). This makes corresponding categories of image and text (e.g., the category of “Cat” or “Dog” in the image and text modality) get closer to each other and more distant from other categories during the training. Crucially, as image-caption pairs belonging to a particular category are relatively similar to each other, their gradients have a large alignment with other examples in the same category. Therefore, categories of similar image-caption pairs exert a large cumulative gradient on the model and change the model relatively quickly so that they get close to their pairs in the other modality. The blue line in Fig. 2a confirms that the average cosine similarity between image-caption representations of the clean examples increases rapidly during the initial phase of training.
On the other hand, image-caption pairs of poisoned examples are not similar to the other clean examples in the data. Hence, their gradient does not align well with the gradient of the clean examples. As the number of poisoned examples is small, such examples introduce a much smaller cumulative gradient on the model. Consequently, image-caption pairs of poisoned examples move toward each other at a considerably lower speed. The orange line in Fig. 2a shows the average cosine similarity between 5 targeted poisoned image-caption pairs of a truck image \(\mathbf{x}_{t}^{I}\) poisoned as deer \(\{(\mathbf{x}_{t}^{I},\mathbf{x}_{c}^{T}):\mathbf{x}_{c}^{T}\in\mathcal{T}_{\text{deer}}\}\). We see that the average cosine similarity between poisoned image-caption pairs is smaller than that of clean examples, in particular after a few training epochs. Crucially, this implies that the poisoned images are not very close to the group of similar images, and poisoned captions are not very close to the group of similar captions in the representation space.
### Robust Training with RoCLIP
To prevent an attack from being successful, we aim to break the association between the poisoned image-caption pairs. If this can be done, the poisoned image and caption representations do not get close enough to each other during training and the attack does not succeed. To achieve this, we rely on our key observation in Sec. 4.1: poisoned images and captions are not close to groups of similar images and captions in the representation space. To break the association between poisoned image-caption pairs, our main idea is to keep a relatively large and varying pool of randomly selected image-caption pairs. Then, we (1) match every image with the text that is most similar to its paired caption in the pool, and (2) match every caption with the image that is most similar to its paired image in the pool. In doing so, we match the poisoned images with a different text than the adversarial caption and match the poisoned caption with a different image than the poisoned image. Fig. 1a illustrates our method.
Note that the randomly selected set of examples in the pool changes over time, and the poisoned examples need to be trained on for multiple iterations for the attacks to succeed. Since the number of poisoned examples is small, the probability of more than one poisoned image being in the pool during several iterations is negligible. Therefore, a poisoned caption is not matched to a poisoned image multiple times to be learned. At the same time, as the pool is relatively large, matching the images with a similar caption and matching the captions with similar images do not considerably hurt the performance. Therefore, our method is able to disassociate the poisoned image-caption pairs and effectively break the poisoning attacks, while preserving the superior accuracy of the model. As shown in Fig. 2b, throughout the training, the majority of the clean images and captions are paired with captions and images in the same category. On the other hand, poisoned images and captions are paired with captions and images in other categories. Hence, they cannot poison the model to associate the adversarial text category with the target images. Fig. 2a shows nearest neighbors of image and caption of a clean and a poisoned example during the training.
In addition, to further prevent the poisoned samples from being paired with other poisoned samples that potentially exist in the pool, we use various image and text augmentations. Finding nearest neighbors based on augmented image and text representations further prevents poisoned images and captions from matching with other poisoned captions or images. In particular, we use random image cropping, horizontal flipping, color jittering (Wu et al., 2018), grayscale conversion (Wu et al., 2018), and blurring (Chen et al., 2020) in our image augmentation policies. For the text augmentation, we use the EDA proposed by Wei and Zou (2019), which includes synonym replacement, random swap, and random deletion as its augmentation policies. Each text randomly selects one policy for augmentation. The combination of NN and data augmentation not only defends the model against data poisoning and backdoor attacks but also significantly improves the performance of the model, as shown in Table 4. We formally present our method below.
First, we sample a pool of \(P\) image-caption pair representations \(\mathcal{P}=\{(\mathbf{z}_{i}^{I},\mathbf{z}_{i}^{T})\}_{i=1}^{P}\) as a representative of the distribution of data representations. During training, for every example \((\mathbf{x}_{j}^{I},\mathbf{x}_{j}^{T})\) in the mini-batch, we first augment its image and text with our augmentation policies, and then match its augmented image representation \(\overline{\mathbf{z}}_{j}^{I}\) with the augmented caption representation in the pool that is most similar to its own augmented caption representation \(\overline{\mathbf{z}}_{j}^{T}\), i.e., \(\mathbf{z}_{nn(j)}^{T}=\operatorname*{arg\,min}_{\overline{\mathbf{z}}_{p}^{T}\in\mathcal{P}}\|\overline{\mathbf{z}}_{j}^{T}-\overline{\mathbf{z}}_{p}^{T}\|_{2}\). Effectively, we form the positive image-caption representation pair \((\overline{\mathbf{z}}_{j}^{I},\mathbf{z}_{nn(j)}^{T})\) and use it instead of \((\mathbf{z}_{j}^{I},\mathbf{z}_{j}^{T})\). Similarly, for the caption \(\mathbf{x}_{j}^{T}\), we find the image representation in the pool that is most similar to its own augmented image representation \(\overline{\mathbf{z}}_{j}^{I}\), i.e., \(\mathbf{z}_{nn(j)}^{I}=\operatorname*{arg\,min}_{\overline{\mathbf{z}}_{p}^{I}\in\mathcal{P}}\|\overline{\mathbf{z}}_{j}^{I}-\overline{\mathbf{z}}_{p}^{I}\|_{2}\), and form the positive image-caption representation pair \((\mathbf{z}_{nn(j)}^{I},\overline{\mathbf{z}}_{j}^{T})\) to use instead of \((\mathbf{z}_{j}^{I},\mathbf{z}_{j}^{T})\). Similar to the CLIP loss, we obtain the negative pairs from the mini-batch. That is, for a mini-batch of \(N\) image-caption pairs \(\{(\mathbf{x}_{j}^{I},\mathbf{x}_{j}^{T})\}_{j=1}^{N}\), and their augmented projected embeddings \(\{(\overline{\mathbf{z}}_{j}^{I},\overline{\mathbf{z}}_{j}^{T})\}_{j=1}^{N}\), the loss is defined as:
\[\mathcal{L}_{\text{RoCLIP}}=-\frac{1}{2N}\sum_{j=1}^{N}\log\left[\frac{\exp\left(\left\langle\overline{\mathbf{z}}_{j}^{I},\mathbf{z}_{nn(j)}^{T}\right\rangle/\tau\right)}{\sum_{k=1}^{N}\exp\left(\left\langle\overline{\mathbf{z}}_{j}^{I},\mathbf{z}_{nn(k)}^{T}\right\rangle/\tau\right)}\right]-\frac{1}{2N}\sum_{k=1}^{N}\log\left[\frac{\exp\left(\left\langle\mathbf{z}_{nn(k)}^{I},\overline{\mathbf{z}}_{k}^{T}\right\rangle/\tau\right)}{\sum_{j=1}^{N}\exp\left(\left\langle\mathbf{z}_{nn(j)}^{I},\overline{\mathbf{z}}_{k}^{T}\right\rangle/\tau\right)}\right], \tag{2}\]
For the pool \(\mathcal{P}\), we consider a first-in-first-out queue, which is initialized with \(P\) randomly selected image-caption pairs. We choose \(P\) large enough so that it represents the full image-caption distribution in the representation space. After training on every mini-batch, we update \(\mathcal{P}\) by taking the image and caption representations of the \(N\) examples in the mini-batch and concatenating them at the end of the queue. We discard the oldest \(N\) elements from the queue.
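A compact sketch of this procedure is given below; the tensor shapes, queue handling, and fixed temperature are illustrative assumptions rather than the authors' implementation.

```python
# Sketch of the RoCLIP nearest-neighbour matching and the loss of Eq. (2).
import torch
import torch.nn.functional as F

class FIFOPool:
    def __init__(self, size, dim):
        self.img = F.normalize(torch.randn(size, dim), dim=-1)    # pooled image reps
        self.txt = F.normalize(torch.randn(size, dim), dim=-1)    # pooled caption reps

    def update(self, z_img, z_txt):                               # drop oldest N, append batch
        n = z_img.size(0)
        self.img = torch.cat([self.img[n:], z_img.detach()])
        self.txt = torch.cat([self.txt[n:], z_txt.detach()])

def nearest(queries, bank):
    return bank[torch.cdist(queries, bank).argmin(dim=1)]         # smallest L2 distance

def roclip_loss(z_img_aug, z_txt_aug, pool, tau=0.07):
    txt_nn = nearest(z_txt_aug, pool.txt)    # pool caption closest to each example's caption
    img_nn = nearest(z_img_aug, pool.img)    # pool image closest to each example's image
    targets = torch.arange(z_img_aug.size(0), device=z_img_aug.device)
    loss_i = F.cross_entropy(z_img_aug @ txt_nn.t() / tau, targets)   # first term of Eq. (2)
    loss_t = F.cross_entropy(z_txt_aug @ img_nn.t() / tau, targets)   # second term of Eq. (2)
    return 0.5 * (loss_i + loss_t)
```

After each step the pool is updated with the current mini-batch (dropping the oldest entries), so its contents keep changing, which is what prevents a poisoned caption from repeatedly retrieving its poisoned image.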
## 5 Experiments
In this section, we evaluate the effectiveness of our method in breaking targeted data poison and backdoor attacks while maintaining the model's performance. We evaluate our method for defending against various data poisoning and backdoor attacks, described in Sec. 3.2, during pre-training and fine-tuning CLIP.
### Datasets
We use Conceptual Captions 3M (CC3M) Sharma et al. (2018) as our pre-training dataset and MSCOCO Chen et al. (2015) as our fine-tuning dataset. Due to limited computational resources, for pre-training we randomly sampled 1M image-caption pairs from CC3M as our training dataset. For MSCOCO, we used the 25% split proposed by Samet et al. (2020). We assess our method on 10 more downstream datasets introduced by Kornblith et al. (2019), the detail of which can be found in Table 1.
### Training
We evaluate RoCLIP during pre-training and fine-tuning CLIP. For pre-training, we use an open-source implementation of CLIP as our model, with default ResNet-50 as the image encoder and Transformer as the text encoder. Each experiment is run with a batch size of 512 for 30 epochs. For fine-tuning, we use OpenAI's released ViT-B/32 parameters to initialize our pre-trained model. Each experiment is run with a batch size of 128 for 30 epochs. The hyperparameter settings follow those used by the original CLIP paper (Radford et al., 2021).
### Attack Methods
We consider targeted image attacks, targeted class attacks, and backdoor attacks, discussed in Sec. 3.2.
**Targeted Image Attacks** The goal of the targeted image attacks is to poison the model to classify a particular image \(\mathbf{x}_{t}^{I}\) as the adversarial label \(y_{adv}\) without harming the performance of the model on other images. As shown empirically in Carlini and Terzis (2021), 3 poisoned samples out of 3 million examples are enough to fool the model into misclassifying the target image as the adversarial label. We evaluate RoCLIP against targeted image attacks during pre-training and fine-tuning CLIP.
In our pre-training experiment, we choose a random target image \(\mathbf{x}_{t}\) from the Conceptual Captions validation set, and then choose a random target class from the ImageNet test set to generate a set of \(|\mathcal{T}_{adv}|=15\) adversarial captions. Note that Carlini and Terzis (2021) pre-trained 32 CLIP models and measured the attack success rate as the fraction of poisoned models. This requires 3200 GPU hours. To reduce the computation, we poisoned 32 different random images by generating 15 adversarial captions related to a label selected at random from ImageNet. Then, we report the attack success rate as the number of images that are classified as the adversarial label, in a single pre-training run. In doing so, the attack success will be at least as high as in Carlini and Terzis (2021). In addition, note that attacking our smaller dataset of 1M examples also results in a higher attack success rate compared to that of the 3M used by Carlini and Terzis (2021).
\begin{table}
\begin{tabular}{l c r r} \hline \hline Dataset & Classes & Train Size & Test Size \\ \hline Caltech101 & 102 & 3,060 & 6,085 \\ CIFAR10 & 10 & 50,000 & 10,000 \\ CIFAR100 & 100 & 50,000 & 10,000 \\ DTD & 47 & 3,760 & 1,880 \\ FGVCaircraft & 100 & 6,667 & 3,333 \\ Flowers102 & 102 & 2,040 & 6,149 \\ Food101 & 101 & 75,750 & 25,250 \\ ImageNet1K & 1000 & 1,281,167 & 50,000 \\ OxfordIITPet & 37 & 3,680 & 3,669 \\ StanfordCars & 196 & 8,144 & 8,041 \\ \hline MSCOCO (25\%) & 80 & 124,572 & 3900 \\ Conc. Capt. (1M) & - & 1,000,000 & - \\ \hline \hline \end{tabular}
\end{table}
Table 1: Details of our downstream datasets.
In our fine-tuning experiment, we follow the same procedure as (Yang et al., 2022). We select two random classes from MSCOCO, and poison 100 images from each class by replicating each image 3 times, and using an adversarial label for them. We use two adversarial labels selected at random from MSCOCO. In total, 600 poisoned images are injected to the training set.
**Targeted Class Attacks (fine-tuning)** The goal of the targeted class attacks is to make the model misclassify an entire class of images \(\mathcal{I}_{t}\) by the adversarial label \(y_{adv}\), without harming its performance on other classes. Note that this attack is specific to fine-tuning on a labeled dataset, since image-caption datasets like CC3M do not have class information. We select two random classes from MSCOCO as the adversarial labels and choose 100 images from each class as our target images. Since misclassifying the whole class demands stronger poison, each poisoned image is replicated 5 times as opposed to 3 in the targeted image attack, so there are in total 1000 poisoned images injected into the training set. We measure the attack success rate on the validation set of the poisoned classes.
**Backdoor Attacks** The goal of the backdoor attacks is to make the model misclassify any image with the backdoor patch to the desired class label. We use the public Hidden Trigger Backdoor Attacks (HTBA) patches (Saha et al., 2020), that are square triggers generated by drawing a random 4x4 matrix of colors and resizing it to the desired patch size using bilinear interpolation. We use a resized 16x16 patch and put it consistently on the left top corner of the image.
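As a small illustration of this construction (library choice and random seed are assumptions), such a trigger can be generated as follows.

```python
# Draw a random 4x4 colour matrix and upsample it to a 16x16 trigger patch.
import numpy as np
from PIL import Image

rng = np.random.default_rng(0)
small = (rng.random((4, 4, 3)) * 255).astype(np.uint8)
patch = np.array(Image.fromarray(small).resize((16, 16), Image.BILINEAR))  # 16x16x3
```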
We evaluate the effectiveness of backdoor attacks during pre-training and fine-tuning. As shown empirically in (Carlini and Terzis, 2021), a 0.01% poison ratio is enough to create a backdoor in the CLIP model trained on 3M data. Thus, we use a 0.01% backdoor ratio for both our pre-training and fine-tuning experiments. In our pre-training experiments, we randomly select 100 images from the CC3M dataset and pair them with adversarial captions related to a random target class from ImageNet. To evaluate the effectiveness of the backdoor attacks, we select 100 random images from the ImageNet validation set and patch them in the left top corner. For fine-tuning, we randomly select 15 images from the MSCOCO dataset and label them as another random target class from MSCOCO. For evaluation, we select 100 random images from the ImageNet validation set and patch them in the left top corner to measure the attack success rate.
### RoCLIP Robustly Pre-trains CLIP
First, we evaluate the effectiveness of our method, RoCLIP, against poisoning and backdoor attacks during the pretraining phase. We present our results in Table 2. We define the attack success rate as the fraction of poisoned or backdoored images successfully classified as the desired label. We see that without any defense, 45% of the total poisoned images are classified as the desired target class. Moreover, 52% of the total backdoored images unseen in the training set are classified as the desired target class. On the other hand, RoCLIP is able to fully defend against the attacks and reduce the attack success rate to 0% for both the backdoor and the poisoning attacks. This clearly confirms the effectiveness of RoCLIP in breaking various types of data poisoning and backdoor attacks on CLIP during pre-training.
### RoCLIP Robustly Fine-tunes CLIP
Next, we evaluate how effective RoCLIP is against data poisoning and backdoor attacks during the fine-tuning phase. We present our result in Table 3. Since every example in MSCOCO has a caption and a class label, we use text retrieval as our downstream evaluation task. Following (Yang et al., 2022), we construct our test set by selecting 50 image-caption pairs from each of the 78 classes of the MSCOCO dataset, excluding the two classes that have fewer than 50 instances. To do text retrieval, we calculate the similarity between the representation of test image and the
\begin{table}
\begin{tabular}{c c c} \hline \hline Model & Attack & Success Rate \\ \hline Poisoned CLIP & Target Img & 45.00\% \\ & Backdoor & 52.00\% \\ \hline RoCLIP & Target Img & **0.00\%** \\ & Backdoor & **0.00\%** \\ \hline Clean CLIP & Target Img & 0.00\% \\ & Backdoor & 0.00\% \\ \hline \hline \end{tabular}
\end{table}
Table 2: Defense Evaluation: zero-shot on ImageNet. Our method successfully defends the CLIP model during pre-training, decreasing the attack success rate from as high as 52% to 0%.
\begin{table}
\begin{tabular}{l l c c c} \hline \hline Model & Attack & Hit@1 & Hit@3 & Hit@5 \\ \hline Poisoned CLIP & Target Img & 25.0\% & 66.0\% & 80.0\% \\ & Target Class & 7.0\% & 16.0\% & 17.0\% \\ & Backdoor & 85.0\% & 92.0\% & 93.0\% \\ \hline RoCLIP & Target Img & **4.0\%** & **9.5\%** & **15.0\%** \\ & Target Class & **2.5\%** & **5.5\%** & **9.0\%** \\ & Backdoor & **1.0\%** & **4.0\%** & **4.0\%** \\ \hline Clean CLIP & Target Img & 4.0\% & 9.0\% & 12.5\% \\ & Target Class & 2.0\% & 7.5\% & 14.5\% \\ & Backdoor & 1.0\% & 2.0\% & 3.0\% \\ \hline \hline \end{tabular}
\end{table}
Table 3: Defense Evaluation: text retrieval on MSCOCO. Our method successfully defends the CLIP model during the fine-tuning from various attacks and decreases the hit rate as low as that of the clean model.
the representations of all the captions from the test set. Then, we select the top\(k\) captions with the highest similarity as our retrieved captions.
We use the hit@\(k\) ratio as our retrieval metric, defined as the fraction of poisoned images whose top-\(k\) retrieved captions include captions from the target label. A higher hit@\(k\) ratio indicates a higher fraction of images being successfully poisoned. We consider 3 commonly used values, \(k\in\{1,3,5\}\). As shown in Table 3, all three attacks are highly effective, as indicated by their high hit@\(k\) ratios on CLIP. The Hit@5 ratio of the targeted image attack is 80% (compared to 12.5% on the clean model), the Hit@3 ratio of the targeted class attack is 16% (compared to 7.5% on the clean model), and the Hit@1 ratio of the backdoor attack is 85% (compared to 1% on the clean model). On the other hand, RoCLIP is able to effectively defend against the attacks and provide a similar or even lower hit@\(k\) ratio than that of the clean model for all \(k\in\{1,3,5\}\) and all attacks. This clearly confirms the effectiveness of RoCLIP in breaking various types of data poisoning and backdoor attacks on CLIP during fine-tuning. Note that the Hit@\(k\) ratio is not 0 even for the CLIP model trained on the clean dataset. This is also confirmed by (Yang et al., 2022).
### RoCLIP Does not Hurt the Performance
Last but not least, we evaluate if RoCLIP negatively impacts the model performance. We assess the performance of RoCLIP on a variety of datasets introduced by (Kornblith et al., 2019), the detail of which can be found in Table 1. We evaluate RoCLIP with both zero-shot and linear-probe methods and compare its performance with the standard pretrained CLIP. Both RoCLIP and CLIP are trained on the clean 1M CC from scratch. Each experiment is run with a batch size of 512 for 30 epochs. We present our result in Table 4. As shown, our method does not harm the overall model performance. In contrast, RoCLIP effectively improves the classification performance across all ten datasets on both zero-shot and linear probing, up to 10% on Caltech101 on zero-shot and 14% on DTD.
## 6 Conclusion
We proposed RoCLIP, an effective method for robust training of multimodal vision-language models such as CLIP against data poisoning and backdoor attacks. Our method utilizes nearest-neighbor matching as well as random data augmentation to break the associations between the poisoned image and caption pairs, thus effectively defending the models. Through extensive experiments, we demonstrated that our proposed method drops the attack success rate down to 0% for both the targeted image attack and the backdoor attack, and to 2.5% for the targeted class attack. At the same time, it improves the model's performance by up to 12% compared to the baseline CLIP model.
|
2305.13669 | The Knowledge Alignment Problem: Bridging Human and External Knowledge
for Large Language Models | Large language models often necessitate grounding on external knowledge to
generate faithful and reliable answers. Yet even with the correct groundings in
the reference, they can ignore them and rely on wrong groundings or their
inherent biases to hallucinate when users, being largely unaware of the
specifics of the stored information, pose questions that might not directly
correlate with the retrieved groundings. In this work, we formulate this
knowledge alignment problem and introduce MixAlign, a framework that interacts
with both the human user and the knowledge base to obtain and integrate
clarifications on how the user question relates to the stored information.
MixAlign employs a language model to achieve automatic knowledge alignment and,
if necessary, further enhances this alignment through human user
clarifications. Experimental results highlight the crucial role of knowledge
alignment in boosting model performance and mitigating hallucination, with
improvements noted up to 22.2% and 27.1% respectively. We also demonstrate the
effectiveness of MixAlign in improving knowledge alignment by producing
high-quality, user-centered clarifications. | Shuo Zhang, Liangming Pan, Junzhou Zhao, William Yang Wang | 2023-05-23T04:22:50Z | http://arxiv.org/abs/2305.13669v3 | # Mitigating Language Model Hallucination with Interactive Question-Knowledge Alignment
###### Abstract
Despite the remarkable recent advances in language models, they still struggle with the hallucination problem and can generate misleading and unsupported responses. A common approach to mitigate the hallucination issue is retrieving and incorporating supporting evidence from a knowledge base. However, user questions usually do not align well with the stored knowledge, as they are unaware of the information available before asking questions. This misalignment can limit the language model's ability to locate and utilize the knowledge, potentially forcing it to hallucinate by ignoring or overriding the retrieved evidence. To address this issue, we introduce MixAlign, a framework that interacts with both the user and the knowledge base to obtain and integrate clarifications on how the user question relates to the stored information. MixAlign employs a language model to achieve automatic question-knowledge alignment and, if necessary, further enhances this alignment through human user clarifications. Experimental results demonstrate significant improvements over state-of-the-art methods, showcasing the effectiveness of MixAlign in mitigating language model hallucination.
## 1 Introduction
Despite the remarkable recent advancements in Transformer-based large language models (LLMs), they still struggle with generating biased, misleading, or unsupported content. This phenomenon, colloquially known as hallucination, includes factually incorrect yet plausible-sounding statements, the conflation of facts between similar entities, or errors where just one incorrect token can make the difference between being right and wrong Shuster et al. (2021); Gao et al. (2022); Aksitov et al. (2023).
A common strategy for addressing the hallucination issue is the retrieval-in-the-loop method, often referred to as retrieval-augmented generation (RAG) Guu et al. (2020); Shuster et al. (2021). This approach grounds the generation process with supporting evidence from a trustworthy knowledge base. While RAG indeed often improves the end-task performance, it does not consistently mitigate the hallucination issue and may encounter issues such as generating text that includes extraneous information not present in the retrieved document Dziri et al. (2022), ignoring the document entirely Krishna et al. (2021), or even contradicting the document Longpre et al. (2021). These erroneous behaviors are interpreted as a passive defense mechanism against poor retrievals Gao et al. (2022).
In this work, we argue that the primary cause of the aforementioned error cases in the RAG model is the misalignment between users' questions and the stored knowledge. This misalignment is quite common, as users are typically unaware of the back-end knowledge base when formulating their questions. When the attributes and values in the knowledge evidence are not consistently stated or even omitted in the user's question, the language model may follow spurious correlations and incorporate biased model knowledge (e.g., assume a nurse must be a woman) to override the retrieved evidence and hence hallucinate.
Figure 1: A brief overview of the proposed framework for aligning the user question with the stored knowledge in retrieval-augmented generation (RAG).
To enhance the alignment between questions and knowledge, we propose MixAlign, a framework that interacts with both the user and the knowledge base to acquire clarifications on how the user's question relates to the stored evidence. As illustrated in Fig. 1, MixAlign first refines the user's question automatically (i.e., _model-based question-knowledge alignment_), where the LM is employed to substitute attribute values in the user's question with corresponding values from the knowledge base. This refined question is then utilized to retrieve knowledge evidence. In cases where the evidence remains unclear, an attribute that distinguishes the results is selected for the language model to generate a question, seeking further clarification from the user (i.e., _human-assisted question-knowledge alignment_). The alignment information is incorporated to generate the final answer.
In summary, our major contributions are:
* We highlight the importance of question-knowledge misalignment, a common issue in various real-world scenarios, as a crucial factor leading to language model hallucination.
* We introduce a mixed-initiative framework designed to improve question-knowledge alignment by leveraging the strengths of both language models and human users.
* Comprehensive evaluations demonstrate the superior performance of MixAlign in terms of question-knowledge alignment and its promising ability to reduce hallucination.
## 2 Related Work
**Language Model Hallucination.** The extraordinary capabilities of LMs come with a significant drawback: their potential to generate unsupported text due to their lack of understanding of what is factual and what is not (Ji et al., 2023; Bang et al., 2023; Thoppilan et al., 2022). As a result, there has been a surge of interest in addressing LM hallucination through knowledge-grounded neural language generation (Dziri et al., 2021; Ji et al., 2023). Numerous studies strive to address LM hallucination through a variety of approaches. These include developing automated metrics for post-hoc detection and elimination of hallucination (Gao et al., 2022; Manakul et al., 2023; Azaria and Mitchell, 2023), as well as fine-tuning or reweighting language models to rectify potential causes (Keskar et al., 2019; Daheim et al., 2023). Some of these causes encompass out-of-domain generalization, the presence of noisy training data, and exposure bias that may arise from maximum likelihood estimation (MLE) training (Dziri et al., 2021, 2022; Raunak et al., 2021). Unlike existing methods, we propose an align-in-the-loop framework that reduces hallucination during text generation. This framework can be applied to various off-the-shelf LMs, providing a more robust solution to LM hallucination.
**Clarification Question Generation.** The study of asking clarifying questions spans a wide range of tasks, including information retrieval and open-domain question answering (Rao and Daume III, 2018; Majumder et al., 2021; Kuhn et al., 2022; Pyatkin et al., 2022). The effectiveness of these questions is often determined by information-theoretic measures such as relevance, informativeness, or utility (Rao and Daume III, 2018, 2019; White et al., 2021). Rule-based methods have been proposed for generating clarification questions by filling manually defined templates (Wang and Li, 2021) or applying a set of syntactic transformations on ambiguous questions (Dhole, 2020). In addition to rule-based methods, neural network-based approaches have been proposed to generate more coherent questions by training text generation models (Rao and Daume III, 2018, 2019) or utilizing state-of-the-art pre-trained large language models (Krasheninnikov et al., 2022; Kuhn et al., 2022). Most of the existing works focus on resolving ambiguities within user queries, whereas we seek clarifications on how the user question is related to the stored knowledge.
## 3 Methodology
In this section, we introduce MixAlign, a method dedicated to addressing language model hallucination in retrieval-augmented generation (RAG). MixAlign employs a language model to align user expressions with the stored knowledge and, if necessary, further enhances this alignment through human user clarification.
The overall framework of MixAlign is depicted in Fig. 2. Initially, the user question is automatically refined by aligning user expressions with the stored attributes and values from the knowledge base. This automatic alignment is accomplished by extracting attribute values from the question and finding their corresponding references in the knowledge base. The refined question is then used to query the knowledge base. If multiple candidates are obtained, which can lead to confusion, an attribute that partitions the results is selected for the language model to generate a question, seeking clarification from the user. Finally, the alignment information metadata, including the value reference information and clarifying dialogue, is combined with the initial question and knowledge candidates to generate the final answer.
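A minimal sketch of this control flow, in Python, is given below. The helpers `llm`, `query_kb`, and `ask_user` are hypothetical placeholders for an instruction-following language model, the back-end knowledge base, and the human user; they are not part of any released implementation, and the way the groundings are filtered by the user's reply is a simplifying assumption for illustration only.

```python
def mixalign(question, llm, query_kb, ask_user):
    """Sketch of the MixAlign loop; all three helper callables are placeholders."""
    # 1. Model-based alignment: restate the question in the schema of the KB.
    refined = llm("Rewrite the question using the attribute names and values "
                  "stored in the knowledge base.", question)

    # 2. Retrieve candidate knowledge groundings with the refined question.
    groundings = query_kb(refined)

    # 3. Human-assisted alignment: if several candidates remain, ask the user
    #    to confirm one distinguishing attribute (described in the next subsection).
    dialogue = []
    if len(groundings) > 1:
        clarifying_q = llm("Ask a clarifying question that separates these "
                           "candidates.", groundings)
        reply = ask_user(clarifying_q)
        dialogue.append((clarifying_q, reply))
        groundings = [g for g in groundings if reply.lower() in g.lower()]

    # 4. Answer generation from the question, the groundings, and the
    #    collected alignment metadata.
    return llm("Answer the question using only the evidence provided.",
               (question, groundings, dialogue))
```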
### Model-based Question-Knowledge Alignment
In order to align a user's question with the knowledge base, we utilize a language model to rephrase the question in accordance with the knowledge base's schema. This approach aids in preventing false statistical correlations and enables causal retrieval such as SQL queries.
The process involves two steps:
1. Identify potential attribute names from the database as slots and extract the corresponding values from the user's question.
2. Find values in the database that are co-referenced with the extracted values (if the extracted values are not directly present).
Both steps are implemented by prompting the language model with specific instructions.
A shortcut to executing Step 1 is to transform the user's question into a structured query language (SQL) query, according to the database schema. Taking PostgreSQL as an example, we utilize the following instruction:
You are a PostgreSQL expert. Given an input question, create a syntactically correct PostgreSQL query to run, you must use only the column names you can see in the table information below.
Question: {USER QUESTION}
Table information: A table named {TABLE NAME} with columns of {COLUMN 1}, {COLUMN 2}, ...
We proceed by parsing the generated SQL query to extract the values and verify their references in the database for each value. We utilize the language model to identify these references using the following prompt:
Is {VALUE 1} referring to {VALUE 2}?
The reference values serve as a means to update the extracted values, ultimately forming a valid database query. This query retrieves potential knowledge groundings, which are then converted to a text format \(G=\{g_{1},\dots,g_{n}\}\).
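As an illustration of the parsing step, a small helper such as the one below could pull literal values out of the generated query. The function name and the regular expressions are our own simplification for illustration; the paper does not specify the parser, and a production system would presumably use a proper SQL parser.

```python
import re

def extract_sql_values(sql_query):
    """Illustrative extraction of string and numeric literals from a SQL query."""
    strings = re.findall(r"'([^']*)'", sql_query)              # e.g. 'Joe Smith'
    numbers = re.findall(r"=\s*(\d+(?:\.\d+)?)", sql_query)    # e.g. = 2021
    return strings + numbers

# Each extracted value is then checked against the stored values with the
# co-reference prompt shown above.
print(extract_sql_values(
    "SELECT birth_state FROM players WHERE name = 'Joe Smith' AND year = 2021"))
# -> ['Joe Smith', '2021']
```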
### Human-assisted Question-Knowledge Alignment
At this stage, the language model poses a question to help guide the human user in resolving ambiguities or inconsistencies that may arise.
This stage comprises two steps:
1. Identify the most distinguishable attribute to differentiate knowledge groundings.
2. Generate a clarifying question regarding this attribute.
Instead of requesting the user to provide more context aimlessly, we direct them on how to offer such information by concentrating on a particular aspect.
In the first step, we select the attribute by taking into account two aspects: (1) Distinguishability: We aim to eliminate noisy candidates as much as possible after clarification. (2) Answerability: We avoid asking the user about unfamiliar attributes such as names and ID numbers. Moreover, we do not want to merely rephrase the user's question back to them. To accomplish this, we remove the asked factor and filter out ID attributes that have different values for each candidate. Subsequently, we employ a greedy approach to identify the most distinguishable attribute by selecting the one with the highest number of unique values.
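A minimal sketch of this greedy selection is given below. Representing candidates as dictionaries, the function name, and the ID filter (skipping attributes whose values are unique to every single candidate) are our reading of the description above, not the paper's exact implementation.

```python
def pick_distinguishing_attribute(candidates, already_asked=()):
    """Pick the attribute with the most distinct values among the candidates,
    skipping attributes already asked about and ID-like attributes whose
    value differs for every single candidate (unanswerable for the user)."""
    best_attr, best_count = None, 1
    for attr in candidates[0]:
        if attr in already_asked:
            continue
        values = {str(c.get(attr)) for c in candidates}
        if len(values) == len(candidates):   # behaves like an ID column
            continue
        if len(values) > best_count:
            best_attr, best_count = attr, len(values)
    return best_attr

# Example: three candidate players, two possible teams.
players = [{"name": "A", "team": "Braves"},
           {"name": "B", "team": "Padres"},
           {"name": "C", "team": "Padres"}]
print(pick_distinguishing_attribute(players))   # -> "team"
```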
Given the distinguishing attribute, a clarifying question is generated using the following prompt:
Origin question: {USER QUESTION}
The origin question is unclear. Ask a clarifying question to make the user confirm the value of the slot: {ATTRIBUTE}, which can only take values in this list: {VALUE 1, VALUE 2, ...}.
Clarifying question:
### Answer Generation
The final answer is generated by integrating knowledge groundings and alignment results, which include co-reference information, as well as the clarifying question and the user's response.
### A Causal Look at MixAlign
To uncover the cause-effect relationships in retrieval-augmented generation, we have developed a Structural Causal Model (SCM) Peters et al. (2017). SCM is a directed acyclic graph that represents causal connections within a system.
As shown in Fig. 3(a), the pre-trained knowledge (\(D\)) in LLM introduces confounding factors into the system. For example, the model may assume that a nurse must be a woman, resulting in biased correlations and ultimately causing hallucinations.
As illustrated in Fig. 3(b), the Retrieval-augmented Language Model mitigates biased correlations through the front-door adjustment Pearl (2009), which employs a mediator (\(G\), retrieved knowledge groundings) to block all directed paths from the cause (\(Q\)) to the effect (\(A\)). However, as depicted in Fig. 3(c), the front-door adjustment can easily fail when the groundings are statistically retrieved using the nearest neighbors search based on co-occurrence information. To address the aforementioned issue, MixAlign offers clear explanations on _why the question and knowledge are related_, thereby promoting front-door adjustments and mitigating hallucination.
## 4 Experiments
**Dataset: FuzzyQA.** In a more realistic scenario, users often ask questions that do not directly correspond to the information available in backend
Figure 3: Knowledge grounding effectively mitigates hallucination (through front-door adjustment) only when the knowledge is causally retrieved and can causally induce the answer. That is, the retrieval method itself should be trustworthy enough to not introduce statistical co-occurrence information (i.e., a nurse must be a woman), and the retrieved knowledge must be aligned with the question in order to be utilized for further deducing the answer.
databases, primarily because they are unaware of the database contents beforehand. For example, a user might ask, "In which state was the MLB hit leader born?" while the database holds multiple candidates with diverse attributes, such as the team they belong to. To tackle this challenge, we have developed a dataset, i.e., FuzzyQA, based on HybridDialogue Nakamura et al. (2022) and MuSiQue Trivedi et al. (2022), where we employ ChatGPT to simplify complex questions by eliminating a few attributes and conditions.
**Language Model and Baselines.** Note that our framework is designed to be compatible with any large language model architecture. In our experiments, we employ the GPT-3-based Text-DaVinci-003 model Ouyang et al. (2022) via the OpenAI API for both the baselines and our method. Text-DaVinci-003 is a state-of-the-art InstructGPT model engineered to follow user instructions.
We compare MixAlign with three baselines. Default LM Ouyang et al. (2022), which prepends the question with a question-answering prompt. Retrieval-augmented Generation (RALM) OpenAI (2023), as demonstrated in the official OpenAI demo, which retrieves relevant knowledge groundings and prepends them to the question. CLAM Kuhn et al. (2022), which involves asking clarifying questions directly without explicitly considering the gap between the question and the stored knowledge.
**Metrics.** We follow the evaluation framework known as Attributable to Identified Sources Rashkin et al. (2021), which is based on the Natural Language Inference statement: "According to Question and Grounding, Answer." In our dataset, the answers typically consist of intricate attribute values. We thus calculate the following scores by conducting a straightforward comparison between the attribute values extracted from the question, gold groundings, and answers:
\(\bullet\)_Coverage_, a binary metric that determines whether all the correct gold answer values are included in the generated values.
\(\bullet\)_Hallucination_, a binary indicator that assesses the presence of generated values that do not exist in the question values and the gold grounding values. A minimal sketch of both checks is given after this list.
\(\bullet\)_User Simulator._ Following Kuhn et al. (2022), we implement the user simulator as an "oracle" language model that has access to attribution information about the target answer, such as nationality, team, and other relevant details. This information is incorporated into the prompt to ensure the provision of a reliable and clarifying answer.
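For concreteness, the two per-example checks referred to above can be written as the following sketch. The function name is hypothetical, the value-extraction step is assumed to have been performed upstream, and the set-based comparison is our simplification of the described procedure.

```python
def coverage_and_hallucination(generated, gold_answer, gold_grounding, question):
    """Binary per-example metrics: coverage requires all gold answer values to
    appear among the generated values; hallucination flags any generated value
    absent from both the question values and the gold grounding values."""
    gen = set(generated)
    coverage = int(set(gold_answer) <= gen)
    hallucination = int(bool(gen - set(gold_grounding) - set(question)))
    return coverage, hallucination

# Example: the gold answer is covered, but "1975" is unsupported.
print(coverage_and_hallucination(
    generated=["Ohio", "1975"], gold_answer=["Ohio"],
    gold_grounding=["Ohio", "Columbus"], question=["MLB hit leader"]))
# -> (1, 1)
```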
### Evaluation with Controlled Knowledge Groundings
In this section, we investigate the impact of retrieved knowledge groundings on hallucination, aiming to answer the following questions:
**Q1**: Do state-of-the-art Language Models (LMs) still hallucinate even when provided with accurate knowledge grounding?
**Q2**: How does the presence of redundant irrelevant groundings impact LM hallucination?
**Q3**: How does the alignment between a user's question and the stored knowledge affect LM hallucination? Is this alignment necessarily related to question complexity?
**Results and Analysis.** **R1**: In Figures 4 and 5, we observe that state-of-the-art LMs generally do not hallucinate when provided with precise gold grounding, i.e., Irrelevant Groundings (#) = 0. Upon human verification
Figure 4: Automatic evaluation over coverage and hallucination for a varied number of irrelevant knowledge groundings, given different question-knowledge alignment degrees. The alignment is automatically measured using a slot-filling approach. In this method, we extract attributions from the gold knowledge grounding as slots and determine if these slots can be filled with information obtained from the user question.
Figure 5: Automatic evaluation over coverage and hallucination for a varied number of irrelevant knowledge groundings, given diverse question types.
of 187 sampled examples, the Text-DaVinci-003 model demonstrates an answer coverage rate of \(95.71\%\) and a hallucination rate of \(1.61\%\) (excluding cases where the model states it cannot answer). The hallucination cases include extrinsic hallucination by adding facts (3 examples), in which factual complements account for 2 out of 3 instances (e.g., stating something is a movie), and only one absolute hallucination case.
**R2**: We see that the presence of redundant groundings significantly increases hallucinations, and a higher amount of irrelevant information leads to more frequent hallucinations. The model can easily rely on and utilize this irrelevant information.
**R3**: Referring to Fig. 4, it is evident that improved question-knowledge alignment results in significantly better coverage and reduced hallucination. Moreover, considering Fig. 5, it becomes apparent that complex user questions do not necessarily guarantee a considerable advantage over simple questions, meaning that question complexity does not ensure better alignment.
### Overall Evaluation
Table 1 summarizes the automatic evaluation results, demonstrating that the proposed MixAlign method achieves the best performance.
When compared to Direct LM and RALM, CLAM shows an improvement in coverage, although it may lead to a higher number of hallucination instances.
A comparison between No Grounding and Retrieval-based results clearly indicates the vital role of retrieval in improving the outcomes. Moreover, Causal Retrieval outperforms statistical retrieval, demonstrating the effectiveness of automatic alignment.
A comparison of MixAlign results with Model-based Question-Knowledge Alignment results emphasizes that human-assisted alignment, which addresses the critical gap between the question and stored knowledge, can effectively enhance the outcomes. In contrast, vanilla clarification falls short in achieving the desired improvements.
## 5 Conclusion and Future Work
This work presents MixAlign, a framework designed to mitigate language model hallucination by aligning the user question with the knowledge stored in the database. MixAlign employs a language model that interacts with both the user and the database to acquire and incorporate clarifications about how the knowledge relates to the user question. Experimental results demonstrate the effectiveness of the proposed MixAlign in decreasing language model hallucination and improving the quality of generated responses. Future research could explore its implementation across various databases for broader applications.
|
2306.12027 | Comparative analysis of various web crawler algorithms | This presentation focuses on the importance of web crawling and page ranking
algorithms in dealing with the massive amount of data present on the World Wide
Web. As the web continues to grow exponentially, efficient search and retrieval
methods become crucial. Web crawling is a process that converts unstructured
data into structured data, enabling effective information retrieval.
Additionally, page ranking algorithms play a significant role in assessing the
quality and popularity of web pages. The presentation explores the background
of these algorithms and evaluates five different crawling algorithms: Shark
Search, Priority-Based Queue, Naive Bayes, Breadth-First, and Depth-First. The
goal is to identify the most effective algorithm for crawling web pages. By
understanding these algorithms, we can enhance our ability to navigate the web
and extract valuable information efficiently. | Nithin T K, Chandana S, Barani G, Chavva Dharani, M S Karishma | 2023-06-21T05:27:08Z | http://arxiv.org/abs/2306.12027v1 | # Comparative analysis of various web crawler algorithms
###### Abstract
This paper describes and evaluates five different crawling algorithms that we have implemented within our evaluation framework: Shark Search, Priority-Based Focused Crawler, Naive Bayes, Breadth-First, and Depth-First, and identifies the best among the five.
**Keywords - Shark Search, Priority-Based Focused Crawler, Naive Bayes, Breadth-First, Depth-First**
## I Introduction
The web today is an enormous collection of data that keeps growing day by day, so searching for particular data in this collection is a significant challenge. The goal of web crawlers is to learn what (almost) every webpage on the web is about, so that the information can be retrieved when it is needed. The World Wide Web contains a vast amount of information in unstructured form and provides access to it at any place and any time. Information Retrieval systems play a vital role in dealing with the huge amounts of data present on the World Wide Web in different forms such as text, audio, video, and images. Web crawling is an approach for converting unstructured data to structured data. The next important job is to relate these rapidly growing documents and assign a rank value to them; page ranking is done to assess the quality and popularity of web pages.
## II Background
In this section, we review previous algorithms for scheduling visits to web pages. We started with a large sample of the links data collection, which was used to build a web graph, and ran a crawler simulator to ensure identical conditions during the experiments. We describe and evaluate five different crawling algorithms that we have implemented within our evaluation framework: Shark Search, Priority-Based Queue, Naive Bayes, Breadth-First, and Depth-First, and identify the best among the five.
## III Literature Review
### _Web Crawling through Shark-Search using PageRank_
1) Methodology: The proposed "shark page search" algorithm is an improved version of the original "fish-search" algorithm, which aims to discover more relevant information in the same exploration time. The algorithm uses a "similarity engine" that evaluates the relevance of a document to a given query, instead of binary (relevant/irrelevant) evaluation. This "fuzzy" relevance score is used to create a priority list and is propagated down the descendants chain, giving more importance to the grandchildren of a relevant node over the grandchildren of an irrelevant node. Additionally, the algorithm makes use of meta-information contained in the links, such as the anchor text and close textual context, to refine the calculation of the potential score of the children. The algorithm also implements a decay factor to balance the importance of relevance score and inherited score and a buffer to avoid overloading the system.
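The scoring just described can be summarized by the following sketch, written in the spirit of the classic shark-search formulation; the function name, the decay factor and weight values, and the simplified handling of the anchor context are assumptions for illustration rather than the exact formulas of the reviewed paper.

```python
def potential_score(sim_parent, inherited_parent, sim_anchor, sim_context,
                    decay=0.5, gamma=0.5):
    """Priority of a child URL from its parent's relevance and link metadata.

    sim_parent       similarity of the query to the parent page (0..1)
    inherited_parent score the parent itself inherited from its ancestors
    sim_anchor       similarity of the query to the link's anchor text
    sim_context      similarity of the query to the text surrounding the link
    """
    # Relevant parents pass on their own score; irrelevant parents pass on
    # only a decayed fraction of what they themselves inherited.
    inherited = decay * (sim_parent if sim_parent > 0 else inherited_parent)

    # Meta-information in the link refines the estimate: a relevant anchor
    # dominates, otherwise the surrounding textual context is used.
    neighborhood = sim_anchor if sim_anchor > 0 else sim_context

    # The resulting potential score orders the crawl frontier.
    return gamma * inherited + (1.0 - gamma) * neighborhood
```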
2) Pros: The SSA-based web crawler can improve the efficiency and effectiveness of the crawling process by selecting the most relevant pages. The SSA-based web crawler outperforms traditional web crawling methods in terms of efficiency and effectiveness.
3) Cons: The study does not provide a detailed explanation of how the algorithm handles the problem of duplicate pages or the problem of irrelevant pages that are fetched. The study is limited to a specific domain, and it is not clear how the algorithm would perform in a more general setting.
### _An Improved Shark-Search Algorithm Based on Multi-information_
1) Methodology: A method that utilizes the Shark Search Algorithm (SSA) to optimize the selection of pages to crawl. The proposed SSA-based web crawler aims to improve the efficiency and effectiveness of the crawling process by selecting the most relevant pages based on their content, link structure, and user behavior. The study compares the performance of the SSA-based web crawler with that of traditional web crawling methods such as Breadth-First Search (BFS) and Depth-First Search (DFS) using several evaluation metrics, including the number of pages crawled, the number of unique pages crawled, and the page-fetching time. The results of the study show that the SSA-based web crawler outperforms traditional web crawling methods in terms of efficiency and effectiveness. The SSA-based web crawler was able to crawl more pages and unique pages in less time than the BFS and DFS methods.
2) Pros: The SSA-based web page ranking method outperforms traditional web page ranking methods in terms of effectiveness.
3) Cons: The paper does not mention what happens if some information, such as user behavior, is missing.
### _A Focused Crawler Based on Naive Bayes Classifier_
1) Methodology: This research paper describes a method for building a focused web crawler using a Naive Bayes classifier. A focused web crawler is a specialized type of web crawler designed to collect information on a specific topic
or set of topics. The authors of the paper proposed using a Naive Bayes classifier to build a focused web crawler that can automatically classify web pages into different topics. The classifier is trained on a set of seed pages manually labeled with the topic they belong to. Then, when the crawler visits a new page, it uses the classifier to predict the topic of the page and decides whether or not to follow the links on that page based on the predicted topic. In short, the research paper presents a method for building a focused web crawler that utilizes a Naive Bayes classifier, trained on seed pages, to classify new pages by topic. The proposed method is evaluated and outperforms a general web crawler.
2) Pros: The focused crawler, which utilizes the Naive Bayes classifier, is able to retrieve relevant web pages at a higher rate and the crawler can then prioritize these pages for retrieval with fewer irrelevant pages compared to traditional focused crawlers.
3) Cons: The algorithm is sensitive to irrelevant features, which can affect the accuracy of the classification. The classifier can also be affected by a lack of training data, which can lead to poor performance if the dataset is not sufficiently large.
### _Automated Classification of Web Sites using Naive Bayesian Algorithm_
1) Methodology: This paper presents a method for automatically classifying web pages into different categories using the Naive Bayes algorithm. The authors use this algorithm to classify web pages into different categories based on the text content of the pages. They use a dataset of manually labeled web pages as the training data for their model. After training the model, the authors use it to classify a set of new web pages and report the accuracy of their method. In short, the paper presents an automated way to classify a web page into certain categories based on its text content by utilizing the Naive Bayesian algorithm; the method is evaluated using a dataset of manually labeled web pages and its accuracy is reported.
2) Pros: The Naive Bayesian algorithm is simple and easy to implement, making it suitable for automated web site classification applications. The algorithm is able to classify web pages with a high degree of accuracy, even in different languages. It is also able to classify web pages based on images and videos.
3) Cons: The Naive Bayesian method relies on the idea that individual features within the information are unrelated, which isn't always true when working with language data. This approach is sensitive to extraneous features, which can negatively impact the precision of the categorization. Additionally, the classifier's performance may suffer if there is not enough training data available, particularly if the data set is small.
### _A Novel Approach to Priority based Focused Crawler_
1) Methodology: This research paper presents a new approach for focused web crawling that prioritizes certain pages over others based on certain criteria. The authors propose a system that uses a priority queue to prioritize pages for crawling, with pages that are more likely to be relevant to the user's query placed at the front of the queue. They also propose a method for updating the priorities of pages in the queue as the crawl progresses, which allows for a more efficient use of resources. The system was evaluated using a dataset of web pages and the results showed that the proposed approach was able to find relevant pages more quickly and with fewer resources than traditional focused crawling methods.
2) Pros: The proposed technique is said to have minimal complexity and to be fast; it also avoids duplicate or mirrored links and saves a significant amount of bandwidth. Additionally, web pages are stored using checksums, which reduces storage space and complexity during the Visited URL/Content Matching test, as compared to using the text form of links and web documents.
3) Cons: This crawler doesn't consider the context of keywords, leading to multiple records in the database, and code optimization should be done to improve the performance of the crawler.
### _Web Crawler Using Priority Queue_
1) Methodology: The paper presents a new approach for web crawling that uses a priority queue to prioritize pages for crawling. The authors propose a system that uses a combination of a breadth-first search and a priority queue to prioritize pages for crawling based on certain criteria such as page rank, frequency of update, and the relevance of the page to the user's query. The system also includes a mechanism for updating the priorities of pages in the queue as the crawl progresses. The performance of the proposed approach is evaluated using metrics such as execution time, CPU utilization, and the number of pages visited. The results of the evaluation show that the proposed approach is able to find relevant pages more quickly and with fewer resources than traditional web crawling methods.
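A minimal sketch of such a priority-queue crawl loop is shown below, using Python's heapq. The callables `relevance` and `fetch_links` are placeholders (scoring and page download/parsing are outside the sketch), and negating the score turns the min-heap into the required max-priority queue.

```python
import heapq

def crawl_by_priority(seed_urls, relevance, fetch_links, max_pages=1000):
    """Always expand the highest-priority URL first."""
    frontier = [(-relevance(u), u) for u in seed_urls]
    heapq.heapify(frontier)
    visited = set()
    while frontier and len(visited) < max_pages:
        _neg_score, url = heapq.heappop(frontier)   # highest relevance first
        if url in visited:
            continue
        visited.add(url)
        for link in fetch_links(url):               # download page, extract links
            if link not in visited:
                heapq.heappush(frontier, (-relevance(link), link))
    return visited
```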
2) Pros: This priority-based focused crawler keeps all URLs to be visited in a priority queue along with their relevance scores. When a URL is removed from the priority queue, the maximum-score URL is returned. Thus, the highest-priority URL is returned for crawling every time.
3) Cons: The main problem with this crawling strategy is that it is more time consuming. In future work, the crawling time could be reduced by implementing the algorithm in parallel.
### _Comparative Analysis of Web PageRank Algorithm using DFS and BFS Crawling_
1) Methodology: This paper describes an application of the breadth-first crawler algorithm to find the page rank of web pages and obtain crawling results for both breadth-first and depth-first search crawlers. The authors' results clearly show that the breadth-first crawling approach gives a good corpus and there is always a possibility of getting the required page.
2) Pros: Time efficiency, simplicity, flexibility in visiting pages, and there is always a possibility of getting the required page.
3) Cons: The paper does not discuss higher-level crawling for images and video, which is important for minimizing fraudulent acts. The technique needs more space to store all traversed pages at every node level. Exhaustive search leads to the state-explosion problem (as the number of state variables in the system increases, the size of the system state space grows exponentially).
### _Implementation of the BFS Algorithm and Web Scraping Techniques for Online Shop Detection in Indonesia_
1) Methodology: The online store detection application that was built can detect online stores properly, based on shipping parameters, store ratings, and response rates. The Breadth-First Search algorithm is a simple algorithm that can be used for retrieving store data and product data on Shopee commerce in Indonesia with the help of web scraping. Data is taken in the form of shopid and itemid, which are used as nodes. A queue tree is formed in which each node in the first layer holds the shopid and each node in the second layer holds the itemid. In this study, the authors searched 100 online stores to detect whether each store's status was genuine or fake using the Breadth-First Search algorithm. Traversing from the first node to the 200th node by implementing the Breadth-First Search algorithm takes about 161.6 seconds. Based on delivery, 98 genuine online shops and 2 dropship shops were detected.
2) Pros: Time efficiency, simplicity, flexibility in visiting pages, and there is always a possibility of getting the required page.
3) Cons: This application has a weakness in that it is very dependent on internet connection speed, which affects the number of online shops detected as well as the visiting time of each node. The detection results do not necessarily categorize an online shop as genuine or fake as a whole; the shop is assessed per parameter, so it may be categorized as genuine on one parameter and as fake on another.
### _Survey of Web Crawling Algorithms_
1) Methodology: The depth-first search technique, which starts at the root URL and navigates in depth via the child URLs, is a useful search method. If there are one or more children, we first move to the leftmost child and continue down until there are no children left. Backtracking is then used to reach the next unvisited nodes, and the process is repeated similarly. The authors ensure that every edge, i.e., every URL, is visited once by using this approach. While it is quite effective for solving search-related problems, when the number of children is huge this approach can enter an unending cycle. Its performance can also be improved by modifying the sitemap of a web site: in the sitemap protocol every URL has a static priority, which can be changed to a dynamic priority.
2) Pros: Well suited for problems in which the crawl starts at the root URL and traverses in depth through the child URLs.
3) Cons: When the branches are large, this algorithm might end up in an infinite loop.
### _Comparative Analysis of Web PageRank Algorithm using DFS and BFS Crawling_
1) Methodology: Our method for depth-first crawling is comparable to a depth-first search of a tree or graph. Starting with the seed page, we crawl farther and farther until we have covered all the pages along that path, and then we turn around and explore the other branches of the graph. Accordingly, as we crawl web pages, we examine the first link on each page in the series of pages until we reach the last one. Only after that do we begin to examine the second link on the first page and the pages that follow. Our strategy for depth-first crawling is to provide the crawler, in advance, with a limit on the total number of web pages to be crawled. Once those pages have been scanned, our web crawler stops scanning new web pages and we are left with a limited corpus.
2) Pros: Well suited for problems in which the crawl starts at the root URL and traverses in depth through the child URLs.
3) Cons: In this approach, we cannot predict the order of crawling, which also affects the ranking of web pages and search result quality. With depth-first crawling we cannot ensure a good corpus, and in the worst case one of our seed pages may not get crawled.
### _Proposed Solution_
We build models for five algorithms: Shark Search, Naive Bayes, Priority-Based Focused Crawler, BFS, and DFS. All content of a website is extracted using the Beautiful Soup tool, and this data is stored in the form of HTML. A common dataset is chosen as a Wikipedia link under some topic (/wiki/). A queue and a list are used to store fetched URLs. By removing a URL from the queue and adding new URLs to the queue, the loop continues; this is applied to all five algorithm models, and the top 1000 links relevant to the base URL are obtained and printed as output. The outputs are then assessed using the F1 measure and accuracy. The comparison between these algorithms is done using:
1. Number of pages visited in 1 hour
2. Time taken to retrieve 1000 pages
3. From the first 1000 pages retrieved, how many are relevant
4. Memory taken by each algorithm to visit 1000 pages.
Outputs and results will be analysed, and a conclusion can be drawn based on the performance of these five web crawlers, stating the best crawler among the five for the dataset.
### Implementation:
#### 1. Breadth-First Search
import requests
from bs4 import BeautifulSoup
from collections import deque
from urllib.parse import urljoin
from sklearn.metrics import f1_score

# Set the seed URL and maximum depth to crawl
seed_url = "https://en.wikipedia.org/wiki/Main_Page"
max_depth = 3

# Define the queue of URLs to visit and the list of URLs visited
urls_to_visit = deque([(seed_url, 0)])
urls_visited = []
# Define lists to hold the relevant and irrelevant URLs
relevant_urls = []
irrelevant_urls = []

def some_criteria(url):
    # Placeholder relevance check (left undefined in the original code):
    # here a page counts as relevant if it is a Wikipedia article link.
    return '/wiki/' in url

while urls_to_visit:
    url, depth = urls_to_visit.popleft()
    if url not in urls_visited and depth <= max_depth:
        try:
            response = requests.get(url)
        except requests.RequestException:
            continue
        if response.status_code == 200:
            soup = BeautifulSoup(response.content, "html.parser")
            for link in soup.find_all('a'):
                href = link.get('href')
                if href:
                    href = href.strip()
                    if href.startswith('http'):
                        urls_to_visit.append((href, depth + 1))
                    elif href.startswith('/'):
                        urls_to_visit.append((urljoin(url, href), depth + 1))
            urls_visited.append(url)
            # Classify the URL as relevant or irrelevant based on some criteria
            if some_criteria(url):
                relevant_urls.append(url)
            else:
                irrelevant_urls.append(url)

# Calculate F1 score and accuracy: every visited page is predicted relevant,
# and some_criteria() supplies the ground-truth label.
y_true = [1] * len(relevant_urls) + [0] * len(irrelevant_urls)
y_pred = [1] * len(urls_visited)
f1 = f1_score(y_true, y_pred)
accuracy = len(relevant_urls) / len(urls_visited)
print(f"F1 score: {f1:.2f}")
print(f"Accuracy: {accuracy:.2f}")
#### 2. Naive Bayes
import requests
from bs4 import BeautifulSoup
import re
from sklearn.naive_bayes import MultinomialNB
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics import f1_score, accuracy_score

# Seed URL for the web crawler
seed_url = "https://en.wikipedia.org/wiki/Main_Page"
# Maximum number of pages to crawl
max_pages = 30
# Initialize a list to store the content of the crawled pages
corpus = []
# Initialize a list to store the labels of the crawled pages
labels = []
# Initialize a regular expression to match non-alphabetic characters
non_alpha = re.compile(r'[^a-zA-Z]+')
# Initialize a vectorizer to convert the text into a bag of words
vectorizer = CountVectorizer(stop_words='english')
# Initialize a classifier to predict the labels of the pages
classifier = MultinomialNB()
# Initialize lists to store ground-truth and predicted labels
y_true = []
y_pred = []
# Add the seed URL to the queue and track the URLs already crawled
queue = [seed_url]
visited = []

# Loop until the queue is empty or the maximum number of pages is reached
while queue and len(corpus) < max_pages:
    # Pop the next URL from the queue
    url = queue.pop(0)
    # Check if the URL has already been crawled
    if url in visited:
        continue
    visited.append(url)
    # Fetch the HTML content of the page
    try:
        response = requests.get(url)
        html = response.content
    except requests.RequestException:
        continue
    # Parse the HTML content using BeautifulSoup
    soup = BeautifulSoup(html, 'html.parser')
    # Extract the text content of the page
    text = soup.get_text()
    # Remove non-alphabetic characters from the text
    text = non_alpha.sub(' ', text)
    # Add the text content to the corpus
    corpus.append(text)
    # Extract the label of the page
    label = 1 if 'wiki' in url.lower() else 0
    # Add the label to the labels list
    labels.append(label)
    # Add the links from the page to the queue
    links = soup.find_all('a')
    for link in links:
        href = link.get('href')
        if href is None:
            continue
        # Check if the link is an internal link to a Wikipedia page
        if href.startswith('/wiki/') and ':' not in href:
            # Construct the full URL of the linked page
            full_url = 'https://en.wikipedia.org' + href
            # Add the linked page to the queue
            if full_url not in queue and full_url not in visited:
                queue.append(full_url)
            # Check if the linked page is a disambiguation page
            if 'disambiguation' in full_url.lower():
                y_true.append(1)
            else:
                y_true.append(0)
            # Check if the linked page is a Wikipedia article
            if 'wiki' in full_url.lower():
                y_pred.append(1)
            else:
                y_pred.append(0)

# Convert the corpus into a bag-of-words matrix
X = vectorizer.fit_transform(corpus)
# Fit the classifier to the data
classifier.fit(X, labels)
# Compute the predicted labels for the crawled pages
y_pred_nb = classifier.predict(X)
# Compute the F1 measure and accuracy score
f1 = f1_score(y_true, y_pred)
accuracy = accuracy_score(y_true, y_pred)
print('F1 measure:', f1)
print('Accuracy:', accuracy)
#### 3. Depth-First Search
import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin
from sklearn.metrics import f1_score

# Set the seed URL and maximum depth to crawl
seed_url = "https://example.com"
max_depth = 3

# Define the stack of URLs to visit (depth-first order) and the URLs visited
urls_to_visit = [(seed_url, 0)]
urls_visited = []
# Define lists to hold the relevant and irrelevant URLs
relevant_urls = []
irrelevant_urls = []

def some_criteria(url):
    # Placeholder relevance check (left undefined in the original code).
    return 'example' in url

while urls_to_visit:
    url, depth = urls_to_visit.pop()   # pop from the end of the list: depth-first
    if url not in urls_visited and depth <= max_depth:
        try:
            response = requests.get(url)
        except requests.RequestException:
            continue
        if response.status_code == 200:
            soup = BeautifulSoup(response.content, 'html.parser')
            for link in soup.find_all('a'):
                href = link.get('href')
                if href:
                    href = href.strip()
                    if href.startswith('http'):
                        urls_to_visit.append((href, depth + 1))
                    elif href.startswith('/'):
                        urls_to_visit.append((urljoin(url, href), depth + 1))
            urls_visited.append(url)
            # Classify the URL as relevant or irrelevant based on some criteria
            if some_criteria(url):
                relevant_urls.append(url)
            else:
                irrelevant_urls.append(url)

# Calculate F1 score and accuracy: every visited page is predicted relevant,
# and some_criteria() supplies the ground-truth label.
y_true = [1] * len(relevant_urls) + [0] * len(irrelevant_urls)
y_pred = [1] * len(urls_visited)
f1 = f1_score(y_true, y_pred)
accuracy = len(relevant_urls) / len(urls_visited)
print(f"F1 score: {f1:.2f}")
print(f"Accuracy: {accuracy:.2f}")
|
2310.20015 | Active-parameter polydispersity in the 2d ABP Yukawa model | In both experiments and simulations the most commonly studied kind of
parameter polydispersity is that of varying particles size. This paper
investigates by simulations the effects of introducing polydispersity in other
parameters for two-dimensional Active Brownian Particles with Yukawa pair
interactions. Polydispersity is studied separately in the translational and
rotational diffusion coefficients, as well as in the swim velocity $v_0$.
Uniform and binary parameter distributions are considered in both the
homogeneous and the motility-induced phase-separation (MIPS) phases. We find
only minute changes in structure and dynamics upon the introduction of
parameter polydispersity, even for situations involving 50% polydispersity. The
reason for this is not clear. An exception is the case of $v_0$ polydispersity
for which the average radial distribution function with changing polydispersity
shows significant variations in the MIPS phase. Even in this case, however, the
dynamics is only modestly affected. As a possible application of our findings,
we suggest that a temporary introduction of polydispersity into a
single-component active-matter model characterized by a very long equilibration
time, i.e., a glass-forming active system, may be used to equilibrate the
system efficiently by particle swaps. | Shibu Saw, Lorenzo Costigliola, Jeppe C. Dyre | 2023-10-30T21:08:28Z | http://arxiv.org/abs/2310.20015v3 | # Active-parameter polydispersity in the 2d ABP Yukawa model
###### Abstract
In both experiments and simulations the most commonly studied kind of parameter polydispersity is that of varying particles size. This paper investigates by simulations the effects of introducing polydispersity in other parameters for two-dimensional Active Brownian Particles with Yukawa pair interactions. Polydispersity is studied separately in the translational and rotational diffusion coefficients, as well as in the swim velocity \(v_{0}\). Uniform and binary parameter distributions are considered in both the homogeneous and the motility-induced phase-separation (MIPS) phases. We find only minute changes in structure and dynamics upon the introduction of parameter polydispersity, even for situations involving 50% polydispersity. The reason for this is not clear. An exception is the case of \(v_{0}\) polydispersity for which the average radial distribution function with changing polydispersity shows significant variations in the MIPS phase. Even in this case, however, the dynamics is only modestly affected. As a possible application of our findings, we suggest that a temporary introduction of polydispersity into a single-component active-matter model characterized by a very long equilibration time, i.e., a glass-forming active system, may be used to equilibrate the system efficiently by particle swaps.
## I Introduction
Active matter includes fluids of self-propelled particles like bacteria, birds, or insect flocks [1; 2; 3; 4; 5; 6; 7]. An example of the intriguing features of active matter is motility-induced phase separation (MIPS), the fact that a purely repulsive system may phase separate into high- and low-density phases [8; 9; 10; 11; 4; 12].
There is currently a considerable interest in passive polydisperse systems, in particular deriving from the use of polydispersity for SWAP-equilibrating models of supercooled liquids [13]. An obvious question that arises is: how different are the dynamics of the different particles [14; 15; 16]? Polydispersity is also relevant for active matter models because for a biological system one cannot expect all constituents to be identical [17; 18; 19; 20]. Models with motility polydispersity are relevant for both biological and colloidal active systems; thus Castro _et al._ recently showed that the MIPS phase gets suppressed with the introduction of a spread of swim (self-propelled) velocities in the Active Brownian Particles (ABP) model [21].
This paper presents a systematic study of the effects of polydispersity in other parameters than size [22; 23] of the ABP model in two dimensions. The particles interact via the Yukawa (screened Coulomb) pair potential [24; 25], and polydispersity is introduced by varying the three activity parameters controlling the motion of the individual particles. We find a surprisingly small effect of even quite high polydispersity, up to 50%, when parameters vary such that their average is kept constant. This applies to both continuous and binary polydispersity and is in sharp contrast to the large effects of size polydispersity [22; 23].
## II The 2d ABP Yukawa system
The Yukawa pair potential [24; 26] is defined [27] by
\[v(r)\,=\,\frac{Q^{2}\sigma}{r}\,\exp\left(-\frac{r}{\lambda\sigma}\right)\,. \tag{1}\]
Here \(\sigma\) is a length parameter, \(\lambda\) is dimensionless, and the "charge" \(Q\) has dimension square root of energy. Throughout the paper we use the fixed values \(\lambda=0.16\) and \(Q=50\), while \(\sigma\equiv 1\) defines the unit of length and thus the unit of particle density.
If \(\mathbf{r}_{i}\) is the position vector of particle \(i\), the ABP equations of motion in two dimensions are [28]
\[\dot{\bf r}_{i}\,=\,\mu{\bf F}_{i}+{\mathbf{\xi}}_{i}(t)+v_{0}\,{\bf n}_{i}(t)\,. \tag{2}\]
Here, \(\mu\) is the mobility (velocity over force), \({\bf F}_{i}({\bf R})=-\nabla_{i}U({\bf R})\) is the force on particle \(i\) in which \({\bf R}=({\bf r}_{1},....,{\bf r}_{N})\) is the configuration vector and \(U({\bf R})=\sum_{i<j}v(r_{ij})\) (sum over all particle pairs) is the potential-energy function, \({\mathbf{\xi}}_{i}(t)\) is a Gaussian random white-noise vector, and \(v_{0}\) is the "swim velocity". The vector \({\bf n}_{i}(t)=(\cos\theta_{i}(t),\sin\theta_{i}(t))\) is a stochastic unit vector in which the angle \(\theta_{i}(t)\) is controlled by a Gaussian white noise term the magnitude of which defines the rotational diffusion coefficient, \(D_{r}\), according to
\[\langle\dot{\theta}_{i}(t)\dot{\theta}_{j}(t^{\prime})\rangle\,=\,2D_{r} \delta_{ij}\,\delta(t-t^{\prime})\,. \tag{3}\]
The magnitude of the white-noise velocity vector \({\mathbf{\xi}}_{i}(t)\) defines the translational diffusion coefficient \(D_{t}\),
\[\langle{\mathbf{\xi}}_{i}^{\alpha}(t){\mathbf{\xi}}_{j}^{\beta}(t^{\prime})\rangle\,= \,2D_{t}\delta_{ij}\delta_{\alpha\beta}\delta(t-t^{\prime}) \tag{4}\]
in which \(\alpha,\beta\) are spatial \(x,y\) indices and \(i,j\) are again particle indices. The mobility \(\mu\) is taken to be unity throughout, i.e., \(\mu\) is regarded as a material constant, while the remaining model parameters \(D_{r}\), \(D_{t}\), and \(v_{0}\) are allowed to vary from particle to particle. This introduces three kinds of polydispersity. In all cases the average of the polydisperse parameter is kept constant. For any varying parameter \(X\), the polydispersity \(\delta\) is conventionally defined [29] as \(\delta\equiv\sqrt{\langle X^{2}\rangle-\langle X\rangle^{2}}/\langle X\rangle\) in which the sharp brackets denote averages.
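For concreteness, a minimal NumPy sketch of how per-particle activity parameters enter the time stepping is given below; it uses unit mobility as in the model definition, but the simple Euler-Maruyama discretization, the arbitrary initial configuration, and the placeholder `forces` argument (standing in for the Yukawa pair forces, with periodic boundaries and neighbor lists omitted) are illustrative assumptions. The production simulations were performed with the RUMD code described below.

```python
import numpy as np

rng = np.random.default_rng(0)

def uniform_polydisperse(mean, delta, n):
    """Per-particle parameters from a uniform distribution with prescribed
    mean and polydispersity delta = std/mean (half-width sqrt(3)*delta*mean)."""
    half_width = np.sqrt(3.0) * delta * mean
    return rng.uniform(mean - half_width, mean + half_width, n)

N = 10000
dt = 0.0625 * 1.0 / 25.0**2                # time step quoted in the text
D_t = uniform_polydisperse(1.0, 0.5, N)    # example: 50% polydispersity in D_t
D_r = np.full(N, 0.8)                      # monodisperse D_r (homogeneous phase)
v0 = np.full(N, 25.0)                      # monodisperse swim velocity

pos = rng.uniform(0.0, 100.0, (N, 2))      # arbitrary initial positions for the sketch
theta = rng.uniform(0.0, 2.0 * np.pi, N)

def abp_step(pos, theta, forces, dt):
    """One Euler-Maruyama step of Eqs. (2)-(4) with mobility mu = 1;
    'forces' holds the Yukawa pair forces, computed elsewhere."""
    swim = v0[:, None] * np.column_stack((np.cos(theta), np.sin(theta)))
    noise_t = np.sqrt(2.0 * D_t * dt)[:, None] * rng.standard_normal((N, 2))
    new_pos = pos + dt * (forces + swim) + noise_t
    new_theta = theta + np.sqrt(2.0 * D_r * dt) * rng.standard_normal(N)
    return new_pos, new_theta
```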
We simulated 10000 particles of the 2d Yukawa system with interactions cut off at \(4.5\sigma\). The time step used was \(\Delta t=0.0625\langle D_{t}\rangle/\langle v_{0}\rangle^{2}\). Each simulation involved \(2\cdot 10^{7}\) time steps. The (GPU) code employed was RUMD [30], modified to deal with polydispersity in particle-activity parameters. Parameters corresponding to both the homogeneous phase (\(D_{t}=1.0\), \(D_{r}=0.8\), \(v_{0}=25\)) and the MIPS phase (\(D_{t}=1.0\), \(D_{r}=0.2\), \(v_{0}=25\)) were simulated.
## III Parameter polydispersity in the homogeneous phase
We first consider the effect of active-parameter polydispersity on the structure and dynamics in the homogeneous phase. Figure 1(a), (c), and (e) show the (average) radial distribution functions (RDFs) for different degrees of polydispersity: uniform parameter distributions of 10% and 50% polydispersity and binary parameter distributions of 10%, 30%, and 50% polydispersity in \(D_{t}\) ((a) and (b)), \(D_{r}\) ((c) and (d)), and \(v_{0}\) ((e) and (f)) (\(x_{A}\) denotes the large-parameter fraction of particles).
Figure 1(b), (d), and (f) show the average mean-square displacement (MSD) as a function of time for the same situations. We find here the well-known three regimes [31]: diffusive (small time), ballistic (intermediate time), and diffusive (long time). The first regime is governed by the thermal noise, the second by the swim velocity, and the third by the rotational diffusion coefficient and swim velocity. In all cases there is little effect of polydispersity. This
Figure 1: Structure and dynamics in the homogeneous phase for uniformly polydisperse systems (black and red curves are for 10% and 50% polydispersity) and for binary systems in which \(x_{A}\) is the fraction of large-parameter particles (green represents 10%, orange 30%, and blue 50% polydispersity). (a), (c), and (e) show the average radial distribution functions (RDFs), \(g(r)\), for systems with polydispersity in the \(D_{t}\), \(D_{r}\), and \(v_{0}\) parameters, respectively. (b), (d), and (f) show the corresponding results for the mean-square displacement, MSD, as a function of time, \(\langle\Delta r^{2}(t)\rangle\). In all cases there is little effect of polydispersity.
is not trivial because the individual particles conform to different equations of motion; indeed they move differently as becomes clear from the next figure.
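For reference, the three regimes follow from the standard mean-square displacement of a single, non-interacting ABP in two dimensions, quoted here for orientation only (particle interactions are of course not negligible at the densities studied),

\[\langle\Delta r^{2}(t)\rangle\,=\,4D_{t}t+\frac{2v_{0}^{2}}{D_{r}^{2}}\left(D_{r}t+e^{-D_{r}t}-1\right)\,\simeq\,\begin{cases}4D_{t}t+v_{0}^{2}t^{2}&(t\ll 1/D_{r})\\ 4\left(D_{t}+v_{0}^{2}/(2D_{r})\right)t&(t\gg 1/D_{r})\,,\end{cases}\]

showing that the thermal noise controls the earliest times, the swim velocity the intermediate ballistic regime, and the combination \(D_{t}+v_{0}^{2}/(2D_{r})\) the long-time diffusivity.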
To illuminate the role of parameter polydispersity for the individual particles, we identified for the two uniform polydispersities the particles with the 20% lowest activity parameters and those with the 20% highest. For each of these categories we determined the corresponding RDFs (counting only surrounding particles of the same type) and MSDs. The results are shown in Fig. 2. For the structure ((a), (c), and (e)) there is little difference although one notes that the low \(D_{t}\) particles show a somewhat more pronounced first peak than that of the high \(D_{t}\) particles ((a)). Since \(D_{t}\) in the Langevin-type equation Eq. (2) plays the role of a temperature, this is consistent with the well-known finding for passive systems that lowering the temperature generally leads to a higher first peak of the RDF. For the dynamics, there are clear differences: In the case of \(D_{t}\) polydispersity ((b)), the long-time dynamics is the same, while the short-time dynamics is fastest for the highest \(D_{t}\) particles. For \(D_{r}\) polydispersity the opposite is observed; here
Figure 2: Role of the 20% lowest (black and red) and 20% highest (green and blue) active-parameter particles in the homogeneous phase of continuously polydisperse systems at 10% and 50% polydispersity. (a), (c), and (e) show RDFs, \(g(\mathbf{r})\), for polydispersity in the \(D_{t}\), \(D_{r}\), and \(v_{0}\) parameter, respectively. (b), (d), and (f) show the corresponding results for the MSD \(\langle\Delta r^{2}(t)\rangle\). For the RDFs there is little difference between the lowest and highest active-parameter particles, while the MSD shows variations that are much larger than those of the overall average (Fig. 1). This variation is seen in the short-time data in the case of \(D_{t}\) polydispersity and in the long-time data for \(D_{r}\) and \(v_{0}\) polydispersity.
the short-time dynamics is the same for low and high \(D_{r}\) particles while the long-time dynamics is fastest for the particles with low \(D_{r}\). The rotational diffusion coefficient determines a particle's persistence time because a decrease of \(D_{r}\) implies an increase of the long-time diffusion coefficient. Thus the phenomena observed are consistent with the single-particle scaling of the long-time diffusion coefficient. On the short time scale little change of direction is possible, making the value of \(D_{r}\) irrelevant.
Consider finally the case of \(v_{0}\) polydispersity ((f)). Here there is no effect on the short-time dynamics, while the long-time dynamics is fastest for particles with large \(v_{0}\). That the short-time dynamics is unaffected is a simple consequence of Eq. (2) in which the \(v_{0}\) term on the short time scale gives rise to a MSD proportional to \(t^{2}\), which is much smaller than the short-time diffusive contribution to the MSD. The faster long-time diffusive dynamics for the high \(v_{0}\) particles comes about because a higher swim velocity implies larger displacements in one direction before the direction changes, corresponding to longer jumps in a simple random-walk picture.
How do the results of Fig. 2 relate to the overall average structure and dynamics findings of Fig. 1? The structure is almost the same for low and high active parameter particles for all three types of polydispersity - and independent of the degree of polydispersity - so in this light the RDF findings of Fig. 1 are not surprising. In regard to the MSD, however, the variations induced by parameter polydispersity are significant but strikingly average out, resulting in little overall change of the average MSD. Thus in all three cases the black and green curves in Fig. 2, which represent just 10% polydispersity, are close to each other, while the red and blue curves (50% polydispersity) move in opposite directions.
## IV Parameter polydispersity in the MIPS phase
The existence of a MIPS phase is a unique feature of active matter, and MIPS is also found in the ABP model [32; 33; 34]. This phase is also of interest to investigate in regard to the effects of introducing active-parameter polydispersity. We did this by repeating the simulations, the only difference being that the average of \(D_{r}\) is now 0.2 instead of the above used 0.8. The majority of particles are found in the dense phase at all MIPS state points studied, meaning that data found by averaging over all particles are representative for this phase.
The results for the RDFs and MSDs are shown in Fig. 3. In regard to the dynamics, the picture is not much different from that of the homogeneous phase: the MSD is virtually unaffected by the introduction of polydispersity in the three parameters ((b), (d), and (f)). The same applies for the RDF for \(D_{t}\) and \(D_{r}\) polydispersity, whereas \(v_{0}\) polydispersity strongly affects the RDF ((e)). Note that the RDF at large \(r\) is systematically slightly larger than unity; this is an effect of the fact that the figures report the RDF averaged over all particles. While not clearly visible, a close inspection reveals that the green RDF and MSD curves cover a black one and the blue curves likewise cover a red one. The former are for 10% polydispersity in the uniform and binary cases, respectively, while the latter are for 50% polydispersity. We conclude that the introduction of \(v_{0}\) polydispersity strongly affects the RDF in a way that is independent of the parameter probability distribution. Given that the existence of the MIPS phase reflects the active-matter feature of a temporary persistence direction in the particle motion, it is not surprising that introducing \(v_{0}\) polydispersity has a strong effect on the structure of the MIPS phase.
Figure 3: Structure and dynamics in the MIPS for uniformly polydisperse systems (black and red curves are for 10% and 50% polydispersity) and for binary systems in which \(x_{A}\) is the fraction of large-parameter particles (green represents 10%, orange 30%, and blue 50% polydispersity). (a), (c), and (e) show the average RDFs for systems with polydispersity in the \(D_{t}\), \(D_{r}\), and \(v_{0}\) parameters, respectively. (b), (d), and (f) show the corresponding results for the mean-square displacement as a function of time. There is little effect of introducing polydispersity in the \(D_{t}\) and \(D_{r}\) parameters whereas a notable effect of \(v_{0}\) polydispersity is observed for the RDF, in which case there is also a visible – though much smaller – effect on the dynamics.
To throw more light on these findings, following the procedure of the homogeneous-phase investigation we identify in Fig. 4 the contributions to structure and dynamics from the lowest (black and red) and highest (green and blue) parameter particles. Compared to the homogeneous case, there is more variation for all three RDFs, in particular for \(D_{r}\) and \(v_{0}\) polydispersity. In the \(D_{r}\) case, the black and green curves (10% polydispersity) are close and move in the same direction when increasing to 50% polydispersity. Interestingly, the average of black and green, as well as of red and blue, is an almost unchanged RDF (Fig. 3(c)). The \(v_{0}\) polydispersity case is different: here the 10% polydispersity curves are similar (black and green), but quite different from the 50% polydispersity curves (red and blue). This is consistent with the finding of Fig. 3(e) and means that the actual value of \(v_{0}\) matters little for the structure surrounding a given particle. We believe this is an effect of the strong interparticle interactions within the
Figure 4: Role of the 20% lowest (black and red) and 20% highest (green and blue) active-parameter particles in the MIPS phase of continuously polydisperse systems at 10% and 50% polydispersity. (a), (c), and (e) show the RDFs for polydispersity in the \(D_{t}\), \(D_{r}\), and \(v_{0}\) parameters, respectively. (b), (d), and (f) show the corresponding results for the MSD. For the RDFs there is for \(D_{t}\) polydispersity little difference between the lowest and highest active-parameter particles except at the first peak; \(D_{r}\) polydispersity shows a larger but still modest difference, which is most pronounced at 50% polydispersity. The case of \(v_{0}\) polydispersity shows significant differences between 10% and 50% polydispersity, but for each of these values there is only modest variation between the lowest and highest active-parameter particles’ RDF. For the MSD the situation is similar to that observed in the homogeneous phase (Fig. 2): variation is observed in the short-time data for \(D_{t}\) polydispersity, in the long-time data for \(D_{r}\) polydispersity, and at intermediate and long times for \(v_{0}\) polydispersity.
MIPS phase that average out the effect of the individually varying \(v_{0}\). At the same time, increasing the degree of \(v_{0}\) polydispersity leads to a considerable broadening of the width of the first peak. Because there is little difference between the low and high \(v_{0}\) RDFs, the picture is very similar to the overall average picture. In fact, at 50% \(v_{0}\) polydispersity we find that the system becomes almost homogeneous, which is consistent with the findings of Ref. [35]. - In regard to the MSD, the MIPS phase low- and high-parameter findings are similar to those of the homogeneous phase (Fig. 2).
## V Role of the average potential energy
To further illuminate the effect of parameter polydispersity we evaluated the potential energy as a function of time during the simulations (Fig. 5). In the homogeneous phase ((a), (c), and (e)) polydispersity has little effect on the average potential energy. This is consistent with the finding that in this phase structure and dynamics are virtually unaffected by the degree of polydispersity (Fig. 1). The same applies for the MIPS phase in the \(D_{t}\) and \(D_{r}\) polydispersity cases. Only in the \(v_{0}\)-polydispersity MIPS case is the structure significantly affected (Fig. 3(e)), which is consistent with the finding that only in this case does the average potential energy change significantly with the degree of polydispersity (Fig. 5(f)). At increasing \(v_{0}\) polydispersity the MIPS-phase average potential energy approaches that of the homogeneous phase, which means that the average particle distance increases (the density decreases) with increasing \(v_{0}\) polydispersity. Indeed, in this case the position of the first peak of the RDF was found to increase toward unity (Fig. 3(e)), indicating that the MIPS phase gradually fills out the sample area and, eventually, disappears.
Figure 5: Average potential energy as a function of time during steady-state simulations. (a), (c), and (e) show data for the homogeneous phase for systems of 10%, 30%, and 50% polydispersity in the \(D_{t}\), \(D_{r}\), and \(v_{0}\) parameters, respectively. (b), (d), and (f) show the corresponding results for the MIPS-phase simulations. Except for the MIPS-phase \(v_{0}\)-polydispersity case, the average potential energy is virtually unaffected by the introduction of polydispersity.
In summary, changes in the average potential energy upon introduction of parameter polydispersity correlate with changes of structure and dynamics. This means that the average potential energy is a convenient "thermometer" of changes to the physics.
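As an illustration of how such a "thermometer" can be monitored in practice, the sketch below computes the average pair potential energy per particle for a truncated Yukawa interaction. The functional form, prefactors, and cutoff are assumptions for illustration and are not necessarily those of the simulations reported here.

```python
import numpy as np

def mean_yukawa_energy(pos, box_length, epsilon=1.0, kappa=1.0, r_cut=4.0):
    """Average pair potential energy per particle for a screened-Coulomb (Yukawa)
    interaction u(r) = epsilon * exp(-kappa * r) / r, truncated at r_cut."""
    n = len(pos)
    diff = pos[:, None, :] - pos[None, :, :]
    diff -= box_length * np.round(diff / box_length)   # minimum image
    r = np.sqrt((diff ** 2).sum(axis=-1))
    r = r[np.triu_indices(n, k=1)]                     # unique pairs
    r = r[r < r_cut]
    u = epsilon * np.exp(-kappa * r) / r
    return u.sum() / n

# Recording this quantity frame by frame along a trajectory produces curves of
# the kind shown in Fig. 5:
# energies = [mean_yukawa_energy(frame, L) for frame in trajectory]
```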
## VI Discussion
It is well known that introducing size polydispersity into active-matter models by varying the characteristic length of the pair potential has a significant effect on both structure and dynamics [17; 18; 19; 20; 36], just as for passive systems [29; 37; 38]. This paper investigated the effects of introducing particle-to-particle variations of other parameters of the 2d ABP model with Yukawa pair interactions. With the exception of \(v_{0}\) polydispersity in the MIPS phase, we find surprisingly small effects on the structure and dynamics when polydispersity is introduced such that the average of the parameter in question is kept constant. The cause of this insensitivity to parameter polydispersity is not obvious, but it means that a polydisperse active system in many respects behaves like the homogeneous system of particles with average model parameters, i.e., that a mean-field description applies to a good approximation. While it is easy to understand the significant effects of size polydispersity [17; 18; 19; 20; 36], we have no physical explanation for the absence of any role of polydispersity in the translational and rotational noise terms, as well as the swim velocity parameter in the homogeneous phase. It often happens in physics that a mean-field description works better than can be justified by simple arguments, and we conclude that this is indeed the case also for parameter polydispersity in active matter. We note that a recent study of different Lennard-Jones (passive) systems showed a similar insensitivity to the introduction of energy polydispersity [39], a result that is also not well understood.
Investigations of other active-matter models should be carried out to determine the generality of our findings. If they are general, the introduction of polydispersity may have applications to instances of non-polydisperse active-matter models for which the system in question is difficult to equilibrate because of extremely long relaxation times [40; 41]. The idea is to employ "activity-induced annealing" [42] for a polydisperse system. As is well known, passive glass-forming polydisperse liquids may be equilibrated by the SWAP algorithm [13]. Even though detailed balance does not apply for active matter, SWAP may possibly be applied also for equilibrating an active, single-component highly viscous system [43] by proceeding as follows. First, introduce polydispersity into one of the active-model parameters. Then, carry out random particle swaps which according to the above findings will not significantly affect the average structure and dynamics of the system. Finally, remove the artificial polydispersity. Inspired by Ref. [13] we conjecture that this will equilibrate the system more quickly than a lengthy simulation.
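The proposed procedure can be summarized in pseudocode. The sketch below assumes a hypothetical simulation object `sim` exposing per-particle `v0` values and a `run()` method; it is meant only to make the three steps concrete, not to prescribe an implementation.

```python
import numpy as np

def anneal_with_parameter_polydispersity(sim, delta=0.5, n_swap_cycles=10_000):
    """Sketch of the proposed equilibration protocol (the `sim` interface is
    hypothetical): 1) spread one activity parameter (here v0) around its mean,
    2) alternate ordinary dynamics with random v0 swaps, 3) restore the
    original single-component parameters."""
    v0_mean = sim.v0.mean()
    rng = np.random.default_rng()

    # 1) introduce artificial polydispersity with the mean kept fixed
    sim.v0 = rng.uniform(v0_mean * (1 - delta), v0_mean * (1 + delta), size=sim.n)

    # 2) dynamics interleaved with swaps of the v0 values of random particle pairs;
    #    by the results above, such swaps barely perturb average structure/dynamics
    for _ in range(n_swap_cycles):
        sim.run(steps=100)
        i, j = rng.choice(sim.n, size=2, replace=False)
        sim.v0[i], sim.v0[j] = sim.v0[j], sim.v0[i]

    # 3) remove the artificial polydispersity again
    sim.v0 = np.full(sim.n, v0_mean)
    sim.run(steps=10_000)      # short final relaxation at the target parameters
    return sim
```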
###### Acknowledgements.
This work was supported by the VILLUM Foundation's _Matter_ grant (VIL16515).
|
2305.04315 | A Framework for Characterizing Novel Environment Transformations in
General Environments | To be robust to surprising developments, an intelligent agent must be able to
respond to many different types of unexpected change in the world. To date,
there are no general frameworks for defining and characterizing the types of
environment changes that are possible. We introduce a formal and theoretical
framework for defining and categorizing environment transformations, changes to
the world an agent inhabits. We introduce two types of environment
transformation: R-transformations which modify environment dynamics and
T-transformations which modify the generation process that produces scenarios.
We present a new language for describing domains, scenario generators, and
transformations, called the Transformation and Simulator Abstraction Language
(T-SAL), and a logical formalism that rigorously defines these concepts. Then,
we offer the first formal and computational set of tests for eight categories
of environment transformations. This domain-independent framework paves the way
for describing unambiguous classes of novelty, constrained and
domain-independent random generation of environment transformations,
replication of environment transformation studies, and fair evaluation of agent
robustness. | Matthew Molineaux, Dustin Dannenhauer, Eric Kildebeck | 2023-05-07T15:53:07Z | http://arxiv.org/abs/2305.04315v1 | # A Framework for Characterizing Novel Environment Transformations in General Environments
###### Abstract
To be robust to surprising developments, an intelligent agent must be able to respond to many different types of unexpected change in the world. To date, there are no general frameworks for defining and characterizing the types of environment changes that are possible. We introduce a formal and theoretical framework for defining and categorizing environment transformations, changes to the world an agent inhabits. We introduce two types of environment transformation: R-transformations which modify environment dynamics and T-transformations which modify the generation process that produces scenarios. We present a new language for describing domains, scenario generators, and transformations, called the Transformation and Simulator Abstraction Language (T-SAL), and a logical formalism that rigorously defines these concepts. Then, we offer the first formal and computational set of tests for eight categories of environment transformations. This domain-independent framework paves the way for describing unambiguous classes of novelty, constrained and domain-independent random generation of environment transformations, replication of environment transformation studies, and fair evaluation of agent robustness.
## 1 Introduction
A primary and unrealized goal of artificial intelligence is _robustness_: the capability to continue to perform well despite novel changes in environments an agent interacts with. Robustness is pivotal to continued autonomy in open worlds - those that are dynamic and unstable like the physical world. To date, the science of machine learning has been assumed to prepare agents to become more robust without rigorously defining what the space of such novel changes is, nor how they can be systematically varied to test the robustness of an agent. Descriptions of concept learning or classification learning problems in machine learning (Mitchell, 1997) traditionally give a set of examples from a fixed distribution, with no described temporal relationships. Reinforcement learning problems (Sutton and Barto, 2018) describe an environment that an agent receives observations from, but generally assume the environment is unchanging. Recent work on open-world novelty (Boult et al., 2021; Muhammad et al., 2021; Gamage et al., 2021) characterizes changes to an environment from the point of view of the agent, preventing direct comparisons across agents with different world views. Langley (2020) describes environmental change, but without a rigorous framework.
To better describe the challenges that a changing environment poses to an agent, we define a general class of environments and _environment transformations_ (sometimes referred to as _novelties_). These definitions are useful for 1) describing the extent of environment transformations, 2) categorizing and grouping transformations with similar characteristics, 3) communicating and replicating transformations, and 4) randomly generating novelties that test the robustness of an agent to novel changes in its environment. To our knowledge, this is the first framework of definitions that describes formally and completely a space of environment transformations over a general class of environments.
Transformations to environments come in two general categories: _R-Transformations_ to the dynamics of the environment, and _T-Transformations_ to scenario distributions; the latter category includes distributions over possible initial states and performance criteria. To rigorously define a general class of environments and novelties, we start with a well-known and tested formalism for representing these environments, the planning domain definition language (PDDL) family of languages. PDDL provides a general mechanism for describing individual scenarios referred to as "problems", and a separate mechanism for describing environment dynamics referred to as "domains". In this work, we formally describe the general notion of a "scenario generator", which describes a distribution of scenarios rather than a single scenario.
T-SAL R-Transformations describe changes to domains as used in PDDL. PDDL domains use a subset of first-order logic to describe environment dynamics: changes that may and must happen as a result of agent actions and the natural interactions of objects without agency. Using established PDDL formalisms, we build a new language, the T-SAL (Transformation and Simulator Abstraction Language) Domain language, that covers the broadest class of environments possible. Unlike prior work on PDDL languages, we do not constrain the representation of T-SAL based on backwards compatibility or to match well the capabilities of existing automated planners; therefore, we incorporate events and processes from PDDL+ that describe continuous change, non-deterministic effects and events that describe change probabilistically, mathematical operators that describe proportional change, and an object introduction effect that describes open-world change.
T-SAL T-Transformations describe changes to scenario generators. In addition to the initial state classically described within a PDDL problem, these scenario generators describe how to draw initial state objects, and fluents from statistical distributions. Instead of a goal, scenario generators include a performance function. This is similar to existing maximization goals, but defined with respect to agents as well as the state, permitting multiple agents' performance to be described.
T-Transformations and R-Transformations are useful both to generate and to formally describe novelty. This work contributes to the literature on open-world novelty by formally characterizing a group of novelty concepts previously described only informally. Scientists working together in the DARPA SAIL-ON program to organize and schedule program advancement developed a group of characterizations described as a "novelty hierarchy". This novelty grouping was previously defined only informally, so some practitioners may reasonably disagree with our definitions; however, we believe they capture the essence of the designers' intent. By providing this formal definition of the novelty hierarchy used in early novelty investigations, we hope to show that T-SAL transformations are a viable means of rigorously categorizing novelty in the future. We believe this forms a suitable basis for future investigations of the impact of novelty categories on agents.
In the following sections, we (1) formally define T-SAL domains, (2) give formulas for assessing T-SAL domain legality, (3) formally describe scenario generators, (4) formally describe transformations over T-SAL domains, (5), formally describe transformations over scenario generators, and (6) formally describe the SAIL-ON program novelty levels. A formalization for the state transition function of T-SAL is not given, but is based on PDDL+ (Fox and Long, 2006) with only small modifications necessary. See Appendix C for a list of the differences between T-SAL and PDDL+.
## 2 Related Work
In this section, we review prior work that informed the development of our framework. In the literature on creativity, novelty is a well-defined concept. Wiggins' (2006) Creative Systems Framework (CSF), which formally defined and extended the creativity system described by Boden (1990) in her book _The Creative Mind_, defines novelty as a property of a creative output which previously did not exist. CSF was the first formal logic-based creativity framework, describing, at the highest level, a conceptual space of universes and artifacts that naturalistic, creative agents create in a societal context. Our framework is both narrower in scope and provides greater detail, ranging over transformations to environments that consist of agents, actions, events, processes, and tasks, commonly found in AI literature. In Wiggins' CSF, \(\mathscr{R}\) denotes a set of universe-constraining rules, and \(\mathscr{T}\) denotes rules for traversing a space. We maintain consistency with Wiggins' prior definitions of \(\mathscr{R}\) and \(\mathscr{T}\), dividing our transformations into R-Transformations and T-Transformations respectively.
The term _novelty_ in AI literature has taken on at least two perspectives: novelty from the lens of an individual agent and novelty that exists from the lens of an environment's history. Research of the former defines novelty relative to an agent's experiences (e.g., Muhammad et al. (2021), Boult et al. (2021), Gamage et al. (2021)), while our framework defines novelty as the latter, consistent with work on creativity (Wiggins, 2006). Frameworks with an agent's perspective of novelty are useful because they can identify how prepared an agent is for new challenges. However, they cannot usefully describe how environment challenges differ in a way that cuts across different agents and different knowledge representations. This limits the ability for agent-based notions of novelty to support evaluations on a variety of different AI approaches. For historical and language reasons, both areas of research currently use the term "novelty", but in different ways; therefore we emphasize our novelty framework is environment-based.
Environment-based novelty is fundamentally different from agent-based novelty. In the theory of open world novelty by Boult et al. (2021), novelty occurs when an agent experiences an environment sufficiently different from its prior experiences. A key feature is that novelty occurs at a point in time. In environment-based frameworks like that of Wiggins' CSF (2006), novelty exists independent of any given agent's experiences or knowledge. Additionally, our framework does not define novelty as occurring at a point in time, but rather as encompassing the time-independent difference between two environments. This means that novelty can exist without any changes to the observation or state space. While agent-based novelty is valuable for considering what experiences could be new to a particular agent, two agents cannot truly be compared on the "same" novelty unless they have identical prior experiences; such comparisons are a key feature of our framework.
A need for a theory of environmental change has been proposed to help explain and measure progress in open-world learning. Langley's (2020) work provides a set of broad requirements for such a theory,
motivating a theoretical formalism for environments and transformations on them. Our framework provides both, and so can be considered an example "theory" in this regard that describes environmental changes and provides specific language for characterizing how environment changes vary.
Existing novelty generation capabilities for evaluating AI systems are designed in close alignment with simulated environments. Current benchmark domains (Goel et al. (2021), Xue et al. (2022), Kejiwal and Thomas (2021), Balloch et al. (2022)) test an agent's ability to respond to novelty using scenarios carefully constructed by domain experts. This is current standard practice, and each class of transformations constructed applies only to a single environment. This ad-hoc construction does not permit generalization over domains or formal statements about what characteristics apply to the novel scenarios generated. To the best of our knowledge, our framework is the first to systematically provide infinite novelties that are not hand-curated by domain experts.
Game design research has explored problems of domain generation and notions of transformations. The Metagame system (Pell, 1992) and EGGG system (Orwant, 2000) automatically generated chess-like games using a complicated system of rules and constraints designed for chess-like games in particular. Smith and Mateas (2010) decoupled the design of a game space from the exploration of that space to find interesting games. The language used to design game spaces was large and flexible compared to the prior chess-like games, but still incorporated many assumptions, such as the existence of a rectangular play space. Our framework's language for expressing domains, based on PDDL, can model many aspects found in the game's representations of these works; the aim of our framework is to produce novelty first and then meet other criteria later (such as being an interesting game). Therefore we focus on a more fundamental representation without worrying about domain specific knowledge beyond representational notions of objects, actions, events, and goals. To sculpt the generative space of our framework, users would provide constraints on the choice of R and T-transformations as they see fit - and such a discussion is outside the scope of this paper.
Prior work (Molineaux and Dannenhauer, 2021) defined a set of environment transformation characteristics, or "dimensions", useful for classifying the difference between pairs of original and modified environments. These definitions are based on describing environments using a refinement of the Partially Observable Markov Decision Process formalism, rather than the relational manipulations described here. The dimension-based formalism is concerned primarily with broad characterizations of the start and endpoints rather than describing a specific pathway between them. As such, it is more difficult to verify, and does not account for human-intuitive descriptions of novelty categories such as those found in the novelty hierarchy. However, the upside is that there is much less flexibility in the dimension-based formalism to describe the same environments and transformations in different ways; as such, dimension values have much less "wiggle room" than an arbitrary T-SAL categorization.
## 3 Formalism
T-SAL is a general term that refers to two languages for describing environments: the computational representation (T-SAL-CR), and the logical language that describes it, L(T-SAL), _read as 'logic of T-SAL'_. L(T-SAL) is a first-order logic that describes domains, scenario generators, and environment transformations at an abstract level. It provides common terminology and is used for rigorous definitions. T-SAL-CR describes individual domains, using domain-specific fluents and types to describe the dynamics of domains. We build up both in this formalism; L(T-SAL) is used to define and make
general statements about environments and environment transformations. The T-SAL-CR is provided to communicate examples. An exact semantics for the T-SAL-CR is not provided here; however, it's similar to existing computational representations in the PDDL family (see Appendix C for a detailed comparison). All examples of T-SAL concepts use the well-understood Cart-Pole domain, supplemented with moving blocks and multiple carts.
### T-SAL Domains
T-SAL domains describe properties and dynamics of environments. This section builds up the concept of a domain by describing simpler concepts used in the description of a domain in a bottom-up fashion. A BNF for the T-SAL domain language is given in Appendix D. All variables introduced throughout the rest of this section are shown in Table 1. Short definitions and examples of major T-SAL domain concepts are given in Table 2.
Both L(T-SAL) and T-SAL-CR borrow heavily from the PDDL family of languages (McDermott et al., 1998; Fox & Long, 2003; Edelkamp & Hoffmann, 2004; Gerevini & Long, 2005; Fox & Long, 2006; Helmert, 2008; PDDL 3.1; PDDL+) developed in the planning community, which in turn took inspiration from STRIPS (Fikes & Nilsson, 1971) and the situation calculus (McCarthy and Hayes, 1969).
\begin{table}
\begin{tabular}{l l} \hline _S:_ the set of symbols & Alt: the set of effect modifications \{SET, INCREASE, \\ _V:_ variables in a domain description & DECREASE, CREATE\} \\ _V:_ the space of variable sets & Dir: the set of change directions \{INCREASE, DECREASE\} \\ \hline _Te:_ terms which may be of the form of symbols, & Ev: the space of events \\ variables, integers, or reals, & \\ \hline \multirow{2}{*}{_Y:_ a type in the space of symbols S & Ev: the space of sets of events \\ & ev: an event in the space of events Ev \\ \hline \multirow{2}{*}{_F:_ the space of functions & P: the space of process models \\ \cline{2-2} _f:_ a function in the space of functions F & P: the space of sets of process models \\ \cline{2-2} _Fn:_ the set of function names & p: a process model in the space of process models \\ \cline{2-2} _fn:_ a function name & D: the space of T-SAL domains \\ \cline{2-2} _Cn:_ the space of conditions & d: a domain in the space of T-SAL domains D \\ \cline{2-2} _Cn:_ the space of sequences of conditions & st: a T-SAL state that is true in the environment at a particular moment in time \\ \hline \multirow{2}{*}{_cnd:_ a condition in the space of conditions Cn & og: an object generator \\ \cline{2-2} _Ca:_ the space of calculations & OG: the space of object generators \\ \cline{2-2} _Ca:_ the space of sets of calculations & vg: a value generator \\ \cline{2-2} _calc:_ a calculation in the space of calculations Ca & VG: the space of value generators \\ \cline{2-2} _Ax:_ the space of axioms & **Va**: the space of values \\ \cline{2-2} _Ax:_ the space of sets of axioms & fg: a fluent generator \\ \cline{2-2} _ax:_ an axiom in the space of axioms Ax & sg: a T-SAL scenario generator \\ \cline{2-2} _E:_ the space of effects & SG: the space of scenario generators \\ \cline{2-2} _E:_ the space of sets of effects & GF: the space of ground fluents \\ \cline{2-2} _e:_ an effect in the space of effects E & gf: a ground fluent in the space of ground fluents \\ \cline{2-2} _C:_ the space of continuous changes & AG: the space of agents \\ \cline{2-2} _C:_ the space of sets of continuous changes & ag: an agent in the space of agents \\ \cline{2-2} _c:_ a continuous change in the space of continuous & t: an environment transformation \\ \cline{2-2} _changes C_ & t: a sequence of environment transformations \\ \cline{2-2} _A:_ the space of action models & **T:** the space of transformation sequences \\ \cline{2-2} _A:_ the space of sets of actions models & U: the universal set \\ \cline{2-2} _a:_ an action model in the space of action models A & \\ \hline \end{tabular}
\end{table}
Table 1: Summary of variables used in L(T-SAL)
While we share the objective of representing environments simply and concretely, our primary objective is to enable a complete description of a range of transformation operators, which necessitates a different set of domain representation decisions. This results in a similar representation to PDDL languages that is not directly descended from any one such language. Ease of planning was not a primary design criterion, and we do not attempt to provide backwards compatibility with existing PDDL languages; instead, T-SAL follows the same guidelines but without historical baggage.
In describing T-SAL domains, we use symbols as identifiers; we refer to the set of symbols as \(S\). Variables in a domain description come from the set \(V\subset S\) (the space of variable sets is denoted _V_). Another set of symbols identifies particular objects; we say object \(o\) comes from the set \(O\subset S\).
\(\ulcorner\) A domain type _ty_ is in the space \(Ty\subset S\). Five symbols are reserved for special basic types: REAL, INTEGER, Boolean, Agent, and Object. A sixth standard type, Position, is also reserved, but is not a "basic" type. Nearly every domain requires objects and sometimes agents to be related spatially, and spatial information is useful to distinguish from non-spatial. Therefore, Position is reserved as a type to allow spatial properties to be automatically distinguished from non-spatial properties. The Position type must be specified to derive from REAL, INTEGER, or Object, depending on the representation of a domain.
**Definition 1: Domain Types**_\(\lrcorner\)_ _Example_: Unique types in CartPole include CART, BLOCK, SPEED, and ANGLE.
\(\ulcorner\) A domain function uses the variable \(f\) (with space \(F\) and space of sets **F**) and has a name (_function-name_: _F_\(\rightarrow\)_Fn_), arguments (_function-arguments_: _F_\(\rightarrow\)_V_) and argument types (_function-argument-type_: _F_\(\times\)_V_\(\rightarrow\)_Ty_), and a value type (_function-value-type_: _F_\(\rightarrow\)_Ty_). Function names _fn_ come from the set _Fn_\(\subset S\). In tuple form, a function is given as \(\langle\)_name_, _arguments_, _argument-types_, _value-type_\(\rangle\).
**Definition 2: Domain Functions and Function Names**_\(\lrcorner\)_ _Example_: In T-SAL, the relationship between a cart and its pole is described using roll, pitch, and yaw. The function \(f\) below describes the velocity of a cart in T-SAL-CR:
_f_: (CART-VELOCITY ?C - CART) - SPEED
For this function, the following equivalences hold: _function-name_(_f_) = CART-VELOCITY; _function-arguments_(_f_) = [?C]; _function-argument-type_(_f_, ?C) = CART; _function-value-type_(_f_) = SPEED.
\(\ulcorner\) Values include objects, numbers, and Boolean literals; the set of values is defined \(Va\equiv O\cup\mathbb{I}\cup\mathbb{R}\cup\) {TRUE, FALSE}. Terms include values, variables, and function terms. Function terms apply a function to other terms, including function terms. Formally, the set of terms is defined \(Te\equiv V\cup Va\cup\langle\)_Fn_, **Te**\(\rangle\). Here, **Te** is the space of sequences of terms from _Te_. Legal function terms have a term sequence the same length as the arity of the named function.
**Definition 3: Values and Terms**_\(\lrcorner\)_ _Examples_: In the T-SAL domain language representation of Cart-Pole, legal terms include 30, CART1, ?c, (POLE-ANGLE ?c), and (POLE-ANGLE (LEFTMOST-CART)).
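For readers who prefer a computational view, a domain function of Definition 2 can be held in a small record. The following Python sketch is an illustration only (it is not part of T-SAL-CR, and the field names are our own); it mirrors the tuple form (_name_, _arguments_, _argument-types_, _value-type_).

```python
from dataclasses import dataclass
from typing import List, Dict

@dataclass
class Function:
    """One possible in-memory form of a T-SAL domain function (Definition 2)."""
    name: str                       # e.g. "CART-VELOCITY"
    arguments: List[str]            # e.g. ["?C"]
    argument_types: Dict[str, str]  # e.g. {"?C": "CART"}
    value_type: str                 # e.g. "SPEED"

# The function shown in the example above:
cart_velocity = Function("CART-VELOCITY", ["?C"], {"?C": "CART"}, "SPEED")
```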
\begin{table}
\begin{tabular}{l|l|l|l}
**Concept** & _Var, Space_ & **L(T-SAL) Definition** & **T-SAL-CR Example** \\ \hline Symbol & \(S\) & \(<\)enumerated class\(>\) & CART \\ \hline Variable & \(v\in V\) & \(V\subset S\) &?C \\ \hline Term & \(Te\) & \(Te\equiv V\cup Va\cup(Fn,Te)\) &?C; CART; 3; 1.5; TRUE; (CART-VELOCITY?C) \\ \hline Type & \(ty\in Ty\) & \(Ty\subset S\) & CART \\ \hline Function & \(f\in F\) & \(F\equiv S\times V\times(V\to S)\times S\) & (CART-VELOCITY?C-CART) - SPEED \\ \hline Ground & \(gf\in GF\) & \(GF\equiv Fn\times(V\to Va)\times Va\) & (= (CART-VELOCITY CART1) 0) \\ Fluent & & Tuple: \(\langle fn,\,args,\,value\rangle\) & \\ \hline Calculation & \(calc\in Ca\) & \(Ca\equiv Te\cup F\times Te\cup Op\times Ca\times Ca\) & 0; (POLE-ANGLE?C); \\ & & \(\cup\,AggOp\times V\times Cn\times Ca\) & (+ (POLE-ANGLE?C).01); \\ & & \(\cup\,IF\,Cn\times Ca\times Ca\) & (/ (SUM?C (\(\neq\) (CART-VELOCITY?C)0) \\ & & & Tuples: \\ & & (\(\prime fun,\,terms\)) \(\langle op,\,calc1,\,calc2\rangle\) & (SUM?C (\(\neq\) (CART-VELOCITY?C)0) 1)) \\ & & (SUM, \(v,\,con,\,calc\rangle\) (product, \(v,\,con,\) & \\ & & \(calc\)) (\(IF,\,end,calc_{T},calc_{T})\) & \\ \hline Condition & \(cmd\in Cn\) & \(Cn\equiv Te\cup PC\cup BOp\times Cn\times Cn\) & TRUE; (\(<\) (POLE-ANGLE?C) 30); \\ & & \(\cup\,FORMAL\times V\times Cn\times Cn\) & (AND (\(<\) (POLE-ANGLE?C) 30) (\(>\) (POLE-ANGLE CART1)-30)); \\ & & \(\cup\,HAS-VALUE\times V\times Ca\) & ANGLE CART1)-30)); \\ & & Tuples: & (FORMAL?C (\(\neq\) (CART-VELOCITY?C)0) \\ & & (AND, \(cnd1,\,cnd2\)) \(\langle\)OR, \(cnd1,\,cnd2\)) & (\(<\) (POLE-ANGLE?C) 30)); \\ & & (FORMAL, \(\boldsymbol{v},\,cons,\,req\)) \(\langle\)HAS-VALUE?A \(\boldsymbol{v},\,calc\rangle\) & (HAS-VALUE?A \(\boldsymbol{v},\,\)\(\boldsymbol{\mathit{POLE}-ANGLE?C)}\)) \\ \hline Effect & \(e\in E\) & \(E\equiv S\times V\times Alt\times Te\times[0..1]\) & (DECREASE (POLE-ANGLE?C) 0.1) [0.5] \\ \hline Continuous & \(c\in C\) & \(C\equiv S\times V\times Dir\times Ca\) & (INCREASE (POLE-ANGLE?C) (* DT \\ Change & & & (ANGULAR-MOTION?C)) \\ \hline Action & \(a\in A\) & \(A\equiv S\times Te\times V\times(V\to Ty)\times\boldsymbol{Cn\times E}\) & (:ACTION PUSH \\ Model & & Tuple: & :PERFORMANCE?AG \\ & & (\(name,\,\mathit{performer,\,args,\,argTypes}\), & :PARAMETERS (?C-CART) \\ & & & :PRECONDITIONS (\(\langle\)CONTROLS?AG?C) \\ & & & (= (AGENT-FORCE)?AG)) \\ & & & :EFFECTS \\ & & & ((INCREASE (CART-VELOCITY?C)?FORCE)) \\ \hline Event & \(ev\in Ev\) & \(Ev\equiv S\times V\times(V\to Ty)\times[0..1]\times\mathbb{R}^{20}\times \boldsymbol{Cn\times}\) & (:EVENT FINISHES \\ Model & & \(E\) & :QUALITIES (?C-CART) \\ & & Tuple: & :TRIGGERS ([URIGHT?C]) \\ & & (\(name,\,\,\,quals,\,\,qualTypes,\,prob,\,freq\), & (\(<\) (CART-VELOCITY?C) 0.1) \\ & & & (\(>\) (CART-VELOCITY?C) -0.1) \\ & & & (CONTROLS?AG?C)) \\ & & & :EFFECTS (wins?AG)) \\ \hline Process & \(p\in P\) & \(P\equiv S\times V\times(V\to Ty)\times\boldsymbol{Cn\times C}\) & (:PROCESS POLE-ANGLE-CHANGES \\ Model & & Tuple: & :QUALITIES (?C-CART) \\ & & (\(name,\,\,quals,\,\,qualTypes,\,conds,\), & :CONDITIONS (FORMAL?AG TRUE \\ & & & (NOT (wins?ag))) \\ & & & :EFFECTS \\ & & & (AND (INCREASE (POLE-ANGLE?C)) \\ & & & (* DT (ANGULAR-MOTION?C)))))) \\ \hline \end{tabular}
\end{table}
Table 2: Summary of key T-SAL domain concepts with definitions and examples
\(\ulcorner\) Conditions (variable _cnd_, space _Cn_, space of sequences _Cn_) specify complex requirements over states using comparisons and Boolean operators. Like calculations, conditions have the form of a term, a comparison of two values, or a combined condition in which two operands are combined by a Boolean operation: _Cn_\(\equiv\)_Te_\(\cup\)_BOp_\(\times\)_Cn_\(\times\)_Cn_\(\cup\) FORALL \(\times\)_V_\(\times\)_Cn_\(\times\)_Cn_\(\cup\)_Ineq_\(\times\)_Ca_\(\times\)_Ca_\(\cup\) NOT \(\times\)_Cn_. Supported Boolean operations are _BOp_\(\equiv\) {AND, OR}. Supported comparison operators are the inequalities _Ineq_\(\equiv\) {=, \(\neq\), \(<\), \(>\), \(\leq\), \(\geq\)}.
**Definition 4: Conditions**_\(\lrcorner\)_
_Examples_: In the T-SAL knowledge representation of Cart-Pole, legal conditions include:
* (< (POLE-ANGLE ?c) 30) \(\circ\) True iff the pole on the cart bound to ?c has angle less than 30
* (AND (< (POLE-ANGLE CART1) 30) (> (POLE-ANGLE CART1) -30)) \(\circ\) True iff the pole on the object \(\mathtt{CART1}\) has angle between -30 and 30
* (<= (POLE-ANGLE CART1) (POLE-ANGLE CART2)) \(\circ\) True iff the pole on the object \(\mathtt{CART1}\) has a smaller angle than that on \(\mathtt{CART2}\).
* (FORALL ?c (\(\neq\) (CART-VELOCITY ?c) 0) (< (POLE-ANGLE ?c) 30)) \(\circ\) True iff every pole with non-zero velocity also has pole angle less than 30
\(\ulcorner\) Calculations (variable _calc_, space _Ca_, space of sets **Ca**) are functional expressions composed of terms and operators. Calculation operators include basic arithmetic operations, distribution draws, aggregations over sets, and the trinary IF operator. Where terms have the semantics of a lookup in a database incorporating state information, calculations describe executed procedures that return values. A calculation must be a term, a combined calculation in which two operands are combined by an operation, an aggregation over a set of variables, or a trinary IF that returns the result of one of two sub-calculations: _Ca_\(\equiv\)_Te_\(\cup\)_Op_\(\times\)_Ca_\(\times\)_Ca_\(\cup\)_AggOp_\(\times\)_V_\(\times\)_Cn_\(\times\)_Ca_\(\cup\) IF \(\times\)_Cn_\(\times\)_Ca_\(\times\)_Ca_. Supported operations are _Op_\(\equiv\) {+, -, *, /, :UNIFORM, :GAUSSIAN}. These operations have the usual definitions, and all require that their operands are numeric (i.e., derived from the type Real or Integer). The +, -, *, and / operations yield a value of type Real if either operand is Real, or Integer otherwise. The :UNIFORM operation draws a value from the uniform distribution, with a resulting value in the range defined by its operands (the first operand providing the lower bound, and the second the upper bound, inclusive); the operands and the resulting value must have the Integer type. The :GAUSSIAN operation draws from the Gaussian distribution; its operands are interpreted as a mean and standard deviation, and the type of the calculation will be Real. Aggregation operations are _AggOp_\(\equiv\) {SUM, PRODUCT}. Aggregations have a single variable, constrained to meet a condition; the sub-calculation is calculated for every permissible value of the variable and results are aggregated. Conditional calculations give a condition that determines which one of a pair of calculations to evaluate; if the condition evaluates to TRUE, the first calculation is evaluated, and if FALSE the second.
**Definition 5: Calculations**_\(\lrcorner\)_
_Example_: In the T-SAL knowledge representation of Cart-Pole, legal calculations include:
* (+ (POLE-ANGLE ?c) .01)
* (/ (SUM ?c (\(\neq\) (CART-VELOCITY ?c) 0) (POLE-ANGLE ?c)) (SUM ?c (\(\neq\) (CART-VELOCITY ?c) 0) 1)) \(\circ\) Average angle of poles with nonzero velocity
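A small interpreter makes the intended semantics of Definitions 4 and 5 concrete. The Python sketch below is an illustration only; the tuple encoding, the state layout, and the restriction to SUM (PRODUCT would be analogous) are our own assumptions and not part of T-SAL-CR.

```python
# Minimal evaluator for tuple-form conditions and calculations, assuming a
# state given as {(function-name, args): value} plus a flat object list for
# quantifiers and aggregations.

def ev_calc(c, state, env):
    if isinstance(c, (int, float, bool)):            # literal value
        return c
    if isinstance(c, str):                           # variable or object symbol
        return env.get(c, c)
    op = c[0]
    if op in ("+", "-", "*", "/"):
        a, b = ev_calc(c[1], state, env), ev_calc(c[2], state, env)
        return {"+": a + b, "-": a - b, "*": a * b, "/": a / b}[op]
    if op == "SUM":                                  # (SUM var constraint body)
        _, var, cond, body = c
        return sum(ev_calc(body, state, {**env, var: o})
                   for o in state["objects"]
                   if ev_cond(cond, state, {**env, var: o}))
    if op == "IF":                                   # (IF cond then else)
        return ev_calc(c[2] if ev_cond(c[1], state, env) else c[3], state, env)
    # otherwise a fluent lookup: (name arg1 arg2 ...)
    args = tuple(ev_calc(a, state, env) for a in c[1:])
    return state["fluents"][(op, args)]

def ev_cond(c, state, env):
    if isinstance(c, (bool, str)):
        return bool(ev_calc(c, state, env))
    op = c[0]
    if op == "AND": return ev_cond(c[1], state, env) and ev_cond(c[2], state, env)
    if op == "OR":  return ev_cond(c[1], state, env) or ev_cond(c[2], state, env)
    if op == "NOT": return not ev_cond(c[1], state, env)
    if op == "FORALL":                               # (FORALL var constraint requirement)
        _, var, cons, req = c
        return all(ev_cond(req, state, {**env, var: o})
                   for o in state["objects"]
                   if ev_cond(cons, state, {**env, var: o}))
    if op in ("=", "!=", "<", ">", "<=", ">="):
        a, b = ev_calc(c[1], state, env), ev_calc(c[2], state, env)
        return {"=": a == b, "!=": a != b, "<": a < b,
                ">": a > b, "<=": a <= b, ">=": a >= b}[op]
    return bool(ev_calc(c, state, env))              # bare fluent term

# Example: average pole angle of carts with nonzero velocity (cf. the last bullet)
state = {"objects": ["CART1", "CART2"],
         "fluents": {("CART-VELOCITY", ("CART1",)): 1.0,
                     ("CART-VELOCITY", ("CART2",)): 0.0,
                     ("POLE-ANGLE", ("CART1",)): 10.0,
                     ("POLE-ANGLE", ("CART2",)): 25.0}}
avg = ev_calc(("/", ("SUM", "?c", ("!=", ("CART-VELOCITY", "?c"), 0), ("POLE-ANGLE", "?c")),
                    ("SUM", "?c", ("!=", ("CART-VELOCITY", "?c"), 0), 1)), state, {})
# avg == 10.0
```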
" Axioms define the truth of fluents in a state in terms of a condition. A fluent defined by axioms must have value type Boolean and takes on the value true when the associated condition is satisfied, or FALSE otherwise. No function may therefore both have an axiom definition and appear in action or event effects. Axioms use the variable \(ax\) (\(Ax\) for the space, \(Ax\) for the space of sets), and describes a function name (_axiom-name_: \(Ax\to S\)), argument types (_axiom-argument_: \(Ax\times V\to S\)), and an antecedent condition (_axiom-antecedent_: \(Ax\to Cn\)).
**Definition 6**: **Axioms**_\(\lrcorner\)_ _Example_: In Cart-Pole, an axiom \(ax\) could say that an "upright" cart is one whose pole angle is within 0.1 of vertical:
(:- (UPRIGHT ?C - CART)
(AND (< (POLE-ANGLE ?C) 0.1) (> (POLE-ANGLE ?C) -0.1)))
Here, _axiom-name_(\(ax\)) = UPRIGHT; _axiom-argument_(\(ax\), ?C) = CART; and _axiom-antecedent_(\(ax\)) = (AND, (<, POLE-ANGLE, [?C], 0.1), (>, POLE-ANGLE, [?C], -0.1))
\(\ulcorner\) An effect, which uses the variable \(e\) and has space \(E\) (\(E\) for the space of sets), describes an instantaneous alteration to the world caused by an action or event transition. Mostly, these change the values of fluents. However, a second type of effect creates a new object; this is necessary in general to allow us to describe open worlds where objects can proliferate without bound. An effect has a symbol that names the function it affects (_effect-name_: \(E\to Fn\)), assignments to that function's arguments (_effect-argument_: \(E\times V\to Te\)), a modification type (_effect-modification_: \(E\to Alt\)), post-effect value (_effect-value_: \(E\to Te\)), and probability (_effect-probability_: \(E\to[0..1]\)). The set of possible alterations is given by: \(Alt\equiv\{\)SET, INCREASE, DECREASE, CREATE}. In the special case of a CREATE alteration, the effect's name is the type symbol of the object created rather than that of a function, and the effect's value describes a name prefix to use in referring to the new object.
**Definition 7**: **Effects**_\(\lrcorner\)_ _Example_: In Cart-Pole, an example legal effect \(e\) is (DECREASE (POLE-ANGLE ?C) 0.1) [0.5], which states that, with probability 0.5, the value of the fluent POLE-ANGLE for the object bound to ?C decreases by 0.1 during a transition. Here, _effect-name_(\(e\)) = POLE-ANGLE; _effect-argument_(\(e\), ?C) = ?C; _effect-modification_(\(e\)) = DECREASE; _effect-value_(\(e\)) = 0.1; and _effect-probability_(\(e\)) = 0.5.
A CREATE effect of the form (CREATE BLOCK ?B "new-block") causes a new block to exist in the Cart-Pole environment. Here, _effect-name_(\(e\)) = BLOCK; _effect-argument_(\(e\), BLOCK) = ?B; _effect-modification_(\(e\)) = CREATE; and _effect-value_(\(e\)) = "new-block". With no probability explicitly described, we give _effect-probability_(\(e\)) = 1.
\(\ulcorner\) A continuous change, which uses the variable \(c\) and has space \(C\) (\(C\) for the space of sets), describes continuous-time changes to fluents. A change describes the fluent it changes with a name symbol (_change-name_: \(C\to S\)), argument assignments to that fluent (_change-argument_: \(C\times V\to Te\)), direction of change (_change-direction_: \(C\to Dir\)) and a derivative calculation (_change-derivative_: \(C\to Ca\)). Here, the set of possible directions is given by: \(Dir\equiv\) {INCREASE, DECREASE}. A special DT symbol is reserved to represent the time differential in continuous change derivatives only. Each continuous change must use this symbol once.
**Definition 8**: **Continuous Changes**
_Example_: In Cart-Pole, an example legal change \(c\) is (increase (pole-angle?C) (* DT (ANGULAR-motion?C))), which states that the value of the fluent pole-angle for the cart bound to?C increases linearly with time according to the angular motion function (not described further here). Here, _change-name_(\(c\)) = pole-angle; _change-argument_(\(c\),?C) =?C; _change-direction_(\(c\)) = INCREASE; and _change-derivative_(\(c\)) = (* DT (ANGULAR-motion?C)).
\(\ulcorner\) Action models, event models, and process models describe when and how the world changes. Actions represent change caused by a performer; a performer can choose whether to take an action and with what parameters. Some actions may act differently for different performers or not be available to certain performers, so a performer variable gives a reference to the performer for preconditions and effects. Formally, an action model \(\alpha\in A\) (space of sets \(A\)) has a name (_action-name_: \(A\to S\)), performer (_action-performer_: \(A\to Te\)), parameters (_action-parameters_: \(A\to V\)), parameter types (_action-parameter-type_: \(A\times V\to Ty\)), preconditions (_action-preconditions_: \(A\to Cn\)), and effects (_action-effects_: \(A\to E\)). Tuple notation for actions is (_name_, _performer_, _args_, _argTypes_, _precs_, _effs_); a tuple in this form can be interpreted as an action that can respond to each above function.
**Definition 9**: **Action Models** _Example_: An action model \(\alpha\) in cart pole pushes the cart forward, instantaneously increasing its velocity in the x direction:
(:ACTION PUSH
 :PERFORMANCE ?AG
 :PARAMETERS (?C - CART)
 :PRECONDITIONS ((CONTROLS ?AG ?C) (= (AGENT-FORCE ?AG) ?FORCE))
 :EFFECTS ((INCREASE (CART-VELOCITY ?C) ?FORCE)))
Here, _action-name_(\(\alpha\)) = PUSH; _action-performer_(\(\alpha\)) = ?AG; _action-parameters_(\(\alpha\)) = [?C]; _action-parameter-type_(\(\alpha\), ?C) = CART; _action-preconditions_(\(\alpha\)) = [(= (AGENT-FORCE ?AG) ?FORCE), (CONTROLS ?AG ?C)]; and _action-effects_(\(\alpha\)) = [(INCREASE (CART-VELOCITY ?C) ?FORCE)].
\(\ulcorner\) Event models represent instantaneous change not caused by a modelled agent; they have no parameters, because no entity (performer or otherwise) chooses an event to happen. However, qualities refer to a set of variables that may be instantiated within the triggers of the event and aid in understanding. All events are triggered in one of three ways: 1) by default, they must occur immediately when their triggers are met; 2) if an event model has a probability between 0 and 1, modelled events occur probabilistically when their triggers are met; 3) if an event model has a non-zero frequency, modelled events occur at random times, with arrivals described by a Poisson process with the modelled frequency. No event model can have both a probability less than 1 and a non-zero frequency. An event \(ev\in Ev\) (space of sets \(Ev\)) has a name (_event-name_: \(Ev\to S\)), qualities (_event-qualities_: \(Ev\to V\)), quality types (_event-quality-type_: \(Ev\times V\to Ty\)), probability (_event-probability_: \(Ev\to[0..1]\)), frequency (_event-frequency_: \(Ev\to\mathbb{R}^{\geq 0}\)), triggers (_event-triggers_: \(Ev\to Cn\)), and effects (_event-effects_: \(Ev\to E\)). Tuple notation for events is (_name_, _quals_, _qualTypes_, _prob_, _freq_, _triggers_, _effs_); a tuple in this form can be interpreted as an event that can respond to each above function.
**Definition 10**: **Event Models**
_Example_: An event _ev_ in cart pole causes a win for a player whose cart reaches an angle and velocity within a certain range:
(:EVENT FINISHES
 :QUALITIES (?c - CART)
 :TRIGGERS ((UPRIGHT ?c) (< (CART-VELOCITY ?c) 0.1) (> (CART-VELOCITY ?c) -0.1) (CONTROLS ?ag ?c))
 :EFFECTS (WINS ?ag))
Here, _event-name_(_ev_) = FINISHES; _event-probability_(_ev_) = 1; _event-frequency_(_ev_) = 0; _event-qualities_(_ev_) = [?c]; _event-quality-type_(_ev_, ?c) = CART; _event-triggers_(_ev_) = [(= (UPRIGHT ?c) TRUE), (< (CART-VELOCITY ?c) 0.1), (> (CART-VELOCITY ?c) -0.1), (= (CONTROLS ?ag ?c) TRUE)]; and _event-effects_(_ev_) = [(SET (WINS ?ag) TRUE)].
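The three triggering modes of Definition 10 can be illustrated with a short sampling routine. The sketch below is a simplification of our own (it considers one simulation step of length dt at a time and is not taken from the formal semantics): it only makes the probability/frequency distinction concrete.

```python
import math
import random

def event_fires(triggers_met, probability=1.0, frequency=0.0, dt=0.02, rng=random):
    """Decide whether one instance of an event model fires during a step of
    length dt (an illustrative sketch).

    frequency > 0 : spontaneous events forming a Poisson process with rate
                    `frequency`; the chance of at least one arrival in dt is
                    1 - exp(-frequency * dt).
    otherwise     : the event is condition-triggered and, when its triggers
                    are met, fires with the given probability (1.0 means it
                    must fire immediately).
    """
    if frequency > 0.0:
        return rng.random() < 1.0 - math.exp(-frequency * dt)
    if not triggers_met:
        return False
    return rng.random() < probability
```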
\(\ulcorner\) Process models represent continuous change over time. Like events, they have qualities and are not controlled by any agent. Unlike events (or actions), they represent change over time, and describe continuous changes that occur over the duration of the process rather than effects. A process will continue for as long as its conditions are met. A process \(p\in P\) (space of sets _P_) has a name (_process-name_: _P_\(\rightarrow\)_S_), qualities (_process-qualities_: _P_\(\rightarrow\)_V_), quality types (_process-quality-type_: _P_\(\times\)_V_\(\rightarrow\)_Ty_), conditions (_process-conditions_: _P_\(\rightarrow\)_Cn_), and continuous changes (_process-changes_: _P_\(\rightarrow\)_C_). Tuple notation for processes is (_name_, _quals_, _qualTypes_, _conds_, _changes_); a tuple in this form can be interpreted as a process model that can respond to each above function.
**Definition 11: Process Models** _Example_: A process \(p\) in cart pole causes the pole to change position:
(:PROCESS POLE-ANGLE-CHANGES
 :QUALITIES (?c - CART)
 :CONDITIONS (FORALL ?ag TRUE (NOT (WINS ?ag)))
 :EFFECTS (AND (INCREASE (POLE-ANGLE ?c) (* DT (ANGULAR-MOTION ?c)))))
Here, _process-name_(_p_) = POLE-ANGLE-CHANGES; _process-qualities_(_p_) = [?c]; _process-quality-type_(_p_, ?c) = CART; _process-conditions_(_p_) = [(FORALL ?ag TRUE (= (WINS ?ag) FALSE))]; and _process-changes_(_p_) = [(INCREASE (POLE-ANGLE ?c) (* DT (ANGULAR-MOTION ?c)))].
\(\ulcorner\) A T-SAL domain, which we use the variable \(d\) for (\(D\) is the space of possible domains), includes types (_domain-types_: _D_\(\rightarrow\)_S_), supertypes (_domain-supertypes_: _D_\(\rightarrow\)_S_\(\times\)_S_), named constants with given types (_domain-constants_: _D_\(\rightarrow\)_S_\(\times\)_S_), functions (_domain-functions_: _D_\(\rightarrow\)_F_), axioms (_domain-axioms_: _D_\(\rightarrow\)_Ax_), actions (_domain-actions_: _D_\(\rightarrow\)_A_), events (_domain-events_: _D_\(\rightarrow\)_Ev_), and processes (_domain-processes_: _D_\(\rightarrow\)_P_). Tuple notation for domains is (_types_, _supertypes_, _constantTypes_, _functions_, _axioms_, _actions_, _events_, _processes_).
**Definition 12: T-SAL Domains**_\(\lrcorner\)_ Legality of a T-SAL domain, based mainly on type agreement, is described in Appendix A.
### T-SAL States, Goals, and Scenario Generators
In addition to dynamics, T-SAL describes environments in terms of distributions of starting states and tasks. In this section, we define T-SAL scenario generators, which define both of these. To describe a
scenario generator, we need the concepts of state, and various simpler generator functions: object generators, value generators, and fluent generators.
\(\ulcorner\) A T-SAL state \(st\) describes what is true in the environment at a particular instant in time. We describe a state with a tuple (_objects_, _defaults_, _assignments_). Each T-SAL state is associated with a T-SAL domain \(d\) = (_types_, _supertypes_, _constantTypes_, _functions_, _axioms_, _actions_, _events_, _processes_), and defines the current set of objects belonging to every type in _types_ and a value for every possible fluent grounding using _functions_ with those objects. State objects (_state-objects_: \(St\times S\to S\)) include all objects of every type in the domain. The defaults (_state-defaults_: \(St\times F\to Te\)) give a default value for every function, which is assigned by this state to any fluent grounding that is not given a specific value in _assignments_. The state's assignments (_state-assignments_: \(St\to GF\)) give specific values for a list of ground fluents. Ground fluents use the variable \(gf\), space \(GF\) and space of sequences **GF**. Each ground fluent names the function the fluent is from (_ground-fluent-function-name_: \(GF\to Fn\)), argument values (_ground-fluent-argument_: \(GF\times V\to Va\)) and a fluent value (_ground-fluent-value_: \(GF\to Va\)). Tuple form for ground fluents is (_function-name_, _argument-values_, _fluent-value_).
**Definition 13**: T-SAL State _Example_: In a simple Cart-Pole state \(st\) there is one cart, CART1, and three blocks, BLOCK1, BLOCK2, and BLOCK3. A single agent, AGENT1, exists. Other types (SPEED, POSITION, ANGLE) are derived from REAL and are non-enumerated. Fluents in the domain include \(f1\) = (CART-VELOCITY ?C - CART) - SPEED, \(f2\) = (CART-POSITION ?C - CART) - POSITION, \(f3\) = (POLE-ANGLE ?C - CART) - ANGLE, \(f4\) = (BLOCK-POSITION ?B - BLOCK) - POSITION, \(f5\) = (BLOCK-VELOCITY ?B - BLOCK) - SPEED, \(f6\) = (AGENT-FORCE ?AG - AGENT) - REAL, \(f7\) = (CONTROLS ?AG - AGENT ?C - CART) - BOOLEAN, and \(f8\) = (WINS ?AG - AGENT) - BOOLEAN. Relevant values are: _state-objects_(\(st\), CART) = {CART1}; _state-objects_(\(st\), BLOCK) = {BLOCK1, BLOCK2, BLOCK3}; _state-defaults_(\(st\), \(f1\)) = 0; _state-defaults_(\(st\), \(f3\)) = 0.1; _state-defaults_(\(st\), \(f5\)) = 0; _state-defaults_(\(st\), \(f6\)) = 1; _state-defaults_(\(st\), \(f7\)) = FALSE; _state-defaults_(\(st\), \(f8\)) = FALSE; _state-assignments_(\(st\)) = [CART-POSITION (CART1) = 0, BLOCK-POSITION (BLOCK1) = 5, BLOCK-POSITION (BLOCK2) = 5, BLOCK-POSITION (BLOCK3) = 10, CONTROLS (AGENT1, CART1) = TRUE].
Note that every legal assignment to every fluent can be valued using either a default or an explicit assignment in \(st\). In this case, functions \(f2\) and \(f4\), which have no defaults, must be fully enumerated, as shown.
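The default-plus-assignments lookup of Definition 13 is straightforward to implement. The sketch below uses an illustrative Python encoding of our own (not prescribed by T-SAL) and reproduces the example state above.

```python
class TSALState:
    """Minimal sketch of Definition 13: fluent values fall back to per-function
    defaults when no explicit assignment exists."""
    def __init__(self, objects, defaults, assignments):
        self.objects = objects            # {"CART": ["CART1"], "BLOCK": [...]}
        self.defaults = defaults          # {"CART-VELOCITY": 0, ...}
        self.assignments = assignments    # {("CART-POSITION", ("CART1",)): 0, ...}

    def value(self, function_name, *args):
        return self.assignments.get((function_name, args),
                                    self.defaults.get(function_name))

st = TSALState({"CART": ["CART1"], "BLOCK": ["BLOCK1", "BLOCK2", "BLOCK3"]},
               {"CART-VELOCITY": 0, "POLE-ANGLE": 0.1, "BLOCK-VELOCITY": 0,
                "AGENT-FORCE": 1, "CONTROLS": False, "WINS": False},
               {("CART-POSITION", ("CART1",)): 0,
                ("BLOCK-POSITION", ("BLOCK1",)): 5,
                ("BLOCK-POSITION", ("BLOCK2",)): 5,
                ("BLOCK-POSITION", ("BLOCK3",)): 10,
                ("CONTROLS", ("AGENT1", "CART1")): True})
assert st.value("CART-VELOCITY", "CART1") == 0     # from the default
assert st.value("BLOCK-POSITION", "BLOCK2") == 5   # explicit assignment
```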
\(\ulcorner\) An object generator describes a distribution of objects in the environment. Whereas a T-SAL domain refers to certain "constant" objects present in every scenario, object generators describe the objects that are present in an environment as part of a particular scenario. An object generator \(og\) has a name (_object-generator-name_: \(OG\to S\)), type (_object-generator-type_: \(OG\to S\)), and a draw function that yields a set of distinct objects when called (_object-generator-draw_: \(OG\to(\to S)\)). As randomness is frequently a desirable property of such generators, the draw function may return sets with different contents and sizes on subsequent invocations. A set of basic draw functions is described in Section 3.4.3.
**Definition 14**: Object Generator
_Example_: A simple object generator \(og\) makes 5 block objects. We write this like so:
\[\textsc{ObjectGenerator(blockGroup,Block,ObjectList(5, "block"))}\]
The standard draw function ObjectList is defined to return \(N\) (the first argument) object names, each starting with the prefix given by the second argument. The object generator \(og\) has the following characteristics: _object-generator-name_(\(og\)) = blockGroup; _object-generator-type_(\(og\)) = Block; _object-generator-draw_(\(og\)) = ObjectList(5, "block").
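A possible realization of the ObjectList draw function is sketched below in Python; the exact naming scheme for the generated objects (numbered suffixes) is an assumption, since the text only specifies the count and prefix.

```python
def ObjectList(n, prefix):
    """Sketch of the standard ObjectList draw function: returns a zero-argument
    draw callable producing n distinct object names with the given prefix."""
    def draw():
        return [f"{prefix}{i + 1}" for i in range(n)]
    return draw

block_group_draw = ObjectList(5, "block")
block_group_draw()   # ['block1', 'block2', 'block3', 'block4', 'block5']
```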
\(\ulcorner\) A value generator returns values from a random distribution; these values may be generated from sets defined by object generators, from domain constants, or from the reals, integers, or Booleans. A value generator \(vg\) has a name (_value-generator-name_: \(VG\to S\)) and provides a draw function that yields a sequence of values when called (_value-generator-draw_: \(VG\to(\to Va)\)). The draw function may produce random results or static results; a set of basic value generator draw functions is given in Section 3.4.4.
**Definition 15**: **Value Generator \(\lrcorner\)**
_Example_: A simple value generator \(vg\) for positions gives a single random real value between -20 and 20. We write this like so:
\[\textsc{ValueGenerator(randomPosition,UniformRealDistribution(-20, 20))}\]
This uses a standard draw function that pulls from a uniform distribution. The value generator \(vg\) has the following characteristics: _value-generator-name_(\(vg\)) = RandomPosition; _value-generator-draw_(\(vg\)) = UniformRealDistribution(-20, 20).
\(\ulcorner\) A fluent generator describes a distribution of fluent values for a particular domain function. A fluent generator \(fg\) has a function name (_fluent-generator-function-name_: \(FG\to Fn\)) that it generates fluents for, and provides a draw function (_fluent-generator-draw_: \(FG\to(\to GF)\)) that yields a list of ground fluents describing individual fluent groundings; these do not necessarily map all possible assignments to the named function. Basic fluent generators are described in Section 3.4.5.
**Definition 16**: **Fluent Generator \(\lrcorner\)**
_Example_: A simple fluent generator \(fg\) for assigning block positions gives a random real value between -20 and 20 to each block. We write this like so:
\[\textsc{FluentGenerator(block-position, AllPermutations([blockGroup], RandomPosition))}\]
The standard draw function, AllPermutations, described here creates a ground fluent for every permutation of unique objects generated by the object generators given as its first argument, and assigns each a value generated by the value generator given as its second argument. The fluent generator has the following characteristics: _fluent-generator-function-name_(\(fg\)) = block-position; _fluent-generator-draw_(\(fg\)) = AllPermutations([blockGroup],RandomPosition).
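The sketch below shows one way the AllPermutations and UniformRealDistribution draw functions could be realized in Python. It is an illustration only; in particular, passing the function name to the draw call (rather than storing it in the fluent generator) is a simplification of our own.

```python
import itertools
import random

def UniformRealDistribution(lo, hi):
    """Value-generator draw: one random real in [lo, hi] per call (a sketch)."""
    return lambda: [random.uniform(lo, hi)]

def AllPermutations(object_draws, value_draw):
    """Fluent-generator draw: one ground fluent per combination of objects taken
    from the given object generators, each assigned a fresh value from value_draw."""
    def draw(function_name):
        groups = [d() for d in object_draws]
        return [(function_name, combo, value_draw()[0])
                for combo in itertools.product(*groups)]
    return draw

# e.g. assigning each of 5 blocks a random position, as in the example above
blocks = lambda: [f"block{i + 1}" for i in range(5)]
block_positions = AllPermutations([blocks], UniformRealDistribution(-20, 20))
block_positions("block-position")
# -> [('block-position', ('block1',), 3.7), ('block-position', ('block2',), -11.2), ...]
```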
\(\ulcorner\) A T-SAL scenario generator \(sg\) is a tuple (_fluents, defaults, object-generators, value-generators, fluent-generators, performance_) that describes a distribution over environment starting states, as well as a performance calculation that describes the success of each agent. The first item, _fluents_ (_scenario-generator-fluents_: \(SG\to GF\)), gives a set of ground fluents present in every scenario generated; these are not randomized. The second item, _defaults_ (_scenario-generator-defaults_: \(SG\times Fn\to Va\)), maps each function name to a default value that can be assumed in the absence of contradictory assignments (from the fluents list and fluent generators). Each function in the corresponding domain should be covered by this mapping. The third item, _object-generators_ (_scenario-generator-object-generators_: \(SG\to OG\)), sets up objects present in a starting state. The fourth item, _value-generators_ (_scenario-generator-value-generators_: \(SG\to VG\)), gives a set of value generators defined for use by the fluent generators. The fifth scenario generator item, _fluent-generators_ (_scenario-generator-fluent-generators_: \(SG\to FG\)), gives a set of fluent generators that describe distributions over fluents in the starting state. Finally, the last item, _performance_, gives a performance calculation as a calculation (_scenario-generator-performance-calculation_: \(SG\to Ca\)). This calculation must have a single free variable, ?AG, which identifies the agent for which performance is to be computed. For simplicity, we assume that this calculation is always intended to be maximized and can be evaluated in any state.
**Definition 17**: T-SAL Scenario Generator
_Example_: A scenario generator \(sg\) defines cart-pole performance as the negative sum of the squared angles of all poles controlled by that agent.
_scenario-generator-performance-calculation(sg) =_
(SUM ?c (CONTROLS ?AG ?c) (- (* (POLE-ANGLE ?c) (POLE-ANGLE ?c))))
To generate a scenario, a new state object is created with defaults from the scenario generator, objects created by drawing from each object generator, and assignments created by drawing from each fluent generator. Value generators are used indirectly during fluent generator draws. Finally, fluents in the scenario generator fluents list are added to the state, overwriting any generated and/or default fluents with the same fluent name and arguments.
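The generation procedure just described can be written down directly. The following Python sketch assumes a dictionary encoding of the scenario generator's components (our own, illustrative encoding, not an official reference implementation) and follows the same order: defaults, object draws, fluent draws, then fixed fluents.

```python
def generate_scenario(sg):
    """Sketch of scenario generation from a scenario generator `sg` given as a
    dict with keys 'object_generators', 'fluent_generators', 'fluents', 'defaults'."""
    objects = {}
    for og in sg["object_generators"]:                 # draw objects per type
        objects.setdefault(og["type"], []).extend(og["draw"]())

    assignments = {}
    for fg in sg["fluent_generators"]:                 # draw randomized fluents
        for (fname, args, value) in fg["draw"]():
            assignments[(fname, args)] = value

    for (fname, args, value) in sg["fluents"]:         # fixed fluents overwrite last
        assignments[(fname, args)] = value

    return {"objects": objects,
            "defaults": dict(sg["defaults"]),
            "assignments": assignments}
```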
An environment consists of a domain \(d\) and a scenario generator \(sg\). To be legal, \(sg\) must have a close relationship to the domain \(d\):
* _fluents_ must have only ground fluents with appropriate arity and values of appropriate types
* _defaults_ must give an appropriately typed value for every function in \(d\)
* _object-generators_ must use object types in \(d\)
* _fluent-generators_ must always generate legal ground fluents for the described function, which must also be in domain \(d\)
* _performance_ must be a real-valued calculation with a single free variable, ?AG, referring to the target agent; it must only reference functions in the domain \(d\), and all symbols must reference constants from the domain \(d\).
Since the returns from generator draw functions are defined to vary over time, constraints on the draw function outputs are expressed below using the "G" temporal operator, which means "always".
### T-SAL Transformations and Transformation Sequences
\(\ulcorner\) We use the variable \(t\) to describe an environment transformation, given as a tuple with a head and named arguments. The space of possible environment transformations is denoted \(T\). We will also refer to a transformation sequence \(\boldsymbol{t}=[t_{0},\,\ldots,\,t_{n}]\), with the transformation sequence space denoted as \(\boldsymbol{T}\). A transformation has a type (_transformation-type: \(T\to S\)_) and arguments (_transformation-argument: \(T\times S\to U\)_). Here, \(U\) denotes the universal set; the meaning of particular transformation arguments is dependent on the type. Individual transformations are defined in Section 3.5.2 (T-Transformations) and Appendix B (R-Transformations).
**Definition 18**: Transformations and Transformation Sequences
_Example_: A transformation \(t\) to a cart-pole environment that subtracts push force from a budget is given as:
AddActionEffect(ActionName: PUSH, Effect: (DECREASE (PUSH-BUDGET ?AG) ?FORCE))
The transformation \(t\) has _transformation-type_(\(t\)) = AddActionEffect. The first argument is referred to by _transformation-argument_(\(t\), \(\textsc{actionNAME}\)) = push.
\(\ulcorner\) Applying a transformation to an environment yields a new environment: _apply: \(T\times D\times SG\to D\times SG\)_. We also define a function that applies a sequence of transformations (_applySequence: \(\boldsymbol{T}\times D\times SG\to D\times SG\)_) as follows.
\[\textit{applySequence}(\emptyset,\,d,\,sg)\equiv(d,\,sg)\] \[\textit{applySequence}(\boldsymbol{t}=[t_{0},\,\ldots,\,t_{n}],\,d,\,sg)\equiv\textit{applySequence}([t_{1},\,\ldots,\,t_{n}],\,\textit{apply}(t_{0},\,d,\,sg))\]
**Definition 19**: Transformation and Transformation Sequence Application
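Definition 19 is a simple left fold over the sequence; an iterative Python sketch equivalent to the recursion above (the `apply` argument stands for the single-transformation function, which this paper defines per transformation type) is:

```python
def apply_sequence(transformations, domain, scenario_generator, apply):
    """Iterative equivalent of Definition 19: fold `apply` over the sequence.
    `apply(t, d, sg)` must return the transformed (d, sg) pair."""
    for t in transformations:
        domain, scenario_generator = apply(t, domain, scenario_generator)
    return domain, scenario_generator
```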
### T-SAL R-Transformations
R-transformations are modifications to the environment dynamics. We formally define 30 transformations on T-SAL environments in Appendix A, with a summary of these given here in Table 3. State space transformations are those that change the set of states the environment may contain. Performer control transformations change a performer's ability to interact with the environment. Instantaneous environment change transformations change all non-voluntary instantaneous transitions. Durative natural change transformations change processes, which operate continuously over time.
### T-SAL T-Transformations
T-Transformations are modifications to the scenario generator. We define a set of T-Transformations which cover the space of possible generator modifications below; first, we give examples for a paradigmatic domain, then we define the particular transformations possible, as well as the sub-generators that can be used.
To easily transform scenario generators, it is useful to let a new generator supersede an existing value or object generator. Therefore, when we add a new generator, it should be assumed to replace any existing generator with the same name, and all references in other generators to the old generator become references to the new generator. As such, we need only six types of T-Transformation: AddFluentValue, AddDefaultValue, AddObjectGenerator, AddValueGenerator, AddFluentGenerator, and ReplacePerformanceCalculation. As they are particularly novel, we give additional examples below of T-Transformations for value, object, and fluent generators useful in various domains; we follow this with a formal definition of these six transformations and a description of the value, object, and fluent generators used.
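A small Python sketch of the superseding rule, assuming a scenario generator represented as a dict keyed by generator name (an illustrative representation only):

```python
def add_value_generator(sg, name, draw_fn):
    """Adding a generator replaces any existing generator with the same name;
    because other generators refer to it by name, their references
    automatically point at the new generator."""
    new_sg = dict(sg)
    new_sg["value_generators"] = {
        g_name: g for g_name, g in sg["value_generators"].items() if g_name != name
    }
    new_sg["value_generators"][name] = draw_fn
    return new_sg
```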
#### 3.5.1 T-Transformation examples
In Mudworld [16], a set of rovers attempt to travel to a set of destinations on a 6x6 grid. Any location may be muddy, with a 30% chance. Mud slows the rover down to half speed, with some variability across scenarios. All destinations are within four cell moves. The following T-Transformations construct a scenario generator for this environment.
AddValueGenerator(XLoc, UniformIntegerDistribution(1, 6))
AddValueGenerator(Yloc, UniformIntegerDistribution(1, 6))
AddValueGenerator(Xset, IntegerSequence(1, 6))
AddValueGenerator(Yset, IntegerSequence(1, 6))
AddObjectGenerator(robotGroup, Robot, ObjectList(6, "rover"))
AddValueGenerator(RobotG, DrawFromObjectSet(robotGroup))
AddValueGenerator(RandRobot, DrawAllFromObjectSet(robotGroup))
AddValueGenerator(TRUE, ConstantFunction(True))
**Summary of all R-transformations**

| **State Space Transformations** | **Performer Control Transformations** | **Instantaneous Environment Change Transformations** | **Durative Natural Change Transformations** |
|---|---|---|---|
| AddType | AddAction | AddEvent | AddProcess |
| AddTypeParent | AddPrecondition | ChangeFrequency | AddProcessCondition |
| AddConstant | AddActionEffect | ChangeProbability | AddProcessChange |
| AddFunction | RemoveAction | AddTrigger | RemoveProcess |
| AddAxiom | RemovePrecondition | AddEventEffect | RemoveProcessConditions |
| RemoveType | RemoveActionEffect | RemoveEvent | RemoveProcessChange |
| RemoveTypeParent | | RemoveTrigger | |
| RemoveConstant | | RemoveEventEffect | |
| RemoveFunction | | | |
| RemoveAxiom | | | |

Table 3: The _apply_ function is defined with respect to all of these R-Transformations in Appendix A.
AddValueGenerator(IsMuddy, BernoulliDistribution(0.3))
AddFluentGenerator("muddy", AllPermutations([Xset, Yset], IsMuddy))
AddValueGenerator(SpeedInMud, GaussianDistribution(0.5, 0.2))
AddFluentGenerator("speed-in-mud", AllPermutations([], SpeedInMud))
AddValueGenerator(ClosePos, Filter(
DrawTuple([Xloc, Yloc, Xloc, Yloc], ["x1", "y1", "x2", "y2"]),
\(\lambda(x_{1},y_{1},x_{2},y_{2})\rightarrow\{|x_{1}-x_{2}|+|y_{1}-y_{2}|<4\}\)))
AddValueGenerator(CloseSet, NewSet(NDraws(ClosePos, 6)))
AddFluentGenerator("robot-x-loc", NFluentDraws([Robot], CloseSet.x1, 6))
AddFluentGenerator("robot-y-loc", NFluentDraws([Robot], CloseSet.y1, 6))
AddFluentGenerator("robot-dest", NFluentDraws([Robot, CloseSet.x2, CloseSet.y2], TRUE, 6))
Here, we consider a scenario generator that creates a multi-dimensional random value for a camera's starting location:
AddValueGenerator(Position, NDimensionalGaussianDistribution([0.5, 0.7, 0.2], [0.1, 0.2, 0.05]))
AddFluentGenerator("camera-starting-position",
NFluentDraws([Position.x1, Position.x2, Position.x3], True, 1))
#### 3.5.2 T-Transformation Definitions
We define the apply function for the six T-Transformations:
_apply_(AddFluentValue(functionName: name, fluentArgs: args, value: value), d,
 ⟨fluents, defaults, object-generators, value-generators, fluent-generators, performance⟩) ≡
  ⟨{⟨fname, fargs, fvalue⟩ ∈ fluents | fname ≠ name ∨ fargs ≠ args} ∪ {⟨name, args, value⟩},
   defaults, object-generators, value-generators, fluent-generators, performance⟩

_apply_(AddDefaultValue(functionName: name, value: value), d,
 ⟨fluents, defaults, object-generators, value-generators, fluent-generators, performance⟩) ≡
  ⟨fluents, {⟨fname, fvalue⟩ ∈ defaults | fname ≠ name} ∪ {⟨name, value⟩},
   object-generators, value-generators, fluent-generators, performance⟩

_apply_(AddObjectGenerator(name: name, type: type, drawFunction: drawFn), d,
 ⟨fluents, defaults, object-generators, value-generators, fluent-generators, performance⟩) ≡
  ⟨fluents, defaults,
   {og ∈ object-generators | object-generator-name(og) ≠ name} ∪ {⟨name, type, drawFn⟩},
   value-generators, fluent-generators, performance⟩

_apply_(AddValueGenerator(name: name, drawFunction: drawFn), d,
 ⟨fluents, defaults, object-generators, value-generators, fluent-generators, performance⟩) ≡
  ⟨fluents, defaults, object-generators,
   {vg ∈ value-generators | value-generator-name(vg) ≠ name} ∪ {⟨name, drawFn⟩},
   fluent-generators, performance⟩

_apply_(AddFluentGenerator(name: name, drawFunction: drawFn), d,
 ⟨fluents, defaults, object-generators, value-generators, fluent-generators, performance⟩) ≡
  ⟨fluents, defaults, object-generators, value-generators,
   {fg ∈ fluent-generators | fluent-generator-name(fg) ≠ name} ∪ {⟨name, drawFn⟩}, performance⟩

_apply_(ReplacePerformanceCalculation(performance: calculation), d,
 ⟨fluents, defaults, object-generators, value-generators, fluent-generators, performance⟩) ≡
  ⟨fluents, defaults, object-generators, value-generators, fluent-generators, calculation⟩
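The definitions above all follow one pattern: remove any entry that collides by name (and, for explicit fluents, also by arguments), then add the new entry. A minimal Python sketch of the AddFluentValue case, assuming the scenario generator stores explicit fluents as a set of (name, args, value) triples (our own representation, not part of T-SAL):

```python
def apply_add_fluent_value(sg, name, args, value):
    """Drop any explicit fluent with the same function name and arguments,
    then add the new (name, args, value) triple."""
    kept = {(fn, fa, fv) for (fn, fa, fv) in sg["fluents"]
            if not (fn == name and fa == args)}
    kept.add((name, args, value))
    return {**sg, "fluents": kept}
```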
#### 3.5.3 Object Generator Draw Functions
The following simple example of an object generator draw function grounds our examples. Others, particularly with variable output, are possible.
ObjectList(_count_, _id_) ≡ λ() → [_gensym_(_id_, _i_)] for _i_ = 1.._count_
Generates _count_ objects of the requested _type_, prepended with the string in _id_, as created by the Lisp "gensym" function.
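A toy Python equivalent of this draw function (the naming scheme is illustrative; a real gensym would also guarantee global uniqueness across draws):

```python
def object_list(count, prefix):
    """Sketch of ObjectList: `count` fresh object names built from `prefix`."""
    return [f"{prefix}-{i}" for i in range(1, count + 1)]
```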
#### 3.5.4 Value Generators
The following examples of value generator draw functions ground our examples. Others are possible.
UniformDistribution(_min_, _max_)
Generator always returns a sequence containing a single real number drawn from a uniform distribution between min and max.
UniformIntegerDistribution(_min_, _max_)
Generator always returns a sequence containing a single uniform integer draw between min and max (inclusive).
GaussianDistribution(_mean_, _stdev_)
Generator always returns a sequence containing a single draw from a one-dimensional Gaussian distribution with the given parameters.
NDIMENSIONALGaussianDistribution(_means[], stdevs[]_)
Generator always returns a sequence containing a single draw from an n-dimensional Gaussian distribution with the given parameters, as a tuple with values named [x1, x2,...].
BernoulliDistribution(_p_)
Generator returns true with probability \(p\), and otherwise returns FALSE.
IntegerSequence(_min_, _max_)
Generator always returns a sequence containing all integers between min and max, in order.
DrawFromObjectSet(_objectGeneratorName_)

Generator always returns a sequence containing a single draw from an object generator on the same scenario generator, referenced by name.
DrawAllFromObjectSet(objectGeneratorName)
Generator always returns a sequence containing all objects that were produced from an object generator on the same scenario generator, referenced by name, in a random order.
ConstantFunction(value)
Generator always returns a sequence containing the single value passed in, i.e., the sequence [value].
NDraws(valueGenerator, n)
Generator returns a sequence of \(n\) values drawn from _valueGenerator_.
DrawTuple(valueGenerators[], names[])
Generator returns a sequence with a single tuple with values drawn from _valueGenerators_ and addressable by names given in _names_.
Filter(valueGenerator, filterFn (Any) Boolean)
Generator returns a sequence with a single value; this is the first value returned by repeatedly drawing from _valueGenerator_ until the _filterFn_ reports true for one.
NewSet(valueGenerator)
Generator always returns the same sequence, which is first populated by drawing from valueGenerator.
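To make the draw-function interface concrete, here is a hedged Python sketch of a few of the value generators above; each draw returns a sequence, as in the descriptions. The function names and closure style are our own.

```python
import random

def uniform_integer_distribution(lo, hi):
    """One uniform integer in [lo, hi] per draw."""
    return lambda: [random.randint(lo, hi)]

def bernoulli_distribution(p):
    """True with probability p, otherwise False."""
    return lambda: [random.random() < p]

def n_draws(value_generator, n):
    """Concatenate n draws from another generator."""
    return lambda: [v for _ in range(n) for v in value_generator()]

def filter_generator(value_generator, predicate):
    """Redraw until the predicate accepts a value, then return it."""
    def draw():
        while True:
            for v in value_generator():
                if predicate(v):
                    return [v]
    return draw
```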
#### 3.5.5 Fluent Generators
The following examples of fluent generator draw functions ground our examples. Others are possible.
AllPermutations(_argumentGenerators[], fluentValueGenerator_)
Generator returns ground fluents with arguments populated by value generators corresponding to the names in _argumentGenerator_, and values populated by value generators corresponding to _fluentValueGenerator_ (can be referenced by name or provide draw function directly).
All permutations of a set of draws from the argument generators are used to generate a list of ground fluents. Values from _fluentValueGenerator_ will be used in order, and a new set drawn when none are left, until every permutation is assigned a fluent value.
NFluentDraws(_argumentGenerators[], fluentValueGenerator, n_)
Generator returns \(n\) ground fluents. The arguments of each are populated by value generators corresponding to the names in _argumentGenerators_, and the value of each is populated by the value generator corresponding to _fluentValueGenerator_. All values from a single draw of each
argument and fluent value generator are used in order; if fewer than \(n\) values are returned by a generator, it will be called again until \(n\) values have been generated.
\[\textsc{CombineFunctions}(\textit{fluentGenerators}[])\]
Results of each fluent generator are concatenated.
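The sketch below illustrates AllPermutations in Python. It is simplified relative to the description above: the target function name is passed explicitly, and a fresh value is drawn per grounding instead of consuming a drawn value sequence in order.

```python
import itertools

def all_permutations(argument_generators, fluent_value_generator, function_name):
    """Ground `function_name` over the Cartesian product of one draw from each
    argument generator, assigning each grounding a drawn fluent value."""
    def draw():
        argument_sets = [g() for g in argument_generators]   # one draw per argument slot
        fluents = []
        for args in itertools.product(*argument_sets):
            value = fluent_value_generator()[0]               # simplified: fresh value each time
            fluents.append((function_name, tuple(args), value))
        return fluents
    return draw
```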
## 4 Application to Novelty Hierarchy
This section gives a novelty type definition for each level of the novelty hierarchy introduced by the DARPA SAIL-ON program (see Table 2). These definitions were an early attempt to categorize transformations possible in open-world environments, starting with the properties of individual world elements (objects, agents, and their actions and goals), advancing to multi-element properties (relations, interactions, and events), and ending with general properties broadly impacting multiple
world elements (environments). Levels in the novelty hierarchy were intended to be mutually exclusive. Novelties of varying difficulty can arise from each level of the novelty hierarchy, though different novelty-handling techniques and more sophisticated world models may be required at higher numbered levels (Novelty Working Group, 2023). Note that the term "hierarchy" is used here for historical reasons; no hierarchical relationship is required or expected between these novelty characterizations.
The formalizations provided here show that the T-SAL R-Transformations and T-Transformations are sufficient for formalizing the novelty categorizations created for the SAIL-ON program. Future research should investigate which classes of novelty are difficult or easy for agents with various capabilities to reason about.
This section is organized as follows: there are eight novelty hierarchy levels. For each level, we first provide a natural language definition and clarifications shared in the DARPA SAIL-ON program (Kildebeck et al., 2022) to guide the groups responsible for generating novelty as to what novelties are admissible in each category. For each of the eight levels, we then provide a formal definition, first described in plain language and then in L(T-SAL). The L(T-SAL) description of each level provides a decision function of the form _has<X>Novelty(**t**, d)_, where "<X>" is the novelty type, **t** is a transformation sequence, and _d_ a T-SAL domain. This function decides whether the novelty described by **t** belongs to category X. Finally, for each level we provide one or more T-SAL-CR examples of the category as transformation sequences.
The ambiguity in the natural language definitions described below is such that novelties generated by various groups adhered to different standards (NWG, 2023). It is our hope that the formal standards presented here will help future researchers to adhere to common standards.
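Since each level contributes a decision function of the form _has<X>Novelty(**t**, d)_, applying the hierarchy to a transformation sequence amounts to evaluating each predicate in turn. A trivial Python sketch of that dispatch (the `checks` mapping is our own scaffolding):

```python
def categorize_novelty(transformation_sequence, domain, checks):
    """Return every hierarchy level whose decision function accepts the sequence;
    a single sequence may satisfy several levels."""
    return [level for level, has_novelty in checks.items()
            if has_novelty(transformation_sequence, domain)]
```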
## 1 - Objects
_Natural language definition:_ An entity in the environment that does not have goal-oriented behaviors.
_Clarifications:_ Objects may experience state changes as a result of actions by i) the point-of-view agent, ii) an external agent, or iii) an event. This level is explicitly differentiated from "instance" novelty, which arises when new objects of an existing type are observed.
_Formal definition:_ To introduce object novelty, a transformation sequence must either 1) add a new type that inherits from Object, 2) add a new function that relates that object type to something else, or 3) change the initial distribution of such a function (via a new fluent generator). In addition, to ensure relevance, the new type or function must impact domain dynamics, through the preconditions of an action model, the conditions of a process model, and/or the triggers of an event model.
_hasObjectNovelty_(**t**, d) ≡
 ∃ i, f, j, p, c, d_N: 0 ≤ i < |**t**|
  ∧ d_N = applySequence(**t**, d)
  ∧ (((transformation-type(t_i) = AddType
     ∨ transformation-type(t_i) = AddTypeParent
     ∨ transformation-type(t_i) = RemoveTypeParent)
    ∧ p = transformation-argument(t_i, parent)
    ∧ c = transformation-argument(t_i, child)
    ∧ f ∈ domain-functions(d)
    ∧ derived-from(c, Object, d_N)
    ∧ derived-from(p, function-argument-types(f)_j, d)
    ∧ ¬derived-from(c, function-argument-types(f)_j, d)
    ∧ relevant-function(function-name(f), d))
   ∨ (transformation-type(t_i) = AddFunction ∧ f = transformation-argument(t_i, function)
    ∧ derived-from(function-argument-types(f)_j, Object)
    ∧ relevant-function(function-name(f), d_N))
   ∨ (transformation-type(t_i) = AddFluentGenerator
    ∧ f = fluent-generator-function(transformation-argument(t_i, fluentGenerator))
    ∧ derived-from(function-argument-types(f)_j, Object)
    ∧ relevant-function(function-name(f), d_N)))
_Required Definitions:_
The relation _derived-from_ is defined in Appendix A.
relevant-function(fn, d) ≡
 (∃ cnd: ((∃ a ∈ domain-actions(d): cnd ∈ action-preconditions(a))
   ∨ (∃ ev ∈ domain-events(d): cnd ∈ event-triggers(ev))
   ∨ (∃ p ∈ domain-processes(d): cnd ∈ process-conditions(p)))
  ∧ (function-in-condition(fn, cnd)
   ∨ ∃ ax: function-affects-axiom(fn, ax, d) ∧ function-in-condition(axiom-function-name(ax), cnd)))
 ∨ function-affects-performance(fn, d)

function-affects-performance(fn, d) ≡ function-in-calculation(fn, domain-performance-function(d))
function-affects-axiom(fn, ax, d) ≡
 function-in-condition(fn, axiom-condition(ax), d)
 ∨ ∃ ax′: function-affects-axiom(fn, ax′, d) ∧ function-in-condition(axiom-function-name(ax′), axiom-condition(ax), d)
function-in-condition(fn, ⟨ineq, calc1, calc2⟩ ∈ Ineq×Ca×Ca) ≡ function-in-calculation(fn, calc1) ∨ function-in-calculation(fn, calc2)

function-in-calculation(fn, calc ∈ Te) ≡ function-in-term(fn, calc)

function-in-calculation(fn, ⟨op, calc1, calc2⟩ ∈ Op×Ca×Ca) ≡ function-in-calculation(fn, calc1) ∨ function-in-calculation(fn, calc2)

function-in-calculation(fn, ⟨aggop, v, con, calc⟩ ∈ AggOp×V×Cn×Ca) ≡ function-in-condition(fn, con) ∨ function-in-calculation(fn, calc)

function-in-calculation(fn, ⟨if, cnd, calc1, calc2⟩ ∈ If×Cn×Ca×Ca) ≡ function-in-condition(fn, cnd) ∨ function-in-calculation(fn, calc1) ∨ function-in-calculation(fn, calc2)

function-argument-types(f) ≡ [function-argument-type(f, function-arguments(f)_i)] for i = 1..|function-arguments(f)|
fluent-generator-function(fg, d) ≡ the unique f ∈ domain-functions(d) such that function-name(f) = fluent-generator-function-name(fg)
_Example_: Every cart starts with a block within 2 distance units (3 carts standard):
AddValueGenerator(NEARBY-STARTS, Filter(
 DrawTuple([Position, Position], ["cart", "block"]),
 λ(x, y) → {|x − y| < 2}))
AddValueGenerator(START-POSITIONS, NewSet(NDraws(NEARBY-STARTS, 3)))
AddValueGenerator(CLOSE-BLOCKS, NDraws(BLOCKGROUP, 3))
AddValueGenerator(FAR-BLOCKS, Difference(BLOCKGROUP, CLOSE-BLOCKS))
AddFluentGenerator(CART-POSITION, AllPermutations([CARTGROUP], START-POSITIONS.CART))
AddFluentGenerator(BLOCK-POSITION,
 CombineFunctions(AllPermutations([CLOSE-BLOCKS], START-POSITIONS.BLOCK),
  AllPermutations([FAR-BLOCKS], POSITION)))
_Example_: Initial block velocities are doubled:
_Original generator_:
FluentGenerator(BLOCK-VELOCITY, AllPermutations([BLOCKGROUP], RandomMEST([1, -1])))

_Transformation_:

AddFluentGenerator(BLOCK-VELOCITY, AllPermutations([BLOCKGROUP], RandomMEST([2, -2])))
## 2 Agents
_Natural language definition_: An entity in the environment that is not the point-of-view agent and has goal-oriented behavior. The agent is most frequently an external agent but can in some instances be the point-of-view agent. The goal of the agent does not have to be obvious.
_Clarifications_: Researchers will disagree on what 'goal-oriented behavior' means. This is fine, and researchers have flexibility within reason to determine best practices in specific domains. The goal of the agent does not have to be obvious. For example, random exploration could be a goal.
_Formal definition_: An agent-level novelty is defined identically to an object-level novelty, but using AGENT subtypes instead of Object subtypes. Note that agents are primarily distinguished from objects in that every action has a performer which must be an agent.
_hasAgentNovelty_(**t**, d) ≡
 ∃ i, f, j, p, c, d_N: 0 ≤ i < |**t**|
  ∧ d_N = applySequence(**t**, d)
  ∧ (((transformation-type(t_i) = AddType
     ∨ transformation-type(t_i) = AddTypeParent
     ∨ transformation-type(t_i) = RemoveTypeParent)
    ∧ p = transformation-argument(t_i, parent)
    ∧ c = transformation-argument(t_i, child)
    ∧ f ∈ domain-functions(d)
    ∧ derived-from(c, Agent, d_N)
    ∧ derived-from(p, function-argument-types(f)_j, d)
    ∧ ¬derived-from(c, function-argument-types(f)_j, d)
    ∧ relevant-function(function-name(f), d))
   ∨ (transformation-type(t_i) = AddFunction ∧ f = transformation-argument(t_i, function)
    ∧ derived-from(function-argument-types(f)_j, Agent)
    ∧ relevant-function(function-name(f), d_N))
   ∨ (transformation-type(t_i) = AddFluentGenerator
    ∧ f = fluent-generator-function(transformation-argument(t_i, fluentGenerator))
    ∧ derived-from(function-argument-types(f)_j, Agent)
    ∧ relevant-function(function-name(f), d_N)))
_Required Definitions:_
The relation _derived-from_ is defined in Appendix A.
_Example_: A transformation sequence changes the amount of force other agents use when pushing, referenced by the push action (see Section 3.1).
AddValueGenerator(Other-AGENTS, NewSet(Difference(AGENTS, [Agent1])))
AddFluentGenerator(Agent-FORCE, AllPermutations([Other-AGENTS],
UniformDistribution(0.5, 1.5)))
## 3 Actions
_Natural language definition_: A goal-oriented behavior of an external agent that is not the point-of-view agent.
_Clarifications_: Non-goal-oriented movements of non-volitional objects are not considered actions.
_Formal definition_: For a transformation sequence to exhibit action novelty, the preconditions or effects of an action model must be changed. In addition, we require that the performer of the changed action cannot be of the same type \(ty_{AG}\) as the point-of-view agent.
_hasActionNovelty_(**t** = [t_0 ... t_n], ty_AG, d) ≡
 ∃ i < |**t**|, d_P, d_T, a_T, n, v_T, v_bound:
  d_P = applySequence([t_0 ... t_{i−1}], d)
  ∧ ∃ a ∈ domain-actions(d_P): n = action-name(a)
  ∧ transformation-argument(t_i, actionName) = n
  ∧ transformation-type(t_i) ∈ {AddPrecondition, RemovePrecondition, AddActionEffect, RemoveActionEffect}
  ∧ d_T = applySequence(**t**, d)
  ∧ a_T ∈ domain-actions(d_T) ∧ action-name(a_T) = n
  ∧ action-performer(a_T) = v_T
  ∧ v_bound = bound-variables-in-conditions(action-preconditions(a_T), ∅)
   ∪ {action-performer(a_T)} ∪ action-parameters(a_T)
  ∧ ¬∃ θ: (assignable-to(ty_AG, v_T, θ, d_T)
   ∧ ∀ cnd ∈ action-preconditions(a_T): legal-condition(cnd, θ, v_bound, d))
_Required Definitions:_
The relation _bound-variables-in-conditions_ and _assignable-to_ are defined in Appendix A.
_Example_: An environment transformation adds an action that lets agents "\(\mathit{donate}\)" cart velocity to blocks
AddAction(DONATE, [?C, ?B], [CART, BLOCK])
AddPrecondition(DONATE, (= (CONTROLS ?AG ?C) TRUE))
AddPrecondition(DONATE, (= (CART-VELOCITY ?C) ?V))
AddActionEffect(DONATE, (SET (CART-VELOCITY ?C) 0))
AddActionEffect(DONATE, (INCREASE (BLOCK-VELOCITY ?B) ?V))
_Example_: Cart pushes have a budget
AddFunction(push-budget)
AddPrecondition(push, (> (push-budget ?AG) ?FORCE))
AddActionEffect(push, (DECREASE (push-budget ?AG) ?FORCE))
AddFluentGenerator(push-budget, AllPermutations([AGENTS], ConstantFunction(100)))
## 4 - Relations
_Natural language definition_: Static properties of the relationships between multiple entities.
_Clarifications_: Can be spatial, temporal, or other. Can include the point-of-view agent and other entities.
_Formal definition_: A transformation sequence exhibits relation novelty if a static function (one not present in any effects or process changes) is created, has its distribution changed, or is added to triggers, conditions, or preconditions. The function must be relevant and must involve 2 or more entities (here, entities are specifically objects of types Agent, Object, or their subtypes).
_hasRelationNovelty_(**t**, d) ≡
 ∃ f, name, i, d_T: 0 ≤ i < |**t**|
  ∧ ((transformation-type(t_i) = AddFunction
    ∧ function-name(transformation-argument(t_i, function)) = name)
   ∨ (transformation-type(t_i) ∈ {AddPrecondition, RemovePrecondition}
    ∧ function-in-condition(name, transformation-argument(t_i, precondition)))
   ∨ (transformation-type(t_i) ∈ {AddTrigger, RemoveTrigger}
    ∧ function-in-condition(name, transformation-argument(t_i, trigger)))
   ∨ (transformation-type(t_i) ∈ {AddProcessCondition, RemoveProcessCondition}
    ∧ function-in-condition(name, transformation-argument(t_i, condition)))
   ∨ (transformation-type(t_i) = AddFluentGenerator
    ∧ name = fluent-generator-function-name(transformation-argument(t_i, fluentGenerator))))
  ∧ d_T = applySequence(**t**, d)
  ∧ f ∈ domain-functions(d_T)
  ∧ function-name(f) = name
  ∧ |entities-in-function(f)| > 1
  ∧ (∀ a ∈ domain-actions(d_T): ¬∃ e ∈ action-effects(a): effect-name(e) = name)
  ∧ (∀ ev ∈ domain-events(d_T): ¬∃ e ∈ event-effects(ev): effect-name(e) = name)
  ∧ (∀ p ∈ domain-processes(d_T): ¬∃ c ∈ process-changes(p): change-name(c) = name)
_Required Definitions:_
entities-in-function(f) ≡
 {v ∈ function-arguments(f) | derived-from(function-argument-type(f, v), Object)
  ∨ derived-from(function-argument-type(f, v), Agent)}
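A hedged Python sketch of these two checks, using an ad-hoc dictionary representation of functions and domains and taking the paper's _derived-from_ relation as a passed-in callable:

```python
def entities_in_function(f, domain, derived_from):
    """Arguments of f whose declared types derive from Object or Agent."""
    return [v for v in f["arguments"]
            if derived_from(f["argument_types"][v], "Object", domain)
            or derived_from(f["argument_types"][v], "Agent", domain)]

def is_static_function(name, domain):
    """'Static' test used for relation novelty: the function is named by no
    action effect, event effect, or process change in the domain."""
    in_action_effects = any(e["name"] == name
                            for a in domain["actions"] for e in a["effects"])
    in_event_effects = any(e["name"] == name
                           for ev in domain["events"] for e in ev["effects"])
    in_process_changes = any(c["name"] == name
                             for p in domain["processes"] for c in p["changes"])
    return not (in_action_effects or in_event_effects or in_process_changes)
```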
_Example_: A transformation sequence asserts a minimum distance between every block and cart. This adds a new binary static function, MIN-DIST. The revised scenario generator assigns a random value between 0 and 1 to this function for each pair of block and cart. The PUSH action preconditions change to enforce this constraint.
AddFunction((MIN-DIST, [?B, ?C], [BLOCK, CART], REAL))
AddPrecondition(PUSH, (= (CART-POSITION ?C) ?P))
AddPrecondition(PUSH, (FORALL ?B (IS-BLOCK ?B)
 (AND (< (- (- ?P ?FORCE) (BLOCK-POSITION ?B)) (MIN-DIST ?B ?C))
  (< (- (BLOCK-POSITION ?B) (+ ?P ?FORCE)) (MIN-DIST ?B ?C)))))
AddFluentGenerator(MIN-DIST, AllPermutations([BLOCKS, CARTS], UniformDistribution(0, 1)))
## 5 Interactions
_Natural language definition_: Dynamic property of behaviors or actions that impacts multiple entities.
_Clarifications_: Can be spatial, temporal, or other. Can include the point-of-view agent and other entities.
_Formal definition_: A transformation sequence exhibits interaction novelty if a fluent is added to or changed in the effects of an action. The fluent must name a relevant function that involves 2 or more entities (here, entities are specifically objects of types Agent, Object, or their subtypes).
_hasInteractionNovelty_(**t**, d) ≡
 ∃ f, fname, aname, a, i, d_T: 0 ≤ i < |**t**|
  ∧ transformation-type(t_i) ∈ {AddActionEffect, RemoveActionEffect}
  ∧ effect-name(transformation-argument(t_i, effect)) = fname
  ∧ ∃ a ∈ domain-actions(d): aname = action-name(a)
   ∧ transformation-argument(t_i, actionName) = aname
  ∧ function-name(f) = fname
  ∧ |entities-in-function(f)| > 1
  ∧ d_T = applySequence(**t**, d)
  ∧ (∃ a_T ∈ domain-actions(d_T): action-name(a_T) = aname
   ∧ ∃ e ∈ action-effects(a_T): effect-name(e) = fname)
_Example_: A new action allows an agent to move to a nearby cart; this affects the binary function CONTROLs, which becomes dynamic and affects the relationship between agent and cart.
AddAction(SWITCH-CARTS, [?C1, ?C2], [CART, CART])
AddPrecondition(SWITCH-CARTS, (= (CONTROLS ?AG ?C1) TRUE))
AddPrecondition(SWITCH-CARTS, (= (CONTROLS ?AG ?C2) FALSE))
AddPrecondition(SWITCH-CARTS, (= (CART-POSITION ?C1) ?P1))
AddPrecondition(SWITCH-CARTS, (= (CART-POSITION ?C2) ?P2))
AddPrecondition(SWITCH-CARTS, (< (- ?P2 ?P1) 1))
AddPrecondition(SWITCH-CARTS, (< (- ?P1 ?P2) 1))
AddActionEffect(SWITCH-CARTS, (= (CONTROLS ?AG ?C2) TRUE))
AddActionEffect(SWITCH-CARTS, (= (CONTROLS ?AG ?C1) FALSE))
## 6 Environment
_Natural language definition_: A change in an element of an open-world space that may impact the entire task space and is independent of a specific entity.
_Clarifications_: Environment novelties may impact the entire task space identically (e.g., changing the temperature from a static value to a new static value) or may impact regions of the open world domain differentially (e.g., wind may be present in only part of a space and the intensity may ebb and flow over time).
_Formal definition_: An environment novelty is a transformation sequence in which environmental functions, processes, or events are modified. Environmental functions are those outside of agent control, meaning they are not affected by any action. Environmental processes and events are those that are conditioned on, or triggered by, only environmental functions and position-valued functions.
_hasEnvironmentNovelty_(**t**, d) ≡
 ∃ i, d_T: 0 ≤ i < |**t**|
  ∧ d_T = applySequence(**t**, d)
  ∧ ((transformation-type(t_i) = AddFluentGenerator
    ∧ ∃ f, fname: isEnvironmentalFunction(f, d)
     ∧ fname = function-name(f)
     ∧ transformation-argument(t_i, name) = fname)
   ∨ (transformation-type(t_i) ∈ {AddTrigger, RemoveTrigger, AddEventEffect, RemoveEventEffect, ChangeProbability, ChangeFrequency}
    ∧ ∃ ev ∈ domain-events(d_T):
     event-name(ev) = transformation-argument(t_i, eventName)
     ∧ isEnvironmentalEvent(ev, d_T))
   ∨ (transformation-type(t_i) ∈ {AddProcessCondition, RemoveProcessCondition, AddProcessChange, RemoveProcessChange}
    ∧ ∃ p ∈ domain-processes(d_T):
     process-name(p) = transformation-argument(t_i, processName)
     ∧ isEnvironmentalProcess(p, d_T)))
_Required definitions:_

isEnvironmentalFunction(f, d) ≡
 f ∈ domain-functions(d)
 ∧ ¬(∃ a, e: a ∈ domain-actions(d) ∧ e ∈ action-effects(a) ∧ effect-name(e) = function-name(f))
 ∧ |entities-in-function(f)| = 0

isEnvironmentalProcess(p, d) ≡
 (∃ cnd ∈ process-conditions(p), f ∈ domain-functions(d):
  function-in-condition(function-name(f), cnd) ∧ isEnvironmentalFunction(f, d))
 ∧ (∀ cnd ∈ process-conditions(p), f ∈ domain-functions(d):
  ¬function-in-condition(function-name(f), cnd)
  ∨ isEnvironmentalFunction(f, d) ∨ function-value-type(f) = Position)

isEnvironmentalEvent(ev, d) ≡
 (∃ cnd ∈ event-triggers(ev), f ∈ domain-functions(d):
  function-in-condition(function-name(f), cnd) ∧ isEnvironmentalFunction(f, d))
 ∧ (∀ cnd ∈ event-triggers(ev), f ∈ domain-functions(d):
  ¬function-in-condition(function-name(f), cnd)
  ∨ isEnvironmentalFunction(f, d) ∨ function-value-type(f) = Position)
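A minimal Python sketch of the first of these checks, reusing the `entities_in_function` helper sketched earlier and the same ad-hoc domain representation (both are our assumptions, not part of T-SAL):

```python
def is_environmental_function(f, domain, entities_in_function):
    """A function is environmental if it is never the target of an action
    effect and relates no Object/Agent entities."""
    named_by_an_action = any(e["name"] == f["name"]
                             for a in domain["actions"] for e in a["effects"])
    return (not named_by_an_action) and len(entities_in_function(f, domain)) == 0
```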
_Example_: A transformation introduces a "gravitational jump" that distorts pole angles every 20 time units. This is modeled by an environmental event, jump-pulls.
AddFunction(jump-time, [], [])
AddProcess(jump-ticks, [], [])
AddProcessChange(jump-ticks, (INCREASE (jump-time) (* DT 1)))
AddEvent(jump-pulls, [?C], [CART])
AddTrigger(jump-pulls, (> (jump-time) 20))
AddEventEffect(jump-pulls, (INCREASE (pole-angle ?C) 5))
AddEventEffect(jump-pulls, (SET (jump-time) 0))
AddFluentGenerator(jump-time, AllPermutations([], UniformRandom(0, 20)))
## 7 Goals
_Natural language definition_: The purpose of a behavior by an agent in the environment.

_Clarifications_: Goal novelties resulting from environmental transformations are primarily goal changes of external agents in the environment that are not the point-of-view agent. Some domains may include teammates where the task goal is dynamic and communicated to the point-of-view agent as a directive or as a change in observable reward. In such domains, changes in the goals of the team or of the point-of-view agent communicated by external team members can be considered in scope, but they are not required.
_Formal definition_: Goal novelties are those that include a T-Transformation changing how the performance function of a scenario generator is calculated.

_hasGoalNovelty_(**t**, d) ≡ ∃ t ∈ **t**: transformation-type(t) = ReplacePerformanceCalculation
_Example_: An environment transformation introduces a new performance calculation that rewards an agent for lasting without winning or losing.

AddFunction(clock-time, [], [])
AddProcess(clock-ticks, [], [])
AddProcessChange(clock-ticks, (INCREASE (clock-time) (* DT 1)))
ReplacePerformanceCalculation((clock-time))
AddDefaultValue(clock-time, 0)
## 8 - Events
_Natural language definition_: A state change or series of state changes that are not the result of volitional action by an external agent or the point-of-view agent.
_Clarifications_: Events include state changes with specific preconditions.

_Formal definition_: An event novelty is a transformation sequence that affects T-SAL events that have both environmental and non-environmental triggers. At least one trigger of a modified event must be based on an environmental function and at least one must be directly affected by an action. Event novelty sequences include those that modify the distribution of an environmental function that triggers such an event.
_hasEventNovelty_(**t**, d) ≡
 ∃ ev, d_T, i:
  ev ∈ domain-events(d_T)
  ∧ d_T = applySequence(**t**, d)
  ∧ ¬isEnvironmentalEvent(ev, d_T)
  ∧ 0 ≤ i < |**t**|
  ∧ ((transformation-type(t_i) ∈ {AddTrigger, RemoveTrigger, AddEventEffect, RemoveEventEffect, ChangeProbability, ChangeFrequency}
    ∧ event-name(ev) = transformation-argument(t_i, eventName))
   ∨ (transformation-type(t_i) = AddFluentGenerator
    ∧ ∃ f: isEnvironmentalFunction(f, d_T)
     ∧ transformation-argument(t_i, name) = function-name(f)
     ∧ ∃ cnd ∈ event-triggers(ev): function-in-condition(function-name(f), cnd)))
_Example_: An environment transformation introduces a "gravitational anomaly" that pulls nearby carts and blocks based on their position, if they are moving. This is modelled by a non-environmental event, GRAVITY-PULS, with a varying position trigger, an environmental function trigger, and a third trigger (velocity) under the agent's control.
AddFunction(GRAVITY-LOCATION)
AddProcess(GRAVITY-WAVE-MOVES, [], [])
AddProcessChange(GRAVITY-WAVE-MOVES, (INCREASE (GRAVITY-LOCATION) (* DT 1)))
AddEvent(GRAVITY-WAVE-REST, [], [])
AddTrigger(GRAVITY-WAVE-REST, (>= (GRAVITY-LOCATION) 20))
AddEventEffect(GRAVITY-WAVE-REST, (SET (GRAVITY-LOCATION) -20))
AddEvent(GRAVITY-PULLS, [?C], [CART])
ChangeFrequency(GRAVITY-PULLS, 4)
AddTrigger(GRAVITY-PULLS, (> (CART-VELOCITY ?C) .01))
AddTrigger(GRAVITY-PULLS, (= (CART-POSITION ?C) ?P))
AddTrigger(GRAVITY-PULLS, (= (GRAVITY-LOCATION) ?WAVE-POS))
AddEventEffect(GRAVITY-PULLS, (INCREASE (CART-VELOCITY ?C) (* DT (LOG-DISTANCE ?P ?WAVE-POS))))
AddFluentGenerator(GRAVITY-LOCATION, AllPermutations([], UniformRandom(-20, 20)))
## 5 Discussion and Future Work
In this work, we have set up and formally defined a framework for describing a general class of environments and environment transformations. This is a necessary precursor to general agreement on definitions and measurement of the robustness of artificially intelligent agents. These definitions will be useful to future studies of general agents that take on very difficult situations flexibly and adaptively without human assistance. The "novelty hierarchy" is a first step in exploring possible classes of transformation sequences. Future work will use the formalism described here to define more useful assumptions to guide research.
Human design choices about T-SAL environments may have a significant impact on how a transformation is described. In some cases, this has been observed to change category membership in the novelty hierarchy described here. This ambiguity is exacerbated for agents that have learned their own environment descriptions: to such an agent, the original environment may have a different definition than it does to a describing human, even if both are consistent with the same observations, and the agent's novelty hierarchy categorization may therefore differ from that of an observing human.
A known failing of T-SAL-CR is that it does not describe observation of the environment; clearly, the environment is responsible for providing an agent affordances in observation, which can change over time. For example, a robot may gain or lose awareness of its location via a GPS sensor. T-SAL-CR cannot describe what aspects of the state are observable, and therefore the T-SAL transformation language also cannot describe changes in observability. Future work should remedy this problem.
## Acknowledgements
This material is based upon work supported by the Defense Advanced Research Projects Agency (DARPA) under Contract No. HR001121C0236. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of DARPA. |
2308.01291 | Annealing-tunable charge density wave in the kagome antiferromagnet FeGe | The unprecedented phenomenon that a charge density wave (CDW) emerges inside
the antiferromagnetic (AFM) phase indicates an unusual CDW mechanism associated
with magnetism in FeGe. Here, we demonstrate that both the CDW and magnetism of
FeGe can be effectively tuned through post-growth annealing treatments. Instead
of the short-range CDW reported earlier, a long-range CDW order is realized
below 110 K in single crystals annealed at \SI{320}{\degreeCelsius} for over 48
h. The CDW and AFM transition temperatures appear to be inversely correlated
with each other. The entrance of the CDW phase significantly reduces the
critical field of the spin-flop transition, whereas the CDW transition remains
stable against minor variations in magnetic orders such as annealing-induced
magnetic clusters and spin-canting transitions. Single-crystal x-ray
diffraction measurements reveal substantial disorder on the Ge1 site, which is
characterized by displacement of the Ge1 atom from Fe$_3$Ge layer along the $c$
axis and can be reversibly modified by the annealing process. The observed
annealing-tunable CDW and magnetic orders can be well understood in terms of
disorder on the Ge1 site. Our study provides a vital starting point for the
exploration of the unconventional CDW mechanism in FeGe and of kagome materials
in general. | Xueliang Wu, Xinrun Mi, Long Zhang, Chin-Wei Wang, Nour Maraytta, Xiaoyuan Zhou, Mingquan He, Michael Merz, Yisheng Chai, Aifeng Wang | 2023-08-02T17:22:53Z | http://arxiv.org/abs/2308.01291v3 | # Annealing tunable charge density wave order in a magnetic kagome material FeGe
###### Abstract
In the magnetic kagome metal FeGe, a charge density wave (CDW) order emerges inside the antiferromagnetic phase, providing a fertile playground to investigate the interplay between charge and magnetic orders. Here, we demonstrate that the CDW order, as well as magnetic properties, can be reversibly tuned on a large scale through post-growth annealing treatments. The antiferromagnetic and CDW transitions vary systematically as functions of both the temperature and the time period of annealing. Long-range CDW order with a maximum \(T_{\rm CDW}\) and a minimum \(T_{\rm N}\) can be realized in crystals annealed at 320 \({}^{\circ}\)C for over 48 h. Using magnetization and magnetostrictive coefficient measurements, it is found that the CDW transition is rather stable against an external magnetic field and spin-flop transition. On the other hand, the critical field for spin-flop transition is significantly reduced in the long-range ordered CDW phase. Our results indicate that the CDW in FeGe is immune to variations in magnetic orders, while the magnetocrystalline anisotropy energy and the corresponding magnetic ground state can be altered significantly by the charge order. These findings provide crucial clues for further investigation and a better understanding of the nature of the CDW order in FeGe.
† These authors contributed equally to this work.
## I Introduction
The interplay of lattice geometry, nontrivial band topology, and electronic correlations in kagome lattices often leads to the emergence of intriguing quantum phases of matter, such as quantum spin liquids, superconductivity, and charge-density-wave (CDW) order [1; 2]. Prime examples of such phenomena can be found in various kagome materials, including the quantum spin liquid candidate ZnCu\({}_{3}\)(OH)\({}_{6}\)Cl\({}_{2}\)[3], the magnetic topological materials Co\({}_{3}\)Sn\({}_{2}\)S\({}_{2}\), Mn\({}_{3}\)Sn, Fe\({}_{3}\)Sn\({}_{2}\) and FeSn [3; 4; 5; 6; 7], as well as the kagome superconductors \(AV_{3}\)Sb\({}_{5}\) (\(A=\)K, Rb, Cs) [8; 9; 10]. A recent discovery in the magnetic kagome metal FeGe (hexagonal, B35) has added further intrigue to the field, revealing a CDW phase (\(T_{\rm CDW}\sim 100\) K) deep inside the A-type antiferromagnetic (AFM) phase (\(T_{\rm N}\sim 410\) K) [11; 12; 13]. This discovery provides a prominent playground to investigate the intricate interactions between lattice, charge, and spin degrees of freedom in kagome materials.
The facts that the CDW phase arises inside the AFM phase and that the magnetic moments are enhanced below the CDW transition likely suggest an intimate coupling between CDW and magnetism in FeGe [11; 14; 15]. However, the exact correlation between CDW and magnetism remains elusive. One of the key questions is whether the CDW is driven by magnetism or not. Theoretically, various scenarios, including Fermi surface nesting [14; 16], electron-phonon coupling [14], spin-phonon coupling [15], electron-electron correlations [17; 18], and magnetism-assisted structural dimerization [19], have been proposed to uncover the origin of the CDW found in FeGe. Despite this large diversity, it is generally believed that magnetism plays an important role in the formation of the CDW in FeGe. On the experimental side, the CDW signatures are weak and strongly sample dependent, hampering systematic investigation of the underlying mechanisms. In early reports, no transitions other than the magnetic ones were found in the hexagonal phase of FeGe [20; 21; 22]. In recent neutron and scanning tunneling microscopy (STM) studies, a short-range \(2\times 2\times 2\) CDW state with a typical correlation length of 20 A \(-\) 30 A was discovered [11; 12; 13]. On the other hand, x-ray scattering experiments indicate that the CDW is long-ranged and that the CDW transition is weakly first-order [15]. These experimental discrepancies are likely caused by different sample qualities. To unveil the driving force of the CDW in FeGe, it is thus of prime importance to prepare samples that show clear and repeatable signatures of CDW.
Post-growth thermal treatment has been proven to be an effective method to improve the quality of single crystals, as widely used in the study of cuprate and iron-based superconductors [23; 24; 25; 26], as well as CDW materials [27]. In this paper, by post-growth annealing in a temperature range of \(240\sim 560\,^{\circ}\)C, we are able to systematically tune the CDW volume fraction from almost zero (short-ranged) to a hundred percent (long-ranged). This provides a crucial method to obtain high-quality samples hosting long-range CDW, and paves the way for studying the interplay of CDW and magnetism in FeGe. A phase diagram is established based on the annealing results, which reveal that \(T_{\rm CDW}\) is inversely correlated to \(T_{\rm N}\). In addition, the CDW transition appears to be robust against external magnetic fields and the spin-flop transition, whereas the entrance of the long-range CDW
order lowers the critical field for the spin-flop transition.
## II Methods
### Single crystal growth
Single crystals of B35-type FeGe were synthesized using the chemical vapor transport (CVT) technique [11]. Iron powders (99.99%) and germanium powders (99.999%) were weighed in the stoichiometric ratio 1:1, thoroughly ground, and loaded into a quartz tube along with additional iodine as the transport agent. The quartz tube was then sealed under a high vacuum and placed into a horizontal two-zone furnace. The source and sink temperatures for the growth were set at 600 \({}^{\circ}\)C and 550 \({}^{\circ}\)C, respectively. After being held at the single-crystal growth temperatures for 12 days, the system was cooled down naturally to room temperature by shutting down the furnace. Shiny, prismatic crystals of B35-type FeGe can be obtained in the middle of the quartz tube. These samples will be referred to as "as-grown samples", in which the perpendicular magnetic susceptibility usually shows a behavior similar to that reported by Teng _et al._.
### Post-growth annealing
The as-grown crystals were sealed into an evacuated quartz tube, which was then inserted into a box furnace set at the specified annealing temperature. After being held at the annealing temperature for the target time period, the quartz tube was quickly removed from the furnace and quenched in room-temperature water. The annealed crystals are as shiny as the as-grown crystals, and no annealing-induced surface change can be found. From x-ray diffraction (XRD) data shown in the Supplementary Material, which were collected on a PANalytical powder diffractometer (Cu \(k_{\alpha}=1.5406\) A radiation), the crystal structure and the lattice parameters remain essentially unchanged during the annealing process.
### Magnetization measurements
Magnetization measurements up to 7 T were performed using the direct-current scan mode of a Quantum Design Magnetic Property Measurement System (MPMS3). Measurements up to 9 T were conducted in a Quantum Design DynaCool Physical Property Measurement System (PPMS-9T) using the vibrating sample magnetometer (VSM) option.
### Magnetostrictive coefficient measurements
An ac composite magnetoelectric (ME) method is applied to measure the magnetostrictive coefficient (\(\mathrm{d}\lambda/\mathrm{d}H\), where \(\lambda=\Delta L/L\)) of the samples. The sample is mechanically bonded with a 0.2-mm-thick piezoelectric 0.7Pb(Mg\({}_{1/3}\)Nb\({}_{2/3}\))O\({}_{3}\)\(-\)0.3PbTiO\({}_{3}\) (PMN-PT) [001]-cut single crystal by Ag epoxy, as shown in Fig. 3(a). The PMN-PT acts as a strain gauge to transfer the magnetostriction of the sample into an electrical signal. An ac magnetic field \(H_{ac}=1\) Oe is generated by a coil, and an ac electric signal \(V_{ac}\) is induced along the thickness direction of the PMN-PT due to the interlayer strain coupling. The electrical signal is measured by a lock-in amplifier (OE1022, SYSU Scientific Instruments) with a homemade sample stick (MultiField Tech.). According to the composite ME theory, the real part of ME susceptibility \(V_{ac}/H_{ac}\) will be directly proportional to the magnetostrictive coefficient. Therefore, we will treat the measured real part of ME susceptibility as \(\mathrm{d}\lambda/\mathrm{d}H\) in this paper.
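As a rough illustration of the final step of this method, the measured in-phase ME signal is converted to a magnetostrictive coefficient by a single multiplicative factor. The `scale` argument below is a hypothetical placeholder for the calibration constants (piezoelectric coefficient, geometry, amplifier gain) that the composite ME theory would supply; it is not a value from this work.

```python
import numpy as np

def magnetostrictive_coefficient(v_ac, h_ac, scale=1.0):
    """Treat the in-phase ME signal V_ac/H_ac as proportional to d(lambda)/dH."""
    return scale * np.asarray(v_ac, dtype=float) / h_ac
```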
## III Results and discussion
### Annealing effect
At room temperature, the hexagonal B35 FeGe crystallizes in the CoSn-type structure (\(P6/mmm\), No.191) with Fe\({}_{3}\)Ge kagome layers and honeycomb Ge layers stacked alternatively along the \(c\)-axis. During the single-crystal growth processes of the hexagonal B35-type FeGe, we found that the single-crystal growth temperature range is narrow, and the crystal structure of FeGe is quite rigid. It is difficult to tune the sample quality either by varying growth conditions or chemical doping. After trying different methods (see Supplementary Material for more details), we found that post-growth annealing in a vacuum followed by quenching in water shows highly reproducible, controllable, and reversible results.
As shown in Fig. 1(a), the temperature-dependent in-plane magnetic susceptibility (\(\chi_{\perp}\), \(H\perp c\)) of FeGe samples varies significantly with different annealing temperatures \(T_{\mathrm{Ann}}\). Here, the annealing time was kept at 48 h for all annealing temperatures. The susceptibility was measured in a magnetic field of 0.1 T after zero-field cooling. Data obtained for the \(H\parallel c\) configuration (\(\chi_{\parallel}\)) can be found in the Supplementary Material. The overall behavior of the \(T_{\mathrm{Ann}}\) =560 \({}^{\circ}\)C curve is similar to that reported by Bernhard _et al._[20; 21]. For \(T_{\mathrm{Ann}}\) =560 \({}^{\circ}\)C, only the AFM transition [\(T_{\mathrm{N}}\) = 397 K, inset of Fig. 1(a)], and canting transitions are seen with no clear signatures of CDW around 100 K. As suggested by early neutron measurements, the hexagonal FeGe orders into a \(c\)-axis A-type AFM below \(T_{\mathrm{N}}\)[21]. Below \(T_{\mathrm{canting}2}\sim\) 55 K, the magnetic structure transforms to a \(c\)-axis double cone AFM configuration, leading to a rapid increase in \(\chi_{\perp}\)[21]. At \(T_{\mathrm{canting}1}=37\) K, an abrupt change of the cone angle produces a peak in \(\chi_{\perp}\)[21].
As the annealing temperature is lowered to 480 \({}^{\circ}\)C, an additional feature appears around 100 K in \(\chi_{\perp}\), which is reminiscent of the CDW transition reported by X. Teng
_et al._[11]. By further decreasing \(T_{\rm Ann}\), the CDW transition becomes more evident and shows a first-order-like sharp jump in \(\chi_{\perp}\) for crystals annealed below 360 \({}^{\circ}\)C. Indeed, a hysteretic temperature-dependent behavior is seen in \(\chi_{\perp}\) for these samples near the CDW transition, further confirming the first-order nature of CDW (see Supplementary Material). For \(T_{\rm Ann}<280\) \({}^{\circ}\)C, the feature associated with the CDW transition becomes broad again. The CDW transition is most pronounced for \(T_{\rm Ann}=320\) \({}^{\circ}\)C, which shows the highest \(T_{\rm CDW}\) and the largest change in susceptibility (\(\Delta\chi_{\perp}\)) at the CDW transition. More importantly, instead of the short-range CDW reported earlier [11], the \(2\times 2\times 2\) CDW becomes long-ranged in crystals annealed at \(T_{\rm Ann}=320\) \({}^{\circ}\)C, as evidenced by STM and single-crystal x-ray scattering measurements [28]. In crystals showing short-range CDW, the high-temperature undistorted structure survives and dominates the sample volume below \(T_{\rm CDW}\). On the other hand, the majority of the sample volume switches to the low-temperature distorted structure once the long-range charge order sets in. Therefore, the volume fraction of the CDW phase can be tuned from nearly 0 to almost 100% in a controllable way using post-growth annealing. The appearance of long-range CDW in FeGe achieved by post-growth annealing provides a vital starting point for studying the interplay between CDW and magnetism.
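The quantities \(T_{\rm CDW}\) and \(\Delta\chi_{\perp}\) referred to throughout this discussion can be read off directly from the measured \(\chi(T)\) curves. The following is only an illustrative numpy sketch of one such procedure, not the authors' analysis code; it assumes the step lies well inside the measured temperature range.

```python
import numpy as np

def locate_cdw_step(temperature, chi, window=5):
    """Take T_CDW as the temperature of the steepest change in chi(T) and
    estimate the jump Delta-chi from plateau averages below and above it."""
    t = np.asarray(temperature, dtype=float)
    x = np.asarray(chi, dtype=float)
    dchi_dt = np.gradient(x, t)
    i = int(np.argmax(np.abs(dchi_dt)))      # sharpest slope ~ first-order-like step
    below = x[max(0, i - window):i].mean()
    above = x[i + 1:i + 1 + window].mean()
    return t[i], abs(above - below)
```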
In addition to the CDW transition, the magnetic transitions are also sensitive to the annealing temperatures. As presented in the inset of Fig. 1 (a), the AFM transition temperature \(T_{\rm N}\) shifts moderately by changing \(T_{\rm Ann}\). On the other hand, the canting transition occurring at low temperatures depends strongly on \(T_{\rm Ann}\). Unlike the peak structure seen at \(T_{\rm canting1}\sim\) 40 K for samples annealed above 400 \({}^{\circ}\)C, \(\chi_{\perp}\) shows a step-like feature around 50 K for \(T_{\rm Ann}\leq 360\) \({}^{\circ}\)C. We also note that for crystals annealed for 48 h below 320 \({}^{\circ}\)C, an additional broad hump appears in \(\chi_{\perp}\) between 150 and 250 K, which is followed by a step-like jump around 150 K. The broad hump feature has a weak ferromagnetic (FM) nature, as evidenced by the bifurcation between the zero-field-cooled (ZFC) and field-cooled (FC) curves. Clear magnetic hysteresis loops are also seen between 150 and 250
K [see Supplementary Material]. The origin of this weak FM feature is unclear at present, but impurities can be excluded because the magnetic susceptibility behavior is highly reversible during the annealing process.

Figure 1: (a) Temperature-dependent in-plane magnetic susceptibilities \(\chi_{\perp}\) (\(H\perp c\)) of FeGe samples annealed at different temperatures (\(T_{\rm Ann}\)) for 48 h. The AFM, CDW and canting transitions are marked out by red squares, black triangles and blue circles, respectively. \(\Delta\chi_{\perp}\) measures the change in \(\chi_{\perp}\) at the CDW transition. The inset in (a) displays the temperature derivative of \(\chi_{\perp}\) near \(T_{\rm N}\). (b) Phase diagram of various transition temperatures tuned by the annealing temperature. (c) Temperature-dependent \(\chi_{\perp}\) of FeGe samples annealed at 320 \({}^{\circ}\)C for different time periods. (d) The correlations between \(T_{\rm CDW}\), \(T_{\rm N}\) and \(\Delta\chi_{\perp}\). Open (solid) symbols are data obtained for \(T_{\rm Ann}<\)320 \({}^{\circ}\)C (\(T_{\rm Ann}\geq\)320 \({}^{\circ}\)C). Data in (a) and (c) have been shifted vertically for clarity.
In Fig. 1(b), we summarize the phase diagram of magnetic and CDW transition temperatures as a function of the annealing temperature using the data presented in Fig. 1(a). By varying \(T_{\rm Ann}\), \(T_{\rm N}\) and \(T_{\rm CDW}\) change systematically but with opposite trends. At \(T_{\rm Ann}=320\,^{\circ}\)C, \(T_{\rm CDW}\) reaches a maximum value, whereas \(T_{\rm N}\) shows a minimum. The low-temperature canting transition shows a complex dependence on \(T_{\rm Ann}\). For \(T_{\rm Ann}\) higher than \(400\,^{\circ}\)C, the canting transition temperature \(T_{\rm canting1}\) tracks the trend of \(T_{\rm N}\) [see open circles in Fig. 1(b)]. By lowering \(T_{\rm Ann}\) below \(400\,^{\circ}\)C, \(T_{\rm canting1^{\prime}}\) follows the evolution of \(T_{\rm CDW}\) [solid blue circles in Fig. 1(b)]. As already seen in Fig. 1(a), the feature in \(\chi_{\perp}\) around \(T_{\rm canting1}\) changes significantly for samples annealed above and below \(400\,^{\circ}\)C. The susceptibility jumps sharply at \(T_{\rm CDW}\) for samples annealed below \(400\,^{\circ}\)C, signifying the appearance of long-range CDW in these samples. It is very likely that the entrance of the long-range charge order interacts strongly with the magnetic order, leading to a different magnetic ground state and thus a different behavior of the canting transition. Although the magnetic structure stays unchanged in the presence of short-range CDW [11], the large lattice distortions caused by long-range CDW can in principle alter the spin texture. This is also evidenced by the sudden suppression of the critical field of the spin-flop transition in the long-range CDW state, as we will discuss in Fig. 3. To differentiate the likely different magnetic states, we may call the high-temperature (\(T_{\rm CDW}<T<T_{\rm N}\)) pristine phase the AFM\({}_{1}\) phase and the low-temperature CDW phase the AFM\({}_{2}\) phase. Then, the AFM\({}_{1}\) phase leads to the transition at \(T_{\rm canting1}\) while the AFM\({}_{2}\) phase leads to the transition at \(T_{\rm canting1}^{\prime}\). Further detailed study is desired to clarify the magnetic ground state in the long-range ordered CDW state.
As shown in Fig. 1(b), \(T_{\rm N}\) appears to be anticorrelated with \(T_{\rm CDW}\). The CDW volume fraction is expected to be larger in samples with higher \(T_{\rm CDW}\), as evidenced by the evolution from short-range to long-range charge order. Note that the change in susceptibility (\(\Delta\chi_{\perp}\)) at the CDW transition also increases monotonically with \(T_{\rm CDW}\) [see Fig. 1(a)]. In an oversimplified approximation, one may naively take \(\Delta\chi_{\perp}\) as a measure of the CDW volume fraction. As presented in Fig. 1(d), both \(T_{\rm N}\) and \(T_{\rm CDW}\) indeed vary monotonically with \(\Delta\chi_{\perp}\) but with opposite slopes. This strongly indicates that the underlying physics controlling the CDW volume fraction also governs the CDW and antiferromagnetic transitions.
The CDW and magnetic properties can also be tuned by varying the annealing time. In Fig. 1(c), we take the \(T_{\rm Ann}=320\,^{\circ}\)C case as an example. The phase diagram obtained for different annealing times can be found in the Supplementary Material. The CDW transition cannot be well resolved in as-grown samples (0 h). Starting from an annealing time of 1/2 h, signatures of the CDW transition show up in the susceptibility around 100 K, together with a large hump appearing between 150 and 250 K. With further increase in annealing time, the broad hump and the magnitude of the low-temperature magnetic susceptibility are significantly suppressed, accompanied by a systematic increase in \(T_{\rm CDW}\) and \(\Delta\chi_{\perp}\) (CDW volume fraction). The maximum \(T_{\rm CDW}\) without the broad hump is observed in the crystal annealed at \(320\,^{\circ}\)C for 96 h. Annealing longer than 96 h does not lead to a further increase in \(T_{\rm CDW}\) and the CDW volume fraction. We note that the broad hump appearing between 150 and 250 K in crystals annealed for 1/4 \(\sim\) 48 h does not disturb the systematic evolution of the CDW phase. This indicates that the CDW is insensitive to these magnetic effects. Comparing the data in Fig. 1(a) with those in Fig. 1(c), the trends of \(T_{\rm N}\) and \(T_{\rm CDW}\) as a function of annealing temperature below \(320\,^{\circ}\)C can be attributed to insufficient annealing time at these lower annealing temperatures.
### Magnetic field effect
To gain further insight into the correlation between \(T_{\rm CDW}\) and the various magnetic transitions, we investigate their response to external magnetic fields for two typical crystals obtained by annealing at \(480\,^{\circ}\)C and \(320\,^{\circ}\)C for 48 h. The results are displayed in Fig. 2. The crystal annealed at \(480\,^{\circ}\)C is similar to samples reported by X. Teng _et al._[11], showing a short-range CDW order and a low CDW volume fraction [28]. In contrast, the single crystal annealed at \(320\,^{\circ}\)C exhibits a long-range CDW order with a CDW volume fraction of nearly 100%. As shown in Fig. 2(a), for the \(480\,^{\circ}\)C annealed sample, \(T_{\rm N}\) and \(T_{\rm CDW}\) remain approximately unchanged with increasing magnetic field in the \(H\perp c\) configuration. Meanwhile, \(T_{\rm canting}\) is gradually suppressed and becomes invisible when \(\mu_{0}H\geq 3\) T. This is similar to the behavior reported by X. Teng _et al._[11]. When the magnetic field is applied along the \(c\) axis (\(H\parallel c\)), no CDW transition can be observed in \(\chi_{\parallel}(T)\) up to 9 T [see Fig. 2(b)], which may be due to its low CDW volume fraction. A drastic step is seen in \(\chi_{\parallel}\) near \(T_{\rm CDW}\) for fields above 7 T. This is caused by the different critical magnetic fields (\(H_{\rm sf}\)) needed to induce the spin-flop transition above and below the CDW transition [see also Fig. 3(h)]. In the case of the single crystal annealed at \(320\,^{\circ}\)C [Fig. 2(c)], it is evident that \(T_{\rm N}\), \(T_{\rm CDW}\), \(\Delta\chi_{\perp}\), and the transition width remain unchanged as the magnetic field increases for \(H\perp c\). The unidentified magnetic transition around \(T_{\rm canting1}^{\prime}\) is almost unaffected by applied magnetic fields up to \(\mu_{0}H=3\) T, above which it becomes less visible. The broad hump between 150 and 250 K is gradually suppressed until it completely vanishes above 1 T. These results again suggest that the CDW transition is unrelated to these complicated magnetic transitions, and is unaffected by the external magnetic fields. In \(\chi_{\parallel}(T)\), the CDW transition is manifested as a
discernible step-like increase, which allows the investigation of the interaction between the spin-flop and CDW transitions for fields along the \(c\)-axis. When \(\mu_{0}H\geq 7\) T, \(\chi_{\parallel}\) jumps sharply at \(T_{\rm CDW}\) due to the sudden change in \(H_{\rm sf}\) at the CDW transition [see also Fig. 3(i)]. Note that for both annealing conditions, \(T_{\rm CDW}\) remains unaffected by the spin-flop transition.
To further demonstrate the influences of the CDW transition on the spin-flop transition for \(H\parallel c\), we study the field-dependent magnetization \(M(H)\) near the CDW transition in Fig. 3. In addition, a novel composite magnetoelectric method was applied to measure the relative magnetostrictive coefficient \({\rm d}\lambda/{\rm d}H\), where \(\lambda=\Delta L/L\) and \(L\) is the geometrical length of the sample. A schematic illustration of the magnetostriction setup is shown in Fig. 3(a). \({\rm d}\lambda/{\rm d}H\) is more sensitive to the spin-flop transition than the magnetization \(M\). Three typical crystals annealed at \(560\,^{\circ}\)C, \(480\,^{\circ}\)C, and \(320\,^{\circ}\)C with different CDW volume fractions were studied. For the sample annealed at \(480\,^{\circ}\)C, as displayed in Fig. 3(b), a steep upturn arising from the spin-flop transition can be observed in \(M(H)\) above a certain magnetic field \(H_{\rm sf}\). The value of \(H_{\rm sf}\) increases gradually upon warming. Compared with the sample annealed at \(480\,^{\circ}\)C, the spin-flop transition in the \(320\,^{\circ}\)C annealed sample has a significantly lower \(H_{\rm sf}\) [see Fig. 3(c)]. Upon warming across \(T_{\rm CDW}\), \(H_{\rm sf}\) exceeds the field region we can reach in the magnetization measurements. Therefore, from the magnetization data, one cannot easily define \(H_{\rm sf}\) and reveal the relationship between the CDW and spin-flop transition. In contrast, \({\rm d}\lambda/{\rm d}H\) shows clear features at the spin-flop transition and highlights the influence of the CDW phase on \(H_{\rm sf}\). As shown in Fig. 3(d) for the \(560\,^{\circ}\)C annealed sample, \({\rm d}\lambda/{\rm d}H\) shows a dip-peak feature around the spin-flop transition. The peak position, corresponding to \(H_{\rm sf}\), increases with increasing temperature. No drastic anomaly associated with the CDW transition can be observed due to the nearly zero CDW volume fraction. In Fig. 3(e) for the \(480\,^{\circ}\)C annealed sample, \({\rm d}\lambda/{\rm d}H\) changes from a dip-peak feature to a peak feature around the CDW transition, indicating a moderate influence by the CDW phase. In Fig. 3(f) for the \(320\,^{\circ}\)C annealed sample, \({\rm d}\lambda/{\rm d}H\) shows two dip-peak features around the CDW transition. Well below and well above \(T_{\rm CDW}\), on the other hand, only a single dip-peak feature is seen in \({\rm d}\lambda/{\rm d}H\). This points to a phase coexistence in the vicinity of the CDW transition in accordance with its first-order nature.
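To make the extraction of \(H_{\rm sf}\) from the magnetostriction data concrete, the following minimal sketch numerically differentiates \(\lambda=\Delta L/L\) and reads \(H_{\rm sf}\) off the peak in \({\rm d}\lambda/{\rm d}H\), in the spirit of Figs. 3(d)-(f). The \(\lambda(H)\) curve, the field scale, and the step width below are purely illustrative assumptions, not measured values.

```python
import numpy as np
from scipy.signal import find_peaks

# Synthetic magnetostriction curve lambda(H) = DeltaL/L: a smooth background
# plus a step at an assumed spin-flop field, mimicking the dip-peak feature
# seen in d(lambda)/dH. All numbers here are illustrative, not measured.
H = np.linspace(0.0, 9.0, 901)          # magnetic field (T)
H_sf_true = 5.2                          # assumed spin-flop field (T)
lam = 1e-6 * np.tanh((H - H_sf_true) / 0.2) + 2e-7 * H

dlam_dH = np.gradient(lam, H)            # numerical d(lambda)/dH

# H_sf is read off as the position of the most prominent peak in d(lambda)/dH.
peaks, _ = find_peaks(dlam_dH, prominence=1e-7)
H_sf = H[peaks[np.argmax(dlam_dH[peaks])]]
print(f"extracted H_sf ~ {H_sf:.2f} T")
```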
The systematic evolution of \(H_{\rm sf}\) tuned by the CDW transition can be seen more clearly in Figs. 3(g), (h), and (i) for the corresponding samples with negligible, moderate,
Figure 2: Temperature dependent (a) in-plane (\(\chi_{\perp}\), \(H\perp c\)) and (b) out-of-plane (\(\chi_{\parallel}\), \(H\parallel c\)) magnetic susceptibilities for a FeGe sample annealed at \(480\,^{\circ}\)C measured in various magnetic fields. (c)(d) The same measurements as in (a)(b), performed on a sample annealed at \(320\,^{\circ}\)C. Data have been shifted vertically for clarity.
and 100% CDW volume fractions, respectively. For the sample annealed at 560 \({}^{\circ}\)C, \(H_{\rm sf}\) decreases linearly with cooling and no anomalies associated with the CDW transition are found. In this case, the high-temperature AFM\({}_{1}\) phase persists down to lower temperatures in the absence of charge order. For the 480 \({}^{\circ}\)C annealed sample, a clear but gradual change of \(H_{\rm sf}\) takes place around the CDW transition, indicating a second-order-like nature of the CDW phase transition. For the 320 \({}^{\circ}\)C annealed sample, a sharp step-like jump in \(H_{\rm sf}\) is seen together with a phase coexistence region near \(T_{\rm CDW}\). This again points to a first-order CDW transition. In this sample, the AFM\({}_{1}\) phase is completely transformed into the AFM\({}_{2}\) phase below the CDW transition.
## IV Discussion
A schematic phase diagram summarizing the above experimental results is presented in Fig. 4, illustrating how the CDW volume fraction can continuously tune the CDW transition from first-order to second-order-like and finally to a crossover, as exemplified by samples annealed at 320 \({}^{\circ}\)C, 480 \({}^{\circ}\)C, and 560 \({}^{\circ}\)C, respectively. In crystals hosting the long-range CDW and large CDW volume fractions, the high-temperature structurally non-distorted pristine AFM\({}_{1}\) phase transforms completely into the low-temperature distorted phase. A phase coexistence region also exists due to the first-order nature of the CDW transition. Compared to the AFM\({}_{1}\) phase, the long-range CDW phase likely has a different AFM\({}_{2}\) spin configuration with a significantly reduced spin-flop-transition field \(H_{\rm sf}\). As the CDW volume fraction is lowered
Figure 3: (a) A schematic setup used in the ac composite magnetoelectric measurements. (b)(c) Field-dependent magnetization \(M(H)\) (\(H\parallel c\)) of FeGe samples annealed at 480 \({}^{\circ}\)C and 320 \({}^{\circ}\)C recorded at various temperatures. (d-f) Magnetostriction coefficient d\(\lambda\)/d\(H\) for samples annealed at 560 \({}^{\circ}\)C, 480 \({}^{\circ}\)C and 320 \({}^{\circ}\)C. Black squares mark the spin-flop transition. (g-i) \(H-T\) phase diagrams of the corresponding samples presented by using color plot of the data shown in (d-f). Compared to that above \(T_{\rm CDW}\), the spin-flop transition field \(H_{\rm sf}\) is significantly suppressed in the CDW phase.
and the CDW becomes short-ranged, only part of the AFM\({}_{1}\) phase is transformed into the CDW/AFM\({}_{2}\) state. When the CDW volume fraction is negligible and the CDW transition becomes invisible, the AFM\({}_{1}\) phase persists all the way down to low temperatures.
To understand why the CDW strongly tunes the spin-flop transition rather than the other way around, one must grasp the nature of the CDW phase. Our STM and XRD measurements [28] reveal that the crystal structure of the CDW phase features a \(2\times 2\times 2\) superstructure with strong dimerization for \(1/4\) of the Ge1-sites in the Fe\({}_{3}\)Ge layers along the \(c\)-axis. This naturally explains the significant influence of long-range ordered CDW on the spin-flop transition since \(H_{\rm sf}\) is mainly determined by the magnetocrystalline anisotropy, and the Ge1-site dimerization can affect the crystal field of neighboring Fe sites. Conversely, the spin-flop-induced structure distortion is so small that it cannot affect the CDW transition temperature.
Next, we discuss the microscopic mechanism of the CDW based on our results. We have demonstrated that annealing can modify the physical properties of FeGe single crystals in a reproducible, controllable, and reversible manner. The reversibility of the annealing process imposes a strong restriction on the microscopic mechanism, which allows us to rule out effects from impurities, evaporation of Fe/Ge, and reactions with oxygen or water from the ambient environment. One plausible explanation is that annealing modifies the concentration of certain defects, such as site disorder, dislocations, vacancies, or interstitial species. This explanation finds support in the systematic increase in the RRR values with decreasing annealing temperature and the decrease in defects at Ge2 sites revealed by the STM measurements [28]. Another possible scenario involves microstrain induced by quenching, where slow cooling or annealing at low temperatures can eliminate the microstrain. In fact, these two scenarios are consistent with each other, as crystals obtained by low-temperature annealing and a slow cooling rate would contain fewer defects and less microstrain. However, the detailed microscopic mechanism remains elusive at present. Further investigations are needed to unveil the origin of the CDW found in FeGe.
## V Conclusion
In conclusion, we demonstrate that the CDW volume fraction in FeGe can be tuned systematically from 0 to nearly 100% through post-growth annealing in vacuum. Importantly, the AFM and CDW transition temperatures are found to be anticorrelated with each other. Moreover, the CDW transition in FeGe is insensitive to changes in the magnetic order induced by the spin-canting and spin-flop transitions. However, the charge order lowers the spin-flop-transition field \(H_{\rm sf}\) significantly. These results place important constraints on theoretical explanations of the CDW mechanism. Our study provides a practical route to obtaining high-quality FeGe crystals hosting different CDW volume fractions, which enables systematic investigations of the interplay between magnetic and charge orders in FeGe.
###### Acknowledgements.
We thank Ya-Jun Yan, Yilin Wang, Yuan Li and Christoph Meingast for their helpful discussions. This work was supported by the Natural Science Foundation of China (No. 12004056, No. 11974065, and No. 12227806). A.W. acknowledges the support from the Chongqing Research Program of Basic Research and Frontier Technology, China (Grant No. cstc2021jcyjmsxmX0661), and the Fundamental Research Funds for the Central Universities, China (Grant No. 2022CDJXY-002). Y.S.C. acknowledges the support from the Beijing National Laboratory for Condensed Matter Physics. M. He acknowledges the support by the Chinesisch-Deutsches Mobilitätsprogramm of the Chinesisch-Deutsches Zentrum für Wissenschaftsförderung (Grant No. M-0496). We would like to thank Guiwen Wang and Yan Liu at the Analytical and Testing Center of Chongqing University for their technical assistance.
|
2306.06052 | Problematic Advertising and its Disparate Exposure on Facebook | Targeted advertising remains an important part of the free web browsing
experience, where advertisers' targeting and personalization algorithms
together find the most relevant audience for millions of ads every day.
However, given the wide use of advertising, this also enables using ads as a
vehicle for problematic content, such as scams or clickbait. Recent work that
explores people's sentiments toward online ads, and the impacts of these ads on
people's online experiences, has found evidence that online ads can indeed be
problematic. Further, there is the potential for personalization to aid the
delivery of such ads, even when the advertiser targets with low specificity. In
this paper, we study Facebook -- one of the internet's largest ad platforms --
and investigate key gaps in our understanding of problematic online
advertising: (a) What categories of ads do people find problematic? (b) Are
there disparities in the distribution of problematic ads to viewers? and if so,
(c) Who is responsible -- advertisers or advertising platforms? To answer these
questions, we empirically measure a diverse sample of user experiences with
Facebook ads via a 3-month longitudinal panel. We categorize over 32,000 ads
collected from this panel ($n=132$); and survey participants' sentiments toward
their own ads to identify four categories of problematic ads. Statistically
modeling the distribution of problematic ads across demographics, we find that
older people and minority groups are especially likely to be shown such ads.
Further, given that 22% of problematic ads had no specific targeting from
advertisers, we infer that ad delivery algorithms (advertising platforms
themselves) played a significant role in the biased distribution of these ads. | Muhammad Ali, Angelica Goetzen, Alan Mislove, Elissa M. Redmiles, Piotr Sapiezynski | 2023-06-09T17:23:59Z | http://arxiv.org/abs/2306.06052v1 | # Problematic Advertising and its Disparate Exposure on Facebook
###### Abstract
Targeted advertising remains an important part of the free web browsing experience, where advertisers' targeting and personalization algorithms together find the most relevant audience for millions of ads every day. However, given the wide use of advertising, this also enables using ads as a vehicle for problematic content, such as scams or clickbait. Recent work that explores people's sentiments toward online ads, and the impacts of these ads on people's online experiences, has found evidence that online ads can indeed be problematic. Further, there is the potential for personalization to aid the delivery of such ads, even when the advertiser targets with low specificity. In this paper, we study Facebook--one of the internet's largest ad platforms--and investigate key gaps in our understanding of problematic online advertising: (a) What categories of ads do people find problematic? (b) Are there disparities in the distribution of problematic ads to viewers? and if so, (c) Who is responsible--advertisers or advertising platforms? To answer these questions, we empirically measure a diverse sample of user experiences with Facebook ads via a 3-month longitudinal panel. We categorize over 32,000 ads collected from this panel (\(n=132\)); and survey participants' sentiments toward their own ads to identify four categories of problematic ads. Statistically modeling the distribution of problematic ads across demographics, we find that older people and minority groups are especially likely to be shown such ads. Further, given that 22% of problematic ads had no specific targeting from advertisers, we infer that ad delivery algorithms (advertising platforms themselves) played a significant role in the biased distribution of these ads.
## 1 Introduction
Targeted advertising fuels a sizable part of the web's economy today [21]. Behind the ads shown on digital platforms are complex marketplaces where advertisers compete for user attention, and advertising platforms such as Google, Facebook, and Twitter--capitalizing on user data--act as intermediaries. To identify the right audience for each ad, these platforms provide detailed targeting options to advertisers, as well as sophisticated personalization algorithms designed to find the most "relevant" audience. As a result, the ads that constitute a user's everyday experience are determined by a confluence of factors: what time the user is browsing, which advertisers were trying to target them, and what content the platform's personalization algorithm considers relevant to them. Further, due to the scale of these marketplaces, users run into ads on a vast variety of topics--ranging from neutral product ads, to opportunity ads for jobs and scholarships, and even to problematic clickbait ads and scams.
Given the wide variance in ads a user may potentially receive, it is important to consider whether some users' _overall ad experience_ might be worse than others'. Prior work has illustrated the impact of harmful media [9, 13, 66, 85, 86, 87, 60, 88], has theorized about the ways in which digital ads may harm users [60, 63, 69, 73, 51, 60], and has asked users themselves to express why they find certain ads problematic [92]. However, a complete understanding of the online ad experiences of individual users, along with a breakdown of the kinds of ads different users find problematic, remains elusive.
In this paper, we build on prior work to systematically identify which categories of ads people perceive as problematic, evaluate if there are skews in the delivery of problematic categories of ads, and determine the roles of advertisers and personalization algorithms in the distribution of problematic ads. Thus, we aim to answer the following research questions:
**RQ1:**: What categories of ads are perceived as problematic?
**RQ2:**: Are there skews in the distribution of problematic ads?
**RQ3:**: Who is responsible for any observed skews?
To do so, we recruit a panel of 132 paid participants, who we select across a variety of demographic categories. We longitudinally observe participants' Facebook ad experiences over a period of three months, collecting the ads they receive, and the revealed targeting information for each ad. We choose Facebook as our platform of study because it is one of the
largest and most data-rich personalized advertising platforms. We use a combination of (1) logged data and (2) quantitative surveys to measure our participants' ad experiences. First, we instrument our participants' web browsers to collect all Facebook ads they are shown in their desktop browsers, alongside the detailed targeting information Facebook provides for these ads. Second, using a combination of inductive qualitative coding, and deductive analysis of computational and social science research, as well as existing platform policies, we develop a _codebook_ of ad categories, covering a variety of potentially problematic ad types. Using human raters, we then classify over 32,000 ads shown to our participants using this codebook. With this coded data, we regularly survey our participants to assess which types of ads--within the set of ads that they are shown by Facebook and which we annotated--they find problematic and why.
Using the collected data, we first examine the content that participants dislike (RQ1). We identify four categories of ads that participants find problematic (i.e., are disliked more than ads of any other category): deceptive ads, content that is prohibited by Facebook itself, clickbait, and ads considered sensitive by platform or government policy (e.g., ads for weight loss, gambling or alcohol).
We then statistically model the distribution of problematic ads across our panel (RQ2). Our results show that problematic content makes up a relatively small fraction of all ads our users see on Facebook--a median of 10%--but a subset of our panel is exposed to problematic ads over three times more often than the median participant. Looking at which participants tend to receive more problematic ads, we find that participants who are older are more likely to see deceptive and clickbait ads, and those who are Black are also more likely to see clickbait ads. Men are additionally more likely to see financial ads, a complex category that is (i) considered sensitive by U.S. regulation and Facebook policy, as it may include offers for exploitative financial products, (ii) disliked by participants more than neutral ads, but (iii) which also may include beneficial financial products.
Finally, we investigate the extent to which the advertisers and the platform personalization algorithms are responsible for these biases (RQ3). We find that certain categories of ads (e.g., opportunity ads and ads for sensitive topics) tend to be much more narrowly targeted than neutral ads, suggesting that advertisers carefully choose which users are eligible to see these ads. On the other hand, we identify a subset of ads that are not targeted at all (i.e., the advertisers make all adult U.S. users eligible to see the ad), and find that demographic skews still persist for ads across different problematic categories. Together, our results shed light on users' overall ad experiences on a major platform, and illuminate disparities in those experiences caused by a combination of advertiser choices and platform algorithms.
## 2 Background and Related Work
Below, we provide background on targeted advertising, and discuss prior work on measuring skews in ad delivery, as well as users' experiences with problematic digital advertising.
### Online Advertising
Online advertising, and in particular, _targeted_ advertising, supports much of the modern Internet's business model. Targeting ads to particular users can be an effective way to show content to the most relevant audiences. However, the data used in targeting is privacy-sensitive (e.g., [43, 53, 71, 84]) and the targeting process can lead to discrimination (e.g., [47, 81]).
Platforms such as Facebook and Twitter rely on inferring user's interests, and providing advertisers with an interface for these interests, to enable precise targeting of ads. In addition to interests and behaviors, they also enable targeting by demographics (e.g., age or gender); personally-identifiable information (e.g., users' email addresses), often called "custom" audiences [25]; and even "lookalike" audiences that are able to expand a list of uploaded contacts by finding other users who have similar characteristics [26]. The delivery of targeted online ads can be broken into two phases [6]: _ad targeting_ and _ad delivery_. In ad targeting, the advertiser uses the targeting features described above to define an _eligible audience_, and specifies the ad's budget and optimization goal. In ad delivery, the platform must decide which users in the eligible audience are actually shown the ad (the _actual audience_). Historically, platforms used different auction mechanisms to make this selection [19], but today, platforms use sophisticated algorithms that try to subsidize ads that have high "relevance" to specific users [27].
Prior work has found that discriminatory digital advertising can result from discriminatory targeting by advertisers [79, 45] and discriminatory delivery by platforms, even when the advertiser might not have intended it [7, 6, 72]. The latter can be the result of relevance algorithms significantly skewing the actual audience such that this audience is very different from the eligible audience an advertiser intended to reach. As a result, Facebook in particular has implemented novel systems in response to legislative pressure [57] to minimize variance between eligible and actual audiences, in an effort to ensure fairer delivery of particular ads.
### Problematic Media
Communication and psychology literature have long explored how traditional mass media (e.g., print, TV, radio) expose consumers to problematic content. Social science theories such as cultivation theory (how exposure to content may influence people's thoughts and behaviors [67]) and agenda-setting theory (how content can be used to shape and filter a consumers' reality [75]) posit ways in which harmful media can produce
negative outcomes for consumers. Empirical observations under these frameworks include how violent media teaches violent behaviors (e.g., [77]); how bigoted media reinforces prejudice (e.g., [13, 86]); and how exposure to idealized body images can lead to body image issues (e.g., [9, 66, 85]). The ability to target mass media advertisements to specific audiences, however, is limited.
Online ads are another form of potentially problematic media. Investigating the potential for advertising to expose users to problematic content is particularly important, since ad platforms often self-regulate, and set their own policies to define which advertising content they do or do not allow on their sites [55, 38]. These policies are often updated at the platform's discretion, or in response to the changing landscape of problematic content [11]. To enforce these policies on all types of content, platforms use a combination of automatic and manual approaches [42]. But despite policies and detection tools that aim to limit problematic content, ads that users find problematic still have a significant presence on popular sites [91] due to both policy inadequacies [61, 52] and technical challenges [74]. We investigate how the presence of ads, which are increasingly highly targeted to individual characteristics, may be problematic [58].
### User Experiences with Problematic Ads
Users of online platforms have been shown to dislike ads in general [92, 37], with some employing tools like ad blockers to browse the web without the obstruction of ads [80]. Recent work has investigated why users dislike online ads; Zeng et al. develop a taxonomy on what users think are the worst qualities of ads [92], finding that people are particularly likely to dislike ads described as "deceptive," "clickbait," "ugly," and "politicized." People struggle to identify deceptive ads [88], which can lead to harmful outcomes like software attacks [89, 60]. Those who suffer from certain mental health disorders or trauma may also experience negative psychological and physical consequences from ads that target these conditions [34].
Our contributions. We build on prior qualitative work by Zeng et al. [92], and use their taxonomy to assess people's sentiments toward their own ads. We further use these sentiments, combined with rigorous coding, to identify novel categories of ads perceived, more specifically, as _problematic_. We also extend prior quantitative work, such as Ali et al. [6], and measure ad delivery's role in creating disparities in exposure to problematic advertising. We further show ad delivery biases are not limited to ads created by researchers [6], and extend to real problematic ads on the platform. To our knowledge, ours is the first study to look at targeting and personalization of problematic ads to actual users.
## 3 Methodology
Below, we describe our methods for recruiting a diverse and demographically balanced panel and for collecting the desktop ads our participants are shown by Facebook.
### Panel Recruitment
We recruited our panel of Facebook users from two sources: by listing tasks on Prolific, an online crowd-work and survey platform, and by advertising on Facebook.
Participants were screened via a short survey.1 Our criteria to be eligible for the study were that participants must (1) have an active Facebook account that (2) they use for at least 10 minutes per day (3) on a desktop or laptop computer (4) via either the Google Chrome or Mozilla Firefox browsers (5) without using ad blockers or tools for anonymous browsing (e.g., Tor). Additionally, we went to significant lengths to recruit a diverse panel across select demographic variables: race and ethnicity (white; Black; Hispanic; Asian), gender (men; women), age (younger than Generation X [18]; Generation X or older) and educational attainment (below a bachelor's degree; bachelor's degree and above). We sought to balance our panel among all combinations of our chosen demographic variables (e.g., representation for Generation X Hispanic women with high educational attainment) but we struggled with recruitment and retention of some demographics, partly due to the distribution of users who participate in online studies or use the platforms we recruited on [82, 65, 68]. We made a continuous effort to balance our sample by accepting participants on a rolling basis and not screening in those with demographics we were saturated with. Table 1 shows the ultimate demographic breakdown of our participants.
Footnote 1: Prolific participants were compensated with a base pay of $8.04 per hour for completing the screening survey while those recruited via Facebook advertisements were not compensated for the screening survey as there is no mechanism to do so. Demographics which we initially struggled to recruit were offered marginally more compensation. The survey took a median of 6 minutes and 9 seconds to complete.
Unfortunately, while all participants were screened based on their Facebook usage, not all users contributed a significant number of ads during the 3 month study period. Of the 184 participants originally enrolled in the study, 132 were _active_ participants, which we define as those who contributed at least 30 ads (on average 10 per month) over the course of the three months of their participation in the study.
### Data Collection
**Logged Data.** Our study collected the ads that were shown to our participants on their Facebook news feeds while using Facebook on a desktop computer over a 3 month period. In order to collect our participants' ads, we used a browser extension, based on the NYU Ad Observer project [1, 29]. We modified Ad Observer to include unique participant IDs
along with the ads reported to our server, and we introduced an additional "Surveys" tab that serves participants monthly surveys to collect their sentiments for their individual ads. Across all of our recruited participants, we collected 165,556 impressions to 88,509 unique ads. Repeat impressions of ads are relatively sparse in our data--a median of twice per ad per participant--and only 5.33% of our ads are shown more than 3 times to a participant.
**Targeting Data.** We also collected ad targeting information provided by Facebook through its "Why am I seeing this?" API [32], which reveals information about how the advertiser selected their target audience [8]. While prior work has shown that Facebook's targeting explanations can be incomplete, and include only one targeting criterion in each ad explanation [8], we find empirically that the system has since changed. We also observe differences between the summarized targeting data shown on the user interface and what is reported through the API. Our data includes several instances of multiple targeting criteria--62.7% of ads in our data with interest targeting include more than one interest.
**Survey Data.** Every month, we prepared a survey that assesses participant sentiments toward the ads they saw on Facebook during the prior month. Specifically, for each ad that we showed to a user in the survey, we asked them: "Which of the following, if any, describe your reasons for _disliking_ this ad?" and present the following non-mutually exclusive answer choices:
* It is irrelevant to me, or does not contain interesting information.
* I do not like the design of the ad.
* It contains clickbait, sensationalized, or shocking content.
* I do not trust this ad, it seems like a scam.
* I dislike the advertiser.
* I dislike the type of product being advertised.
* I find the content uncomfortable, offensive, or repulsive.
* I dislike the political nature of the ad.
* I find the ad pushy or it causes me to feel anxious.
* I cannot tell what is being advertised.
* I do not dislike this ad.
We then ask: "Which of the following, if any, describe your reasons for _liking_ this ad?" and present the following non-mutually exclusive answer choices:
* The content is engaging, clever or amusing.
* It is well designed or eye-catching.
* I am interested in what is being advertised.
* It is clear what product the ad is selling.
* I trust the ad, it looks authentic or trustworthy.
* I trust the advertiser.
* It is useful, interesting, or informative.
* It clearly looks like an ad and can be filtered out.
* I do not like this ad.
Answer choices for these questions are drawn from Zeng et al.'s taxonomy of reasons for users' like or dislike of ads [92], with the exception of one item. In a small pilot version of this survey, in which we allowed participants to also provide free-text answers of their reasons for liking and disliking Facebook ads with 300 respondents, we identified an additional reason for liking an ad, "This ad is filterable", so we included it to capture a broader spectrum of reasons users like ads.
We survey participants about at most 5 ads from each of our seven ad categories (Section 4). We limit the monthly surveys to up to 35 ads each so that it did not become prohibitively long (more than 20 minutes) for participants to complete.
**Study Deployment.** We began data collection in November 2021, with participants recruited on a rolling basis. Each participant was a part of our study for three months. The final participant completed the study in September 2022. We compensated our participants by paying them up to $60: $5 when they signed up, $15 for each month they kept the plugin installed and completed the monthly sentiment survey, and upon completing all three months of the study, they were rewarded with a $10 bonus payment. Those participants who dropped out of our study were compensated using the scheme above based on how long they did participate. Since we deployed surveys directly through our extension, we were not able to assess average time of completion, but pilot tests of the survey averaged a completion time of about 15 minutes.
### Analysis
Here, we describe the quantitative methods we employ to analyze survey responses, logged ad observations, and ad targeting data. We limit all our analyses to the 32,587 ads that we annotated (see Section 4), and to our list of active participants (Table 1).
**RQ1.** For survey responses, we use Chi-squared (\(\chi^{2}\)) tests for equality of proportions to compare rates of ad dislike. We also
\begin{table}
\begin{tabular}{c l r r r r r} \hline \hline \multirow{2}{*}{**Variable**} & \multirow{2}{*}{**Value**} & \multicolumn{2}{c}{**Recruited**} & \multicolumn{2}{c}{**Active**} & \multicolumn{1}{c}{**Census**} \\ & & **n** & **\%** & **n** & **\%** & **\%** \\ \hline \multirow{3}{*}{**Gender**} & Female & 96 & 52.17 & 71 & 53.79 & 50.5 \\ & Male & 86 & 46.74 & 59 & 44.70 & 49.5 \\ & Non-binary & 2 & 1.09 & 2 & 1.52 & – \\ \hline \multirow{2}{*}{**Age**} & Younger than Gen-X & 134 & 72.83 & 88 & 66.67 & 33.6 \\ & Gen-X and older & 50 & 27.17 & 44 & 33.33 & 47.8 \\ \hline \multirow{5}{*}{**Race / Ethnicity**} & White & 105 & 57.07 & 82 & 62.12 & 75.8 \\ & Latino/Hispanic & 21 & 11.41 & 16 & 12.12 & 18.9 \\ & Black & 53 & 28.80 & 32 & 24.24 & 13.6 \\ & Asian & 21 & 11.41 & 16 & 12.12 & 6.1 \\ & Other & 3 & 1.63 & 3 & 2.27 & – \\ \hline \multirow{2}{*}{**Education**} & Below Bachelor’s & 72 & 39.13 & 51 & 38.64 & 58.5 \\ & Bachelor’s or above & 112 & 60.87 & 81 & 61.36 & 32.9 \\ \hline \multicolumn{2}{c}{**Total**} & \multicolumn{2}{c}{**184**} & \multicolumn{2}{c}{**132**} & \\ \end{tabular}
\end{table}
Table 1: Demographics of panel participants.
report Cohen's \(\omega\) as the effect size of the Chi-squared tests to characterize the scale of differences. As a general guideline, \(\omega=0.1\) is considered a small effect, 0.3 is a medium effect, and 0.5 and above is considered a large effect [15]. We examine the association between the reasons for dislike mentioned in the surveys and the ad type through mixed-effects logistic regression models. To control for variance in participants' individual preferences, we include a random effect term for each participant. In line with statistical best practice [36], we do not correct our regression models as each model represents a purely complementary comparison (e.g., contains a distinct dependent variable).
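As a minimal sketch of how the omnibus test and its effect size are obtained (using scipy; the dislike/not-dislike counts below are illustrative placeholders, not our data), Cohen's \(\omega\) follows directly from the \(\chi^{2}\) statistic and the total number of responses:

```python
import numpy as np
from scipy.stats import chi2_contingency

# Illustrative contingency table: rows = ad categories,
# columns = (disliked, not disliked). Placeholder counts, not our data.
counts = np.array([
    [50, 50],     # Potentially Prohibited
    [98, 101],    # Deceptive
    [240, 250],   # Clickbait
    [218, 282],   # Sensitive
    [585, 1415],  # Neutral
])

chi2, p, dof, expected = chi2_contingency(counts)

# Cohen's omega: effect size of the chi-squared test, sqrt(chi2 / n).
n = counts.sum()
omega = np.sqrt(chi2 / n)
print(f"chi2 = {chi2:.2f}, p = {p:.3g}, omega = {omega:.2f}")
```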
**RQ2.** To understand disparities in the distribution of ad types, we treat the number of ads of each type observed for each participant as a frequency distribution. To quantify inequality in this distribution, we compute skewness [3], a measure of asymmetry of a probability distribution, computed via its third standardized moment. A positive skew implies a distribution with a long right tail, while a negative skew means the left tail is longer. We also compute the Gini coefficient [2] to measure inequalities across participants. To understand inequities between demographic groups, we use linear regression models to model the fraction of each ad category in participants' ad diets as a function of their demographics.
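For concreteness, the skewness and Gini coefficient of the per-participant distribution can be computed as in the following sketch (Python; the per-participant fractions are hypothetical examples, not our measurements):

```python
import numpy as np
from scipy.stats import skew

def gini(x):
    """Gini coefficient of a non-negative distribution (0 = equal, 1 = maximal inequality)."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    ranks = np.arange(1, n + 1)
    return 2 * np.sum(ranks * x) / (n * x.sum()) - (n + 1) / n

# Hypothetical per-participant fractions of problematic ads (illustrative only).
fractions = np.array([0.05, 0.08, 0.10, 0.10, 0.12, 0.15, 0.22, 0.35])

print("skewness:", skew(fractions))  # third standardized moment
print("Gini:", gini(fractions))
```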
**RQ3.** To disentangle ad delivery's influence from ad targeting in our observations, we use the advertising interface to obtain audience size estimates for each ad. Concretely, we query Facebook's advertising API for monthly "reach" estimates for the targeting specifications of every ad in our dataset. Note that these estimates are not accessible for ads that use Custom Audiences (CAs), such as phone number uploads or cookie re-targeting; those are only known to the owners of these CAs. We use linear regressions similar to RQ2 to identify differences between demographic groups that appear due to the platform's ad delivery practices.
### Ethics
Given the sensitivity of the data we were collecting, we took care to follow best practices, maximizing beneficence while minimizing harm to our participating users and Facebook itself. First, our research project was approved by our institution's Institutional Review Board (IRB). Second, we collected the minimal data on our participating users necessary to conduct the study; we only collected personally-identifiable information where necessary to facilitate payments, and we used unique, random identifiers for all survey responses and ads collected. Third, we restricted access to the uploaded pseudonymous data to just the research team, and we do not plan on making this data generally available to protect the privacy of our users. Finally, we minimized the harm to advertisers and Facebook itself by not causing any ad impressions that would not have otherwise occurred; the only additional requests to Facebook were to fetch the targeting specifications, and to later retrieve audience sizes of these specifications.
While Facebook prohibits collection of data using automated means in its terms of service (ToS), we argue that the public benefits of our work outweigh the risks posed to Facebook. Further, violating ToS by scraping content that is otherwise available through non-automated means is not considered a violation of the U.S. Computer Fraud and Abuse Act [4]. Platforms, however, reserve the right to ban users who scrape or have done so in the past.
## 4 Categorizing ads
In order to evaluate whether there are inequities in participants' exposure to problematic ads, we first evaluate which of our collected ads are problematic. To do so, we develop a codebook to categorize the ads our participants see, and then use that codebook to annotate a significant subset of their ads.
### Creating the codebook
We use a combination of inductive qualitative coding [83, 14], and deductive analysis [10] of prior work and platform policies to develop a robust categorization of participant ads.
To create our initial inductive categorization of Facebook ads, we conducted pilot data collection with 7 participants, collecting their ads with our browser extension between June and July 2021. We then cross-referenced our initial codebook with platform and governmental policies and empirical research to develop our final ad categories. Our categorization particularly focuses on capturing problematic ads, though we also make sure our codebook captures content that users might find unproblematic, such as products, events, or local businesses. Below, we define our categories, describe how we reason about them, and provide examples from our dataset.
**Deceptive:** Fraudulent offers, potential scams, false or misleading claims, predatory business practices. _Examples:_ Guaranteed monthly income, sign-up flows for personal information ("clickfunnels"), non-descript offers with requests for direct messages.
Deceptive advertising and its breadth is notoriously hard to capture (see, e.g., a review of definitions [33] and a diversity of FTC reports on the subject [16]). Therefore we define this code broadly, to be able to capture multiple forms of deceptive and scam content. We categorize financial and personal information scams, fraudulent offers, and a diverse array of misleading content as Deceptive. Many aspects that we cover in this definition are covered by Facebook's policies for unacceptable business practices [23], unrealistic outcomes [24], and broadly under the platform's deceptive content policy [28]. Prior work has documented deceptive ads in contexts such as malicious web advertising [51], social engineering attacks [60], and distributing malware [69, 90].
**Clickbait:** Ads that omit information to entice users, are unclear about the advertised product, or contain sensational, loud, or dense content. _Examples:_ Provocative news headlines, celebrity gossip, incomplete offers ("Click to find out").
Prior work has documented how clickbait ads are attention grabbing by being unclear, and do not live up to users' expectations [64, 92]. It has also been found to waste users' time [73], contain provocative content [63], and act as a vehicle for misinformation [63, 35, 93]. Facebook's policies also recognize the misleading and annoying nature of clickbait, and they enforce policies to reduce exposure to such content [31, 56].
**Potentially Prohibited:** Ads that may not be allowed on the platform according to Facebook's prohibited content policies. _Examples:_ Tobacco, drugs, unsafe dietary supplements, multi-level marketing, weapons.
Facebook's policies prohibit several types of ads [28], including but not limited to ads for tobacco, adult content, body parts, payday loans, and multi-level marketing. Ads that pose a security threat to users, such as spyware or malware, non-functional landing pages, and efforts circumventing review systems, are also prohibited [55]. Even with an extensive policy, Facebook's ability to accurately detect content and enforce policies is limited (see, e.g., prior work documenting challenges in detection and enforcement of political advertising policies [20, 49]). We therefore code for ads whose content matches any of Facebook's prohibited content policies. We note that only Facebook can enforce these policies; therefore, we refer to our annotations as _potentially_ prohibited.
**Sensitive:** Ads that fall under Facebook's content-specific restrictions policy [28]: such content isn't prohibited but, given its sensitive nature, it must comply with additional guidelines, including written permissions and certifications. _Examples - Sensitive: Financial:_ Credit cards, loans, mortgage financing. _Examples - Sensitive: Other:_ Weight loss programs, online mental health prescription services, online slot machines.
Facebook subjects ads for sensitive topics to additional scrutiny on their content and targeting practices [28]. For example, ads for weight loss programs can only be targeted to people at least 18 years or older, financial advertisers must provide authorization by regulatory authorities, and online pharmacies require an additional certification [22]. Within Sensitive ads, we find an increased prevalence (more than two-thirds) for Financial ads, so we break this code into two sub-codes -- Sensitive: Financial and Sensitive: Other.
In addition to platform policies, sensitive ads closely relate to prior work on content that targets user's vulnerabilities [62, 34] -- such content may be benign to some users but may foster negative thoughts or behaviors for others [58, 40]. Gak et al. [34], for instance, found that among people with a history of unhealthy body stigmatization, dieting, or eating disorders, being targeted with weight-loss-related ads had negative emotional and physical outcomes.
**Opportunity:** Ads that present any employment, housing, or educational opportunity to users. _Examples:_ Degree programs, jobs or gig-work, fellowships, scholarships.
We coded for ads that displayed opportunities for users, such as a job or gig, higher education, or apartments and homes for sale. Facebook's own policies prohibit discrimination in targeting of opportunities, or advertising fraudulent or misleading opportunities [30]. Further, cases of discrimination in the delivery of online opportunity ads [45, 6, 17] led us to code these ads to examine their distribution among our participants.
**Healthcare:** Ads that contain products, services or messages related to healthcare, fitness, mental and physical wellness. _Examples:_ Medical devices, gym equipment, public health announcements, fitness programs, health insurance.
We find a wide array of healthcare-related ads that are broader than the content covered by Facebook's content-specific restrictions (Sensitive), and we use a separate code to capture such content. These ads are diverse in nature, ranging from helpful to possibly problematic.
**Political:** Ads that contain any overt references to political subject matters. _Examples:_ Political campaign ads, petitions for political causes.
While we initially coded for political ads, we exclude them from our analysis. We consider ads for political content to be outside of our scope for this study due to challenges in measuring user perceptions of political ads [78]; further, problematic content [93], delivery [7] and policy [49] surrounding political ads are well-addressed in recent prior work.
**Neutral:** Every-day products, services or apolitical news. _Examples:_ Sales, product deals, local events. Further, ads not classified as any of the other categories are considered neutral.
The prevalence of each category in our annotated data is shown in Table 2. Figure A2 also shows concrete examples of each category. We leave a small fraction of ads (122, 0.41%) in our dataset uncategorized because they do not fit into our codebook, but are also not benign; often, these are potentially deceptive offers which we are unable to verify. Since some of our participants are recruited from Facebook, we observe an increased prevalence of research-study-related ads (2558, 7.85%). We use an auxiliary code "Study" to annotate all such ads, and remove them from all subsequent analyses.
In our annotation, we allowed for double-coding when an ad fell into two or more categories (e.g., an unclear ad for "5 Steps My Clients Use to Overcome Anxiety" falls into both Healthcare and Clickbait). However, we do not allow multiple codes when an ad is categorized as Neutral.
### Coding ads
Across all of our recruited participants, we collected 165,556 impressions to 88,509 unique ads. Out of these, 83,507
(94.3%) ads and 156,213 (94.3%) impressions were contributed by the participants ultimately deemed active (and considered in the remainder of the study). Due to the high volume of ads, we annotated a random subset of up to 200 ads per participant per month. Since we repeated this sampling strategy every month for each participant, we avoid introducing time- or participant-related sampling biases to the subset of our data we annotated. Through this sampling process, we were able to annotate 32,587 out of our collected 88,509 ads, for \(\approx\)36.8% of them.
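The monthly sampling step can be summarized by the following sketch (Python/pandas; the `participant`, `month`, and `ad_id` column names and the toy rows are assumptions for illustration):

```python
import pandas as pd

# Toy ads table; the column names and rows are assumed for illustration only.
ads = pd.DataFrame({
    "participant": ["p1"] * 5 + ["p2"] * 3,
    "month":       ["2021-11"] * 8,
    "ad_id":       range(8),
})

CAP = 200  # at most 200 ads per participant per month are annotated

annotation_sample = (
    ads.groupby(["participant", "month"], group_keys=False)
       .apply(lambda g: g.sample(n=min(CAP, len(g)), random_state=0))
)
print(len(annotation_sample))  # here: all 8 rows, since both groups are under the cap
```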
The authors annotated the first two months of data. For the remaining months, we hired two students from our institute as external annotators. We chose to hire annotators locally instead of crowd-workers so that we could train them to use our codebook properly and communicate with them in case of errors. The annotators were shown the ad's text and a screenshot of the ad (e.g., Figure A2) during annotation tasks.
Since our annotation task consists of multiple labels and we consider agreement for more than two annotators, we use Krippendorf's Alpha with the Jaccard set distance function to evaluate agreement between annotators. External annotators were first trained to use the codebook on a pilot task using the authors' gold standard annotations. Subsequently, every month, we picked a 5% subset of the month's ads to overlap across both annotators and the first author. If agreement on this common subset was low (\(\alpha<0.70\)), we went over discrepancies and re-calibrated our use of the codebook. We repeated this exercise each month to ensure annotation quality remained high. The final agreement on our annotated data, \(\alpha=0.726\), is considered'substantial' [48].
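To illustrate the agreement computation, the sketch below implements the Jaccard set distance and a simplified estimator of Krippendorff's Alpha for set-valued labels. It omits the finite-sample correction on the expected disagreement, so it is an approximation of the statistic we report; the example codings are hypothetical.

```python
from itertools import combinations

def jaccard_distance(a, b):
    """Set distance used as the disagreement metric: 1 - |A & B| / |A | B|."""
    a, b = set(a), set(b)
    if not a and not b:
        return 0.0
    return 1.0 - len(a & b) / len(a | b)

def alpha_approx(codings):
    """Approximate Krippendorff's alpha for set-valued labels.

    `codings` maps each ad id to the list of label sets assigned by the
    annotators who coded it. Simplified estimator: observed disagreement is
    averaged over within-ad pairs, expected disagreement over all pairs.
    """
    within = [jaccard_distance(x, y)
              for labels in codings.values() if len(labels) > 1
              for x, y in combinations(labels, 2)]
    all_labels = [l for labels in codings.values() for l in labels]
    total = [jaccard_distance(x, y) for x, y in combinations(all_labels, 2)]
    D_o = sum(within) / len(within)
    D_e = sum(total) / len(total)
    return 1.0 - D_o / D_e

# Hypothetical overlap subset: two external annotators plus the first author.
codings = {
    "ad_1": [{"Neutral"}, {"Neutral"}, {"Neutral"}],
    "ad_2": [{"Clickbait", "Healthcare"}, {"Clickbait"}, {"Clickbait", "Healthcare"}],
    "ad_3": [{"Deceptive"}, {"Clickbait"}, {"Deceptive"}],
}
print(round(alpha_approx(codings), 3))
```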
We deliberately did not use machine learning, to avoid mis-labeling points in our data. Deceptive content, in particular, requires a level of investigation that would not be possible with automation. To investigate whether an ad is indeed deceptive, annotators are asked to visit the advertised web page, look at the advertiser's Facebook page, and inspect reviews on Facebook and the Better Business Bureau.
Post-processing.Finally, while we annotate multiple codes per ad for a richly described dataset, we post-process our coding to translate into one code per ad. We do this for easier interpretation of the following results (Section 5), particularly in regression analyses. In line with the severity of restrictions in Facebook's policies [28], we translate sets of codes to a single code in the following precedence order:
Potentially Prohibited \(>\) Deceptive \(>\) Clickbait \(>\) Sensitive \(>\) Opportunity \(>\) Healthcare \(>\) Neutral.
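Concretely, this collapse can be expressed as a minimal lookup over the precedence list (a sketch; it treats Sensitive as a single code, as in the order above):

```python
# Precedence of codes (most severe first), mirroring the order above.
PRECEDENCE = ["Potentially Prohibited", "Deceptive", "Clickbait",
              "Sensitive", "Opportunity", "Healthcare", "Neutral"]
RANK = {code: i for i, code in enumerate(PRECEDENCE)}

def collapse(codes):
    """Map a set of annotated codes for one ad to the single highest-precedence code."""
    return min(codes, key=RANK.__getitem__)

print(collapse({"Healthcare", "Clickbait"}))  # -> Clickbait
```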
## 5 Results
We now summarize our study's results. Section 5.1 identifies which categories of ads participants find problematic (RQ1). Section 5.2 investigates the distribution of problematic ads (RQ2). Section 5.3 examines the reasons for the discovered discrepancies (RQ3).
### What do participants find problematic?
To evaluate whether our participants found certain ad categories problematic, we first examine general dislike: whether participants dislike a higher fraction of particular ads. We then evaluate reasons for disliking: whether participants have different reasons for disliking each category in our codebook. Specifically, to evaluate general dislike, we use \(\chi^{2}\) proportion tests to evaluate differences in the proportion of ads in each category that participants marked as "I do not like this ad" in the second question of our survey (Section 3.2).
Figure 1 shows the fraction of responses, for each category, that were disliked by participants. Across our surveys, participants reported disliking nearly half of the ads we had classified as Clickbait (48.98%), Deceptive (49.16%), and Potentially Prohibited (50%). Participants reported disliking 43.58% of the ads we coded as Sensitive, while Neutral, Healthcare and Opportunity ads were disliked less: 29.24%, 31.09%, and 31.87%, respectively.
These differences across ad categories are significant (\(p<0.001\), omnibus \(\chi^{2}=186.25\); \(\omega=0.15\)). In a series
\begin{table}
\begin{tabular}{l r r}
**Code** & **Count** & **\%** \\ \hline Neutral & 20,596 & 68.52 \\ Healthcare & 3564 & 11.86 \\ Opportunity & 2267 & 7.54 \\ Sensitive: Financial & 1429 & 4.75 \\ Sensitive: Other & 631 & 2.10 \\ Clickbait & 1182 & 3.93 \\ Deceptive & 542 & 1.80 \\ Potentially Prohibited & 253 & 0.84 \\ Political & 263 & 0.87 \\ \end{tabular}
\end{table}
Table 2: Prevalence of each code in our annotated dataset.
Figure 1: Fraction of responses where participants showed dislike for an ad category (i.e., chose “I do not like this ad” in the survey). 95% confidence intervals for (binomial) proportions are estimated via normal approximation.
of pair-wise \(\chi^{2}\) proportion tests comparing each of our coded categories with Neutral, with Benjamini & Hochberg correction [12], we observe that Potentially Prohibited, Deceptive, Clickbait, and both types of Sensitive ads (Financial and Other) are all disliked significantly more than Neutral ads (\(p<0.001\), \(\chi^{2}>24\); \(0.07\leq\omega\leq 0.13\)). Opportunity (\(p=0.121\), \(\chi^{2}=2.60\); \(\omega<0.10\)) and Healthcare (\(p=0.28\), \(\chi^{2}=1.14\); \(\omega<0.10\)) ads, on the other hand, are not significantly more or less disliked than Neutral ads. To identify whether any of the ad categories are disliked more than each other (rather than just more than Neutral) we conduct an additional series of pair-wise corrected tests, comparing differences between sequential ad categories (e.g., comparing Potentially Prohibited, the most disliked category, with Deceptive, the next most disliked). This testing finds only one significant difference, between Sensitive: Other and Opportunity (\(p=0.003\), \(\chi^{2}=11.34\); \(\omega<0.10\)). In combination, our statistical results suggest that Clickbait, Deceptive, Potentially Prohibited, and Sensitive ads form an equivalence class of potentially problematic ads.
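The pair-wise comparisons against Neutral with Benjamini & Hochberg correction can be sketched as follows (scipy/statsmodels; the counts are illustrative placeholders rather than our survey data):

```python
import numpy as np
from scipy.stats import chi2_contingency
from statsmodels.stats.multitest import multipletests

# Illustrative (disliked, not disliked) counts; the Neutral row is the baseline.
neutral = [585, 1415]
categories = {
    "Potentially Prohibited": [50, 50],
    "Deceptive":              [98, 101],
    "Clickbait":              [240, 250],
    "Sensitive: Financial":   [218, 282],
    "Sensitive: Other":       [109, 141],
}

pvals = []
for counts in categories.values():
    chi2, p, dof, expected = chi2_contingency(np.array([counts, neutral]))
    pvals.append(p)

# Benjamini & Hochberg correction over the family of pair-wise tests.
reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")
for name, p, r in zip(categories, p_adj, reject):
    print(f"{name}: adjusted p = {p:.3g}, significant = {r}")
```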
To understand _why_ participants dislike these ad categories, we investigate the specific reasons they reported for disliking in the first survey question. Table 3 shows the odds ratios (exponentiated regression coefficients) of eight mixed-effects logistic regression models, with a random intercept for the participant. The odds ratios (O.R.) give the relative odds that an ad category was described with a certain dislike reason in survey responses, compared to the same dislike reason for our baseline (Neutral). For each ad category (column), an O.R. of 1 means a given dislike reason (row) is not used to describe the ad category more often than Neutral. Values greater than 1 correspond to increased odds of participants describing that ad category with the given reason, while values less than 1 indicate lower odds.
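As a simplified illustration of how such odds ratios are obtained, the sketch below fits a plain logistic regression of one dislike reason on the ad category and exponentiates the coefficients. Our actual models additionally include a random intercept per participant, which this approximation omits; the data below are synthetic.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic survey responses (illustrative only): one row per rated ad,
# with a binary indicator for the "scam" dislike reason and the ad category.
rng = np.random.default_rng(0)
n = 600
df = pd.DataFrame({"category": rng.choice(["Neutral", "Deceptive", "Clickbait"], size=n)})
base = {"Neutral": 0.10, "Deceptive": 0.25, "Clickbait": 0.18}
df["scam"] = (rng.random(n) < df["category"].map(base)).astype(int)

# Simplified version of the models behind Table 3: a plain logit on category,
# with Neutral as the reference level (no participant random intercept here).
result = smf.logit("scam ~ C(category, Treatment('Neutral'))", data=df).fit(disp=0)

odds_ratios = np.exp(result.params)
conf_int = np.exp(result.conf_int())
print(odds_ratios)   # exponentiated coefficients = odds ratios vs. Neutral
print(conf_int)      # 95% CIs on the odds-ratio scale
```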
We first observe in Table 3 that participants are significantly more likely to describe the combined most highly disliked ad categories ("Problematic" column) as irrelevant (O.R. = 1.34, \(p=0.011\)), clickbait (O.R. = 1.47, \(p=0.018\)) and scam (O.R. = 1.64, \(p<0.001\)). Looking at the disliked categories individually, we find that Deceptive, Clickbait and Sensitive ads are also significantly more likely to be described as scams (all O.R. \(\geq 1.45\), \(p<0.05\)). The odds of Sensitive: Other ads, in particular, being described as scams are more than twice the odds of Neutral ads being described as scams (O.R. = 2.08, \(p=0.001\)). Also for these ads, participants' odds of disliking the advertiser (O.R. = 2.10, \(p=0.007\)) or product (O.R. = 1.73, \(p=0.032\)) are significantly higher. Further, respondents find Potentially Prohibited ads to be unclear in their description (O.R. = 1.89, \(p=0.042\)). Finally, our results find evidence that participants recognize the clickbait nature
\begin{table}
\begin{tabular}{l c c c c c c|c c} \multicolumn{6}{l}{**Dislike Reason**} & \multicolumn{6}{l}{Odds Ratio [95\% CI]} \\ \hline & \multicolumn{1}{c}{**Pot.**} & \multicolumn{1}{c}{**Deceptive**} & \multicolumn{1}{c}{**Clickbait**} & \multicolumn{1}{c}{**Sensitive:**} & \multicolumn{1}{c}{**Sensitive:**} & \multicolumn{1}{c}{**Polematic**} & \multicolumn{1}{c}{**Opportunity**} & \multicolumn{1}{c}{**Healthcare**} \\ & \multicolumn{1}{c}{**Prohibited**} & \multicolumn{1}{c}{**Deceptive**} & \multicolumn{1}{c}{**Clickbait**} & \multicolumn{1}{c}{**Financial**} & \multicolumn{1}{c}{**Other**} & \multicolumn{1}{c}{**Other**} & \multicolumn{1}{c}{**Problematic**} & \multicolumn{1}{c}{**Opportunity**} & \multicolumn{1}{c}{**Healthcare**} \\ \hline intercept & \(0.03^{***}\) & \(0.036^{***}\) & \(0.103^{***}\) & \(0.16^{***}\) & \(0.052^{***}\) & \(0.403^{***}\) & \(0.236^{***}\) & \(0.231^{***}\) \\ & \(\{0.02,0.06\}\) & \(\{0.02,0.07\}\) & \(\{0.07,0.16\}\) & \(\{0.11,0.24\}\) & \(\{0.03,0.09\}\) & \(\{0.29,0.55\}\) & \(\{0.17,0.32\}\) & \(\{0.17,0.32\}\) \\ \hline
**advertiser** & \(0.438\) & \(0.701\) & \(0.937\) & \(0.679\) & \(\mathbf{2.101^{*}}\) & \(\mathbf{0.99}\) & \(\mathbf{1.437}\) & \(\mathbf{1.098}\) \\ & \(\{0.16,1.21\}\) & \(\{0.36,1.37\}\) & \(\{0.55,1.6\}\) & \(\{0.44,1.14\}\) & \(\{1.23,2.63\}\) & \(\{0.71,1.39\}\) & \(\{0.95,2.17\}\) & \(\{0.69,1.76\}\) \\ \hline
**clickbait** & \(0.657\) & \(\mathbf{2.465^{*}}\) & \(\mathbf{1.983^{*}}\) & \(0.93\) & \(1.265\) & \(\mathbf{1.472^{*}}\) & \(1.124\) & \(0.721\) \\ & \(\{0.27,1.58\}\) & \(\{1.37,4.43\}\) & \(\{1.26,3.31\}\) & \(\{0.85,1.5\}\) & \(\{0.69,2.32\}\) & \(\{1.07,2.03\}\) & \(\{0.74,1.7\}\) & \(\{0.44,1.18\}\) \\ \hline
**design** & \(0.734\) & \(\mathbf{1.098}\) & \(0.935\) & \(\mathbf{0.734}\) & \(\mathbf{1.168}\) & \(\mathbf{0.884}\) & \(\mathbf{1.175}\) & \(\mathbf{1.091}\) \\ & \(\{0.36,1.49\}\) & \(\{0.58,2.08\}\) & \(\{0.85,1.5\}\) & \(\{0.48,1.15\}\) & \(\{0.67,2.05\}\) & \(\{0.68,1.2\}\) & \(\{0.68,1.1\}\) & \(\{0.68,1.1\}\) & \(\{0.73,1.63\}\) \\ & \(\{1.677\}\) & \(1.071\) & \(1.043\) & \(\mathbf{1.574^{*}}\) & \(1.327\) & \(\mathbf{1.34^{*}}\) & \(\mathbf{1.406^{*}}\) & \(1.23\) \\
**irrelevant** & \(\{0.99,2.85\}\) & \(\{0.86,1.69\}\) & \(\{0.74,1.48\}\) & \(\{1.15,1.26\}\) & \(\{0.87,2.02\}\) & \(\{1.07,1.68\}\) & \(\{1.05,1.87\}\) & \(\{0.91,1.67\}\) \\ \hline & \(2.754\) & \(1.759\) & \(1.359\) & \(0.78\) & \(0.15\) & \(\mathbf{1.128}\) & \(0.818\) & \(\mathbf{1.797}\) \\ & \(\{0.91,8.37\}\) & \(\{0.61,5.04\}\) & \(\{0.59,3.16\}\) & \(\{0.32,0.02\}\) & \(\{0.02,1.27\}\) & \(\{0.62,2.09\}\) & \(\{0.38,1.75\}\) & \(\{0.85,3.78\}\) \\ \hline
**product** & \(0.832\) & \(0.954\) & \(1.078\) & \(0.997\) & \(\mathbf{1.734^{*}}\) & \(1.048\) & \(0.987\) & \(0.705\) \\ & \(\{0.41,1.7\}\) & \(\{0.56,1.54\}\) & \(\{0.68,1.7\}\) & \(\{0.66,1.52\}\) & \(\{1.05,2.88\}\) & \(\{0.78,1.4\}\) & \(\{0.67,1.45\}\) & \(\{0.45,1.09\}\) \\ \hline
**pushy** & \(0.747\) & \(1.209\) & \(0.682\) & \(1.367\) & \(0.499\) & \(1.008\) & \(\mathbf{0.572^{*}}\) & \(\mathbf{1.505}\) \\ & \(\{0.28,1.97\}\) & \(\{0.6,2.45\}\) & \(\{0.37,
of the ads we categorize as Clickbait (O.R. = 1.98, \(p=0.003\)), as well as those we categorize as more broadly Deceptive (O.R. = 2.46, \(p=0.002\)), the latter of which are likely to use attention-grabbing content to lure people to click [44, 70].
Comparatively, the odds of Opportunity and Healthcare ads being described by participants as unclear are lower than the corresponding odds for Neutral ads (all O.R. \(\leq\) 0.55, \(p<0.05\)). We also note that Opportunity ads, despite having higher odds of being described as irrelevant (O.R. = 1.4, \(p=0.020\)), have lower odds of being described as pushy than Neutral ads.
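The sketch below illustrates how such odds ratios are obtained by exponentiating regression coefficients. For simplicity it fits a plain logistic regression on simulated placeholder responses; the paper's models additionally include a random intercept per participant, which would require a mixed-effects package (e.g., lme4 or a Python wrapper). All data and column names here are assumptions made for the example.

```python
# Sketch of deriving odds ratios for one dislike reason (here "scam") across ad categories,
# relative to Neutral. A plain logistic regression stands in for the paper's mixed-effects model.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical survey responses: one row per surveyed (participant, ad) pair.
rng = np.random.default_rng(0)
cats = ["Neutral", "Clickbait", "Deceptive", "Sensitive: Other"]
df = pd.DataFrame({
    "participant": rng.integers(0, 132, 4000),
    "category": rng.choice(cats, 4000),
})
df["reason_scam"] = rng.binomial(1, np.where(df["category"] == "Neutral", 0.05, 0.10))

model = smf.logit("reason_scam ~ C(category, Treatment('Neutral'))", data=df).fit(disp=0)
odds_ratios = np.exp(model.params)    # exponentiated coefficients = odds ratios
ci = np.exp(model.conf_int())         # 95% confidence intervals on the odds-ratio scale
print(pd.concat([odds_ratios.rename("OR"), ci.rename(columns={0: "2.5%", 1: "97.5%"})], axis=1))
```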
Overall, we find differences in both rates of dislike, and reasons for disliking across our defined ad categories. Potentially Prohibited, Deceptive, Clickbait, and Sensitive ads are found to be disliked at a higher rate than other categories, and for more severe reasons beyond irrelevance: participants recognize their clickbait-y and scammy nature; dislike the sensitive products they advertise and the advertisers selling those products; and find them unclear, potentially due to advertisers evading platform prohibitions. As such, for the remainder of this paper we refer to the collection of these four ad categories as Problematic.
### How are Problematic ads distributed?
To understand how each ad category is distributed over our panel, we investigate the skew in its distribution over our participants: Figure 2 shows a cumulative distribution function (CDF) for all ads in each category. We also employ the Gini coefficient to precisely quantify this inequality. While highly recurrent impressions of ads are relatively sparse in our data--a median of two impressions per ad per participant--we account for the frequency of impressions in this analysis as well.
First, we observe that Neutral ads are not uniformly distributed, as observed by the distance from a uniform distribution. Because of this inherent skew in ad distribution, we treat Neutral (Gini = 0.48) as the baseline for comparison. Second, we see that Healthcare (Gini = 0.60) and Opportunity (Gini = 0.59) ads are more skewed (i.e., less uniformly distributed) than Neutral. This may be because Healthcare and Opportunity ads focus on narrower themes, and may be more personalized to users by advertisers or the platform. Third, we find that all five Problematic categories are more skewed across participants than Neutral. In these categories, we note the following order from least to most skewed: Sensitive: Other (Gini = 0.62), Sensitive: Financial (0.65), Clickbait (0.66), Potentially Prohibited (0.67), and Deceptive (0.69). To offer a concrete example of this skew: 80% of the Deceptive ad impressions (0.8 on \(y\)-axis) are delivered to just 36 participants (\(x\)-axis), compared to Healthcare, where the same fraction of impressions are delivered to 47 participants (or 60 participants in the case of Neutral).
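A small sketch of the Gini computation on per-participant impression counts is given below; the counts are hypothetical placeholders, not the panel's data.

```python
# Sketch: Gini coefficient of how an ad category's impressions are spread across participants.
import numpy as np

def gini(impressions_per_participant):
    """Gini coefficient of a non-negative array; 0 = perfectly even, 1 = maximally skewed."""
    x = np.sort(np.asarray(impressions_per_participant, dtype=float))
    n = x.size
    if n == 0 or x.sum() == 0:
        return 0.0
    cum = np.cumsum(x)
    # Standard formula based on the Lorenz curve of the sorted values.
    return (n + 1 - 2 * (cum / cum[-1]).sum()) / n

# Hypothetical impression counts for 132 participants (placeholder data).
rng = np.random.default_rng(1)
neutral = rng.poisson(150, 132)
deceptive = rng.poisson(4, 132) * rng.binomial(1, 0.4, 132)  # sparser, more skewed
print(f"Gini(Neutral)   = {gini(neutral):.2f}")
print(f"Gini(Deceptive) = {gini(deceptive):.2f}")
```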
Next, we focus on how individual-level exposure to Problematic ads vary for our participants. First, we note that data contributions themselves are inherently skewed, since participants have varying rates of Facebook use. To control for these differences, we look at the fraction of every participant's _ad diet_, i.e., all ads seen by them during the study, that consisted of Neutral vs. Problematic categories. Figure 3 shows the frequency distribution of these fractions across our panel.
We first observe that on average, a higher fraction of our panel's ad diet is composed of Neutral ads (\(\mu=0.71\), \(\sigma=0.12\)), compared to Problematic (\(\mu=0.12\), \(\sigma=0.08\)). Confirming our findings in the prior section, the distribution of Problematic has a heavier tail, suggesting that certain participants in our panel have increased exposure to these ads compared to the average. This observation is supported by measuring the skewness of these distributions, a statistical measure of asymmetry of a probability distribution. Recall that positive skew implies a distribution has a long right tail, while a negative skew means the left tail is longer. We measure the skewness for Neutral in Figure 3 as -0.11, and for Problematic as 0.84. These differences imply that despite the average exposure to Neutral ads in our panel being 71%, certain participants exist at the long left tail of this distribution, who are shown fewer Neutral ads, and a higher fraction of Problematic ads.
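The ad-diet fractions and their skewness can be computed along the following lines; the impression log and its columns are hypothetical placeholders, with scipy's sample skewness standing in for the statistic reported above.

```python
# Sketch: per-participant "ad diet" fractions and distribution skewness.
import numpy as np
import pandas as pd
from scipy.stats import skew

# Hypothetical impression log: one row per ad impression (placeholder data).
rng = np.random.default_rng(2)
log = pd.DataFrame({
    "participant": rng.integers(0, 132, 30000),
    "category": rng.choice(["Neutral", "Problematic", "Other"], 30000, p=[0.71, 0.12, 0.17]),
})

# Fraction of each participant's impressions falling in each category.
diet = (log.groupby("participant")["category"]
           .value_counts(normalize=True)
           .unstack(fill_value=0.0))

print(diet[["Neutral", "Problematic"]].mean())          # average ad-diet composition
print("skew(Neutral)     =", round(skew(diet["Neutral"]), 2))
print("skew(Problematic) =", round(skew(diet["Problematic"]), 2))
```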
We next examine these participants who are shown a higher fraction of Problematic ads. Specifically, we investigate whether for any particular demographic groups, the Problematic ads constitute a higher fraction of ad diets. Table 4 shows coefficients of six linear models that we build to examine the relationship between participant demographics and fraction of Problematic ads among the ads they encountered. The intercept shows the average fraction in the ad diets of participants
Figure 2: Cumulative Distribution Function (CDF) of impressions, showing what fraction of each ad category’s total (\(y\)-axis) is contributed by how many participants (\(x\)-axis), given 132 total active participants.
for whom all independent demographic variables are false, i.e., white, non-Hispanic men, born in 1980 or after, without a college degree. The proportion of these participants' ad diets that is composed of Problematic ads is 12% (first column in Table 4). All statistically significant coefficients in the table mark biases in comparison to that baseline.
We find that the ad diets of older participants, born before 1980, are (additively) composed of 5.1% more Problematic ads (CI: 2-8%) than younger participants. Women's ad diets are composed of 6.4% fewer Problematic ads (CI: 4-9%) than those who do not identify as women--largely because women see 4.5% fewer Sensitive: Financial ads (CI: 2-7%). We also note that older participants' ad diets are composed of higher fractions of Deceptive (1.1%, CI: 0-2%) and Clickbait ads (1.3%, CI: 1-3%). Ad diets of Black participants contain 1.3% (CI: 0-2%) more Clickbait ads than those of white or Asian participants in our panel. However, older participants' and Hispanic participants' ad diets have a slightly lower fraction of Potentially Prohibited ads, 0.3% (CI: 0-1%) and 0.7% (CI: 0-1%) respectively, potentially because these ads target products assumed by advertisers or the platforms not to be of interest to these groups. To account for possible variance in participants' privacy behavior (e.g., changing ad preferences), we model their awareness of privacy settings as an additional independent variable in Table A1. We find that privacy awareness does not have any significant effect on the disparate exposure that we observe, and demographic skews similar to those in Table 4 persist. Demographic skews for other ad categories are also shown in Table A1.
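A sketch of one such linear model is given below, regressing the fraction of Problematic ads in a participant's ad diet on demographic indicator variables; all data and column names are hypothetical placeholders, not the panel data.

```python
# Sketch: OLS model of the fraction of Problematic ads in a participant's ad diet
# against demographic indicators (all values below are placeholders).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 132
panel = pd.DataFrame({
    "frac_problematic": rng.beta(2, 14, n),          # fraction of ad diet
    "woman": rng.binomial(1, 0.5, n),
    "black": rng.binomial(1, 0.2, n),
    "asian": rng.binomial(1, 0.1, n),
    "hispanic": rng.binomial(1, 0.15, n),
    "college": rng.binomial(1, 0.5, n),
    "genx_or_older": rng.binomial(1, 0.4, n),
})

model = smf.ols("frac_problematic ~ woman + black + asian + hispanic + college + genx_or_older",
                data=panel).fit()
print(model.params)       # intercept = baseline group's average fraction
print(model.conf_int())   # 95% confidence intervals, as reported in Table 4
```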
### Who is responsible for skews?
With a better understanding of which participants have increased exposure to problematic ads, we next identify the reasons behind these differences. As discussed in Section 2, whether a particular user sees an ad on Facebook is affected by two main factors: (a) the user has to be among the audience targeted by the advertiser; (b) Facebook's ad delivery optimization considers the ad relevant to the user, which contributes to it winning an auction [27]. Thus, one can expect that when the advertiser targets a larger audience, the delivery optimization has more influence in selecting the actual audience. With this intuition, we start by investigating audience size across our ad categories.
\begin{table}
\begin{tabular}{l c|c c c c c} \multicolumn{1}{c}{**Variable**} & \multicolumn{5}{c}{**Estimate (\(\beta\))**} \\ & \multicolumn{1}{c|}{**Problematic**} & \multicolumn{2}{c}{**Pot.**} & \multicolumn{2}{c}{**Deceptive**} & \multicolumn{2}{c}{**Clickbait**} & \multicolumn{2}{c}{**Sensitive:**} & \multicolumn{2}{c}{**Sensitive:**} \\ & \multicolumn{1}{c|}{**} & \multicolumn{1}{c|}{**Prohibited**} & \multicolumn{1}{c|}{**Deceptive**} & \multicolumn{1}{c|}{**Clickbait**} & \multicolumn{1}{c}{**Financial**} & \multicolumn{1}{c}{**Other**} \\ \hline \hline & 0.12** & 0.01** & 0.01** & 0.008 & 0.012 & 0.07** & 0.02** \\ Intercept & (0.09, 0.15) & (0.01, 0.01) & (0.002) & (0.002) & (0.002) & (0.04, 0.11) & (0.001, 0.03) \\ \hline & **-0.064** & -0.002 & -0.005 & -0.008 & **-0.045** & -0.004 \\
**Gender**: Woman & [-0.09, -0.04] & [0.0] & [-0.0] & [-0.02, 0] & [-0.07, -0.02] & [-0.02, 0.01] \\ & 0.025 & -0.001 & 0.006 & **0.013** & 0.004 & **0.002** \\
**Race**: Black & [-0.00, 0.06] & [-0.01] & [-0.002] & [-0.02, 0.003] & [-0.001, 0.002] \\ & -0.002 & 0.001 & -0.003 & 0.005 & -0.007 & 0.002 \\
**Race**: Asian & [-0.04, 0.04] & [0.01] & [-0.01] & [-0.02, 0.01] & [-0.01, 0.02] & [-0.03, 0.03] & [-0.02, 0.02] \\ & 0.023 & **-0.007** & **0.005** & -0.007 & **0.036** & -0.003 \\
**Ethnicity**: Hispanic & [-0.03, 0.08] & [-0.01, 0.01] & [-0.002] & [-0.03, 0.01] & [-0.01, 0.08] & [-0.02, 0.02] \\ & 0.01 & -0.002 & 0.004 & 0.01 & -0.003 & 0 \\
**Education**: college and above & [-0.02, 0.004] & [0.0] & [0.01, 0.01] & [-0.02] & [-0.03, 0.02] & [-0.01, 0.01] \\ & **0.051** & **-0.003** & **0.011** & **0.017** & **0.017** & **0.009** \\
**Age**: Gen-X and older & [-0.02, 0.08] & [-0.01, 0.0] & [0.002] & [0.01, 0.003] & [-0.01, 0.004] & [-0.002] \\ \hline \end{tabular}
\end{table}
Table 4: Coefficients of linear regression models, with 95% confidence intervals, modeling the relationship between exposure to Problematic ads and participants’ demographics. Dependent variable (columns): fraction of ad type, out of total ad diet. Independent variable (rows): participant demographics. Union of all problematic ad types modeled in the Problematic column. \(p<0.001\)***; \(p<0.01\)**, \(p<0.05\)*.
Figure 3: Fractions of exposure to Neutral and Problematic ads, out of participants’ overall ad diet. We factor in frequency of seeing an ad while computing fractions. Smoothed lines are kernel density estimates (KDE) of the probability distribution.
As described in Section 3.3, we query Facebook's APIs to obtain audience sizes for each of our collected ads; Figure 4 shows the distributions of these audience sizes broken down by ad category. Observing Problematic categories, we find that the median target audience sizes for Sensitive: Financial (153.9M) and Clickbait (168.2M) ads are larger than for Neutral ads (117.9M); a pairwise Kruskal-Wallis [46] test rejected the null hypothesis that the medians are equal (\(p=0.001\) for both tests). This implies that Facebook exercises more control in picking the audience subset for these categories. On the other hand, median audience sizes for Potentially Prohibited (82.6M) and Sensitive: Other (49.9M) ads are significantly smaller than Neutral (\(p=0.006\) and \(p<0.001\), respectively), indicating that advertisers for these ads more precisely specify the audiences they want to reach. We also note that audience sizes for Opportunity (36.8M) and Healthcare (83.4M), considered non-problematic in this study, are actually smaller than Neutral (\(p<0.001\)).
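A sketch of one pairwise comparison, on hypothetical audience sizes (placeholders loosely shaped like Figure 4), is shown below.

```python
# Sketch: pairwise Kruskal-Wallis test comparing audience sizes of one category vs. Neutral.
import numpy as np
from scipy.stats import kruskal

rng = np.random.default_rng(4)
# Hypothetical audience sizes in millions (placeholder data, not the collected ads).
neutral_sizes = rng.lognormal(mean=np.log(118), sigma=0.8, size=500)
clickbait_sizes = rng.lognormal(mean=np.log(168), sigma=0.6, size=120)

stat, p = kruskal(neutral_sizes, clickbait_sizes)
print(f"H = {stat:.2f}, p = {p:.4f}")
print("median Neutral   =", round(np.median(neutral_sizes), 1), "M")
print("median Clickbait =", round(np.median(clickbait_sizes), 1), "M")
```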
Next, we investigate what targeting options advertisers use to scope these various audiences. We find that the most used targeting option is age: nearly half the ads use some form of age targeting (49.7%). Around a quarter of ads use Custom Audiences [25] (25.6%) and platform-inferred user interests (26.9%); On the other hand, advertisers for 21.2% of the ads in our dataset don't change the targeting criteria at all, and use the default targeting of all U.S. adults (267 million users). Finally, we find that only 12.1% ads in our data specifically target by gender; a vast majority use the default option of targeting all genders. Note that these percentages do not sum up to 100% because each ad can be targeted using multiple targeting criteria. Below, we detail how age, custom audiences, interests and default targeting are used in our data.
**Age.** Figure 5 shows the fraction of ads that include users of a given age in their targeting; fractions of all ages are presented together as a line, which can be perceived as a function of age. Each panel shows this function for a different ad category, and also features the function for Neutral ads for easier comparison. A category-specific line above the (gray) Neutral line signifies that the age group was more often targeted with ads of that category compared to Neutral ads. Focusing on Problematic categories, ads for Sensitive: Other often exclude users aged 18-21. This can be explained by the prevalence of ads for alcoholic beverages in this category, selling of which to individuals below 21 is illegal in the US. Sensitive: Financial, Clickbait and Deceptive ads include older audiences at a higher rate than Neutral ads, which could explain why Deceptive and Clickbait skews towards older users in our panel. Similarly, Potentially Prohibited ads also exclude users over the age of 45. These differences provide evidence that advertisers actively use the platform's age targeting features to find older users to show clickbait and scam content to. This is notable, since prior work suggests that older users may be more susceptible to such content [59].
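The age-inclusion fractions plotted in Figure 5 can be computed along these lines; the example ads and their age ranges are hypothetical placeholders.

```python
# Sketch: fraction of ads in each category whose targeted age range includes a given age.
import pandas as pd

# Hypothetical ads with the age range each one targets (placeholders, not real ads).
ads = pd.DataFrame({
    "category": ["Clickbait", "Clickbait", "Neutral", "Sensitive: Other", "Neutral"],
    "age_min":  [45, 18, 18, 21, 18],
    "age_max":  [65, 65, 65, 65, 54],
})

ages = range(18, 66)
for cat, group in ads.groupby("category"):
    # For each age, the share of this category's ads whose targeted range includes it.
    frac = {a: ((group["age_min"] <= a) & (a <= group["age_max"])).mean() for a in ages}
    print(cat, {a: frac[a] for a in (18, 30, 50, 65)})
```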
**Custom Audiences.** We make a distinction between custom audiences where the advertiser provides Facebook with a list of particular individuals to target using their PII (e.g., phone number, email), and Lookalike Audiences [26] that Facebook creates by finding users similar to those that the advertiser provides. The distinction is crucial because of the difference in control: the advertiser exercises complete control over who to include in the first group; however, they have little influence over the characteristics of the lookalike audiences. Figure 6 shows the prevalence of different types of custom audiences per ad category. We observe that lookalike audiences are used more often than PII custom audiences for all categories. We also note that as many as a quarter of Sensitive: Other ads were targeted using Lookalike Audiences. This suggests that while advertisers use the platform's tool to find vulnerable audiences (e.g., Figure 5), they often outsource this role to the platform, especially when targeting for sensitive themes like weight loss or gambling.
**Interests.** Precise targeting by inferred interests is one of the features that distinguishes online behavioral advertising from traditional advertising models. A total of 6,028 unique interests were used to target our participants, including highly specific and sensitive inferences pertaining to health ("Multiple sclerosis awareness", "Fibromyalgia awareness"), sexuality ("LGBT community", "Gay Love"), religion ("Evangelicalism", "Judaism"), and others. It is perhaps surprising that a majority of ads in our dataset (73.1%) do not actually use this functionality. Table 1 shows the most commonly targeted interests for each ad category.
**Default Targeting.** Finally, we investigate the delivery of ads that used the default targeting (i.e., the advertiser included all U.S. adults in their target audience). This allows us to observe the behavior of the delivery optimization in cases where the skew cannot be attributed to the advertiser's actions. To identify skews in delivery, we run a series of linear models,
Figure 4: Audience size distributions of different ad categories. The red vertical lines mark the median audience size, the box indicates the 25th and 75th percentile, and the whiskers extend from the box by 1.5x of the inter-quartile range (IQR).
shown in Table 5, to examine the relation between fraction of problematic ads in ad diets and participant demographics, similar to Section 5.2. In contrast to that analysis, however, we subset our data to only include ads that have default targeting from the advertiser. Therefore, for each participant, we model, say, the fraction of Clickbait they saw that had default targeting, out of all of their default-targeted ads. Consequently, we capture purely skews that arise due to the platform's optimization, since the advertiser specified the broadest possible targeting, and Facebook had to make its judgment of a relevant audience. Again, the first row (intercept) shows the fraction of ad diets for participants who are non-Hispanic white, younger, and without a college education; all significant coefficients mark biases in comparison to that baseline.
Table 5 shows that (similar to Table 4), the effect for older participants seeing a 7.7% higher fraction of Problematic ads (CI: 2-13%), and women seeing 5.9% fewer of them (CI: 1-11%), persists, even without advertiser targeting. Specifically, older participants' ad diets (additively) contain 4.1% (CI: 2-6%) more Clickbait than the younger participants. We also observe a novel effect of Hispanic participants seeing 2.8% more Deceptive ads (CI: 1-5%). This implies that while their overall ad diets might not contain a significantly higher fraction of scams (Table 4)--delivery optimization independently skews these ads towards Hispanic participants. In further nuance, the effect of women seeing fewer Problematic ads can be explained by their ad diets comprising of 4.6% fewer Sensitive: Financial ads (CI: 0-9%), and 0.6% fewer Potentially Prohibited ads (CI: 0-1%) compared to participants who don't identify as women. These differences provide evidence that in addition to an advertiser's targeting--or regardless of it--Facebook's delivery optimization algorithms are also responsible for skewing the delivery of Problematic ads.
## 6 Concluding Discussion
Our study presents three main contributions. _First_, gathering insights from a diverse group of Facebook users, we identify a collection of Problematic categories of ads that were significantly more disliked, and determine participants' reasons for disliking these ads--they often mistrust these ads and recognize their deceptive nature. _Second_, we observe that
Figure 5: Fraction of ads that include given age ranges in their targeting. The thin line in each panel shows the fraction among Neutral ads for easier comparison.
\begin{table}
\begin{tabular}{l c c c c c c} \multicolumn{1}{c}{**Variable**} & \multicolumn{4}{c}{**Estimate (\(\beta\))**} \\ & \multicolumn{1}{c}{**Problematic**} & \multicolumn{1}{c}{**Pot.**} & \multicolumn{1}{c}{**Deceptive**} & \multicolumn{1}{c}{**Clickbait**} & \multicolumn{1}{c}{\begin{tabular}{c} **Sensitive:** \\ **Financial** \\ \end{tabular} } & \multicolumn{1}{c}{**Sensitive:**} & \multicolumn{1}{c}{**Sensitive:**} \\ & \multicolumn{1}{c}{**Problemited**} & \multicolumn{1}{c}{**Deceptive**} & \multicolumn{1}{c}{**Clickbait**} & \multicolumn{1}{c}{
\begin{tabular}{c} **Financial** \\ \end{tabular} } & \multicolumn{1}{c}{**Other**} \\ \hline \hline \multirow{2}{*}{Intercept} & 0.191\({}^{***}\) & 0.013\({}^{***}\) & 0.014\({}^{*}\) & 0.023 & 0.133\({}^{***}\) & 0.009\({}^{*}\) \\ & (0.13, 0.26) & [0.01, 0.02) & [0.003] & [0.01, 0.05] & [0.08, 0.18] & [0.002] \\ \hline \multirow{2}{*}{**Gender**: Woman} & **-0.059\({}^{*}\)** & **-0.006\({}^{*}\)** & -0.007 & -0.003 & **-0.046\({}^{*}\)** & 0.004 \\ & (-0.11, -0.01) & [-0.01, 0] & [-0.02, 0.02] & [-0.03, 0.02] & [-0.09, 0.01] & [0.01] \\ \hline \multirow{2}{*}{**Race**: Black} & 0.01 & 0.002 & 0.007 & 0.011 & **-0.007** & -0.003 \\ & (0.05, 0.07) & [0.01] & [-0.01, 0.02] & [-0.02, 0.04] & [-0.06, 0.04] & [-0.01, 0.0] \\ & -0.019 & -0.005 & -0.003 & -0.007 & -0.003 & 0 \\ & (-0.1, 0.06) & [-0.01, 0.0] & [-0.02, 0.01] & [-0.04, 0.03] & [-0.07, 0.06] & [-0.01, 0.01] \\ \hline \multirow{2}{*}{**Ethnicity**: Hispanic} & 0.017 & **-0.009** & **0.028\({}^{*}\)** & 0.021 & 0.027 & -0.008 \\ & (-0.08, 0.12) & [-0.02, 0.01] & [0.01, 0.05] & [-0.06, 0.02] & [-0.05, 0.11] & [-0.02, 0.0] \\ \hline \multirow{2}{*}{**Education**: college and above} & -0.033 & -0.002 & 0 & 0.005 & -0.036 & -0.001 \\ & (-0.09, 0.02) & [-0.01, 0.01] & [-0.01, 0.01] & [-0.02, 0.03] & [-0.08, 0.01] & [-0.01, 0.01] \\ \hline \multirow{2}{*}{**Age**: Gen-X and older} & **0.077\({}^{**}\)** & **-0.003** & **0.011** & **0.041\({}^{**}\)** & **0.034** & **-0.005** \\ & (0.02, 0.13) & [-0.01, 0.0] & [0.002] & [0.02, 0.006] & [-0.01, 0.08] & [-0.01, 0.0] \\ \end{tabular}
\end{table}
Table 5: Coefficients of linear regression models, with 95% confidence intervals, modeling relationship between exposure to problematic ads _due to platform optimization_, and participants’ demographics. Dependent variable (columns): fraction of category, out of total ad diet of ads with default/no advertiser targeting. Independent variable (rows): participant demographics. \(p<0.001^{***}\); \(p<0.01^{**}\), \(p<0.05^{*}\).
while these ads make up a small fraction (12% on average) of our participants' ad diets, a subset of our panel are disproportionately exposed to them. _Third_, using a combination of techniques, we demonstrate that some of these skews in ad distribution persist without targeting from advertisers, implying that the platform's algorithms are responsible for at least some of the skews we observe.
While our observations are limited to our panel, our study validates anecdotal evidence [54, 76] that clickbait and scam advertising is shown to older users more often. We show that these differences exist both due to advertisers' targeting and due to the platform's delivery optimization--which together may create a feedback loop [50]. We also identify instances where the overall outcomes are different from delivery optimization's biases: Black participants see a higher fraction of Clickbait ads (Table 4), but only when targeted by advertisers. On the other hand, Hispanic participants have higher exposure to Deceptive ads (Table 5), but only within ads that are essentially untargeted by advertisers, suggesting this effect is due to the ad platform.
Further, we find that financial ads are shown more often to participants who identify as men, both as a system-level outcome, and when controlling for ad targeting. As annotators, we observe that Sensitive: Financial ads are quite diverse--ranging from problematic offers like high APR loans to possibly useful financial tools such as savings accounts. Thus, men in our panel are exposed to problematic financial products, as well as financial opportunities, more often.
Finally, our analysis of targeting practices shows that advertisers often cede control to the platform's optimizations - as evidenced by the popular use of lookalike audiences (Figure 6) and the low usage of targeting interests (Table A2). This implies that advertisers are aware of the usefulness of the platform's personalization, and malicious actors could rely on these capabilities to target Problematic advertising.
Taken together, our results offer concrete insights into user experiences with problematic advertising and raise questions about the power of platforms in delivering these ads to users.
**Limitations.** Our ad categories were created through pilot data collection and backed by review of platform policies and literature, including work that also examined user sentiments towards problematic advertising [91]. Still, categorizing ads into just seven categories diminishes some nuance within groups. We analyze a subset of our total collected ads that we were able to annotate manually (one-third of our overall collected data); therefore, we are not able to provide insight into the complete ad diets of our participants. To minimize any selection biases in our analyzed subset, we randomly sampled ads from participants each month for annotating and surveying, but recognize important data could be missed by not assessing the complete ad diets of participants.
Further, our observations are only about participants' desktop browsing experiences. While we suspect that similar ads would be present on the mobile Facebook app due to the diversity of Facebook's ad placement options, we do not have direct access to that data. We also do not have access to budgets of the ads that we observe, and therefore are not able to disambiguate whether certain advertisers are simply paying more money to Facebook, resulting in skews. However, to control for these differences, we compare fractions of ad categories out of the ad diets that we observe for each participant (e.g., in Section 5.2). This ensures that we compare only within participants' desktop experiences, and in the same budget-class of advertisers that were reaching them.
Additionally, we do not have access to participants' complete ad preferences, and the frequency with which they change these settings. This limits our ability to control for participant actions such as removing ads from an advertiser, or removing a specific interest. Prior work estimates that 10-19% of users tweak their ad settings [39, 41], either from the ad preferences page or from the contextual menu next to ads. We attempt to account for such variance by factoring participants' awareness of privacy settings in Table A1, and find that disparate exposure to Problematic ads for older and minority participants persists.
Finally, our work currently does not provide insight on advertising's contextual harms [58]; for instance, while we take an interest in sensitive ads with subject matters like gambling, we do not investigate their distribution among those with gambling addictions. Rather, we try to find commonalities in our panel's opinions through mixed-effects regression models, and then build our analysis on top of that data. We leave further exploration of contextually problematic ads, such as Gak et al. [34], to future work.
**Recommendations.** To limit users' exposure to problematic ads, we propose changes on two levels. _First_, we advocate for a more fine-grained and user-informed understanding of problematic ads, and other broader harms of advertising [5]. Currently, platforms recognize ads such as Deceptive, Clickbait,
Figure 6: Prevalence of two types of Custom Audiences: based on A) Personally Identifiable Information and B) Lookalike Audiences. Despite their high prevalence, Lookalike Audiences are the most opaque of targeting tools.
and Potentially Prohibited as problematic, and typically include language scrutinizing them in their advertising guidelines [23, 31]. However, sensitive ads that present harms for users with addictions or other mental illness are less moderated. Yet, they are still widely disliked across our diverse set of participants. We advocate for a more refined understanding of ads with sensitive themes, and more scrutiny and moderation from platforms towards these ads. For a more nuanced understanding of problematic ads, our work, along with [91] and [34], provides a start.
_Second_, we argue for more controls not just on moderation, but on optimization as well. Our results demonstrate that once problematic ads circumvent a platform's review process, the platform then optimizes them towards users similar to other personalized content (e.g. Figure A1). To avoid this systematic personalizing of problematic ads, platforms need policies on their delivery optimization in addition to their policies on content moderation. This would require platforms to constrain the optimization of problematic content for users. For instance, Facebook currently states that it demotes clickbait in content ranking [31], yet a demotion does not stop such content from inevitably reaching and harming some users. There is perhaps a need for an "optimization vacuum" so that problematic content, even after evading moderation, cannot reach users.
We advocate for platforms to take emerging works on user experiences with problematic ads into account, and for a more urgent call for platforms to not only moderate the content users see, but also have mechanisms to suppress the delivery of problematic content, instead of optimizing for it.
## Acknowledgements
We are grateful to our shepherd and reviewers for their valuable feedback. We also thank our annotators, Devesh Tarasia and Manjot Bedi, for their work. This work is funded in part by NSF grants CNS-1916020 and CNS-1955227, and Mozilla Research Grant 2019H1.
|
2303.12821 | Towards A Visual Programming Tool to Create Deep Learning Models | Deep Learning (DL) developers come from different backgrounds, e.g.,
medicine, genomics, finance, and computer science. To create a DL model, they
must learn and use high-level programming languages (e.g., Python), thus
needing to handle related setups and solve programming errors. This paper
presents DeepBlocks, a visual programming tool that allows DL developers to
design, train, and evaluate models without relying on specific programming
languages. DeepBlocks works by building on the typical model structure: a
sequence of learnable functions whose arrangement defines the specific
characteristics of the model. We derived DeepBlocks' design goals from a
5-participants formative interview, and we validated the first implementation
of the tool through a typical use case. Results are promising and show that
developers could visually design complex DL architectures. | Tommaso Calò, Luigi De Russis | 2023-03-22T16:47:48Z | http://arxiv.org/abs/2303.12821v1 | # Towards A Visual Programming Tool to Create Deep Learning Models
###### Abstract.
Deep Learning (DL) developers come from different backgrounds, e.g., medicine, genomics, finance, and computer science. To create a DL model, they must learn and use high-level programming languages (e.g., Python), thus needing to handle related setups and solve programming errors. This paper presents DeepBlocks, a visual programming tool that allows DL developers to design, train, and evaluate models without relying on specific programming languages. DeepBlocks works by building on the typical model structure: a sequence of learnable functions whose arrangement defines the specific characteristics of the model. We derived DeepBlocks' design goals from a 5-participants formative interview, and we validated the first implementation of the tool through a typical use case. Results are promising and show that developers could visually design complex DL architectures.
deep learning, visual programming, debugging, user interface
able to create and evaluate a DL model. Thus, many challenges arise (Kang et al., 2017) as developers experience this steep learning curve. In 2017, Sankaran et al. (Kang et al., 2017) studied the challenges that DL developers face through a quantitative survey among 113 software engineers and researchers from various backgrounds and experiences. The authors showed that DL frameworks exhibit a lack of needed features for quicker and more efficient implementation and prototyping. As a solution, 89% suggested the need for a system able to suggest hyper-parameters and assist in debugging the DL model, while 72% of the respondents suggested that a **visual programming tool** would be useful to speed up the overall development process.
In traditional software development, visual programming is a paradigm that lets users create programs by manipulating elements graphically rather than by specifying them textually. DARVIZ (Kang et al., 2017), DL-IDE (Kang et al., 2018), DeepVisual (Kang et al., 2018), and ModelTracker (Bahdan et al., 2017) were the first attempts to introduce visual programming IDEs enabling a "no-code", intuitive way of designing deep learning models. They, however, exhibit limitations that do not allow the design of complex and scalable models, such as the impossibility of merging, connecting, and reusing layers and of customizing the training procedure. In addition, they lack important features for the complete development process, such as debugging. Such limitations must be overcome to build larger and more complex networks that can meet the recent design requirements that emerged in the community (Bahdan et al., 2017). UMLAUT (Kang et al., 2018) is, instead, an example of a tool targeted to non-expert developers that focuses on debugging.
Neural Network Console, developed by Sony (Sony, 2016), is the only available web application that overcomes most of the above limitations. However, the DL library it supports has quite a limited user scope, with less than 1% usage across the DL community (Kang et al., 2018). Moreover, it is available only in the cloud, thus limiting the possibility of using in-house machines and introducing several privacy concerns.
To address the limitations of the available visual programming tools, and to reduce the gap between visual tools' capabilities and the freedom of expression of pure coding, we propose _DeepBlocks_, a visual programming tool for DL that integrates training, debugging, and evaluation of neural network models under a single user interface. In this way, the tool covers the main steps of the DL development workflow (Fig. 1). DeepBlocks allows DL programmers to design neural networks by adding, connecting, and merging layers, which we refer to as "blocks", to create more complex layers and architectures. In addition, DeepBlocks allows users to create personalized blocks and add custom functions to easily adapt the tool to their application domain. DeepBlocks also allows developers to schedule and process multiple inputs and to design networks with multiple branches, a novel feature with respect to existing tools, which only allow layers with a single forward connection. DeepBlocks uses PyTorch, a DL library widely used in the DL community, adopted by up to 75% of the papers available in the literature.
Starting from a formative interview with 3 Machine Learning engineers and 2 Ph.D. students in the field of deep learning, we obtained nine crucial functionalities to be implemented in DeepBlocks, to make it useful, efficient, and versatile. We then present the design and implementation of the tool, and report a use case to validate it. Finally, we discuss possible advantages and limitations and conclude with future work.
## 2. Formative interviews and design goals
We conducted five semi-structured interviews, online and in person, with five DL developers in October 2022. In the interviews, we focused on three main questions: 1) how they design deep learning architectures; 2) the main difficulties they face throughout the process; and 3) the possible advantages and disadvantages of adopting a visual programming interface to develop deep learning architectures.
We interviewed three data scientists who frequently develop deep learning models at a large, data-driven software company, and two artificial intelligence Ph.D. students who both develop and apply deep learning models in their research field. Of the five participants (P1-P5), three self-identified as male and two as female; their ages were between 25 and 29 years old, and they signed a consent form before starting the interview. We synthesized the results of the interviews into nine design goals, which guided the design of DeepBlocks. In a subsequent phase, participants were asked to rank the goals by importance and comment on their decisions.
### Results and Design Goals
For P1, P2, and P4 the main difficulty in programming deep neural networks is to understand from the code how the network is structured; they argue that the code structure, in a large number of cases, does not reflect the structure of the network, leading to inefficient design.
For P3 and P5, the most important issue is locating bugs in the architecture, along with the fact that they can only spot them at the start of the training phase. From this insight, we set our first design goal to be _Interactive Debugging_. Furthermore, P1 and P3 point out that it takes a lot of effort to monitor the inputs and outputs of every layer of the network during training, and that simplifying such a procedure would be useful to better understand the behavior of the model. We summarize this finding in a second design goal: _Visualization of blocks inputs and outputs_, where we refer to an elementary architectural layer as a 'block'.
P4 and P5 add that the reuse of layers between different architectures is particularly prohibitive due to a lack of compatibility and difficulties in separating the individual layers from their context. From this insight, we obtain the third design goal: _Load existing and custom blocks_.
P1, P2, P4, and P5 are concerned about the possibility that a visual tool would be actually able to convey the same freedom of design that coding does; on the other hand, all participants agree that visual programming could be particularly suitable for the deep learning domain, due to the repetitiveness of layers that composes the architectures and the limited procedures schemes to train a network. Furthermore, P2, P4, and P5 argue that a simple and intuitive user interface is preferable over a more sophisticated one, even at the cost of implementing fewer customization options; this insight led us to derive a fourth design goal: _Simple and Intuitive Interface_. In addition, for P1, P2 and P5, in a visual programming interface default blocks should be organized in structured menus; from this observation, we obtain our fifth goal: _Blocks Organized in structured menus_. P1, P3, and P4 pointed out that such a visual tool should be capable of designing the same architectures that can be programmed with code, both in terms of complexity and scalability; from this point, we derive the sixth, seventh, and eighth design goals: _Hierarchical Aggregation of blocks_, _Customizable blocks_ and _Personalize optimization strategy_. P3 and P4 evidenced that not only such a visual tool should be capable of designing the same architectures that can be programmed with code but also it should be able to do the same kind of model evaluation; this led us to set the ninth design goal: _Visualization for model evaluation_.
In a second phase, participants were asked to rank the resulting design goals, with a score from 1 (lowest importance) to 9 (highest importance) in order for us to focus our efforts on the most relevant design features. The results shown in Figure 2 reflect the findings of the formative interviews, where participants agree to the need for a simple interface to visually program the architectures and a proper interactive debugging procedure that can easily notify users about the location and the type of bug.
## 3. DeepBlocks
DeepBlocks is a visual programming tool intended for Deep Learning engineers that need to develop and test complex deep learning architectures. We designed DeepBlocks to accomplish the needs extracted from the formative interviews. In this section, we list the main features of DeepBlocks and describe its design and implementation.
### An Example Scenario
We introduce an example scenario that will be developed throughout all Section 3 to illustrate to the reader how the implemented functionalities can be used by a programmer to achieve the design of a simple DL architecture.
_John is a Computer Engineering student, he is in his third year of B.S., and he is following the Artificial Intelligence course. In the second assignment, he has been charged with developing, to classify an image dataset, a simple neural network composed of five Fully Connected Layers, using DeepBlocks._
### Layout of DeepBlocks
_John downloads DeepBlocks, installs its dependencies, and launches the program. To design the network, he adds an input block and five fully connected blocks by clicking the "+" button on the respective entry in the right panel. By clicking on the "Add Data..." button under the "Input" entry of the tree menu in the left panel, John can select the dataset file from a dialog. John is not required to preprocess the dataset himself, since it is already in the format specified in the tool's documentation._
The layout of DeepBlocks is illustrated in Fig. 3. DeepBlocks consists of six main panes: the Network Menu, the Architecture Visualizer and Builder, the Blocks Menu Pane, the Project Controls, the Results Visualization Pane, and the Optimization, Debug, and Visualization Pane. The Network Menu, located on the upper left, lists the blocks currently present in the architecture and gives access to their specific controls and parameters. The Architecture Visualizer and Builder is the core of the visual programming capabilities of DeepBlocks; it visualizes the blocks and allows the user to connect the input and output terminals of different blocks. By right-clicking on a specific block, users can save it under the "custom" menu in the Blocks Pane, or, when multiple blocks are selected, merge them into an abstract "SuperBlock". In addition, the Architecture Visualizer and Builder provides debug tips: when a block is not correctly processed, its contours are colored red, and when a signal is not processed, due to an error in the block, the input and
Figure 2. The design goals we derived from formative interviews, ordered by the average importance given by every participant.
output terminals are colored yellow, as shown in Fig. 3. The Blocks Pane lists the default available blocks, subdivided into main blocks, i.e., the most typical deep learning processing layers, and miscellaneous blocks, such as the one to concatenate data or to perform a logical OR between two inputs. In addition, the Blocks Pane contains the controls to save and load one or multiple custom blocks. The Optimization, Debug, and Visualization Pane includes the optimization controls to train and test the network, the debug information of the selected block, and a visualization section to inspect the inputs and outputs of every block. The Results Visualization Pane plots the train and test accuracy. Finally, the Project Controls Pane allows the user to save or load a project.
### Visual Programming in DeepBlocks
_John connects the blocks he previously added starting from the Input block, and sequentially through every Fully Connected Block until the output block. He then sets the right input and output dimension for every Block, checking the output dimension of every Block in the Visualization pane. John then merges the five fully connected Blocks into a SuperBlock and renames it "Backbone". He then saves the "Backbone" block in the custom blocks tree, by right-clicking over it and selecting the "Save" option._
With respect to the literature, DeepBlocks provides a fully scalable way to visually design DL architectures and the possibility to design complex, multi-branch architectures. Large practical cases could be modeled in deep blocks thanks to the possibility of adding custom blocks, merging multiple blocks into hierarchical "SuperBlocks" as well as the possibility of scheduling multiple inputs and creating multi-branch connections.
A Block is composed of one or multiple input and output terminals. Every Block contains a Python function that characterizes it; users can add custom blocks by adding "Custom Block" from the Blocks Pane and specifying its custom function, or can directly load existing custom blocks; the latter can be useful for non-expert users, which when reusing a custom made block could only focus on its input and outputs and not on the underlying logic. The blocks specific properties can be set in the Network Menu (see Fig. 4).
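The following is an illustrative sketch, not DeepBlocks' actual implementation, of how a block can wrap either a standard PyTorch layer or a user-supplied Python function behind input and output terminals; all class and method names are assumptions made for the example.

```python
# Illustrative sketch (not DeepBlocks' code) of a block wrapping a PyTorch module
# or a custom Python function, with connectable terminals.
import torch
import torch.nn as nn

class Block:
    def __init__(self, name, fn, n_inputs=1, n_outputs=1):
        self.name = name
        self.fn = fn                    # a PyTorch module or any callable
        self.n_inputs = n_inputs
        self.n_outputs = n_outputs
        self.inputs = []                # (source_block, source_terminal) per connection

    def connect(self, source_block, source_terminal=0):
        self.inputs.append((source_block, source_terminal))

    def process(self, *tensors):
        return self.fn(*tensors)

# A default block and a custom block defined only by its function.
fc = Block("FullyConnected", nn.Linear(784, 128))
flatten = Block("Flatten", lambda x: x.reshape(x.shape[0], -1))
fc.connect(flatten)

x = torch.randn(4, 1, 28, 28)
print(fc.process(flatten.process(x)).shape)   # torch.Size([4, 128])
```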
Figure 3. DeepBlocks Layout
To scale up the designed architecture, selected blocks can be merged into more abstract "SuperBlocks" by invoking the right-click menu over the Architecture Visualizer and Builder and selecting the "Merge" option. SuperBlocks sub-blocks can be hierarchically visualized, along with their controls, in the Network Menu. To train or execute the architecture, a computational tree is generated, starting from input blocks and recurrently through every connection; if two branches converge on the same block, they are guaranteed to be processed sequentially before the successive computations. Cycles are not allowed in our setting. In order to expand the capabilities of the training procedure, we introduced a notion of "order" for every input. Inputs can be assigned to one or multiple orders. At every training step, orders are executed sequentially, and, for each order, only the signals coming from input blocks that belong to the specified order are passed downstream, while for the others is passed a null value. This allows the designing of many practical architectures that require alternation of different input signals. An input signal is a dictionary composed of the input value, the ground truth, the current order, and a flag indicating if the current is a test or a training step. By accessing the dictionary, the different custom functions can be programmed depending on the order currently executing. We believe that the above-proposed features, which are mostly missing in the literature, can make a step forward in allowing developers to design with our visual programming tool most of the variety of the typical architectural designs.
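The order-based scheduling can be sketched as follows; this is an illustrative approximation of the behaviour described above, not DeepBlocks' internal code, and all names are assumptions.

```python
# Illustrative sketch of order-based input scheduling: orders run sequentially at each
# training step, and only inputs belonging to the current order pass a signal downstream;
# the others pass a null value.
import torch

def make_signal(value, ground_truth, order, is_test=False):
    # The input signal dictionary described above.
    return {"value": value, "ground_truth": ground_truth, "order": order, "is_test": is_test}

class InputBlock:
    def __init__(self, data, orders):
        self.data = data
        self.orders = set(orders)

    def emit(self, current_order, is_test=False):
        if current_order not in self.orders:
            return None                      # inactive in the current order
        x, y = self.data
        return make_signal(x, y, current_order, is_test)

def training_step(input_blocks, downstream, orders=(0, 1)):
    for order in orders:                     # orders are executed sequentially
        signals = [b.emit(order) for b in input_blocks]
        downstream(signals, order)

# Hypothetical source/target inputs active in different orders.
source = InputBlock((torch.randn(8, 3), torch.zeros(8)), orders=[0])
target = InputBlock((torch.randn(8, 3), torch.ones(8)), orders=[1])

def downstream(signals, order):
    active = [s for s in signals if s is not None]
    print(f"order {order}: {len(active)} active input(s)")

training_step([source, target], downstream)
```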
### Debug Features and Validation
_John notices that the contours of the "Backbone" SuperBlock are painted red; he inspects the debug pane to check which error is preventing the block from processing, and understands that there is a problem with the input dimensions, since the image must be flattened before passing it to the Fully Connected Block. He flattens the input of the first Fully Connected Block in the left pane, and the red contour disappears. He then sets up the optimization parameters and finally trains the network. He checks the training metrics by looking at the Train Metrics Plot on the right._
Figure 4. Illustration of a SuperBlock and its hierarchical visualization in the Network Menu

Although model evaluation is not the main focus of DeepBlocks, which is engineered more towards architecture design, we added two main features for monitoring model results: the Visualization and the Train Metrics Plot panes (Fig. 5). The former reports, for the selected block, the input and output dimensions, as well as a heatmap of their values. The latter reports evaluation metrics over the training and test data.
In addition, as in John's story, a Debug Pane is available, showing the type and dimension of the various Blocks' inputs and outputs.
### Implementation Details
DeepBlocks has been implemented in Python 3.8, using PyTorch (Paszke et al., 2017) for deep-learning modeling, PyQt5 for the user-interface design, and PyQtGraph (Dam et al., 2018) for the visual-programming features.
## 4. Use Case: Domain-Adversarial Neural Network
This section demonstrates the applicability of DeepBlocks in a typical use case: visually designing a domain-adversarial neural network (DANN) (DANN, 2016). DANN takes as inputs labeled samples from a source distribution and unlabeled samples from a target distribution, and it learns to extract features that solve the task for both the source and the target domain.
We start by adding two Input Blocks and load the source-domain data on the first and the target-domain data on the second. We then concatenate their outputs. To design the feature extractor, we add three Convolutional Blocks to the Architecture Visualization and Builder Pane and connect them; we then select them and merge them into a single SuperBlock that we rename "Feature Extractor". From the miscellaneous blocks, we add the Copy Block, which simply copies its input to one or more outputs, and we connect it to the outputs of the Feature Extractor. To design the label classifier, we add three Fully Connected Blocks, connect them, and merge them as done for the Feature Extractor. We repeat the process for the Domain Classifier. To reverse the gradients of the Domain Classifier, we add a Custom Block in which we replace the predefined "backward" function with one returning its negative. Fig. 6 reports the resulting architecture along with the one in the reference paper (DANN, 2016).
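The custom "backward" replacement corresponds to the well-known gradient-reversal layer of DANN. The following is a minimal PyTorch sketch of such a function, independent of DeepBlocks' block interface (which we do not reproduce here):

```python
import torch

class GradientReversal(torch.autograd.Function):
    """Identity in the forward pass, negated (scaled) gradient in the backward pass."""

    @staticmethod
    def forward(ctx, x, lambda_=1.0):
        ctx.lambda_ = lambda_
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Reverse the gradient flowing back towards the feature extractor.
        return -ctx.lambda_ * grad_output, None


def grad_reverse(x, lambda_=1.0):
    return GradientReversal.apply(x, lambda_)


# Usage sketch: features -> gradient reversal -> domain classifier
features = torch.randn(8, 16, requires_grad=True)
domain_logits = torch.nn.Linear(16, 2)(grad_reverse(features))
```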
Figure 5. Visualization Features of DeepBlocks.
## 5. Conclusions and Future Work
In this paper, we introduced DeepBlocks, a visual programming tool for deep-learning software development. DeepBlocks allows developers to design and implement DL architectures visually. The tool provides several development features, including designing models from scratch, interactive debugging, model training, and model inference. In addition, compared to previously available tools, DeepBlocks allows designing more complex, scalable, and custom architectures.
We designed DeepBlocks with the support of a formative interview with 5 participants, and we preliminarily validated it through a use case. In particular, the use case showed that allowing even a little customization of blocks permits developers to visually design complex and experimental architectures. Clearly, there is a trade-off between customization capabilities and the actual automation that DeepBlocks provides in the process of DL programming. Allowing customization without losing automation is a design challenge and can increase the complexity of the tool; for example, when adding the notion of "order" to let users customize the training procedure, we require the users to specify it for every training procedure. The right balance should be found between customization capabilities, complexity, and automation of the tool.
As future work, DeepBlocks should address the problem of visualizing a large number of inputs and outputs (in the order of billions). Currently, there is also no way in the tool to understand the behavior of the sub-blocks that compose a SuperBlock. This applies to debugging as well: from a faulty SuperBlock, one cannot visually locate the bug in the sub-blocks without expanding it in the Debug Pane, which becomes infeasible with a very large number of hierarchy levels.
Figure 6. Domain-Adversarial Neural Network as described in (Deng et al., 2018) (above) and visualized in DeepBlocks (below).
Among the next steps, DeepBlocks needs to undergo a series of user studies evaluating its usability and effectiveness against similar tools and traditional programming approaches. Finally, once the tool is consolidated, we plan to release it and further evaluate it in a large-scale, in-the-wild study, e.g., with machine-learning students.
|
2307.11607 | Finding Optimal Diverse Feature Sets with Alternative Feature Selection | Feature selection is popular for obtaining small, interpretable, yet highly
accurate prediction models. Conventional feature-selection methods typically
yield one feature set only, which might not suffice in some scenarios. For
example, users might be interested in finding alternative feature sets with
similar prediction quality, offering different explanations of the data. In
this article, we introduce alternative feature selection and formalize it as an
optimization problem. In particular, we define alternatives via constraints and
enable users to control the number and dissimilarity of alternatives. We
consider sequential as well as simultaneous search for alternatives. Next, we
discuss how to integrate conventional feature-selection methods as objectives.
In particular, we describe solver-based search methods to tackle the
optimization problem. Further, we analyze the complexity of this optimization
problem and prove NP-hardness. Additionally, we show that a constant-factor
approximation exists under certain conditions and propose corresponding
heuristic search methods. Finally, we evaluate alternative feature selection in
comprehensive experiments with 30 binary-classification datasets. We observe
that alternative feature sets may indeed have high prediction quality, and we
analyze factors influencing this outcome. | Jakob Bach | 2023-07-21T14:23:41Z | http://arxiv.org/abs/2307.11607v2 | # Finding Optimal Diverse Feature Sets
###### Abstract
Feature selection is popular for obtaining small, interpretable, yet highly accurate prediction models. Conventional feature-selection methods typically yield one feature set only, which might not suffice in some scenarios. For example, users might be interested in finding alternative feature sets with similar prediction quality, offering different explanations of the data. In this article, we introduce alternative feature selection and formalize it as an optimization problem. In particular, we define alternatives via constraints and enable users to control the number and dissimilarity of alternatives. Next, we analyze the complexity of this optimization problem and show \(\mathcal{NP}\)-hardness. Further, we discuss how to integrate conventional feature-selection methods as objectives. Finally, we evaluate alternative feature selection with 30 classification datasets. We observe that alternative feature sets may indeed have high prediction quality, and we analyze several factors influencing this outcome.
**Keywords:** feature selection, alternatives, constraints, mixed-integer programming, explainability, interpretability, XAI
## 1 Introduction
MotivationFeature-selection methods are ubiquitous for a variety of reasons. By reducing dataset dimensionality, they lower the computational cost and memory requirements of prediction models. Next, models may generalize better after removing irrelevant and spurious predictors. Finally, prediction models may become simpler [61], improving interpretability.
Most conventional feature-selection methods only return one feature set [11]. These methods optimize a criterion of feature-set quality, e.g., prediction performance. However, besides the optimal feature set, there might be other, differently composed feature sets with similar quality. Such alternative feature sets are interesting for users, e.g., to obtain several diverse explanations. Alternative explanations can provide additional insights into predictions, enable users to develop and test different hypotheses, appeal to different kinds of users, and foster trust in the predictions [50, 108].
Problem statementThis article addresses the problem of alternative feature selection, which we informally define as follows: Find multiple, sufficiently different feature sets that optimize feature-set quality. We provide formal definitions in Section 3.2. This problem entails an interesting trade-off: Depending on how different the alternatives should be, one might have to compromise on quality. In particular, a stronger dissimilarity requirement might require selecting more low-quality features in the alternatives.
Two points are essential for alternative feature selection, which we both address in this article. First, one needs to formalize and quantify what an alternative feature set is. In particular, users should be able to control the dissimilarity of alternatives and hence the aforementioned quality trade-off. Second, one needs an approach to find alternative feature sets efficiently. Ideally, the approach should be general, i.e., cover a broad range of conventional feature-selection methods, given the variety of the latter [15, 61].
Related workWhile finding alternative solutions has already been addressed extensively in the field of clustering [9], there is a lack of such approaches for feature selection. Only a few feature-selection methods target at obtaining multiple, diverse feature sets [11]. In particular, techniques for ensemble feature selection [92, 96] and statistically equivalent feature subsets [56] produce multiple feature sets but not optimal alternatives. These approaches do not guarantee the diversity of the feature sets, nor do they let users control diversity. In fields related to feature selection, the goal of obtaining multiple, diverse solutions has been studied as well, e.g., for subspace clustering [42, 72], subgroup discovery [59], subspace search [102], or explainable-AI techniques [2, 49, 71, 91] like counterfactuals. These approaches are not directly applicable or easily adaptable to feature selection, and most of them provide limited or no user control over alternatives, as we will elaborate in Section 4.
ContributionsOur contribution is fourfold.
First, we formalize alternative feature selection as an optimization problem. In particular, we define alternatives via constraints on feature sets. This approach is orthogonal to the feature-selection method itself so that users can choose the latter according to their needs. This approach also allows integrating other constraints on feature sets, e.g., to capture domain knowledge [6, 32]. Finally, this approach lets users control the search for alternatives with two parameters, i.e., the number of alternatives and a dissimilarity threshold.
Second, we analyze the computational complexity of this optimization problem. We show \(\mathcal{NP}\)-hardness, even for a simple notion of feature-set quality.
Third, we discuss how to solve this optimization problem. To that end, we describe how to integrate different categories of conventional feature-selection methods in the objective function of the optimization problem.
Fourth, we evaluate alternative feature selection with comprehensive experiments. In particular, we use 30 classification datasets from the Penn Machine Learning Benchmarks (PMLB) [82, 90] and five feature-selection methods. We
focus our evaluation on the feature-set quality of the alternatives relative to our user parameters. We publish all our code1 and experimental data2 online.
Footnote 1: [https://github.com/Jakob-Bach/Alternative-Feature-Selection](https://github.com/Jakob-Bach/Alternative-Feature-Selection)
Footnote 2: [https://doi.org/10.35097/1623](https://doi.org/10.35097/1623)
Experimental resultsWe observe that several factors influence the quality of alternatives, i.e., the dataset, feature-selection method, notion of feature-set quality, and parameters for searching alternatives. As expectable, feature-set quality tends to decrease with the number of alternatives and the dissimilarity threshold for alternatives. Thus, these parameters allow users to control the trade-off between dissimilarity and quality of alternatives. Also, even no valid alternative may exist if the parameter values are too strict. Computationally, a sequential search for multiple alternatives was significantly faster than a simultaneous one while yielding a similar quality. Finally, we observe that the prediction performance of feature sets may only weakly correlate with the quality assigned by feature-selection methods. In particular, seemingly bad alternatives regarding the latter might still be good regarding the former.
OutlineSection 2 introduces notation and fundamentals. Section 3 describes and analyzes alternative feature selection. Section 4 reviews related work. Section 5 outlines our experimental design, while Section 6 presents the experimental results. Section 7 concludes. Appendix A contains supplementary materials.
## 2 Fundamentals
In this section, we introduce basic notation (cf. Section 2.1) and review different methods to measure the quality of feature sets (cf. Section 2.2).
### Notation
\(X\in\mathbb{R}^{m\times n}\) stands for a dataset in the form of a matrix. Each row is a data object, and each column is a feature. \(F=\{f_{1},\ldots,f_{n}\}\) is the corresponding set of feature names. We assume that categorical features have already been made numeric, e.g., via one-hot encoding. \(X_{\cdot j}\in\mathbb{R}^{m}\) denotes the vector representation of the \(j\)-th feature. \(y\in Y^{m}\) represents the prediction target with domain \(Y\), e.g., \(Y=\{0,1\}\) for binary classification or \(Y=\mathbb{R}\) for regression.
In feature selection, one makes a binary decision \(s_{j}\in\{0,1\}\) for each feature, i.e., either selects it or not. The vector \(s\in\{0,1\}^{n}\) combines all these selection decisions and yields the selected feature set \(F_{s}=\{f_{j}\mid s_{j}=1\}\subseteq F\). The function \(Q(s,X,y)\) returns the quality of such a feature set. Without loss of generality, we assume that this function should be maximized.
### Measuring Feature (Set) Quality
There are different ways to evaluate feature-set quality \(Q(s,X,y)\). We only give a short overview here; see [15, 61, 81] for comprehensive studies and surveys of feature selection. A conventional categorization of feature-selection methods distinguishes between filter, wrapper, and embedded methods [36].
Filter methodsFilter methods evaluate feature sets without training a prediction model. Univariate filters assess each feature independently. They often assign a score to each feature, e.g., the absolute Pearson correlation or the mutual information between a feature and the prediction target. Such methods ignore potential interactions between features, e.g., redundancies. In contrast, multivariate filters evaluate feature sets as a whole. Such methods often combine a measure of feature relevance with a measure of feature redundancy. Examples include CFS [37, 38], FCBF [115], and mRMR [85].
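For illustration only (a minimal scikit-learn sketch, not part of this article's experimental setup), a univariate mutual-information filter can be applied as follows:

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, mutual_info_classif

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
selector = SelectKBest(score_func=mutual_info_classif, k=5).fit(X, y)
print(selector.get_support(indices=True))  # indices of the five selected features
```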
Wrapper methodsWrapper methods [52] evaluate feature sets by training prediction models with them and measuring prediction quality. They employ a generic search strategy to iterate over candidate feature sets, e.g., genetic algorithms. Feature-set quality is a black-box function in this search.
Embedded methodsEmbedded methods train prediction models with built-in feature selection, e.g., decision trees [13] or random forests [12]. Thus, the criterion for feature-set quality is model-specific. For example, tree-based models often use information gain or the Gini index to select features during training.
Post-hoc feature-importance methodsApart from conventional feature selection, there are various methods that assess feature importance after training a model. These methods range from local explanation methods like LIME [87] or SHAP [63] to global importance methods like permutation importance [12] or SAGE [20]. In particular, assessing feature importance plays a crucial role in the field of machine-learning interpretability [14, 68].
## 3 Alternative Feature Selection
In this section, we present the problem and approaches for alternative feature selection. First, we define the overall structure of the optimization problem, i.e., objective and constraints (cf. Section 3.1). Second, we formalize the notion of alternatives via constraints (cf. Section 3.2). Third, we discuss different objective functions corresponding to different feature-set quality measures from Section 2.2. In particular, we describe how to solve the resulting optimization problem (cf. Section 3.3). Fourth, we analyze the computational complexity of the optimization problem (cf. Section 3.4).
### Optimization Problem
Alternative feature selection has two goals. First, the quality of an alternative feature set should be high. Second, an alternative feature set should differ from one or more other feature set(s). There are several ways to combine these two goals in an optimization problem:
First, one can consider both goals as objectives, obtaining an unconstrained multi-objective problem. Second, one can treat feature-set quality as objective and enforce alternatives with constraints. Third, one can consider being alternative as objective and constrain feature-set quality, e.g., with a lower bound. Fourth, one can define constraints for both, feature-set quality and being alternative, searching for feasible solutions instead of optimizing.
We stick to the second formulation, i.e., optimizing feature-set quality subject to being alternative. This formulation has the advantage of keeping the original objective function of feature selection. Thus, users do not need to specify a range or a threshold on feature-set quality but can control how alternative the feature sets must be instead. We obtain the following optimization problem for a single alternative feature set \(F_{s}\):
\[\begin{split}\max_{s}& Q(s,X,y)\\ \text{subject to:}& F_{s}\text{ being alternative}\end{split} \tag{1}\]
In the following, we discuss different objective functions \(Q(s,X,y)\) and suitable constraints for _being alternative_. Additionally, many feature-selection methods also limit the feature-set size \(|F_{s}|\) to a user-defined value \(k\in\mathbb{N}\), which adds a further, simple constraint to the optimization problem.
### Constraints - Defining Alternatives
In this section, we formalize alternative feature sets. First, we discuss the base case where an individual feature set is an alternative to another one (cf. Section 3.2.1). Second, we extend this notion to multiple alternatives, considering sequential and simultaneous search methods (cf. Section 3.2.2).
Our notion of alternatives is independent of the feature-selection method. We provide two parameters, i.e., a dissimilarity threshold \(\tau\) and the number of alternatives \(a\), allowing users to control the search for alternatives.
#### 3.2.1 Single Alternative
We consider a feature set an alternative to another feature set if it differs sufficiently. Mathematically, we express this notion with a set-dissimilarity measure [19, 26]. These measures typically assess how strongly two sets overlap and relate this to their sizes. E.g., a well-known set-dissimilarity measure is the Jaccard distance, which is defined as follows for the feature sets \(F^{\prime}\) and \(F^{\prime\prime}\):
\[d_{\text{Jacc}}(F^{\prime},F^{\prime\prime})=1-\frac{|F^{\prime}\cap F^{\prime\prime}|}{|F^{\prime}\cup F^{\prime\prime}|}=1-\frac{|F^{\prime}\cap F^{\prime\prime}|}{|F^{\prime}|+|F^{\prime\prime}|-|F^{\prime}\cap F^{\prime\prime}|} \tag{2}\]
In this article, we use a dissimilarity measure based on the Dice coefficient:
\[d_{\text{Dice}}(F^{\prime},F^{\prime\prime})=1-\frac{2\cdot|F^{\prime}\cap F^{ \prime\prime}|}{|F^{\prime}|+|F^{\prime\prime}|} \tag{3}\]
Generally, we do not have strong requirements on the set-dissimilarity measure \(d(\cdot)\). Our definitions of alternatives only assume symmetry, i.e., \(d(F^{\prime},F^{\prime\prime})=d(F^{\prime\prime},F^{\prime})\), and non-negativity, i.e., \(d(F^{\prime},F^{\prime\prime})\geq 0\), though one could adapt them to other conditions as well. In particular, the dissimilarity measure does not need to be a metric but can also be a semi-metric [110] like \(d_{\text{Dice}}(\cdot)\).
We leverage the set-dissimilarity measure for the following definition:
**Definition 1** (Single alternative).: Given a symmetric, non-negative set-dissimilarity measure \(d(\cdot)\) and a dissimilarity threshold \(\tau\in\mathbb{R}_{\geq 0}\), a feature set \(F^{\prime}\) is an alternative to a feature set \(F^{\prime\prime}\) (and vice versa) if \(d(F^{\prime},F^{\prime\prime})\geq\tau\).
The threshold \(\tau\) controls how alternative the feature sets must be and depends on the dataset as well as user preferences. In particular, requiring strong dissimilarity may cause a significant drop in feature-set quality. Some datasets may contain many features of similar utility, thereby enabling many alternatives of similar quality, while predictions on other datasets may depend on a few key features. Only users can decide which drop in feature-set quality is acceptable as a trade-off for obtaining alternatives. Thus, we leave \(\tau\) as a parameter. In case the set-dissimilarity measure \(d(\cdot)\) is normalized to \([0,1]\), like the Dice dissimilarity or Jaccard distance, the interpretation of \(\tau\) is user-friendly: Setting \(\tau=0\) allows identical alternatives, while \(\tau=1\) implies zero overlap.
If the choice of \(\tau\) is unclear a priori, users can try out different values and compare the resulting feature-set quality. One systematic approach is a binary search: Start with the mid-range value of \(\tau\), i.e., 0.5 for \(\tau\in[0,1]\). If the quality of the resulting alternative is too low, decrease \(\tau\) to 0.25, i.e., allow more similarity. If the quality of the resulting alternative is acceptably high, increase \(\tau\) to 0.75, i.e., check a more dissimilar feature set. Continue this procedure till an alternative with an acceptable quality-dissimilarity trade-off is found.
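As a concrete illustration of Equation 3 and Definition 1 (a minimal sketch; the article's actual implementation may differ), feature sets can be represented as plain Python sets:

```python
def dice_dissimilarity(f1: set, f2: set) -> float:
    """Dice dissimilarity between two feature sets (Equation 3)."""
    if not f1 and not f2:
        return 0.0
    return 1 - 2 * len(f1 & f2) / (len(f1) + len(f2))


def is_alternative(f1: set, f2: set, tau: float) -> bool:
    """Definition 1: f1 is an alternative to f2 if their dissimilarity reaches tau."""
    return dice_dissimilarity(f1, f2) >= tau


# Example: two size-3 sets sharing one feature have dissimilarity 1 - 2/6 = 2/3.
print(is_alternative({"f1", "f2", "f3"}, {"f3", "f4", "f5"}, tau=0.5))  # True
```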
When implementing Definition 1, we can leverage the following proposition:
**Proposition 1** (Linearity of constraints for alternatives).: _Using the Dice dissimilarity (cf. Equation 3), one can express alternative feature sets (cf. Definition 1) with 0-1 integer linear constraints._
Proof.: We re-arrange terms in the Dice dissimilarity (cf. Equation 3) to get rid of the quotient of set sizes:
\[\begin{split} d_{\text{Dice}}(F^{\prime},F^{\prime\prime})=1-\frac {2\cdot|F^{\prime}\cap F^{\prime\prime}|}{|F^{\prime}|+|F^{\prime\prime}|}& \geq\tau\\ \Leftrightarrow|F^{\prime}\cap F^{\prime\prime}|& \leq\frac{1-\tau}{2}\cdot(|F^{\prime}|+|F^{\prime\prime}|)\end{split} \tag{4}\]
Next, we express set sizes in terms of the feature-selection vector \(s\):
\[\begin{split}|F_{s}|=&\sum_{j=1}^{n}s_{j}\\ |F_{s^{\prime}}\cap F_{s^{\prime\prime}}|=&\sum_{j=1} ^{n}s^{\prime}_{j}\cdot s^{\prime\prime}_{j}\end{split} \tag{5}\]
Finally, we replace each product \(s^{\prime}_{j}\cdot s^{\prime\prime}_{j}\) with an auxiliary variable \(t_{j}\), bound by additional constraints, to linearize it [69]:
\[\begin{split} t_{j}\leq& s^{\prime}_{j}\\ t_{j}\leq& s^{\prime\prime}_{j}\\ 1+t_{j}\geq& s^{\prime}_{j}+s^{\prime\prime}_{j}\\ t_{j}\in&\{0,1\}\end{split} \tag{6}\]
Combining Equations 4, 5, and 6, we obtain a set of constraints that only involve linear expressions of binary decision variables. In particular, there are only sum expressions and multiplications with constants but no products between variables. If one feature set is known, i.e., either \(s^{\prime}\) or \(s^{\prime\prime}\) is fixed, Equation 5 only multiplies variables with constants and is already linear without Equation 6.
Given a suitable objective function, which we discuss later, linear constraints allow using a broad range of solvers. As an alternative formulation, one could also encode such constraints into propositional logic (SAT) [103].
If the set sizes \(|F^{\prime}|\) and \(|F^{\prime\prime}|\) are constant, e.g., user-defined, Equation 4 implies that the threshold \(\tau\) has a linear relationship to the maximum number of overlapping features \(|F^{\prime}\cap F^{\prime\prime}|\). This correspondence eases the interpretation of \(\tau\) and makes us use the Dice dissimilarity in the following. In contrast, the Jaccard distance exhibits a non-linear relationship between \(\tau\) and the overlap size, which follows from re-arranging Equation 2 in combination with Definition 1:
\[\begin{split} d_{\text{Jacc}}(F^{\prime},F^{\prime\prime})=1- \frac{|F^{\prime}\cap F^{\prime\prime}|}{|F^{\prime}|+|F^{\prime\prime}|-|F^{ \prime}\cap F^{\prime\prime}|}&\geq\tau\\ \Leftrightarrow|F^{\prime}\cap F^{\prime\prime}|& \leq\frac{1-\tau}{2-\tau}\cdot(|F^{\prime}|+|F^{\prime\prime}|)\end{split} \tag{7}\]
Further, if \(|F^{\prime}|=|F^{\prime\prime}|\), as in our experiments, the Dice dissimilarity (cf. Equation 4) becomes identical to several other set-dissimilarity measures [26]. The parameter \(\tau\) then directly expresses which fraction of features in one set needs to differ from the other set and vice versa, which further eases interpretability:
\[d_{\text{Dice}}(F^{\prime},F^{\prime\prime})\geq\tau\Leftrightarrow|F^{\prime }\cap F^{\prime\prime}|\leq(1-\tau)\cdot|F^{\prime}|=(1-\tau)\cdot|F^{\prime \prime}| \tag{8}\]
Thus, if users are uncertain how to choose \(\tau\) and \(|F^{\prime}|\) is reasonably small, they can try out all values of \(\tau\in\{i/|F^{\prime}|\}\) with \(i\in\{1,\ldots,|F^{\prime}|\}\). In particular, these \(|F^{\prime}|\) unique values of \(\tau\) suffice to produce all possible results that one could obtain with an arbitrary \(\tau\in(0,1]\).
#### 3.2.2 Multiple Alternatives
If users desire multiple alternative feature sets rather than only one, we can determine these alternatives sequentially or simultaneously. The number of alternatives \(a\in\mathbb{N}_{0}\) is a parameter to be set by the user. The overall number of feature sets is \(a+1\) since we deem one feature set the 'original' one. Table 1 compares the sizes of the optimization problems for these two search methods.
_Sequential alternatives_: With sequential search, users obtain several alternatives iteratively, with one feature set per iteration. We constrain this new set to be an alternative to all previously found ones, which are given in the set \(\mathbb{F}\):
**Definition 2** (Sequential alternative).: A feature set \(F^{\prime\prime}\) is an alternative to a set of feature sets \(\mathbb{F}\) (and vice versa) if \(F^{\prime\prime}\) is a single alternative (cf. Definition 1) to each \(F^{\prime}\in\mathbb{F}\).
One could also think of less strict constraints, e.g., requiring only the average dissimilarity to all previously found feature sets to pass a threshold \(\tau\). However, definitions like the latter may allow some feature sets to overlap heavily or even be identical if other feature sets are very dissimilar. Thus, we require pairwise dissimilarity in Definition 2. Combining Equation 1 with Definition 2, we obtain the following optimization problem for each iteration of the search:
\[\max_{s} Q(s,X,y)\] (9) subject to: \[\forall F^{\prime}\in\mathbb{F}:\ d(F_{s},F^{\prime})\geq\tau\]
The objective function remains the same as for a single alternative (\(|\mathbb{F}|=1\)), i.e., we only optimize the quality of one feature set at once. Thus, the number of variables in the optimization problem is independent of the number of alternatives \(a\). Instead, we solve the optimization problem repeatedly; each alternative only adds one constraint to the problem. The first, 'original' feature set is the same as in conventional feature selection without constraints for alternatives. As we always compare only one variable feature set to existing, constant feature sets, we also do not need to introduce auxiliary variables as in Equation 6.
| | Sequential search, alternative \(i\) | Sequential search, summed | Simultaneous search |
| --- | --- | --- | --- |
| Decision variables \(s\) | \(n\) | \((a+1)\cdot n\) | \((a+1)\cdot n\) |
| Linearization variables \(t\) | 0 | 0 | \(\frac{a\cdot(a+1)\cdot n}{2}\) |
| Alternative constraints | \(i\) | \(\frac{a\cdot(a+1)}{2}\) | \(\frac{a\cdot(a+1)}{2}\) |
| Linearization constraints | 0 | 0 | \(\frac{3\cdot a\cdot(a+1)\cdot n}{2}\) |

Table 1: Size of the optimization problem by search method, for \(a\) alternatives (\(a+1\) feature sets overall) and \(n\) features.
Thus, we expect the runtime of sequential search to scale well with the number of alternatives. Further runtime gains may arise if the solver keeps a state between iterations and can warm-start.
However, as the solution space becomes narrower over iterations, feature-set quality can deteriorate with each further alternative. In particular, multiple alternatives from the same sequential search might differ significantly in their quality. As a remedy, users can decide after each iteration if the feature-set quality is already unacceptably low or if another alternative should be found. In particular, users do not need to define the number of alternatives \(a\) a priori.
_Simultaneous alternatives_: With simultaneous search, users obtain multiple alternatives at once, so they need to decide on the number of alternatives beforehand. We use pairwise dissimilarity constraints again:
**Definition 3** (Simultaneous alternatives).: A set of feature sets \(\mathbb{F}\) contains simultaneous alternatives if each feature set \(F^{\prime}\in\mathbb{F}\) is a single alternative (cf. Definition 1) to each other set \(F^{\prime\prime}\in\mathbb{F}\), \(F^{\prime}\neq F^{\prime\prime}\).
Combining Equation 1 with Definition 3, we obtain the following optimization problem for \(a+1\) feature sets:
\[\max_{s^{(0)},\ldots,s^{(a)}}\quad\operatorname*{agg}_{i\in\{0, \ldots,a\}}Q(s^{(i)},X,y) \tag{10}\] \[\text{subject to:}\quad\forall i_{1},i_{2}\in\{0,\ldots,a\},\ i_{1 }\neq i_{2}:\ d(F_{s^{(i_{1})}},F_{s^{(i_{2})}})\geq\tau\]
In contrast to the sequential case (cf. Equation 9), we need to introduce further decision variables and modify the objective function here. The operator \(\operatorname*{agg}(\cdot)\) defines how to aggregate the feature-set qualities of the alternatives. In our experiments, we consider the sum as well as the minimum to instantiate \(\operatorname*{agg}(\cdot)\), which we refer to as _sum-aggregation_ and _min-aggregation_. The latter explicitly fosters balanced feature-set qualities. Appendix A.1 discusses these two aggregation operators and additional ideas for balancing qualities in detail.
Runtime-wise, we expect simultaneous search to scale worse with the number of alternatives than sequential search, as it tackles one large optimization problem instead of multiple smaller ones. In particular, the number of decision variables increases linearly with the number of alternatives \(a\). Also, for each feature and each pair of alternatives, we need to introduce an auxiliary variable if we want to obtain linear constraints (cf. Equation 6 and Table 1).
In contrast to the greedy procedure of sequential search, simultaneous search optimizes alternatives globally. Thus, the simultaneous procedure should yield the same or higher average feature-set quality for the same number of alternatives. Also, the quality can be more evenly distributed over the alternatives, as opposed to the dropping quality over the course of the sequential procedure. However, increasing the number of alternatives still has a negative effect on the average feature-set quality. Further, as opposed to the sequential procedure, there are no intermediate steps where users could interrupt the search.
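To make the simultaneous formulation concrete, the following is a minimal sketch with the open-source PuLP package; it uses sum-aggregation, the simple univariate qualities introduced in Section 3.3.1 below, and the linearization of Equation 6. Both the solver choice and the objective are illustrative assumptions, not the article's actual implementation.

```python
import pulp


def simultaneous_search(q, k, tau, a):
    """Sketch of simultaneous search (Eq. 10) with sum-aggregation and
    univariate feature qualities q; assumes a feasible parameter choice."""
    n = len(q)
    problem = pulp.LpProblem("simultaneous_alternatives", pulp.LpMaximize)
    s = [[pulp.LpVariable(f"s_{i}_{j}", cat="Binary") for j in range(n)] for i in range(a + 1)]
    problem += pulp.lpSum(q[j] * s[i][j] for i in range(a + 1) for j in range(n))  # objective
    for i in range(a + 1):
        problem += pulp.lpSum(s[i]) == k  # feature-set size
    for i1 in range(a + 1):
        for i2 in range(i1 + 1, a + 1):
            t = [pulp.LpVariable(f"t_{i1}_{i2}_{j}", cat="Binary") for j in range(n)]
            for j in range(n):  # linearization of the product (Equation 6)
                problem += t[j] <= s[i1][j]
                problem += t[j] <= s[i2][j]
                problem += 1 + t[j] >= s[i1][j] + s[i2][j]
            problem += pulp.lpSum(t) <= (1 - tau) * k  # overlap bound (Equation 8)
    problem.solve(pulp.PULP_CBC_CMD(msg=False))
    return [[j for j in range(n) if s[i][j].value() > 0.5] for i in range(a + 1)]


# Toy example: 6 features, sets of size 2, zero overlap, two alternatives.
print(simultaneous_search(q=[0.9, 0.8, 0.7, 0.4, 0.3, 0.1], k=2, tau=1.0, a=2))
```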
### Objective Functions - Finding Alternatives
In this section, we discuss how to find alternative feature sets. In particular, we describe how to solve the optimization problem from Section 3.1 for the different categories of feature-set quality measures from Section 2.2. We distinguish between white-box optimization (cf. Section 3.3.1), black-box optimization (cf. Section 3.3.2), and embedding alternatives (cf. Section 3.3.3).
#### 3.3.1 White-Box Optimization
If the feature-set quality function \(Q(s,X,y)\) is sufficiently simple, one can tackle alternative feature selection with a suitable white-box solver. We already showed that our notion of alternative feature sets results in 0-1 integer linear constraints (cf. Proposition 1). We now discuss several feature-selection methods with objectives that admit formulating a 0-1 integer linear problem. Appendix A.2 describes feature-selection methods we did not include in our experiments.
_Univariate filter feature selection_: For univariate filter feature selection, the objective function is linear by default. In particular, these methods decompose the quality of a feature set into the qualities of the individual features:
\[Q_{\text{uni}}(s,X,y)=\sum_{j=1}^{n}q(X_{\cdot j},y)\cdot s_{j} \tag{11}\]
Here, \(q(\cdot)\) typically is a bivariate dependency measure, e.g., mutual information [55] or the absolute value of Pearson correlation, to quantify the relationship between one feature and the prediction target.
For this objective, Appendix A.3 specifies the complete optimization problem, including the constraints for alternatives. Appendix A.4 describes how to potentially speed up optimization by leveraging the monotonicity of the objective. Appendix A.6 proposes heuristic search methods, while we use exact optimization in our experiments.
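As a minimal sketch of this optimization problem (using the open-source PuLP package as an assumed solver stack, not the article's actual implementation), the sequential search of Equation 9 with the univariate objective of Equation 11 and the Dice constraint of Equation 8 can be written as follows:

```python
import pulp


def find_alternatives(q, k, tau, num_alternatives):
    """Sequential search (Eq. 9) for size-k feature sets maximizing Eq. 11.

    q: list of univariate feature qualities; tau: Dice threshold (Eq. 8).
    Returns a list of 1 + num_alternatives feature sets (lists of indices)."""
    n = len(q)
    found = []
    for _ in range(num_alternatives + 1):
        problem = pulp.LpProblem("alternative_feature_selection", pulp.LpMaximize)
        s = [pulp.LpVariable(f"s_{j}", cat="Binary") for j in range(n)]
        problem += pulp.lpSum(q[j] * s[j] for j in range(n))      # Equation 11
        problem += pulp.lpSum(s) == k                             # feature-set size
        for prev in found:                                        # Definition 2 / Equation 8
            problem += pulp.lpSum(s[j] for j in prev) <= (1 - tau) * k
        status = problem.solve(pulp.PULP_CBC_CMD(msg=False))
        if pulp.LpStatus[status] != "Optimal":                    # no valid alternative left
            break
        found.append([j for j in range(n) if s[j].value() > 0.5])
    return found


# Toy example: 6 features, sets of size 2, zero overlap (tau=1), two alternatives.
print(find_alternatives(q=[0.9, 0.8, 0.7, 0.4, 0.3, 0.1], k=2, tau=1.0, num_alternatives=2))
```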
Instead of an integer problem, one could formulate a weighted partial maximum satisfiability (MaxSAT) problem [5, 60], i.e., a weighted Max One problem [47]. In particular, Equation 11 is a sum of weighted binary variables, and the constraints for alternatives can be turned into SAT formulas with a cardinality encoding [99] for the sum expressions.
_Post-hoc feature importance_: From the technical perspective, one can also insert values of post-hoc feature-importance scores into Equation 11. For example, one can pre-compute permutation importance [12] or SAGE scores [20] for each feature and use them as \(q(X_{\cdot j},y)\). However, such post-hoc importance scores often evaluate the usefulness of each feature in the presence of other features. Thus, the importance scores of different features are not independent of each other, violating the implicit assumption behind Equation 11. For example, a feature might show high post-hoc importance if another feature is present,
due to feature interaction, but low importance otherwise. Equation 11 cannot express such conditional importance but requires one overall quality value for each feature. Re-calculating feature importance for each possible alternative feature set is infeasible. In practice, one can still use Equation 11 with importance scores only computed on the full dataset \(X\), i.e., with all features being present. While such an approach might not represent importance in feature subsets faithfully, it can serve as a heuristic nevertheless.
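For instance, one could compute permutation importance once on the full feature set and feed the resulting scores into Equation 11; the following scikit-learn sketch is purely illustrative and not part of this article's evaluation:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=5, random_state=0)
q = result.importances_mean  # heuristic per-feature qualities for Equation 11
```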
_FCBF_: The Fast Correlation-Based Filter (FCBF) [115] is based on the notion of predominance: Each selected feature's correlation with the prediction target must exceed a user-defined threshold as well as the correlation of each other selected feature with the given one. While the original FCBF uses a heuristic search to find predominant features, we propose a formulation as a constrained optimization problem to enable white-box optimization for alternatives:
\[\begin{split}\max_{s}& Q_{\text{FCBF}}(s,X,y)= \sum_{j=1}^{n}q(X_{\cdot j},y)\cdot s_{j}\\ \text{subject to:}&\forall j_{1},j_{2}\in\{1,\ldots, n\},\ j_{1}\neq j_{2},\ (*):s_{j_{1}}+s_{j_{2}}\leq 1\\ \text{with }(*)&\text{:}& q(X_{\cdot j_{1}},y)\leq q(X_{\cdot j_{2}},X_{\cdot j_{1}})\end{split} \tag{12}\]
We drop the original FCBF's threshold parameter on feature-target correlation and maximize the latter instead, as in the univariate-filter case. This change could produce large feature sets that contain many low-quality features. As a countermeasure, one can constrain the feature-set sizes, as we do in our experiments. Additionally, one could also filter out the features with low target correlation before optimization. Further, we keep FCBF's constraints on feature-feature correlation. In particular, we prevent the simultaneous selection of two features if the correlation between them is at least as high as one of the features' correlation to the target. As the 'with'-condition in Equation 12 does not depend on the decision variables \(s\), one can check whether it holds before optimization and add the corresponding linear constraint on \(s\) only if needed.
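Continuing the PuLP-based sketches from above (an illustrative assumption, not the article's implementation), the predominance constraints of Equation 12 can be added by checking the variable-free 'with'-condition up front:

```python
import pulp


def fcbf_problem(q_target, q_feature, k):
    """Sketch of Equation 12: maximize target correlation subject to predominance.

    q_target[j]: dependency between feature j and the target;
    q_feature[j1][j2]: dependency between features j1 and j2."""
    n = len(q_target)
    problem = pulp.LpProblem("fcbf_white_box", pulp.LpMaximize)
    s = [pulp.LpVariable(f"s_{j}", cat="Binary") for j in range(n)]
    problem += pulp.lpSum(q_target[j] * s[j] for j in range(n))  # objective
    problem += pulp.lpSum(s) == k  # optional size constraint, as in the experiments
    for j1 in range(n):
        for j2 in range(n):
            # Add the constraint only if the (variable-free) condition (*) holds.
            if j1 != j2 and q_target[j1] <= q_feature[j2][j1]:
                problem += s[j1] + s[j2] <= 1
    return problem, s  # solve with problem.solve() and read s[j].value()
```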
_mRMR_: Minimal Redundancy Maximum Relevance (mRMR) [85] combines two criteria, i.e., feature relevance and feature redundancy. Relevance corresponds to the dependency between features and prediction target, which should be maximized, as for univariate filters. Redundancy corresponds to the dependency between features, which should be minimized. Using a bivariate dependency measure \(q(\cdot)\), the objective is maximizing the following difference between relevance and redundancy:
\[Q_{\text{mRMR}}(s,X,y)=\frac{\sum_{j=1}^{n}q(X_{\cdot j},y)\cdot s_{j}}{\sum_ {j=1}^{n}s_{j}}-\frac{\sum_{j_{1}=1}^{n}\sum_{j_{2}=1}^{n}q(X_{\cdot j_{1}},X _{\cdot j_{2}})\cdot s_{j_{1}}\cdot s_{j_{2}}}{(\sum_{j=1}^{n}s_{j})^{2}} \tag{13}\]
If one knows the feature-set size \(\sum_{j=1}^{n}s_{j}\) to be a constant \(k\), the denominators of both fractions are constant, so the objective leads to a quadratic-programming
problem [80, 89]. If one additionally replaces each product terms \(s_{j_{1}}\cdot s_{j_{2}}\) according to Equation 6, the problem becomes linear. However, there is a more efficient linearization [76, 78], which we use in our experiments:
\[\max_{s} Q_{\text{mRMR}}(s,X,y) =\frac{\sum_{j=1}^{n}q(X_{\cdot j},y)\cdot s_{j}}{k}-\frac{\sum_{ j=1}^{n}z_{j}}{k\cdot(k-1)}\] (14) subject to: \[\forall j_{1}: A_{j_{1}} =\sum_{j_{2}\neq j_{1}}q(X_{\cdot j_{1}},X_{\cdot j_{2}})\cdot s_{ j_{2}}\] \[\forall j: z_{j} \geq M\cdot(s_{j}-1)+A_{j}\] \[\forall j: z_{j} \in\mathbb{R}_{\geq 0}\] with indices: \[j,j_{1},j_{2} \in\{1,\dots,n\}\]
Here, \(A_{j_{1}}\) is the sum of all redundancy terms related to the feature with index \(j_{1}\). Thus, one can use one real-valued auxiliary variable \(z_{j}\) for each feature instead of one new binary variable for each pair of features. Since redundancy should be minimized, \(z_{j}\) assumes the value of \(A_{j}\) with equality if the feature with index \(j\) is selected (\(s_{j}=1\)) and is zero else (\(s_{j}=0\)). To that end, \(M\) is a large positive value that deactivates the constraint on \(z_{j}\) if \(s_{j}=0\).
Since Equation 14 assumes the feature-set size \(k\in\mathbb{N}\) to be user-defined before optimization, it requires fewer auxiliary variables and constraints than the more general formulation in [76, 78]. Further, following [80], we set the self-redundancy terms \(q(X_{\cdot j},X_{\cdot j})\) to zero and thereby exclude them from the objective. Thus, the redundancy term uses \(k\cdot(k-1)\) instead of \(k^{2}\) for averaging.
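A minimal PuLP sketch of this linearized formulation (again an illustrative assumption rather than the article's implementation) could look as follows:

```python
import pulp


def mrmr_problem(q_target, q_red, k):
    """Sketch of the linearized mRMR objective (Equation 14) for set size k.

    q_target[j]: relevance of feature j; q_red[j1][j2]: redundancy between features
    (self-redundancy q_red[j][j] is ignored, i.e., treated as zero)."""
    n = len(q_target)
    # Big-M: upper bound on any feature's summed redundancy, deactivates z_j if s_j = 0.
    big_m = max(sum(q_red[j1][j2] for j2 in range(n) if j2 != j1) for j1 in range(n))
    problem = pulp.LpProblem("mrmr_white_box", pulp.LpMaximize)
    s = [pulp.LpVariable(f"s_{j}", cat="Binary") for j in range(n)]
    z = [pulp.LpVariable(f"z_{j}", lowBound=0) for j in range(n)]
    relevance = pulp.lpSum((q_target[j] / k) * s[j] for j in range(n))
    redundancy = pulp.lpSum((1.0 / (k * (k - 1))) * z[j] for j in range(n))
    problem += relevance - redundancy  # objective of Equation 14
    problem += pulp.lpSum(s) == k
    for j in range(n):
        redundancy_j = pulp.lpSum(q_red[j][j2] * s[j2] for j2 in range(n) if j2 != j)
        problem += z[j] >= big_m * s[j] - big_m + redundancy_j
    return problem, s  # solve with problem.solve() and read s[j].value()
```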
#### 3.3.2 Black-Box Optimization
If feature-set quality has no closed-form expression, one has to treat it as a black-box function when searching for alternatives. This situation applies to wrapper feature-selection methods, which use prediction models to assess feature-set quality. One can optimize such black-box functions with search heuristics that systematically iterate over candidate feature sets. However, search heuristics often assume an unconstrained search space and may propose candidate feature sets that are not alternative enough. We see four ways to address this issue:
_Enumerating feature sets_: Instead of using a search heuristic, one may enumerate all feature sets that are alternative enough. E.g., one can iterate over all feature sets and sort out those violating the constraints or use a solver to enumerate all valid alternatives directly. Both approaches are usually very inefficient, as there can be a vast number of alternatives.
_Sampling feature sets_: Instead of considering all possible alternatives, one can also sample a limited number. E.g., one could sample from all feature sets but remove samples that are not alternative enough. However, if the number of valid alternatives is small, this approach might need many samples. One could also sample with the help of a solver. However, uniform sampling from
a constrained space is a computationally hard problem, possibly harder than determining if a valid solution exists or not [28].
_Multi-objective optimization_: If one phrases alternative feature selection as a multi-objective problem (cf. Section 3.1), there are no hard constraints anymore, and one could apply a standard multi-objective black-box search procedure. However, we chose to analyze a different problem formulation.
_Adapting search_: One can adapt an existing search heuristic to consider the constraints for alternatives. One idea is to prevent the search from producing feature sets that violate the constraints or at least make such violations less likely, e.g., with a penalty in the objective function. Another idea is to 'repair' feature sets in the search that violate constraints, e.g., replacing them with the most similar feature sets satisfying the constraints. Such solver-assisted search approaches are common in search procedures for software feature models [34, 41, 109]. One could also apply solver-based repair to sampled feature sets.
_Greedy wrapper_: For wrapper feature selection in our experiments, we propose a method that falls into the category _adapting search_. In particular, we adopt a greedy hill-climbing strategy [52] that observes constraints, as displayed in Algorithm 1. First, the algorithm uses a solver to find one solution that is alternative enough, given the current constraints (Line 1). Thus, it has a valid starting point and can always return a solution unless there are no valid solutions at all. Next, it tries 'swapping' two features, i.e., selecting the features if they were deselected or deselecting them if they were selected (Line 7). For simultaneous search, we swap the affected two features in each alternative feature set. This swap might violate cardinality constraints as well as constraints for alternatives. Thus, the algorithm calls the solver again to find a solution \(s^{\prime}\) containing this swap and satisfying the other constraints. If such a solution \(s^{\prime}\) exists and its quality \(Q(s^{\prime},X,y)\) improves the current solution, the algorithm continues from the new solution and tries again to swap the first and second feature (Lines 10-12). Else, it attempts to swap the next pair of features (Lines 13-17). In particular, we only evaluate one solution per swap before moving on rather than enumerating all valid solutions containing the swap.
The algorithm terminates if no swap leads to an improvement or a fixed number of iterations \(max\_iters\) is reached (Line 6). Due to its heuristic nature, the algorithm might get stuck in local optima rather than yielding the global optimum. In particular, \(max\_iters\) only is an upper bound on the iteration count since the algorithm can stop earlier. We define the iteration count as the number of calls to the solver, i.e., attempts to generate feature sets. This number also bounds the number of prediction models trained. However, we only train a model for valid solutions, and not all solver calls may yield one.
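As a complement to the prose description of Algorithm 1, the following simplified Python sketch illustrates its structure; the brute-force `solve` helper stands in for the solver and only works for small feature counts, and the `quality` and `is_valid` callables (e.g., a cross-validated prediction model and the dissimilarity constraints) are illustrative assumptions:

```python
import itertools


def greedy_wrapper(n, k, quality, is_valid, max_iters=100):
    """Simplified sketch of the greedy hill-climbing wrapper (Algorithm 1)."""
    def solve(must_flip=()):
        # Return any valid size-k selection (as a frozenset) respecting the desired flips.
        for candidate in itertools.combinations(range(n), k):
            candidate = frozenset(candidate)
            if all((j in candidate) == flag for j, flag in must_flip) and is_valid(candidate):
                return candidate
        return None

    current = solve()
    if current is None:
        return None  # no valid solution at all
    iters = 1  # iteration count = number of calls to the solver
    improved = True
    while improved and iters < max_iters:
        improved = False
        for j1, j2 in itertools.combinations(range(n), 2):
            flips = ((j1, j1 not in current), (j2, j2 not in current))  # swap both features
            candidate = solve(must_flip=flips)
            iters += 1
            if candidate is not None and quality(candidate) > quality(current):
                current = candidate
                improved = True
                break  # restart the swap loop from the new solution
            if iters >= max_iters:
                break
    return current


# Example: size-2 set with zero overlap to the previous set {0, 2}; quality is univariate.
q = [0.9, 0.2, 0.8, 0.1, 0.5]
previous = frozenset({0, 2})
print(greedy_wrapper(n=5, k=2,
                     quality=lambda f: sum(q[j] for j in f),
                     is_valid=lambda f: len(f & previous) == 0))  # frozenset({1, 4})
```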
#### 3.3.3 Embedding Alternatives
If feature selection is embedded into a prediction model, there is no general approach for finding alternative feature sets. Instead, one would need to embed the search for alternatives into model training as well. Thus, we leave the formulation of specific approaches open for future work. E.g., one could adapt the training of decision trees to not split on a feature if the resulting feature set of the tree was too similar to a given feature set. As another example, there are various formal encodings of prediction models, e.g., as SAT formulas [75, 94, 114], where 'training' already uses a solver. In such representations, one may directly add constraints for alternatives.
### Computational Complexity
In this section, we analyze the time complexity of alternative feature selection. In particular, we study the scalability regarding the number of features \(n\in\mathbb{N}\), feature-set size \(k\in\mathbb{N}\) and number of alternatives \(a\in\mathbb{N}_{0}\). Section 3.4.1 discusses exhaustive search for arbitrary feature-selection methods, while Section 3.4.2 examines univariate feature qualities. Section 3.4.3 summarizes key results.
#### 3.4.1 Exhaustive Search for Arbitrary Feature-Selection Methods
An exhaustive search over the entire search space is the arguably simplest though inefficient approach to finding alternative feature sets. This approach provides an upper bound for the time complexity of a runtime-optimal search algorithm. In this section, we assume unit costs for elementary arithmetic operations like addition, multiplication, and comparison of two numbers.
Conventional feature selectionIn general, the search space of feature selection grows exponentially with \(n\), even without alternatives. In particular, there are \(2^{n}-1\) possibilities to form a single non-empty feature set of arbitrary size. For a fixed feature-set size \(k\), there are \(\binom{n}{k}=\frac{n!}{k!\cdot(n-k)!}\leq n^{k}\) solution candidates. In an exhaustive search, we iterate over these feature sets:
**Proposition 2** (Complexity of exhaustive conventional feature selection).: _Exhaustive search for one feature set of size \(k\) from \(n\) features has a time complexity of \(O(n^{k})\) without the cost of evaluating the objective function._
Evaluating the objective means computing the quality of each solution candidate so that we can determine the best feature set in the end. The cost of this step depends on the feature-selection method but should usually be polynomial in \(n\). Even better, since feature-set quality typically only depends on selected features rather than unselected ones, this cost may be polynomial in \(k\ll n\).
If we assume \(k\ll n,\ k\in O(1)\), i.e., \(k\) being a small constant, independent from \(n\), then the complexity in Proposition 2 is polynomial rather than exponential in \(n\). This assumption makes sense for feature selection, where one typically wants to obtain a small feature set from a high-dimensional dataset. However, the exponent \(k\) may still render an exhaustive search practically infeasible. In terms of parameterized complexity, the problem resides in class \(\mathcal{XP}\) since the runtime term has the form \(O(f(k)\cdot n^{g(k)})\)[25], here with parameter \(k\) and functions \(f(k)=1\), \(g(k)=k\).
Sequential searchLike conventional feature selection, sequential search for alternatives (cf. Definition 2) finds a single feature set at once. However, not all size-\(k\) feature sets are valid anymore. In particular, the constraints for alternatives put an extra cost on each solution candidate. Constraint checking involves iterating over all existing feature sets and features to compute the dissimilarity between sets (cf. Equation 19). This procedure entails a cost of \(O(a\cdot n)\) for each new alternative and \(O(a^{2}\cdot n)\) for the whole sequential search with \(a\) alternatives. Combining this cost with Proposition 2, we obtain the following proposition:
**Proposition 3** (Complexity of exhaustive sequential search).: _Exhaustive sequential search for \(a\in\mathbb{N}\) alternative feature sets of size \(k\) from \(n\) features has a time complexity of \(O(a^{2}\cdot n^{k+1})\) without the cost of evaluating the objective function._
Thus, the runtime resides in the parameterized complexity class \(\mathcal{XP}\) with the parameter \(k\) and remains polynomial if \(k\in O(1)\) and \(a\in O(n^{c}),\ c\in O(1)\).
Simultaneous searchSimultaneous search(cf. Definition 3) enlarges the search space since it optimizes \(a+1\) feature sets at once. Thus, an exhaustive search over size-\(k\) feature sets iterates over \(O(n^{k\cdot(a+1)})\) solution candidates. Including the cost of constraint checking, we arrive at the following proposition:
**Proposition 4** (Complexity of exhaustive simultaneous search).: _Exhaustive simultaneous search for \(a\in\mathbb{N}\) alternative feature sets of size \(k\) from \(n\) features has a time complexity of \(O(a^{2}\cdot n^{k\cdot(a+1)+1})\) without the cost of evaluating the objective function._
The scalability with \(n\) is worse than for sequential search since the number of alternatives appears in the exponent now, except for a special case discussed in Appendix A.5.1. Proposition 4 also assumes that the constraints do not use linearization variables (cf. Equations 6 and 20), which would enlarge the search space even further. Finally, the complexity remains polynomial in \(n\) if \(a\) and \(k\) are small and independent from \(n\), i.e., \(a\cdot k\in O(1)\):
**Proposition 5** (Parameterized complexity of simultaneous search).: _Simultaneous search for \(a\in\mathbb{N}\) alternative feature sets of size \(k\) from \(n\) features resides in the parameterized complexity class \(\mathcal{XP}\) for the parameter \(a\cdot k\)._
#### 3.4.2 Univariate Feature Qualities
MotivationWhile the assumption \(a\cdot k\in O(1)\) ensures polynomial runtime for arbitrary feature-selection methods, the optimization problem can still be hard without this assumption. In the following, we derive complexity results for univariate feature qualities (cf. Equation 11 and Appendix A.3). This feature-selection method has the arguably simplest objective function, i.e., a feature set's quality equals the sum of its constituent features' qualities. This simplicity eases the transformation from and to well-known \(\mathcal{NP}\)-hard problems. Appendix A.5.2 discusses related work on these problems in detail.
Min-aggregation with complete partitioningWe start with three assumptions, which we will drop later: First, we use a dissimilarity threshold of \(\tau=1\), i.e., zero overlap of feature sets. Second, all features must be part of one set. Third, we analyze simultaneous search with min-aggregation (cf. Equation 16). We call the combination of the first two assumptions, which implies \(n=(a+1)\cdot k\), a _complete partitioning_. This scenario differs from the one for which we made polynomial-runtime claims in Section 3.4.1.
A key factor for the hardness of partitioning is the number of solutions: There are \(\genfrac{\{}{\}}{0.0pt}{}{n}{a}\) ways to partition a set of \(n\) elements into \(a\) non-empty subsets, a Stirling number of the second kind [31], which roughly scale like \(a^{n}/a!\)[70], i.e., exponential in \(n\) for a fixed \(a\). Even if the subset sizes are fixed, the scalability regarding \(n\) remains bad since it bases on a multinomial coefficient.
Our complete-partitioning scenario is a variant of the Multi-Way Number Partitioning problem: Partition a multiset of \(n\) integers into \(a\) subsets such
that the sums of all subsets are as equal as possible [54]. One problem formulation, called Multiprocessor Scheduling in [30], minimizes the maximum subset sum: The goal is to assign tasks with different lengths to a fixed number of processors such that the maximum processor runtime is minimal. Multiplying task lengths with \(-1\), one can turn the minimax problem of Multiprocessor Scheduling into the maximin formulation of simultaneous search with min-aggregation: The tasks become features, the negative task lengths become univariate feature qualities, and the processors become feature sets. Since Multiprocessor Scheduling is \(\mathcal{NP}\)-complete, even for just two partitions [30], our problem is \(\mathcal{NP}\)-complete as well:
**Proposition 6** (Complexity of simultaneous search with min-aggregation, complete partitioning, and unconstrained feature-set size).: _Assuming univariate feature qualities, a dissimilarity threshold \(\tau=1\), unconstrained feature-set sizes, and all \(n\) features have to be selected, simultaneous search for alternative feature sets with min-aggregation is \(\mathcal{NP}\)-complete._
Since the assumptions in Proposition 6 denote a special case of alternative feature selection, we directly obtain the following, more general proposition:
**Proposition 7** (Complexity of simultaneous search with min-aggregation).: _Simultaneous search for alternative feature sets with min-aggregation is \(\mathcal{NP}\)-hard._
While Proposition 6 allowed arbitrary sets sizes, there are also existing partitioning problems for constrained \(k\), e.g., called Balanced Number Partitioning or K-Partitioning. K-Partitioning with a minimax objective is \(\mathcal{NP}\)-hard [4] and can be transformed into our maximin objective as above:
**Proposition 8** (Complexity of simultaneous search with min-aggregation, complete partitioning, and constrained feature-set size).: _Assuming univariate feature qualities, a dissimilarity threshold \(\tau=1\), desired feature-set size \(k\), and all \(n\) features have to be selected, simultaneous search for alternative feature sets with min-aggregation is \(\mathcal{NP}\)-complete._
Min-aggregation with incomplete partitioningWe now allow that some features may not be part of any feature set while we keep the assumption of zero feature-set overlap. The problem of finding such an _incomplete partitioning_ still is \(\mathcal{NP}\)-complete in general (cf. Appendix A.5.3 for the proof):
**Proposition 9** (Complexity of simultaneous search with min-aggregation, incomplete partitioning, and constrained feature-set size).: _Assuming univariate feature qualities, a dissimilarity threshold \(\tau=1\), desired feature-set size \(k\), and not all \(n\) features have to be selected, simultaneous search for alternative feature sets with min-aggregation is \(\mathcal{NP}\)-complete._
Min-aggregation with overlapping feature setsThe problem with \(\tau<1\), i.e., set overlap, also is \(\mathcal{NP}\)-hard in general (cf. Appendix A.5.3 for the proof):
**Proposition 10** (Complexity of simultaneous search with min-aggregation, \(\tau<1\), and constrained feature-set size).: _Assuming univariate feature qualities, a dissimilarity threshold \(\tau<1\), and desired feature-set size \(k\), simultaneous search for alternative feature sets with min-aggregation is \(\mathcal{NP}\)-hard._
Sum-aggregationIn contrast to the previous \(\mathcal{NP}\)-hardness results for min-aggregation, sum-aggregation (cf. Equation 15) with \(\tau=1\) admits polynomial-time algorithms (cf. Appendix A.5.3 for the proof):
**Proposition 11** (Complexity of search with sum-aggregation and \(\tau=1\)).: _Assuming univariate feature qualities and a dissimilarity threshold \(\tau=1\), the search for alternative feature sets with sum-aggregation has a time complexity of \(O(n)\) for a complete partitioning of \(n\) features and \(O(n\cdot\log n)\) for an incomplete partitioning._
This feasibility result applies to sequential and simultaneous search, an arbitrary number of alternatives \(a\), and arbitrary feature-set sizes. The key reason for polynomial runtime is that sum-aggregation does not require balancing the feature sets' qualities. Thus, \(\tau=1\) allows many solutions with the same objective value. While at least one of these solutions also optimizes the objective with min-aggregation, most do not. Hence, it is not a contradiction that optimizing with min-aggregation is considerably harder.
#### 3.4.3 Summary
We showed that simultaneous search for alternative feature sets is \(\mathcal{NP}\)-hard in general (cf. Proposition 7). We also placed it in the parameterized complexity class \(\mathcal{XP}\) (cf. Proposition 5), having \(a\) and \(k\) as the parameters that drive the hardness of the problem. For univariate feature qualities and min-aggregation, we obtained more specific \(\mathcal{NP}\)-hardness results for (1) complete partitioning, i.e., \(\tau=1\) and \((a+1)\cdot k=n\) (cf. Proposition 8), (2) incomplete partitioning, i.e., \((a+1)\cdot k<n\) (cf. Proposition 9) and (3) feature set overlap, i.e., \(\tau<1\) (cf. Proposition 10). In contrast, we also inferred polynomial runtime for univariate feature qualities, sum-aggregation, and \(\tau=1\) (cf. Proposition 11).
## 4 Related Work
In this section, we review related work from the fields of feature selection (cf. Section 4.1), subgroup discovery (cf. Section 4.2), clustering (cf. Section 4.3), subspace clustering and subspace search (cf. Section 4.4), and explainable artificial intelligence (cf. Section 4.5). To the best of our knowledge, searching for optimal alternative feature sets in the sense of this paper is novel. However, there is literature on optimal alternatives outside the field of feature selection. Also, there are works on finding multiple, diverse feature sets.
### Feature Selection
**Conventional feature selection.** Most feature-selection methods only yield one solution [11], though some exceptions exist. Nevertheless, none of the following approaches searches for optimal alternatives in our sense.
[97] proposes a genetic algorithm that iteratively updates a population of multiple feature sets. To foster diversity, the algorithm's fitness criterion does not only consider feature-set quality but also a penalty on feature-set overlap in the population. However, users cannot control the admissible overlap, i.e., there is no parameter comparable to \(\tau\). In contrast, the genetic algorithm's parameter for the population size corresponds to the number of alternatives.
[27] employs multi-objective genetic algorithms to obtain prediction models with different complexity and diverse feature sets. However, the two objectives are prediction performance and feature-set size, while diversity only influences the genetic selection step under particular circumstances.
[73] clusters features and forms alternatives by picking one feature from each cluster. However, they do this to reduce the number of features for subsequent model selection and model evaluation, not as a guided search for alternatives.
**Ensemble feature selection.** Ensemble feature selection [92, 96] combines feature-selection results, e.g., obtained by different feature-selection methods or on different samples of the data. Fostering diverse feature sets might be a subgoal to improve prediction performance, but it is usually only an intermediate step. This focus differs from our goal of finding optimal alternatives.
[113] obtains feature sets or rankings on bootstrap samples of the data. Next, an aggregation strategy creates one or multiple diverse feature sets. The authors propose using k-medoid clustering and frequent itemset mining for the latter. While these approaches allow to control the number of feature sets, there is no parameter for their dissimilarity. Also, aggregation builds on bootstrap sampling instead of being allowed to form arbitrary alternatives.
[62] builds an ensemble prediction model from classifiers trained on different feature sets. To this end, a genetic algorithm iteratively evolves a population of feature sets. Diversity is one of multiple fitness criteria, with the Hamming distances quantifying the dissimilarity of feature sets. However, since feature diversity is only one of several objectives, users cannot control it directly.
[35] computes feature relevance separately for each class and then combines the top features. This procedure can yield alternatives but does not enforce dissimilarity. Also, the number of alternatives is fixed to the number of classes.
**Statistically equivalent feature sets.** Approaches for statistically equivalent feature sets [11, 56] use statistical tests to determine features or feature sets that are equivalent for predictions. E.g., a feature may be independent of the target given another feature. A search algorithm conducts multiple such tests and outputs equivalent feature sets or a corresponding feature grouping.
Our notion of alternatives differs from equivalent feature sets in several aspects. In particular, building optimal alternatives from equivalent feature sets
is not straightforward. Depending on how the statistical tests are configured, there can be an arbitrary number of equivalent feature sets without explicit quality-based ordering. Instead, we always provide a fixed number of alternatives. Also, our alternatives need not have equivalent quality but should be optimal under constraints. Further, our dissimilarity threshold allows controlling overlap between feature sets instead of eliminating all redundancies.
**Constrained feature selection.** We define alternatives via constraints on feature sets. There already is work on other kinds of constraints in feature selection, e.g., for feature cost [83], feature groups [116], or domain knowledge [6, 32]. These approaches are orthogonal to our work, as such constraints do not explicitly foster optimal alternatives. At most, they might implicitly lead to alternative solutions [6]. Further, most of the approaches are tied to particular constraint types, while our integer-programming formulation also supports such constraints besides the ones for alternatives. [6] is an exception in that regard since it models feature selection as a Satisfiability Modulo Theories (SMT) optimization problem, which admits our constraints for alternatives as well.
### Subgroup Discovery
[59] presents six strategies to foster diversity in subgroup set discovery, which searches for interesting regions in the data space, i.e., combinations of conditions on feature values, rather than only selecting features. Three strategies yield a fixed number of alternatives, and the other three a variable number. The strategies become part of beam search, i.e., a heuristic search procedure, while we mainly consider exact optimization. Also, the criteria for alternatives differ from ours. The strategy _fixed-size description-based selection_ prunes subgroups with the same quality as previously found ones if they differ by at most one feature-value condition. In contrast, we require dissimilarity independent from the quality, have a flexible dissimilarity threshold, and support simultaneous besides sequential search for alternatives. Another strategy, _variable-size description-based selection_, limits the total number of subgroups a feature may occur in but does not constrain subgroup overlap per se. The four remaining strategies in [59] have no obvious counterpart in our feature-selection scenario.
### Clustering
Finding alternative solutions has been addressed extensively in the field of clustering. [9] gives a taxonomy and describes algorithms for alternative clustering. Our problem definition in Sections 3.1 and 3.2 is, on a high level, inspired by the one in [9]: Find multiple solutions that maximize quality while minimizing similarity. [9] also distinguishes between singular/multiple alternatives and sequential/simultaneous search. They mention constraint-based search for alternatives as one of several solution paradigms. Further, feature selection can help to find alternative clusterings [101]. Nevertheless, the problem definition for alternatives in clustering and feature selection is fundamentally different.
First, the notion of dissimilarity differs, as we want to find differently composed feature sets while alternative clustering targets at different assignments of data objects to clusters. Second, our objective function, i.e., feature-set quality, relates to a supervised prediction scenario while clustering is unsupervised.
Two exemplary approaches for alternative clustering are _COALA_[7] and _MAXIMUS_[8]. COALA [7] imposes _cannot-link constraints_ on pairs of data objects rather than constraining features: Data objects from the same cluster in the original clustering should be assigned to different clusters in the alternative clustering. In each step of its iterative clustering procedure, COALA compares the quality of an action observing the constraints to another one violating them. Based on a threshold on the quality ratio, either action is taken. MAXIMUS [8] employs an integer program to formulate dissimilarity between clusterings. In particular, it wants to maximize the dissimilarity of the feature-value distributions in clusters between the clusterings. The output of the integer program leads to constraints for a subsequent clustering procedure.
### Subspace Clustering and Subspace Search
Finding multiple useful feature sets plays a role in subspace clustering [42, 72] and subspace search [29, 79, 102]. These approaches strive to improve the results of data-mining algorithms by using subspaces, i.e., feature sets, rather than the full space, i.e., all features. While some subspace approaches only consider individual subspaces, others explicitly try to remove redundancy between subspaces [72, 79] or foster subspace diversity [29, 102]. In particular, [42] surveys subspace-clustering approaches yielding multiple results and discusses the redundancy aspect. However, subspace clustering and -search approaches differ from alternative feature selection in at least one of the following aspects:
First, the objective differs, i.e., definitions of subspace quality deviate from feature-set quality in our scenario. Second, definitions of subspace redundancy may consider dissimilarity between projections of the entire data, i.e., data objects with feature values, into subspaces, while our notion of dissimilarity is based purely on binary feature-selection decisions. Third, controlling dissimilarity in subspace approaches is often less user-friendly than with our parameter \(\tau\). E.g., dissimilarity might be a regularization term in the objective rather than a hard constraint, or there might not be an explicit control parameter at all.
### Explainable Artificial Intelligence (XAI)
In the field of XAI, alternative explanations might provide additional insights into predictions, enable users to develop and test different hypotheses, appeal to different kinds of users, and foster trust in the predictions [50, 108]. In contrast, obtaining significantly different explanations for the same prediction might raise doubts about how meaningful the explanations are [43]. Finding diverse explanations had been studied for various explainers, e.g., for counterfactuals [21, 44, 67, 71, 91, 105], criticisms [49], and semifactual explanations [2]. There are
several approaches to foster diversity, e.g., ensembling different kinds of explanations [98], considering multiple local minima [105], using a search algorithm that maintains diversity [21], extending the optimization objective [2, 49, 71], or introducing constraints [44, 67, 91]. The last option is similar to the way we enforce alternatives. Of the various mentioned approaches, only [2, 67, 71] introduce a parameter to control the diversity of solutions. Of these three works, only [67] offers a user-friendly dissimilarity threshold in \([0,1]\), while the other two approaches employ a regularization parameter in the objective.
Despite similarities, all the previously mentioned XAI techniques tackle different problems than alternative feature selection. In particular, they provide local explanations, i.e., they target prediction outcomes for individual data objects and build on feature values. In contrast, we are interested in the global prediction quality of feature sets. For example, counterfactual explanations [33, 100, 104] alter feature _values as little as possible_ to produce an alternative prediction _outcome_. In contrast, alternative feature sets might alter the feature _selection significantly_ while trying to maintain the original prediction _quality_.
## 5 Experimental Design
In this section, we describe our experimental design. We give a brief overview of its goal and components (cf. Section 5.1) before elaborating on the components in detail. In particular, we describe evaluation metrics (cf. Section 5.2), methods (cf. Section 5.3), datasets (cf. Section 5.4), and implementation (cf. Section 5.5).
### Overview
We conduct experiments with 30 binary-classification datasets. Our evaluation focuses on the trade-off between feature-set quality and obtaining alternative feature sets. We compare five feature-selection methods, representing different notions of feature-set quality. Also, we train prediction models with the resulting feature sets and analyze prediction performance. To find alternatives, we consider simultaneous as well as sequential search. We systematically vary the number of alternatives and the dissimilarity threshold for alternatives.
### Evaluation Metrics
**Feature-set quality.** We evaluate feature-set quality with two metrics. First, we report the _objective value_ \(Q(s,X,y)\) of the feature-selection methods, which guided the search for alternatives. Second, we train prediction models with the found feature sets. We report _prediction performance_ in terms of the Matthews correlation coefficient (MCC) [64]. This coefficient is insensitive to class imbalance, reaches its maximum of 1 for perfect predictions, and is 0 for random guessing. We conduct stratified five-fold cross-validation to analyze how well feature selection and prediction models generalize. The search for alternatives and model training are limited to the training data.
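As a concrete illustration of this protocol, the following minimal sketch evaluates a model with stratified five-fold cross-validation and MCC; the synthetic dataset and the variable names are placeholders, not taken from our experimental pipeline.

```python
# Minimal sketch of the evaluation protocol: stratified five-fold cross-validation
# with the Matthews correlation coefficient (MCC). Dataset and model are illustrative.
from sklearn.datasets import make_classification
from sklearn.metrics import matthews_corrcoef
from sklearn.model_selection import StratifiedKFold
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=20, random_state=25)
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=25)
mcc_scores = []
for train_idx, test_idx in cv.split(X, y):
    # Feature selection and the search for alternatives would only use the training part.
    model = DecisionTreeClassifier(criterion="entropy", random_state=25)
    model.fit(X[train_idx], y[train_idx])
    mcc_scores.append(matthews_corrcoef(y[test_idx], model.predict(X[test_idx])))
print(sum(mcc_scores) / len(mcc_scores))
```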
**Runtime.** We consider two metrics related to runtime.
First, we analyze the _optimization time_. For white-box feature-selection methods, we measure the total runtime of solver calls. We exclude the time for computing feature qualities and feature dependencies for the objective since one can compute these values once per dataset and then re-use them in each solver call. For _Greedy Wrapper_, we measure the runtime of the entire black-box optimization procedure involving multiple solver calls and model trainings.
Second, we examine the _optimization status_, which can take four values. If the solver finished before reaching a timeout, it either found an _optimal_ solution or proved the problem _infeasible_, i.e., no solution exists. If the solver reached its timeout, it either found a _feasible_ solution whose optimality it could not prove or found no valid solution though one might exist, so the problem is _not solved_.
### Methods
We compare several approaches for making predictions (cf. Section 5.3.1), feature selection (cf. Section 5.3.2), and searching alternatives (cf. Section 5.3.3).
#### 5.3.1 Prediction
As prediction models, we use decision trees [13] and random forests with 100 trees [12]. Both models can learn complex, non-linear dependencies from the data. We leave the hyperparameters of the models at their defaults, except for using information gain instead of Gini impurity as the split criterion, to be consistent with our parametrization of filter feature-selection methods.
Note that tree models also carry out feature selection themselves, i.e., they are embedded approaches. Thus, they might not use all features from the alternative feature sets. However, this is not a problem for our study. We are interested in the performance the models achieve when limited to certain feature sets, not in whether and how they use each feature from these sets.
#### 5.3.2 Feature Selection (Objective Functions)
We search for alternatives under different notions of feature-set quality as the objective function. We choose five well-known feature-selection methods that are easy to parameterize and cover the different categories from Section 2.2 except _embedded_, as explained in Section 3.3.3. One method (_Greedy Wrapper_) requires black-box optimization, while the other four are white-box.
With each feature-selection method, we select \(k\in\{5,10\}\) features, thereby obtaining small feature sets. We enforce the desired \(k\) with a simple constraint in optimization, using the feature-set-size expression from Equation 5.
**Filter feature selection.** We evaluate three filter methods, all using mutual information [55] as the dependency measure \(q(\cdot)\). This measure can capture arbitrary dependencies rather than, e.g., just linear correlations. _MI_ denotes a univariate filter (cf. Equation 11), while _FCBF_ (cf. Equation 12) and _mRMR_
(cf. Equation 14) are multivariate. Since mutual information has no fixed upper bound, we normalize per dataset and cross-validation fold to improve the comparability of feature-set quality. For _FCBF_ and _MI_, we normalize the individual features' qualities such that selecting all features yields a quality of \(1\) and selecting no feature yields a quality of \(0\). For _mRMR_, we min-max-normalize all mutual-information values to \([0,1]\), so the overall objective is in \([-1,1]\).
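For illustration, the following sketch computes the normalized univariate qualities for _MI_ with _scikit-learn_; the synthetic dataset and the variable names are placeholders.

```python
# Sketch of the univariate MI qualities with the normalization described above:
# the individual qualities sum to 1, so selecting all features yields quality 1.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import mutual_info_classif

X, y = make_classification(n_samples=500, n_features=20, n_informative=5, random_state=25)
q = mutual_info_classif(X, y, random_state=25)  # estimated mutual information per feature
q = q / q.sum()                                 # normalization used for MI (and FCBF)
k = 5
original_set = np.argsort(q)[::-1][:k]          # the original (unconstrained) MI solution
print(sorted(original_set.tolist()), round(q[original_set].sum(), 3))
```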
**Wrapper feature selection.** As a wrapper method, we employ the hill-climbing strategy _Greedy Wrapper_ from Algorithm 1. We set \(max\_iters\) to \(1000\). To evaluate feature-set quality within the wrapper, we apply a stratified 80:20 holdout split and train decision trees. \(Q(s,X,y)\) corresponds to the prediction performance in terms of MCC on the 20% validation part.
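The wrapper's internal quality function can be sketched as follows; the function name is ours, but the steps mirror the description above (stratified 80:20 split, decision tree, MCC on the validation part).

```python
# Sketch of the feature-set quality Q(s, X, y) used inside Greedy Wrapper.
import numpy as np
from sklearn.metrics import matthews_corrcoef
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

def wrapper_quality(s, X, y, random_state=25):
    X_selected = X[:, np.flatnonzero(s)]  # keep only the selected features
    X_train, X_val, y_train, y_val = train_test_split(
        X_selected, y, test_size=0.2, stratify=y, random_state=random_state)
    model = DecisionTreeClassifier(criterion="entropy", random_state=random_state)
    model.fit(X_train, y_train)
    return matthews_corrcoef(y_val, model.predict(X_val))
```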
**Post-hoc feature importance.** As a post-hoc importance measure, we use model-based feature importance provided by _scikit-learn_. Again, we use a decision tree as the model. There, importance expresses a feature's contribution towards optimizing the split criterion of the tree, for which we choose information gain. These importances are normalized to sum up to \(1\) by default. We plug the importances into Equation 11, i.e., treat them like univariate filter scores. The interpretation is different, though, since the scores originate from trees trained with all features rather than assessing features in isolation.
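These qualities, presumably the method labeled _Model Gain_ in the result tables, can be obtained as sketched below; the dataset and names are illustrative.

```python
# Sketch of the post-hoc importances: a decision tree (information-gain criterion)
# trained on all features; its importances serve as the scores in Equation 11.
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=20, random_state=25)
tree = DecisionTreeClassifier(criterion="entropy", random_state=25).fit(X, y)
q = tree.feature_importances_  # normalized to sum to 1 by scikit-learn
```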
#### 5.3.3 Alternatives (Constraints)
**Competitors.** We only evaluate approaches for searching alternatives that we proposed in this article. As discussed in Section 4, approaches from related work pursue different objective functions, operate with different notions of alternatives, and may only work for particular feature-selection methods. All these points prevent a meaningful comparison of these approaches to ours. E.g., a feature set considered alternative in related work might violate our constraints for alternatives. Further, within our own approaches, we can still put the feature-set quality into perspective by comparing alternatives to each other.
**Search parametrization.** We employ _sequential_ (cf. Equation 9) and _simultaneous_ (cf. Equation 10) search for alternatives. For the latter, we use sum-aggregation (cf. Equation 15) and min-aggregation (cf. Equation 16) in the objective. We evaluate \(a\in\{1,\ldots,10\}\) alternatives for sequential search and \(a\in\{1,\ldots,5\}\) for simultaneous search due to the higher runtime of the latter. For the dissimilarity threshold \(\tau\), we analyze all possible sizes of the feature-set overlap in the Dice dissimilarity (cf. Equations 3 and 8). Thus, for \(k=5\), we consider \(\tau\in\{0.2,0.4,0.6,0.8,1\}\), corresponding to an overlap of four to zero features. For \(k=10\), we consider \(\tau\in\{0.1,0.2,\ldots,1\}\). We exclude \(\tau=0\), which would allow returning duplicate feature sets.
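For equal-sized sets, the Dice dissimilarity constraint translates into a maximum admissible overlap of \((1-\tau)\cdot k\) features between two sets (cf. Equation 19 in Appendix A.3), as the following small sketch illustrates; the epsilon only guards against floating-point rounding.

```python
# Admissible overlap between two feature sets of size k under the Dice dissimilarity
# with threshold tau: at most (1 - tau) * k shared features.
import math

def max_overlap(k, tau):
    return math.floor((1 - tau) * k + 1e-9)  # epsilon avoids floating-point artifacts

print([(tau, max_overlap(5, tau)) for tau in (0.2, 0.4, 0.6, 0.8, 1.0)])
# [(0.2, 4), (0.4, 3), (0.6, 2), (0.8, 1), (1.0, 0)]
```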
**Optimization.** All searches for alternatives rely on solvers. With _Greedy Wrapper_ as the feature-selection method, the search procedure is heuristic and might not cover the entire search space. There, the solver only assists in finding valid solutions but does not optimize. For the white-box feature-selection methods, the solver exactly solves the underlying optimization problems. Thus, given sufficient solving time, these alternatives are globally optimal.
**Timeout.** We employ a solver timeout to make a large-scale evaluation feasible and to account for the high variance of solver runtime, even for optimization problems of the same size. In particular, we grant each solver call 60 s multiplied by the number of feature sets. Thus, sequential search conducts multiple solver calls with 60 s timeout, while simultaneous search conducts one solver call with proportionally more time. The summed timeout for a fixed number of alternatives is the same for both search methods. For 84% of the feature sets in our evaluation, the solver finished before the timeout.
### Datasets
We evaluate alternative feature selection on the Penn Machine Learning Benchmarks (PMLB) [82, 90]. To harmonize evaluation, we only consider binary-classification datasets, though alternative feature selection also works for regression and multi-class problems. We exclude datasets with less than 100 data objects since they might entail a high uncertainty when assessing feature-set quality. Otherwise, the number of data objects should not systematically impact the feature-set quality and is unimportant for our evaluation. Also, we exclude datasets with less than 15 features to leave some room for alternatives. Next, we exclude one dataset with 1000 features, which would dominate the overall runtime of the experiments. Finally, we manually exclude datasets that seem duplicated or modified versions of other datasets from the benchmark.
Consequently, we obtain 30 datasets with 106 to 9822 data objects and 15 to 168 features. The datasets contain no missing values. Categorical features have an ordinal encoding by default. Table 2 lists these datasets.
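The automatic part of this filtering can be reproduced roughly as follows with the _pmlb_ package; the manual exclusion of duplicated or modified datasets is not covered, and the snippet downloads each candidate dataset on first use.

```python
# Rough sketch of the automatic dataset-filtering criteria on PMLB.
from pmlb import classification_dataset_names, fetch_data

selected = []
for name in classification_dataset_names:
    X, y = fetch_data(name, return_X_y=True)  # downloads/caches the dataset
    m, n = X.shape
    if len(set(y)) == 2 and m >= 100 and 15 <= n < 1000:
        selected.append(name)
print(len(selected))
```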
### Implementation and Execution
We implemented our experimental pipeline in Python 3.8, using _scikit-learn_[84] for machine learning and the solver _SCIP_[10] via the package _OR-Tools_[86] for optimization. A requirements file in our code specifies the versions of all packages. The experimental pipeline parallelizes over datasets, cross-validation folds, and feature-selection methods, while solver calls and model training are single-threaded. We ran the pipeline on a server with 128 GB RAM and an _AMD EPYC 7551_ CPU, having 32 physical cores and a base clock of 2.0 GHz. The parallelized pipeline run took 255 hours, i.e., about 10.6 days.
\begin{table}
\begin{tabular}{l r r} \hline \hline Dataset & \(m\) & \(n\) \\ \hline backache & 180 & 32 \\ chess & 3196 & 36 \\ churn & 5000 & 20 \\ clean1 & 476 & 168 \\ clean2 & 6598 & 168 \\ coil2000 & 9822 & 85 \\ credit\_a & 690 & 15 \\ credit\_g & 1000 & 20 \\ dis & 3772 & 29 \\ G\_Epistasis\_2\_Way\_20atts\_0.1H\_EDM\_1\_1 & 1600 & 20 \\ G\_Epistasis\_2\_Way\_20atts\_0.4H\_EDM\_1\_1 & 1600 & 20 \\ G\_Epistasis\_3\_Way\_20atts\_0.2H\_EDM\_1\_1 & 1600 & 20 \\ G\_Heterogeneity\_20atts\_1600\_Het\_0.4\_0.2\_50\_EDM\_2\_001 & 1600 & 20 \\ G\_Heterogeneity\_20atts\_1600\_Het\_0.4\_0.2\_75\_EDM\_2\_001 & 1600 & 20 \\ hepatitis & 155 & 19 \\ Hill\_Valley\_with\_noise & 1212 & 100 \\ horse\_colic & 368 & 22 \\ house\_votes\_84 & 435 & 16 \\ hypothyroid & 3163 & 25 \\ ionosphere & 351 & 34 \\ molecular\_biology\_promoters & 106 & 57 \\ mushroom & 8124 & 22 \\ ring & 7400 & 20 \\ sonar & 208 & 60 \\ spambase & 4601 & 57 \\ spect & 267 & 22 \\ spectf & 349 & 44 \\ tokyo1 & 959 & 44 \\ twonorm & 7400 & 20 \\ wdbc & 569 & 30 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Datasets from PMLB used in our experiments. \(m\) denotes the number of instances and \(n\) the number of features. Dataset names starting with ‘G_’ actually start with ‘GAMETES_’; we truncated them to reduce the table’s width.
## 6 Evaluation
In this section, we evaluate our experiments. In particular, we discuss the parametrization for searching alternatives: the search method (cf. Section 6.1), number of alternatives \(a\) (cf. Section 6.2), and dissimilarity threshold \(\tau\) (cf. Section 6.3). Section 6.4 summarizes key findings. Additionally, Appendix A.7 contains results for further dimensions of our experimental design.
### Search Methods for Alternatives
**Variance in feature-set quality.** As expected, the search method influences how much the training-set objective value \(Q\) varies between alternatives found within each search run. Figure 1(a) visualizes this result for _MI_ as the feature-selection method and \(k=5\). In particular, the quality of multiple alternatives found by sequential search usually varies more than for simultaneous search. For simultaneous search, min-aggregation yields considerably more homogeneous feature-set quality than sum-aggregation. These findings apply to all white-box feature-selection methods but not the heuristic _Greedy Wrapper_.
As Figures 1(c) and 1(e) show, the variance of feature-set quality differs considerably less between the search methods on the test set, for the objective value as well as prediction performance. In particular, alternatives found by simultaneous search do not have considerably more homogeneous test feature-set quality than those found by sequential search. This effect might result from overfitting: Even if training feature-set quality is similar, some alternatives might generalize better, i.e., lose less quality on the test set than others. Thus, the variance in test feature-set quality caused by overfitting could alleviate the effect on variance caused by the search method.
**Average value of feature-set quality.** While obtaining alternatives of homogeneous quality can be one goal of simultaneous search, the main selling point compared to sequential search would be alternatives of higher average quality. However, we found that simultaneous search is not clearly better than sequential search in that regard. In particular, Figure 1(b) compares the distribution of the mean training-set objective in search runs with _MI_ as feature-selection method and \(k=5\). We observe that all search methods yield very similar distributions of feature-set quality. The other four feature-selection methods also do not show a general quality advantage of simultaneous search. At most, simultaneous search tends to develop a slight advantage with a growing number of alternatives for _MI_, as visible in Figure 1(b), and _Model Gain_.
The test-set objective value in Figure 1(d) and the test-set prediction performance in Figure 1(f) also exhibit the negligible quality difference between the search methods. As Figure 2(a) displays, the variation in prediction performance caused by other dimensions of the experimental design, e.g., dataset, dissimilarity threshold \(\tau\), etc., exceeds the variation due to the search methods.
Finally, Figure 2(b) displays the difference in feature-set quality between sequential and simultaneous search compared on each search setting separately,
Figure 1: Feature-set quality over the number of alternatives \(a\), by search method for alternatives and evaluation metric. Results with _MI_ as feature-selection method and \(k=5\). Y-axes are truncated to improve readability.
i.e., each combination of dataset, dissimilarity threshold \(\tau\), etc. The figure again shows little variation in quality between the search methods except for _Greedy Wrapper_ feature selection. In particular, the quality difference is usually close to zero, apart from a few outliers. Additionally, the figure highlights that outliers can occur in both directions: While simultaneous search can yield better feature sets in some scenarios, sequential search can be better in others.
**Optimization status.** One reason why simultaneous search fails to consistently beat sequential search quality-wise is that search results can be suboptimal. For _Greedy Wrapper_, the search is heuristic per se and does not cover the entire search space. For all feature-selection methods, the solver can time out. Table 3 shows that simultaneous search has a higher likelihood of timeouts than sequential search, likely due to the larger size of the optimization problem (cf. Table 1). In particular, for up to five alternatives and \(k=5\), all sequential searches for _FCBF_, _MI_, and _Model Gain_ finished within the timeout, i.e., yielded the optimal feature set or ascertained infeasibility, while _mRMR_ had about 9% timeouts. In contrast, for simultaneous search with sum-aggregation, all feature-selection methods experience timeouts: Roughly 2-3% of the searches for _FCBF_, _MI_, and _Model Gain_, and 67% of the searches for _mRMR_ found a feasible solution but could not prove optimality. Such timeout-affected simultaneous solutions can be worse than optimal sequential solutions. The optimization status _not solved_, i.e., not finding a feasible solution without proving infeasibility, did not occur in the displayed results. Min-aggregation instead of sum-aggregation in simultaneous search exhibits more timeouts for _MI_ and
Figure 2: Feature-set quality by feature-selection method and search method for alternatives. Results with \(k=5\) and \(a\in\{1,2,3,4,5\}\).
_Model Gain_ but less for _FCBF_ and _mRMR_. Still, sequential search incurs fewer timeouts for all these four feature-selection methods.
Finally, note that the fraction of timeouts strongly depends on the number of alternatives \(a\), as Table 4 displays: For simultaneous search with \(k=5\) and sum-aggregation, roughly \(8\%\) of the white-box searches timed out for \(a=1\) but \(20\%\) for \(a=3\) and \(30\%\) for \(a=5\). While we grant simultaneous searches proportionally more time for multiple alternatives, the observed increase in timeouts suggests that runtime increases super-proportionally, as we analyze next.
**Optimization time.** The actual optimization times also speak in favor of sequential search. As Table 5 shows, the mean optimization time of sequential search is lower for all five feature-selection methods. In particular, the difference between sequential and simultaneous search is up to three orders of magnitude for the four white-box feature-selection methods. Further, _FCBF_, _MI_, and _Model Gain_ experience a dramatic increase in optimization time with the number of alternatives \(a\) in simultaneous search, as Table 6 displays. In contrast, the runtime increase is considerably less for sequential search, which shows an approximately linear trend with the number of alternatives.
Based on all results described in this section, we focus on sequential search in the following. In particular, it was significantly faster than simultaneous search while yielding similar feature-set quality.
Another interesting question for practitioners is how the runtime relates to \(n\), the number of features in the dataset. One would expect a positive correlation since the optimization problem's instance size increases with \(n\).
\begin{table}
\begin{tabular}{l l r r r} \hline \hline Selection & Search & \multicolumn{3}{c}{Optimization status} \\ \cline{3-5} & & Infeasible & Feasible & Optimal \\ \hline FCBF & seq. & 66.39\% & 0.00\% & 33.61\% \\ FCBF & sim. (min) & 73.07\% & 1.73\% & 25.20\% \\ FCBF & sim. (sum) & 73.07\% & 2.19\% & 24.75\% \\ MI & seq. & 1.97\% & 0.00\% & 98.03\% \\ MI & sim. (min) & 4.67\% & 9.60\% & 85.73\% \\ MI & sim. (sum) & 4.67\% & 3.17\% & 92.16\% \\ Model Gain & seq. & 1.97\% & 0.00\% & 98.03\% \\ Model Gain & sim. (min) & 4.67\% & 5.55\% & 89.79\% \\ Model Gain & sim. (sum) & 4.67\% & 1.92\% & 93.41\% \\ mRMR & seq. & 1.95\% & 8.67\% & 89.38\% \\ mRMR & sim. (min) & 4.67\% & 49.04\% & 46.29\% \\ mRMR & sim. (sum) & 4.67\% & 67.39\% & 27.95\% \\ \hline \hline \end{tabular}
\end{table}
Table 3: Frequency of optimization statuses (cf. Section 5.2) by feature-selection method and search method for alternatives. Results with \(k=5\), \(a\in\{1,2,3,4,5\}\), and excluding _Greedy Wrapper_, which uses the solver for satisfiability checking rather than optimizing. Each row adds up to \(100\%\).
Roughly speaking, this trend does appear in our experimental data. However, the observed trend is rather noisy, particularly for simultaneous search, and some higher-dimensional datasets even show lower average runtimes than lower-dimensional datasets. This result indicates that several factors other than \(n\) influence runtime. Besides factors related to the datasets and experimental design, the heuristics used by the solver may also cause the runtime to fluctuate considerably.
### Number of Alternatives \(a\)
**Feature-set quality.** For sequential search, the training-set objective value has to decrease with the number of alternatives, at least for the feature-selection criteria optimized exactly. In particular, each found feature set constrains the optimization problem further. Figures 3(a) and 3(c) illustrate this trend for _MI_-based feature selection. Since feature-set quality varies between datasets (cf. Appendix A.7.1), we additionally normalize feature-set quality here. In particular, we analyze the relative development of feature-set quality within each search run for alternatives. First, we shift the range of all evaluation metrics to \([0,1]\) since prediction performance and the objectives of _Greedy Wrapper_ and _mRMR_ have the range \([-1,1]\) without this shift. Second, we max-normalize feature-set quality for each search of alternatives, i.e., the highest feature-set quality in
\begin{table}
\begin{tabular}{l r r r} \hline \hline \multicolumn{1}{c}{\(a\)} & \multicolumn{3}{c}{Optimization status} \\ \cline{2-4} & \multicolumn{1}{c}{Infeasible} & \multicolumn{1}{c}{Feasible} & \multicolumn{1}{c}{Optimal} \\ \hline
1 & 16.10\% & 7.57\% & 76.33\% \\
2 & 17.50\% & 13.43\% & 69.07\% \\
3 & 20.00\% & 20.40\% & 59.60\% \\
4 & 27.00\% & 21.47\% & 51.53\% \\
5 & 28.23\% & 30.47\% & 41.30\% \\ \hline \hline \end{tabular}
\end{table}
Table 4: Frequency of optimization statuses (cf. Section 5.2) by number of alternatives \(a\). Results from simultaneous search with sum-aggregation, \(k=5\), and excluding _Greedy Wrapper_. Each row adds up to 100%.
\begin{table}
\begin{tabular}{l r r r} \hline \hline \multirow{2}{*}{Selection} & \multicolumn{3}{c}{Optimization time} \\ \cline{2-4} & Seq. & Sim. (min) & Sim. (sum) \\ \hline FCBF & 0.22 s & 11.91 s & 13.09 s \\ Greedy Wrapper & 54.23 s & 61.10 s & 63.45 s \\ MI & 0.03 s & 48.25 s & 25.39 s \\ Model Gain & 0.03 s & 30.91 s & 19.98 s \\ mRMR & 34.12 s & 157.87 s & 189.76 s \\ \hline \hline \end{tabular}
\end{table}
Table 5: Mean optimization time by feature-selection method and search method for alternatives. Results with \(k=5\) and \(a\in\{1,2,3,4,5\}\).
Figure 3: Feature-set quality, normalized per search run for alternatives, over the number of alternatives, by evaluation metric and normalization method. Results from sequential search with _MI_ as feature-selection method and \(k=5\).
the search run is set to 1, and the other qualities are scaled accordingly. Figure 3(a) shows that multiple alternatives may have a similar quality, as the median training-set objective value remains relatively stable over the alternatives and is above 0.8 even for the tenth alternative. For comparison, Figure 3(c) uses min-max normalization, i.e., the worst of the alternatives gets 0 as objective. This figure makes the decrease in quality over the alternatives more visible. In particular, this figure highlights that the training-set objective value decreases most from the original feature set to the first alternative but less beyond.
Additionally, Figures 3(a) and 3(c) show that the test-set objective value also drops most to the first alternative. However, this decrease is less prominent than on the training set, and there is no clear trend beyond the first few alternatives. In particular, alternatives can even have a higher test-set objective value than the original feature set due to overfitting. Similar findings hold for test-set prediction performance. Overall, these results indicate that alternative feature sets fulfill their purpose of being different solutions with similar quality.
**Optimization status.** The prior observations refer to the quality of the found feature sets. However, the more alternatives are desired, the likelier an infeasible optimization problem is (cf. Table 4). For example, _MI_-based feature selection in sequential search always finds an original feature set. However, with \(k=5\), the problem is infeasible in 2% of the cases for the third alternative, 12% for the fifth, and 17% for the tenth. Increasing the feature-set size \(k\) or having lower dataset dimensionality \(n\) naturally causes more infeasible solutions, as fewer features become available for alternatives. Thus, even if the quality of found feature sets remains relatively stable for more alternatives, valid alternatives may simply not exist. Figures 3(b) and 3(d) show the same data as Figures 3(a) and 3(c) but with the quality of infeasible feature sets set to zero, i.e., the theoretical minimum after we shifted the value ranges of evaluation metrics. In these figures, the downward trend of feature-set quality over the alternatives becomes slightly more prominent, particularly for many alternatives. This trend also depends on the dissimilarity threshold \(\tau\), which we analyze in the next section.
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline \(a\) & \multicolumn{5}{c}{Optimization time} \\ \cline{2-6} & FCBF & Wrapper & MI & Model Gain & mRMR \\ \hline
1 & 0.52 s & 25.94 s & 0.03 s & 0.02 s & 44.99 s \\
2 & 0.95 s & 39.44 s & 0.09 s & 0.08 s & 118.80 s \\
3 & 3.26 s & 56.52 s & 0.31 s & 0.27 s & 208.90 s \\
4 & 14.02 s & 86.13 s & 3.84 s & 3.59 s & 258.40 s \\
5 & 46.71 s & 109.20 s & 122.69 s & 95.94 s & 317.69 s \\ \hline \hline \end{tabular}
\end{table}
Table 6: Mean optimization time by number of alternatives and feature-selection method. Results from simultaneous search with sum-aggregation and \(k=5\).
Figure 4: Mean of feature-set quality, max-normalized per search run for alternatives, over the number of alternatives, by feature-selection method and evaluation metric. Results from sequential search with \(k=5\).
**Influence of feature-selection method.** While we discussed _MI_ before, the decrease in objective value over the number of alternatives occurs for all feature-selection methods in our experiments, as Figure 4(a) displays. The strength of the decrease varies between the feature-selection methods. For example, _Greedy Wrapper_ and _mRMR_ show little effect of increasing the number of alternatives, while _MI_ and _Model Gain_ exhibit the strongest effect. As Figure 4(b) displays, the quality decrease becomes more prominent if one sets the quality of infeasible feature sets to zero. Further, for the test-set prediction performance shown in Figure 4(c), no feature-selection method exhibits a strong decrease over the number of alternatives, unless we account for infeasible feature sets (cf. Figure 4(d)).
### Dissimilarity Threshold \(\tau\)
**Feature-set quality.** As Figure 5(a) shows for _MI_ as the feature-selection method, the decrease in the objective value \(Q\) over the number of alternatives strongly depends on the dissimilarity threshold \(\tau\). We use results with \(k=10\) instead of \(k=5\) here to show more distinct values of \(\tau\). For a low dissimilarity threshold, e.g., \(\tau=0.1\), the objective value barely drops over the number of alternatives. In contrast, the objective value decreases significantly for a high dissimilarity threshold, e.g., \(\tau=1\). This trend is expected since a higher \(\tau\) constrains the feature selection more. As Figure 5(c) displays, this phenomenon also holds for the test-set objective value, though the dependency on \(\tau\) is lower there. The effect of \(\tau\) on prediction performance exhibits an even less clear trend, as visualized in Figure 5(e). This result underlines our previous observations that the objective value is only partially indicative of prediction performance.
**Optimization status.** Similar to our analysis for the number of alternatives (cf. Section 6.2), one needs to consider that setting \(\tau\) too high can make the optimization problem infeasible. In particular, a higher dissimilarity threshold increases the likelihood that no feature set is alternative enough. Figure 6 visualizes the fraction of valid feature sets over the number of alternatives and dissimilarity threshold \(\tau\). Figures 5(b), 5(d), and 5(f) account for infeasible feature sets by setting their feature-set quality to zero. Compared to Figures 5(a), 5(c), and 5(e), the decrease in feature-set quality is noticeably stronger. In contrast, if only considering valid feature sets, the mean quality can increase over the number of alternatives, as visible in Figure 5(a) for \(\tau=1.0\) or in Figure 4(a) for _MI_ and _Model Gain_. This counterintuitive phenomenon can occur because some datasets run out of valid feature sets sooner than others, so the average quality may be determined for different sets of datasets at each number of alternatives.
**Influence of feature-selection method.** The impact of \(\tau\) on feature-set quality varies between feature-selection methods, as Figure 7(a) shows. Besides _MI_, the objective value of _Model Gain_ strongly depends on \(\tau\) as well. In contrast, the remaining three feature-selection methods exhibit little influence of \(\tau\) on feature-set quality unless one also accounts for infeasible feature sets (cf. Figure 7(b)). For _Greedy Wrapper_, this outcome may be explained by the heuristic,
Figure 5: Mean of feature-set quality, normalized per search run for alternatives, over the number of alternatives and dissimilarity threshold \(\tau\), by evaluation metric and normalization method. Results from sequential search with \(MI\) as feature-selection method and \(k=10\).
inexact search procedure. For _FCBF_, the additional constraints on feature-feature correlation (cf. Equation 12) may alleviate the effect of \(\tau\). For _mRMR_, the low influence of \(\tau\) matches the low influence of the number of alternatives. For this feature-selection method, alternatives tend to vary little in their objective value. Finally, the test-set prediction performance does not vary considerably over \(\tau\) for any feature-selection method, as Figure 7(c) displays. Prediction performance only decreases when infeasible feature sets are also taken into account (cf. Figure 7(d)).
### Summary
**Datasets (cf. Appendix A.7.1).** Generally, feature-set quality strongly depended on the dataset. Thus, an analysis of alternative feature sets should be dataset-specific or appropriately normalize quality, as we did.
**Feature-set quality metrics (cf. Appendix A.7.2).** Different notions of feature-set quality exhibited different trends in our experiments, so one should choose a notion of feature-set quality carefully. In particular, the objective function of feature-selection methods might disagree with the prediction performance of the corresponding feature sets. Further, we observed overfitting, i.e., a gap between training-set quality and test-set quality, also for simple objective functions, though to a lesser extent than for prediction performance.
**Feature-selection methods (cf. Appendix A.7.3).** Among the feature-selection methods, _Model Gain_ resulted in the best prediction performance on average, though the simple univariate _MI_ also turned out competitive. _Greedy Wrapper_ and _mRMR_ required high optimization times, while our constraint-based version of _FCBF_ yielded many infeasible solutions. Finally, selecting \(k=10\) instead of \(k=5\) features yielded only a slight improvement in prediction
Figure 6: Fraction of optimization runs yielding a valid feature set over the number of alternatives and dissimilarity threshold \(\tau\), by feature-set size \(k\). Results from sequential search with _MI_ as feature-selection method.
Figure 7: Mean of feature-set quality, max-normalized per search run for alternatives, over the dissimilarity threshold \(\tau\), by feature-selection method and evaluation metric. Results from sequential search with \(k=10\).
performance for all feature-selection methods, so one might stick to smaller feature-set sizes if such a setting benefits interpretability for users.
**Search methods for alternatives (cf. Section 6.1).** Simultaneous search, particularly with min-aggregation, considerably reduced the variance of the training-set objective value over alternatives compared to sequential search, as we desired. However, results were less clear on the test set and when considering prediction performance to measure feature-set quality. Further, the average quality of alternatives was similar to sequential search. In addition, the latter was considerably faster and led to fewer solver timeouts, particularly when increasing the number of alternatives. Also, sequential search allows users to stop searching after each alternative instead of requiring the number of alternatives to be specified beforehand. Thus, we recommend using sequential search.
**Number of alternatives \(a\) (cf. Section 6.2).** Feature-set quality decreased most from the original feature set to the first alternative but less beyond. The strength of this decrease depended on the feature-selection method. There usually were several alternatives of similar quality, if such valid alternatives existed at all. In particular, the frequency of infeasible solutions increased with \(a\) due to more constraints. Finally, the quality decrease was more prominent on the training than on the test set.
**Dissimilarity threshold \(\tau\) (cf. Section 6.3).** A higher dissimilarity threshold caused a stronger decrease in feature-set quality in terms of objective value for the feature-selection methods _MI_ and _Model Gain_. This result shows that users can control a trade-off between quality and dissimilarity. However, results regarding prediction performance and for the other three feature-selection methods were less clear. In any case, a higher \(\tau\) naturally caused more infeasible solutions, which users should be aware of.
## 7 Conclusions and Future Work
In this section, we summarize our work (cf. Section 7.1) and give an outlook on potential future work (cf. Section 7.2).
### Conclusions
Feature-selection methods are a valuable tool to foster interpretable predictions. Conventional feature-selection methods typically yield only one feature set. However, users may be interested in obtaining multiple, sufficiently diverse feature sets of high quality. Such alternative feature sets may provide alternative explanations for predictions from the data.
In this article, we defined alternative feature selection as an optimization problem. We formalized alternatives via constraints that are independent of
the feature-selection method, can be combined with other constraints on feature sets, and allow users to control diversity according to their needs. We analyzed the complexity of this optimization problem and proved \(\mathcal{NP}\)-hardness, even for simple notions of feature-set quality. Further, we discussed how to integrate different categories of conventional feature-selection methods. Finally, we evaluated alternative feature selection with 30 classification datasets and five feature-selection methods. We compared two search methods for alternatives and varied the number of alternatives as well as the threshold for alternatives.
### Future Work
**Feature selection (objective function).** One could search for alternatives with other feature-selection methods than the five we analyzed. In particular, we implemented only one procedure to find alternatives for wrapper feature selection (cf. Section 3.3.2). Embedded feature selection, which we did not evaluate, would also need adapted search procedures for alternatives (cf. Section 3.3.3).
**Alternatives (constraints).** One could vary the definition of alternatives, e.g., the set-dissimilarity measure (cf. Section 3.2.1), the quality aggregation for simultaneous alternatives (cf. Appendix A.1), or the overall optimization problem (cf. Section 3.1). While we made general and straightforward decisions for each of these points, particular applications might demand other formalizations of alternatives. E.g., one could use soft instead of hard constraints.
**Computational complexity.** Appendix A.5.4 discusses how one could extend our complexity analysis of alternative feature selection (cf. Section 3.4).
**Runtime.** Our experiments (cf. Section 6.1) and theoretical analyses (cf. Section 3.2.2) revealed that simultaneous search scales poorly with the number of alternatives. One could conceive a more efficient problem formulation. Further, one could limit the solver runtime and take the intermediate results once the timeout is reached. We already used a fixed timeout in our experiments, but studying the exact influence of timeouts on feature-set quality is an open topic. Next, one could use a different solver, e.g., one for non-linear optimization, so the auxiliary variables from Equation 6 become superfluous. Finally, one could employ a heuristic rather than an exact search method (cf. Appendix A.6).
**Datasets.** Our evaluation used datasets from various domains (cf. Section 5.4). While we could uncover several general trends, the existence and quality of alternatives naturally depend on the dataset. Thus, practitioners could use our generic search methods for alternatives in domain-specific case studies.
**Acknowledgments.** This work was supported by the Ministry of Science, Research and the Arts Baden-Württemberg, project _Algorithm Engineering for the Scalability Challenge (AESC)_.
## A Appendix
In this section, we provide supplementary materials. Section A.1 discusses aggregation operators for the objective of simultaneous search (cf. Equation 10). Section A.2 discusses additional objective functions for multivariate filter feature selection (cf. Section 3.3.1). Section A.3 provides complete definitions of the alternative-feature-selection problem (cf. Section 3.2) for the univariate objective (cf. Equation 11). Section A.4 proposes how to speed up optimization for the univariate objective (cf. Equation 11). Section A.5 complements the complexity analysis (cf. Section 3.4). Section A.6 proposes search heuristics for the univariate objective (cf. Equation 11). Section A.7 contains additional evaluation results (cf. Section 6).
### Aggregation Operators for Simultaneous Search
In this section, we discuss operators to aggregate the feature-set quality of multiple alternatives in the objective of simultaneous search (cf. Equation 10).
**Sum-aggregation.** The arguably simplest way to aggregate the qualities of multiple feature sets is to sum them up, which we call _sum-aggregation_:
\[\max_{s^{(0)},\ldots,s^{(a)}}\sum_{i=0}^{a}Q(s^{(i)},X,y) \tag{15}\]
While this objective fosters a high average quality of feature sets, it does not guarantee that the alternatives have similar quality:
**Example 1** (Sum-aggregation).: Consider \(n=6\) features with univariate feature qualities (cf. Equation 11) \(q=(9,8,7,3,2,1)\), feature-set size \(k=3\), number of alternatives \(a=2\), and dissimilarity threshold \(\tau=0.5\), which permits an overlap of one feature between sets here. Sequential search yields the selection \(s^{(0)}=(1,1,1,0,0,0)\), \(s^{(1)}=(1,0,0,1,1,0)\), and \(s^{(2)}=(0,1,0,1,0,1)\), with a summed quality of \(24+14+12=50\). One simultaneous-search solution consists of the feature sets \(s^{(0)}=(1,1,0,1,0,0)\), \(s^{(1)}=(1,0,1,0,1,0)\), and \(s^{(2)}=(0,1,1,0,0,1)\), with a summed quality of \(20+18+16=54\). Another simultaneous-search solution is \(s^{(0)}=(1,1,0,0,0,1)\), \(s^{(1)}=(1,0,1,0,1,0)\), and \(s^{(2)}=(0,1,1,1,0,0)\), with a summed quality of \(18+18+18=54\).
This example allows several insights. First, sequential search yields worse quality than simultaneous search here, i.e., 50 vs. 54. Second, the feature-set qualities of the sequential solution, i.e., 24, 14, and 12, differ significantly. Third, simultaneous search can yield multiple solutions whose feature-set quality is differently balanced. Here, the feature-set qualities in the second simultaneous-search solution, i.e., 18, 18, and 18, are more balanced than in the first, i.e., 20, 18, and 16. However, both solutions are equally optimal for sum-aggregation.
**Min-aggregation.** To actively foster balanced feature-set qualities in simultaneous search, we propose _min-aggregation_ in the objective:
\[\max_{s^{(0)},\ldots,s^{(a)}}\min_{i\in\{0,\ldots,a\}}Q(s^{(i)},X,y) \tag{16}\]
In the terminology of social choice theory, this objective uses an egalitarian rule instead of a utilitarian one [74]. Note that optimizing the objective with either sum-aggregation or min-aggregation does not necessarily optimize the other. We already showed a solution optimizing sum-aggregation but not min-aggregation (cf. Example 1). In the following, we demonstrate the other direction:
**Example 2** (Min-aggregation).: Consider \(n=6\) features with univariate feature qualities (cf. Equation 11) \(q=(11,10,6,5,4,1)\), feature-set size \(k=3\), number of alternatives \(a=1\), and dissimilarity threshold \(\tau=0.5\), which permits an overlap of one feature between sets here. One solution optimizing the objective with min-aggregation is \(s^{(0)}=(1,1,0,0,1,0)\) and \(s^{(1)}=(1,0,1,1,0,0)\), with a summed quality of \(25+22=47\). Another solution is \(s^{(0)}=(1,1,0,0,0,1)\) and \(s^{(1)}=(1,0,1,1,0,0)\), with a summed quality of \(22+22=44\).
While both solutions have the same minimum quality, only the first solution optimizes the objective with sum-aggregation. In particular, min-aggregation permits reducing the quality of sets above the minimum of all sets.
From the technical perspective, Equation 16 has the disadvantage of being non-linear regarding the decision variables \(s^{(0)},\ldots,s^{(a)}\). However, we can linearize it with one constraint per feature set and an auxiliary variable \(Q_{\min}\):
\[\max_{s^{(0)},\ldots,s^{(a)}} Q_{\min}\] (17) subject to: \[\forall i\in\{0,\ldots,a\}: Q_{\min}\leq Q(s^{(i)},X,y)\] \[Q_{\min}\in\mathbb{R}\]
As we maximize \(Q_{\min}\), this variable will implicitly assume the actual minimum value of \(Q(s^{(i)},X,y)\) with equality since the solution would not be optimal otherwise. This situation relieves us from introducing further auxiliary variables that are usually necessary when linearizing maximum or minimum expressions [69].
**Further approaches for balancing quality.** Min-aggregation provides no control or guarantee of how much the feature-set qualities will actually differ between alternatives since it only incentivizes high quality for all sets. One can alleviate this issue by adapting the objective or constraints. First, related work on number partitioning also uses other objectives for balancing [54, 57] (cf. Section A.5.2). E.g., one could minimize the difference between maximum and minimum feature-set quality. Second, one could use sum-aggregation but constrain the minimum or maximum quality of sets, or the difference between the qualities. However, such constraint-based approaches introduce one or several parameters bounding feature-set quality, which are difficult to determine a priori. Third, one could treat balancing qualities as another objective besides
maximizing the summed quality. One can then either optimize the two objectives simultaneously, filtering the results for Pareto-optimal solutions, or optimize a weighted combination of the two objectives. In both cases, users may need to define an acceptable trade-off between the objectives. It is an open question whether a solution always exists that jointly optimizes min- and sum-aggregation. If yes, then optimizing a weighted combination of the two objectives would also optimize each of them on its own, assuming positive weights.
### Further Objectives for Multivariate Filter Methods
While Section 3.3.1 already addressed FCBF and mRMR as multivariate filter feature-selection methods, we discuss the objectives of CFS and Relief here.
**CFS.** Correlation-based Feature Selection (CFS) [37, 38] follows a similar principle as mRMR but uses the ratio instead of the difference between a relevance term and a redundancy term for feature-set quality. Using a bivariate dependency measure \(q(\cdot)\) to quantify correlation, the objective is as follows:
\[Q_{\text{CFS}}(s,X,y)=\frac{\sum_{j=1}^{n}q(X_{\cdot j},y)\cdot s_{j}}{\sqrt{ \sum_{j=1}^{n}s_{j}+\sum_{j_{1}=1}^{n}\sum_{\begin{subarray}{c}j_{2}=1\\ j_{2}\neq j_{1}\end{subarray}}^{n}q(X_{\cdot j_{1}},X_{\cdot j_{2}})\cdot s_{ j_{1}}\cdot s_{j_{2}}}} \tag{18}\]
One can square this objective to remove the square root in the denominator [78]. Nevertheless, the objective remains non-linear in the decision variables \(s\) since it involves a fraction and multiplications between variables. However, one can linearize the objective with additional variables and constraints [77, 78], allowing to formulate alternative feature selection for CFS as a linear problem.
**Relief.** Relief [51, 88] builds on the idea that data objects with a similar value of the prediction target should have similar feature values, but data objects that differ in their target should differ in their feature values. Relief assigns a score to each feature by sampling data objects and quantifying the difference in feature values and target values compared to their nearest neighbors. We deem Relief to be multivariate since the nearest-neighbor computations involve all features instead of considering them independently. However, the resulting feature scores can directly be put into the univariate objective (cf. Equation 11) to obtain a linear problem. One can also use Relief scores in CFS to consider feature redundancy [37, 38], which the default Relief does not.
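The following sketch shows one simplified variant of Relief-style scoring for a binary target, sampling data objects and comparing them to their nearest hit and nearest miss; it is meant to convey the idea rather than reproduce any particular Relief implementation.

```python
import numpy as np

def relief_scores(X, y, n_samples=100, seed=None):
    """Simplified Relief scores for a binary target; higher scores indicate features
    whose values differ more for the nearest miss than for the nearest hit."""
    rng = np.random.default_rng(seed)
    X = np.asarray(X, dtype=float)
    y = np.asarray(y)
    X = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0) + 1e-12)  # scale to [0, 1]
    n, d = X.shape
    weights = np.zeros(d)
    for idx in rng.choice(n, size=min(n_samples, n), replace=False):
        dist = np.abs(X - X[idx]).sum(axis=1)   # Manhattan distance to all objects
        dist[idx] = np.inf                      # exclude the object itself
        same, other = (y == y[idx]), (y != y[idx])
        near_hit = X[np.where(same)[0][np.argmin(dist[same])]]
        near_miss = X[np.where(other)[0][np.argmin(dist[other])]]
        weights += np.abs(X[idx] - near_miss) - np.abs(X[idx] - near_hit)
    return weights / min(n_samples, n)
```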
### Complete Specifications of the Optimization Problem for the Univariate Objective
In this section, we provide complete specifications of the alternative-feature-selection problem for sequential and simultaneous search. In particular, we
combine all relevant definitions and equations from Section 3. We use the objective of univariate filter feature selection (cf. Equation 11). The corresponding feature qualities \(q(\cdot)\) are constants in the optimization problem. We use the Dice dissimilarity (cf. Equation 8) to measure feature-set dissimilarity for alternatives. The dissimilarity threshold \(\tau\in[0,1]\) is a user-defined constant. Further, we assume fixed, user-defined feature-set sizes \(k\in\mathbb{N}\).
**Sequential alternatives.** In the sequential case, only one feature set \(F_{s}\) is variable in the optimization problem, while the existing feature sets \(F_{\bar{s}}\in\mathbb{F}\) with their selection vectors \(\bar{s}\) are constants.
\[\max_{s} Q_{\text{uni}}(s,X,y)=\sum_{j=1}^{n}q(X_{.j},y)\cdot s_{j}\] (19) subject to: \[\forall F_{\bar{s}}\in\mathbb{F}: \sum_{j=1}^{n}s_{j}\cdot\bar{s}_{j}\leq(1-\tau)\cdot k\] \[\sum_{j=1}^{n}s_{j}=k\] \[s\in\{0,1\}^{n}\]
**Simultaneous alternatives.** In the simultaneous case, all feature sets are variable. \(a\in\mathbb{N}_{0}\) denotes the number of alternatives, which corresponds to the number of feature sets minus one. Next, we introduce auxiliary variables to linearize products between variables (cf. Equation 6). Finally, we use sum-aggregation (cf. Equation 15) in the objective here.
\[\max_{s^{(0)},\ldots,s^{(a)}} \sum_{i}Q_{\text{uni}}(s^{(i)},X,y)=\sum_{i}\sum_{j}q(X_{.j},y) \cdot s_{j}^{(i)}\] subject to: \[\forall i_{1}\ \forall i_{2}: \sum_{j}t_{j}^{(i_{1},i_{2})}\leq(1-\tau)\cdot k \tag{20}\] \[\forall i_{1}\ \forall i_{2}\ \forall j: t_{j}^{(i_{1},i_{2})}\leq s_{j}^{(i_{1})}\] \[\forall i_{1}\ \forall i_{2}\ \forall j: t_{j}^{(i_{1},i_{2})}\leq s_{j}^{(i_{2})}\] \[\forall i_{1}\ \forall i_{2}\ \forall j: 1+t_{j}^{(i_{1},i_{2})}\geq s_{j}^{(i_{1})}+s_{j}^{(i_{2})}\] \[\forall i: \sum_{j}s_{j}^{(i)}=k\] \[\forall i: s^{(i)}\in\{0,1\}^{n}\] \[\forall i_{1}\ \forall i_{2}: t^{(i_{1},i_{2})}\in\{0,1\}^{n}\] with indices: \[i\in\{0,\ldots,a\}\] \[i_{1}\in\{1,\ldots,a\}\] \[i_{2}\in\{0,\ldots,i_{1}-1\}\] \[j\in\{1,\ldots,n\}\]
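A possible realization of this program with the Python interface of Google OR-tools could look as follows; the solver backend and helper names are illustrative assumptions rather than the implementation used in our experiments.

```python
from ortools.linear_solver import pywraplp

def simultaneous_alternatives(q, k, a, tau):
    """Simultaneous search with sum-aggregation and linearized overlap constraints."""
    n = len(q)
    solver = pywraplp.Solver.CreateSolver("SCIP")
    s = [[solver.BoolVar(f"s_{i}_{j}") for j in range(n)] for i in range(a + 1)]
    for i in range(a + 1):
        solver.Add(solver.Sum(s[i]) == k)                  # cardinality per feature set
    for i1 in range(1, a + 1):                             # pairwise overlap constraints
        for i2 in range(i1):
            t = [solver.BoolVar(f"t_{i1}_{i2}_{j}") for j in range(n)]
            for j in range(n):                             # linearize s_j^(i1) * s_j^(i2)
                solver.Add(t[j] <= s[i1][j])
                solver.Add(t[j] <= s[i2][j])
                solver.Add(1 + t[j] >= s[i1][j] + s[i2][j])
            solver.Add(solver.Sum(t) <= (1 - tau) * k)     # Dice-based overlap limit
    solver.Maximize(solver.Sum([q[j] * s[i][j] for i in range(a + 1) for j in range(n)]))
    if solver.Solve() != pywraplp.Solver.OPTIMAL:
        return None                                        # no feasible solution for k, a, tau
    return [[int(v.solution_value()) for v in s_i] for s_i in s]
```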
### Pre-Selection for the Univariate Objective
In this section, we describe how to potentially speed up the optimization of the univariate objective (cf. Equation 11) by _pre-selection_ if the user-defined feature-set sizes \(k\) and the number of alternatives \(a\) are small.
The univariate objective is monotonic in the features' qualities \(q(X_{\cdot j},y)\) and the selection decisions \(s_{j}\). In particular, the objective cannot decrease when selecting more features or replacing a feature with another of higher quality for a fixed feature-set size. Sum-aggregation (cf. Equation 15) and min-aggregation (cf. Equation 16) for simultaneous search are monotonic as well.
Thus, assuming \((a+1)\cdot k<n\), it suffices to use the \((a+1)\cdot k\) highest feature qualities when searching for an optimal solution out of \(a+1\) feature sets. Due to monotonicity, the remaining feature qualities cannot improve the objective, so one can drop them before optimization. We call this step _pre-selection_. While there might also be optimal solutions using the dropped features, their objective value cannot be higher than with pre-selection. For example, such solutions can arise in case of multiple identical qualities or for min-aggregation in the objective (cf. Example 2). Also, the optimal solution might not contain all pre-selected features, i.e., pre-selection over-approximates the set of selected features.
One can conduct pre-selection before using a solver or any other search mechanism, e.g., exhaustive search. The latter generally has polynomial runtime regarding \(n\) assuming small, constant \(a\) and \(k\), i.e., \(a\cdot k\in O(1)\) (cf. Section 3.4.1). With pre-selection, the pure search cost would even become independent from \(n\), i.e., \(O(1)\) under that assumption. However, one would need to determine the highest feature qualities first, e.g., by sorting all qualities in \(O(n\cdot\log n)\) or iteratively determining the maximum quality in \(O((a+1)\cdot k\cdot n)\).
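A minimal sketch of this pre-selection step with NumPy, assuming the univariate qualities are stored in an array `q`:

```python
import numpy as np

def pre_select(q, k, a):
    """Return the indices of the (a+1)*k highest univariate feature qualities,
    which suffice for finding an optimal solution due to monotonicity."""
    m = min((a + 1) * k, len(q))
    return np.argpartition(np.asarray(q), -m)[-m:]  # O(n); sort afterwards if an order is needed
```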
### Computational Complexity
In this section, we provide details for our analysis of computational complexity (cf. Section 3.4). In particular, we discuss a special case of exhaustive simultaneous search (cf. Section A.5.1), outline related work (cf. Section A.5.2), provide proofs (cf. Section A.5.3), and describe future work (cf. Section A.5.4).
#### a.5.1 A Special Case of Exhaustive Simultaneous Search
The complexity of exhaustive simultaneous search is lower than in Proposition 4 for the special case \(0<\tau\cdot k\leq 1\), i.e., if feature sets need to differ in only one feature. There, each feature set is an alternative to each other unless both sets are identical. Thus, each set of \(a+1\) distinct feature sets constitutes a valid solution, and further constraint checking is unnecessary. Hence, instead of iterating over sets of feature sets, one can iterate over individual feature sets and maintain a buffer containing the \(a+1\) feature sets with the highest quality. For each feature set iterated over, one needs to determine if its quality is higher than the lowest feature-set quality in the buffer and replace it if yes. This procedure has a runtime of \(O((a+1)\cdot n^{k})\) without the cost of evaluating the objective. I.e.,
unlike in Proposition 4, the number of alternatives \(a\) is not part of the exponent anymore, and the cost corresponds to the search for one feature set times the cost of updating the buffer. For large \(a\), one can implement the buffer as a heap, thereby reducing the linear factor regarding \(a\) to a logarithmic one.
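The following sketch illustrates this buffered enumeration with Python's `itertools` and `heapq` for univariate qualities; it is a simplified illustration of the special case, not an optimized implementation.

```python
import heapq
from itertools import combinations

def exhaustive_special_case(q, k, a):
    """Exhaustive simultaneous search for 0 < tau*k <= 1: any a+1 distinct size-k
    sets form a valid solution, so a buffer of the a+1 best sets suffices."""
    buffer = []                                       # min-heap of (quality, features)
    for features in combinations(range(len(q)), k):
        quality = sum(q[j] for j in features)
        if len(buffer) <= a:
            heapq.heappush(buffer, (quality, features))
        elif quality > buffer[0][0]:                  # better than the worst buffered set
            heapq.heapreplace(buffer, (quality, features))
    return [set(features) for _, features in buffer]
```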
#### a.5.2 Related Work
In this section, we discuss related work on \(\mathcal{NP}\)-hard problems that resemble alternative feature-selection with univariate feature qualities (cf. Equation 11), providing background for Section 3.4.2.
**Integer programming.** The univariate objective and several other feature-selection methods allow us to phrase alternative feature selection as a 0-1 integer linear program (cf. Section 3.3.1). Integer Programming is \(\mathcal{NP}\)-complete in general, even for binary decision variables [30, 45]. Thus, alternative feature selection with a white-box objective suitable for Integer Programming resides in \(\mathcal{NP}\). However, it could still be easier since alternative feature selection only uses particular constraint types instead of expressing arbitrary integer linear problems. Vice versa, the membership in \(\mathcal{NP}\) based on Integer Programming assumes a particular encoding of alternative feature selection, i.e., each constraint is stored separately and counts towards the problem's input size. If we instead define the input size only as the number of features \(n\) or the total encoding length of the objective function plus parameters \(a\), \(k\), and \(\tau\), the problem could be harder than \(\mathcal{NP}\), e.g., for a high number of alternatives. In particular, increasing the number of alternatives would increase the encoding length logarithmically but the cost of constraint checking quadratically.
**Multi-way number partitioning / multiprocessor scheduling.** The literature provides different formulations of Multi-Way Number Partitioning and Multiprocessor Scheduling. In particular, different objectives formalize the notion of balanced subset sums and can lead to different optimal solutions [54, 57]. The maximin formulation we use for min-aggregation in simultaneous search is one such notion.
There are several exact algorithms to solve Multi-Way Number Partitioning, e.g., using branch-and-bound approaches that might have exponential runtime [39, 95, 107]. For a fixed number of partitions, the problem is weakly \(\mathcal{NP}\)-complete since it admits pseudo-polynomial algorithms [30, 53]. Such algorithms run in polynomial time if the input numbers are bounded to a particular size known in advance. Since our feature qualities typically are real numbers, one would need to scale and discretize them to apply such an algorithm. Also, for an arbitrary number of partitions, the problem is strongly \(\mathcal{NP}\)-complete, so no pseudo-polynomial algorithm can exist unless \(\mathcal{P}=\mathcal{NP}\)[30].
However, \(\mathcal{NP}\)-completeness does not exclude the existence of approximation routines that run in polynomial time and have a guaranteed quality relative to the optimal solution. For example, [1, 24, 112] present such algorithms for
the maximin formulation of Multi-Way Number Partitioning, which corresponds to our objective with min-aggregation. In particular, [112, 1] describe polynomial-time approximation schemes (PTAS), which can provide a solution arbitrarily close to the optimum. However, the runtime depends on the desired approximation ratio and can grow exponentially the more precision is desired. Unless \(\mathcal{P}=\mathcal{NP}\), the strong \(\mathcal{NP}\)-completeness of the problem prevents the existence of a fully polynomial-time approximation scheme (FPTAS), which would only polynomially depend on the precision of approximation [112, 1]. However, an FPTAS does exist for each fixed number of partitions [93]. Further, besides approximations, the problem also has polynomial-time exact algorithms if certain parameters of the problem are fixed, e.g., the number of unique numbers to be partitioned or the largest number [66]. Thus, the problem is fixed-parameter tractable (\(\mathcal{FPT}\)) for an appropriate definition of 'parameter'.
**Balanced number partitioning / k-partitioning.** While the previous approaches considered sets of arbitrary sizes, there are number-partitioning problems with constrained \(k\) as well, e.g., called Balanced Number Partitioning or K-Partitioning. The problem formulations differ in their objective and cardinality constraints, e.g., if equalities or inequalities are used.
For the minimax objective, [4, 65, 117] propose heuristic algorithms, some with approximation guarantees. [4] also provides a bound of the objective value relative to the unconstrained case. Further, there is a PTAS for each fixed set size \(k\)[65]. Finally, the problem exhibits a polynomial-time exact algorithm for \(k=2\)[22, 23] and an FPTAS for \(k=n/2\)[111].
One can also loosen the cardinality constraints by requiring \(\leq k\) instead of \(=k\). Further, the cardinality \(k\) might vary between partitions. This generalized problem is strongly \(\mathcal{NP}\)-hard but has heuristics running in polynomial time [46]. In particular, [17] provides an efficient PTAS (EPTAS).
As another problem formulation, [18, 40, 58] use a maximin objective as we do. This objective was rarely addressed in combination with cardinality constraints in the literature [58]. Also, all these three references use \(\leq k\) constraints instead of \(=k\). Again, this problem is strongly \(\mathcal{NP}\)-hard [40], but [18, 40, 58] propose approximation algorithms, partly with quality guarantees.
**Other partitioning problems.** There are other \(\mathcal{NP}\)-complete problems that partition elements into non-overlapping subsets [30]. E.g., Partition [45] asks if one can partition a set of elements with positive integer weights into two subsets with the same subset sum. 3-Partition [30] demands a partitioning into three-element subsets with an identical, predefined subset sum of the elements' positive integer weights. In contrast to these two problems, we do not require alternative feature sets to have the same quality.
**Bin covering.** Bin Covering [3] distributes elements with individual weights into bins such that the number of bins is maximal and the summed weights in each bin surpass a predefined limit. [57] noted a relationship between Multi-Way Number Partitioning and Bin Covering, which may improve solution approaches for either problem [106, 107]. In our case, we could maximize the number of alternatives such that each feature set's quality exceeds a threshold.
**Multiple knapsack.** Simultaneous search with sum-aggregation, \(\tau=1\), and univariate feature qualities is a special case of the Multiple Knapsack problem [16]. The latter involves knapsacks, i.e., sets with individual capacities, and elements with individual weights and profits. The goal is to assign elements to knapsacks such that the summed profit of selected elements is maximal. Each element can be assigned to at most one knapsack, and the weights of all elements in the knapsack must not violate its capacity. This problem is strongly \(\mathcal{NP}\)-complete in general, though it exhibits a PTAS [16]. However, our problem is a special case where the feature qualities act as profits, the feature-set sizes are capacities, and each feature has a weight of 1. These uniform weights enable the polynomial-runtime result stated in Proposition 11.
#### a.5.3 Proofs
In this section, we provide proofs for propositions from Section 3.4.2.
Proof of Proposition 9.: Let an arbitrary problem instance \(I\) of the complete-partitioning problem be given and the feature-set size \(k\) be fixed. We add one feature \(f^{\prime}\) to \(I\) and keep \(a\), \(k\), and \(\tau\) as before, obtaining an instance \(I^{\prime}\) of the incomplete-partitioning problem since one feature will not be selected. We choose the quality \(q^{\prime}\) of \(f^{\prime}\) to be lower than the quality of all other features in \(I\). Since the univariate objective with min-aggregation is monotonically increasing, selecting feature \(f^{\prime}\) in the solution of \(I^{\prime}\) does not have any benefit since \(f^{\prime}\) would replace a feature with higher quality. If \(f^{\prime}\) is not selected, then this solution of \(I^{\prime}\) also solves \(I\). However, if the qualities of the resulting alternatives are not equal, \(f^{\prime}\) might be chosen in a set that does not have the minimum quality of all sets since only the latter determines the overall objective value (cf. Example 2). In that case, we replace \(f^{\prime}\) with the remaining feature that was not selected instead; the objective value remains the same, and the solution becomes valid for \(I\). Thus, in any case, we can easily transform a solution for \(I^{\prime}\) to a solution for \(I\).
This argument shows that an algorithm for incomplete partitioning can solve arbitrary complete-partitioning problem instances with negligible computational overhead. Thus, a polynomial-time algorithm for incomplete partitioning could also solve complete partitioning polynomially. However, the latter problem type is \(\mathcal{NP}\)-complete (cf. Proposition 8), so incomplete partitioning has to be \(\mathcal{NP}\)-hard. Since checking a solution for incomplete partitioning needs only polynomial time, we obtain membership in \(\mathcal{NP}\) and thereby \(\mathcal{NP}\)-completeness.
**Proof of Proposition 10**
Proof.: Let an arbitrary problem instance \(I\) of the complete-partitioning problem be given and the feature-set size \(k\) be fixed. We create another instance \(I^{\prime}\) by adding a new feature \(f^{\prime}\) and increasing the feature-set size to \(k^{\prime}=k+1\). Further, we set \(\tau^{\prime}=(k^{\prime}-1)/k^{\prime}\), thereby allowing an overlap of at most one feature between feature sets. Also, we choose \(f^{\prime}\) to have a considerably higher quality \(q^{\prime}\) than all other features. The goal is to force the selection of \(f^{\prime}\) in all feature sets such that any other solution would be worse, no matter which other features are selected. One possible choice is \(q^{\prime}=\sum_{j=1}^{n}q_{j}+\varepsilon\), with \(\varepsilon\in\mathbb{R}_{>0}\) being a small positive number, or, if the qualities are integers, \(\varepsilon=1\). This quality \(q^{\prime}\) of \(f^{\prime}\) is higher than of any feature set not containing it. Thus, a solution for \(I^{\prime}\) contains \(f^{\prime}\) in each feature set while the remaining features are part of exactly one feature set. Hence, we remove \(f^{\prime}\) to get feature sets of size \(k=k^{\prime}-1\) that constitute an optimal solution for the original problem instance \(I\).
This transformation shows how an algorithm for instances with \(\tau<1\) can help solve arbitrary problem instances with \(\tau=1\). Given the \(\mathcal{NP}\)-completeness of the latter problem, we obtain \(\mathcal{NP}\)-hardness of the former.
Adding the proposed \(f^{\prime}\) with a high quality \(q^{\prime}\) enlarges the size of the problem instance. However, the transformation from \(I\) to \(I^{\prime}\) still runs in polynomial time and increases the input size by at most a fixed factor. In particular, encoding a problem instance involves \(n\) feature qualities and the values of \(a\), \(k\), and \(\tau\). Assuming the feature qualities in \(I\) have an average encoding size of \(c\in\mathbb{R}\), the overall quality encoding has the size \(c\cdot n\). As \(q^{\prime}\) roughly equals the sum of all feature qualities, its encoding size is upper-bounded by \(c\cdot n\) if we disregard \(\epsilon\). The change of \(k\) and \(\tau\) is negligible for the encoding size. In consequence, the input size of \(I^{\prime}\) is at most roughly double the size of \(I\). If we explicitly stored all the constraints instead of only the relevant parameters, we would obtain a similar result: Besides adding \(q^{\prime}\) to the objective, all constraints would accommodate one new feature, independent of its quality, increasing their encoding size from \(O(n)\) to \(O(n+1)\), i.e., less than double.
One can extend the reduction above from \(\tau^{\prime}=(k^{\prime}-1)/k^{\prime}\) to all other \(\tau>0\). In particular, for a fixed feature-set size \(k\), there is only a finite number of \(\tau\) values leading to different set overlaps, i.e., \(\tau\in\{1/k,\ldots,(k-1)/k\}\). The highest overlap except \(\tau=0\) requires creating an instance \(I^{\prime}\) with \(\tau^{\prime}=1/k\) from an instance with \(\tau=1\). For this purpose, \(k^{2}-k\) features need to be added since \(\tau^{\prime}=k/k^{\prime}=k/(k+k^{2}-k)=1/k\). I.e., \(k\) out of \(k^{\prime}=k^{2}\) features need to form a complete partitioning, while the remaining \(k^{2}-k\) features occur in each feature set and will be removed after solving \(I^{\prime}\). The maximum number of features to be added is polynomial in \(k\) and thereby also polynomial in \(n\).
**Proof of Proposition 11**
Proof.: For a complete partitioning, we must use each of the \(n\) features exactly once. How we distribute the features among sets does not change the objective value, which is the sum of all \(n\) qualities in any case. We only need to
ensure that each feature set satisfies cardinality constraints if the latter exist. Thus, 'searching' for alternatives amounts to iterating over the features once and assigning them to the feature sets. Hence, the time complexity is \(O(n)\).
For an incomplete partitioning, we use the monotonicity of the univariate objective with sum-aggregation (cf. Section A.4) and order the features decreasingly by their individual quality. Next, we pick features without replacement until we have the desired number of alternatives with the desired feature-set sizes. Again, assigning features to sets does not matter for the objective value. Due to the quality-based sorting, the time complexity is \(O(n\cdot\log n)\). If only a small fraction of features is used, one might slightly improve complexity by iteratively picking the maximum instead of sorting all qualities.
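A compact sketch of this procedure for the incomplete-partitioning case with sum-aggregation (illustrative Python, assuming univariate qualities):

```python
def top_quality_partitioning(q, k, a):
    """Sketch of the polynomial-time procedure from Proposition 11 (tau = 1,
    sum-aggregation): pick the (a+1)*k best features and split them into size-k sets."""
    assert (a + 1) * k <= len(q), "not enough features for disjoint alternatives"
    order = sorted(range(len(q)), key=lambda j: -q[j])    # O(n log n) quality sort
    return [set(order[i * k:(i + 1) * k]) for i in range(a + 1)]
```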
#### a.5.4 Future Work
In this section, we outline future work on alternative feature selection from the complexity-theory perspective, supplementing the Sections 3.4 and 7.2.
**Scenarios of alternative feature selection.** Our prior complexity analyses focused on special cases of alternative feature selection. E.g., while we obtained \(\mathcal{NP}\)-hardness for min-aggregation with feature-set overlap (cf. Proposition 10), an analysis of sum-aggregation with overlap is open, even for sequential search. Sum-aggregation admits polynomial runtime for \(\tau=1\) (cf. Proposition 11), but this result might not extend to \(\tau<1\). In particular, \(\tau<1\) increases the number of solution candidates, which could negatively affect the runtime.
Further, our complexity analyses mostly assumed univariate feature qualities. Other feature-selection methods can reside in different complexity classes.
**Complexity classes.** For analyzing other scenarios of alternative feature selection, several questions spring to mind. First, one could establish a complexity result like \(\mathcal{NP}\)-hardness or membership in \(\mathcal{P}\). In the former case, there might be pseudo-polynomial approaches or (F)PTAS. As a first step in that direction, we show membership in complexity class \(\mathcal{APX}\) under certain conditions (cf. Proposition 13), i.e., there are polynomial-time algorithms yielding constant-factor approximations. One might attempt to tighten the quality bounds we derived. Further, there might be efficient exact or approximate algorithms for certain types of problem instances, e.g., satisfying additional assumptions regarding feature-set quality or the parameters \(k\), \(a\), and \(\tau\). Finally, while we placed alternative feature selection in class \(\mathcal{XP}\) (cf. Proposition 5), one might prove membership or hardness for more specific parameterized complexity classes.
**Related problem formulations.** We only focused on the optimization problem of alternative feature selection until now. Another interesting question is how many alternatives exist for a given \(n\), \(k\), and \(\tau\), regardless of their quality. Also, given the number of alternatives as well, it would be interesting to have an exact or approximate estimate for the number of valid solutions for alternative feature selection, i.e., sets of feature sets. While both these estimates are
straightforward for \(\tau=1\), allowing arbitrary \(\tau\) poses a larger challenge. Finally, one could re-formulate alternative feature selection similar to Bin Covering (cf. Section A.5.2) and analyze this problem in detail.
### Heuristic Search for the Univariate Objective
In this section, we propose heuristic search methods for the univariate objective (cf. Equation 11 and Section A.3), complementing the exact, solver-based search methods that we evaluate in our experiments (cf. Section 6.1). The proposed heuristics may be faster than exact optimization at the expense of lower feature-set quality. In particular, we describe _Greedy Replacement Search_ (cf. Section A.6.1), _Greedy Balancing Search_ (cf. Section A.6.2), and _Greedy Depth Search_ (cf. Section A.6.3). The second search method is simultaneous, while the other two are sequential. All three heuristics leverage that the univariate objective sums up the individual qualities \(q_{j}\) of selected features and does not consider interactions between features.
#### a.6.1 Greedy Replacement Search
_Greedy Replacement Search_ is our first heuristic for alternative feature selection with the univariate objective. This heuristic conducts a sequential search.
**Algorithm.** Algorithm 2 outlines _Greedy Replacement Search_. We start by sorting the features decreasingly based on their qualities \(q_{j}\) (Line 1). For a fixed feature-set size \(k\), a dissimilarity threshold \(\tau\), and using the Dice dissimilarity (cf. Equation 3), one subset with \(\lfloor(1-\tau)\cdot k\rfloor\) features can be contained in all alternatives without violating the dissimilarity threshold (cf. Equation 8). Thus, our algorithm indeed selects the \(\lfloor(1-\tau)\cdot k\rfloor\) features with the highest quality in each alternative \(s^{(\cdot)}\) (Lines 2-7). We fill the remaining spots in the sets by iterating over the alternatives and the remaining features (Lines 8-15). For each alternative, we select the \(\lceil\tau\cdot k\rceil\) highest-quality features not used in any prior alternative, thereby satisfying the dissimilarity threshold. We continue this procedure until we reach the desired number of alternatives \(a\) or until there are not enough unused features to form further alternatives (Line 9).
**Example 3** (Algorithm of _Greedy Replacement Search_).: With \(n=10\) features, feature-set size \(k=5\), and \(\tau=0.4\), each feature set must differ by \(\lceil\tau\cdot k\rceil=2\) features from the other feature sets. The original feature set \(s^{(0)}\) consists of the top \(k=5\) features regarding quality \(q_{j}\). The first alternative \(s^{(1)}\) consists of the top \(\lfloor(1-\tau)\cdot k\rfloor=3\) features plus the sixth- and seventh-best feature. The second alternative \(s^{(2)}\) consists of the top three features plus the eighth- and ninth-best one. The algorithm has to stop at \(i=2\) since there are not enough unused features to form further alternatives in the same manner.
In general, the \(i\)-th alternative consists of the top \(\lfloor(1-\tau)\cdot k\rfloor\) features plus the features \(k+(i-1)\cdot\lceil\tau\cdot k\rceil+1\) to \(k+i\cdot\lceil\tau\cdot k\rceil\) in descending quality order.
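The following Python sketch mirrors this procedure; it is an illustration of Algorithm 2 as described above, not our actual implementation, and the small epsilon only guards against floating-point issues in the floor computation.

```python
import math

def greedy_replacement_search(q, k, a, tau):
    """Keep the floor((1-tau)*k) best features in every set and fill the remaining
    ceil(tau*k) positions with so-far unused features in descending quality order."""
    order = sorted(range(len(q)), key=lambda j: -q[j])   # feature indices by quality
    n_kept = math.floor((1 - tau) * k + 1e-9)            # shared top features (epsilon: float tolerance)
    n_new = k - n_kept                                   # fresh features per alternative, = ceil(tau*k)
    selected = [set(order[:k])]                          # original feature set: top k features
    position = k                                         # first position not used by any set yet
    for _ in range(a):
        if position + n_new > len(q):                    # not enough unused features left
            break
        selected.append(set(order[:n_kept]) | set(order[position:position + n_new]))
        position += n_new
    return selected

# Instantiation of Example 3 with illustrative qualities: ten features, k=5, tau=0.4.
# Yields the original set plus two alternatives, then runs out of unused features.
print(greedy_replacement_search([10, 9, 8, 7, 6, 5, 4, 3, 2, 1], k=5, a=3, tau=0.4))
```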
**Complexity.** Sorting the qualities of \(n\) features (Line 1) has a complexity of \(O(n\cdot\log n)\). Next, the algorithm iterates over the features and processes each feature at most once. In particular, after selecting a feature in an alternative, \(feature\_position\) increases by 1. The maximum value of this variable depends on \(a\) and \(k\) (Line 9) but cannot exceed the total number of features \(n\). For each \(feature\_position\), the algorithm accesses the arrays \(indices\) and \(s^{(i)}\) (Lines 11-14). Further, each alternative \(s^{(i)}\) gets initialized as the selection \(s\) of the top \(\lfloor(1-\tau)\cdot k\rfloor\) features (Line 10), which the algorithm only needs to determine once before the main loop (Lines 2-7). Each of these array operations runs in \(O(n)\) or faster. Combining the cost per \(feature\_position\) with the number of \(feature\_position\)s, the overall time complexity is \(O(n^{2})\), i.e., polynomial in \(n\).
**Quality.** While not optimizing exactly, _Greedy Replacement Search_ still offers an approximation guarantee relative to exact search methods:
**Proposition 12** (Approximation quality of _Greedy Replacement Search_).: _Assume non-negative univariate feature qualities of \(n\) features, \(a\in\mathbb{N}_{0}\) alternatives, a dissimilarity threshold \(\tau\), desired feature-set size \(k\), and \(k+a\cdot\lceil\tau\cdot k\rceil\leq n\). Under these conditions, Greedy Replacement Search reaches at least a fraction
of \(\frac{\lfloor(1-\tau)\cdot k\rfloor}{k}\) of the optimal objective values of the optimization problems for (1) sequential search, (2) simultaneous search with sum-aggregation, and (3) simultaneous search with min-aggregation._
Proof.: In the univariate objective, the quality of a feature set is the sum of the qualities of the contained features. _Greedy Replacement Search_ includes the \(\lfloor(1-\tau)\cdot k\rfloor\) highest-quality features in each alternative of size \(k\), while the remaining \(\lceil\tau\cdot k\rceil\) features may have an arbitrary quality. In comparison, the single, i.e., unconstrained, optimal feature set of size \(k\) contains the top \(k\) features, which are the union of the top \(\lfloor(1-\tau)\cdot k\rfloor\) features and the next-best \(\lceil\tau\cdot k\rceil\) features. Due to quality sorting, each of the next-best \(\lceil\tau\cdot k\rceil\) features has at most the quality of each of the top \(\lfloor(1-\tau)\cdot k\rfloor\) features. Hence, assuming non-negative qualities, each alternative yielded by _Greedy Replacement Search_ has at least a quality of \(\lfloor(1-\tau)\cdot k\rfloor/k\) relative to the single optimal feature set of size \(k\). Next, the single optimal feature set of size \(k\) upper-bounds the quality of any individual feature set of size \(k\) found by any search method. Thus, the bound also applies to the minimum and sum of qualities over feature sets.
In particular, _Greedy Replacement Search_ yields a constant-factor approximation for the three optimization problems (cf. Equation 9 and 10) mentioned in Proposition 12. The condition \(k+a\cdot\lceil\tau\cdot k\rceil\leq n\) describes scenarios where _Greedy Replacement Search_ can yield all desired alternatives, i.e., does not run out of unused features. As the heuristic has polynomial runtime, alternative feature selection lies in the complexity class \(\mathcal{APX}\)[48] under the specified conditions:
**Proposition 13** (Approximation complexity of alternative feature selection).: _Assume non-negative univariate feature qualities of \(n\) features, \(a\in\mathbb{N}_{0}\) alternatives, a dissimilarity threshold \(\tau\), desired feature-set size \(k\), and \(k+a\cdot\lceil\tau\cdot k\rceil\leq n\). Under these conditions, the optimization problems of alternative feature selection with (1) sequential search, (2) simultaneous search with sum-aggregation, and (3) simultaneous search with min-aggregation reside in the complexity class \(\mathcal{APX}\)._
For \(\tau=1\), _Greedy Replacement Search_ even yields the same objective values as sequential search and simultaneous search with sum-aggregation since it becomes identical to a procedure we outlined in our complexity analysis earlier (cf. Proposition 11). In contrast, the following example shows that the heuristic can be worse than exact sequential search for as few as \(a=2\) alternatives:
**Example 4** (Quality of _Greedy Replacement Search_ vs. exact search).: Consider \(n=6\) features with univariate feature qualities \(q=(9,8,7,3,2,1)\), feature-set size \(k=2\), number of alternatives \(a=2\), and dissimilarity threshold \(\tau=0.5\), which permits an overlap of one feature between sets here. Sequential search and simultaneous search, for min- and sum-aggregation, yield the selection \(s^{(0)}=(1,1,0,0,0,0)\), \(s^{(1)}=(1,0,1,0,0,0)\), and \(s^{(2)}=(0,1,1,0,0,0)\), with a summed quality of \(17+16+15=48\). _Greedy Replacement Search_ yields the selection \(s^{(0)}=(1,1,0,0,0,0)\), \(s^{(1)}=(1,0,1,0,0,0)\), and \(s^{(2)}=(1,0,0,1,0,0)\), with a summed quality of \(17+16+12=45\).
While the first two feature sets are identical between exact and heuristic search, the quality of \(s^{(2)}\) is lower for the heuristic (12 vs. 15). In particular, by always selecting the top \(\lfloor(1-\tau)\cdot k\rfloor\) features, the heuristic misses out on feature sets only involving the next-best features.
For min-aggregation in the objective, \(a=1\) alternative already suffices such that the heuristic may be worse than exact search:
**Example 5** (Quality of _Greedy Replacement Search_ vs. min-aggregation).: Consider \(n=6\) features with univariate feature qualities \(q=(9,8,7,3,2,1)\), feature-set size \(k=3\), number of alternatives \(a=1\), and dissimilarity threshold \(\tau=0.5\), which permits an overlap of one feature between sets here. Simultaneous search with min-aggregation yields the selection \(s^{(0)}=(1,1,0,0,1,0)\) and \(s^{(1)}=(1,0,1,1,0,0)\), with a quality of \(\min\{19,19\}=19\). _Greedy Replacement Search_ and sequential search yield the selection \(s^{(0)}=(1,1,1,0,0,0)\) and \(s^{(1)}=(1,0,0,1,1,0)\), with a quality of \(\min\{24,14\}=14\). Simultaneous search with sum-aggregation may yield either of these two solutions or the selection \(s^{(0)}=(1,1,0,1,0,0)\) and \(s^{(1)}=(1,0,1,0,1,0)\) with the same summed quality.
In particular, _Greedy Replacement Search_ does not balance feature-set qualities since it is a sequential search method. We alleviate this issue with the heuristic _Greedy Balancing Search_ (cf. Section A.6.2).
**Limitations.** Proposition 12 and Examples 4 and 5 already showed the potential quality loss of the heuristic compared to an exact search for alternatives. Further, _Greedy Replacement Search_ only works as long as some features have not been part of any feature set yet, i.e., \(k+a\cdot\lceil\tau\cdot k\rceil\leq n\). Once the heuristic runs out of unused features, one would need to switch the search method. Thus, to obtain a high number of alternatives \(a\), the following conditions are beneficial for the heuristic: The number of features \(n\) should be high, the feature-set size \(k\) should be low, and the dissimilarity threshold \(\tau\) should be low. These conditions align well with typical feature-selection scenarios where \(k\ll n\).
Another drawback is that _Greedy Replacement Search_ assumes a very simple structure of the optimization problem. If the objective function becomes more complex than a sum of univariate qualities, quality-based feature ordering may be impossible or suboptimal. Further, _Greedy Replacement Search_ cannot accommodate additional constraints on feature sets, e.g., based on domain knowledge. Finally, the heuristic assumes the same size \(k\) for all feature sets.
Given the limitations of _Greedy Replacement Search_ and the low optimization time for exact sequential search with the univariate objective (cf. Table 5), we do not evaluate this heuristic in our experiments in Section 6.
#### a.6.2 Greedy Balancing Search
_Greedy Balancing Search_ modifies _Greedy Replacement Search_ to obtain more balanced feature-set qualities with a simultaneous search procedure.
```
Input:  Univariate feature qualities q_j with j ∈ {1, ..., n}, feature-set size k,
        number of alternatives a, dissimilarity threshold τ
Output: List of feature-selection decision vectors s^(0), ..., s^(a)

 1  if ⌈τ·k⌉·a + k > n then
 2      return ∅
 3  indices ← sort_indices(q, order=descending)        // Order by qualities
 4  for i ← 0 to a do                                  // Initial selection for all alternatives
 5      s^(i) ← {0}^n
 6  feature_position ← 1                               // Index of index of current feature
 7  while feature_position ≤ ⌊(1-τ)·k⌋ do              // Select top features
 8      j ← indices[feature_position]                  // Index feature by quality
 9      for i ← 0 to a do                              // Same features in all alternatives
10          s_j^(i) ← 1
11      feature_position ← feature_position + 1
12  for i ← 0 to a do
13      Q^(i) ← 0                                      // Relative quality of each alternative
14  while feature_position ≤ ⌈τ·k⌉·a + k do            // Fill all positions
15      Q_min ← ∞                                      // Find alternative with lowest quality
16      i_min ← -1
17      for i ← 0 to a do
18          if Q^(i) < Q_min and Σ_{j=1..n} s_j^(i) < k then  // Check cardinality
19              Q_min ← Q^(i)
20              i_min ← i
21      j ← indices[feature_position]                  // Index feature by quality
22      s_j^(i_min) ← 1                                // Add to lowest-quality, non-full alternative
23      Q^(i_min) ← Q^(i_min) + q_j                    // Update quality of that alternative
24      feature_position ← feature_position + 1
25  return s^(0), ..., s^(a)
```
**Algorithm 3** Greedy Balancing Search for alternative feature sets.
**Algorithm.** Algorithm 3 outlines _Greedy Balancing Search_. First, we check whether the algorithm should terminate early, i.e., whether the number of features \(n\) is not high enough to satisfy the desired user parameters \(k\), \(a\), and \(\tau\) (Line 1). Next, we select the first \(\lfloor(1-\tau)\cdot k\rfloor\) features in each alternative like in _Greedy Replacement Search_ (cf. Algorithm 2), i.e., we pick the features with the highest quality \(q_{j}\) (Lines 3-11).
For the remaining spots in the alternatives, we use a Longest Processing Time (LPT) heuristic (Lines 12-24). Such heuristics are common for Multiprocessor Scheduling and Balanced Number Partitioning problems [4, 18, 58] (cf. Section A.5.2). In particular, we continue iterating over features by decreasing quality. We assign each feature to the alternative that currently has the lowest summed quality \(Q^{(i)}\) and whose size \(k\) has not been reached yet. We continue this procedure until all alternatives have reached size \(k\) (Line 14).
**Example 6** (Algorithm of _Greedy Balancing Search_).: Consider \(n=6\) features with univariate feature qualities \(q=(9,8,7,3,2,1)\), feature-set size \(k=4\), number of alternatives \(a=1\), and dissimilarity threshold \(\tau=0.5\), which permits an overlap of two features between sets here. The features with qualities \(9\) and \(8\) become part of both feature sets, \(s^{(0)}\) and \(s^{(1)}\) (Lines 3-11). At this point, both alternatives have the same relative quality \(Q^{(0)}=Q^{(1)}=0\), i.e., \(Q^{(i)}\) in the algorithm ignores the quality of the shared features. Now the LPT heuristic becomes active (Lines 12-24). The feature with quality \(7\) is added to \(s^{(0)}\), which causes \(Q^{(0)}>Q^{(1)}\) (i.e., \(7>0\)). Thus, the feature with quality \(3\) is added to \(s^{(1)}\). As \(Q^{(0)}>Q^{(1)}\) (i.e., \(7>3\)) still holds, the feature with quality \(2\) becomes part of \(s^{(1)}\) as well. Because \(s^{(1)}\) has reached size \(k=4\), the feature with quality \(1\) is added to \(s^{(0)}\), even if the latter still has a higher quality (i.e., \(7>5\)). Now both alternatives have reached their desired size and \(n=6=\lceil 0.5\cdot 4\rceil\cdot 1+4=\lceil\tau\cdot k\rceil\cdot a+k\) (Line 14). Thus, the algorithm terminates. The solution consists of \(s^{(0)}=(1,1,1,0,0,1)\) and \(s^{(1)}=(1,1,0,1,1,0)\).
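For illustration, the following Python sketch implements the same sharing and LPT-style filling as Algorithm 3 (it is not our actual implementation; the epsilon again guards against floating-point issues). Applied to Example 6, it returns the two feature sets shown above.

```python
import math

def greedy_balancing_search(q, k, a, tau):
    """Share the floor((1-tau)*k) best features across all sets and distribute the
    remaining features with an LPT-style assignment to balance set qualities."""
    n_shared = math.floor((1 - tau) * k + 1e-9)          # epsilon: floating-point tolerance
    n_fill = k - n_shared                                # features to fill per set, = ceil(tau*k)
    if n_fill * a + k > len(q):                          # early termination (Line 1 of Algorithm 3)
        return []
    order = sorted(range(len(q)), key=lambda j: -q[j])
    sets = [set(order[:n_shared]) for _ in range(a + 1)]
    fill_quality = [0.0] * (a + 1)                       # quality of the non-shared part per set
    for j in order[n_shared:n_shared + (a + 1) * n_fill]:
        open_sets = [i for i in range(a + 1) if len(sets[i]) < k]
        i_min = min(open_sets, key=lambda i: fill_quality[i])  # lowest-quality, non-full set
        sets[i_min].add(j)
        fill_quality[i_min] += q[j]
    return sets

# Example 6: q=(9,8,7,3,2,1), k=4, a=1, tau=0.5 -> {0,1,2,5} and {0,1,3,4} (0-indexed).
print(greedy_balancing_search([9, 8, 7, 3, 2, 1], k=4, a=1, tau=0.5))
```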
**Complexity.** Like _Greedy Replacement Search_, _Greedy Balancing Search_ has an upfront cost of \(O(n\cdot\log n)\) for sorting feature qualities (Line 3) and then iterates over \(O(n)\) values of \(feature\_position\). For each \(feature\_position\), the algorithm iterates over \(a\) alternatives and conducts a fixed number of array operations in \(O(n)\). Thus, the overall complexity of _Greedy Balancing Search_ is \(O(a\cdot n^{2})\).
**Quality.** _Greedy Balancing Search_ selects the same features as _Greedy Replacement Search_ and only changes their assignment to the feature sets. Thus, the summed feature-set quality remains the same, while the minimum feature-set quality may be higher due to balancing. Hence, the quality guarantee of _Greedy Replacement Search_ (cf. Proposition 12) holds here as well:
**Proposition 14** (Approximation quality of _Greedy Balancing Search_).: _Assume non-negative univariate feature qualities of \(n\) features, \(a\in\mathbb{N}_{0}\) alternatives, a dissimilarity threshold \(\tau\), desired feature-set size \(k\), and \(k+a\cdot\lceil\tau\cdot k\rceil\leq n\). Under these conditions, Greedy Balancing Search reaches at least a fraction of
\(\frac{\lfloor(1-\tau)\cdot k\rfloor}{k}\) of the optimal objective values of the optimization problems for (1) sequential search, (2) simultaneous search with sum-aggregation, and (3) simultaneous search with min-aggregation._
For the objective with min-aggregation, _Greedy Balancing Search_ can even be better than exact sequential search, as Example 5 shows, where the heuristic would yield the same solution as simultaneous search with min-aggregation. However, the heuristic can also be worse than sequential and simultaneous search, as Example 4 shows, where _Greedy Balancing Search_ would yield the same solution as _Greedy Replacement Search_.
**Limitations.** _Greedy Balancing Search_ shares several limitations with _Greedy Replacement Search_, e.g., it may be worse than exact search, assumes univariate feature qualities, and does not work if the number of features \(n\) is too low relative to \(k\), \(a\), and \(\tau\). In the latter case, _Greedy Balancing Search_ yields no solution due to its simultaneous nature, while _Greedy Replacement Search_ yields at least some alternatives. However, if running out of features is not an issue, _Greedy Balancing Search_ has the advantage of more balanced feature-set qualities.
#### a.6.3 Greedy Depth Search
_Greedy Depth Search_ is a sequential search heuristic that generalizes _Greedy Replacement Search_ and makes it possible to obtain more than \(\frac{n-k}{\lceil\tau\cdot k\rceil}\) alternatives.
**Algorithm.** Algorithm 4 outlines _Greedy Depth Search_. As in the other two heuristics, we start by sorting the features decreasingly according to their qualities \(q_{j}\) (Line 1). However, instead of keeping the same \(\lfloor(1-\tau)\cdot k\rfloor\) features in each alternative and only replacing the remaining ones, we now allow all features to be replaced. In particular, we may exhaustively iterate over all feature sets, depending on the number of alternatives \(a\). Thus, we maintain not only one feature position as before but a length-\(k\) array of the feature positions for the current feature set (Lines 2-4). This array represents feature indices regarding the sorted qualities and is sorted increasingly, which prevents evaluating the same feature set, only with different feature order, multiple times.
In the main loop of the algorithm, we find alternatives sequentially (Lines 7-24). For each potential alternative, we select the features based on the position array (Lines 8-11). We check the resulting feature set against the constraints for alternatives (Line 12) and only store it if it is valid. This check was unnecessary in the other two heuristics, which only formed valid alternatives by design.
Next, we update the feature positions for the next potential alternative (Lines 14-24). First, we try to replace the lowest-quality feature in the current feature set by advancing one position in the sorted qualities. This step may not be possible, as the feature set may already contain the feature with the overall lowest quality, i.e., position \(n\) in the array of sorted qualities (Line 17). In this case, we try to replace the second-lowest-quality feature in the current feature set by advancing its position. If this action is impossible as well, we iterate
```
Input:  Univariate feature qualities q_j with j ∈ {1, ..., n}, feature-set size k,
        number of alternatives a, dissimilarity threshold τ
Output: List of feature-selection decision vectors s^(·)

 1  indices ← sort_indices(q, order=descending)        // Order by qualities
 2  feature_positions ← {0}^k                          // Indices of indices of features
 3  for p ← 1 to k do                                  // Start with top k features
 4      feature_positions[p] ← p                       // Ordered by qualities as well
 5  i ← 0                                              // Number of current alternative
 6  has_next_solution ← true
 7  while i ≤ a and has_next_solution do
 8      s^(i) ← {0}^n
 9      for p ← 1 to k do                              // Select k features, indexed by quality
10          j ← indices[feature_positions[p]]
11          s_j^(i) ← 1
12      if is_valid_alternative(s^(i), {s^(0), ..., s^(i-1)}) then
13          i ← i + 1                                  // Else, s^(i) overwritten in next iteration
14      p ← k                                          // Update feature positions, starting with last
15      while p ≥ 1 do
16          position ← feature_positions[p]
17          if position < n + p - k then               // Position can be increased
18              for Δ_p ← 0 to k - p do                // Also update later positions
19                  feature_positions[p + Δ_p] ← position + Δ_p + 1
20              p ← -1                                 // Position update finished
21          else                                       // Position cannot be increased
22              p ← p - 1                              // Also update at least one prior position
23              if p = 0 then                          // Updating positions further would violate n
24                  has_next_solution ← false
25
26  return s^(0), ..., s^(i)
```
**Algorithm 4** Greedy Depth Search for alternative feature sets.
further over positions in the current feature set by increasing quality (Line 22). Once we find a feature position that we can increase, we also advance all subsequent, i.e., lower-quality, positions accordingly. Hence, the feature positions remain sorted by decreasing quality (Lines 18-19).
We repeat the main loop until we reach the desired number of alternatives \(a\) or until we cannot update any feature position without exceeding the number of features \(n\), i.e., we cannot form another alternative (Lines 7 and 23).
**Example 7** (Algorithm of _Greedy Depth Search_).: Consider \(n=6\) features with univariate feature qualities \(q=(9,8,7,3,2,1)\), feature-set size \(k=4\), number of alternatives \(a=2\), and dissimilarity threshold \(\tau=0.5\), which permits an overlap of two features between sets here. Note that the features are already ordered by quality here, i.e., \(indices=(1,2,3,4,5,6)\) (Line 1). Next, the algorithm initializes \(feature\_positions=(1,2,3,4)\) (Lines 2-4). \(s^{(0)}\) contains these \(k\) features, i.e., \(s^{(0)}=(1,1,1,1,0,0)\). Given that there are no other alternatives yet, this feature set is valid (Line 12) and the algorithm moves on to \(i=1\).
For forming \(s^{(1)}\), the position-update step (Lines 14-24) first tries to only replace the lowest-quality feature in the alternative, i.e., \(feature\_positions=(1,2,3,5)\) and \(feature\_positions=(1,2,3,6)\). However, neither of these feature sets constitutes a valid alternative regarding \(s^{(0)}\). Thus, the algorithm attempts to replace the feature with the second-lowest quality as well, evaluating \(feature\_positions=(1,2,4,5)\) and \(feature\_positions=(1,2,4,6)\). However, the overlap with \(s^{(0)}\) is still too large. The next value is \(feature\_positions=(1,2,5,6)\), which yields the valid alternative \(s^{(1)}=(1,1,0,0,1,1)\).
_Greedy Replacement Search_ would terminate now since the options for replacing the \(\lceil\tau\cdot k\rceil=2\) lowest-quality features are exhausted. In contrast, _Greedy Depth Search_ attempts to replace the third-lowest-quality feature, starting with \(feature\_positions=(1,3,4,5)\). This feature set is not a valid alternative, and neither are the subsequent sets with \(feature\_positions=(1,3,4,6)\), \(feature\_positions=(1,3,5,6)\), etc. After more iterations, the algorithm also replaces the highest-quality feature, starting with \(feature\_positions=(2,3,4,5)\). Eventually, the algorithm reaches \(feature\_positions=(3,4,5,6)\), which yields the valid alternative \(s^{(2)}=(0,0,1,1,1,1)\). After obtaining \(s^{(2)}\), there is no valid update of the feature positions left (Line 23). Thus, the algorithm terminates.
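The following Python sketch emulates Algorithm 4 by enumerating size-\(k\) combinations of quality-sorted feature positions in lexicographic order, which matches the position updates described above; it is an illustrative sketch rather than our actual implementation and uses 0-indexed positions. Applied to Example 7, it yields the three feature sets discussed.

```python
import math
from itertools import combinations

def greedy_depth_search(q, k, a, tau):
    """Enumerate size-k sets of quality-sorted feature positions in lexicographic
    order (matching Algorithm 4's position updates) and keep the first valid sets."""
    order = sorted(range(len(q)), key=lambda j: -q[j])      # feature indices by quality
    max_overlap = math.floor((1 - tau) * k + 1e-9)          # allowed overlap between sets
    selected = []
    for positions in combinations(range(len(q)), k):        # lexicographic position order
        candidate = {order[p] for p in positions}
        if all(len(candidate & previous) <= max_overlap for previous in selected):
            selected.append(candidate)
            if len(selected) == a + 1:                      # desired number of sets reached
                break
    return selected

# Example 7 (q=(9,8,7,3,2,1), k=4, a=2, tau=0.5) yields the feature sets
# {0,1,2,3}, {0,1,4,5}, and {2,3,4,5} (0-indexed), i.e., s^(0), s^(1), and s^(2) above.
print(greedy_depth_search([9, 8, 7, 3, 2, 1], k=4, a=2, tau=0.5))
```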
**Complexity.** The runtime behavior differs from the other two heuristics. In particular, _Greedy Replacement Search_ has the same runtime cost between subsequent alternatives since it directly creates valid alternatives by design. In contrast, _Greedy Depth Search_ iterates over all possible feature sets, and the runtime between valid alternatives may vary. For each value of \(feature\_positions\), the algorithm creates a feature selection in \(O(k\cdot n)\) (Lines 8-11), checks constraints in \(O(a\cdot n)\) (Line 12), and updates the position array in \(O(k^{2})\) (Lines 14-24). However, there are \(O(n^{k})\) potential \(feature\_positions\), and _Greedy Depth Search_ may exhaustively iterate over them. This cost is comparable to exhaustive conventional feature selection (cf. Proposition 2) and exhaustive sequential search
(cf. Proposition 3). Unlike the latter, the search does not restart for each alternative, i.e., it only considers each feature set once instead of \(a+1\) times.
On the positive side, _Greedy Depth Search_ can yield more alternatives than _Greedy Replacement Search_ with its \(O(n^{2})\) cost or _Greedy Balancing Search_ with its \(O(a\cdot n^{2})\) cost. Nevertheless, in scenarios where the latter two are applicable, i.e., \(k+a\cdot\lceil\tau\cdot k\rceil\leq n\), they have a lower cost than _Greedy Depth Search_. In particular, _Greedy Depth Search_ needs \(O(n^{\lceil\tau\cdot k\rceil})\) iterations to cover the options for replacing the worst \(\lceil\tau\cdot k\rceil\) features in size-\(k\) feature sets, which is the search space of the other two heuristics. Thus, the cost disadvantage relative to the other two heuristics grows with the dissimilarity threshold \(\tau\). As a remedy, one may use _Greedy Replacement Search_ for as many alternatives as possible and then continue with _Greedy Depth Search_, initializing the \(feature\_positions\) (Lines 2-4) based on the results of the former heuristic.
**Quality.** _Greedy Depth Search_ initially yields the same solutions as _Greedy Replacement Search_. Thus, _Greedy Depth Search_ also yields a constant-factor approximation of the optimal solution in case \(k+a\cdot\lceil\tau\cdot k\rceil\leq n\) (cf. Proposition 12). The quality analysis becomes more involved for further alternatives since these do not contain all top \(\lfloor(1-\tau)\cdot k\rfloor\) features anymore, on which our proof of Proposition 12 builds. Thus, we leave this analysis open for future work. The quality of alternatives may not even be monotonically decreasing anymore, as the following example shows:
**Example 8** (Non-monotonic quality of _Greedy Depth Search_).: Consider \(n=4\) features with univariate feature qualities \(q=(9,8,7,1)\), feature-set size \(k=2\), number of alternatives \(a=3\), and dissimilarity threshold \(\tau=0.5\), which permits an overlap of one feature between sets here. _Greedy Depth Search_ yields the selection \(s^{(0)}=(1,1,0,0)\), \(s^{(1)}=(1,0,1,0)\), \(s^{(2)}=(1,0,0,1)\), and \(s^{(3)}=(0,1,1,0)\), with the corresponding feature-set qualities 17, 16, 10, and 15.
**Limitations.** Like _Greedy Balancing Search_ and _Greedy Replacement Search_, _Greedy Depth Search_ assumes univariate feature qualities and may be worse than exact search. As a sequential procedure, it does not balance the alternatives' qualities. It may yield more alternatives than the former two heuristics but has a higher and more variable runtime.
### Evaluation
In this section, we evaluate experimental results not covered in Section 6. In particular, we cover three experimental dimensions not stemming from the search for alternatives: datasets (cf. Section A.7.1), feature-set-quality metrics (cf. Section A.7.2), and feature-selection methods (cf. Section A.7.3).
#### a.7.1 Datasets
Naturally, feature-set quality depends on the dataset, and several effects could occur. For example, the distribution of feature-set quality in a dataset may be
relatively uniform or relatively skewed. Further, datasets with more features \(n\) give rise to more alternative feature sets. At the same time, the feature quality can be spread over more features than for lower-dimensional datasets, making it harder to compose a small high-quality feature set. Indeed, our experiments show a broad variation of feature-set quality over the datasets. Figure 8 depicts the relationship between datasets and the quality of the original, i.e., unconstrained, feature set in sequential search. To account for the varying dataset dimensionality, we put the ratio between feature-set size \(k\) and dimensionality \(n\) on the x-axis, which measures relative feature-set sizes. As Figure 8(a) displays, the objective of the univariate feature-selection method _MI_ approximately increases linearly with \(k/n\). However, there still is variation exclusively caused by the dataset rather than its dimensionality. Further, the quality of a prediction model, i.e., decision trees, does not exhibit any trend but varies strongly between datasets, as Figure 8(b) visualizes. This variation justifies our normalization of feature-set quality when analyzing alternatives in Sections 6.2 and 6.3.
#### a.7.2 Feature-Set Quality Metrics
**Prediction models and overfitting.** As one can expect, random forests have a higher average prediction performance than decision trees. Further, both model types exhibit overfitting, i.e., there is a gap between training-set and test-set performance. In particular, over all experimental settings, both model types have a mean training-set MCC around 0.85-0.86 (median: 1.0). In contrast, decision trees have a mean MCC of 0.47 (median: 0.53) on the test set, while random forests have a slightly higher mean MCC of 0.52 (median: 0.61). I.e., prediction performance is significantly worse on the test set than on the training set. The existence of overfitting makes sense as we do not regularize, i.e., limit the growth of the trees or prune them after training.
Figure 8: Feature-set quality in datasets over feature-set size \(k\) relative to dimensionality \(n\), by feature-set size \(k\) and evaluation metric. Results from the original feature sets of sequential search with _MI_ as feature-selection method.

As another comparison, Figure 9(a) shows the distribution of the difference between training and test feature-set quality, again over all experimental settings. Once more, we observe that training feature-set quality is usually higher, i.e., the difference shown in the figure is greater than zero. However, this phenomenon does not invalidate our analysis of how feature-set quality develops over alternatives. The optimization objective \(Q\), which Figure 9(a) also depicts, shows overfitting for all feature-selection methods as well, though to a lesser extent than prediction performance. Thus, Section 6 considers the training and test set for the objective value, but only the test set for prediction performance.
**Correlation between evaluation metrics.** Figure 9(b) shows the Spearman correlation between different evaluation metrics over all experimental settings: First, we compute the correlation between metrics for each combination of dataset, cross-validation fold, and feature-selection method. Second, we average the correlation values over these three experimental dimensions. This two-step procedure accounts for the different objectives of feature-selection methods and the normalization of quality per dataset and cross-validation fold in some objectives (cf. Section 5.3.2). The plot shows that the performance of decision trees and random forests is highly correlated. Thus, we only report the MCC of decision trees in Section 6, which are the simpler model type and always consider all features during training rather than randomly sampling them.
Figure 9: Feature-set quality by evaluation metric. Results from all search runs.

Figure 9(b) also shows that the correlation between training and test feature-set quality is only moderate for the optimization objective \(Q\) and weak for prediction performance in terms of MCC. This result might be caused by overfitting, whose strength may depend on the experimental settings. Further, the correlation between the optimization objective \(Q\) and prediction MCC is only weak to moderate as well. I.e., the objective of feature selection is only partially indicative of prediction performance since the former might use a simplified quality criterion. Among the five feature-selection methods, _Greedy Wrapper_ has the highest correlation between training-set objective value and test-set MCC, with a value of 0.48. Since this feature-selection method uses prediction performance in its objective, a comparatively high correlation is expected. On the other end of the spectrum, _mRMR_ exhibits a correlation of -0.05 between training-set objective value and test-set MCC. This filter method penalizes the correlation between features in its objective. However, redundant features may not hurt prediction performance in decision trees, even if they do not improve it.
#### a.7.3 Feature-Selection Methods
**Prediction performance.** As the five feature-selection methods employ different objective functions \(Q\), comparing absolute objective values between them does not make sense. However, we can analyze the prediction performance of the obtained feature sets. Figure 10(a) compares a decision tree's test-set MCC on the original feature sets of sequential search between feature-selection methods. On average, _Model Gain_ is the best feature-selection method: The mean test-set MCC of decision trees is 0.53 for _Model Gain_, 0.49 for _Greedy Wrapper_, 0.47 for _MI_, 0.46 for _mRMR_, and 0.43 for _FCBF_. In particular, the univariate, model-free method _MI_ keeps up surprisingly well with more sophisticated methods. Thus, the analyses of alternative feature sets in Section 6 focus on _MI_ while still discussing the remaining feature-selection methods. The overall best feature-selection method, _Model Gain_, uses the same objective function as _MI_ but obtains its feature qualities from a prediction model rather than a bivariate dependency measure, which might boost its performance.

Figure 10: Feature-set quality by feature-selection method and feature-set size \(k\). Results from the original feature sets of sequential search.
While _Greedy Wrapper_ uses actual prediction performance to assess feature-set quality, its heuristic nature might prevent better results: This method only evaluates a fraction of all feature sets, while the other feature-selection methods optimize globally. In particular, _Greedy Wrapper_ performed 629 iterations on average (median: 561) to determine the original feature sets of sequential search. However, the number of possible feature sets is much higher, e.g., already \(2^{15}=32768\) for the lowest-dimensional datasets in our evaluation (cf. Table 2).
_FCBF_'s results should be taken with a grain of salt: Over all experimental settings, 89% of feature sets for _FCBF_ were infeasible, i.e., no solution satisfied the constraints. In contrast, this figure is only 18% for _MI_. Even the original feature set in sequential search is infeasible in 71% of the cases for _FCBF_ but never for the other feature-selection methods. In particular, the combination of feature-correlation constraints in our formulation of _FCBF_ (cf. Equation 12) with a feature-set-cardinality constraint, i.e., enforcing a feature-set size \(k\), may make the problem infeasible, especially if \(k\) gets larger.
**Influence of feature-set size \(k\)** As expected, larger feature sets usually exhibit a higher feature-set quality than smaller feature sets in our experiments. However, the increase in quality with \(k\) is not proportional, and there might even be a decrease. As Figure 9(b) shows for the original feature sets of sequential search, _MI_ and _Model Gain_ exhibit an increase of the training-set objective value \(Q_{\text{train}}\) from \(k=5\) to \(k=10\), i.e., the difference depicted in Figure 9(b) is positive. As these objectives are monotonic in the set of selected features, a decrease in the training-set objective value is impossible. In contrast, the heuristic _Greedy Wrapper_ does not necessarily benefit from more features. The latter insight also applies to _mRMR_, which normalizes its objective with the number of selected features and penalizes feature redundancy. For _FCBF_, the fraction of feasible feature sets changes considerably from \(k=5\) to \(k=10\), so one cannot directly compare the overall quality between these two settings. As Figure 9(b) also displays, the benefit of larger feature sets is even less clear for prediction performance. In particular, all feature-selection methods except _FCBF_ show a median difference in test-set MCC close to zero when comparing \(k=5\) to \(k=10\). Thus, Section 6 focuses on smaller feature sets, i.e., \(k=5\).
|
2305.10360 | Neuroimaging Meta Regression for Coordinate Based Meta Analysis Data
with a Spatial Model | Coordinate-based meta-analysis combines evidence from a collection of
Neuroimaging studies to estimate brain activation. In such analyses, a key
practical challenge is to find a computationally efficient approach with good
statistical interpretability to model the locations of activation foci. In this
article, we propose a generative coordinate-based meta-regression (CBMR)
framework to approximate smooth activation intensity function and investigate
the effect of study-level covariates (e.g., year of publication, sample size).
We employ spline parameterization to model spatial structure of brain
activation and consider four stochastic models for modelling the random
variation in foci. To examine the validity of CBMR, we estimate brain
activation on $20$ meta-analytic datasets, conduct spatial homogeneity tests at
voxel level, and compare to results generated by existing kernel-based
approaches. | Yifan Yu, Rosario Pintos Lobo, Michael Cody Riedel, Katherine Bottenhorn, Angela R. Laird, Thomas E. Nichols | 2023-05-15T21:16:25Z | http://arxiv.org/abs/2305.10360v1 | # Neuroimaging Meta Regression for Coordinate Based Meta Analysis Data with a Spatial Model
###### Abstract
Coordinate-based meta-analysis combines evidence from a collection of Neuroimaging studies to estimate brain activation. In such analyses, a key practical challenge is to find a computationally efficient approach with good statistical interpretability to model the locations of activation foci. In this article, we propose a generative coordinate-based meta-regression (CBMR) framework to approximate smooth activation intensity function and investigate the effect of study-level covariates (e.g., year of publication, sample size). We employ spline parameterization to model spatial structure of brain activation and consider four stochastic models for modelling the random variation in foci. To examine the validity of CBMR, we estimate brain activation on \(20\) meta-analytic datasets, conduct spatial homogeneity tests at voxel level, and compare to results generated by existing kernel-based approaches.
## 1 Introduction
Functional neuroimaging includes a number of techniques to image brain activity, including Positron emission tomography (PET) and functional magnetic resonance imaging (fMRI). Starting three decades ago, PET studies were used to compare brain activity between rest and experimental conditions, producing maps of "activation", images of statistics measuring the strength of the experimental effect. Especially in the last two decades, the literature of fMRI activations has grown rapidly, which motivates a need to integrate findings and establish consistency and heterogeneity across independent but related studies. However, in both PET and fMRI studies, the validity is challenged by common drawbacks: small sample size; a high prevalence of false positives (approximately \(10-20\%\) of reported foci in publications are false positives; Wager et al., 2007); and significant heterogeneity among studies and unreliable inference due to their diversity in measurements and types of analysis (Samartsidis et al., 2017b). Meta-analysis is an essential tool to address these
limitations and improve statistical power by pooling evidence from multiple studies and providing insight into consistent results.
There are also applications of neuroimaging meta-analysis to resting-state fMRI and structural analysis using voxel-based morphometry. Going forward we will only reference fMRI, but note that the application extends to other types of data. Meta-analysis is classified into two categories in neuroimaging research: image-based meta-analysis (IBMA) which uses the 3D statistic maps of original studies and coordinate-based meta-analysis (CBMA) which uses the reported spatial coordinates of activation foci in a standard MNI or Talairach space. Ideally, only IBMA would be used, as there is substantial information loss by only using activation foci as compared to full statistics maps, and further accuracy loss occurs when deactivation foci are ignored (Salimi-Khorshidi et al., 2009). However, while it is now more common to share entire statistical maps in published studies, historically this has not been the case (Salimi-Khorshidi et al., 2009) and there exists large-scale coordinate databases (e.g., BrainMap (Laird et al., 2005), Neurosynth (Yarkoni et al., 2011)). Hence, CBMA is still the predominant approach for neuroimaging meta-analysis.
To identify consensus in brain regions with consistent activation across studies, researchers have developed a variety of CBMA methods, which are either kernel-based or model-based. Among those kernel-based methods, activation likelihood estimation (ALE, with a Gaussian kernel), multilevel kernel density analysis (MKDA, with a uniform sphere) and signed differential mapping (SDM, with a Gaussian kernel scaled by effect size) are commonly used (Turkeltaub et al., 2002, Eickhoff et al., 2012, Wager et al., 2007, Radua et al., 2012). None of the three methods is based on a formal statistical model; however, all are able to obtain statistical inferences by reference to a null hypothesis of total random arrangement of the foci (Samartsidis et al., 2017). Voxels with significant p-values are considered to be the regions of consistent activation. Multiple testing corrected inferences are made by controlling the family-wise error rate using the null maximum distribution (Westfall and Young, 1993) or the false discovery rate (FDR) (Benjamini-Hochberg procedure). However, kernel-based methods are mass-univariate approaches rather than explicit probabilistic models and hence lack interpretability; they do not generally allow group comparison, do not model the spatial dependence of activation foci, and cannot accommodate study-level covariates to conduct a meta-regression (Samartsidis et al., 2019).
Bayesian model-based methods address these limitations, and are categorised into parametric spatial point process models (Kang et al., 2011, Montagna et al., 2018, Samartsidis et al., 2019) and nonparametric Bayesian models (Yue et al., 2012, Kang et al., 2014). They use explicit generative models for the data with testable assumptions. Although they generally provide advances in interpretability and accuracy over kernel-based methods, they are computationally intensive approaches and generally require parallel computing on GPUs (Samartsidis et al., 2019), and only some approaches can conduct meta-regression to estimate the effect of study-level covariates. Further, it can be more challenging for practitioners to interpret the spatial posterior intensity functions and utilise spatial Bayesian models in practice.
In this work, we investigate classical frequentist models that explicitly account for the spatial structure of the distribution of activation foci. Specifically, we focus on developing a spatial model that takes the form of a generalised linear model, where we make use of a spline parameterization to induce a smooth response and model the entire image jointly, allow for image-wise study-level regressors and consider different stochastic models to find the most accurate but parsimonious fit. Although Poisson is the classic distribution describing independent foci counts, we have previously found evidence of over-dispersion (Samartsidis et al., 2017), and thus we further explore a Negative Binomial model, a Clustered Negative Binomial model and a Quasi-Poisson model to allow excess variation in counts data.
Our work draws on the existing methods for CBMA, while introducing key innovations. From the Bayesian work, we take the idea of explicit spatial models; from the kernel methods, we take the idea of fixing the degree of spatial smoothness. The contribution of this meta-regression model is both methodological and practical, it provides a generative regression model that estimates a smooth intensity function and can have study-level regressors. Meanwhile, using a crucial memory-saving model factorisation, it is also a computationally efficient alternative to the existing Bayesian spatial regression models and provides an accurate estimation of the intensity function. While our method is suitable for any CBMA data, we are particularly motivated by studies of cognition. Cognition encompasses various mental processes, including perception, intelligence, problem solving,
social interactions, and can be affected by substance use. We demonstrate this meta-regression framework on previously published meta-analyses of \(20\) cognitive and psychological tasks, allowing generalised linear hypothesis testing on spatial effect, as well as inference on the effect of study-level covariates.
In the remainder of this work, we present the proposed meta-regression framework, discuss model factorisation and optimisation procedures, as well as inferences on meta-regression outcomes via statistical tests in Section 2. We then explain the experimental settings in Section 3. In Section 4, we explore different variants of stochastic models on the \(20\) meta-analytic datasets, describe multiple goodness-of-fit statistics to identify the most accurate model, establish valid FPR control via Monte Carlo simulation under the null hypothesis of spatial homogeneity, and compare the homogeneity test with kernel methods. Finally, Section 5 summarises our findings and potential extensions of this meta-regression framework in the future.
## 2 Methods
Generalised linear models are described in terms of their stochastic and deterministic components. Our deterministic model has a regression structure with a spatial component utilising a spline parameterization and study-level covariate component. For the stochastic model, we consider multiple models motivated by CBMA data characteristics. We then propose a model factorisation approach to make our methods scalable, before outlining a general inference framework.
### Deterministic model
#### 2.1.1 Generic regression structure
Assume there are \(N\) voxels in each of \(M\) studies, and then our CBMA data at voxel \(j\) for study \(i\) is the voxelwise count of foci \(Y_{ij}\), collected for study \(i\) as the N-vector \(Y_{i}=\left[Y_{i1},Y_{i2},\cdots,Y_{iN}\right]^{\top}\). We generate a spatial design matrix \(X(N\times P)\) with \(P\) cubic B-spline bases (more details to follow in Section 2.1.2) and construct study-level covariates matrix \(Z(M\times R)\) by extracting \(R\) study-level covariates from each of \(M\) studies. For the CBMA framework, the central object of interest is the voxelwise intensity function for study \(i\), which considers both effects of smooth spatial bases and study-level covariates. In this setting, it is most concise to write the model for study \(i\) as
\[\log(\mu_{i})=\log\left[\mathbb{E}(Y_{i})\right]=X\beta+(Z_{i}\gamma)\mathbf{1 }_{N} \tag{1}\]
where \(\beta(P\times 1)\) and \(\gamma(R\times 1)\) are regression coefficients for spatial bases \(X\) and study-level covariates \(Z\) respectively, \(Z_{i}\) is the \(i^{th}\) row of study-level regressors \(Z\), \(\mathbf{1}_{N}\) is a \(N\)-vector of \(1\)'s; and the estimated intensity is captured via \(\mu_{ij}\) for studies \(i=1,...,M\) and voxels \(j=1,...,N\), collected for study \(i\) as the N-vector \(\mu_{i}=\left[\mu_{i1},\mu_{i2},\cdots\mu_{iN}\right]^{\top}\). This model is identifiable as long as we ensure each covariate variable is mean zero, letting \(X\) capture the overall mean. The GLM for all voxels in all \(M\) studies is then
\[\log\left[\mathbb{E}(Y)\right]=(\mathbf{1}_{M}\otimes X)\beta+(Z\otimes \mathbf{1}_{N})\gamma \tag{2}\]
where \(Y=[Y_{1},Y_{2},\cdots,Y_{M}]^{\top}\) is a \((M\times N)-\)vector, containing voxelwise foci count for all of \(M\) studies, and \(\otimes\) is the Kronecker product. This formulation has millions of rows \((MN)\) and the spatial design matrix has billions of entries (\(MN\times P\)). In consideration of implementation complexity and memory requirement, we will propose a simplified reformulation of this GLM in Section 2.3.
#### 2.1.2 Spline parameterization
Previous work on spatial point process modelling of CBMA data has treated each study's foci as a realisation of a doubly-stochastic Poisson process, also known as a Cox process. In some of that work, the log intensity function is parameterised by superimposed Gaussian kernel basis functions (Montagna et al., 2018), while in others, the log intensity is a Gaussian process (Samartsidis et al., 2019). Here, we propose a tensor product of cubic B-spline bases for modelling the spatial intensity, as its smoothness, stability and local support make it an ideal spatial basis for the CBMA application. A 1-dimensional cubic B-spline is a piecewise polynomial of order \(k=3\), where pre-specified knots \(T=(t_{0},t_{1},\cdots,t_{n})\) determine the parameterization of the basis functions as the joins between polynomial sections. The order \(k\) B-spline basis functions \(B_{ik}\) are defined by recurrence relations,
for \(i=0,1,\cdots,n\),
\[\begin{split} B_{i1}&=\begin{cases}1\text{ for }t_{i}\leq t<t_{i+1}\\ 0\text{ otherwise}\end{cases}\quad\text{ for }k=1\,,\\ B_{i,k}(t)&=\frac{t-t_{i}}{t_{i+k-1}-t_{i}}B_{i,k-1}(t)+ \frac{t_{i+k}-t}{t_{i+k}-t_{i+1}}B_{i+1,k-1}(t)\text{ for }k>1\,.\end{split} \tag{3}\]
The B-spline curve is a linear combination of the B-spline basis functions \(B_{ik}\). For our \(3D\) lattice, assume there are \(v_{x}\) voxels along the x direction; the values of the \(n_{x}\) B-spline bases evaluated at these \(v_{x}\) voxels form a coefficient matrix \(C_{x}\) (size \(v_{x}\times n_{x}\)). Similarly, there exist another two coefficient matrices \(C_{y}\) and \(C_{z}\) (size \(v_{y}\times n_{y}\) and \(v_{z}\times n_{z}\)) along the y and z directions. The whole coefficient matrix \(C\) of 3-dimensional B-spline bases is constructed by taking the tensor product of the \(3\) coefficient matrices (see Figure 1 for a 2D illustration),
\[C=C_{x}\otimes C_{y}\otimes C_{z} \tag{4}\]
The matrix \(C\) is \((v_{x}v_{y}v_{z})\times(n_{x}n_{y}n_{z})\) and spans the entire \(3D\) volume, while the analysis is based on a brain mask of \(N\) voxels. The design matrix \(X\) is obtained from \(C\) by a three-step process: first, rows corresponding to voxels outside the brain mask are removed; then, columns are removed if they correspond to weakly supported B-spline bases (a B-spline basis is regarded as "weakly supported" if the maximum of its coefficients over all voxels is below \(0.1\)); finally, the rows are re-normalised (to sum to \(1\)) to preserve the "partition of unity" property of the spline bases.
We define our cubic B-spline bases with equally spaced knots in the \(x,y,z\) dimensions, and thus we parameterise the level of spatial smoothness by the knot spacing: larger knot spacing means fewer basis functions and greater smoothness; conversely, closer knots mean more basis functions and a greater ability to represent fine details. Conceptually, more flexible parameterizations would allow arbitrary knot locations, but to minimise computational complexity we fix the design matrix \(X\) based on a pre-specified knot spacing chosen according to prior knowledge. While other spline applications use a dense array of knots and then control smoothness with a roughness penalty, the computational and memory requirements of our spatial model demand that we judiciously select the coarsest spline spacing consistent with our application.
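To make this construction concrete, the sketch below (NumPy/SciPy, not the authors' implementation) builds the tensor-product design matrix of Equation (4) on a \(2mm\) MNI-like grid and applies the three-step reduction described above; the \(20mm\) knot spacing, the \(0.1\) support threshold and the placeholder brain mask are assumptions, and sparse storage is used because each cubic B-spline has local support.

```python
import numpy as np
from scipy import sparse
from scipy.interpolate import BSpline

def bspline_design_1d(coords, spacing=20.0, degree=3):
    """Evaluate cubic B-spline bases with equally spaced knots at 1D voxel coordinates."""
    lo, hi = coords.min(), coords.max()
    knots = np.arange(lo - degree * spacing, hi + (degree + 1) * spacing, spacing)
    n_bases = len(knots) - degree - 1
    design = np.zeros((len(coords), n_bases))
    for k in range(n_bases):
        coef = np.zeros(n_bases)
        coef[k] = 1.0
        design[:, k] = BSpline(knots, coef, degree)(coords)
    return design

vx, vy, vz = 91, 109, 91                                   # 2 mm grid dimensions
Cx = bspline_design_1d(np.arange(vx) * 2.0)
Cy = bspline_design_1d(np.arange(vy) * 2.0)
Cz = bspline_design_1d(np.arange(vz) * 2.0)
# Tensor-product basis of Equation (4); voxel ordering is x slowest, z fastest.
C = sparse.kron(sparse.kron(Cx, Cy), Cz, format="csr")

mask = np.ones(vx * vy * vz, dtype=bool)                   # placeholder brain mask
X = C[np.flatnonzero(mask), :]                             # 1) keep in-mask voxels
keep = np.flatnonzero(X.max(axis=0).toarray().ravel() >= 0.1)
X = X[:, keep]                                             # 2) drop weakly supported bases
row_sums = np.asarray(X.sum(axis=1)).ravel()
X = sparse.diags(1.0 / row_sums) @ X                       # 3) re-normalise rows to sum to 1
```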
### Stochastic model
Assumed stochastic behaviours of CBMA foci data determine the form of statistical likelihood we use. We consider a set of four stochastic models for the distribution of foci counts at voxel level. All of our models take the form of generalised linear models, where the inhomogeneous intensity at each voxel is captured by the spline bases and any study-level covariates (as per Equation (2)). We fit our models either by maximising the log-likelihood function iteratively via the L-BFGS algorithm, or by iteratively re-weighted least squares (IRLS) for quasi-likelihood models. To identify the most accurate but parsimonious model, we will elaborate our meta-regression framework for these models and illustrate their strengths and limitations.
Figure 1: Illustration of tensor product of \(2D\) spline bases (with equal knots spacing)
#### 2.2.1 Poisson model
In practice, the count of foci \(Y_{ij}\) (for studies \(i=1,\cdots,M\), voxels \(j=1,\cdots,N\)) is only ever \(0\) or \(1\), which strictly indicates a Binomial model. However, inspired by previous success with Poisson point process, and accuracy of Poisson approximation for low-rate Binomial data (Eisenberg et al., 1966), we consider a Poisson model.
If foci arise from a realisation of a (continuous) inhomogeneous Poisson process, the (discrete) voxel-wise counts will be independently distributed as Poisson random variables, with rate equal to the integral of the (true, unobserved, continuous) intensity function over each voxel. As the sum of multiple independent Poisson random variables is also Poisson, a practical consequence is that it is equivalent to model either the set of \(M\) study-level counts or the summed counts at each voxel. Following the deterministic structure outlined in Equation (1), the intensity for voxel \(j\) in study \(i\) is
\[\mathbb{E}[Y_{ij}] =\mu_{ij} \tag{5}\] \[\log(\mu_{ij}) =\eta_{ij}=x_{j}^{\top}\beta+Z_{i}\gamma\]
where \(Y_{ij}\sim\mathrm{Poisson}(\mu_{ij})\), \(x_{j}^{\top}\) is the \(j^{th}\) row of spatial design matrix \(X(N\times P)\), and \(\beta\) is the regression coefficient of the spline bases. The data vector \(Y\) has length-\((MN)\), which is impractical to represent explicitly. Under the assumption of independence of counts across studies, the likelihood function is exactly the same if we model the voxelwise total foci count over studies instead (more details to follow in S1.1 in supplementary material), which gives rise to the modified Poisson model on summed data at voxel \(j\) over all studies, \(Y_{\cdot,j}=\sum\limits_{i=1}^{M}Y_{ij}\),
\[\mathbb{E}[Y_{\cdot,j}] =\mu_{\cdot,j} \tag{6}\] \[\mu_{\cdot,j} =\sum\limits_{i=1}^{M}\mu_{ij}=\sum\limits_{i=1}^{M}\exp\left(x_{ j}^{\top}\beta+Z_{i}\gamma\right)=\exp(x_{j}^{\top}\beta)\left(\sum\limits_{i=1} ^{M}\exp(Z_{i}\gamma)\right)\]
where \(\mu_{\cdot,j}=\sum\limits_{i=1}^{M}\mu_{ij}\) is the expected sum of intensity at voxel \(j\) over studies. Under this formulation, the likelihood to be optimised is,
\[l(\theta)=l(\beta,\gamma)=\sum\limits_{j=1}^{N}\left[Y_{\cdot,j}\log(\mu_{ \cdot,j})-\mu_{\cdot,j}-\log(Y_{\cdot,j}!)\right] \tag{7}\]
#### 2.2.2 Negative Binomial model
While Poisson model is widely used in the regression of count data, it is recognised that counts often display over-dispersion (the variance of response variable substantially exceeds the mean). Imposition of Poisson model according to an unrealistic assumption (variance equals mean) may underestimate the standard error, and give rise to biased estimation of the regression coefficients. While Barndorff-Nielsen and Yeo (1969) proposed a formal definition of spatial Negative Binomial model, it involves Gaussian processes and complexities we sought to avoid. Hence, here we do not pose a formal point process model, but rather simply assert that the count data at each voxel follows a Negative binomial (NB) distribution independently, thus allowing for anticipated excess variance relative to Poisson (Lawless, 1987).
Our NB model uses a single parameter \(\alpha\) shared over all studies and all voxels to index variance in excess of the Poisson model. For each study \(i\) and voxel \(j\), let \(\lambda_{ij}\) follow a Gamma distribution with mean \(\mu_{ij}\) and variance \(\alpha\mu_{ij}^{2}\); then conditioned on \(\lambda_{ij}\), let \(Y_{ij}\) be Poisson with mean \(\lambda_{ij}\). Then it
can be shown that the marginal distribution of \(Y_{ij}\) follows a NB distribution with probability mass function,
\[\mathbb{P}(Y_{ij}=y_{ij})=\frac{\Gamma(y_{ij}+\alpha^{-1})}{\Gamma(y_{ij}+1) \Gamma(\alpha^{-1})}(\frac{1}{1+\alpha\mu_{ij}})^{\alpha^{-1}}(\frac{\alpha\mu _{ij}}{1+\alpha\mu_{ij}})^{y_{ij}}. \tag{8}\]
In terms of the success count and probability parameterization, \(\text{NB}(r,p)\), we have \(Y_{ij}\sim\text{NB}(\alpha^{-1},\frac{\mu_{ij}}{\alpha^{-1}+\mu_{ij}})\), with mean \(\mathbb{E}(Y_{ij})=\mu_{ij}\) and variance \(\mathbb{V}(Y_{ij})=\mu_{ij}+\alpha\mu_{ij}^{2}\). Details on derivation of probability density function of NB model can be found in S1.2 of the Supplementary material. When \(\alpha>0\), we have Poisson-excess variance of \(\alpha\mu_{ij}^{2}\); or analogous to the coefficient of variation, the coefficient of excess variation is \(\sqrt{\alpha\mu_{ij}^{2}}/\mu_{ij}=\sqrt{\alpha}\), which can be interpreted roughly as the relative excess standard deviation, relative to a Poisson model.
Again, the data vector is impractical to represent explicitly, but unlike Poisson, the sum of multiple independent NB random variables does not follow an NB distribution. Thus, we propose a moment-matching approach to approximate the mean (first moment) and variance (second moment) of this convolution of NB distributions, which significantly facilitates the simplification of the log-likelihood function. Matching the first two moments, the approximate NB distribution of the total count of foci over all studies at voxel \(j\) is given by \(Y_{\cdot,j}=\sum\limits_{i=1}^{M}Y_{ij}\sim\text{NB}(r_{j}^{\prime},p_{j}^{\prime})\), where
\[r_{j}^{\prime}=\frac{\mu_{\cdot,j}^{2}}{\alpha\sum\limits_{i=1}^{M}\mu_{ij}^{2 }},\;\;p_{j}^{\prime}=\frac{\sum\limits_{i=1}^{M}\mu_{ij}^{2}}{\alpha^{-1}\mu _{\cdot,j}+\sum\limits_{i=1}^{M}\mu_{ij}^{2}}\]
with corresponding excess variance
\[\alpha^{\prime}=\alpha\frac{\sum\limits_{i=1}^{M}\mu_{ij}^{2}}{\mu_{\cdot,j}^{ 2}},\]
which gives rise to the simplified NB log-likelihood function,
\[l(\theta)\approx l(\beta,\alpha^{\prime})=\sum\limits_{j=1}^{N}\left[\log \Gamma(Y_{\cdot,j}+r_{j}^{\prime})-\log\Gamma(Y_{\cdot,j}+1)-\log\Gamma(r_{j }^{\prime})+r_{j}^{\prime}\log\left(1-p_{j}^{\prime}\right)+Y_{\cdot,j}\log p_ {j}^{\prime}\right] \tag{9}\]
Details on derivations of moment matching approach can be found in S1.3 in the Supplementary material.
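The moment-matched likelihood is straightforward to evaluate numerically; the sketch below (not the authors' code) computes the contribution of a single voxel to Equation (9), given the per-study intensities \(\mu_{ij}\) at that voxel, the summed count \(Y_{\cdot,j}\) and the dispersion \(\alpha\).

```python
import numpy as np
from scipy.special import gammaln

def nb_voxel_loglik(mu_voxel: np.ndarray, y_sum: float, alpha: float) -> float:
    """One voxel's contribution to the moment-matched NB log-likelihood (Equation 9).

    mu_voxel : per-study intensities mu_{ij} at this voxel (length M)
    y_sum    : total foci count at this voxel, summed over studies
    alpha    : excess-variance (dispersion) parameter
    """
    mu_dot = mu_voxel.sum()                         # mu_{.,j}
    sum_sq = np.sum(mu_voxel ** 2)                  # sum_i mu_{ij}^2
    r = mu_dot ** 2 / (alpha * sum_sq)              # matched size parameter r'_j
    p = sum_sq / (mu_dot / alpha + sum_sq)          # matched probability p'_j
    return (gammaln(y_sum + r) - gammaln(y_sum + 1) - gammaln(r)
            + r * np.log1p(-p) + y_sum * np.log(p))

# The total log-likelihood sums this contribution over the N in-mask voxels.
```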
#### 2.2.3 Clustered Negative Binomial model
While NB model can be regarded as a kind of "random effects" Poisson model, as developed above, the latent Gamma random variable introduces independent variation at each voxel. We could instead assert that the random (Gamma-distributed) effects are not independent voxelwise effects, but rather latent characteristics of each study, and represent a shared effect over the entire brain for a given study. This is, in fact, the approach used by a Bayesian CBMA method (Samartsidis et al., 2019), and in a non-imaging setting, a Poisson-Gamma model for two-stage cluster sampling (Geoffroy and Weerakkody, 2001). Therefore, we now consider a third GLM, where at the first stage, we assume each individual study \(i\) is sampled with a global latent value \(\lambda_{i}\) from a Gamma distribution with mean \(1\) and variance \(\alpha\), which accommodates excess variance by dispersion parameter \(\alpha\) (\(\lambda_{i}\sim Gamma(\alpha^{-1},\alpha^{-1})\)). At the second stage, conditioned on the global variable \(\lambda_{i}\), \(Y_{ij}\) are drawn from a Poisson distribution with mean \(\lambda_{i}\mu_{ij}\) (\(Y_{ij}|\lambda_{i}\sim\text{Poisson}(\lambda_{i}\mu_{ij})\)), where \(\mu_{ij}\) is the expected intensity parameterised by spatial regression parameter \(\beta\) and covariates regression parameter \(\gamma\). The marginal distribution of \(Y_{ij}\) also follows a NB distribution,
\[\mathbb{P}(Y_{ij}=y_{ij})=\frac{\Gamma(y_{ij}+\alpha^{-1})}{\Gamma(y_{ij}+1) \Gamma(\alpha^{-1})}(\frac{\alpha^{-1}}{\mu_{ij}+\alpha^{-1}})^{\alpha^{-1}}( \frac{\mu_{ij}}{\mu_{ij}+\alpha^{-1}})^{y_{ij}} \tag{10}\]
where \(Y_{ij}\sim NB(\alpha^{-1},\frac{\mu_{ij}}{\alpha^{-1}+\mu_{ij}})\) with mean \(\mathbb{E}(Y_{ij})=\mu_{ij}\) and variance \(\mathbb{V}(Y_{ij})=\mu_{ij}+\alpha\mu_{ij}^{2}\). Details on derivation of probability density function of clustered NB model can be found in S1.4
in Supplementary material. This two-stage hierarchical Clustered NB model also introduces a covariance structure between foci within a study, which is determined by the expected intensity of the observations as well as the dispersion parameter \(\alpha\) (see S1.5 in Supplementary material). The covariance for studies \(i\) and \(i^{\prime}\), and distinct voxel \(j\) and \(j^{\prime}\) is,
\[\begin{cases}\mathbb{C}(Y_{ij},Y_{i^{\prime},j^{\prime}})=\alpha\mu_{ij}\mu_{ ij^{\prime}},\text{ if }i=i^{\prime}\\ \mathbb{C}(Y_{ij},Y_{i^{\prime},j^{\prime}})=0,\text{ if }i\neq i^{\prime} \end{cases} \tag{11}\]
The log-likelihood is the sum of terms over independent studies,
\[\begin{split} l(\beta,\alpha,\gamma)&=\sum\limits_{i=1 }^{M}\log[f(Y_{i1},Y_{i2},\cdots,Y_{iN})]\\ &=M\alpha^{-1}\log(\alpha^{-1})-M\log\Gamma(\alpha^{-1})+\sum \limits_{i=1}^{M}\log\Gamma(Y_{i,\centerdot}+\alpha^{-1})\\ &-\sum\limits_{i=1}^{M}\sum\limits_{j=1}^{N}\log Y_{ij}!-\sum \limits_{i=1}^{M}(Y_{i,\centerdot}+\alpha^{-1})\log(\mu_{i,\centerdot}+ \alpha^{-1})+\sum\limits_{i=1}^{M}\sum\limits_{j=1}^{N}Y_{ij}\log\mu_{ij}\end{split} \tag{12}\]
where \(Y_{i,\centerdot}=\sum\limits_{j=1}^{N}Y_{ij}\) is the sum of foci within study \(i\). One limitation of this model, though, is that it doesn't admit a factorisation and depends on the length-(MN) data vector (see S1.5 in Supplementary material).
Despite a good motivation to induce intra-study dependence, the Clustered NB model depends on the strong assumption that excess variance is captured by the global dispersion \(\lambda_{i}\). If there is voxelwise independent excess variance, the previous NB model will be preferred; we assess this issue below with real data evaluations.
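Because \(Y_{ij}\in\{0,1\}\), the \(\log Y_{ij}!\) terms in Equation (12) vanish, and the likelihood only needs each study's foci locations, its total intensity and its total count; the sketch below evaluates it directly. The dense \((M\times N)\) intensity array and the `foci_idx` structure are assumptions for illustration only (the factorisation in Section 2.3 avoids materialising \(\mu_{ij}\)).

```python
import numpy as np
from scipy.special import gammaln

def clustered_nb_loglik(mu, foci_idx, alpha):
    """Clustered NB log-likelihood of Equation (12) for 0/1 voxel counts.

    mu       : (M, N) array of intensities mu_{ij}
    foci_idx : list of length M; foci_idx[i] holds voxel indices of study i's foci
    alpha    : study-level dispersion parameter
    """
    n_studies = mu.shape[0]
    inv_a = 1.0 / alpha
    loglik = n_studies * (inv_a * np.log(inv_a) - gammaln(inv_a))
    for i in range(n_studies):
        n_i = len(foci_idx[i])                       # Y_{i,.} (each count is 0 or 1)
        mu_i_dot = mu[i].sum()                       # mu_{i,.}
        loglik += gammaln(n_i + inv_a)
        loglik -= (n_i + inv_a) * np.log(mu_i_dot + inv_a)
        loglik += np.log(mu[i, foci_idx[i]]).sum()   # sum_j Y_{ij} log mu_{ij}
    return loglik
```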
#### 2.2.4 Quasi-Poisson model
As an alternative to NB model, Quasi-Poisson model also allows over-dispersed count data, and is a straightforward elaboration of the GLM. Instead of specifying a well-defined probability distribution for count data, Quasi-Poisson model only needs a mean model and a variance function, \(\mathbb{V}(Y_{ij})=\theta\mu_{ij}\) (with \(\theta\geq 1\)). While the variance-mean relationship is linear for the Quasi-Poisson model, the relationship is quadratic in NB model. This results in small foci counts being weighted more and can have greater adjustment effect in Quasi-Poisson model, which theoretically might be a perfect fit to our scenario that most brain regions have zero or low foci counts (Ver Hoef and Boveng, 2007).
The Quasi-Poisson model can be framed as a GLM, with the mean and variance for voxel \(j\) in study \(i\) given by,
\[\begin{split} E[Y_{ij}]&=\mu_{ij}\\ \text{Var}(Y_{ij})&=\theta\mu_{ij}.\end{split} \tag{13}\]
Without a likelihood function, we instead use the IRLS algorithm, with the \((j+1)^{th}\) iteration given by,
\[\begin{split}\hat{\beta}^{[j+1]}&=\hat{\beta}^{[j]} +({X^{*}}^{\top}W^{[j]}X^{*})^{-1}{X^{*}}^{\top}(Y-\mu^{[j]})\\ \hat{\gamma}^{[j+1]}&=\hat{\gamma}^{[j]}+({Z^{*}}^{ \top}W^{[j]}Z^{*})^{-1}{Z^{*}}^{\top}(Y-\mu^{[j]})\end{split} \tag{14}\]
where \(W=\operatorname{diag}(\frac{\mu_{11}}{\theta},\cdots,\frac{\mu_{1N}}{\theta}, \cdots,\frac{\mu_{M1}}{\theta},\cdots,\frac{\mu_{MN}}{\theta})\), and \(X^{*}=\mathbf{1}_{M}\otimes X\), \(Z^{*}=\mathbf{1}_{N}\otimes Z\). This model can be simplified as well, though we again defer that to the next Section 2.3.
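In the standard (non-factorised) GLM formulation, a variance function proportional to the mean yields the same point estimates as the Poisson fit; only the dispersion \(\theta\), commonly estimated by the Pearson statistic over the residual degrees of freedom, rescales the standard errors. The hedged sketch below illustrates this with statsmodels on a generic design matrix; it is not the CBMR implementation of Equation (14).

```python
import numpy as np
import statsmodels.api as sm

def quasi_poisson_fit(y: np.ndarray, design: np.ndarray):
    """Poisson point estimates with quasi-Poisson (over-dispersed) standard errors."""
    fit = sm.GLM(y, design, family=sm.families.Poisson()).fit()
    theta = fit.pearson_chi2 / fit.df_resid     # moment estimate of the dispersion theta
    se = fit.bse * np.sqrt(theta)               # standard errors inflated by sqrt(theta)
    return fit.params, se, theta
```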
### Model factorisation
Having derived the explicit log-likelihood functions for meta-regression with three stochastic likelihood-based models, as well as the updating equation for a quasi-likelihood based model, we now consider model factorisation to replace the full \((MN)\)-vector of foci counts by sufficient statistics. Following the generic formulation of GLM proposed in Section 2.1.1,
\[\eta_{ij}=\log(\mu_{ij})=\sum\limits_{k=1}^{P}X_{jk}\beta_{k}+\sum\limits_{s= 1}^{R}Z_{is}\gamma_{s}. \tag{15}\]
\(\eta_{ij}\) is the estimated linear response from GLM, specific to each voxel \(j\) in each individual study \(i\). In this application, there are always at least \(220,000\) voxels (\(N\)), hundreds or thousands of studies \(M\), and as many as \(P=456\) basis elements (with \(20mm\) knots spacing), giving rise to millions of rows (\(MN\)) and billions of entries (\(MN\times(P+R)\)) in a GLM formulation. Thus, we propose a reformulation of this model into a series of sufficient statistics that are never larger than \(M\) or \(N\) in dimension. First, note that the localised spatial effect \(\mu^{X}\) and global effect of study-level covariates \(\mu_{i}^{Z}\) for study \(i\) factorise \(\mu_{ij}\) as
\[\mu_{ij}=\exp\left(\sum\limits_{k=1}^{P}X_{jk}\beta_{k}+\sum\limits_{s=1}^{R}Z _{is}\gamma_{s}\right)=\exp\left(\sum\limits_{k=1}^{P}X_{jk}\beta_{k}\right) \exp\left(\sum\limits_{s=1}^{R}Z_{is}\gamma_{s}\right)=\mu_{j}^{X}\ \mu_{i}^{Z} \tag{16}\]
This model is identifiable as long as we ensure each covariate \(Z_{s}\) is mean zero, letting \(X\) capture the overall mean. To further simplify the total log-likelihood function, we also use the fact that \(Y_{ij}\leq 1\) (either \(0\) or \(1\)), as there will never be more than \(1\) focus at the same location in a given study. Define the following notation:
* Let N-vector \(\mu^{X}=\exp(X\beta)\) be the vector of localised spatial effect of studies;
* let \(M\)-vector \(\mu^{Z}=\exp(Z\gamma)\) be the vector of global study-level covariates effect of studies;
* Let \(Y_{\centerdot,j}=\sum\limits_{i=1}^{M}Y_{ij}\) be the sum of foci counts at voxel \(j\) across all studies, and the \(N-\)vector \(Y_{\centerdot,}=[Y_{\centerdot,1},\cdots,Y_{\centerdot,N}]^{\top}\);
* Let \(Y_{i,\centerdot}=\sum\limits_{j=1}^{N}Y_{ij}\) be the sum of foci counts for study \(i\) across all voxels, and the \(M-\)vector \(Y_{,\centerdot}=[Y_{1,\centerdot},\cdots,Y_{M,\centerdot}]^{\top}\).
The simplified factorisations of the total log-likelihood functions, or of the IRLS updating equations, are specific to each stochastic model (see S2 in Supplementary material):
* Poisson model: \[l(\beta,\gamma)=Y_{\centerdot,}^{\top}\log(\mu^{X})+Y_{,\centerdot}^{\top}\log(\mu^{Z})-\left[\mathbf{1}^{\top}\mu^{X}\right]\left[\mathbf{1}^{\top}\mu^{Z}\right],\] (17)
* NB model: As described in Section 2.2.2, we approximate a sum of independent NB variables again as a NB:
\[Y_{\centerdot,j}=\sum\limits_{i=1}^{M}Y_{ij}\sim\mathrm{NB}(r_{j}^{\prime}, p_{j}^{\prime})=\mathrm{NB}(\frac{(\mu_{j}^{X})^{2}[\mathbf{1}^{\top}\mu^{Z}]^{2 }}{{\alpha^{\prime}}\sum\limits_{i=1}^{M}(\mu_{j}^{X}\mu_{i}^{Z})^{2}},\frac{ \sum\limits_{i=1}^{M}(\mu_{j}^{X}\mu_{i}^{Z})^{2}}{{(\alpha^{\prime})}^{-1}\mu _{j}^{X}[\mathbf{1}^{\top}\mu^{Z}]^{\top}+\sum\limits_{i=1}^{M}(\mu_{j}^{X}\mu _{i}^{Z})^{2}})\] (18) with dispersion parameter \(\alpha^{\prime}=\frac{\alpha\sum\limits_{i=1}^{M}(\mu_{j}^{X}\mu_{i}^{Z})^{2} }{(\mu_{j}^{X})^{2}[\mathbf{1}^{\top}\mu^{Z}]^{2}}\). The log-likelihood function is given by, \[l(\alpha^{\prime},\beta,\gamma)=\sum\limits_{j=1}^{N}\left[\log\Gamma(Y_{ \centerdot,j}+r_{j}^{\prime})-\log\Gamma(Y_{\centerdot,j}+1)-\log\Gamma(r_{ j}^{\prime})+r_{j}^{\prime}\log\left(1-p_{j}^{\prime}\right)+Y_{\centerdot,j}\log p _{j}^{\prime}\right], \tag{19}\]
* Clustered NB model: \[l(\alpha,\beta,\gamma) =M\alpha^{-1}\log(\alpha^{-1})-M\log\Gamma(\alpha^{-1})+\sum\limits_{i=1}^{M}\log\Gamma(Y_{i,\centerdot}+\alpha^{-1})\] (20) \[-\sum\limits_{i=1}^{M}(Y_{i,\centerdot}+\alpha^{-1})\log(\alpha^{-1}+\mu_{i,\centerdot})+Y_{\centerdot,}^{\top}\log(\mu^{X})+Y_{,\centerdot}^{\top}\log(\mu^{Z})\] where the dispersion parameter \(\alpha\) measures the excess variance across all studies and all voxels,
* Quasi-Poisson model: \[\begin{split}\hat{\beta}^{[j+1]}&=\hat{\beta}^{[j]}+(X^{\top}W^{[j]}X)^{-1}X^{\top}(Y_{\centerdot,}-(\mu^{X})^{[j]})\\ \hat{\gamma}^{[j+1]}&=\hat{\gamma}^{[j]}+(Z^{\top}V^{[j]}Z)^{-1}Z^{\top}(Y_{,\centerdot}-(\mu^{Z})^{[j]})\\ \text{where }W=\mathrm{diag}(\frac{\mu_{1}^{X}}{\theta},\cdots,\frac{\mu_{N}^{X}}{\theta})\text{ and }V=\mathrm{diag}(\frac{\mu_{1}^{Z}}{\theta},\cdots,\frac{\mu_{M}^{Z}}{\theta}).\end{split}\] (21)
### Fisher Scoring and optimisation via L-BFGS
Based on explicit log-likelihood functions associated with three stochastic likelihood-based models (Poisson, NB and clustered NB model) in Section 2.2.1-Section 2.2.3, we employ Fisher scoring for iterative optimisation of parameters in GLMs. Fisher scoring replaces the Hessian of Newton's method with the expected Fisher information, while the gradient is given by the score (Longford, 1987). Writing \(\theta\) for all parameters, the updating equation at the \((k+1)^{th}\) iteration is,
\[\theta^{[k+1]}=\theta^{[k]}+I(\theta^{[k]})^{-1}\frac{\partial}{\partial \theta^{[k]}}l(\theta^{[k]})) \tag{22}\]
where the expected Fisher information is \(I(\theta^{[k]})=\mathbb{E}\left[-\frac{\partial^{2}l(\theta)}{\partial\theta\partial\theta^{\top}}\right]_{\theta=\theta^{[k]}}\).
For the Poisson model, \(\theta=[\beta,\gamma]\), the Fisher information is given by,
\[I(\theta)=I(\beta,\gamma)=\begin{bmatrix}-\frac{\partial^{2}l}{\partial\beta \partial\beta^{\top}}&-\frac{\partial^{2}l}{\partial\beta\partial\gamma^{\top }}\\ -\frac{\partial^{2}l}{\partial\gamma\partial\beta^{\top}}&-\frac{\partial^{2}l} {\partial\gamma\partial\gamma^{\top}}\end{bmatrix} \tag{23}\]
with negative Hessian matrix of \(\beta\), \(\left(-\frac{\partial^{2}l}{\partial\beta\partial\beta^{\top}}\right)_{P\times P}=X^{\top}\mathrm{diag}(\mu^{X})X\); the negative cross term \(\left(-\frac{\partial^{2}l}{\partial\beta\partial\gamma^{\top}}\right)_{P\times R}=\left(-\frac{\partial^{2}l}{\partial\gamma\partial\beta^{\top}}\right)^{\top}_{R\times P}=[X^{\top}\mu^{X}][(\mu^{Z})^{\top}Z]\); and negative Hessian matrix of \(\gamma\), \(\left(-\frac{\partial^{2}l}{\partial\gamma\partial\gamma^{\top}}\right)=Z^{\top}\mathrm{diag}(\mu^{Z})Z\).
Our other stochastic models with study-level covariates lead to more complicated derivations of updating equations via Fisher scoring. Instead, we use a more efficient quasi-Newton algorithm (the L-BFGS algorithm, Shanno, 1970), which approximates the required curvature from gradient evaluations, requires only limited memory and reduces the computational complexity.
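As a concrete illustration of the optimisation step, the sketch below maximises the factorised Poisson log-likelihood of Equation (17) with SciPy's L-BFGS, using only the per-voxel summed counts and the spline design matrix; study-level covariates are omitted for brevity (so \(\mu^{Z}=\mathbf{1}_{M}\)), and this is a simplified stand-in rather than the authors' implementation.

```python
import numpy as np
from scipy.optimize import minimize

def fit_poisson_cbmr(X: np.ndarray, y_voxel: np.ndarray, n_studies: int) -> np.ndarray:
    """Maximise Equation (17) with mu^Z = 1 (no study-level covariates).

    X        : (N, P) spline design matrix
    y_voxel  : (N,) total foci count per voxel, summed over studies
    n_studies: number of studies M
    """
    def neg_loglik_and_grad(beta):
        eta = X @ beta                               # log mu^X
        mu_x = np.exp(eta)
        # constant log(Y!) terms omitted; they do not affect the optimum
        nll = -(y_voxel @ eta - n_studies * mu_x.sum())
        grad = -(X.T @ y_voxel - n_studies * (X.T @ mu_x))
        return nll, grad

    beta0 = np.zeros(X.shape[1])
    result = minimize(neg_loglik_and_grad, beta0, jac=True, method="L-BFGS-B")
    return result.x                                  # spline coefficients beta_hat

# The voxel-wise intensity per study is then mu^X = exp(X @ beta_hat).
```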
### Statistical inference
#### 2.5.1 Global test of model fitness
Among the proposed stochastic models in Section 2.2, Poisson, NB and clustered NB model are likelihood-based, while Quasi-Poisson model is Quasi-likelihood based (its exact likelihood is computationally infeasible). To compare the goodness of fit from a global perspective, we will utilise likelihood-based comparison criteria (e.g., LRT, Akaike information criterion (AIC), Bayesian information criterion (BIC)) with likelihood-based models, as well as other global model fitness criteria across all stochastic models within this meta-regression framework.
**Likelihood-based model selection criteria** LRT uses the difference in log-likelihoods to test the null hypothesis that the true model is the smaller nested model. As the Poisson model is nested in both the NB model and the clustered NB model with dispersion parameter \(\alpha=0\), for the null hypothesis \(H_{0}\): dispersion parameter \(\alpha=0\), the likelihood-ratio test statistic is given by,
\[\lambda_{LR}=-2\left[l(\hat{\theta}_{0})-l(\hat{\theta})\right]\]
where \(l(\hat{\theta})=l(\hat{\alpha},\hat{\beta},\hat{\gamma})\) is the maximum log-likelihood of the NB model or clustered NB model without any constraint on parameters, and \(l(\hat{\theta}_{0})=l(\hat{\alpha}=0,\hat{\beta},\hat{\gamma})\) is the maximum log-likelihood of the NB model or clustered NB model with dispersion parameter \(\alpha\) constrained at 0 (i.e. the Poisson model). The test statistic is Chi-square distributed, with one degree of freedom.
AIC and BIC are two alternatives of LRT which also deal with the trade-off between the goodness of fit and simplicity of the model, and resolve overfitting problem by penalising the number of parameters in the model. To measure the goodness of fit of a model \(M\) on dataset \(D\),
\[AIC=2k-2l(\hat{\theta}),\;\;BIC=k\ln(n)-2l(\hat{\theta}) \tag{24}\]
where \(l(\hat{\theta})\) is the maximised log-likelihood function of the model \(M\), \(k\) is the number of parameters in model \(M\) and \(n\) is the number of data points in the dataset \(D\). The model with smaller AIC or BIC value is believed to be a better fit to the dataset.
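Given the maximised log-likelihoods, these criteria reduce to a few lines; a sketch follows, with the LRT p-value computed from the \(\chi^{2}\) reference distribution and AIC/BIC as in Equation (24).

```python
import numpy as np
from scipy.stats import chi2

def lrt_aic_bic(ll_nested: float, ll_full: float, k_nested: int, k_full: int, n: int):
    """Likelihood-ratio test of the nested model, plus AIC/BIC of the full model."""
    lr_stat = -2.0 * (ll_nested - ll_full)
    p_value = chi2.sf(lr_stat, df=k_full - k_nested)   # df = 1 when only alpha is added
    aic = 2 * k_full - 2 * ll_full
    bic = k_full * np.log(n) - 2 * ll_full
    return lr_stat, p_value, aic, bic
```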
**Bias and variance of estimation** For the purpose of selecting the best model in terms of goodness of fit on a variety of datasets, we extend the model comparisons to all stochastic models proposed in Section 2.2, including Quasi-Poisson model. As the central outcome of this meta-regression framework is voxelwise intensity estimation for each study, with the effect of study-level covariates being considered, it's natural to utilise bias and variance of intensity estimation as new criteria stated below,
* Relative bias of estimated total sum of intensity (per study), comparing with the averaged sum of foci count (per study) across multiple datasets;
* Relative bias of standard deviation (Std) in each of \(x,y,z\) dimension, comparing with the actual Standard deviation in foci count (per study) across multiple datasets;
* Relative bias of voxel-wise variance between actual foci count (per study) and intensity estimation (per study).
Here, relative bias is evaluated instead of bias, especially when applied to a variety of datasets with diverse foci counts.
#### 2.5.2 Localised inference with Wald tests on \(\mu_{ij}^{X}\) and \(\eta_{ij}^{X}\)
While our model is parameterised by \(P\) basis elements, users want to make inference at each of the \(N\) voxels. Hence, we will also explore localised inference on estimated spatial intensity \(\mu_{ij}^{X}\) (or \(\eta_{ij}^{X}=\log(\mu_{ij}^{X})\)) and regression coefficient of study-level covariates (\(\gamma\)) via Wald tests.
**Test of spatial homogeneity:** In the CBMA context, the most basic inference is a test of homogeneity to identify regions where more foci arise than would be expected if there were no spatial structure. Precisely, we use the null hypothesis on voxelwise intensity estimation or estimated linear response, \(H_{0}:\mu_{ij}^{X}=\mu_{0}=\sum\limits_{i=1}^{M}\sum\limits_{j=1}^{N}Y_{ij}/( MN)\) or \(\eta_{ij}^{X}=\eta_{0}=\log(\mu_{0})\) at voxel \(j\), for study \(i\). The standard error for \(\beta\) can be asymptotically estimated from the inverse of observed Fisher Information matrix, which gives rise to the standard error for the linear response \(\eta_{ij}^{X}\), and thus standard error for \(\mu_{ij}^{X}\) is obtained via delta method. It allows inference via Wald tests by examining voxelwise intensity estimation against null hypothesis of homogeneity over space. The signed Wald statistic for \(\mu_{ij}^{X}\) or \(\eta_{ij}^{X}\) takes the form:
\[Z_{\mu^{X}}=\frac{\mu_{ij}^{X}-\mu_{0}}{\mathrm{SE}(\mu_{ij}^{X})},\ \ Z_{\eta^{X}}=\frac{\eta_{ij}^{X}-\eta_{0}}{\mathrm{SE}(\eta_{ij}^{X})} \tag{25}\]
where \(SE(\mu_{ij}^{X})\) is the standard error of estimated spatial intensity \(\mu_{ij}^{X}\), and \(SE(\eta_{ij}^{X})\) is the standard error of estimated linear response \(\eta_{ij}^{X}\), and the statistics are Gaussian asymptotically. Finally, we can create p-value maps that are thresholded to control FDR at \(5\%\)(Benjamini and Hochberg, 1995).
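A hedged sketch of the resulting voxel-wise inference is given below: Wald \(Z\) statistics as in Equation (25), p-values, and Benjamini-Hochberg selection at \(5\%\) FDR. The standard errors are assumed to have been obtained from the inverse Fisher information (and the delta method for \(\mu^{X}\)); two-sided p-values are used here for simplicity.

```python
import numpy as np
from scipy.stats import norm

def homogeneity_test(mu_x: np.ndarray, se_mu_x: np.ndarray, mu_0: float, q: float = 0.05):
    """Voxel-wise Wald test against spatial homogeneity with BH control of the FDR."""
    z = (mu_x - mu_0) / se_mu_x
    p = 2.0 * norm.sf(np.abs(z))                     # two-sided p-values
    # Benjamini-Hochberg step-up procedure
    order = np.argsort(p)
    thresholds = q * np.arange(1, p.size + 1) / p.size
    below = p[order] <= thresholds
    significant = np.zeros(p.size, dtype=bool)
    if below.any():
        k_max = np.nonzero(below)[0].max()           # largest rank passing its threshold
        significant[order[: k_max + 1]] = True
    return z, p, significant
```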
#### 2.5.3 Inference on study-level covariates
For regression coefficient \(\gamma\) (\(s\times 1\)) of study-level covariates, we consider general linear hypothesis (GLH) tests through a contrast matrix \(C_{\gamma}\) (\(m\times s\)). Under the null hypothesis,
\[H_{0}:C_{\gamma}\gamma=\mathbf{0}_{m\times 1} \tag{26}\]
The test statistic asymptotically follows a \(\chi^{2}\) distribution with \(m\) degrees of freedom,
\[(C_{\gamma}\hat{\gamma})^{T}(C_{\gamma}\mathrm{Cov}(\hat{\gamma})C_{\gamma}^{ T})^{-1}(C_{\gamma}\hat{\gamma})\stackrel{{ D}}{{\longrightarrow}}\chi_{m}^{2} \tag{27}\]
and in the case of a single contrast \((m=1)\), a signed Z test can be computed. Details of GLH on study-level covariates can be found in S3.1 in Supplementary material.
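The GLH test of Equation (27) is a quadratic form in \(\hat\gamma\); a minimal sketch follows, with the covariance matrix of \(\hat\gamma\) assumed to come from the inverse Fisher information.

```python
import numpy as np
from scipy.stats import chi2

def glh_test(gamma_hat: np.ndarray, cov_gamma: np.ndarray, contrast: np.ndarray):
    """Chi-square test of H0: contrast @ gamma = 0, with df = number of contrast rows."""
    c_gamma = contrast @ gamma_hat
    c_cov_ct = contrast @ cov_gamma @ contrast.T
    stat = float(c_gamma @ np.linalg.solve(c_cov_ct, c_gamma))
    p_value = chi2.sf(stat, df=contrast.shape[0])
    # For a single contrast row (m = 1), a signed Z is c_gamma / sqrt(c_cov_ct).
    return stat, p_value
```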
## 3 Experiments
### Simulation settings
The statistical analyses of model estimation with CBMA data are conducted at voxel level: voxelwise test statistics are evaluated to examine the significance of experimental effect. Therefore, before investigating the model fitness, we evaluate our models' false positive rates (FPR) under null settings. Due to the computationally intensive nature of these evaluations, we only evaluated the two models that showed promise in other evaluations, Poisson and NB. Under the null hypothesis of spatial homogeneity, we use Monte Carlo (MC) simulation to establish the validity of FPR control for the test of spatial intensity (\(\mu^{X}\)). Specifically, we will explore meta-regression with Poisson or NB model, either with non-null study-level covariates or without study-level covariates. To ensure the validity of FPR control is applicable to all CBMA data, the sampling mechanism is either model-based or empirical, with simulated foci count always analogous to the foci count within a real dataset. Specifically, in model-based sampling, data generating mechanism matches with regression model, with number of studies and average foci per study identical to the original dataset; while in empirical sampling, real data foci locations are randomly shuffled to guarantee the spatial homogeneity of foci distribution.
### Applications to \(20\) meta-analytic datasets
Cognition concerns psychological and cognitive processes that focus on learning people's perception, interpretation and response to information and stimuli. It refers to both conscious procedure and unconscious, automatic mechanisms in the brain that occur as a response to stimuli, and is highly variable across individuals (Gallagher et al., 2019). Cognition has been studied intensively to identify brain regions that are involved in cognition tasks, conducted in an MRI scanner. For the purpose of evaluating the accuracy and sensitivity of this meta-regression framework, as well as analysing goodness of fit of stochastic models with respect to different CBMA datasets, \(20\) previously published meta-analytic datasets are used in this article, which involves multiple aspects of cognition research, as well as other stimulus-based and diagnosis-based research (as displayed in Table 1).
Preprocessing steps are summarised in Figure 2. The discrete sampling space of our analysis is the 2\(mm^{3}\) MNI atlas, with dimensions \(91\times 109\times 91\), and \(N=228483\) brain voxels. We first apply this brain mask to remove foci outside the brain and remove any multiple-foci (while original data peaks are always distinct, a foci count in excess of 1 cannot occur after rounding to the 2mm space). We then extract all the sufficient statistics after model factorisation in Section 2.3, including spatial design matrix \(X(N\times P)\) generated from B-spline bases, total foci count per voxel \(Y_{*}(N\times 1)\) and total foci count per study \(Y_{*}(M\times 1)\) and study-level covariates \(Z(M\times R)\) if considered.
\begin{table}
\begin{tabular}{l|l|l|l|l} \hline Dataset & number of contrasts & total count of foci & max foci count & average foci count \\ \hline
1. Social Processing & \(599\) & \(4934\) & \(47\) & \(8.24\) \\
2. PTSD & \(22\) & \(154\) & \(26\) & \(7.00\) \\
3. Substance Use & \(89\) & \(657\) & \(110\) & \(7.38\) \\
4. Dementia & \(28\) & \(1194\) & \(548\) & \(42.64\) \\
5. Cue Reactivity & \(275\) & \(3197\) & \(58\) & \(11.63\) \\
6. Emotion Regulation & \(338\) & \(3543\) & \(87\) & \(10.48\) \\
7. Decision Making & \(145\) & \(1225\) & \(49\) & \(8.45\) \\
8. Reward & \(850\) & \(6791\) & \(59\) & \(7.99\) \\
9. Sleep Deprivation & \(44\) & \(454\) & \(59\) & \(10.32\) \\
10. Naturalistic & \(122\) & \(1220\) & \(59\) & \(10.00\) \\
11. Problem Solving & \(282\) & \(3043\) & \(44\) & \(10.79\) \\
12. Emotion & \(1738\) & \(22038\) & \(203\) & \(12.68\) \\
13. Cannabis Use & \(81\) & \(314\) & \(16\) & \(3.88\) \\
14. Nicotine Use & \(13\) & \(77\) & \(23\) & \(5.92\) \\
15. Frontal Pole CBP & \(795\) & \(9525\) & \(57\) & \(11.98\) \\
16. Face Perception & \(385\) & \(2920\) & \(50\) & \(7.58\) \\
17. Nicotine Administration & \(75\) & \(349\) & \(24\) & \(4.65\) \\
18. Executive Function & \(243\) & \(2629\) & \(54\) & \(10.82\) \\
19. Finger Tapping & \(76\) & \(696\) & \(27\) & \(9.16\) \\
20. n-Back & \(29\) & \(640\) & \(69\) & \(22.07\) \\ \hline \end{tabular}
\end{table}
Table 1: Number of contrasts and foci counts of \(20\) meta-analytic datasets
## 4 Results
### Simulation results
For each of the \(20\) meta-analytic datasets, we simulate foci distribution under a null hypothesis of spatial homogeneity, estimate spatial intensity and investigate the distribution of voxel-wise p-values for the eight different scenarios: fitting Poisson or NB model, use of a model-based or empirical (random shuffling) data sampling mechanism, and use or omission of study-level covariates; for all settings we use B-spline knot of spacing \(20mm\) in \(x,y,z\) direction, producing \(P=456\) basis elements. The computation of test statistics depends on the covariance of regression coefficients, which is approximated by the inverse of Fisher Information matrix of optimised parameters at maximised log-likelihood (see Section 2.4). Empirically, we sometimes observe the Fisher Information matrix is ill-conditioned and numerically singular, which is associated with datasets having low foci count. Especially in datasets with few studies, some regions have essentially no foci, leading some of the \(\beta\) coefficients to be driven to negative infinity and produce an estimated rate of zero. In the experiments, we found that datasets with a total foci count of at least 200 generally avoided these singularity problems and produced accurate standard errors.
To establish the validity of spatial homogeneity tests (\(\mu_{j}^{X}=\mu_{0}\), \(\forall j=1,\cdots,N\)) for each \(20\) meta-analytic datasets, we compute p-values and create P-P plots. We compute 100 null realisations, each producing N p-values (one for each voxel), with the null expected \(-\log_{10}\) p-values ranging from \(-\log_{10}(N/(N+1))\approx 0\) to \(-\log_{10}(1/(N+1))=5.359\). For each expected null p-value, we plot the mean and 95\(\%\) prediction interval via a normal approximation (mean \(\pm\) 1.96 \(\times\) standard deviation, computed over 100 realisations). Since the P-P plots are very similar for each of the eight scenarios, we only display results for the setting of CBMR with Poisson model without study-level covariates, sampled with a model-based approach. Figure 3 shows the four representative \(-\log_{10}\) P-P plots (results for all 20 studies shown in Figure S7 in Supplementary material), with identity (dashed diagonal line), \(5\%\) significance (dashed horizontal line) and the FDR \(5\%\) boundary (solid diagonal line); gray shaded areas plot the point-wise \(95\%\) prediction intervals. It shows that p-values \(<0.05\approx 10^{-1.3}\) are valid, and extreme p-values can skew liberal; the worst affected cases are datasets with very few foci (e.g. analysis 14). In general, datasets with total foci counts less than 200 show poor behaviour.
Figure 2: Preprocessing pipeline of \(20\) meta-analytic datasets before fitting CBMR framework. Note that panel A and B are applicable to all datasets, which generate a spatial design matrix \(X\), total foci count per voxel \(Y_{*}(N\times 1)\) and total foci count per study \(Y_{*}(M\times 1)\). While covariates matrix \(Z(M\times R)\) in panel C is integrated into CBMR only if the effect of study-level covariates is considered.
Since multiple testing correction requires valid p-values far smaller than \(0.05\), we focus on control of FDR in these null simulations. None of the 20 datasets have valid FDR control (PP-plots or prediction intervals fall above the \(5\%\) Benjamini-Hochberg threshold). However, the PP plots generally show valid p-values \(<10^{-3}\), and if we truncate p-values by replacing any p-value smaller than \(10^{-3}\) with that value, we obtain valid (if conservative) FDR control (Table 2). This pragmatic approach could impact power, but empirical results (Section 4.2) suggest the inferences based on truncated p-values remain sensitive.
### Results from \(20\) meta-analytic datasets
We first evaluate the goodness of fit among the likelihood-based stochastic models (Poisson, NB and clustered NB model) via comparisons of maximised log-likelihood, AIC and BIC. As shown in Figure S8 in S3.3 of Supplementary material, CBMR with the NB model outperforms the two other likelihood-based stochastic models in every dataset. This is not surprising, as the NB model is the only stochastic approach that allows for anticipated excess variance relative to Poisson at voxel level; clustered NB is better than Poisson for the majority of these \(20\) meta-analytic datasets, but only by a small margin. It is conceivable that, although a study-wise global dispersion parameter exists in the clustered NB model, the count data remain completely specified by a Poisson model at voxel level conditional on that study effect. As there exist nested relationships between Poisson and both the NB and clustered NB models (with dispersion parameter \(\alpha=0\)), we also conduct the LRT to evaluate the trade-off between sufficiency and complexity of the model. We found the null hypothesis that the nested model (Poisson) suffices relative to the full model (NB) is rejected for all datasets, with \(p\)-value less than \(10^{-8}\). The clustered NB model is preferred over Poisson for the majority of the \(20\) meta-analytic datasets (with \(p\)-value less than \(10^{-8}\)) (more details to follow in Table S5 in Appendix S3.3 of Supplementary material).
Figure 3: P-P plot of \(p\)-value (under \(-\log_{10}\) scale) with four representative meta-analytic datasets (Social Processing, Substance Use, Cannabis Use and PTSD datasets), estimated by CBMR with Poisson model without study-level covariates, sampled with model-based approach.
Apart from model comparisons via likelihood-based criteria (LRT, AIC and BIC), we also integrate the Quasi-Poisson model into comparisons of model fitness, based on bias and variance criteria which are applicable to Quasi-likelihood models. The results in Figure 4(a) suggest that the four stochastic models (Poisson, NB, clustered NB and Quasi-Poisson model) all give rise to accurate intensity estimation with relative bias of the sum of intensity estimation (per study) less than \(0.5\%\), among which the Poisson model has the lowest median relative bias (\(0.05\%\)) across the \(20\) meta-analytic datasets. The Quasi-Poisson and NB models display both underestimation and overestimation of the study-wise sum of intensity across the \(20\) meta-analytic datasets, while the sum of intensity is always overestimated with the Poisson and clustered NB models. The results in Figure 4(b) suggest that the CBMR framework also provides an accurate estimation of the standard deviation (Std) of intensity in each of the \(x,y,z\) dimensions, with relative bias controlled within \(-0.2\%\) to \(0.1\%\) for all stochastic models across the \(20\) meta-analytic datasets, and the estimated intensity along the \(x\) axis is the most accurate (with smallest Std bias). As shown in Figure 4(c), CBMR with the Poisson model displays the largest negative bias of variance between intensity estimation and foci count, which suggests that the excess variance cannot be explained by the Poisson assumption (only voxels with nonzero foci count are studied and included in this plot). The clustered NB model displays the second largest negative relative bias in voxel-wise variance estimation (per study), which is potentially related to the fact that it evaluates a study-specific over-dispersion parameter over space, while the intensity function is modelled by a Poisson model at voxel level. Small relative bias is found in both the NB and Quasi-Poisson models (with medians \(-0.28\%\) and \(-0.25\%\)), with less variation in relative bias across multiple datasets for the Quasi-Poisson model, which suggests both models are capable of dealing with excess variance in CBMA data.
Overall, we regard these evaluations as evidence that the NB model is preferred. While it has slight bias for total intensity (Figure 4(c)), it has much more accurate variance than the Poisson model.
### Comparison with ALE
We compared our CBMR results to a widely used approach, computing tests for spatial homogeneity across space with both CBMR and ALE. For simplicity, we only demonstrate the comparison of detected activation regions on the Cue Reactivity dataset (total foci count of \(6288\)) (Hill-Bowen et al., 2021). For comparison purposes, we show z statistic values at all voxels significant at \(\alpha=0.05\) uncorrected in Figure 5. Here, we choose FWHM=\(14\) to obtain comparable spatial resolution between ALE and CBMR. Consistency in activation regions is found in the left cerebral cortex, frontal orbital cortex, insular cortex, and left and right accumbens, while the spatial specificity of activation regions differs slightly between ALE and CBMR, with ALE detecting slightly more voxels.
Another criterion of consistency is the dice similarity coefficient (DSC), the intersection of ALE and CMBR significant voxels divided by the average number of significant voxels. As shown in Table 3, ALE appears generally more sensitive than CBMR regardless of foci counts in the datasets,
\begin{table}
\begin{tabular}{l|c|c|l|l|l} \hline \hline Dataset & Before & After & Dataset & Before & After \\ \hline
1. Social Processing & \(44\%\) & \(0\%\) & 2. PTSD & \(100\%\) & \(0\%\) \\
3. Substance Use & \(26\%\) & \(0\%\) & 4. Dementia & \(16\%\) & \(0\%\) \\
5. Cue Reactivity & \(28\%\) & \(0\%\) & 6. Emotion Regulation & \(23\%\) & \(0\%\) \\
7. Decision Making & \(18\%\) & \(0\%\) & 8. Reward & \(43\%\) & \(0\%\) \\
9. Sleep Deprivation & \(30\%\) & \(0\%\) & 10. Naturalistic & \(22\%\) & \(0\%\) \\
11. Problem Solving & \(26\%\) & \(0\%\) & 12. Emotion & \(100\%\) & \(0\%\) \\
13. Cannabis Use & \(63\%\) & \(0\%\) & 14. Nicotine Use & \(94\%\) & \(0\%\) \\
15. Frontal Pole CBP & \(90\%\) & \(0\%\) & 16. Face Perception & \(19\%\) & \(0\%\) \\
17. Nicotine Administration & \(54\%\) & \(0\%\) & 18. Executive Function & \(22\%\) & \(0\%\) \\
19. Finger Tapping & \(22\%\) & \(0\%\) & 20. n-Back & \(27\%\) & \(0\%\) \\ \hline \hline \end{tabular}
\end{table}
Table 2: The percentage of invalid FDR control (before/after p-value truncated at \(10^{-3}\)) in \(20\) meta-analytic datasets over \(100\) realisations
Figure 4: Results from bias-related model comparison criteria, fitted with four stochastic models on each of \(20\) meta-analytic datasets: (a) Boxplots of relative bias of sum of intensity estimation (per study); (b) Boxplots (each of \(x,y,z\) dimensions) of relative bias of standard deviation of intensity estimation (per study); (c) Boxplots of relative bias of voxelwise variance of intensity estimation (per study).
though the DSC varies from \(71.89\%\) to \(80.33\%\) on the datasets with more than \(1200\) foci, which demonstrates good similarity between the methods.
ALE evaluates experimental effects by testing probabilistic maps (generated by a Gaussian kernel) against the null hypothesis, while CBMR estimates activation intensity and conducts hypothesis testing at the voxel level; both neglect the effect of testing all voxels simultaneously and cannot control the rate of false rejections. Some researchers proposed a conservative threshold (\(\alpha=0.0001\)) on the uncorrected p-values to reduce type I error (Turkeltaub et al., 2002), while a more principled approach is to control the false discovery rate (FDR) via the Benjamini-Hochberg (BH) procedure. Figure 6 shows a comparison of results using a \(5\%\) FDR threshold, where the CBMR (Poisson) p-values use a \(10^{-3}\) truncation, and Table 4 shows a comparison of the number of detected voxels. The sensitivity of ALE and CBMR is comparable, with sometimes ALE and sometimes CBMR detecting more voxels. The DSC varies between \(70.55\%\) and \(79.76\%\) for datasets with more than \(1225\) foci, indicating consistent activation regions between the ALE and CBMR approaches after FDR correction.
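For reference, the BH step with the truncation used here can be sketched as follows; this is a minimal illustration that interprets the truncation as flooring p-values at \(10^{-3}\), not the released implementation:

```python
import numpy as np

def bh_fdr(pvals, alpha=0.05, floor=1e-3):
    """Benjamini-Hochberg step-up procedure with p-values floored at `floor`."""
    p = np.maximum(np.asarray(pvals, dtype=float), floor)   # truncate p-values at 1e-3
    m = p.size
    order = np.argsort(p)
    thresholds = alpha * np.arange(1, m + 1) / m             # BH critical values
    passed = p[order] <= thresholds
    reject = np.zeros(m, dtype=bool)
    if passed.any():
        k = np.nonzero(passed)[0].max()                      # largest rank passing its threshold
        reject[order[: k + 1]] = True                        # reject that hypothesis and all smaller p-values
    return reject
```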
### Effect of study-level covariates
Unlike ALE, CBMR is based on explicit probabilistic models and can therefore estimate the effect of study-level covariates. Here, we integrate two study-level covariates, study-wise (square root) sample size and year of publication (after centring and standardisation), into the CBMR framework for each of the \(20\) meta-analytic datasets. We find, for example, that on the Cue Reactivity dataset the year of publication is not significant (\(Z=-0.6880,p=0.4915\)), while sample size is significant (\(Z=6.1454,p<10^{-8}\)) (see Table S6 for the \(p\)-values and \(Z\)-scores of the study-level covariates on each of the \(20\) meta-analytic datasets).
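These covariate tests are standard Wald tests on the fitted coefficients; a minimal sketch (the coefficient estimate and standard error names are illustrative placeholders):

```python
from scipy.stats import norm

def wald_z_test(coef_hat, std_err):
    """Two-sided Wald test for a single study-level covariate coefficient."""
    z = coef_hat / std_err
    p = 2.0 * norm.sf(abs(z))   # two-sided p-value under the standard normal
    return z, p
```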
Figure 5: Activation maps (for significant uncorrected p-values under \(5\%\) significance level, presented in Z-scores) generated by ALE (with FWHM=\(14\)) and CBMR (with a variety of stochastic models) on the Cue Reactivity dataset with axial slices at \(z=-24,-12,0,12,24,36,48\). Under the null hypothesis of spatial homogeneity, activation regions with z-scores corresponding to uncorrected p-values below the significance level \(0.05\) are highlighted.
## 5 Discussion
In this work we have presented a meta-regression framework with a spatial model as a general approach for CBMA data, where we have considered multiple stochastic models and allowed for study-level factors (e.g., sample size and year of publication). Our approach uses a spline parameterization to model the smooth spatial distribution of activation foci, and fits a generalised linear model with different variants of voxelwise (Poisson model, NB model and Quasi-Poisson model) or study-wise (Clustered NB model) statistical distributions. Our approach is a computationally efficient alternative
\begin{table}
\begin{tabular}{l|l|l|l|l|l} \hline \hline Dataset & n\_ foci & \(|AR_{CBMR}|\) & \(|AR_{ALE}|\) & \(|AR_{CBMR}\cap AR_{ALE}|\) & DSC \\ \hline
14. Nicotine Use & 77 & 209 & 0 & 0 & 0.00\% \\
2. PTSD & \(154\) & 0 & 1201 & 0 & 0.00\% \\
13. Cannabis Use & \(314\) & 313 & 152 & 17 & 7.31\% \\
17. Nicotine Administration & \(349\) & 1338 & 943 & 522 & 45.77\% \\
9. Sleep Deprivation & \(454\) & 176 & 0 & 0 & 0.00\% \\
20. n-Back & \(640\) & 11456 & 17725 & 10212 & 69.99\% \\
3. Substance Use & \(657\) & 3145 & 2082 & 1225 & 46.87\% \\
19. Finger Tapping & \(696\) & 12410 & 23837 & 11590 & 63.95\% \\
4. Dementia & \(1194\) & 5126 & 7931 & 3142 & 48.13\% \\
10. Naturalistic & \(1220\) & 4192 & 3241 & 1861 & 50.07\% \\
7. Decision Making & \(1225\) & 15331 & 20468 & 12628 & 70.55\% \\
18. Executive Function & \(2629\) & 26039 & 37797 & 24690 & 77.67\% \\
16. Face Perception & \(2920\) & 28893 & 38193 & 25533 & 76.12\% \\
11. Problem Solving & \(3043\) & 28221 & 39091 & 25675 & 76.29\% \\
5. Cue Reactivity & \(3197\) & \(30382\) & 38847 & 27375 & 78.57\% \\
6. Emotion Regulation & \(3543\) & 23388 & 31620 & 20065 & 72.92\% \\
1. Social Processing & \(4943\) & 34317 & 45263 & 28555 & 71.76\% \\
8. Reward & \(6791\) & \(33021\) & 39743 & 28728 & 78.96\% \\
15. Frontal Pole CBP & \(9525\) & 44030 & 55251 & 39594 & 79.76\% \\
12. Emotion & \(22038\) & 50480 & 57321 & 41918 & 77.77\% \\ \hline \hline \end{tabular}
\end{table}
Table 4: Number of voxels in activation regions of ALE (FWHM=\(14\)) and CBMR (with Poisson), based on FDR corrected p-values (using BH procedure) with \(5\%\) significance level, as well as Dice similarity coefficient in \(20\) meta-analytic datasets (Datasets are listed in an ascending order according to total number of foci).
\begin{table}
\begin{tabular}{l|l|l|l|l|l} \hline \hline Dataset & n\_ foci & \(|AR_{CBMR}|\) & \(|AR_{ALE}|\) & \(|AR_{CBMR}\cap AR_{ALE}|\) & DSC \\ \hline
14. Nicotine Use & 77 & 1312 & 12431 & 1154 & 17.79\% \\
2. PTSD & \(154\) & 6306 & 15866 & 5067 & 45.71\% \\
13. Cannabis Use & \(314\) & 11841 & 18390 & 8235 & 54.48\% \\
17. Nicotine Administration & \(349\) & 11546 & 18916 & 8028 & 52.71\% \\
9. Sleep Deprivation & \(454\) & 10250 & 15461 & 5732 & 44.59\% \\
20. n-Back & \(640\) & 19404 & 31512 & 17627 & 69.24\% \\
3. Substance Use & \(657\) & 19024 & 26477 & 13602 & 59.79\% \\
19. Finger Tapping & \(696\) & 19067 & 33914 & 17939 & 67.72\% \\
4. Dementia & \(1194\) & 16244 & 30437 & 12464 & 53.41\% \\
10. Naturalistic & \(1220\) & 22328 & 29442 & 15344 & 59.28\% \\
7. Decision Making & \(1225\) & 28284 & 36735 & 23372 & 71.89\% \\
12. Emotion & \(2038\) & 57698 & 67699 & 48847 & 77.91\% \\
18. Executive Function & \(2629\) & 33848 & 46679 & 31698 & 78.73\% \\
16. Face Perception & \(2920\) & 41682 & 53109 & 36710 & 77.45\% \\
11. Problem Solving & \(3043\) & 38466 & 51315 & 34757 & 77.43\% \\
5. Cue Reactivity & \(3197\) & 41242 & 52371 & 37301 & 79.69\% \\
6. Emotion Regulation & \(3543\) & 36602 & 48157 & 31176 & 73.56\% \\
1. Social Processing & \(4934\) & 48376 & 61136 & 40740 & 74.40\% \\
15. Frontal Pole CBP & \(9525\) & 53165 & 65339 & 47595 & 80.33\% \\
8. Reward & \(6791\) & 43048 & 51721 & 37711 & 79.59\% \\ \hline \hline \end{tabular}
\end{table}
Table 3: Number of voxels in activation regions of ALE (FWHM=\(14\)) and CBMR (with Poisson), based on uncorrected p-values with \(5\%\) significance level, as well as Dice similarity coefficient in \(20\) meta-analytic datasets (Datasets are listed in an ascending order according to total number of foci).
to previous Bayesian spatial regression models, providing the flexibility and interpretability of a regression model while jointly modelling all of space. For comparison, the implementation of Bayesian log-Gaussian Cox process regression needed approximately \(30\) hours on an NVIDIA Tesla K20c GPU card (Samartsidis et al., 2019), while our meta-regression runs in roughly \(20\) minutes on an NVIDIA GTX 1080 graphics card. Furthermore, as a more intuitive and interpretable approach derived from the generalised linear model, we believe our meta-regression framework is more comprehensible to practitioners than a spatial posterior intensity function. Through simulations on synthetic data (with simulated foci counts analogous to the foci counts in each of the \(20\) meta-analytic datasets), we demonstrated valid FDR control for the spatial homogeneity null hypothesis after truncating \(p\)-values below \(10^{-3}\). Across the \(20\) meta-analytic datasets, we found that the NB model is the most accurate stochastic model in model comparisons via LRT, AIC and BIC, and has the smallest relative bias in both the mean and variance of intensity estimation (per study), while the Poisson and clustered NB models cannot explain the over-dispersion observed in foci counts. We also compared the activation regions found by the ALE and CBMR approaches, supporting the validity and robustness of CBMR, especially on datasets with relatively high foci counts, e.g., datasets with at least \(200\) total foci.
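To make the model structure concrete, the voxelwise Poisson variant with study-level covariates can be sketched in PyTorch as follows; this is a minimal illustration with assumed tensor names and shapes, not the released implementation:

```python
import torch

def poisson_cbmr_nll(beta, gamma, B, Z, Y):
    """Negative log-likelihood of the voxelwise Poisson model (up to an additive constant).

    B: (n_voxels, n_basis) B-spline basis evaluated at each voxel
    Z: (n_studies, n_covariates) study-level covariates (e.g. sqrt sample size, year)
    Y: (n_studies, n_voxels) foci counts per study and voxel
    beta: (n_basis,) spatial coefficients; gamma: (n_covariates,) covariate effects
    """
    log_mu = (B @ beta)[None, :] + (Z @ gamma)[:, None]   # study-specific log intensity surface
    return (torch.exp(log_mu) - Y * log_mu).sum()

# Fitting sketch with L-BFGS (the closure must re-evaluate the loss and its gradients):
#   opt = torch.optim.LBFGS([beta, gamma])
#   def closure():
#       opt.zero_grad(); loss = poisson_cbmr_nll(beta, gamma, B, Z, Y); loss.backward(); return loss
#   opt.step(closure)
```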
There are a few limitations to our work. Here we have only considered a single group of studies. In future work, we will extend our method to estimate the spatial intensity function of multiple groups (e.g., multiple types of stimuli within a cognitive task), so that we can investigate the consistency and differences of activation regions via group comparison. Meanwhile, we currently do not use a regularisation term on the spatial regression coefficients of CBMR. We have considered a Firth-type penalty, which indeed guarantees convergent estimates (especially in brain regions without any foci) and removes the first-order asymptotic bias term of maximum likelihood estimates, but it also causes a significant over-estimation of intensity at the edge of the brain mask. The edge effect induced by
Figure 6: Activation maps (for significant FDR corrected p-values under \(5\%\) significance level, presented in Z-scores) generated by ALE and CBMR with FDR correction (by BH procedure) with truncated p-values of Cue Reactivity dataset. The figure is shown with axial slices at \(z=-24,-12,0,12,24,36,48\). Under the null hypothesis of spatial homogeneity, activation regions with z-scores corresponding to corrected p-values below the significance level \(0.05\) are highlighted.
the Firth-type penalty relates to the structure of the Jeffreys prior and the higher intensity associated with edge and corner basis elements. However, it is plausible to regularise the likelihood function with an alternative penalty term (e.g., an \(L_{1}\) or \(L_{2}\) norm) in the future and to determine the optimal value of the hyper-parameter. To estimate the variance of the voxelwise spatial intensity, we approximate the covariance of the spatial regression coefficients by inverting the Fisher information matrix. This can give rise to numerical instability because the dimension of the Fisher information matrix is large (there are hundreds or even thousands of elements in the spline basis), and it may even be numerically singular for datasets with low foci counts, since most voxels have near-zero intensity estimates. We have tried several approaches to improve numerical stability, including adding an extremely small epsilon (\(10^{-6}\)) or \(1\%\) of the largest diagonal element to the diagonal of the Fisher information matrix, and computing the Fisher information under the null hypothesis of homogeneity. However, all of these efforts produced underestimation of the variance of the voxelwise spatial intensity and gave rise to invalid p-values. In future work, we might consider non-parametric methods to estimate the covariance of the spatial regression coefficients instead of inverting the Fisher information, or add a regularisation term on B-spline roughness to avoid very negative spatial regression coefficients.
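As an illustration of the stabilisation attempts described above, the ridge-stabilised inversion and the delta-method intensity variance can be sketched as follows; this is a hypothetical sketch, not the final solution adopted here:

```python
import numpy as np

def coef_covariance(fisher_info, jitter=1e-6):
    """Covariance of the spatial regression coefficients as the (ridge-stabilised)
    inverse of the Fisher information matrix."""
    k = fisher_info.shape[0]
    return np.linalg.inv(fisher_info + jitter * np.eye(k))

def voxel_intensity_variance(B, beta, cov_beta):
    """Delta-method variance of mu_j = exp(B_j @ beta) at every voxel."""
    mu = np.exp(B @ beta)                                # (n_voxels,)
    quad = np.einsum("jk,kl,jl->j", B, cov_beta, B)      # B_j^T Cov(beta) B_j per voxel
    return (mu ** 2) * quad
```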
Another potential direction is to conduct meta-analysis with data from multiple sources, specifically, integrating additional information about reported foci or full statistic maps (e.g., p-values or t-scores) when available. Some researchers have proposed Markov melding as a fully Bayesian framework for joining probabilistic sub-models, where evidence from each source is specified in its own sub-model, and the sub-models are joined while preserving all information and uncertainty (Goudie et al., 2019). Such an approach might enrich the inference obtained from CBMR by integrating the magnitude of CBMA activation or even image-based meta-analysis data. Finally, it is worth considering a zero-inflated stochastic model (e.g., a zero-inflated Poisson or NB model): as the current datasets only consist of studies with at least one focus, there may be more zero foci counts than observed. Excess zeros are separated and modelled independently in zero-inflated models, which might provide a more accurate approximation for low-rate count data.
## Software
Implementation in the form of Python and PyTorch code can be found in a GitHub repository. The CBMR framework has also been implemented and integrated into the NiMARE Python package.
## Acknowledgments
The computational aspects of this research were supported by the Wellcome Trust Core Award Grant Number 203141/Z/16/Z and the NIHR Oxford BRC. The views expressed are those of the authors and not necessarily those of the NHS, the NIHR or the Department of Health.
Conflict of Interest: The authors declare no conflict of interest.
## Funding
This work was supported by the National Institutes of Health (NIH) under Award Number H6R00550_CS00.01. |
2306.16241 | Focus on the Sound around You: Monaural Target Speaker Extraction via
Distance and Speaker Information | Previously, Target Speaker Extraction (TSE) has yielded outstanding
performance in certain application scenarios for speech enhancement and source
separation. However, obtaining auxiliary speaker-related information is still
challenging in noisy environments with significant reverberation. Inspired by
the recently proposed distance-based sound separation, we propose the near
sound (NS) extractor, which leverages distance information for TSE to reliably
extract speaker information without requiring previous speaker enrolment,
called speaker embedding self-enrollment (SESE). Full- & sub-band modeling is
introduced to enhance our NS-Extractor's adaptability towards environments with
significant reverberation. Experimental results on several cross-datasets
demonstrate the effectiveness of our improvements and the excellent performance
of our proposed NS-Extractor in different application scenarios. | Jiuxin Lin, Peng Wang, Heinrich Dinkel, Jun Chen, Zhiyong Wu, Zhiyong Yan, Yongqing Wang, Junbo Zhang, Yujun Wang | 2023-06-28T14:09:46Z | http://arxiv.org/abs/2306.16241v2 | # Focus on the Sound around You: Monaural Target Speaker Extraction via Distance and Speaker Information
###### Abstract
Previously, Target Speaker Extraction (TSE) has yielded outstanding performance in certain application scenarios for speech enhancement and source separation. However, obtaining auxiliary speaker-related information is still challenging in noisy environments with significant reverberation. Inspired by the recently proposed distance-based sound separation, we propose the near sound (NS) extractor, which leverages distance information for TSE to reliably extract speaker information without requiring previous speaker enrolment, called speaker embedding self-enrollment (SESE). Full- & sub-band modeling is introduced to enhance our NS-Extractor's adaptability towards environments with significant reverberation. Experimental results on several cross-datasets demonstrate the effectiveness of our improvements and the excellent performance of our proposed NS-Extractor in different application scenarios.
Jiuxin Lin\({}^{1,*}\), Peng Wang\({}^{2,*}\), Heinrich Dinkel\({}^{2}\), Jun Chen\({}^{1}\), Zhiyong Wu\({}^{1,\dagger}\), Yongqing Wang\({}^{2}\), Zhiyong Yan\({}^{2}\), Junbo Zhang\({}^{2}\), Yujun Wang\({}^{2}\)

\({}^{1}\)Shenzhen International Graduate School, Tsinghua University, Shenzhen, China

\({}^{2}\)Xiaomi Inc., Beijing, China [email protected], [email protected]

Footnote \({}^{\dagger}\): Corresponding author.
**Index Terms**: target speaker extraction, distance-based sound separation
## 1 Introduction
Target Speaker Extraction (TSE) [1], also known as Target Speech Extraction, is an essential task in the field of audio processing that involves separating a speech signal of a specific speaker from an audio mixture containing multiple speakers. This task has become increasingly important in recent years with the rise of various speech-based applications such as speech recognition [2], speaker verification [3], and audio conferencing. While blind speech separation (BSS) is limited by permutation invariant training (PIT) [4], TSE methods face no such restriction. Moreover, while TSE can extract the desired speaker's speech directly, BSS outputs several speech signals from different speakers, which requires manual selection. Nevertheless, TSE has a disadvantage: auxiliary information related to the target speaker, such as an enrolled voice [5, 6, 7] or lip movements [8, 9, 10], is required in advance. Typically, this necessitates allocating additional resources and encroaching upon the privacy of the information involved.
Recently, [11] proposed distance-based sound separation (DSS), which can separate monaural audio sources by the perceived distance (due to reverberation) between a listener and a sound emitter. DSS produces two audio signals, one from within a fixed threshold distance ("near") and another from outside the distance ("far"). Currently, DSS may face certain limitations in practical applications. First, the threshold distance for separation cannot be arbitrarily changed during inference, which might result in having multiple "near" sources due to an intrusive sound source coming into the threshold distance range. As an example, within a meeting, multiple sources might be of equal distance to the microphone, which the approach in [11] is unable to separate. Furthermore, due to the heavy reliance on the reverberation effect, distance-based separation is limited to smaller rooms with a longer reverberation time (RT60), while many offices are in large rooms with a faint reverberation effect. Lastly, previous works based on LSTM [12] can be further optimized to use more modern separation models, which could significantly enhance the user experience. Our work is inspired by the human perception of the cocktail party problem, where humans can selectively focus on a specific sound source (i.e., speaker) if it is closer to them, while still filtering noise from far away sources. Thus we believe that if we incorporate this distance-based source separation into TSE, we can achieve a more potent separation performance.
Although separating mixed audio signals with and without reverberation may appear to be similar tasks, there are significant differences between the two in practice. Reverberation can cause several issues in speech modeling [13], including: (a) creating echoes that overlap with the original speech signal; (b) dampening the high-frequency components of the speech signal; (c) introducing a delay between the original speech signal and the reverberant sound. All of these can make speech harder to understand. Therefore, when conducting TSE in a reverberant environment, a different approach must be taken compared to regular TSE.
While time-domain approaches have seen success on commonly used benchmark datasets such as WSJ0-2mix [14], some of them, such as Conv-TasNet [15], generally perform poorly when faced with reverberant audio [16]. This performance decay has been analysed in [17], where time-frequency (spectral) domain frameworks were found to offer superior separation performance. Additionally, the results of [18] indicate that a sub-band model is capable of modelling the reverberation effect by focusing on the temporal evolution of the narrow-band spectrum.
In this work, we propose the **N**ear **S**ound Extractor (NS-Extractor), a TSE model combining full- and sub-band modeling with speaker embedding self-enrollment (SESE). NS-Extractor utilizes the perceived distance to the target speaker as a cue to extract a self-enrolled speaker embedding that represents the voice print of the target speaker, which is then used for further extraction. Full- and sub-band modeling are integrated to attain greater stability in extraction performance. Experimental results show that our proposed NS-Extractor not only outperforms the baselines in terms of signal and perceptual quality but also exhibits superior performance in more complex scenarios.
## 2 Methodology
### Problem description
Assuming a \(K\)-speaker mixture recorded in anechoic conditions, one can formulate the physical model in the time domain as \(\mathbf{x}[t]=\sum_{k=1}^{K}\mathbf{s}^{(k)}[t]\), where \(\mathbf{x}\) represents the mixture and \(\mathbf{s}^{(k)}\) source \(k\) in this mixture, and \(t\) indexes \(T\) time samples. The sound envisioned in our work is emitted in a confined space, where each source can be formulated as \(\mathbf{s}^{(k)}=\mathbf{d}^{(k)}\star\mathbf{r}^{(k)}\). \(\mathbf{d}^{(k)}\) and \(\mathbf{r}^{(k)}\) represent the direct-path signal and reverberation, respectively and convolution is denoted by \(\star\).
In order to provide a clearer exposition of our work, we provide a comparative analysis between our approach, traditional TSE techniques, and distance-based sound separation, highlighting all their discrepancies. Illustrated in Figure 1(a), distance-based sound separation in [11] separates mixed audio based on the distance of sound sources in space, which can be expressed as:
\[\mathbf{x}\longrightarrow\sum_{k_{i}}^{K_{\text{near}}}\mathbf{s}^{(k_{i})}+\sum_{k_{j}}^{K_{\text{far}}}\mathbf{s}^{(k_{j})},\]
where the two terms are the sum of near and far targets' sounds respectively. This modeling approach also indicates that the estimated targets (near, or far) may contain more than one sound (multiple speakers). By leveraging the auxiliary speaker-related information provided, TSE (Figure 1(b)) is capable of extracting the target speech from mixed audio. The process can be depicted as follows:
\[\mathbf{x}\overset{\mathbf{a}}{\longrightarrow}\mathbf{s}^{(k_{g})},\]
where \(\mathbf{a}\) is the auxiliary speaker-related information and \(\mathbf{s}^{(k_{g})}\) represents the target speech of the single speaker with index \(k_{g}\). As illustrated in Figure 1(c), our proposed NS-Extractor can exclusively extract a single target speech within close proximity using an enrolled speaker embedding, which is obtained from the intermediate target source T-F embeddings. Thus, additional auxiliary speaker information is not required. The detailed process will be described in Section 2.2.1.
### NS-Extractor
Our extractor model is based on performing complex spectral mapping [19, 20, 21], whereby the real and imaginary (RI) components of \(\mathbf{X}\in\mathbb{R}^{2\times F\times T}\) are concatenated to form the input features, which are then utilized to predict the RI components of each speaker \(\mathbf{S}^{(\mathsf{c})}\in\mathbb{R}^{2\times F\times T}\). Adhering to the methodology of TF-GridNet [22], our proposed NS-Extractor first employs 2D Convolution (Conv2D) with a \(3\times 3\) kernel and global layer normalization (gLN) to compute \(D\)-dimensional embeddings for each T-F unit \(\mathbf{H}_{\mathbf{x}}^{(1)}\in\mathbb{R}^{D\times F\times T}\). \(\mathbf{H}_{\mathbf{x}}^{(1)}\) is then fed into \(C\) stacks of extractor blocks, with each consisting of SESE and full- & sub-band modeling to refine the T-F embeddings progressively. The extractor outputs \(\widehat{\mathbf{H}_{\mathbf{x}}}\); a 2D deconvolution (Deconv2D) with 2 output channels and a \(3\times 3\) kernel followed by linear activation is then used to obtain the predicted RI components \(\mathbf{Y}\in\mathbb{R}^{2\times F\times T}\) from \(\widehat{\mathbf{H}_{\mathbf{x}}}\).
#### 2.2.1 Speaker embedding self-enrollment
Each SESE step includes both speaker encoding and speaker embedding fusion. At each block of the extractor, the input \(\mathbf{H}_{\mathbf{x}}^{(\mathsf{c})}\) is chained to the output of the preceding block, while \(\mathbf{H}_{\mathbf{x}}^{(1)}\) is directly obtained by encoding the original mixed input spectrum \(\mathbf{X}\). The input of the speaker encoder \(\mathbf{R}^{(\mathsf{c})}\in\mathbb{R}^{F\times T}\) is derived from \(\mathbf{H}_{\mathbf{x}}^{(\mathsf{c})}\) through a \(1\times 1\) Conv2D. The speaker encoder consists of a stack of 3 residual blocks followed by an adaptive average pooling layer (AvgPool) [6]. The 1D-AvgPool layer, with a kernel size of 3, compresses the temporal dimension of speaker embeddings in extractor block \(c\). The resulting single vector \(\mathbf{E}^{(\mathsf{c})}\in\mathbb{R}^{1\times F}\) serves as a speaker identity encoding.
Prior to the speaker embedding fusion, a concatenation of speaker embeddings \(\mathbf{E}^{(\mathsf{c})}\) and T-F embeddings \(\mathbf{H}_{\mathbf{x}}^{(\mathsf{c})}\) is required. \(\mathbf{E}^{(\mathsf{c})}\) is replicated across temporal dimension and concatenated with \(\mathbf{H}_{\mathbf{x}}^{(\mathsf{c})}\) along dimension \(D\) to form a tensor with shape \((D+1)\times T\times F\). Conv2D with a \(1\times 1\) kernel is employed to restore the dimension to \(D\times T\times F\).
\[\hat{\mathbf{H}}_{\mathbf{X}}^{(\mathsf{c})}=\mathrm{Conv2D}(\mathrm{Concat}( \mathbf{H}_{\mathbf{x}}^{(\mathsf{c})},\mathbf{E}^{(\mathsf{c})}),D+1,D)\in \mathbb{R}^{D\times T\times F},\]
where \(D+1\) and \(D\) represent the number of input and output channels respectively.
The speaker embedding fusion block is employed to model the internal relationship inside \(\hat{\mathbf{H}}_{\mathbf{X}}^{(\mathsf{c})}\). The input tensor \(\hat{\mathbf{H}}_{\mathbf{X}}^{(\mathsf{c})}\in\mathbb{R}^{D\times T\times F}\) is viewed as \(T\) separate sequences, each with length \(F\). To model the local relationship between a speaker and spectral information at the frame level, a single-layer bidirectional LSTM (BLSTM) architecture is utilized. The unfold and layer normalization (LN) operations in [22] are employed as follows:
\[\mathbf{U}^{(\mathsf{c})}=\left[\mathrm{Unfold}(\hat{\mathbf{H}}_{\mathbf{X}}^{(\mathsf{c})}[:,t,:]),\,\mathrm{for}\,\,t=1,\dots,T\right]\in\mathbb{R}^{(I\times D)\times T\times F},\] \[\hat{\mathbf{U}}^{(\mathsf{c})}=\left[\mathrm{BLSTM}(\mathrm{LN}(\mathbf{U}^{(\mathsf{c})})[:,t,:]),\,\mathrm{for}\,\,t=1,\dots,T\right]\in\mathbb{R}^{2H\times T\times F},\]
where \(I\) and \(J\) represent kernel size and stride size respectively, \(H\) denotes the number of hidden units in BLSTMs in each direction. Subsequently, a 1D deconvolution (Deconv1D) layer with kernel size \(I\), stride size \(J\), input channel \(2H\) and output channel \(D\) is applied to the hidden embeddings of the BLSTM:
\[\tilde{\mathbf{U}}^{(\mathsf{c})}=[\mathrm{Deconv\,\,1D}(\hat{\mathbf{U}}^{( \mathsf{c})}[:,t,:]),\,\mathrm{for}\,\,t=1,\dots,T]\in\mathbb{R}^{D\times T \times F}.\]
Finally, \(\tilde{\mathbf{U}}^{(\mathsf{c})}\) is added to the input tensor via a residual connection to produce the output tensor: \(\tilde{\mathbf{H}}_{\mathbf{X}}^{(\mathsf{c})}=\hat{\mathbf{H}}_{\mathbf{X}}^{(\mathsf{c})}+\tilde{\mathbf{U}}^{(\mathsf{c})}\).
Figure 1: _Illustrations of distance-based sound separation, TSE and our proposed NS-Extractor_.
#### 2.2.2 Full- & sub-band modeling
In the full- & sub-band modeling block, time-dimension and frequency-dimension attention are employed to guide the model to focus on position (time frames) and content (frequency channels) respectively [23]. Notably, the attention module in our work shares the same network architecture as in [22] to reduce the number of parameters of the proposed NS-Extractor.
More specifically, taking 'Time-MHA' in the sub-band modeling as an example, the input tensor \(\tilde{\mathbf{H}}^{(\mathbf{c})}_{\mathbf{X}}\) is fed into a Conv2D with a \(1\times 1\) kernel followed by PReLU and LN along the channel and time dimensions (denoted as \(\mathrm{ctLN}\)); a reshape operation is then applied to form \(\mathbf{Q}_{\ell,\mathbf{t}}\in\mathbb{R}^{F\times(T\times E)},\,\mathbf{K}_{\ell,\mathbf{t}}\in\mathbb{R}^{F\times(T\times E)},\,\mathbf{V}_{\ell,\mathbf{t}}\in\mathbb{R}^{F\times(T\times D/L)}\):
\[\mathbf{Q}^{(\mathbf{c})}_{\ell,\mathbf{t}} =\mathrm{ctLN}(\mathrm{PReLU}(\mathrm{Conv2D}(\tilde{\mathbf{H} }^{(\mathbf{c})}_{\mathbf{X}},D,E))),\] \[\mathbf{K}^{(\mathbf{c})}_{\ell,\mathbf{t}} =\mathrm{ctLN}(\mathrm{PReLU}(\mathrm{Conv2D}(\tilde{\mathbf{H} }^{(\mathbf{c})}_{\mathbf{X}},D,E))),\] \[\mathbf{V}^{(\mathbf{c})}_{\ell,\mathbf{t}} =\mathrm{ctLN}(\mathrm{PReLU}(\mathrm{Conv2D}(\tilde{\mathbf{H} }^{(\mathbf{c})}_{\mathbf{X}},D,D/L))),\]
where \(E\) is an embedding dimension that can be manually designated, \(L\) is the number of heads in "MHA". After that, attention output \(\mathbf{A}_{\ell,\mathbf{t}}\in\mathbb{R}^{F\times(T\times D/L)}\) is computed as:
\[\mathbf{A}_{\ell,\mathbf{t}}=\mathrm{softmax}\left(\frac{\mathbf{Q}_{\ell, \mathbf{t}}\mathbf{K}^{\top}_{\ell,\mathbf{t}}}{\sqrt{T\times E}}\right) \mathbf{V}_{\ell,\mathbf{t}}.\]
We then concatenate the attention outputs of all heads along the second dimension and reshape the result back to \(D\times T\times F\). Finally, a \(1\times 1\) Conv2D with \(D\) input and output channels, followed by PReLU and \(\mathrm{ctLN}\), is applied to aggregate cross-head information, and the result is added to the input tensor \(\tilde{\mathbf{H}}^{(\mathbf{c})}_{\mathbf{X}}\) via a residual connection to produce the output tensor of the sub-band modeling block.
The full-band modeling block and the 'Freq-MHA' contained within it share almost the same architecture as the sub-band modeling block. The difference is that the modeling is processed within each temporal unit along the frequency dimension, so \(\mathrm{ctLN}\) is replaced by \(\mathrm{cfLN}\) (LN along the channel and frequency dimensions) and the reshaped dimensions of \(\mathbf{Q}_{\ell,\mathbf{t}}\), \(\mathbf{K}_{\ell,\mathbf{t}}\), \(\mathbf{V}_{\ell,\mathbf{t}}\) are \(T\times(F\times E)\), \(T\times(F\times E)\) and \(T\times(F\times D/L)\) respectively.
#### 2.2.3 Multi-task learning
To ensure the proposed NS-Extractor optimizes both a discriminative speaker embedding and the target speech, a multi-task learning framework with two objectives is introduced. Specifically, the scale-invariant signal-to-distortion ratio (SI-SDR) [24] loss, measuring the quality of the extracted speech against the clean target speech, and the cross-entropy (CE) loss used for speaker classification are combined to optimize the network:
\[\mathcal{L}=\mathcal{L}_{\text{SI-SDR}}(\mathbf{\hat{s}},\mathbf{s})+\gamma \sum_{c=1}^{C}\mathcal{L}_{\text{CE}}(\mathbf{\hat{y}}^{(c)},\mathbf{y}^{(c)}),\]
where \(\mathbf{\hat{s}}\) and \(\mathbf{s}\) denote the estimated and ground truth target speech, \(\mathbf{\hat{y}}^{(c)}\) and \(\mathbf{y}^{(c)}\) are the estimated and ground truth target speaker label. \(\gamma\) is a scaling factor and set to \(0.1\) in this paper.
\[\mathbf{\hat{y}}^{(c)}=\mathrm{Linear}(\mathbf{E}^{(c)})\in\mathbb{R}^{(1,N )},\]
where \(N\) is the number of speakers in the training dataset.
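A minimal PyTorch sketch of this combined objective, using the standard SI-SDR definition and treating the per-block speaker logits as a list, is given below; tensor names and shapes are illustrative:

```python
import torch
import torch.nn.functional as F

def si_sdr(est, ref, eps=1e-8):
    """Scale-invariant SDR in dB for time-domain signals of shape (batch, samples)."""
    est = est - est.mean(dim=-1, keepdim=True)
    ref = ref - ref.mean(dim=-1, keepdim=True)
    scale = (est * ref).sum(dim=-1, keepdim=True) / (ref.pow(2).sum(dim=-1, keepdim=True) + eps)
    target = scale * ref                               # projection of the estimate onto the reference
    noise = est - target
    ratio = target.pow(2).sum(dim=-1) / (noise.pow(2).sum(dim=-1) + eps)
    return 10.0 * torch.log10(ratio + eps)

def multitask_loss(est_wav, ref_wav, block_speaker_logits, speaker_label, gamma=0.1):
    """Negative SI-SDR plus speaker cross-entropy accumulated over the C extractor blocks."""
    loss = -si_sdr(est_wav, ref_wav).mean()
    for logits in block_speaker_logits:                # one (batch, N) logit tensor per block
        loss = loss + gamma * F.cross_entropy(logits, speaker_label)
    return loss
```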
## 3 Experiments
### Datasets
Each utterance in the datasets is simulated to be emitted from a specific location within a confined space. Therefore, the datasets include two parts: room impulse responses (RIRs) and speech.
**RIRs generation.** We use the randomized image method (RIM) [25] to generate RIRs1. Room dimensions in the RIR dataset are randomly generated, ranging from \(3\times 4\times 2.13\) meters to \(7\times 8\times 3\) meters. RT60 is also randomly generated and ranges from \(0.1\) to \(0.5\) seconds. In each room, one microphone position and five speaker positions are randomly generated, with each position being at least \(0.5\) meters away from the walls and floor and no higher than \(1.8\) meters for increased realism. To balance the number of near and far sources, two of the speakers are placed near the microphone while the other three are placed far away. Near and far sources are distinguished based on a fixed threshold of \(1.5\) meters.
Footnote 1: [https://github.com/LCAV/pyroomacoustics](https://github.com/LCAV/pyroomacoustics)
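A minimal sketch of this simulation setup with pyroomacoustics is shown below, using the library's standard image-source interface and placeholder positions within the stated ranges; the paper uses the randomized image method, and the exact calls may differ from the authors' script:

```python
import numpy as np
import pyroomacoustics as pra

rng = np.random.default_rng(0)
room_dim = rng.uniform([3.0, 4.0, 2.13], [7.0, 8.0, 3.0])   # room size in metres
rt60 = rng.uniform(0.1, 0.5)                                 # target reverberation time in seconds

e_absorption, max_order = pra.inverse_sabine(rt60, room_dim)
room = pra.ShoeBox(room_dim, fs=16000,
                   materials=pra.Material(e_absorption), max_order=max_order)

room.add_microphone([1.5, 2.0, 1.2])   # placeholder microphone position
room.add_source([2.0, 2.5, 1.2])       # placeholder "near" speaker (< 1.5 m from the microphone)
room.add_source([2.5, 3.5, 1.2])       # placeholder "far" speaker (> 1.5 m from the microphone)
room.compute_rir()                     # room.rir[0][s] is the RIR from source s to the microphone
```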
**Speech.** We use the small subset of LibriLight [26] containing about 577 hours of untranscribed speech from 489 speakers for training. Regarding validation and test datasets, we employ the "dev-clean" and "test-clean" subsets of Librispeech [27], each of which comprises 5.4 hours of speech from 40 speakers. The speech in the dataset is recorded at a sampling rate of 16kHz.
Figure 3: Training data distributions for RIRs dataset. Distance distribution from microphone (left), spatial distribution (right)
Figure 2: Detailed structure of proposed NS-Extractor. The whole extraction process consists of three steps: self-enroll speaker encoder, speaker embedding fusion and full- & sub-band modeling.
**Sample creation.** We randomize the loudness of each speech signal. Specifically, the root mean square (RMS) energy of each speech signal is randomly set between \((-30,-20)\) dB before summing up all sources. For the ablations in Section 3.4, the RMS of speech beyond the threshold distance is randomized between \((-30,-10)\) dB to simulate more challenging scenarios in which distant speakers may raise their voices. An n-Spkr dataset refers to mixtures with one speaker situated within the threshold distance and (n-1) speakers positioned beyond it. Finally, a sample is obtained by convolving each RIR with the respective speech signal.
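The assembly of one mixture under these rules can be sketched as follows; this is a hypothetical helper, not the authors' data-generation script:

```python
import numpy as np
from scipy.signal import fftconvolve

def make_mixture(dry_signals, rirs, rms_db_range=(-30.0, -20.0), rng=None):
    """Scale each dry utterance to a random RMS level (dB), convolve with its RIR, and sum."""
    rng = rng if rng is not None else np.random.default_rng()
    mixture = None
    for dry, rir in zip(dry_signals, rirs):
        target_rms = 10.0 ** (rng.uniform(*rms_db_range) / 20.0)
        dry = dry * target_rms / (np.sqrt(np.mean(dry ** 2)) + 1e-8)   # set loudness before reverberation
        wet = fftconvolve(dry, rir)[: len(dry)]                        # reverberant source signal
        mixture = wet if mixture is None else mixture + wet
    return mixture
```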
### Setup
The number of layers of the extractor \(C\) is set to \(6\), while the embedding dimension of T-F units \(D\) is \(24\). Inside the 'Time-MHA' and 'Freq-MHA' blocks, the embedding dimension \(E\) and the number of heads \(L\) are both set to 4. For the STFT, the window length is \(16\) ms and the hop length \(8\) ms; a \(256\)-point discrete Fourier transform (DFT) is applied to extract \(129\)-dimensional complex STFT spectra at each frame.
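At a 16 kHz sampling rate, these settings correspond to the following torch.stft configuration (a sketch with illustrative variable names):

```python
import torch

fs = 16000
win_length = int(0.016 * fs)    # 16 ms window  -> 256 samples
hop_length = int(0.008 * fs)    # 8 ms hop      -> 128 samples
n_fft = 256                     # 256-point DFT -> 129 frequency bins

x = torch.randn(1, fs)          # one second of dummy audio, shape (batch, samples)
X = torch.stft(x, n_fft=n_fft, hop_length=hop_length, win_length=win_length,
               window=torch.hann_window(win_length), return_complex=True)
# X has shape (1, 129, n_frames); its real/imaginary parts form the 2 x F x T network input
```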
Training runs with a batch size of 16 for at most 100 epochs using AdamW optimization [28] with a starting learning rate of 0.001, which is then gradually decreased using cosine annealing. Training stops when no improvement has been seen for more than 5 epochs.
### Comparison with other baseline models
We first compare the objective performance of NS-Extractor with a baseline speech separation model, where the LSTM follows the configuration from [11]. We also use a standard U-Net [29] as another baseline: a lightweight 10-layer model with five encoder and five decoder layers, whose numbers of filters per encoder/decoder layer are \(16,32,64,128,256\). Note that the training and validation sets only contain two speakers (2-Spkr) while testing involves multiple speakers. SI-SDR is used as the loss function for the baseline models. As shown in Table 1, NS-Extractor outperforms the other baselines on all of the 2-, 3-, and 4-speaker datasets.
### Ablation studies
To determine the effectiveness of the improved method proposed in this paper, we study variants of NS-Extractor. In this section, the training and validation set both contain two speakers (2-Spkr dataset) with and without an intruded speaker within the threshold distance. The duration of the intrusive speech is between 1 and 3 seconds, while the intruder appears at the end of the 5-second audio mixture. Table 2 shows the performance of these variants, which demonstrates that the absence of any module results in a decrease in the overall performance of NS-Extractor. It is worth noting that the variant without a speaker encoder shows a relatively significant decrease in performance on the 3-Spkr dataset, which suggests that the speaker encoder plays a significant role in multi-speaker scenarios.
We carried out further ablation experiments on cross-datasets to better understand the impact of the speaker encoder. Three intricate scenarios are designed: the first involves interfering speakers within the extraction threshold distance, the second places speakers in a room with fainter reverberation (RT60 \(\in[0.1,0.2]\) s), and the third blends the characteristics of the former two scenes, namely speaker intrusion and fainter reverberation. The results in Table 3 demonstrate that the introduction of a speaker encoder can effectively mitigate interference from speakers within the threshold distance. Moreover, the NS-Extractor's performance remains strong even in rooms with shorter RT60.
## 4 Conclusions
This work2 introduced NS-Extractor, a joint speaker and distance separation model for monaural TSE. NS-Extractor is a carefully designed model, based on the previously introduced TF-GridNet, optimized towards usage within different meeting scenarios. Experimental results on several datasets that closely resemble real-life scenarios such as faint reverberation and unexpected intrusive speech demonstrate the efficacy of NS-Extractor in complex scenarios.
Footnote 2: Demo: [https://thuhesi.github.io/interspeech2023-NS-Extractor/](https://thuhesi.github.io/interspeech2023-NS-Extractor/)
**Acknowledgements:** This work is supported by National Natural Science Foundation of China (62076144), the Major Key Project of PCL (PCL2021A06, PCL2022D01) and Shenzhen Science and Technology Program (WDZC20220816140515001).
\begin{table}
\begin{tabular}{l l r r r} \hline \hline Dataset & Network & SI-SDR & SI-SDRi & PESQ \\ \hline \hline \multirow{4}{*}{\(2\)-Spkr} & Mixture & 5.02 & - & 1.541 \\ & LSTM & 10.02 & 5.00 & 1.917 \\ & U-Net & 11.13 & 6.11 & 2.088 \\ & NS-Extractor & **13.77** & **8.75** & **2.520** \\ \hline \multirow{4}{*}{\(3\)-Spkr} & Mixture & 0.34 & - & 1.280 \\ & LSTM & 3.99 & 3.65 & 1.463 \\ & U-Net & 5.21 & 4.87 & 1.570 \\ & NS-Extractor & **7.16** & **6.82** & **1.759** \\ \hline \multirow{4}{*}{\(4\)-Spkr} & Mixture & -2.48 & - & 1.196 \\ & LSTM & 0.29 & 2.77 & 1.305 \\ & U-Net & 1.63 & 4.11 & 1.380 \\ & NS-Extractor & **2.86** & **5.34** & **1.486** \\ \hline \hline \end{tabular}
\end{table}
Table 1: NS-Extractor shows consistent improvement over LSTM and U-Net implementations on LibriSpeech dataset.
\begin{table}
\begin{tabular}{l l c r r r} \hline \hline \multicolumn{2}{c}{Dataset} & Use SE? & SI-SDR & SI-SDRi & PESQ \\ \hline RIRs & Speech & & & & \\ \hline \hline Normal & Unintruded & ✓ & **10.84** & **10.88** & **2.103** \\ & & ✗ & 10.28 & 10.32 & 1.927 \\ \hline Normal & Intruded & ✓ & **8.40** & **11.84** & **1.628** \\ & & ✗ & 0.09 & 3.53 & 1.323 \\ \hline Faint & Unintruded & ✓ & **13.78** & **7.79** & **2.592** \\ & & ✗ & 13.38 & 7.39 & 2.275 \\ \hline Faint & Intruded & ✓ & **7.16** & **10.02** & **1.900** \\ & & ✗ & -1.20 & 1.39 & 1.428 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Ablation study of speaker encoder in various complex scenarios. “SE” denotes speaker encoder, “Faint” RIRs mean that RT60 is shorter, “ Intruded” speech means there are interfering speakers within the extraction threshold distance. |
2308.11082 | PrAIoritize: Automated Early Prediction and Prioritization of
Vulnerabilities in Smart Contracts | Context:Smart contracts are prone to numerous security threats due to
undisclosed vulnerabilities and code weaknesses. In Ethereum smart contracts,
the challenges of timely addressing these code weaknesses highlight the
critical need for automated early prediction and prioritization during the code
review process. Efficient prioritization is crucial for smart contract
security. Objective:Toward this end, our research aims to provide an automated
approach, PrAIoritize, for prioritizing and predicting critical code weaknesses
in Ethereum smart contracts during the code review process. Method: To do so,
we collected smart contract code reviews sourced from Open Source Software
(OSS) on GitHub and the Common Vulnerabilities and Exposures (CVE) database.
Subsequently, we developed PrAIoritize, an innovative automated prioritization
approach. PrAIoritize integrates advanced Large Language Models (LLMs) with
sophisticated natural language processing (NLP) techniques. PrAIoritize
automates code review labeling by employing a domain-specific lexicon of smart
contract weaknesses and their impacts. Following this, feature engineering is
conducted for code reviews, and a pre-trained DistilBERT model is utilized for
priority classification. Finally, the model is trained and evaluated using code
reviews of smart contracts. Results: Our evaluation demonstrates significant
improvement over state-of-the-art baselines and commonly used pre-trained
models (e.g. T5) for similar classification tasks, with 4.82\%-27.94\% increase
in F-measure, precision, and recall. Conclusion: By leveraging PrAIoritize,
practitioners can efficiently prioritize smart contract code weaknesses,
addressing critical code weaknesses promptly and reducing the time and effort
required for manual triage. | Majd Soud, Grischa Liebel, Mohammad Hamdaqa | 2023-08-21T23:30:39Z | http://arxiv.org/abs/2308.11082v2 | # PrAIoritize: Learning to Prioritize Smart Contract Bugs and Vulnerabilities
###### Abstract
Smart contract vulnerabilities and bugs have become a key concern for software engineers, as they can lead to significant financial losses, reputational damage, and legal issues. Therefore, prioritizing bug fixing for smart contracts is critical to maintaining trust. Due to the lack of tracking tools, prioritizing smart contract-reported bugs is done manually, which is a tedious task, limits bug triaging, and needs specialized knowledge. Towards this end, we propose PrAIoritize, an automated approach for predicting smart contract bug priorities that assists software engineers in prioritizing highly urgent bug reports.
PrAIoritize consists of two main phases: 1) automatic labeling, which involves the automatic construction of a smart contract keyword lexicon and the automatic assignment of priority levels to unlabeled bug reports; 2) model construction, which involves feature engineering and designing layers of feed-forward neural networks (FFNNs) and bidirectional long short-term memory (BiLSTM) with multi-class classification to better capture the features of the textual descriptions of bugs and predict their priority levels. The model is then trained using smart contract bug reports collected from two data sources: open-source software (OSS) projects available on GitHub and the NVD vulnerability database. Our evaluation demonstrates significant improvement over state-of-the-art baselines and commonly used pre-trained models (e.g. BERT) for similar classification tasks, with 5.75%-35.29% increase in F-measure, precision, and recall.
Smart contracts, Blockchain, Automation, Software engineering, Vulnerability, Bug reports
## I Introduction
Smart contracts are self-executing programs that can be used to automate a wide range of processes and transactions on blockchain networks. Ethereum was the first blockchain platform to provide the functionality for deploying smart contracts on a decentralized network, opening a new era of digital asset trading [1]. However, due to the blockchain's immutable nature, once a smart contract has been deployed, it is extremely hard to change or modify [2]. In recent years, smart contracts have seen a large increase in value and popularity. As of writing, WETH, the most valuable Ethereum smart contract, holds more than $2 billion in ether1. Ethereum's market capitalization recently grew to over $150 billion, making it the second most valuable cryptocurrency2.
Footnote 1: Ethereum’s own cryptocurrency
Footnote 2: [https://coinmarketcap.com/currencies/ethereum/](https://coinmarketcap.com/currencies/ethereum/)
Smart contract vulnerabilities and bugs are prevalent [3, 4]. As a result of their high value, smart contracts have become targets for attackers who exploit vulnerable smart contracts to steal funds. In the past, multiple smart contracts holding millions of dollars have been hacked [5] resulting in an estimated $6.45 billion in financial losses [6]. Moreover, the lack of standardization in this field has led to a gap in the tracking and prioritization of security-related bugs and vulnerabilities, which can have significant consequences on smart contract security and quality.
Despite some efforts to assign severity levels to smart contract bugs (i.e., [7]), there has been little consensus on prioritization. Severity and priority are distinct concepts in the software development process: customers assign severity, while developers provide priority [8]. In this work, we focus on bug priority. Our observations show that the majority of smart contract bug reports on GitHub and in the NVD database lack a priority designation. Moreover, although there is a large volume of bug reports generated by available smart contract analysis tools, these reports often do not indicate the priority or the severity of the identified vulnerabilities. Consequently, a heterogeneous set of bug reports with varying information is available from various sources, making it challenging for developers to manually prioritize and concentrate their efforts on fixing the most urgent ones.
In consideration of the importance of proper bug report prioritization in software engineering [8], it is imperative to accurately determine the priority of smart contract bugs. Some bugs in smart contracts can pose a high risk and must be addressed as soon as possible. Other bugs may have a lower risk and can be fixed at a later time when resources are available [9].
Toward this end, this study proposes PrAIoritize. PrAIoritize uses automatic labeling and deep learning techniques to automatically predict the priority of reported bugs in smart contracts. To do so, we use heterogeneous data of varying quality collected from open source repositories. After preprocessing and cleaning the bug reports, PrAIoritize applies two main phases to accurately predict the priorities of reported bugs: first, automated labeling, which assigns priority levels to bug reports based on a constructed lexicon of keywords associated with vulnerabilities and bugs in smart contracts; second, a model construction phase, in which a deep learning architecture is designed to better capture bug textual descriptions using FFNNs and BiLSTM. This phase involves feature engineering, designing hidden layers, implementing multi-class classification, and implementing output layers. The model is then
trained using smart contract bug reports collected from two data sources: open-source software (OSS) projects available on GitHub and the NVD vulnerability database. Finally, in the prediction phase, our approach leverages the trained model to predict the priority levels of a given set of bug reports. The features of the bug reports are extracted, and the model uses these features to analyze and predict the priority level.
Evaluation results show that PrAIoritize outperforms two state-of-the-art baselines and four popular, well-known pre-trained models for text classification, with 5.75%-35.29% higher F-measure, precision, and recall.
Our contribution can be summarized as follows.
* An empirical investigation and evaluation of "PrAIoritize," an automated smart contract bug prioritization system leveraging neural network-based (NN-based) techniques, as well as a comprehensive exploration of leading text classification transformers, specifically BERT and DistilBERT, applied to smart contract bug reports sourced from Open Source Software (OSS) projects and the National Vulnerability Database (NVD).
## II Preliminaries & Problem Definition
### _Terminology and notations_
* **A vulnerability**: Refers to a defect, bug, imperfection, or weakness in a software system that has the potential to compromise the system's confidentiality, integrity, or availability [10].
* **A priority**: Refers to the level of urgency assigned to a software bug, indicating how quickly it needs to be fixed and removed. The priority level is typically determined from the perspective of the software developers and is based on several factors, such as bug severity, and potential impact [11].
* **Critical priority**: The bug report describes a vulnerability present in the smart contract that can be triggered by attackers; when that happens, it could result in critical behavior similar to past attacks on smart contracts that led to financial losses or disclosure of sensitive information [2, 3]. In short, attackers can exploit the vulnerability to their advantage. This level includes well-known vulnerabilities in the field of smart contracts.
* **High priority**: The bug report describes a critical bug in the smart contract that causes significant undesirable outcomes. In contrast to a critical-priority bug, a high-priority bug cannot be exploited by attackers; it may be triggered inside the code during execution and result in financial loss due to gas3 loss, as seen in some optimization bugs
Footnote 3: The unit for measuring the computational effort required to execute a transaction on Ethereum
* **Medium priority**: The bug report contains a bug in the contract and may lead to unintended behaviors, but it cannot be triggered by external attackers. Nevertheless, this bug may still impact dependent contracts and may potentially result in errors when other programs interact with the contract.
* **Low priority**: The bug report identifies errors or bugs in the smart contract, but these do not affect the contract's functionality or any associated environment calls outside the contract. These issues may be related to the contract's interface, documentation, or other non-essential components.
### _Related Models_
Pre-trained language models have achieved remarkable success in various software engineering tasks [12, 13], such as code summarization and bug localization. This section briefly describes state-of-the-art pre-trained models that have been used to perform classification tasks similar to our task and achieved high performance.
Bidirectional Encoder Representations from Transformers (BERT) is a pre-trained language model proposed by Google [14] that has achieved state-of-the-art performance on different software engineering tasks (e.g. [15]). BERT can learn dynamic context word vectors and capture textual semantic features.
DistilBERT [16] is designed to be more memory efficient and faster to train than BERT.
Recurrent Neural Networks (RNNs) are commonly used for text classification tasks [17]. RNNs are designed to effectively capture input text sequential dependencies.
Bidirectional Long Short-Term Memory (BiLSTM) is a popular neural network architecture widely used for natural language processing tasks, including text classification [18]. As BiLSTM processes the input sequence both in directions, it can capture past and future contexts.
A forward neural network (FFNN) is a type of artificial neural network (ANN) that is widely used for supervised learning tasks [19]. FFNNs can learn complex non-linear relationships between inputs and outputs. FFNNs have been applied to various software engineering tasks such as software reliability [20] and software vulnerability detection [21, 22].
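To make the above concrete, a BiLSTM followed by feed-forward layers for four-class priority classification can be sketched in PyTorch as follows; this is an illustrative architecture with assumed hyper-parameters, not the exact PrAIoritize configuration:

```python
import torch
import torch.nn as nn

class PriorityClassifier(nn.Module):
    """BiLSTM over token embeddings followed by feed-forward layers and a 4-class output."""

    def __init__(self, vocab_size, embed_dim=128, hidden_dim=64, num_classes=4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.bilstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True, bidirectional=True)
        self.ffnn = nn.Sequential(
            nn.Linear(2 * hidden_dim, 64), nn.ReLU(), nn.Dropout(0.2),
            nn.Linear(64, num_classes),
        )

    def forward(self, token_ids):                   # token_ids: (batch, seq_len) integer tensor
        h, _ = self.bilstm(self.embed(token_ids))   # (batch, seq_len, 2 * hidden_dim)
        pooled = h.mean(dim=1)                      # average the sequence representation
        return self.ffnn(pooled)                    # logits over the four priority levels

# Trained with nn.CrossEntropyLoss() against the automatically assigned priority labels.
```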
## III PrAIoritize Approach
In this section, we describe the various phases of the PrAIoritize approach. Moreover, we discuss motivational examples and challenges in smart contract bug prioritization.
### _Motivation_
**Illustrative Example.** In Figure 1, we illustrate the four priority levels using examples of four real-world bug reports.
The first report (P1) [23] expresses a vulnerability in the contract related to token auctions. This bug occurs when a function called during auction creation consumes large amounts of gas, resulting in the contract pausing if the token creation function fails due to insufficient gas, potentially exposing the contract to attacks. Passing a specific amount of gas for the auction creation function call, any user can exploit the vulnerability. If such a vulnerability were present in a high-value contract, such as the Axie Infinity4 contract
with a net value of $921 million 5, pausing the contract could result in significant financial losses, client loss, and damage to reputation. Therefore, it is a critical priority bug report, and it is crucial to fix the vulnerability to prevent attacks.
Footnote 5: [https://www.coingecko.com/en/coins/axie-infinity](https://www.coingecko.com/en/coins/axie-infinity)
The second report (P2) [24] demonstrates a bug in a function in the Registry contract. It is caused by an unbounded loop, which eventually results in an array overflow. This bug report can be considered a high-priority report, as it is less critical than the one addressed in the first report. Moreover, it cannot be exploited by attackers on its own.
In the third report (P3) [25], a function is used to ascertain if a contract is an ERC7216 contract or not. The bug is prompted when the function is called within the contract bytecode. If the bug is triggered, it can impact other dependent contracts that rely on this information. Moreover, the dependent contracts may incorrectly identify the contract as not an ERC721 contract, which could result in further bugs and financial losses when interacting with it.
Footnote 6: A widely adopted smart contract standard for representing ownership of unique non-fungible tokens [26].
Finally, the last report (P4) [27] identifies a bug with the SushiSwap contract, one of the biggest decentralized exchanges (DEX), where the interface shows the wrong function for tokens with taxes, resulting in an error displayed in the console. The bug does not affect contract functionality or associated dependencies. Therefore, the bug report is classified as a low-level priority report. Compared to the aforementioned bug reports, this bug is less critical and does not pose financial losses.
Based on the motivational example, we can make the following observations:
* Smart contract bug reports are written in natural language. Nevertheless, they describe domain-specific terminology and terms related to the smart contract's programming language (e.g. Solidity), the decentralized environment, and the complex structure of the smart contract.
* A smart contract bug report is often specific to its context and use cases. This means that building an NLP model that accurately classifies smart contract bug reports is challenging across various projects and data sources.
* The lack of standardized bug reports and priority-level definitions in the smart contract bug reports on GitHub can make it difficult to accurately assess the priority of the bug, especially given the incomplete information in smart contract bug reports.
* Due to smart contract's unique interactions with the decentralized network and their immutable nature, there may be significant differences in bug information, the way bugs are triggered and described compared to classic centralized systems. Therefore, a pre-trained NLP model that is not trained on blockchain-specific data, may not recognize these special characteristics and may not yield satisfactory results.
* Smart contract bug reports often suffer from poor quality, which may be due to the frequent changes and rapid evolution of blockchain technology. Developers are continually learning new updates, which may affect their ability to write effective bug reports. As a result, prioritizing bug reports becomes difficult.
Fig. 1: Illustrated Examples: Real-World Smart Contract Bug Reports
* Due to the high financial impact of smart contract bugs and the potentially significant value involved, correctly classifying the priority of bug reports in smart contracts is crucial.
In summary, determining the priority of smart contract bug reports is a challenging task due to their unique characteristics and the high financial value they hold.
### _Overall Approach_
Motivated by the previously presented observations, our approach, PrAIoritize, leverages automatic labeling and deep learning to predict the priority levels of smart contract bug reports. As shown in Figure 2, PrAIoritize consists of two main phases: (1) an automated labeling phase and (2) a classification model construction and training phase.
The first phase of our methodology involves collecting well-known smart contract vulnerabilities and bugs in a vulnerability catalog7, then using the vulnerability catalog along with the bug reports collected from GitHub and NVD to construct a lexicon of keywords associated with bugs in smart contracts, as well as their respective priority levels. Priority labels for unlabeled bug reports are then generated automatically using the constructed lexicon. In the classification model construction phase, PrAIoritize utilizes feature engineering to capture textual factors that may impact the priority level of a bug report. These features are then passed through a classification model to create a model capable of classifying a bug report with an unknown priority level. Finally, in the prediction phase, PrAIoritize leverages the trained model to predict the priority levels of a given set of bug reports. The features of the bug reports are extracted, and the model uses these features to analyze and predict the priority level. We further detail our approach in the following sections.
Footnote 7: The catalog available along with PrAloritize at:[https://doi.org/10.5281/zenodo.7900928](https://doi.org/10.5281/zenodo.7900928)
### _Dataset Collection_
We use two data sources to collect smart contract bug reports: NVD reports and GitHub reports. Table I describes the data sources and the number of reports per source. We select all available NVD reports related to smart contracts. Moreover, we randomly select nearly the same number of closed bug reports related to smart contracts from cross-projects on GitHub. For noise reduction, reports with fewer than 5 words are excluded. The detailed number of bug reports per priority level is also listed in Table I.
Furthermore, the dataset is randomly divided into training and testing sets, with an 80/20 ratio, respectively.
### _Data Cleaning and Preprocessing_
In this phase, each bug report is preprocessed and cleaned to ensure that it does not have any data quality issues that may negatively affect the classification and prediction results. Bug reports consist of textual descriptions and code snippets, and it is crucial to process the textual description with care, as even a small error can lead to the exclusion of useful information.
We first combine the summary and description to create a new description for each bug report. Then, we remove all special characters, numbers, and punctuation characters from each bug report. These elements have no significant effect on the text mining process, and removing them from the text data ensures optimal performance of the text mining algorithms [28].
We then manually remove the URLs, code snippets, configurations, and transaction logs to ensure that the model can better comprehend the textual description. This is done to minimize the potential for the model to be confused by extraneous information and to better focus on the main content of the bug report. By simplifying the bug report in this way, we aim to improve the model's ability to accurately predict the priority level of the reported bug. We outline the primary techniques that we employ for preprocessing in the following paragraphs.
Tokenization: a technique used to break up a continuous stream of text into individual units called tokens. When preprocessing the bug reports, the textual description is tokenized, meaning it is broken down into individual tokens, each of which refers to a word in the description. The extracted words are separated by delimiters, which in our case are white spaces.
Stop Words Removal: These are commonly used words that may not carry much meaning in the context of the bug reports, such as pronouns. We utilize a pre-existing list of stop words to identify and remove them from the extracted corpus of bug reports.
Stemming: In the English language, words have multiple forms, making analysis challenging. We use stemming to resolve this problem by converting words to their root form. Stemming involves reducing words to their base form (e.g., "attack" is the base form of "attacking", "attacked", and "attacks"), which aids in analyzing the meaning of the text.
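The cleaning and preprocessing steps above can be sketched in a few lines of Python; the snippet below is a minimal illustration assuming NLTK's tokenizer, English stop-word list, and Porter stemmer (the paper does not name the exact libraries it uses).

```python
# Minimal preprocessing sketch for a bug report description.
# NLTK is an assumed tool here; it requires a one-time
# nltk.download("punkt") and nltk.download("stopwords").
import re

from nltk.corpus import stopwords
from nltk.stem import PorterStemmer
from nltk.tokenize import word_tokenize

STOP_WORDS = set(stopwords.words("english"))
STEMMER = PorterStemmer()


def preprocess(report_text: str) -> list[str]:
    # Remove special characters, numbers and punctuation.
    cleaned = re.sub(r"[^a-zA-Z\s]", " ", report_text.lower())
    # Tokenization: break the description into word tokens.
    tokens = word_tokenize(cleaned)
    # Stop-word removal and stemming to the root form.
    return [STEMMER.stem(t) for t in tokens if t not in STOP_WORDS]


print(preprocess("The attacker re-entered the withdraw function and drained funds."))
```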
### _Automated Labeling_
Due to the increasing complexity of smart contracts, the need for an automated labeling approach that can efficiently and accurately prioritize bug reports has become more urgent. Automated labeling has been used in various Natural Language Processing (NLP) tasks in the literature [29], such as sentiment analysis [29, 30] and text summarization [31].
In this study, we propose lexicon-based automatic labeling to assign priority labels to unlabeled bug reports.
\begin{table}
\begin{tabular}{l l l l} \hline \hline
\multicolumn{2}{c}{**Total \# Reports GitHub**} & \multicolumn{2}{c}{**Total \# Reports NVD**} \\ \hline
\multicolumn{2}{c}{560} & \multicolumn{2}{c}{440} \\ \hline \hline
\multicolumn{4}{c}{**Total \# Reports per Priority Level**} \\ \hline
**Low** & **Medium** & **High** & **Critical** \\ \hline
192 & 179 & 234 & 395 \\ \hline \hline
\end{tabular}
\end{table} TABLE I: Statistics of Collected Dataset

**Lexicon Construction.** One of the essential steps in our approach is constructing a lexicon of smart contract bug
related keywords along with their priority level. The proposed lexicon consists of keywords, and each keyword has a priority level. To ensure the lexicon's correctness, an expert reviewed the keywords and their assigned priorities after the construction process. Lexicon construction process consists of several steps as shown in Figure 2 and Algorithm 1. First, a vulnerability catalog is assembled by collecting well-known smart contract vulnerabilities and bugs from the literature [3] and from DASP8 which is a popular source of the top 10 critical security risks in smart contracts to include them in the lexicon. After that, the vulnerability catalog and the corpus of bug reports from the NVD and GitHub are preprocessed and concatenated using the techniques discussed in section III-D as shown in Algorithm 1-step 1. Then, index term extraction and term-weighting are applied using the term frequency-inverse document frequency (TF-IDF) weighting algorithm [32] and Bag of Word (BoW) [33] to extract keywords from the textual descriptions of the preprocessed corpus of bugs reports and vulnerability catalog (i.e., Algorithm 1-step 2). This step includes assigning each term in the textual bug report with a numerical score using the TF-IDF weighting scheme that ranks the terms based on both the frequency of the word in each bug report (term frequency) and the rarity of the word in the entire corpus (inverse document frequency). In addition to representing the textual descriptions of the bug reports as a bag (i.e., multiset) of its words while considering the frequency of each word in the report (i.e., Algorithm 1-step 3). The resulting keywords are sorted based on their term frequency. Then, we select the top 500 keywords and assign them to the corresponding priority level based on expert judgement. Finally, the lexicon is constructed based on the keywords with their priority levels. The lexicon can enable efficient automatic labeling of smart contract bug reports. The use of both TF-IDF weighting scheme and the BoW for pre-processing the lexicon-based approaches is a well-established technique in information retrieval [33, 34]. In the following, we describe the details of the TF-IDF used in our approach.
Footnote 8: [https://dasp.co/](https://dasp.co/)
Fig. 2: PrAIoritize Approach Overview

Fig. 3: PrAIoritize Model Architecture

**Details of the TF-IDF and BoW.** We use the _TF-IDF_ function (i.e., Algorithm 1-step 3) to apply the term-weighting algorithm [33] on the corpus of the bug reports. Let \(C\) denote a corpus of bug reports and \(Rj_{c}\in C\) denote a bug report in the corpus. The term frequency, \(\text{tf}(T_{i},Rj_{c})\), is defined as the number of times a term \(t_{i}\) occurs in the bug report \(Rj_{c}\). Let \(C_{T_{i}}\) denote the set of bug reports in the corpus \(C\) that contain the term \(T_{i}\). The number of bug reports in which the search term \(T_{i}\) appears is denoted as \(|C_{T_{i}}|\), while \(|C|\) represents the total number of bug reports in the corpus \(C\). The inverse document frequency (idf) of term \(T_{i}\) in corpus \(C\) is defined as \(\text{idf}(T_{i})=\log\frac{|C|+1}{|C_{T_{i}}|+1}\).
The TF-IDF weight of a term \(T_{i}\) is proportional to the number of times it appears in a bug report, \(Rj_{c}\), and inversely proportional to the number of bug reports where the term occurs. In order to calculate TF-IDF, we used the following formula.
\[\text{TF}\cdot\text{IDF}(T_{i},Rj_{c})=\text{tf}(T_{i},Rj_{c})\times\log\frac {|C|+1}{|C_{T_{i}}|+1} \tag{1}\]
As shown in Formula 1, TF-IDF is the combination of the TF and IDF functions. When a term occurs frequently in a document, its TF-IDF weight increases, and when it occurs frequently across the corpus, it decreases. For instance, stop words are common terms that occur in a large fraction of the textual descriptions of the bug reports and do not contribute much to discriminating them. Therefore, the idf part of TF-IDF helps to measure the importance of a term in a bug report. If a term appears in many reports across the corpus, it may not be very helpful in identifying relevant reports; the idf helps to downplay the significance of such terms. We apply the BoW function (i.e., Algorithm 1-step 3) in the same way as TF-IDF, using the following BG function.
\[\text{BG}(T,Rj)=\sum_{i=1}^{n}[t_{i}=T],\]
where BG is the BoW function and \(\sum_{i=1}^{n}[t_{i}=T]\) is the count of occurrences of the term \(T\) in the bug report \(Rj\).
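The term-weighting step of the lexicon construction can be illustrated with scikit-learn's TfidfVectorizer and CountVectorizer standing in for the TF-IDF and BoW functions. This is a sketch with made-up reports, not the released implementation, and the expert assignment of priority levels to the selected keywords is not shown.

```python
# Illustrative keyword extraction for the lexicon (Algorithm 1, steps 2-3).
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer

corpus = [
    "reentrancy vulnerability drains funds from the bridge contract",
    "unbounded loop in the registry causes an array overflow",
    "interface shows the wrong function for tokens with taxes",
]

# TF-IDF weights per report (term weighting) and BoW counts per report.
X_tfidf = TfidfVectorizer().fit_transform(corpus)
bow = CountVectorizer()
X_bow = bow.fit_transform(corpus)

# Rank terms by their total frequency over the corpus; PrAIoritize keeps the
# top 500 keywords, which an expert then maps to priority levels.
totals = np.asarray(X_bow.sum(axis=0)).ravel()
top = totals.argsort()[::-1][:5]
print([bow.get_feature_names_out()[i] for i in top])
```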
### _Model Construction_
Figure 3 shows the model architecture of PrAIoritize. PrAIoritize is composed of three main components: an input processing layer, intermediate layers, and an output layer. The input processing consists of converting the smart contract bug report textual description into numerical representations using a HashingVectorizer layer. The intermediate layers in our model include a dense layer, a dropout layer, a reshape layer, a bidirectional LSTM layer, and another dropout layer. These layers work together to capture the semantics and contextual information of the bug reports while handling overfitting through the use of dropout. After processing the input data through the intermediate layers, the model utilizes a final dense layer with a softmax activation function to predict the priority of the bug reports. In the following, we further explain the architecture of the PrAIoritize model.
#### III-F1 **Input Processing Layer**
In this layer, we utilize the HashingVectorizer [35] because it provides a computationally efficient method for preprocessing and transforming text data into fixed-size numerical vectors. It can effectively handle out-of-vocabulary words [36], as it does not require a pre-built vocabulary. This attribute makes it more robust when encountering new, unseen words that might not have been present in the training data. HashingVectorizer performs well with heterogeneous data [37]. However, HashingVectorizer may not capture semantic relationships between words as effectively as other sophisticated embedding techniques such as Word2Vec [38] or GloVe [39]. In this layer, we initialize the HashingVectorizer with a fixed number of features (10,000). It is then used to transform the training and validation texts into numerical vectors. By setting the number of features to 10,000, the HashingVectorizer creates a 10,000-dimensional vector representation for each input text. The HashingVectorizer layer applies a hash function to the individual tokens (words) in the bug report, mapping these hashed values to a fixed-size vector space. The vector representation is created by summing the hashed values for each token in the report, resulting in a fixed-size vector representation for each report, regardless of its original length. The hashing function can be represented by the following formula:
\[h(t)=t\bmod N \tag{2}\]
where \(h(t)\) is the hash function, \(t\) is the input token, and \(N\) is the number of features (in our case, 10,000). The hashing function maps the input token to a position in the vector space by taking the remainder of the division of \(t\) by \(N\).
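A minimal sketch of this input layer, assuming scikit-learn's HashingVectorizer with the 10,000 features mentioned above (the report texts and variable names are illustrative):

```python
# Input processing: map each bug report description to a fixed 10,000-dimensional vector.
from sklearn.feature_extraction.text import HashingVectorizer

vectorizer = HashingVectorizer(n_features=10_000, alternate_sign=False)
reports = [
    "reentrancy bug drains contract funds",
    "console shows wrong function for taxed tokens",
]
X = vectorizer.transform(reports)  # sparse matrix of shape (n_reports, 10000)
print(X.shape)
```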
#### III-F2 **Intermediate Layers Learning**
In the intermediate layers of our model, we employ a combination of dense, dropout, reshape, BiLSTM, and GlobalMaxPooling1D layers within an FFNN architecture to effectively learn representations from the input textual descriptions and transform it into a format suitable for the subsequent classification layers. These layers are responsible for capturing the important features of the input textual reports, which are crucial for accurate priority prediction.
**Dense Layer** (128 units, ReLU activation): The dense layer takes the 10,000-dimensional vector representation from the HashingVectorizer and transforms it into a 128-dimensional dense representation. This layer helps learn a compact representation of the input reports while preserving its most important features. The dense layer is part of the FFNN architecture that allows the model to learn non-linear relationships within the input data.
**Dropout Layer (0.2 dropout rate):** As a regularization technique, the dropout layer is employed to prevent overfitting. It randomly drops out a fraction of the neurons during training, making the model more robust and less sensitive to noise in the data.
**Reshape Layer:** The reshape layer is utilized to transform the 128-dimensional output from the previous layer into a 1x128 matrix, which is the required input format for the BiLSTM layer that follows.
**BiLSTM Layer (128 units, ReLU activation):** The BiLSTM layer is used to capture both forward and backward dependencies in the input sequence. This layer allows the model to learn long-range dependencies and contextual information from the reshaped 1x128 matrix. It is particularly useful for capturing temporal relationships in the input data, which can be crucial for accurate priority prediction.
**GlobalMaxPooling1D Layer:** The GlobalMaxPooling1D layer is employed to reduce the output of the BiLSTM layer to a fixed-size vector. By taking the maximum value over the time dimension, this layer effectively captures the most important features from the input sequence while reducing the dimensionality of the data.
These intermediate layers work together to extract relevant features from the input data and transform them into a suitable format for the subsequent classification layers. The FFNN architecture with the BiLSTM layer allows the model to learn complex non-linear relationships within the input data, making it well-suited for the priority prediction task.
#### III-F3 **Output Layers: Priority Level Prediction**
**Dense Layer (64 units, ReLU activation):** This layer takes the output from the GlobalMaxPooling1D layer and maps it to a 64-dimensional vector. The ReLU activation function allows the model to learn non-linear relationships.
**Dropout Layer (0.2 dropout rate):** The dropout layer is used to reduce overfitting by randomly setting a fraction (20% in this case) of the input units to 0 during training. This forces the model to learn redundant representations and improves its ability to generalize to unseen data.
**Output Layer (Dense, num classes units, Softmax activation):** The output layer is a dense layer with the number of units equal to the number of priority levels. The softmax activation function is utilized in the output layer of the model for multi-class classification. It is responsible for converting the raw output scores from the neural network into probabilities that sum up to 1, making it possible to interpret the model's predictions as probabilities for each class. The formulas for the softmax activation function are as follows:
\[Z_{i}=\exp(a_{i})\text{ for }i\in 1,2,\dots,K \tag{3}\]
In this formula, \(a_{i}\) represents the raw output score for the \(i\)-th class produced by the FFNN, and \(K\) is the total number of classes (in our case \(K\) = 4). The exponential function, \(\exp()\), is applied to each \(a_{i}\) to obtain positive scores \(Z_{i}\) for every class.
\[P(\text{class}_{k})=\frac{Z_{k}}{\sum_{i=1}^{K}Z_{i}}\text{ for }k\in 1,2,\dots,K \tag{4}\]
In this formula 4, \(P(\text{class}_{k})\) represents the probability of the input belonging to the \(k\)-th class. To calculate the probability, the exponential score \(Z_{k}\) for class \(k\) is divided by the sum of the exponential scores for all classes. This normalization step ensures that the probabilities of all classes sum up to 1.
### _Training Details_
In this work, we fine-tune PrAIoritize to achieve optimal performance by selecting the following hyperparameters:
* Number of hidden units in the first dense layer: 128
* Number of hidden units in the second dense layer: 64
* Dropout rate for dropout layers: 0.2
* Batch size for training: 64
* Number of training epochs: 10
We employ the Adam optimization algorithm with default settings for the learning rate and other parameters. The descriptions of the reports are first preprocessed using a HashingVectorizer with 10,000 features, followed by one-hot encoding of the labels. This prepares the data for input into PrAIoritize. We implement PrAIoritize using the open-source Keras library9 built on top of TensorFlow [40]. PrAIoritize is trained on the training dataset and evaluated on the validation dataset. The best model weights achieved by PrAIoritize are saved during training using a checkpoint callback based on validation accuracy. Finally, we assess PrAIoritize's performance using classification reports and evaluation metrics.
Footnote 9: [https://github.com/keras-team/keras](https://github.com/keras-team/keras)
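The architecture and training settings described above can be sketched in Keras as follows. This is a reconstruction from the textual description (10,000 hashed input features, 128- and 64-unit dense layers, 0.2 dropout, a 128-unit BiLSTM, batch size 64, 10 epochs, Adam), not the authors' released implementation; the data variables in the commented fit call are placeholders.

```python
# Sketch of the PrAIoritize network as described in the text (a reconstruction,
# not the released implementation).
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 4        # Low, Medium, High, Critical
NUM_FEATURES = 10_000  # HashingVectorizer output dimension

model = models.Sequential([
    tf.keras.Input(shape=(NUM_FEATURES,)),
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.2),
    layers.Reshape((1, 128)),                 # length-1 sequence for the BiLSTM
    layers.Bidirectional(layers.LSTM(128, activation="relu", return_sequences=True)),
    layers.Dropout(0.2),
    layers.GlobalMaxPooling1D(),
    layers.Dense(64, activation="relu"),
    layers.Dropout(0.2),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
model.summary()

# Training as described in the text (placeholder data variables):
# checkpoint = tf.keras.callbacks.ModelCheckpoint("best_model.h5",
#                                                 monitor="val_accuracy",
#                                                 save_best_only=True)
# model.fit(X_train, y_train_onehot, validation_data=(X_val, y_val_onehot),
#           batch_size=64, epochs=10, callbacks=[checkpoint])
```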
## IV Experiment Evaluation
In this section, we describe our experiment setup and evaluation measures. Moreover, we present our research questions and our findings.
### _Experiment Setup_
In this study, we answer the following two research questions:
* **RQ1 (Effectiveness):** How well does PrAIoritize perform in accurately prioritizing smart contract bug reports?
* **RQ2 (Comparison with Pre-trained Models and state-of-the-art Baselines):** How does the proposed model compare to state-of-the-art baselines and pre-trained models found in existing literature?
### _Baseline_
**Baseline 1:** We utilize the approach proposed by Meng et al. [41], in which the text information of the report along with the report explanation was used to classify bug reports. The proposed approach utilizes BERT and TF-IDF to extract the features of the text information, and then trains machine learning classifiers (e.g., K-Nearest Neighbor) to classify the bug reports. We consider the textual features part and implement the model strictly as described, as the code is not provided in the paper.
**Baseline 2:** We utilize DRONE by Tian et al. [11], as it is the most cited state-of-the-art approach. DRONE proposed GRAY (ThresholdinG and Linear Regression to ClAssifY Imbalanced Data) to classify bug reports based on several features extracted from six dimensions, i.e., textual, temporal, author, related report, severity, and product. Because smart contracts are still in their early stages, most of these dimensions are not available yet, so we consider the textual dimension only. We implement the model precisely based on the description provided in the research paper, as the source code is not available.
### _Evaluation Measures_
To evaluate the performance of smart contract bug priority prediction, we use commonly-used metrics in the literature, namely precision, recall, and F1-measure [42].
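These measures can be computed per priority level with scikit-learn; a small sketch with illustrative label vectors:

```python
# Precision, recall and F1-measure per priority level (labels are illustrative).
from sklearn.metrics import classification_report

y_true = ["Critical", "High", "Low", "Medium", "Critical", "High"]
y_pred = ["Critical", "High", "Low", "Low", "Critical", "Medium"]
print(classification_report(y_true, y_pred,
                            labels=["Low", "Medium", "High", "Critical"],
                            zero_division=0))
```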
## V Empirical Results
In this section, we answer the proposed RQs and list our results and findings.
### _Answers to RQ1: PrAIoritize Effectiveness_
In order to evaluate the performance of the proposed automated labeling, we manually labeled the bug reports, then compared their results to those obtained by automatic labeling. In the manual labeling, an expert removed all the labels generated by the automated labeling and labeled the data according to their priority level. The manual labeling is based on the terminology proposed in the Background section II. Then, we calculated the inter-rater agreement using the kappa coefficient [43] between the data labeled by the expert and the automatically generated labels.
The resulting kappa coefficient is 0.92, which indicates a high level of agreement. We found that most of the disagreement cases belong to the critical and high priority levels. Overall, our results show that the proposed automated labeling performs similarly to the manual labeling in terms of inter-rater agreement. This finding supports the usefulness and validity of automatically prioritizing bug reports in smart contracts.
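The inter-rater agreement reported above can be computed with Cohen's kappa, for example via scikit-learn (the label vectors below are illustrative):

```python
# Agreement between expert labels and the automatically generated labels.
from sklearn.metrics import cohen_kappa_score

expert_labels = ["Critical", "High", "Low", "Medium", "Critical", "High"]
auto_labels = ["Critical", "High", "Low", "Medium", "High", "High"]
print(cohen_kappa_score(expert_labels, auto_labels))
```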
As shown in Table II, PrAIoritize predicts the four priority levels with average F-measures of 90%, 91%, 88%, and 97%, respectively. The F-measure for the critical level is the best and the high priority level is the lowest, while the low and medium levels are close in F-measure. Taking the mean of the F1-measures for all priority levels, regardless of their support, the macro average of the F-measures is 92%, indicating reasonably good performance in predicting priority levels. Nevertheless, we believe it is highly important for report prioritization to have higher accuracy and F-measure for the critical level than for the other priority levels, which PrAIoritize is able to provide. When it comes to high priority and critical priority, we find that there is usually a blurry line between them, which could result in lower performance for high priority. Moreover, our manual labeling reveals poor descriptive terms for reports classified as high, as shown in Figure 11. For instance, high-priority reports can sometimes confuse the terms "bug" and "vulnerability", leading to ambiguity. Furthermore, these reports may not always provide adequate information about whether the vulnerability arises from the contract itself or from external sources. The only difference between the high and critical reports is that the vulnerability in the critical reports allows attackers to access the smart contract and compromise it. Nevertheless, in the reports with high priority, the vulnerability is a bug triggered by the contract itself, which is not clearly described in most of the high-level reports. We believe these are some potential reasons that contribute to inaccurate predictions.
### _Answers to RQ2: Comparison with Pre-trained Models and Baselines_
This subsection reports the performance of the two baselines and the pre-trained models; in addition, it describes how they compare to PrAIoritize's model. Table III shows the F-measures of PrAIoritize versus the two baselines and four popular pre-trained models for text classification tasks (i.e., BERT, DistilBERT, BiLSTM, and RNN). All models and selected hyperparameters are briefly explained in the Background section II-B. We conducted a hyperparameter tuning process to optimize the performance of each model, selecting the best hyperparameters that yielded the highest results per model. We notice that Meng et al. [41] can predict the four priority levels with F-measures equal to 0.63, 0.38, 0.73, and 0.96 from Low to Critical levels, which means it can assign critical priority levels correctly while it faces challenges in correctly identifying Medium priority levels. This may be attributed to the proposed feature extraction method (i.e., BERT) or the KNN classifier by Meng et al. [41].
DRONE's F-measure results show that it struggles to assign correct Low, Medium, and High levels to the bug reports. While it performs better at the Critical level (i.e., an F-measure of 0.47), it remains the lowest-performing model when compared to the other models in our experiment. We believe this is because of the linear regression in the GRAY classifier, which is not well-suited to our dataset.
Comparing the two baselines with the results of PrAIoritize, we observe that we can improve the F-measures for the four priority levels, with the High level slightly lower than the rest of the priority levels. Considering the overall average F-measures, PrAIoritize also outperforms the Meng et al. [41] baseline by 35.29% and the DRONE baseline by approximately 206%. However, we only consider the textual dimension of the DRONE implementation. Therefore, in cases where the dataset includes the other five dimensions (i.e., temporal, author, related report, severity, and product, as proposed in the DRONE paper), DRONE may surpass our model's performance.
Moreover, we studied the performance of BERT, DistilBERT, BiLSTM, and RNN in comparison to PrAIoritize's performance. Among these well-known models, we notice that RNN has the poorest average F-measure. PrAIoritize's average F-measure outperforms BERT, DistilBERT, BiLSTM, and RNN by 10.8% (i.e., ((0.92 - 0.83) / 0.83) * 100), 5.75%, 16.46%, and 26% respectively.
Table IV shows the recall measures for all baselines and models versus PrAIoritize's recall measures for the four priority levels. PrAIoritize outperforms the Meng et al. [41] baseline by 35.29% in terms of the recall measure and DRONE by approximately 206%. It also further improves on the recall of BERT, DistilBERT, BiLSTM, and RNN by 9.52%, 5.75%, 15%, and 24.32% respectively. Finally, Table V presents the precision measures of PrAIoritize and the remaining models. Similar to the F-measure and recall, PrAIoritize significantly improves on the precision of Meng et al. [41] by 35.29%, achieves a precision higher than DRONE's by 170.6%, and improves on the selected pre-trained models (i.e., BERT, DistilBERT, BiLSTM, and RNN) by 8.2%, 5.75%, 15%, and 16.5% respectively.
## VI Related Work
This section describes related research on bug priority prediction.
### _Bug priority prediction in Smart Contracts_
Research on the prioritization of bugs in smart contracts is still in its early stages. Some studies have attempted to classify smart contract bugs based on proposed severity schemes (i.e., [44, 45]). However, it is important to note that severity is determined by customers, while priority is determined by developers [8]. To the best of our knowledge, there has not yet been any attempt to automatically predict the priority of bugs in smart contracts.
### _Bug priority prediction in Software Engineering_
Several methods have been suggested in the literature for improving the quality of software using bug reports. These approaches can be grouped into duplication detection and classification, bug triage, and bug localization. Many studies have focused on predicting the priority of bug reports, and some of these studies are based on deep learning. For example, Fang et al. [46] proposed a method that uses graph convolutional networks and a weighted loss function for the prediction of bug-fixing priority. Another study by Li et al. [42] used deep multitask learning to develop an approach called PRIMA, which simultaneously learns both the bug category prediction task and the priority prediction task. Other studies are based on machine learning; for instance, Tian et al. [8] proposed a machine learning model for priority prediction based on features extracted from six dimensions: temporal, textual, author, related report, severity, and product. Valdivia-Garcia et al. [47] used the experience features of reporters to create blocking-bug prediction models based on various classical machine learning classifiers. Zhou et al. [48] conducted a study to examine the impact of source code file feature sets on the accuracy of bug report priority classification. The results
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline
**Priority Level** & **Low** & **Medium** & **High** & **Critical** & **Average** \\ \hline
PrAIoritize & 0.92 & 0.94 & 0.82 & 1.00 & **0.92** \\
Meng et al. [41] & 0.64 & 0.50 & 0.61 & 0.96 & **0.68** \\
DRONE & 0.24 & 0.12 & 0.25 & 0.73 & **0.34** \\
BERT & 0.67 & 0.79 & 0.95 & 0.99 & **0.85** \\
DistilBERT & 0.81 & 0.81 & 0.92 & 0.95 & **0.87** \\
BiLSTM & 0.74 & 0.72 & 0.76 & 0.97 & **0.80** \\
RNN & 0.81 & 0.56 & 0.84 & 0.96 & **0.79** \\ \hline \hline \end{tabular}
\end{table} TABLE V: Comparisons of Precision measures of PrAIoritize versus other classifiers and baselines
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline
**Priority Level** & **Low** & **Medium** & **High** & **Critical** & **Average** \\ \hline
PrAIoritize & 0.92 & 0.88 & 0.94 & 0.94 & **0.92** \\
Meng et al. [41] & 0.64 & 0.43 & 0.67 & 0.96 & **0.68** \\
DRONE & 0.37 & 0.08 & 0.42 & 0.34 & **0.30** \\
BERT & 0.85 & 0.81 & 0.73 & 0.97 & **0.84** \\
DistilBERT & 0.86 & 0.83 & 0.80 & 0.99 & **0.87** \\
BiLSTM & 0.86 & 0.84 & 0.58 & 0.94 & **0.80** \\
RNN & 0.72 & 0.93 & 0.37 & 0.95 & **0.74** \\ \hline \hline \end{tabular}
\end{table} TABLE IV: Comparisons of Recall measures of PrAIoritize versus other classifiers and baselines
of the study demonstrated that source code file feature sets did not perform as well as textual description features in bug report classification. Tran et al. [49] compared different machine learning methods for evaluating the severity and priority of software bug reports. They suggested using an approach based on optimal decision trees to assess the severity and priority of new bugs. Another study by Huang et al. [50] developed a model for predicting multi-class bug priority that integrates sentiment and community-oriented sociotechnical features from both users and developers. The model was validated in various scenarios, including within-project and cross-project. The results of the study indicate that including assignee and reporter features from sociotechnical perspectives can improve prediction performance.
## VII Threats to Validity
One potential threat to the external validity of our proposed approach is its generalizability. To address this, we trained and evaluated PrAIoritize on a dataset comprising two reliable sources to further verify the effectiveness of our approach and reduce the risk of threats to external validity. A potential threat to the construct validity of our study is the choice of evaluation metrics. However, the metrics we selected (precision, recall, and F-measure) are widely used in the literature and have been adopted in numerous previous studies, including the selected baselines [11, 41]. Another concern with the construct validity of our study is the distribution of the four levels of priority across the two data sources. While this may potentially impact the performance of PrAIoritize to some extent, the high performance across data sources with four priority levels suggests the effectiveness of our approach.
## VIII Conclusion
Smart contract bugs and vulnerabilities are prevalent. To effectively resolve these vulnerabilities and bugs, software engineers often rely on bug reports on open-source platforms such as GitHub and the National Vulnerability Database (NVD). This paper focuses on the automatic priority prediction of bug reports for smart contracts in order to improve the overall trustworthiness of these contracts. We introduce PrAIoritize, an approach that utilizes automatic labeling and deep learning techniques to prioritize bugs in smart contracts. The evaluation of PrAIoritize demonstrates its effectiveness in accurately prioritizing smart contract bug reports, and the automatic labeling helps with unlabeled reports across projects. The proposed approach offers a reliable method for automatically predicting the priority of bugs for smart contracts and illustrates the usefulness of using automatic labeling and multi-class deep learning techniques in the process of prioritizing bug reports. In the future, we plan to further expand our study by incorporating bug reports from additional OSS projects and experimenting with a wider range of pre-trained models that are commonly used for similar tasks.
|
2307.16168 | Credible intervals and bootstrap confidence intervals in monotone
regression | In the recent paper [5], a Bayesian approach for constructing confidence
intervals in monotone regression problems is proposed, based on credible
intervals. We view this method from a frequentist point of view, and show that
it corresponds to a percentile bootstrap method of which we give two versions.
It is shown that a (non-percentile) smoothed bootstrap method has better
behavior and does not need correction for over- or undercoverage. The proofs
use martingale methods. | Piet Groeneboom, Geurt Jongbloed | 2023-07-30T08:32:35Z | http://arxiv.org/abs/2307.16168v1 | # Credible intervals and bootstrap confidence intervals in monotone regression
###### Abstract
In the recent paper [5], a Bayesian approach for constructing confidence intervals in monotone regression problems is proposed, based on credible intervals. We view this method from a frequentist point of view, and show that it corresponds to a percentile bootstrap method of which we give two versions. It is shown that a (non-percentile) smoothed bootstrap method has better behavior and does not need correction for over- or undercoverage. The proofs use martingale methods.
Credible intervals, confidence intervals, bootstrap confidence intervals in monotone regression
Primary 62G05, 62N01; secondary 62-04.
_Keywords and phrases:_ credible intervals, confidence intervals, Chernoff's distribution, smoothed bootstrap, monotone regression.
evaluated at the point \(P_{i}\).
As is clear from this characterization, \(\hat{f}_{n}\) will be a nondecreasing step function with its jumps concentrated on a data-dependent subset of the observed points \(x_{i}\). Fixing a number of points in \([0,1]\), say \(0=\tau_{0}<\tau_{1}<\tau_{2}<\cdots<\tau_{m}=1\), one can also minimize (1) over all nondecreasing functions, piecewise constant on the intervals \(I_{j}=(\tau_{j-1},\tau_{j}]\). Then, writing \(n_{j}=|\{i\,:\,x_{i}\in I_{j}\}|\) and \(\bar{y}_{j}=(\sum_{x_{i}\in I_{j}}y_{i})/n_{j}\), we have
\[\sum_{i=1}^{n}\left(y_{i}-f(x_{i})\right)^{2} =\sum_{j=1}^{m}\sum_{x_{i}\in I_{j}}\left(y_{i}-\bar{y}_{j}+\bar{ y}_{j}-f(\tau_{j})\right)^{2}= \tag{2}\] \[=\sum_{j=1}^{m}\sum_{x_{i}\in I_{j}}\left(y_{i}-\bar{y}_{j} \right)^{2}+\sum_{j=1}^{m}\left(\bar{y}_{j}-f(\tau_{j})\right)^{2}n_{j},\]
where we use that for the current function class, \(f(x_{i})=f(\tau_{j})\) for \(x_{i}\in I_{j}\). As the first term in (2) does not involve \(f\), minimizing it boils down to a weighted isotonic regression. The solution to this minimization problem also allows for a graphical construction. The optimal value of \(f(\tau_{j})\) is the left derivative, taken at the point \(P_{j}\), of the greatest convex minorant of the diagram of points consisting of
\[P_{0}=(0,0),\ P_{j}=\left(\frac{1}{n}\sum_{k=1}^{j}n_{k},\frac{1}{n}\sum_{k=1 }^{j}n_{k}\bar{y}_{k}\right),\ \ 1\leq j\leq m.\]
From decomposition (2) it also follows that the piecewise constant function \(f\) defined by \(f(x_{i})=\bar{y}_{j}\) for \(x_{i}\in I_{j}\) is the least squares estimator over the class of piecewise constant functions without imposing the restriction of monotonicity.
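Both isotonic least squares problems above can be solved with the pool adjacent violators algorithm. The sketch below uses scikit-learn's IsotonicRegression (an assumed tool, not the code used in [5]) to compute the weighted isotonic regression of the bin means, with the test function \(x^{2}+x/5\) of [5] as illustration.

```python
# Weighted isotonic regression of the bin means \bar{y}_j with weights n_j,
# i.e. the minimizer of the second term in the decomposition (2).
import numpy as np
from sklearn.isotonic import IsotonicRegression

rng = np.random.default_rng(0)
n, J = 200, 10
x = np.sort(rng.uniform(0, 1, n))
y = x**2 + x / 5 + rng.normal(0, 0.1, n)       # test function of [5] plus noise

# Bin the observations into I_j = ((j-1)/J, j/J] and form n_j and \bar{y}_j.
idx = np.minimum((x * J).astype(int), J - 1)
n_j = np.bincount(idx, minlength=J).astype(float)
ybar_j = np.bincount(idx, weights=y, minlength=J) / np.maximum(n_j, 1.0)

iso = IsotonicRegression(increasing=True)
iso.fit(np.arange(J), ybar_j, sample_weight=n_j)
f_hat = iso.predict(np.arange(J))
print(np.round(f_hat, 3))                      # nondecreasing step values on the bins
```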
In [5], a Bayesian method is proposed for constructing pointwise confidence intervals for a monotone regression function, based on credible intervals. The method is proved to give overcoverage for large sample sizes, but a correction table is given in [5] to correct for the overcoverage. Purely based on the algorithm that results in the credible intervals, the approach can be seen as a particular percentile bootstrap method.
In Section 2 we describe the approach in [5] to construct confidence intervals via credible intervals. In Section 3 we give the interpretation of the credible intervals as percentile bootstrap intervals and in particular Theorem 3.1 for the bootstrap procedure, corresponding to the key Theorem 3.3 in [5] for the construction of the credible intervals. In proving Theorem 3.1 we use a martingale method.
In analogy with the Bayesian procedure, we construct the bootstrap intervals by generating normal noise variables (following [5]), using the empirical Bayes method for estimating the variance of these variables, defined in Section 3. In subsection 3.2 we define a classical bootstrap procedure, where we resample with replacement from the original data, and do not have to estimate the variance. These two methods correspond, respectively, to the "regression method" (holding the regressors \(X_{i}\) in the regression model fixed), and the "correlation model" (where we consider the \(X_{i}\) as random) in the terminology of [12]. The results of the three methods are highly similar.
It has been proved by several authors that the straightforward bootstrap is inconsistent in this situation (see, e.g., [13], [18] and [17] for results related to this phenomenon). This straightforward bootstrap uses resampling with replacement from the pairs \((X_{i},Y_{i})\), computes the monotone least squares estimator \(\hat{f}_{n}^{*}\) based on the bootstrap samples, and approximates the distribution of \(n^{1/3}\left(\hat{f}_{n}(t_{0})-f_{0}(t_{0})\right)\) by that of the analogous 'bootstrap quantity' \(n^{1/3}\left(\hat{f}_{n}^{*}(t_{0})-\hat{f}_{n}(t_{0})\right)\). The Bayesian approach and the percentile bootstrap approach
circumvent this difficulty by using the convergence in distribution of the random variable (as a function of \(D_{n}=\{(X_{1},Y_{1}),\ldots,(X_{n},Y_{n})\}\))
\[\mathbb{P}\left(n^{1/3}\big{\{}\hat{f}_{n}^{*}(t_{0})-f_{0}(t_{0})\big{\}}\leq x \,\Big{|}\,D_{n}\right) \tag{1.3}\]
to
\[\mathbb{P}\left[\left(\frac{4\sigma_{0}^{2}f_{0}^{\prime}(t_{0})}{g(t_{0})} \right)^{1/3}\operatorname{argmin}_{t\in\mathbb{R}}\big{[}W_{1}(t)+W_{2}(t)+t ^{2}\big{]}\leq x\,\Big{|}\,W_{1}\right] \tag{1.4}\]
see Theorem 3.3 in [5] and Theorem 3.1, where \(W_{1}\) and \(W_{2}\) are two independent standard two-sided Brownian motions, originating from zero. Here \(\hat{f}_{n}^{*}\) is either the projection-posterior Bayes estimate (to be described in Section 2), in which case we would write
\[\Pi\left(n^{1/3}\big{\{}\hat{f}_{n}^{*}(t_{0})-f_{0}(t_{0})\big{\}}\leq x\, \Big{|}\,D_{n}\right)\]
instead of (1.3), or the percentile bootstrap estimate \(\hat{f}_{n}^{*}\). The limit (1.4) leads to credible intervals which asymptotically give overcoverage, which can be corrected for as described in [5].
In Section 4 an altogether different method for constructing the confidence intervals is given, where we use the smoothed (non-percentile) bootstrap. Here we keep the regressors fixed again, and resample with replacement residuals w.r.t. a smooth estimate of the regression function: the Smoothed Least Squares Estimator (SLSE). In this way, using theory from [9], consistent confidence intervals are constructed.
In fact, instead of \(n^{1/3}\big{\{}\hat{f}_{n}^{*}(t_{0})-f_{0}(t_{0})\big{\}}=n^{1/3}\big{\{}\hat {f}_{n}^{*}(t_{0})-\hat{f}_{n}(t_{0})+\hat{f}_{n}(t_{0})-f_{0}(t_{0})\big{\}}\) we can now consider
\[n^{1/3}\big{\{}\hat{f}_{n}^{*}(t_{0})-\tilde{f}_{nh}(t_{0})+\hat{f}_{n}(t_{0} )-f_{0}(t_{0})\big{\}},\]
where \(\hat{f}_{n}^{*}\) is based on sampling with replacement from the residuals w.r.t. the SLSE \(\tilde{f}_{nh}\) with bandwidth \(h\) of order \(n^{-1/5}\). In contrast with Theorem 3.3 in [5] and Theorem 3.1 in the present paper, we now have convergence of
\[\mathbb{P}\left(n^{1/3}\big{\{}\hat{f}_{n}^{*}(t_{0})-\tilde{f}_{nh}(t_{0})+ \hat{f}_{n}(t_{0})-f_{0}(t_{0})\big{\}}\leq 0\,\Big{|}\,D_{n}\right)\]
to the uniform distribution on \([0,1]\) (using the symmetry of the limit distribution of \(n^{1/3}\{\hat{f}_{n}(t_{0})-f_{0}(t_{0})\}\)), see Theorem 4.2.
For the non-percentile bootstrap, however, it is more natural to consider
\[n^{1/3}\big{\{}\hat{f}_{n}(t_{0})-\{\hat{f}_{n}^{*}(t_{0})-\tilde{f}_{nh}(t_{0 })\}-f_{0}(t_{0})\big{\}}\]
(avoiding "looking up the wrong tables, backwards", see the discussion on p. 938 of [11]), for which we also get convergence to the the uniform distribution of
\[\mathbb{P}\left(n^{1/3}\big{\{}\hat{f}_{n}(t_{0})-\{\hat{f}_{n}^{*}(t_{0})- \tilde{f}_{nh}(t_{0})\}-f_{0}(t_{0})\big{\}}\leq 0\,\Big{|}\,D_{n}\right),\]
implying the consistency of the smoothed bootstrap method. This method of constructing confidence intervals seems superior in comparison to the Bayesian method and the percentile bootstrap intervals, as is suggested by our simulations of the coverage of the different methods.
## 2 Credible intervals
In [5], a Bayesian approach to construct confidence intervals for a monotone regression function is proposed. A prior distribution is defined on the class of functions on \([0,1]\), supported on a sieve of piecewise constant functions. More specifically, the interval \([0,1]\) is partitioned into \(J\) intervals \(I_{j}=((j-1)/J,j/J]\), \(1\leq j\leq J\). In the notation of the previous section, \(\tau_{j}=j/J\). A draw from the prior distribution is then represented by
\[f_{\boldsymbol{\theta}}=\sum_{j=1}^{J}\theta_{j}1_{I_{j}},\qquad\boldsymbol{ \theta}=(\theta_{1},\ldots,\theta_{J}), \tag{1}\]
where the \(\theta_{j}\) are independent normal random variables with expectation \(\zeta_{j}\) and variance \(\sigma_{0}^{2}\lambda_{j}^{2}\), where \(0<\lambda_{j}<\infty\) (including noise variance \(\sigma_{0}^{2}\) as a factor is only done for convenience in formulas to follow). Note that function (1) will not automatically be monotone, a requirement that would seem natural in this setting. The main reason not to impose this, is that with this prior distribution, the posterior distribution can be conveniently analytically computed. Indeed, as seen in the Appendix, the posterior distribution of \(\boldsymbol{\theta}\) has independent coordinates, where \(\theta_{j}\) has distribution
\[\theta_{j}\sim N\left(\frac{n_{j}\bar{y}_{j}+\zeta_{j}/\lambda_{j}^{2}}{n_{j}+1/\lambda_{j}^{2}},\frac{\sigma_{0}^{2}}{n_{j}+1/\lambda_{j}^{2}}\right),\qquad n_{j}=\sum_{i=1}^{n}1_{I_{j}}(x_{i}). \tag{2.2}\]
Here, as before, \(\bar{y}_{j}\) is the mean of the \(y_{i}\) for the \(x_{i}\) belonging to the \(j\)-th interval \(I_{j}\). As mentioned in the previous section, this corresponds to the MLE of \(f\) over the (nonrestricted) class of functions which are constant on the intervals \(I_{j}\). A draw from the posterior on the set of piecewise constant functions on \([0,1]\) proceeds via (2.1), based on a draw from the posterior of \(\boldsymbol{\theta}\). The resulting function will in general not be monotone, so the support of the posterior extends outside the set of monotone functions on \([0,1]\).
Following ideas of [14], [2] and [3], in [5] a draw \(f_{\boldsymbol{\theta}}\) from the 'raw posterior' is subsequently projected on the set of nondecreasing functions on \([0,1]\), piecewise constant on the intervals \(I_{j}\), \(1\leq j\leq J\), via weighted isotonic regression. This projection \(f_{\boldsymbol{\theta}}^{*}\) is computed using Lemma 2.1 of [8]. This boils down to computing the left derivative of the greatest convex minorant of the cusum diagram consisting of the points \(P_{j}\), for \(0\leq j\leq J\) with \(P_{0}=(0,0)\) and
\[P_{j}=\left(\frac{1}{n}\sum_{k=1}^{j}n_{k},\frac{1}{n}\sum_{k=1}^{j}n_{k} \theta_{k}\right),\ 1\leq j\leq J\]
if all \(n_{j}>0\), see (3) in [5] (note that Lemma 2.1 in [8] has the condition that all weights are _strictly_ positive). It is clear that computing the isotonic regression can be restricted to those \(j\) with \(n_{j}>0\) and that for those \(j\) with \(n_{j}=0\), \(f_{\boldsymbol{\theta}}^{*}\) can be given any value such that monotonicity is not violated.
In this procedure, various choices need to be made. One is the number of intervals \(J\). In [5], the asymptotic bounds
\[n^{1/3}\ll J\ll n^{2/3} \tag{2.3}\]
are given and the closest integer to \(n^{1/3}\log n\) is chosen in the simulations. Here the symbol "\(\ll\)" means "is of lower order than", as \(n\to\infty\). Also the noise variance \(\sigma_{0}^{2}\) needs to be dealt with. For this, [5] choose the natural empirical Bayes estimate (fixing \(\zeta\) and \(\Lambda\)), given by
\[\hat{\sigma}_{n}^{2}=n^{-1}(\boldsymbol{y}-\boldsymbol{B}\boldsymbol{\zeta})^{T}(\boldsymbol{B}\boldsymbol{\Lambda}\boldsymbol{B}^{T}+\boldsymbol{I})^{-1}(\boldsymbol{y}-\boldsymbol{B}\boldsymbol{\zeta}),\qquad\boldsymbol{\Lambda}=\text{diag}(\lambda_{1}^{2},\ldots,\lambda_{J}^{2}), \tag{2.4}\]
where \(\mathbf{B}=(b_{ij})\) is the \(n\times J\) 'design matrix' with entries \(b_{ij}=1_{I_{j}}(x_{i})\), corresponding to the regression model \(\mathbf{y}=\mathbf{B}\mathbf{\theta}+\mathbf{\epsilon}\) following from the representation \(f_{\mathbf{\theta}}(x_{i})=(\mathbf{B}\mathbf{\theta})_{i}\); see the Appendix. As also shown in the Appendix, this estimate can be rewritten as
\[\hat{\sigma}_{n}^{2} =\frac{1}{n}\sum_{j=1}^{J}\sum_{x_{i}\in I_{j}}\left(y_{i}-\bar{y }_{j}\right)^{2}+\frac{1}{n}\sum_{j=1}^{J}\frac{n_{j}(\bar{y}_{j}-\zeta_{j})^{ 2}}{1+n_{j}\lambda_{j}^{2}}\] \[=\frac{1}{n}\sum_{i=1}^{n}\left(y_{i}-\bar{f}(x_{i})\right)^{2}+ \frac{1}{n}\sum_{j=1}^{J}\frac{n_{j}(\bar{y}_{j}-\zeta_{j})^{2}}{1+n_{j} \lambda_{j}^{2}} \tag{2.5}\]
where \(\bar{f}(x)=\sum_{j=1}^{J}\bar{y}_{j}1_{I_{j}}(x)\) is the aforementioned maximum likelihood estimate of \(f_{0}\) over all piecewise constant (not necessarily monotone) functions on the intervals \(I_{j}\). The first term in this expression is the mean of the squared residuals of the observations with respect to \(\bar{f}\). This is a quite natural estimator of the variance. The influence of the hyper parameters \(\mathbf{\zeta}\) and \(\mathbf{\Lambda}\) on the estimate of \(\sigma_{0}^{2}\) can be inferred from the second term in the expression.
As shown in the Appendix, the empirical Bayes estimate for \(\zeta\), not taking into account monotonicity, is given by
\[\bar{\zeta}=(\bar{y}_{1},\bar{y}_{2},\ldots,\bar{y}_{J})^{T}. \tag{2.6}\]
Substituting this in (2.5) makes the second term vanish. Using the empirical Bayes estimate over the monotone vectors \(\zeta\), being the isotonic regression of \(\bar{\zeta}\) with weights \(n_{j}/(1+n_{j}\lambda_{j}^{2})\) increases the empirical Bayes estimate for \(\sigma_{0}^{2}\).
With the choices \(\mathbf{\zeta}=0\) and \(\lambda_{j}\equiv\lambda\) made in [5], the empirical Bayes estimate for \(\sigma_{0}^{2}\) becomes
\[\hat{\sigma}_{n}^{2}=\frac{1}{n}\sum_{i=1}^{n}\left(y_{i}-\bar{f}(x_{i}) \right)^{2}+\frac{1}{n}\sum_{j=1}^{J}\frac{n_{j}\bar{y}_{j}^{2}}{1+n_{j} \lambda^{2}} \tag{2.7}\]
For relatively large values of \(\lambda^{2}n_{j}\), the second term becomes negligible to the first.
Because the density \(g\) generating the \(X_{i}\)'s is nonvanishing on \([0,1]\), the (random) number \(N_{j}\) of points in intervals of length of the order \(1/J=1/J_{n}\) is (in the setting of [5]) of the order \(n/J_{n}\). With the restriction \(n^{1/3}<<J_{n}<<n^{2/3}\), this means that \(N_{j}\) will be of bigger order than \(n^{1/3}\); taking \(J_{n}\approx n^{1/3}\log n\), \(N_{j}\) will be of order \(n^{2/3}/\log n\). Therefore, for reasonable choice of \(\lambda\), \(\lambda^{2}N_{j}>>1\) with high probability when \(n\) is large.
Considering (2.2) with \(\zeta_{j}\equiv 0\) and \(\lambda_{i}\equiv\lambda\), fixed, as chosen in [5], a draw from the raw posterior of \(\theta_{j}\) can be viewed as
\[\theta_{j}=\bar{f}(j/J)+\tilde{\epsilon}_{j}\]
where
\[\bar{f}(j/J)=\frac{\bar{y}_{j}}{1+1/(n_{j}\lambda^{2})}\text{ and }\tilde{ \epsilon}_{j}\sim^{\text{indep}}N\left(0,\frac{\sigma_{0}^{2}/n_{j}}{1+1/(n_{j} \lambda^{2})}\right), \tag{2.8}\]
where \(\sigma_{0}^{2}\) is estimated by its empirical Bayes estimate (2.7). Again due to the restriction \(n^{1/3}\ll J_{n}\ll n^{2/3}\), \(\bar{f}\) in (2.8) is a (generally non-monotone) local average estimator of \(f_{0}\). The added noise \((\tilde{\epsilon}_{j})\) is normal and reflects the variance of the original \(\bar{Y}_{j}\). This means that the draw from the projected posterior is computed as the left derivative of the cumulative sum diagram consisting of the points \(P_{j}\), \(0\leq j\leq J\) with \(P_{0}=(0,0)\) and
\[P_{j}=\left(\sum_{k=1}^{j}n_{k},\sum_{k=1}^{j}n_{k}\left(\frac{\bar{y}_{k}}{1+1 /(n_{k}\lambda^{2})}+\tilde{\epsilon}_{k}\right)\right) \tag{2.9}\]
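A single draw from the projected posterior thus amounts to adding the noise of (2.8) to the shrunken bin means and projecting via a weighted isotonic regression of the cusum diagram (2.9). The sketch below assumes the binned quantities \(n_{j}\), \(\bar{y}_{j}\) and the variance estimate from the previous sketch, and uses scikit-learn's isotonic regression for the projection; it is an illustration in Python, not the R code of [5]. Repeating such draws and taking pointwise percentiles gives the credible intervals discussed next.

```python
# One draw f_theta^* from the projected posterior (sketch; n_j, ybar_j and
# sigma2_hat are assumed to come from the empirical Bayes sketch above).
import numpy as np
from sklearn.isotonic import IsotonicRegression


def draw_projected_posterior(n_j, ybar_j, sigma2_hat, lam, rng):
    keep = n_j > 0                                        # bins with observations
    shrink = 1.0 + 1.0 / (n_j[keep] * lam**2)
    mean = ybar_j[keep] / shrink                          # posterior mean, cf. (2.8)
    sd = np.sqrt(sigma2_hat / (n_j[keep] * shrink))       # posterior sd, cf. (2.8)
    theta = mean + rng.normal(0.0, sd)                    # raw posterior draw
    iso = IsotonicRegression(increasing=True)
    iso.fit(np.flatnonzero(keep), theta, sample_weight=n_j[keep])  # projection via (2.9)
    f_star = np.full(n_j.size, np.nan)
    f_star[keep] = iso.predict(np.flatnonzero(keep))
    return f_star
```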
In [5], the following example is considered:
\[f_{0}(x)=x^{2}+x/5,\qquad x\in[0,1].\]
Here the \(X_{i}\) are independently uniformly distributed on \([0,1]\) and the \(\varepsilon_{i}\) have a normal \(N(0,0.01)\) distribution. The choices for \(J\), \(\zeta_{j}\) and \(\lambda_{j}\) are \(J=\lfloor n^{1/3}\log n\rfloor\), \(\zeta_{j}=0\) and \(\lambda_{j}=10\), following the parametrization in the R code, kindly sent to us by Moumita Chakraborty. A picture of a single draw \(f_{\mathbf{\theta}}\) from the raw posterior and its isotonic projection \(f_{\mathbf{\theta}}^{*}\), for a sample of size \(n=500\) is shown in Figure 1.
Now, one can generate \(1000\) posterior samples of \(\mathbf{\theta}=(\theta_{1},\ldots,\theta_{J})\) from the posterior normal distribution, specified in (2.2), and consider the \(\frac{1}{2}\alpha\)th and \((1-\frac{1}{2}\alpha)\)th percentiles of the isotonic projections \(\hat{f}_{\mathbf{\theta}}^{*}(t)\) at a fixed point \(t\). Would this give us, at least asymptotically, valid \(95\%\) confidence intervals for \(f_{0}(t)\)?
The question is answered in [5] by Theorem 3.3 on p. 1017: \(\Pi\left(n^{1/3}\{\hat{f}_{\mathbf{\theta}}^{*}(t)-f_{0}(t)\}\leq z|D_{n}\right)\) converges to a limit distribution, leading to wider intervals than in the situation in which we have the Chernoff distribution as limit. The fraction by which they become wider is given in [5].
## 3 Credible intervals as bootstrap percentile confidence intervals
### The percentile bootstrap for the regression model
In the Bayes approach, we considered random parameters \(\theta_{j}\), with (posterior) distribution given in (2.2). In the simulations accompanying the paper [5], the prior parameter \(\mathbf{\zeta}\) was taken to be \(\mathbf{0}\) and \(\lambda_{j}\equiv\lambda>0\). Moreover, the empirical Bayes estimator \(\hat{\sigma}_{n}^{2}\) was taken as the estimator for \(\sigma_{0}^{2}\).
With these choices, we get:
\[\theta_{j}\sim N\left(\frac{\bar{y}_{j}}{1+1/(n_{j}\lambda^{2})},\frac{\hat{ \sigma}_{n}^{2}}{n_{j}\big{\{}1+1/(n_{j}\lambda^{2})\big{\}}}\right),1\leq j \leq J,\text{ independently}.\]
Figure 1: (a) A single draw \(f_{\mathbf{\theta}}\) (blue) from the raw posterior and (b) the corresponding projection \(f_{\mathbf{\theta}}^{*}\) (blue), for a sample of size \(n=1000\) and \(f_{0}(x)=x^{2}+x/5\) (red, dashed).
This means asymptotically, in first order:
\[\theta_{j}\sim N\left(\bar{y}_{j},\frac{\hat{\sigma}_{n}^{2}}{n_{j}}\right),\ 1 \leq j\leq J \tag{3.1}\]
if we keep \(\lambda\) bounded away from zero (\(\lambda^{2}=100\) was taken in the simulations accompanying [5]). Next, the confidence intervals were determined by taking the percentiles of simulated values of the (weighted) monotonic projections of the \(\theta_{j}\)'s with distribution given by (3.1).
Algorithmically, this can be viewed as a percentile bootstrap method, where a bootstrap sample is generated by adding noise to an estimate of the regression function. The regression estimate in this setting is the (weighted) least squares estimate of \(f_{0}\), piecewise constant on intervals \(I_{j}\) and not taking into account the monotonicity constraint (so: \(\bar{y}_{j}\) on \(I_{j}\)). The noise is sampled from a centered normal distribution with estimated variance. Then, \(\hat{f}_{n}^{*}\) is determined by computing the (weighted) isotonic regression based on the bootstrap dataset. Adopting the "bootstrap notation" rather than the "Bayesian notation" \(\theta_{j}\), define
\[Y_{j}^{*}\sim N(\bar{Y}_{j},\hat{\sigma}_{n}^{2}/N_{j}),\qquad j=1,\ldots,J \tag{3.2}\]
and note that given the original data, \((Y_{1}^{*},\ldots,Y_{J}^{*})=^{D}(\theta_{1},\ldots,\theta_{J})\), in view of (3.1) and (3.2). Using this notation, \(\hat{f}_{n}^{*}\) is found by taking the left derivative of the convex minorant of the cusum diagram, running through the points
\[P_{j}^{*}=\left(\sum_{i=1}^{j}N_{i},\sum_{i=1}^{j}N_{i}Y_{i}^{*}\right),\qquad j =1,\ldots,J. \tag{3.3}\]
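A sketch of the resulting percentile bootstrap interval at a fixed point \(t_{0}\) is given below, reusing the binned quantities and the isotonic projection of the earlier sketches and assuming every bin contains at least one observation (which holds with high probability for \(J\asymp n^{1/3}\log n\)); as before, this is an illustrative Python reconstruction, not the code used in [5].

```python
# Percentile bootstrap interval for f_0(t0) (sketch; assumes all N_j > 0 and
# reuses n_j, ybar_j and sigma2_hat from the earlier sketches).
import numpy as np
from sklearn.isotonic import IsotonicRegression


def percentile_bootstrap_ci(n_j, ybar_j, sigma2_hat, t0, B=1000, alpha=0.05, seed=2):
    rng = np.random.default_rng(seed)
    J = n_j.size
    j0 = min(int(t0 * J), J - 1)                          # bin containing t0
    draws = np.empty(B)
    for b in range(B):
        y_star = ybar_j + rng.normal(0.0, np.sqrt(sigma2_hat / n_j))  # Y_j^* as in (3.2)
        iso = IsotonicRegression(increasing=True)
        iso.fit(np.arange(J), y_star, sample_weight=n_j)  # convex minorant of cusum (3.3)
        draws[b] = iso.predict(np.array([j0]))[0]
    return np.quantile(draws, [alpha / 2, 1 - alpha / 2])
```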
To study the asymptotic behavior of \(\hat{f}_{n}^{*}\), we define the (local) "bootstrap" process
\[\widetilde{W}_{n}^{*}(t)=n^{-1/3}\Bigg{\{}\sum_{j:j/J\in[0,t_{0}+n^{-1/3}t]}N_ {j}\left\{Y_{j}^{*}-\bar{Y}_{j}\right\}-\sum_{j:j/J\in[0,t_{0}]}N_{j}\left\{ Y_{j}^{*}-\bar{Y}_{j}\right\}\Bigg{\}} \tag{3.4}\]
and the (local) "sample" process \(\widetilde{W}_{n}\) by:
\[\widetilde{W}_{n}(t)=n^{-1/3}\Bigg{\{}\sum_{j:j/J\in[0,t_{0}+n^{-1/3}t]}1_{ \left\{N_{j}>0\right\}}N_{j}\Big{\{}\bar{Y}_{j}-\frac{n}{N_{j}}\int_{I_{j}}f_ {0}(u)\,d\mathbb{G}_{n}(u)\Big{\}}\\ -\sum_{j:j/J\in[0,t_{0}]}1_{\left\{N_{j}>0\right\}}N_{j}\Big{\{} \bar{Y}_{j}-\frac{n}{N_{j}}\int_{I_{j}}f_{0}(u)\,d\mathbb{G}_{n}(u)\Big{\}} \Bigg{\}}. \tag{3.5}\]
With these definitions we have the following theorem, similar to Theorem 3.3 in [5].
**Theorem 3.1**.: _Let \(D_{n}=\{(X_{1},Y_{1}),\ldots,(X_{n},Y_{n})\}\) and let \(\hat{f}_{n}^{*}\) be a draw generated according to the bootstrap procedure described above. Then, for each fixed \(x\in\mathbb{R}\), as \(n\to\infty\),_
\[\begin{split}&\mathbb{P}\left(n^{1/3}\big{\{}\hat{f}_{n}^{*}(t_{0})-f_{ 0}(t_{0})\big{\}}\leq x\,\Big{|}\,D_{n}\right)\\ &\stackrel{{\mathcal{D}}}{{\longrightarrow}}\mathbb{P }\left[\left(\frac{4\sigma_{0}^{2}f_{0}^{\prime}(t_{0})}{g(t_{0})}\right)^{1/ 3}\text{\rm argmin}_{t\in\mathbb{R}}\left[W_{1}(t)+W_{2}(t)+t^{2}\right]\leq x \,\Big{|}\,W_{1}\right]\end{split} \tag{3.7}\]
_where \(W_{1}\) and \(W_{2}\) are independent standard two-sided Brownian motions._
Note that, for \(t>0\),
\[n^{-1/3}\Biggl{\{}\sum_{j:j/J\in[0,t_{0}+n^{-1/3}t]}N_{j}\bigl{\{}Y_ {j}^{*}-f_{0}(t_{0})\bigr{\}}-\sum_{j:j/J\in[0,t_{0}]}N_{j}\bigl{\{}Y_{j}^{*}-f_{ 0}(t_{0})\bigr{\}}\Biggr{\}}\] \[=\widetilde{W}_{n}^{*}(t)+\widetilde{W}_{n}(t)+n^{-1/3}\sum_{j:j /J\in[t_{0},t_{0}+n^{-1/3}t]}n\,\int_{I_{j}}\bigl{\{}f_{0}(u)-f_{0}(t_{0}) \bigr{\}}\,d\mathbb{G}_{n}(u) \tag{3.8}\] \[\sim\widetilde{W}_{n}^{*}(t)+\widetilde{W}_{n}(t)+\tfrac{1}{2}f_ {0}^{\prime}(t_{0})g(t_{0})t^{2},\]
with a similar expansion for \(t\leq 0\).
So the percentile bootstrap estimates have the same behavior as the Bayes estimates in [5]. Histograms of estimates of the posterior probabilities for the Bayesian procedure in [5] and the corresponding conditional probabilities of the percentile bootstrap in Theorem 3.1 for varying \(D_{n}\) of size \(n=2000\) and \(t_{0}=0.5\) are shown in Figure 2. The estimates are the relative frequencies in \(1000\) posterior, resp. percentile bootstrap samples for each of the original (1000) samples.
Remark 3.1. Let \(\Delta_{n}=\Pi\{n^{1/3}\bigl{\{}f_{n}^{*}(t_{0})-\hat{f}_{n}(t_{0})\bigr{\}}\leq 0|D_{n}\}\). In Figure 1 on p. 1017 of [5] three pictures of \(\Delta_{n}\) are shown for three different sets of simulated data, where \(\hat{f}_{n}\) is the LSE. It is not completely clear to us how \(\Delta_{n}\) is sampled here. Since we do not have an explicit expression for \(\Delta_{n}\), it seems that an estimate of \(\Delta_{n}\) has to be based on a sample of posterior draws \(f^{*}(t_{0})\). If we use such a procedure and consider the fluctuation of \(\Delta_{n}\) as a function of \(D_{n}\), we get a histogram similar to the histograms in Figure 1 of [5]. The estimates are relative frequencies in \(1000\) samples of size \(2000\). See Figure 3.
Theorem 3.1 is the consequence of the following two lemmas.
**Lemma 3.1**.: _Let \(W\) be standard two-sided Brownian motion on \(\mathbb{R}\), originating from zero. Let \(D(\mathbb{R})\) be the space of right continuous functions, with left limits (cadlag functions) on \(\mathbb{R}\), equipped with the metric of uniform convergence on compact sets, and let \(t_{0}\in(0,1)\). Let \(\widetilde{W}_{n}^{*}\) be defined by (3.4). Then, along almost all sequences \((X_{1},Y_{1}),(X_{2},Y_{2}),\dots\), the process \(\widetilde{W}_{n}^{*}\) defined by (3.4) converges in \(D(\mathbb{R})\) in distribution conditionally to the process \(V\), defined by_
\[V(t)=\sigma_{0}\sqrt{g(t_{0})}\,W(t),\,t\in\mathbb{R}. \tag{3.9}\]
_Here \(W\) is standard two-sided Brownian motion, originating from zero._
Proof.: We consider the case \(t\geq 0\). It is clear that \(t\mapsto\widetilde{W}_{n}^{*}(t)\) is a martingale with respect to the family of \(\sigma\)-algebras \(\mathscr{F}_{n,t}^{*},\,t\geq 0\), defined by:
\[\mathscr{F}_{n,t}^{*}=\sigma\left\{(j/J,\bar{y}_{j}):j/J\in(t_{0},t_{0}+n^{-1/3}t]\right\},\qquad t\geq 0.\]
The quadratic variation process is, for \(t\geq 0\) given by:
\[\left[\widetilde{W}_{n}^{*}\right](t)=n^{-2/3}\sum_{j:j/J\in(t_{0},t_{0}+n^{- 1/3}t]}N_{j}^{2}\left\{\theta_{j}^{*}-\bar{y}_{j}\right\}^{2}\sim n^{-2/3}\sum _{j:j/J\in(t_{0},t_{0}+n^{-1/3}t]}N_{j}\sigma_{0}^{2}\,.\]
If, for example as in ([5]), \(J\sim n^{1/3}\log n\), we get:
\[N_{j}\sim n^{2/3}g(t_{0})/\log n,\]
and
\[n^{-2/3}\sum_{j:j/J\in(t_{0},t_{0}+n^{-1/3}t]}N_{j}\sigma_{0}^{2}\sim(\log n) ^{-1}\sum_{j:j/J\in(t_{0},t_{0}+n^{-1/3}t]}g(t_{0})\sigma_{0}^{2}\sim\sigma_{ 0}^{2}g(t_{0})\,t.\]
The case \(t<0\) is treated similarly. The result now follows from Rebolledo's theorem, see Theorem 3.6, p. 68 of [8] and [15].
**Lemma 3.2**.: _Let \(W\), \(t_{0}\) and \(V\) be as defined in Lemma 3.1 and \(\widetilde{W}_{n}\) by (3.5). Then the process \(\widetilde{W}_{n}\) converges in \(D(\mathbb{R})\) in distribution, conditionally on the sequence \(X_{1},X_{2},\ldots\), to the process \(V\)._
Proof of Lemma 3.2.: This is proved in the same way as Lemma 3.1, using that (3.5) is a martingale.
Proof of Theorem 3.1.: We use the "switch relation" (see, e.g., Section 3.8 of [8] and Section 5.1 of [10]; the terminology is due to Iain Johnstone, to denote a construction introduced in a course given by the first author at Stanford in 1990). The bootstrap estimate \(f_{n}^{*}\) is computed as the left derivative of the greatest convex minorant of the cumulative sum diagram (3.3). Let the processes \(G_{n}\) and \(V_{n}^{*}\) be defined by
\[G_{n}(t)=\sum_{j/J\leq t}N_{j}/n,\qquad V_{n}^{*}(t)=\sum_{j/J\leq t}N_{j}Y_{j }^{*}/n,\qquad t\in[0,1].\]
Moreover, let \(U_{n}^{*}\) be defined by
\[U_{n}^{*}(a)=\text{argmin}\{t\in[0,1]:V_{n}^{*}(t)-aG_{n}(t)\},\]
for \(a\) in the range of \(f_{0}\). Then we have the "switch relation":
\[\hat{f}_{n}^{*}(t)\geq a\iff G_{n}(t)\geq G_{n}(U_{n}^{*}(a))\iff t\geq U_{n} ^{*}(a),\]
(compare with (3.35), p. 69 of [8]). So we get if \(a=f_{0}(t_{0})\),
\[\mathbb{P}\left\{n^{1/3}\{\hat{f}_{n}^{*}(t_{0})-f_{0}(t_{0})\}\geq x|D_{n}\right\}=\mathbb{P}\left\{\hat{f}_{n}^{*}(t_{0})\geq a+n^{-1/3}x|D_{n}\right\}\] \[=\,\mathbb{P}\left\{U_{n}^{*}(a+n^{-1/3}x)\leq t_{0}|D_{n}\right\}=\mathbb{P}\left\{n^{1/3}\big{\{}U_{n}^{*}(a+n^{-1/3}x)-t_{0}\big{\}}\leq 0|D_{n}\right\}\] \[=\,\mathbb{P}\left\{\text{argmin}\left[t\in[0,1]:V_{n}^{*}(t)-(a+n^{-1/3}x)\,G_{n}(t)\right]\leq t_{0}|D_{n}\right\}\] \[=\,\mathbb{P}\left\{\text{argmin}\left[t\in[0,1]:V_{n}^{*}(t)-V_{n}^{*}(t_{0})-(a+n^{-1/3}x)\{G_{n}(t)-G_{n}(t_{0})\}\right]\leq t_{0}|D_{n}\right\},\]
where the last equality holds since the values of the argmin function do not change if we add constants to the function for which we determine the argmin.
By Lemmas 3.1 and 3.2 we get the local expansion:
\[\mathbb{P}\left\{n^{1/3}\big{\{}\hat{f}_{n}^{*}(t_{0})-f_{0}(t_{0})\big{\}}\geq x\,\Big{|}\,D_{n}\right\}\] \[\sim\,\mathbb{P}\left[\text{argmin}_{t}\left[\tilde{W}_{n}^{*}(t)+\tilde{W}_{n}(t)+\tfrac{1}{2}f_{0}^{\prime}(t_{0})g(t_{0})t^{2}-xg(t_{0})t\right]\leq 0\,\big{|}\,D_{n}\right],\]
which (using Brownian scaling) converges in distribution to
\[\mathbb{P}\left[\left(\frac{4\sigma_{0}^{2}f_{0}^{\prime}(t_{0})} {g(t_{0})}\right)^{1/3}\text{argmin}_{t\in\mathbb{R}}\left[W_{1}(t)+W_{2}(t)+t ^{2}\right]\leq-x\,\Big{|}\,W_{1}\right]\] \[= \,\mathbb{P}\left[\left(\frac{4\sigma_{0}^{2}f_{0}^{\prime}(t_{0} )}{g(t_{0})}\right)^{1/3}\text{argmin}_{t\in\mathbb{R}}\left[W_{1}(t)+W_{2}(t)+ t^{2}\right]\geq x\,\Big{|}\,W_{1}\right].\]
The last line uses the type of symmetry used in the proof of Theorem 5.2 of [10].
In the proof of Theorem 3.1 we use the tightness of \(n^{1/3}\{U_{n}^{*}(a+n^{-1/3}x)-t_{0}\}\), which can be proved along entirely similar lines as the proof of Lemma 3.5 in [8].
### Convergence of a classical percentile bootstrap
It is of interest to investigate what happens if we perform a classical empirical bootstrap, where we resample with replacement from the pairs \((X_{i},Y_{i})\). This situation, where we also treat the \(X_{i}\) as random from the start instead of keeping them fixed, is called the "correlation model" in [12]. In this case we compute the local means
\[\bar{Y}_{j}^{*}=(N_{j}^{*})^{-1}\sum_{X_{i}^{*}\in I_{j}}Y_{i}^{*},\qquad I_{j} =((j-1)/J,j/J],\qquad N_{j}^{*}=\#\{i:X_{i}^{*}\in I_{j}\}, \tag{3.10}\]
where the \((X_{i}^{*},Y_{i}^{*})\) are (discretely) uniformly (re-)sampled with replacement from the set \(D_{n}=\{(X_{1},Y_{1}),\ldots,(X_{n},Y_{n})\}\). If \(N_{j}^{*}=0\) we define \(\bar{Y}_{j}^{*}=0\); these values play no role in the isotonization step.
Note that we can write alternatively, if \(N_{j}^{*}>0\):
\[\bar{Y}_{j}^{*}=(N_{j}^{*})^{-1}\sum_{X_{i}\in I_{j}}M_{in}^{*}Y_{i},\qquad N _{j}^{*}=\sum_{i:X_{i}\in I_{j}}M_{in}^{*}, \tag{3.11}\]
where
\[(M_{1n}^{*},\ldots,M_{nn}^{*})\sim\text{Multinomial}\left(n;n^{-1},\ldots,n^{- 1}\right).\]
This means that
\[\mathbb{E}\left\{N_{j}^{*}\bar{Y}_{j}^{*}\;\big{|}\;(X_{1},Y_{1}),\ldots,(X_{n},Y_{n})\right\}=N_{j}\bar{Y}_{j},\qquad j=1,\ldots,J.\]
The points of the cusum diagram needed to compute the bootstrap realization of the LSE are given by
\[P_{j}^{*}=\left(\sum_{i=1}^{j}N_{i}^{*},\sum_{i=1}^{j}N_{i}^{*}\bar{Y}_{i}^{* }\right),\qquad j=1,\ldots,J.\]
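As an illustration (not part of the paper; the pool-adjacent-violators routine below is an ad hoc helper), one classical-bootstrap realization \(\hat{f}_{n}^{*}\) can be computed by resampling the pairs, forming the binned quantities of (3.10), and taking the weighted isotonic regression of the \(\bar{Y}_{j}^{*}\) with weights \(N_{j}^{*}\), which coincides with the left derivative of the greatest convex minorant of the cusum diagram built from the points \(P_{j}^{*}\):

```python
import numpy as np

def pava(y, w):
    """Weighted isotonic (nondecreasing) regression by pool-adjacent-violators."""
    vals, wts, cnt = [], [], []
    for yi, wi in zip(map(float, y), map(float, w)):
        vals.append(yi); wts.append(wi); cnt.append(1)
        while len(vals) > 1 and vals[-2] > vals[-1]:
            pooled = (wts[-2] * vals[-2] + wts[-1] * vals[-1]) / (wts[-2] + wts[-1])
            wts[-2] += wts[-1]; cnt[-2] += cnt[-1]; vals[-2] = pooled
            vals.pop(); wts.pop(); cnt.pop()
    return np.repeat(vals, cnt)

def bootstrap_lse_binned(x, y, J, rng):
    """One classical-bootstrap LSE on binned means, as in (3.10)."""
    n = len(x)
    idx = rng.integers(0, n, size=n)                   # resample pairs with replacement
    xs, ys = x[idx], y[idx]
    bins = np.minimum((xs * J).astype(int), J - 1)     # I_j = ((j-1)/J, j/J]
    N_star = np.bincount(bins, minlength=J).astype(float)
    sums = np.bincount(bins, weights=ys, minlength=J)
    ybar_star = np.divide(sums, N_star, out=np.zeros(J), where=N_star > 0)
    # empty bins get weight 0 and play no role in the isotonization step
    return pava(ybar_star, N_star)                     # values of f*_n on the J bins
```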
In order to study the local asymptotics of the greatest convex minorant of this diagram, we consider the process
\[\widetilde{W}_{n}^{*}(t)=n^{-1/3}\Bigg{\{}\sum_{j:j/J\in[0,t_{0} +n^{-1/3}t]}\sum_{X_{i}\in I_{j}}(M_{in}^{*}-1)(Y_{i}-a_{0}) \tag{3.12}\] \[-\sum_{j:j/J\in[0,t_{0}]}\sum_{X_{i}\in I_{j}}(M_{in}^{*}-1)(Y_{i }-a_{0})\Bigg{\}},\]
where \(a_{0}=f_{0}(t_{0})\), and
\[\widetilde{W}_{n}(t)=n^{-1/3}\Bigg{\{}\sum_{j:j/J\in[0,t_{0}+n^{-1/3}t]}\sum_ {X_{i}\in I_{j}}(Y_{i}-a_{0})-\sum_{j:j/J\in[0,t_{0}]}\sum_{X_{i}\in I_{j}}(Y _{i}-a_{0})\Bigg{\}}.\]
Defining
\[U_{n}(a)=\text{argmin}\Big{\{}t\in[0,1]:n^{-1}\sum_{j:j/J\leq t}\left\{N_{j} ^{*}\bar{Y}_{j}^{*}-a\,N_{j}^{*}\right\}\Big{\}},\]
results analogous to Lemmas 3.1 and 3.2 hold. For example we get Lemma 3.3 (the analogue to Lemma 3.1) which is proved in the Appendix.
**Lemma 3.3**: _Let \(W\), \(t_{0}\) and \(V\) be as defined in Lemma 3.1 and \(\widetilde{W}_{n}^{*}\) be defined by (3.12). Then, along almost all sequences \((X_{1},Y_{1}),(X_{2},Y_{2}),\ldots\), the process \(\widetilde{W}_{n}^{*}\) converges in \(D(\mathbb{R})\) in distribution conditionally to the process \(V\)._
So we get the same behavior as in subsection 3.1, but the present approach has the interesting feature that we do not have to estimate the variance of the errors separately. We can just resample with replacement from the original sample \((X_{1},Y_{1}),\ldots,(X_{n},Y_{n})\) and compute the estimator \(\hat{f}_{n}^{*}\) in the bootstrap samples.
The simulations, based on the regression function \(f(x)=x^{2}+x/5\) with normal noise with expectation \(0\) and variance \(0.01\), show almost no difference between the three methods if \(n=20,000\), see Figure 4. At smaller sample sizes, like, e.g., \(n=1000\), the overcoverage has still not set in, as can be seen in Figure 5. So here too, the phenomenon of overcoverage only seems to occur at very large sample sizes.
Figure 4: Coverage percentages for credible intervals and percentile confidence intervals, for \(n=20,000\). (a) credible intervals, (b) percentile confidence intervals of section 3.1, (c) percentile confidence intervals of section 3.2. The red line is at level 0.96324, but the intervals are based on the \(0.025\) and \(0.975\) quantiles of the credible, respectively percentile bootstrap simulations. The level 0.96324 was determined from the values of the function \(A\) in [5].
Figure 5: Coverage percentages for credible intervals and percentile confidence intervals, for \(n=1000\). (a) credible intervals, (b) percentile confidence intervals of section 3.1, (c) percentile confidence intervals of section 3.2. The red line is at level 0.96324, but the intervals are based on the \(0.025\) and \(0.975\) quantiles of the credible, respectively percentile bootstrap simulations. The level 0.96324 was determined from the values of the function \(A\) in [5].
## 4 Cube root \(n\) consistent smoothed bootstrap confidence intervals
In [18] it was shown for interval censoring models that cube root \(n\) convergent bootstrap confidence intervals can be computed for the distribution function at a fixed point with the right asymptotic coverage. Key to this is the convergence of the nonparametric maximum likelihood estimator to Chernoff's distribution. We show that a similar approach is possible in the present context.
In the regression context this means that we use, as in [9], the smoothed least squares estimator (the SLSE) \(\tilde{f}_{nh}\), for a bandwidth \(h=cn^{-1/5}\). To define \(\tilde{f}_{nh}\), let \(K\) be a symmetric twice continuously differentiable nonnegative kernel with support \([-1,1]\) such that \(\int K(u)\,du=1\). Let \(h>0\) be a bandwidth and define the scaled kernel \(K_{h}\) by
\[K_{h}(u)=\frac{1}{h}K\left(\frac{u}{h}\right),\ \ u\in\mathbb{R}. \tag{4.1}\]
The SLSE \(\tilde{f}_{nh}\) is then for \(t\in[h,1-h]\) defined by
\[\tilde{f}_{nh}(t)=\int K_{h}(t-x)\,\hat{f}_{n}(x)\,dx. \tag{4.2}\]
For \(t\notin[h,1-h]\) we use the boundary correction, defined in [9] (see (2.6) and (2.7) in [9]).
We now generate residuals \(E_{i}\) with respect to (4.2), defined by
\[E_{i}=Y_{i}-\tilde{f}_{nh}(X_{i}),\qquad i=1,\ldots,n,\]
and compute the centered residuals \(\tilde{E}_{i}\),
\[\tilde{E}_{i}=E_{i}-n^{-1}\sum_{j=1}^{n}E_{j},\qquad i=1,\ldots,n. \tag{4.3}\]
From these residuals, we generate bootstrap samples
\[(X_{i},Y_{i}^{*}),\qquad Y_{i}^{*}=\tilde{f}_{nh}(X_{i})+\tilde{E}_{i}^{*}, \qquad i=1,\ldots,n, \tag{4.4}\]
where the \(\tilde{E}_{i}^{*}\) are drawn uniformly with replacement from the residuals \(\tilde{E}_{i}\) defined by (4.3). For the bootstrap samples (4.4), we compute the monotone (non-smoothed) LSE \(\hat{f}_{n}^{*}\) and consider the differences
\[\hat{f}_{n}^{*}(t)-\tilde{f}_{nh}(t), \tag{4.5}\]
and the \(95\%\) bootstrap confidence intervals, given by
\[\left(\hat{f}_{n}(t)-Q_{0.975}^{*},\hat{f}_{n}(t)-Q_{0.025}^{*}\right), \tag{4.6}\]
where \(Q_{0.025}^{*}\) and \(Q_{0.975}^{*}\) are the \(2.5\)th and \(97.5\)th percentiles of \(1000\) bootstrap samples of (4.5) and \(\hat{f}_{n}\) is the LSE in the original sample. Note that this is the more conventional bootstrap approach, rather than the percentile method.
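A compact numerical sketch of this construction (illustrative only; the boundary correction of [9] is ignored by keeping \(t_{0}\in[h,1-h]\), the smoothing integral in (4.2) is approximated by a Riemann sum on a grid, and the twice continuously differentiable triweight kernel is used for \(K\)):

```python
import numpy as np

def pava(y, w):
    """Weighted isotonic (nondecreasing) regression via pool-adjacent-violators."""
    vals, wts, cnt = [], [], []
    for yi, wi in zip(map(float, y), map(float, w)):
        vals.append(yi); wts.append(wi); cnt.append(1)
        while len(vals) > 1 and vals[-2] > vals[-1]:
            pooled = (wts[-2] * vals[-2] + wts[-1] * vals[-1]) / (wts[-2] + wts[-1])
            wts[-2] += wts[-1]; cnt[-2] += cnt[-1]; vals[-2] = pooled
            vals.pop(); wts.pop(); cnt.pop()
    return np.repeat(vals, cnt)

def lse(x, y):
    """Monotone LSE: isotonic regression of y on x, returned as a step function."""
    order = np.argsort(x)
    xs, fit = x[order], pava(y[order], np.ones(len(y)))
    def f(t):
        i = np.clip(np.searchsorted(xs, t, side="right") - 1, 0, len(xs) - 1)
        return fit[i]
    return f

def slse(x, y, h, grid):
    """Smoothed LSE (4.2): kernel smoothing of the step LSE (triweight kernel)."""
    f_hat, dx = lse(x, y), grid[1] - grid[0]
    vals = f_hat(grid)
    K = lambda u: np.where(np.abs(u) <= 1, (35.0 / 32.0) * (1 - u ** 2) ** 3, 0.0)
    return lambda t: float(np.sum(K((t - grid) / h) / h * vals) * dx)

def smoothed_bootstrap_ci(x, y, t0, h, B=1000, seed=None):
    """95% interval (4.6) at t0 from the residual bootstrap around the SLSE."""
    rng = np.random.default_rng(seed)
    grid = np.linspace(0.0, 1.0, 2001)
    f_tilde = slse(x, y, h, grid)
    mu = np.array([f_tilde(xi) for xi in x])
    e = y - mu
    e -= e.mean()                                      # centered residuals (4.3)
    diffs = np.empty(B)
    for b in range(B):
        y_star = mu + rng.choice(e, size=len(e), replace=True)
        diffs[b] = lse(x, y_star)(t0) - f_tilde(t0)    # bootstrap difference (4.5)
    lo, hi = np.quantile(diffs, [0.025, 0.975])
    f_hat_t0 = lse(x, y)(t0)
    return f_hat_t0 - hi, f_hat_t0 - lo                # interval (4.6)
```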
The SLSE \(\tilde{f}_{nh}\), which plays a central role in the construction of these confidence intervals, has the limit behavior specified in the following theorem, which is Theorem 1 in [9].
**Theorem 4.1**.: _Let \(f_{0}\) be a nondecreasing continuous function on \([0,1]\). Let \(X_{1},X_{2},\ldots\) be i.i.d. random variables with continuous density \(g\), staying away from zero on \([0,1]\), where the derivative \(g^{\prime}\) is continuous and bounded on \((0,1)\). Furthermore, let \(\varepsilon_{1},\varepsilon_{2},\ldots\) be i.i.d. random variables distributed according to a sub-Gaussian distribution with expectation zero and variance \(0<\sigma_{0}^{2}<\infty\), independent of the \(X_{i}\)'s. Then consider \(Y_{i}\), defined by_
\[Y_{i}=f_{0}(X_{i})+\varepsilon_{i},\ \ i=1,2,\ldots\]
Suppose \(t_{0}\in(0,1)\) such that \(f_{0}\) has a strictly positive derivative and a continuous second derivative \(f_{0}^{\prime\prime}(t_{0})\neq 0\) at \(t_{0}\). Then, for the SLSE \(\tilde{f}_{nh}\) defined by (4.2) based on the pairs \((X_{1},Y_{1}),\ldots,(X_{n},Y_{n})\), and \(h\sim cn^{-1/5}\) for \(c>0\),_
\[n^{2/5}\left\{\tilde{f}_{nh}(t_{0})-f_{0}(t_{0})\right\}\stackrel{{ \mathscr{D}}}{{\longrightarrow}}N(\beta,\sigma^{2}).\]
_Here_
\[\beta=\tfrac{1}{2}c^{2}f_{0}^{\prime\prime}(t_{0})\int u^{2}K(u)\,du\ \text{ and }\ \sigma^{2}=\frac{\sigma_{0}^{2}}{cg(t_{0})}\int K(u)^{2}\,du. \tag{4.7}\]
_The asymptotically Mean Squared Error optimal constant \(c\) is given by:_
\[c=\left\{\frac{\sigma_{0}^{2}}{g(t_{0})}\int K(u)^{2}\,du\Big{/}\left\{f_{0}^ {\prime\prime}(t_{0})\int u^{2}K(u)\,du\right\}^{2}\right\}^{1/5}.\]
We have the following lemma of which the proof is given in the Appendix.
**Lemma 4.1**.: _Let \(W\), \(t_{0}\) and \(V\) be as defined in Lemma 3.1 and \(\widetilde{W}_{n}^{*}\) be defined by:_
\[\widetilde{W}_{n}^{*}(t)=n^{-1/3}\Bigg{\{}\sum_{i:X_{i}\in[0,t_{0}+n^{-1/3}t]} \tilde{E}_{i}^{*}-\sum_{i:X_{i}\in[0,t_{0}]}\tilde{E}_{i}^{*}\Bigg{\}},\qquad t \in\mathbb{R},\]
_where \(\tilde{E}_{i}^{*}\) is drawn with replacement from the residuals \(\tilde{E}_{i}\), defined by (4.3). Then, along almost all sequences \((X_{1},Y_{1}),(X_{2},Y_{2}),\ldots\), the process \(\widetilde{W}_{n}^{*}\) converges in \(D(\mathbb{R})\) in distribution conditionally to the process \(V\)._
This leads to the following corollary.
**Corollary 4.1**.: _Let the bootstrap LSE \(f_{n}^{*}\) be constructed as in (4.5) and let \(t_{0}\) and the SLSE \(\tilde{f}_{nh}\) be defined as in Theorem 4.1. Then, along almost all sequences \((X_{1},Y_{1}),\ldots\), we have, under the conditions of Theorem 4.1,_
\[n^{1/3}\big{\{}\widehat{f}_{n}^{*}(t_{0})-\tilde{f}_{nh}(t_{0})\big{\}} \stackrel{{\mathscr{D}}}{{\longrightarrow}}Z,\]
Figure 6: Confidence intervals, based on (4.6), for \(n=100,500\) and \(1000\). The blue curve is the nonparametric monotone LSE, and the red dashed curve the real regression function \(f_{0}(x)=x^{2}+x/5\). (a) \(n=100\), (b) \(n=500\), (c) \(n=1000\).
_where_
\[Z=\left(\frac{4\sigma_{0}^{2}f_{0}^{\prime}(t_{0})}{g(t_{0})}\right)^{1/3}\text{ argmin}_{t\in\mathbb{R}}\left[W(t)+t^{2}\right],\]
_and \(W\) is standard two-sided Brownian motion, originating from zero._
Proof. We use the "switch relation" again (see the proof of Theorem 3.1). Let \(G_{n}\) be the empirical distribution function of the \(X_{i}\) and let \(V_{n}^{*}\) be defined by
\[V_{n}^{*}(t)=\sum_{X_{i}\leq t}Y_{i}^{*},\]
where \(Y_{i}^{*}\) is defined by (4.4). Let \(U_{n}^{*}\) be defined by
\[U_{n}^{*}(a)=\text{argmin}\{t\in[0,1]:V_{n}^{*}(t)-aG_{n}(t)\},\]
for \(a\) in the range of \(f_{0}\). Then we have the "switch relation":
\[\hat{f}_{n}^{*}(t)\geq a\iff G_{n}(t)\geq G_{n}(U_{n}^{*}(a))\iff t\geq U_{n} ^{*}(a).\]
Hence we get if \(a_{n}=\tilde{f}_{nh}(t_{0})\) and \(D_{n}=\{(X_{1},Y_{1}),\ldots,(X_{n},Y_{n})\}\),
\[\mathbb{P}\left\{n^{1/3}\{\hat{f}_{n}^{*}(t_{0})-\tilde{f}_{nh}(t_{0})\}\geq x|D_{n}\right\}=\mathbb{P}\left\{\hat{f}_{n}^{*}(t_{0})\geq a_{n}+n^{-1/3}x|D_{n}\right\}\] \[=\mathbb{P}\left\{U_{n}^{*}(a_{n}+n^{-1/3}x)\leq t_{0}|D_{n}\right\}=\mathbb{P}\left\{n^{1/3}\big{\{}U_{n}^{*}(a_{n}+n^{-1/3}x)-t_{0}\big{\}}\leq 0|D_{n}\right\}\] \[=\mathbb{P}\left\{\text{argmin}\left[t\in[0,1]:V_{n}^{*}(t)-(a_{n}+n^{-1/3}x)\,G_{n}(t)\right]\leq t_{0}|D_{n}\right\}\] \[=\mathbb{P}\left\{\text{argmin}\left[t\in[0,1]:V_{n}^{*}(t)-V_{n}^{*}(t_{0})-(a_{n}+n^{-1/3}x)\{G_{n}(t)-G_{n}(t_{0})\}\right]\leq t_{0}|D_{n}\right\},\]
where the last equality holds since the values of the argmin function do not change if we add constants to the function for which we determine the argmin.
We have:
\[\mathbb{P}\left\{\text{argmin}\left[t\in[0,1]:V_{n}^{*}(t)-V_{n}^{*}(t_{0})-(a_{n}+n^{-1/3}x)\{G_{n}(t)-G_{n}(t_{0})\}\right]\leq t_{0}|D_{n}\right\}\] \[=\mathbb{P}\Big{\{}\operatorname{argmin}_{t}\Big{[}W_{n}^{*}(t)+n^{1/3}\int_{(t_{0},t_{0}+n^{-1/3}t]}\tilde{f}_{nh}(u)\,dG_{n}(u)\] \[\qquad-n^{1/3}(\tilde{f}_{nh}(t_{0})+n^{-1/3}x)\{G_{n}(t_{0}+n^{-1/3}t)-G_{n}(t_{0})\}\Big{]}\leq 0|D_{n}\Big{\}}\] \[\sim\mathbb{P}\left\{\operatorname{argmin}_{t}\big{[}W_{n}^{*}(t)+\tfrac{1}{2}\tilde{f}_{nh}^{\prime}(t_{0})g(t_{0})t^{2}-xg(t_{0})t\big{]}\leq 0|D_{n}\right\}\] \[\sim\mathbb{P}\left\{\operatorname{argmin}_{t}\big{[}W_{n}^{*}(t)+\tfrac{1}{2}f_{0}^{\prime}(t_{0})g(t_{0})t^{2}-xg(t_{0})t\big{]}\leq 0|D_{n}\right\}\] \[\longrightarrow\mathbb{P}\left\{\operatorname{argmin}_{t}\left[\sigma_{0}\sqrt{g(t_{0})}\,W(t)+\tfrac{1}{2}f_{0}^{\prime}(t_{0})g(t_{0})t^{2}-xg(t_{0})t\right]\leq 0\right\},\]
Figure 7: Coverage percentages for confidence intervals, based on (4.6), for \(n=100,500\) and \(1000\). (a) \(n=100\), (b) \(n=500\), (c) \(n=1000\).
where we use Lemma 4.1 in the last step. By Brownian scaling, we can write
\[\mathbb{P}\left\{\operatorname{argmin}_{t}\left[\sigma_{0}\sqrt{g(t_{0})}\,W(t)+\tfrac{1}{2}f_{0}^{\prime}(t_{0})g(t_{0})t^{2}-xg(t_{0})t\right]\leq 0\right\}\] \[=\,\mathbb{P}\left\{\left(\frac{4\sigma_{0}^{2}f_{0}^{\prime}(t_{0})}{g(t_{0})}\right)^{1/3}\operatorname{argmin}_{t}\big{[}W(t)+t^{2}\big{]}\leq-x\right\}\] \[=\,\mathbb{P}\left\{\left(\frac{4\sigma_{0}^{2}f_{0}^{\prime}(t_{0})}{g(t_{0})}\right)^{1/3}\operatorname{argmin}_{t}\big{[}W(t)+t^{2}\big{]}\geq x\right\}.\]
Corollary 4.1 shows that the limit distribution of \(n^{1/3}\big{\{}\hat{f}_{n}^{*}(t_{0})-\tilde{f}_{nh}(t_{0})\big{\}}\), along almost all sequences \(D_{n}\), is the same as the limit distribution of \(n^{1/3}\big{\{}\hat{f}_{n}(t_{0})-f_{0}(t_{0})\big{\}}\). In the latter case the limit distribution was first derived in [4].
Using this corollary, it is clear that with this method the (ordinary, not percentile) smoothed bootstrap simulations recreate the actual asymptotic distribution correctly, so that we do not have to use a correction for over- or undercoverage. It is also clear from Figure 7 that its behavior is much better than the behavior of the confidence intervals in the preceding section. Even for sample size \(n=100\) the confidence intervals are more or less "on target". In comparison, the credible intervals are still far off target for these sample sizes, and the overcoverage has still not set in; see Figure 8.
We can also prove the result below, illustrating the fact that there is no need for a correction for over- or undercoverage.
Figure 8: Coverage percentages for the confidence intervals, for (a) \(n=100\), (b) \(n=500\), (c) \(n=1000\).
**Theorem 4.2**.: _Let \(D_{n}=\{(X_{1},Y_{1}),\ldots,(X_{n},Y_{n})\}\) and \(h\asymp n^{-1/5}\), and let \(t_{0}\) and \(\tilde{f}_{nh}\) be defined as in Corollary 4.1. Let the conditions of Theorem 4.1 be satisfied. Then, for \(z\in(0,1)\), as \(n\to\infty\),_
\[\mathbb{P}\left\{\mathbb{P}\left(n^{1/3}\big{\{}\hat{f}_{n}^{*}(t_{0})-\tilde{ f}_{nh}(t_{0})+\hat{f}_{n}(t_{0})-f_{0}(t_{0})\big{\}}\leq 0\,\Big{|}\,D_{n} \right)\leq z\right\}\longrightarrow z. \tag{4.8}\]
Proof.: Let
\[U_{n}=n^{1/3}\big{\{}\hat{f}_{n}^{*}(t_{0})-\tilde{f}_{nh}(t_{0})\big{\}}, \qquad V_{n}=n^{1/3}\big{\{}\hat{f}_{n}(t_{0})-f_{0}(t_{0})\big{\}},\]
and let \(\Phi\) be the distribution function of \(Z\), defined in Corollary 4.1. We have
\[\mathbb{P}\left\{\mathbb{P}\left(n^{1/3}\big{\{}\hat{f}_{n}^{*}(t_{0})-\tilde{f}_{nh}(t_{0})+\hat{f}_{n}(t_{0})-f_{0}(t_{0})\big{\}}\leq 0\,\Big{|}\,D_{n}\right)\leq z\right\}\] \[=\mathbb{P}\left\{\mathbb{P}\{V_{n}\leq-U_{n}|D_{n}\}\leq z\right\}\] \[\sim\mathbb{P}\left\{\Phi(-U_{n})\leq z\right\}=\mathbb{P}\left\{-U_{n}\leq\Phi^{-1}(z)\right\}\longrightarrow\Phi\circ\Phi^{-1}(z)=z,\]
using the symmetry around zero of the distribution of \(Z\) (the limit in distribution of \(U_{n}\)).
Note that \(\hat{f}_{n}^{*}(t_{0})\) is now centered by \(\tilde{f}_{nh}(t_{0})\) instead of \(\hat{f}_{n}(t_{0})\), and that \(n^{1/3}\{\hat{f}_{n}^{*}(t_{0})-\tilde{f}_{nh}(t_{0})\}\) tends to the right limit distribution, in contrast with \(n^{1/3}\{\hat{f}_{n}^{*}(t_{0})-\hat{f}_{n}(t_{0})\}\). The histogram of estimates of \(\mathbb{P}\big{(}n^{1/3}\{\hat{f}_{n}^{*}(t_{0})-\tilde{f}_{nh}(t_{0})+\hat{f}_{n}(t_{0})-f_{0}(t_{0})\}\leq 0\,\big{|}\,D_{n}\big{)}\), based on relative frequencies, is shown in Figure 9.
For the ordinary (non-percentile) bootstrap we get by an entirely similar proof, in which we do not need the symmetry of the limit distribution:
**Theorem 4.3**.: _Let \(D_{n}=\{(X_{1},Y_{1}),\ldots,(X_{n},Y_{n})\}\) and \(h\asymp n^{-1/5}\). Let the conditions of Theorem 4.1 be satisfied. Then, for \(z\in(0,1)\), as \(n\to\infty\),_
\[\mathbb{P}\left\{\mathbb{P}\left(\hat{f}_{n}^{*}(t_{0})-\tilde{f}_{nh}(t_{0}) \leq\hat{f}_{n}(t_{0})-f_{0}(t_{0})\,\Big{|}\,D_{n}\right)\leq z\right\} \longrightarrow z. \tag{4.9}\]
The result gives an interesting consequence of what it means to say that the "bootstrap works". This phenomenon also occurs in the simple bootstrap setting where one resamples with replacement from samples \(U_{1},\ldots,U_{n}\) from a normal \(N(\mu,\sigma^{2})\) distribution with the aim to construct a confidence set for the mean. Then also, for all \(z\in(0,1)\),
\[\mathbb{P}\left\{\mathbb{P}\left(\bar{U}_{n}^{*}-\bar{U}_{n}\leq\bar{U}_{n}- \mu\,\Big{|}\,U_{1},\ldots U_{n}\right)\leq z\right\}\longrightarrow z. \tag{4.10}\]
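A quick numerical illustration of this uniformity (our addition, not one of the paper's simulations): resampling from a normal sample and recording the conditional bootstrap probabilities over replicated samples gives values that are approximately Uniform(0,1).

```python
import numpy as np

rng = np.random.default_rng(0)
n, B, reps, mu = 200, 500, 400, 1.0

probs = np.empty(reps)
for r in range(reps):
    u = rng.normal(mu, 1.0, size=n)                        # one sample U_1,...,U_n
    ubar = u.mean()
    ubar_star = rng.choice(u, size=(B, n), replace=True).mean(axis=1)
    # estimate of P( Ubar*_n - Ubar_n <= Ubar_n - mu | U_1,...,U_n )
    probs[r] = np.mean(ubar_star - ubar <= ubar - mu)

# if (4.10) holds these values are approximately Uniform(0,1):
print(np.quantile(probs, [0.1, 0.5, 0.9]))                 # roughly (0.1, 0.5, 0.9)
```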
## 5 Concluding remarks
We showed that the construction of pointwise credible intervals for the monotone regression function, as proposed in [5], has an interpretation as the construction of percentile bootstrap intervals. The overcoverage, as explained by Theorem 3.1, only sets in for very large sample sizes, like \(n=20,000\); for smaller sample sizes we have observed undercoverage.
Because the confidence intervals are based on piecewise constant estimates of the regression function, on intervals of equal length, the effect of bias is very pronounced; this does not hold for the confidence intervals based on the smoothed bootstrap in Section 4. The latter confidence intervals have the further advantages of being on target also for smaller sample sizes, and of not needing a correction for overcoverage or undercoverage, since the estimates are consistent.
The consistency is also borne out by Theorem 4.3, showing convergence to the uniform distribution of \(\mathbb{P}(\hat{f}_{n}^{*}(t_{0})-\tilde{f}_{nh}(t_{0})\leq\hat{f}_{n}(t_{0})-f_{0}(t_{0})|D_{n})\), \(D_{n}=\{(X_{1},Y_{1}),\ldots,(X_{n},Y_{n})\}\), for the smoothed bootstrap estimates \(\hat{f}_{n}^{*}\), in contrast with the situation for the credible intervals and the percentile bootstrap intervals, where we need a correction for convergence of \(\mathbb{P}(\hat{f}_{n}^{*}(t_{0})-f_{0}(t_{0})\leq 0\,\big{|}\,D_{n})\) to a distribution different from the uniform distribution.
As shown in [9], it is also possible to use the smoothed least squares estimator (SLSE) directly as the basis for confidence intervals. In this case, resampling is done from residuals w.r.t. an oversmoothed estimate of the regression function to treat the bias in the right way. The bias is in this case much more of a problem because the variance and squared bias of the SLSE are of the same order if the bandwidth is of order \(n^{-1/5}\). A picture of confidence intervals of this type is given in Figure 10. For more details, see [9].
All simulations in our paper can be recreated using the R scripts in [7].
## Appendix
### Derivation of raw posterior distribution (2.2)
Considering \(\mathbf{X}\) fixed, we have \(\mathbf{Y}=\mathbf{B}\mathbf{\theta}+\mathbf{\epsilon}\), where \(\mathbf{B}=(b_{ij})\) with \(b_{ij}=1_{I_{j}}(X_{i})\) giving \(\mathbf{Y}|\mathbf{\theta}\sim N_{n}(\mathbf{B}\mathbf{\theta},\sigma_{0}^{2}I_{n})\). Writing \(\mathbf{\Lambda}=\mathrm{Diag}(\lambda_{j}^{2})\), this can be combined with the prior \(\mathbf{\theta}\sim N_{J}(\mathbf{\zeta},\sigma_{0}^{2}\Lambda)\) yielding
\[f(\mathbf{\theta}|\mathbf{Y})\propto f(\mathbf{Y}|\mathbf{\theta})f(\mathbf{\theta})\propto\exp\left(-\frac{1}{2\sigma_{0}^{2}}\left[\mathbf{\theta}^{T}(\mathbf{\Lambda}^{-1}+\mathbf{B}^{\mathrm{T}}\mathbf{B})\mathbf{\theta}-2\,\mathbf{\theta}^{\mathrm{T}}(\mathbf{B}^{\mathrm{T}}\mathbf{Y}+\mathbf{\Lambda}^{-1}\mathbf{\zeta})\right]\right)\]
By 'completing the square', this function can be seen to be proportional to the normal density with covariance matrix
\[\sigma_{0}^{2}\left(\mathbf{\Lambda}^{-1}+\mathbf{B}^{T}\mathbf{B}\right)^{-1}=\sigma_{0 }^{2}\mathrm{Diag}(1/\lambda_{j}^{2}+n_{j})^{-1}=\mathrm{Diag}\left(\frac{ \sigma_{0}^{2}}{1/\lambda_{j}^{2}+n_{j}}\right)\]
where we use that both \(\mathbf{\Lambda}\) and \(\mathbf{B}^{T}\mathbf{B}\) are diagonal matrices with diagonal entries \(\lambda_{j}^{2}\) and \(n_{j}=\sum_{i}b_{ij}=\#\{i:x_{i}\in I_{j}\}\) respectively. The expectation is given by
\[\left(\mathbf{\Lambda}^{-1}+\mathbf{B}^{T}\mathbf{B}\right)^{-1}(\mathbf{B}^{T}\mathbf{Y}+\mathbf{ \Lambda}^{-1}\mathbf{\zeta})=\mathrm{Diag}\left(\frac{1}{1/\lambda_{j}^{2}+n_{j}} \right)\begin{pmatrix}\zeta_{1}/\lambda_{1}^{2}+\sum_{\{i:x_{i}\in I_{1}\}}y_{ i}\\ \vdots\\ \zeta_{J}/\lambda_{J}^{2}+\sum_{\{i:x_{i}\in I_{J}\}}y_{i}\end{pmatrix},\]
boiling down to the expression in (2.2).
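In code, this diagonal posterior can be written down directly; the following sketch (illustrative only, with the intervals \(I_{j}=((j-1)/J,j/J]\) hard-coded) returns the posterior means and variances of \(\theta_{1},\ldots,\theta_{J}\):

```python
import numpy as np

def posterior_theta(x, y, J, zeta, lam2, sigma0_sq):
    """Posterior of theta given Y in the binned normal model: independent
    N(mean_j, var_j) with
        mean_j = (zeta_j / lam2_j + sum_{x_i in I_j} y_i) / (1 / lam2_j + n_j),
        var_j  = sigma0^2 / (1 / lam2_j + n_j)."""
    x, y = np.asarray(x), np.asarray(y)
    bins = np.minimum((x * J).astype(int), J - 1)      # I_j = ((j-1)/J, j/J]
    n_j = np.bincount(bins, minlength=J)
    sum_j = np.bincount(bins, weights=y, minlength=J)
    prec = 1.0 / np.asarray(lam2) + n_j
    return (np.asarray(zeta) / np.asarray(lam2) + sum_j) / prec, sigma0_sq / prec
```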
### Derivation of (2.5)
First note that \(I+B\Lambda B^{T}\) is a block diagonal matrix, where block \(j\) has size \(n_{j}\times n_{j}\), diagonal elements \(1+\lambda_{j}^{2}\) and off-diagonal elements \(\lambda_{j}^{2}\), for \(1\leq j\leq J\). The \(j\)-th block can be written as
\[A_{j}=I_{n_{j}\times n_{j}}+\lambda_{j}^{2}\mathbf{1}_{n_{j}}\mathbf{1}_{n_{j} }^{T},\]
where \(I_{k\times k}\) is the \(k\times k\) identity matrix and \(\mathbf{1}_{k}\) is the column vector of length \(k\) with all elements equal to one.
This means that \((I+B\Lambda B^{T})^{-1}\) is also a block diagonal matrix, with \(j\)-th block
\[A_{j}^{-1}=I_{n_{j}\times n_{j}}-\frac{\lambda_{j}^{2}}{1+n_{j}\lambda_{j}^{2} }\mathbf{1}_{n_{j}}\mathbf{1}_{n_{j}}^{T},\]
by the Sherman-Morrison formula. For convenience, write \(z=y-B\zeta\) and denote by subscript \([j]\) the part of a vector corresponding to the \(j\)-th block (so \(i\) for which \(x_{i}\in I_{j}\); \(z_{[j]}\) has length \(n_{j}\)). Then
\[\left[(I+B\Lambda B^{T})^{-1}z\right]_{[j]}=A_{j}^{-1}z_{[j]}=z_{[j]}-\frac{ \lambda_{j}^{2}}{1+n_{j}\lambda_{j}^{2}}\mathbf{1}_{n_{j}}\mathbf{1}_{n_{j}}^ {T}z_{[j]}=z_{[j]}-\frac{\lambda_{j}^{2}n_{j}\bar{z}_{[j]}}{1+n_{j}\lambda_{j} ^{2}}\mathbf{1}_{n_{j}},\]
where \(\bar{z}_{[j]}\) denotes the average of the entries of \(z_{[j]}\). Therefore
\[z_{[j]}^{T}\left[(I+B\Lambda B^{T})^{-1}z\right]_{[j]}=z_{[j]}^{T}z_{[j]}- \frac{\lambda_{j}^{2}n_{j}^{2}\bar{z}_{[j]}^{2}}{1+n_{j}\lambda_{j}^{2}}.\]
Hence,
\[z^{T}(I+B\Lambda B^{T})^{-1}z=\sum_{j=1}^{J}z_{[j]}^{T}\left[(I+B\Lambda B^{T} )^{-1}z\right]_{[j]}=z^{T}z-\sum_{j=1}^{J}\frac{n_{j}\bar{z}_{[j]}^{2}}{1+1/( n_{j}\lambda_{j}^{2})}. \tag{5.1}\]
Figure 10: The SLSE (blue, solid) and \(95\%\) confidence intervals, using the theory in [9], for sample size \(n=100\) and \(f_{0}(x)=x^{2}+x/5\) (red dashed curve).
Now write \((1+1/(n_{j}\lambda_{j}^{2}))^{-1}=1-\delta_{j}\), so \(\delta_{j}=(1+n_{j}\lambda_{j}^{2})^{-1}\). Then (5.1) can be further rewritten as
\[z^{T}(I+B\Lambda B^{T})^{-1}z=z^{T}z-\sum_{j=1}^{J}(1-\delta_{j})n _{j}\bar{z}_{[j]}^{2}=z^{T}z-\sum_{j=1}^{J}n_{j}\bar{z}_{[j]}^{2}+\sum_{j=1}^{J }\delta_{j}n_{j}\bar{z}_{[j]}^{2}\] \[=\sum_{j=1}^{J}\sum_{x_{i}\in I_{j}}\left((y_{i}-\zeta_{j})^{2}-( \bar{y}_{j}-\zeta_{j})^{2}\right)+\sum_{j=1}^{J}\delta_{j}n_{j}(\bar{y}_{j}- \zeta_{j})^{2}\] \[=\sum_{j=1}^{J}\sum_{x_{i}\in I_{j}}\left(y_{i}^{2}-\bar{y}_{j}^{ 2}\right)+\sum_{j=1}^{J}\delta_{j}n_{j}(\bar{y}_{j}-\zeta_{j})^{2}=\sum_{j=1}^ {J}\sum_{x_{i}\in I_{j}}\left(y_{i}-\bar{y}_{j}\right)^{2}+\sum_{j=1}^{J} \delta_{j}n_{j}(\bar{y}_{j}-\zeta_{j})^{2}.\]
Substituting \(\delta_{j}=(1+n_{j}\lambda_{j}^{2})^{-1}\) and writing the first term as one sum, yields (2.5).
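The identity can also be checked numerically in a few lines (an illustrative sanity check, not part of the derivation):

```python
import numpy as np

rng = np.random.default_rng(1)
n, J = 60, 6
x = rng.uniform(0.0, 1.0, n)
y = rng.normal(x ** 2, 0.3)
zeta = np.sort(rng.normal(0.5, 0.2, J))
lam2 = rng.uniform(0.5, 2.0, J)

bins = np.minimum((x * J).astype(int), J - 1)
B = (bins[:, None] == np.arange(J)[None, :]).astype(float)   # b_ij = 1_{I_j}(x_i)
z = y - B @ zeta

lhs = z @ np.linalg.solve(np.eye(n) + B @ np.diag(lam2) @ B.T, z)

n_j = B.sum(axis=0)
ybar = np.divide(B.T @ y, n_j, out=np.zeros(J), where=n_j > 0)
rhs = np.sum((y - ybar[bins]) ** 2) + np.sum(n_j * (ybar - zeta) ** 2 / (1 + n_j * lam2))

print(np.allclose(lhs, rhs))                                  # True
```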
### Derivation of Empirical Bayes estimators (2.6) and (2.4)
The distribution of the observed \(\mathbf{Y}\) can be expressed in terms of the parameters \(\sigma_{0}^{2}\), \(\Lambda\) and \(\zeta\),
\[\mathbf{Y}=\mathbf{B}\mathbf{\theta}+\epsilon=\mathbf{B}(\mathbf{\zeta}+\sigma_{0}\tilde{\epsilon })+\sigma_{0}\epsilon=\mathbf{B}\zeta+\sigma_{0}(\mathbf{B}\tilde{\epsilon}+\epsilon),\]
where \(\epsilon\sim N_{n}(0,I_{n\times n})\) and \(\tilde{\epsilon}\sim N_{J}(0,\Lambda)\) are independent. Therefore,
\[\mathbf{Y}\sim N_{n}\left(\mathbf{B}\zeta,\sigma_{0}^{2}\left(I_{n\times n}+\mathbf{B} \mathbf{\Lambda}\mathbf{B}^{T}\right)\right).\]
Maximizing the likelihood in \(\zeta\), for fixed values of \(\sigma_{0}^{2}\) and \(\Lambda\), entails minimizing
\[(y-B\zeta)^{T}\left(I_{n\times n}+\mathbf{B}\mathbf{\Lambda}\mathbf{B}^{T}\right)^{-1}(y-B \zeta).\]
Recognizing (5.2) in this expression, it is clear that the empirical Bayes estimate of \(\zeta\) is either given by the vector \((\bar{y}_{1},\ldots,\bar{y}_{J})^{T}\) if the likelihood is maximized over \(\mathbb{R}^{J}\) or its isotonic regression with weights \(n_{j}\delta_{j}=n_{j}/(1+n_{j}\lambda_{j}^{2})\) if monotonicity is taken into account.
For any fixed value of \(\zeta\), maximizing the log likelihood of \(\sigma_{0}\) corresponds to minimizing
\[\frac{n}{2}\log\sigma_{0}^{2}+(y-B\zeta)^{T}\left(I_{n\times n}+\mathbf{B}\mathbf{ \Lambda}\mathbf{B}^{T}\right)^{-1}(y-B\zeta)/(2\sigma_{0}^{2}),\]
yielding (2.4).
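The following sketch (illustrative only) computes these empirical Bayes estimates for fixed \(\Lambda\): the unconstrained maximizer is \(\hat{\zeta}=(\bar{y}_{1},\ldots,\bar{y}_{J})\), and profiling out \(\sigma_{0}^{2}\) gives the minimized quadratic form divided by \(n\) (presumably the content of (2.4)); a monotone version would instead use the weighted isotonic regression of the bin means with weights \(n_{j}/(1+n_{j}\lambda_{j}^{2})\), e.g. via the PAVA routine sketched earlier.

```python
import numpy as np

def empirical_bayes(x, y, J, lam2):
    """Empirical Bayes estimates of (zeta, sigma_0^2) for fixed lambda_j^2,
    using the unconstrained maximizer zeta_hat = (ybar_1, ..., ybar_J)."""
    x, y = np.asarray(x), np.asarray(y)
    bins = np.minimum((x * J).astype(int), J - 1)
    n_j = np.bincount(bins, minlength=J)
    ybar = np.divide(np.bincount(bins, weights=y, minlength=J), n_j,
                     out=np.zeros(J), where=n_j > 0)
    # at zeta = ybar the second term of the quadratic form vanishes,
    # so the minimized quadratic form is the within-bin sum of squares
    q = np.sum((y - ybar[bins]) ** 2)
    sigma0_sq_hat = q / len(y)     # minimizer of (n/2) log s + q/(2s) in s = sigma^2
    return ybar, sigma0_sq_hat
```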
Proof of Lemma 3.3.: We employ a construction, used in the proof of Lemma 2.2 in [6]. Let \(A_{n}\) be the interval \([t_{0}-n^{-1/3}\log n,t_{0}+n^{-1/3}\log n]\) and let \((U_{1}^{*},V_{1}^{*}),(U_{2}^{*},V_{2}^{*}),\ldots\) be an i.i.d sequence of points, (discretely) uniformly distributed on the set of points \((X_{i},Y_{i})\) such that \(X_{i}\in A_{n}\).
Let \(M_{n}\) be the number of points \(X_{i}\in A_{n}\). The number of bootstrap draws such that the first component belongs to \(A_{n}\) has distribution
\[M_{n}^{*}\sim\text{Binom}(n,M_{n}/n), \tag{5.3}\]
so, taking the random variable \(M_{n}^{*}\) defined by (5.3), independent of the sequence \((U_{1}^{*},V_{1}^{*})\), \((U_{2}^{*},V_{2}^{*}),\ldots\), we can represent the bootstrap variables such that the first component belongs to \(A_{n}\) by
\[\sum_{i=1}^{M_{n}^{*}}\delta_{\{(U_{i}^{*},V_{i}^{*})\}},\]
where \(\delta_{x}\) denotes Dirac measure.
We can couple this process with a Poisson process
\[\sum_{i=1}^{N_{n}}\delta_{\{(U^{*}_{i},V^{*}_{i})\}},\]
where
\[N_{n}\sim\text{Poisson}\,(M_{n}),\]
independent of the \((U^{*}_{i},V^{*}_{i})\), using a Uniform(0,1) random variable \(U\) as in the construction in the proof of Lemma 2.2 in [6]. We find in this way:
\[\mathbb{P}\left\{\sum_{i=1}^{M^{*}_{n}}\delta_{\{(U^{*}_{i},V^{*}_{i})\}}\neq \sum_{i=1}^{N_{n}}\delta_{\{(U^{*}_{i},V^{*}_{i})\}}\right\}\leq 2M_{n}/n,\]
where \(M_{n}/n\) tends to zero almost surely, using an inequality from [20].
This means that we can replace \(M^{*}_{in}\) by \(N_{in}\) in (3.12), where the \(N_{in}\) are independent Poisson\((1)\) random variables and can replace \(\widetilde{W}^{*}_{n}\) by its Poissonized version
\[W^{(P)}_{n}(t)=n^{-1/3}\sum_{j:j/J\in[0,t_{0}+n^{-1/3}t]}\sum_{X_{i}\in I_{j}}(N_{in}-1)(Y_{i}-a_{0}-n^{-1/3}x) \tag{5.4}\] \[-n^{-1/3}\sum_{j:j/J\in[0,t_{0}]}\sum_{X_{i}\in I_{j}}(N_{in}-1)(Y_{i}-a_{0}-n^{-1/3}x). \tag{5.5}\]
For the latter process we have the martingale structure again, and the quadratic variation process \([W^{(P)}_{n}](t)\), \(t\geq 0\) satisfies
\[\left[W^{(P)}_{n}\right](t)=n^{-2/3}\sum_{j:j/J\in(t_{0},t_{0}+n^{ -1/3}t]}\Bigl{\{}\sum_{X_{i}\in I_{j}}(N_{in}-1)(Y_{i}-a_{0}-n^{-1/3}x)\Bigr{\}} ^{2}\] \[\longrightarrow\sigma_{0}^{2}g(t_{0})t.\]
for almost all sequences \((X_{1},Y_{1}),\ldots\). A similar relation holds for \(t<0\). So the result follows in the same way as in the proof of Lemma 3.1.
Proof of Lemma 4.1.: We consider the case \(t\geq 0\). It is clear that, conditionally on \((X_{1},Y_{1}),\ldots,(X_{n},Y_{n})\), \(t\mapsto\widetilde{W}^{*}_{n}(t)\) is a martingale with respect to the family of \(\sigma\)-algebras \(\mathscr{F}^{*}_{n,t}\), \(t\geq 0\), defined by:
\[\mathscr{F}^{*}_{n,t}=\sigma\left\{(X_{i},E^{*}_{i}):X_{i}\in[t_{0},t_{0}+n^{-1/3}t]\right\}.\qquad t\geq 0.\]
The quadratic variation process is given by:
\[\left[\widetilde{W}^{*}_{n}\right](t)=n^{-2/3}\sum_{i:X_{i}\in[t_{0},t_{0}+n^{ -1/3}t]}(E^{*}_{i})^{2}\]
We have:
\[\mathbb{E}\left\{\left[\widetilde{W}_{n}^{*}\right](t)\Bigm{|}(X_{1},Y_{1}),\ldots,(X_{n},Y_{n})\right\}\] \[=n^{-2/3}\sum_{i:X_{i}\in[t_{0},t_{0}+n^{-1/3}t]}\mathbb{E}\left\{(E_{i}^{*})^{2}\Bigm{|}(X_{1},Y_{1}),\ldots,(X_{n},Y_{n})\right\}\] \[=n^{-2/3}\sum_{i:X_{i}\in[t_{0},t_{0}+n^{-1/3}t]}n^{-1}\sum_{j=1}^{n}\tilde{E}_{j}^{2}\stackrel{{ a.s.}}{{\longrightarrow}}\sigma_{0}^{2}g(t_{0})t.\]
Note that
\[\big{\{}Y_{j}-\tilde{f}_{nh}(X_{j})\big{\}}^{2}\] \[=\big{\{}Y_{j}-f_{0}(X_{j})\big{\}}^{2}+\big{\{}f_{0}(X_{j})- \tilde{f}_{nh}(X_{j})\big{\}}^{2}+2\big{\{}Y_{j}-f_{0}(X_{j})\big{\}}\big{\{}f _{0}(X_{j})-\tilde{f}_{nh}(X_{j})\big{\}},\]
and that therefore
\[n^{-1}\sum_{j=1}^{n}\tilde{E}_{j}^{2}\sim n^{-1}\sum_{j=1}^{n} \big{\{}Y_{j}-f_{0}(X_{j})\big{\}}^{2}\]
almost surely, as \(n\to\infty\), by the properties of \(\tilde{f}_{nh}\).
We can treat the case \(t<0\) in a similar way. This means that we can apply Rebolledo's theorem, see Theorem 3.6, p. 68 of [8] and [15]. The conclusion of the lemma now follows.
|
2305.00173 | A Novel Rotated Constellation-based LLR Detector for Flexible NOMA based
OFDM-IM Scheme | OFDM-IM NOMA is a newly created flexible scheme for future generation
communication systems. For the downlink OFDM-IM NOMA system, a low-complexity
"rotated constellation based log likelihood ratio (LLR) detector" has been
proposed in this work. This detector is able to significantly reduce the
complexity by employing the rotating constellation-based concept and the
log-likelihood ratio-based algorithm together. Complexity analysis and
simulation results show that the proposed detector achieves significantly lower
computational complexity and much better error performance than the earlier
introduced detectors under different scenarios for the OFDM-IM NOMA scheme. | Subham Sabud, Preetam Kumar | 2023-04-29T05:05:21Z | http://arxiv.org/abs/2305.00173v1 | # A Novel Rotated Constellation-based LLR Detector for Flexible NOMA based OFDM-IM Scheme
###### Abstract
OFDM-IM NOMA is a newly created flexible scheme for future generation communication systems. For the downlink OFDM-IM NOMA system, a low-complexity "rotated constellation based log likelihood ratio (LLR) detector" has been proposed in this work. This detector is able to significantly reduce the complexity by employing the rotated constellation-based concept and the log-likelihood ratio-based algorithm together. Complexity analysis and simulation results show that the proposed detector achieves significantly lower computational complexity and much better error performance than the earlier introduced detectors under different scenarios for the OFDM-IM NOMA scheme.
Maximum Likelihood (ML) detector, Log Likelihood Ratio (LLR) detector, OFDM, NOMA, Index Modulation (IM).
## I Introduction
Non-orthogonal multiple access (NOMA) has established itself as a potential candidate for addressing the high connectivity density environment and massive data load in 5G, 6G, and beyond wireless communication networks. In power-domain NOMA [1], the same time and frequency resources are shared by several users, who are separated by various power levels according to their channel condition. NOMA can provide high spectral efficiency, low latency, massive connectivity and fairness.
On the other hand, orthogonal frequency division multiplexing (OFDM) is a popular multicarrier transmission technique that has the ability to successfully counter the inter-symbol interference (ISI) brought on by a frequency-selective channel. Using the spatial modulation (SM) concept on OFDM, different techniques have been proposed in [2, 3, 4]. Among them, index modulated OFDM (OFDM-IM) [4] has proven to be the most promising one. Unlike classical OFDM, in OFDM-IM the M-ary signal constellation as well as the subcarrier indices carry information. As a result, OFDM-IM has better spectral efficiency, error performance, and energy efficiency than traditional OFDM. Using the IM and SM paradigm in conjunction with cooperative NOMA, several excellent studies have been published. An OFDM-IM system built on co-operative NOMA (C-NOMA) is suggested in [5] and is known as CIM-NOMA. The CDM-NOMA technique, which is basically a co-operative NOMA based dual mode OFDM-IM scheme with superior error performance to the generalised CIM-NOMA (GCIM-NOMA), is introduced in [6]. [7] illustrates the use of a novel technology named C-NOMA-GQSM in multi-vehicle networks. A new energy-efficient, spectrally-efficient, and flexible transmission method called OFDM-IM NOMA [8] has recently been proposed by merging the OFDM-IM and NOMA techniques, where OFDM-IM provides flexibility with a tunable subcarrier activation ratio and NOMA provides flexibility with a tunable power allocation coefficient. [9] discusses a rather similar technique called IM-NOMA, which differs from the previously proposed NOMA-MCIK [10] and Hybrid IM-NOMA [11] schemes. In conventional NOMA-OFDM, all the available subcarriers are in use for both users, whereas in OFDM-IM NOMA there is the flexibility of activating the subcarriers partially based on the user's needs. In spite of having all the benefits listed above, the OFDM-IM NOMA [8] system's ML-based detection approach suffers from very high computational complexity. In [12], a low-complexity LLR-based detection algorithm is suggested, which is applicable to a more generalized IM-MA scheme. In [13], the IM-NOMA [9] system uses a constellation rotation-based low-complexity SIC detection technique, and recently a low-complexity two-stage LLR detector has been introduced in [14] for the OFDM-IM NOMA [8] system.
In this letter, a novel "rotated constellation based LLR detector" is proposed to address the issue of high detection complexity in the downlink OFDM-IM based flexible NOMA scheme. This proposed detector substantially reduces the complexity by combining the constellation rotation-based concept and the log-likelihood ratio-based algorithm together. Additionally, this detector provides an improved error performance. In this system model, User A's and User B's symbols are taken from a \(\pi/2\)-rotated and a \(0\)-rotated constellation, respectively. Therefore, there will not be any interference between user A's and user B's transmitted signals. As a result, both user A and user B can directly decode their own signals from the superimposed signal without performing the successive interference cancellation (SIC) process. On the other hand, at the receiver side, the purpose of determining the first stage log-likelihood ratio is to identify the active indices, while the purpose of the second stage log-likelihood ratio is to decode the symbols corresponding to the identified active indices.
## II OFDM-IM NOMA with rotated constellation based ML detector
This section explains the transceiver architecture of downlink OFDM-IM NOMA employing the rotated constellation based ML detector. A total of \(m_{\beta}\) information bits pass through the transmitter for user \(\beta\). Here \(\beta\in\{A,B\}\), where user A is the near user and user B is the far user.
\(m_{\beta}\) bits are split into \(G_{\beta}\) groups, each of which carries \(p_{\beta}\) bits, so \(p_{\beta}=m_{\beta}/G_{\beta}\). In order to build an active indices set \(I_{\beta}(g)=\left\{i_{\beta,1}(g),\ldots,i_{\beta,k_{\beta}}(g)\right\}\) for the \(g^{th}\) subblock and user \(\beta\), the first \(p_{\beta,1}\) bits among the \(p_{\beta}\) bits utilize a look-up table approach [4] to choose the \(k_{\beta}\) active indices from the \(n_{\beta}\) available indices. The remaining \(p_{\beta,2}\) bits are mapped to the \(k_{\beta}\) active indices' corresponding \(M_{\beta}\)-ary modulated symbols. Here \(g=1,\ldots,G_{\beta}\). In this constellation rotation-based system model, the near user (A) and the far user (B) take symbols from a \(\pi/2\)-rotated constellation and a \(0\)-rotated constellation, respectively. For the \(g^{th}\) subblock of user A, the vector of modulated symbols is presented as \(\mathbf{s}_{A}(g)=[S_{A,1}(g),\ldots,S_{A,k_{A}}(g)]\), where \(S_{A,\tau}(g)\in e^{j\pi/2}\,Q_{M_{A}}=Q^{\prime}_{M_{A}}\) and \(\tau=1,\ldots,k_{A}\). For the \(g^{th}\) subblock of user B, the vector of modulated symbols is presented as \(\mathbf{s}_{B}(g)=[S_{B,1}(g),\ldots,S_{B,k_{B}}(g)]\), where \(S_{B,\iota}(g)\in Q_{M_{B}}=Q^{\prime}_{M_{B}}\) and \(\iota=1,\ldots,k_{B}\). Here \(Q^{\prime}_{M_{A}}=\{q^{A}_{0},\ldots,q^{A}_{M_{A}-1}\}\) and \(Q^{\prime}_{M_{B}}=\{q^{B}_{0},\ldots,q^{B}_{M_{B}-1}\}\) represent the \(\pi/2\)-rotated constellation corresponding to user A and the \(0\)-rotated constellation corresponding to user B, respectively. Therefore, each subblock fetches a total of \(p_{\beta}=p_{\beta,1}+p_{\beta,2}=\left\lfloor\log_{2}\left(C\left(n_{\beta},k_{\beta}\right)\right)\right\rfloor+k_{\beta}\log_{2}\left(M_{\beta}\right)\) bits of information. The main OFDM-IM block for user \(\beta\) is produced as \(\mathbf{x}_{\beta}=[x_{\beta}(1)\ldots x_{\beta}(N)]^{T}\in\mathbb{C}^{N\times 1}\), taking into account \(I_{\beta}(g)\) and \(\mathbf{s}_{\beta}(g)\) for all the subblocks. Here \(N=n_{\beta}G_{\beta}\) represents the overall number of subcarriers. The superimposed signal in the frequency domain is obtained as \(\mathbf{x}_{sc}=\sqrt{\gamma P_{T}}\mathbf{x}_{A}+\sqrt{(1-\gamma)P_{T}}\mathbf{x}_{B}\), where \(\gamma\) and \(P_{T}\) are the power allocation factor and the total transmit power per subcarrier, respectively. Per subcarrier, the average power assigned to user A and user B is \(P_{A}=\gamma P_{T}\) and \(P_{B}=(1-\gamma)P_{T}\), respectively. After applying the inverse fast Fourier transform (IFFT) and cyclic prefix (CP) addition operations on \(\mathbf{x}_{sc}\), the resultant signal is transmitted through a frequency-selective Rayleigh fading channel. The time domain portrayal of the channel impulse response coefficient vector for user \(\beta\) is \(\mathbf{h}_{T,\beta}=\left[h_{T,\beta}(1)\ldots h_{T,\beta}(L)\right]^{T}\), where \(h_{T,\beta}(\rho),\rho=1,\ldots,L\) are circularly symmetric and obey the \(\mathcal{CN}\left(0,\frac{1}{L}\right)\) distribution.
Following the CP removal and fast Fourier transform (FFT) operations at the receiver side, the signal received for user \(\beta\) in the frequency domain can be represented as \(\mathbf{y}_{\beta}=\mathbf{x}_{sc}\operatorname{diag}\left(\mathbf{h}_{F,\beta}\right)+\mathbf{w}_{\beta}=\left[y_{\beta}(1)\ldots y_{\beta}(N)\right]^{T}\). Here, \(\mathbf{h}_{F,\beta}=\text{FFT}\left\{\mathbf{h}_{T,\beta}\right\}=\left[h_{F,\beta}(1)\ldots h_{F,\beta}(N)\right]^{T}\) is the channel vector in the frequency domain following the \(\mathcal{CN}\left(0,\varrho_{\beta}^{2}\right)\) distribution and \(\mathbf{w}_{\beta}=\left[w_{\beta}(1)\ldots w_{\beta}(N)\right]^{T}\) is the noise vector that obeys the \(\mathcal{CN}\left(0,\sigma_{N_{F}}^{2}\right)\) distribution. Here \(\varrho_{\beta}^{2}\) represents the channel gain for user \(\beta\), and due to the distance factor it is assumed that \(\varrho_{A}^{2}\geq\varrho_{B}^{2}\). As a result, \(P_{B}>P_{A}\). Due to the constellation rotation based system model, the transmitted signals of user A and user B will not interfere with each other. Therefore, both user A and user B can directly decode their own signal from the superimposed signal. \(E_{\beta}=\left\{\mathbf{e}_{\beta,1},\mathbf{e}_{\beta,2},\ldots,\mathbf{e}_{\beta,2^{p_{\beta}}}\right\}\) represents the set containing all feasible subblock realizations for performing the rotated constellation based ML detection for user \(\beta\).
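As a rough illustration of this transmitter (not taken from [8]; the look-up table below is one arbitrary choice of the \(2^{p_{\beta,1}}=4\) index patterns for \(n_{\beta}=4,k_{\beta}=2\), and BPSK is assumed for both users), a single superimposed subblock could be generated as follows.

```python
import numpy as np

# illustrative look-up table: 4 of the C(4,2) = 6 patterns of k = 2 active indices out of n = 4
LUT = [(0, 1), (2, 3), (0, 2), (1, 3)]
BPSK = np.array([1.0, -1.0])                    # Q_M for M = 2

def ofdm_im_subblock(index_bits, symbol_bits, rotation, n=4):
    """Map p1 index bits and p2 symbol bits to one length-n OFDM-IM subblock."""
    x = np.zeros(n, dtype=complex)
    active = LUT[int("".join(map(str, index_bits)), 2)]
    x[list(active)] = BPSK[np.asarray(symbol_bits)] * np.exp(1j * rotation)
    return x

def superimpose(bits_a, bits_b, gamma, p_total):
    """Superimposed subblock x_sc: user A on the pi/2-rotated constellation."""
    x_a = ofdm_im_subblock(bits_a[:2], bits_a[2:], rotation=np.pi / 2)
    x_b = ofdm_im_subblock(bits_b[:2], bits_b[2:], rotation=0.0)
    return np.sqrt(gamma * p_total) * x_a + np.sqrt((1 - gamma) * p_total) * x_b

# example: p_beta = 2 + 2 = 4 bits per user per subblock
x_sc = superimpose([0, 1, 1, 0], [1, 0, 0, 1], gamma=0.1, p_total=1.0)
```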
The \(g^{th}\) subblock of the user \(\beta\)'s signal is directly decoded using the "rotated constellation based ML detector" as follows.
\[\widehat{\mathbf{x}}_{\beta}^{g}=\operatorname*{arg\,min}_{\mathbf{e}_{\beta}\in E_{ \beta}}\left\|\mathbf{y}_{\beta}^{g}-\sqrt{P_{\beta}}diag(\mathbf{h}_{F,\beta}^{g}) \mathbf{e}_{\beta}\right\|^{2}, \tag{1}\]
where \(\mathbf{h}_{F,\beta}^{g}=\left[h_{F,\beta}\left(n_{\beta}(g-1)+1\right)\ldots h_{F,\beta}\left(n_{\beta}g\right)\right]^{T}\in\mathbb{C}^{n_{\beta}\times 1}\) and \(\mathbf{y}_{\beta}^{g}=\left[y_{\beta}\left(n_{\beta}(g-1)+1\right)\ldots y_{\beta}\left(n_{\beta}g\right)\right]^{T}\in\mathbb{C}^{n_{\beta}\times 1}\) are the channel vector and the received signal vector for the \(g^{th}\) subblock of user \(\beta\), respectively. Here \(\beta\in\{A,B\}\).
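A brute-force transcription of (1) (illustrative only; it reuses the same arbitrary look-up table and BPSK constellations as in the sketch above) simply enumerates all subblock realizations \(\mathbf{e}_{\beta}\in E_{\beta}\) and minimizes the Euclidean metric.

```python
import numpy as np
from itertools import product

def all_realizations(lut, constellation, n=4, k=2):
    """Enumerate E_beta: every active-index pattern combined with every symbol tuple."""
    realizations = []
    for active in lut:
        for syms in product(constellation, repeat=k):
            e = np.zeros(n, dtype=complex)
            e[list(active)] = syms
            realizations.append(e)
    return realizations

def ml_detect_subblock(y_g, h_g, p_beta, realizations):
    """Rotated-constellation ML detection (1) for one subblock."""
    metrics = [np.sum(np.abs(y_g - np.sqrt(p_beta) * h_g * e) ** 2) for e in realizations]
    return realizations[int(np.argmin(metrics))]

# example: E_A for user A (pi/2-rotated BPSK) with the illustrative look-up table
LUT = [(0, 1), (2, 3), (0, 2), (1, 3)]
E_A = all_realizations(LUT, np.array([1.0, -1.0]) * np.exp(1j * np.pi / 2))
```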
## III Proposed Detector
In this section, the proposed "rotated constellation-based LLR detector" has been discussed.
In this system model, the symbols of user A and user B are taken from a \(\pi/2\)- rotated and a \(0\)- rotated constellation, respectively. Therefore, user A's and user B's transmitted signals will not interfere with each other. As a result, both user A and user B can decode directly their signals from the superimposed signal. For direct decoding of user \(\beta\)'s signal, the "constellation rotation-based LLR detector" is employed as follows.
#### Iii-1 First stage
Identifying the active subcarrier indices is the receiver's first stage objective. To accomplish this, it first calculates a log-likelihood ratio for each subcarrier index, taking into account that the symbols in the frequency domain can have either non-zero or zero values. For index \(\delta\), the first stage log-likelihood ratio can be expressed as follows:
\[\lambda_{stage1,\beta}(\delta)=\ln\frac{\sum_{\mu=0}^{M_{\beta}-1}P(x_{\beta}( \delta)=q_{\mu}^{\beta}\left|\ y_{\beta}(\delta)\right)}{P(x_{\beta}(\delta)=0 \ |\ y_{\beta}(\delta))}. \tag{2}\]
Further (2) can be written [4] as follows by applying the Bayes' formula:
\[\lambda_{stage1,\beta}(\delta)=\ln(k_{\beta})-\ln(n_{\beta}-k_{\beta})+\frac{\left|y_{\beta}(\delta)\right|^{2}}{\sigma_{N_{F}}^{2}}\\ +\ln\left(\sum_{\mu=0}^{M_{\beta}-1}\exp\left(\frac{-\left|y_{\beta}(\delta)-\sqrt{P_{\beta}}h_{F,\beta}(\delta)q_{\mu}^{\beta}\right|^{2}}{\sigma_{N_{F}}^{2}}\right)\right), \tag{3}\]
where \(q_{\mu}^{\beta}\in Q^{\prime}_{M_{\beta}}\) and \(\delta=1,\ldots,N\). Here \(\beta\in\{A,B\}\).
(3) can be simplified [4] as follows for the BPSK modulation:
\[\lambda_{stage1,\beta}(\delta)=\max(c_{\beta},d_{\beta})+\frac{\left|y_{ \beta}(\delta)\right|^{2}}{\sigma_{N_{F}}^{2}}\\ +\ln\left(1+\exp\left(-\left|d_{\beta}-c_{\beta}\right|\right) \right), \tag{4}\]
where \(c_{\beta}=-\left|y_{\beta}(\delta)-\sqrt{P_{\beta}}h_{F,\beta}(\delta)q_{1}^{\beta}\right|^{2}/\sigma_{N_{F}}^{2}\) and \(d_{\beta}=-\left|y_{\beta}(\delta)-\sqrt{P_{\beta}}h_{F,\beta}(\delta)q_{0}^{\beta}\right|^{2}/\sigma_{N_{F}}^{2}\). From (4), \(N\) LLR values are determined. Then, for each subblock \(g\), we compute a total of \(R_{\beta}\) LLR sums for the \(R_{\beta}\) sets of feasible active indices combinations provided by the associated look-up table, where \(R_{\beta}=2^{p_{\beta,1}}\). The set containing all feasible active indices combinations for the \(g\)th subblock is denoted as \(\psi_{g,\beta}=\left\{I_{g,\beta}^{1},\ldots,I_{g,\beta}^{R_{\beta}}\right\}\), where \(I_{g,\beta}^{\nu}=\left\{i_{g,\beta,1}^{\nu},\ldots,i_{g,\beta,k_{\beta}}^{\nu}\right\}\) with \(i_{g,\beta,\zeta}^{\nu}\in\{1,\ldots,n_{\beta}\}\).
\(J_{g,\beta}^{\nu}=\sum_{\zeta=1}^{k_{\beta}}\lambda_{stage1,\beta}\left(n_{\beta}(g-1)+i_{g,\beta,\zeta}^{\nu}\right)\) computes the LLR sum that corresponds to the \(\nu\)th active subcarrier indices set, where \(g=1,\ldots,G_{\beta}\) and \(\nu=1,\ldots,R_{\beta}\). Thus, for each subblock \(g\), a total of \(R_{\beta}\) LLR sums are obtained. The receiver selects the active indices set with the highest LLR sum among them. That is, \(\widehat{\nu}_{g}=\operatorname*{arg\,max}_{\nu}J_{g,\beta}^{\nu}\) and the detected active indices set is \(I_{g,\beta}^{\widehat{\nu}_{g}}\).
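The first stage can be transcribed directly from (3) together with the LLR-sum selection rule (an illustrative sketch; here \(\sigma^{2}\) stands for the noise variance \(\sigma_{N_{F}}^{2}\) and the candidate index sets are assumed to come from the user's look-up table):

```python
import numpy as np

def stage1_llr(y, h, p_beta, constellation, n_b, k_b, sigma2):
    """First-stage LLR (3) for every subcarrier index of one subblock."""
    llr = np.empty(len(y))
    for d, (yd, hd) in enumerate(zip(y, h)):
        dists = -np.abs(yd - np.sqrt(p_beta) * hd * constellation) ** 2 / sigma2
        m = dists.max()
        llr[d] = (np.log(k_b) - np.log(n_b - k_b) + np.abs(yd) ** 2 / sigma2
                  + m + np.log(np.sum(np.exp(dists - m))))      # stable log-sum-exp
    return llr

def detect_active_set(llr, candidate_sets):
    """Choose the candidate active-index set with the largest LLR sum."""
    sums = [llr[list(s)].sum() for s in candidate_sets]
    return candidate_sets[int(np.argmax(sums))]
```

In the notation above, `detect_active_set` returns the set \(I_{g,\beta}^{\widehat{\nu}_{g}}\) that is passed on to the second stage.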
#### Iii-B2 Second stage
Finding the constellation symbols corresponding to the detected active subcarrier indices set is the receiver's second stage objective. In order to do that, it calculates a log-likelihood ratio for each subcarrier index, taking into account that the frequency domain symbols can have a value of either \(q_{0}^{\beta}\) or \(q_{1}^{\beta}\), under the assumption that the modulation type is BPSK. For index \(\delta\), this second stage log-likelihood ratio can be computed as follows.
\[\lambda_{stage2,\beta}(\delta)=\ln\frac{P\left(x_{\beta}(\delta)=q_{0}^{\beta} \left|\,y_{\beta}(\delta)\right.\right)}{P\left(x_{\beta}(\delta)=q_{1}^{\beta }\left|\,y_{\beta}(\delta)\right.\right)}. \tag{5}\]
Applying Bayes' formula, (5) is expressed as:
\[\lambda_{stage2,\beta}(\delta)=d_{\beta}-c_{\beta},\quad\delta=1,\ldots,N. \tag{6}\]
The second stage receives knowledge from the first stage regarding the detected active indices set. For the \(g\)th subblock this set is denoted as \(I_{g,\beta}^{\widehat{\nu}_{g}}=\left\{i_{g,\beta,1}^{\widehat{\nu}_{g}},\ldots,i_{g,\beta,k_{\beta}}^{\widehat{\nu}_{g}}\right\}\), where \(i_{g,\beta,\zeta}^{\widehat{\nu}_{g}}\in\{1,\ldots,n_{\beta}\}\) for \(\zeta=1,\ldots,k_{\beta}\) and \(\widehat{\nu}_{g}\in\{1,\ldots,R_{\beta}\}\) for \(g=1,\ldots,G_{\beta}\). The second stage symbol decoding algorithm is as follows:
```
1:for\(\beta=A,B\)do
2:for\(g=1\) to \(G_{\beta}\)do
3:for\(\zeta=1\) to \(k_{\beta}\)do
4:if\(\lambda_{stage2,\beta}^{g}\left(i_{g,\beta,\zeta}^{\widehat{\nu}_{g}}\right) \geq 0\)then
5: decoded symbol is \(q_{0}^{\beta}\) i.e. \(\overrightarrow{x}_{\beta}^{g}\left(i_{g,\beta,\zeta}^{\widehat{\nu}_{g}} \right)=q_{0}^{\beta}\);
6:elseif\(\lambda_{stage2,\beta}^{g}\left(i_{g,\beta,\zeta}^{\widehat{\nu}_{g}}\right)<0\)then
7: decoded symbol is \(q_{1}^{\beta}\) i.e. \(\overrightarrow{x}_{\beta}^{g}\left(i_{g,\beta,\zeta}^{\widehat{\nu}_{g}} \right)=q_{1}^{\beta}\);
8:endif
9:endfor
10:endfor
11:endfor
```
**Algorithm 1** Second stage decoding algorithm
Here \(\lambda_{stage2,\beta}^{g}\left(i_{g,\beta,\zeta}^{\widehat{\nu}_{g}}\right)=\lambda_{stage2,\beta}\left(n_{\beta}(g-1)+i_{g,\beta,\zeta}^{\widehat{\nu}_{g}}\right)\). Following the preceding steps, the decoded \(g\)th subblock of user \(\beta\)'s signal is formed as \(\overrightarrow{x}_{\beta}^{g}=\left[\overrightarrow{x}_{\beta}^{g}(1)\ldots\overrightarrow{x}_{\beta}^{g}(n_{\beta})\right]^{T}\), where \(\overrightarrow{x}_{\beta}^{g}(\varphi)\in\left\{0,q_{0}^{\beta},q_{1}^{\beta}\right\}\) for \(\varphi=1,\ldots,n_{\beta}\).
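For BPSK the second stage thus reduces to a sign test on \(\lambda_{stage2,\beta}=d_{\beta}-c_{\beta}\); a minimal sketch under the same illustrative assumptions as above:

```python
import numpy as np

def stage2_decode(y, h, p_beta, q0, q1, sigma2, active_set):
    """Second-stage decoding: for each detected active index decide between
    q0 and q1 from the sign of lambda_stage2 = d - c, as in Algorithm 1."""
    x_hat = np.zeros(len(y), dtype=complex)
    for idx in active_set:
        c = -np.abs(y[idx] - np.sqrt(p_beta) * h[idx] * q1) ** 2 / sigma2
        d = -np.abs(y[idx] - np.sqrt(p_beta) * h[idx] * q0) ** 2 / sigma2
        x_hat[idx] = q0 if d - c >= 0 else q1
    return x_hat
```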
## IV Performance analysis
In this section, the computational complexity of the proposed detector is investigated at both user A and user B, and it is compared with the existing detectors in the literature. Computational complexity is quantified via the number of complex multiplications.
At user B, the computational complexity of the "ML detector" derived from [8, eq.(6)] is \(\sim\mathcal{O}\left(k_{B}R_{B}\left(M_{B}\right)^{k_{B}}\right)\) per subblock, and the computational complexity of the "rotated constellation based ML detector" derived from (1) is \(\sim\mathcal{O}\left(k_{B}R_{B}\left(M_{B}\right)^{k_{B}}\right)\) per subblock. That is, the "ML detector" and the "rotated constellation based ML detector" have the same computational complexity at user B. On the other hand, The computational complexity of the "two-stage LLR detector" obtained from [14, eq.(5)] is \(\sim\mathcal{O}\left(n_{B}M_{B}\right)\) per subblock, and the computational complexity of the "proposed detector" obtained from (3) is \(\sim\mathcal{O}\left(n_{B}M_{B}\right)\) per subblock. In other words, the computational complexity of the "two-stage LLR detector" and the "proposed detector" comes out to be the same at user B. Note that, for user B, \(\beta=B\) in (1) and (3).
At user A, the total computational complexity of the "ML detector" computed from [8, eq.(3)] and [8, eq.(5)] is \(\sim\mathcal{O}\left(k_{B}R_{B}\left(M_{B}\right)^{k_{B}}+k_{A}R_{A}\left(M_{A} \right)^{k_{A}}\right)\) per subblock. While the computational complexity of the "rotated constellation-based ML detector" calculated from (1) is \(\sim\mathcal{O}\left(k_{A}R_{A}\left(M_{A}\right)^{k_{A}}\right)\) per subblock. On the other hand, at user A, the total computational complexity of the "two-stage LLR detector" determined from [14, eq.(10)] and [14, eq.(15)] is \(\sim\mathcal{O}\left(n_{B}M_{B}+n_{A}M_{A}\right)\) per subblock. while the computational complexity of the "proposed detector" derived from (3) is \(\sim\mathcal{O}\left(n_{A}M_{A}\right)\) per subblock. Note that, for user A, \(\beta=A\) in (1) and (3).
From the above analysis, it can be seen that the proposed detector has significantly less computational complexity than the conventional ML detector. At user B, the reason for the much reduced complexity of the proposed detector is the log-likelihood ratio (LLR) based algorithm used in it. At user A, on the other hand, there are two reasons for the complexity reduction of the proposed detector: first, the use of the LLR-based algorithm; second, due to the use of the rotated constellation-based concept, the receiver can directly decode user A's signal without needing to go through the SIC process at user A. For better understanding, the following TABLE I, TABLE II, and TABLE III show the complexity reduction of different detectors as compared to the conventional ML detector in various scenarios.
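The percentage reductions quoted in the tables follow directly from these complexity orders; the small sketch below (illustrative, evaluated for the symmetric mid-data-rate configuration of TABLE I) makes the arithmetic explicit.

```python
import math

def R(n, k):
    """Number of index patterns: 2^(floor(log2 C(n, k)))."""
    return 2 ** int(math.floor(math.log2(math.comb(n, k))))

def reductions(n, k_a, k_b, M):
    ml_b = k_b * R(n, k_b) * M ** k_b               # ML at user B
    llr_b = n * M                                   # two-stage LLR / proposed at user B
    ml_a = ml_b + k_a * R(n, k_a) * M ** k_a        # conventional ML at user A (with SIC)
    rot_ml_a = k_a * R(n, k_a) * M ** k_a           # rotated-constellation ML at user A
    two_stage_a = 2 * n * M                         # two-stage LLR at user A
    prop_a = n * M                                  # proposed detector at user A
    return {
        "LLR-based at B": 1 - llr_b / ml_b,
        "rotated ML at A": 1 - rot_ml_a / ml_a,
        "two-stage LLR at A": 1 - two_stage_a / ml_a,
        "proposed at A": 1 - prop_a / ml_a,
    }

print(reductions(4, 2, 2, 2))   # 0.75, 0.5, 0.75, 0.875 for the TABLE I configuration
```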
## V Simulation results
This section presents the simulation results of the error performance of the proposed detector in presence of the Rayleigh fading channel and compares it with the error performance of
\begin{table}
\begin{tabular}{|c|c|c|} \hline
**Detection type** & **Complexity reduction at user B** & **Complexity reduction at user A** \\ \hline \hline Rotated ML & 0\% & 50\% \\ \hline Two-stage LLR & 75\% & 75\% \\ \hline Proposed & 75\% & 87.5\% \\ \hline \end{tabular}
\end{table} TABLE I: Complexity reduction as compared to the optimal ML detector where both the users having mid data rate application (i.e. \(k_{A}=k_{B}=2\), \(n_{A}=n_{B}=n=4\), \(M_{A}=M_{B}=M=2\)).
the "ML detector", the "two-stage LLR detector" [14] and the "rotated constellation based ML detector". The system parameters used for all the simulations are: \(N=256,G_{A}=G_{B}=64,n_{A}=n_{B}=n=4\), \(\varrho_{A}^{2}=\rm 4\;dB,\varrho_{B}^{2}=0\;dB,N_{cp}=16,L=12,\gamma=0.1\), and \(M_{A}=M_{B}=M=2\).
With a same-user application taken into account, Fig. 1 compares the BER performance of the proposed detector with the BER performance of the "ML detector" and the "two-stage LLR detector". By setting the simulation parameters to \(k_{A}=k_{B}=3,n=4\), it is assumed that both users having high data rate applications (e.g. ultrahigh definition video streaming) utilize the same NOMA spectrum. Therefore, an identical subcarrier activation ratio of 3/4 is considered for both users in this case. From this figure, we can observe that at user B the proposed detector achieves a gain of 4 dB at a BER of \(10^{-3}\) when compared to the "ML detector" and the "two-stage LLR detector", while at user A the proposed detector attains a slightly better error performance than the optimal ML detector and the two-stage LLR detector at a BER of \(10^{-3}\).
Considering another same-user application, Fig. 2 compares the BER performance of the proposed detector with the BER performance of the "ML detector" and the "two-stage LLR detector". By setting the simulation parameters to \(k_{A}=k_{B}=1,n=4\), it is assumed that both users having low data rate applications (e.g. Internet of Things applications) are utilizing the same NOMA spectrum. Therefore, an identical subcarrier activation ratio of 1/4 is considered for both users in this case. According to this figure, at user B the proposed detector achieves a gain of 2 dB at a BER of \(10^{-3}\) when compared to the "ML detector" and the "two-stage LLR detector", while at user A the proposed detector achieves a minute improvement in error performance over the ML detector and the two-stage LLR detector at a BER of \(10^{-3}\).
For a hybrid-user application, Fig. 3 compares the BER performance of the proposed detector with the BER performance
\begin{table}
\begin{tabular}{|c|c|c|} \hline
**Detector type** & **Complexity reduction at user B** & **Complexity reduction at user A** \\ \hline \hline Rotated ML & 0\% & 50\% \\ \hline Two-stage LLR & 91.7\% & 91.7\% \\ \hline Proposed & 91.7\% & 95.8\% \\ \hline \end{tabular}
\end{table} TABLE II: Complexity reduction as compared to the optimal ML detector where both the users having high data rate application (i.e. \(k_{A}=k_{B}=3\), \(n_{A}=n_{B}=n=4\), \(M_{A}=M_{B}=M=2\)).
Fig. 1: Comparing BER of the proposed detector with the “ML detector” and the “two-stage LLR detector”, where both users have high data rate applications
Fig. 3: Comparing BER of the proposed detector with the “ML detector” and the “two-stage LLR detector” for the hybrid user configuration
Fig. 2: Comparing BER of the proposed detector with the “ML detector” and the “two-stage LLR detector”, where both users have low data rate applications
\begin{table}
\begin{tabular}{|c|c|c|} \hline
**Detector type** & **Complexity reduction at user B** & **Complexity reduction at user A** \\ \hline \hline Rotated ML & 0\% & 50\% \\ \hline Two-stage LLR & 91.7\% & 91.7\% \\ \hline Proposed & 91.7\% & 95.8\% \\ \hline \end{tabular}
\end{table} TABLE III: Complexity reduction as compared to the optimal ML detector for the hybrid user configuration (i.e. \(k_{A}=3,k_{B}=1\), \(n_{A}=n_{B}=n=4\), \(M_{A}=M_{B}=M=2\)).
mance of the "ML detector" and the "two-stage LLR detector". Since user A and user B with different settings are utilizing the same spectrum, the users in this scenario are referred to as hybrid users. By setting the simulation parameter to \(k_{A}=3,k_{B}=1,n=4\), it is considered that the user A with a high data rate application and user B with a low data rate application utilize the same NOMA spectrum. Therefore, a subcarrier activation ratio of 3/4 and 1/4 is assigned to user A and user B, respectively. Fig. 3 shows that at user B, the proposed detector achieves a gain of 5.5 dB at a BER of \(10^{-3}\) when compared to the "ML detector" and the "two-stage LLR detector". While, at user A, the proposed detector achieves a slightly better error performance than the optimal ML detector and the two-stage LLR detector at a BER of \(10^{-3}\).
In Fig. 4, the error performance of the proposed detector is compared with that of the "rotated constellation-based ML detector". This figure illustrates that the proposed detector attains nearly identical error performance to the "rotated constellation-based ML detector", while incurring much lower computational complexity at both user A and user B, as indicated in the previous performance analysis section.
## VI Conclusion
In this paper, a novel "constellation rotation-based LLR detector" is proposed for the OFDM-IM NOMA scheme. The proposed detector is evaluated under several scenarios, including users with high data rate applications, low data rate applications, and hybrid applications. Simulation results and complexity analysis demonstrate that, in every case, the proposed detector achieves significantly lower computational complexity while also achieving much better error performance than the detectors previously introduced in the literature.
|
2304.01144 | Constraints on the frequency and mass content of r-process events
derived from turbulent mixing in galactic disks | Metal-poor stars in the Milky Way (MW) halo display large star-to-star
dispersion in their r-process abundance relative to lighter elements. This
suggests a chemically diverse and unmixed interstellar medium (ISM) in the
early Universe. This study aims to help shed light on the impact of turbulent
mixing, driven by core collapse supernovae (cc-SNe), on the r-process abundance
dispersal in galactic disks. To this end, we conduct a series of simulations of
small-scale galaxy patches which resolve metal mixing mechanisms at parsec
scales. Our set-up includes cc-SNe feedback and enrichment from r-process
sources. We find that the relative rate of the r-process events to cc-SNe is
directly imprinted on the shape of the r-process distribution in the ISM with
more frequent events causing more centrally peaked distributions. We consider
also the fraction of metals that is lost on galactic winds and find that cc-SNe
are able to efficiently launch highly enriched winds, especially in smaller
galaxy models. This result suggests that smaller systems, e.g. dwarf galaxies,
may require higher levels of enrichment in order to achieve similar mean
r-process abundances as MW-like progenitors systems. Finally, we are able to
place novel constraints on the production rate of r-process elements in the MW,
$6 \times 10^{-7} {M_\odot / \rm yr} \lesssim \dot{m}_{\rm rp} \ll 4.7 \times
10^{-4} {M_\odot / \rm yr} $, imposed by accurately reproducing the mean and
dispersion of [Eu/Fe] in metal-poor stars. Our results are consistent with
independent estimates from alternate methods and constitute a significant
reduction in the permitted parameter space. | A. N. Kolborg, E. Ramirez-Ruiz, D. Martizzi, P. Macias, M. Soares-Furtado | 2023-04-03T17:11:08Z | http://arxiv.org/abs/2304.01144v1 | Constraints on the frequency and mass content of r-process events derived from turbulent mixing in galactic disks
###### Abstract
Metal-poor stars in the Milky Way (MW) halo display large star-to-star dispersion in their r-process abundance relative to lighter elements. This suggests a chemically diverse and unmixed interstellar medium (ISM) in the early Universe. This study aims to help shed light on the impact of turbulent mixing, driven by core collapse supernovae (cc-SNe), on the r-process abundance dispersal in galactic disks. To this end, we conduct a series of simulations of small-scale galaxy patches which resolve metal mixing mechanisms at parsec scales. Our set-up includes cc-SNe feedback and enrichment from r-process sources. We find that the relative rate of r-process events to cc-SNe is directly imprinted on the shape of the r-process distribution in the ISM, with more frequent events causing more centrally peaked distributions. We also consider the fraction of metals that is lost to galactic winds and find that cc-SNe are able to efficiently launch highly enriched winds, especially in smaller galaxy models. This result suggests that smaller systems, e.g. dwarf galaxies, may require higher levels of enrichment in order to achieve similar mean r-process abundances as MW-like progenitor systems. Finally, we are able to place novel constraints on the production rate of r-process elements in the MW, \(6\times 10^{-7}M_{\odot}/\mathrm{yr}\lesssim \dot{m}_{\mathrm{rp}}\ll 4.7\times 10^{-4}M_{\odot}/\mathrm{yr}\), imposed by accurately reproducing the mean and dispersion of [Eu/Fe] in metal-poor stars. Our results are consistent with independent estimates from alternate methods and constitute a significant reduction in the permitted parameter space.
Anne Noer Kolborg, Enrico Ramirez-Ruiz, Davide Martizzi, Phil Macias, Melinda Soares-Furtado
## 1 Introduction
The detailed physical ingredients required for r-process nucleosynthesis to take place were identified in the pioneering works of [21] and [22]. Despite that, the dominant astrophysical site for r-process production in the early Universe remains highly debated (Cowan et al., 2021), even after the landmark discovery of the kilonova associated with GW170817 (Abbott et al., 2017; Coulter et al., 2017; Kasen et al., 2017; Watson et al., 2019). To this end, metal-poor stars in the galactic halo can be used as unique probes of r-process element synthesis in the early Universe and could help elucidate the astrophysical nature of the dominant progenitor system (Sneden et al., 2008; Thielemann et al., 2017).
Abundance similarities observed among metal-poor halo stars with ages discrepant by billions of years but with a distribution that is representative of the solar system (Sneden et al., 2003; McWilliam et al., 1995; Sneden et al., 2008; Roederer et al., 2014) suggest that nuclear pathways responsible for r-process elements are rather robust and have been operating coherently over extensive periods of time in the assembly history of the Milky Way (MW). From the large star-to-star chemical abundance dispersion in r-process elements (such as Eu), with respect to the \(\alpha\) elements (such as Mg), we can infer that the injection of r-process elements occurs at a dramatically diminished rate when compared to core-collapse supernovae (cc-SNe) (Wasserburg and Qian, 2000; Fields et al., 2002). Additionally, the largest enhancements in [Eu/Fe] observed in metal poor stars imply that individual r-process sites need to synthesize a minimum of roughly \(10^{-3}M_{\odot}\) of r-process material (Macias and Ramirez-Ruiz, 2018).
A central goal of this paper is to use a series of idealized hydrodynamic simulations of the response of galactic disks
to SN feedback and metal mixing (Kolborg et al., 2022) to provide a deeper physical interpretation of the star-to-star r-process scatter observed in the MW in the metallicity range [Fe/H] of approximately -3.0 to -1.5, where Fe enrichment is driven primarily by cc-SNe (Hotokezaka et al., 2018; Cote et al., 2019; Wanajo et al., 2021). The constraints derived from the abundance pattern of r-process elements in the MW may ultimately help decipher the dominant production mechanism in the early Universe.
The numerical modeling aimed at addressing the inhomogeneous enrichment of r-process elements in the MW has been primarily performed in a cosmological context (Shen et al., 2015; van de Voort et al., 2015; Naiman et al., 2018; Haynes and Kobayashi, 2019; van de Voort et al., 2020, 2022). These studies are limited by resolutions of a few tens to hundreds of parsecs, and they inevitably involve "sub-grid" conjectures for SN feedback, star formation and turbulent mixing. As such, large uncertainties remain at the scales at which metal injection and turbulent metal mixing take place. Instead, in this paper we model the gas interactions in small patches of a galactic disk in order to effectively isolate the sub-parsec-scale mixing of metals injected by cc-SNe and r-process-producing events (such as neutron star mergers, NSMs) into the interstellar medium (ISM). Besides being computationally tractable (Kolborg et al., 2022), investigating the distribution of r-process elements in these simulations and comparing them extensively with observational data of metal poor stars can, in turn, help constrain the frequency of events and the mass of r-process material per event.
This paper is structured as follows. In Section 2, we summarize the simulation setups and introduce the key parameters relevant for metal production in a wide range of galaxy models. Sections 3 through 6 present the results of our investigation. Section 3 shows the results of the turbulent mixing study in a given galaxy potential; this section serves to introduce the reader to the salient concepts and build an understanding of the key mixing processes on which we build in subsequent sections. In Section 4, we present the mixing results across the three galaxy potentials studied, while in Section 5 we investigate how changing the allowed height of r-process events influences the mixing and ejection of r-process mass. Section 6 investigates the production rate of r-process elements by studying how the spread of elements changes in response to variations in the mass per event (Section 6.1) and the relative rate of events (Section 6.2). This section concludes with a comparison between simulations and observations of [Eu/Fe] abundance of metal poor halo stars (Section 6.3). Section 7 gives a discussion of the implications of our simulation results, while Section 8 provides a final summary of our findings.
## 2 Numerical Methods
We use the hydrodynamic code RAMSES (Teyssier, 2002) to simulate the evolution of gas in patches of galactic disks over timescales of hundreds of Myr. The fundamental simulation setup is the same as that used in Martizzi et al. (2016) and Kolborg et al. (2022), with the inclusion of a passive scalar field in order to trace the enrichment of r-process elements. In this section, we describe the salient aspects of the models as they pertain to this project. For additional details, we refer the interested reader to Martizzi et al. (2016) and Kolborg et al. (2022). The full suite of simulations used in this study are listed in Table 1.
### Galaxy models
The simulations assume a static gravitational potential produced by gas, stars, and dark matter as described by Kuijken and Gilmore (1989). The parameters of this gravitational potential are varied to emulate three different types of galaxies (Kolborg et al., 2022).
The first model mimics an early MW progenitor with a high star formation rate (SFR). The second model simulates a MW progenitor with a more modest SFR. This second model is motivated by the recent findings of Wang et al. (2021), who argue that the presence of the Large Magellanic Cloud satellite suggests a MW progenitor with a less active SFR. The third model mimics a weak gravitational potential with a high gas fraction and low SFR, emulating the expected properties of a classical dwarf galaxy.
We employ cubic boxes1 with periodic boundary conditions on the four faces perpendicular to the disk and outflow boundary conditions on the two parallel ones. The resolution of each simulation box is chosen such that the evolution of individual supernova remnants (SNR) is always well resolved. Specifically, the cooling radius is resolved by at least 5 cells for at least 94% of all remnants, in all galaxy models (Kolborg et al., 2022).
Footnote 1: RAMSES does not yet allow tall box simulations
These various setups of galaxy disk patches at relatively high resolution, which capture the momentum injection of individual SNe, are able to effectively replicate the local conditions of galactic environments that might be representative of metal-poor stars assembled in the early MW and within accreted dwarf satellites.
### Core collapse supernovae and neutron star mergers
cc-SNe and r-process events are seeded randomly with a constant rate (see the following section) and a flat distribution in space within fixed maximum allowed heights from the center of the galactic disk.
We model cc-SNe using the sub-grid model for SN feedback implemented by Martizzi et al. (2015). In this model, each cc-SN event has an ejecta mass equal to the initial mass function (IMF) weighted average (\(M_{\rm ej}=6.8\,M_{\odot}\)) and energy \(E=1\times 10^{51}\) erg. Metals are introduced into the gas by individual cc-SNe, which are assumed to be chemically identical.
We apply the same sub-grid model of injection of mass and energy to the r-process producing events; however, we select \(M_{\rm ej}=1\times 10^{-2}\,M_{\odot}\) and \(E=1\times 10^{51}\) erg. These values reflect the properties of the ejecta inferred in the gravitational-wave triggered NSM event GW170817 (Kasen et al., 2017).
Although the morphology of a NSM remnant is highly asymmetric at early times (Rosswog and Ramirez-Ruiz, 2002; Ramirez-Ruiz and MacFadyen, 2010; Roberts et al., 2011), the subsequent radiative evolution is notably analogous to that of a SNR with similar total energy (Montes et al., 2016). The shell formation epoch, which occurs when the remnant becomes radiative, takes place by the time the mass of swept-up material reaches \(M_{\rm c}\approx 10^{3}(n_{\rm H}/1\,{\rm cm}^{-3})^{-2/7}M_{\odot}\)(Karpov et al., 2020; Macias and Ramirez-Ruiz, 2018; Cioffi et al., 1988; Thornton et al., 1998; Martizzi et al., 2015). The implications of this are twofold. First, sub-grid SN feedback models, like the ones used in this study, can be effectively used to resolve the key evolutionary phases of NSM remnants (Montes et al., 2016). Second, our r-process injection models can be applied whether enrichment has occurred via extremely rare cc-SNe (e.g., Cowan and Thielemann, 2004; Winteler et al., 2012; Nishimura et al., 2015; Mosta et al., 2018; Siegel et al., 2019), or through NSMs (e.g., Metzger et al., 2010; Roberts et al., 2011), provided that any contribution of other freshly-synthesized metals (i.e., non r-process) is less than those contained in the swept-up ISM mass by the time the blast wave reaches the cooling phase (Macias and Ramirez-Ruiz, 2019). For the purposes of this study we focus our attention on NSMs as the source of r-process elements.
The maximum allowed height of standard cc-SNe events (\(z_{\rm SNe}\)) is fixed to twice the scale height of the gaseous disk. Tying the allowed height of the cc-SNe to the gaseous scale height is motivated by the short distances typically traveled by massive stars between their birth and their SNe. The r-process events, on the other hand, are allowed to occur within a region defined by the parameter \(z_{\rm NSM}\). As a starting point, we set this value to \(z_{\rm NSM}=1.33z_{\rm SNe}\). In Section 5, we explore how changes to this scale height impact the turbulent mixing process, and we discuss the limitations of our galaxy patch simulations.
#### 2.2.1 Event and metal injection rates
\begin{table}
\begin{tabular}{l|c|c|c|c|c|c|c|c|c|c|c|c} Galaxy model & \(\Sigma_{\rm SFR}\) & \(\rho_{0}\) & \(z_{\rm eff}\) & \(z_{\rm SNe}\) & \(L\) & \(dx\) & \(T_{\rm end}\) & \(\kappa\) & \(z_{\rm NSM}\) & \(f_{\rm rp}\) & \(m_{\rm rp}\) \\ & M\({}_{\odot}\) / kpc\({}^{2}\) / Myr & g/cm\({}^{3}\) & pc & pc & pc & pc & Myr & pc km/s & \(z_{\rm SNe}\) & & \(M_{\odot}\) \\ \hline MW progenitor, high SFR & \(3\times 10^{4}\) & \(3.47\times 10^{-23}\) & 40 & 100 & 1000 & 3.9 & 120 & 147 & 1.33 & \(1\times 10^{-3}\) & \(1\times 10^{-2}\) \\ & & & & & & & & & & & & \(1\times 10^{-1}\) \\ & & & & & & & & & & & \(1\times 10^{-2}\) & \(1\times 10^{-2}\) \\ & & & & & & & & & & & \(1\times 10^{-2}\) & \(1\times 10^{-2}\) \\ & & & & & & & & & & & \(1\times 10^{-2}\) & \(1\times 10^{-3}\) \\ & & & & & & & & & & & \(1.0\) & & \(1\times 10^{-2}\) \\ & & & & & & & & & & & 2.0 & & \\ \hline MW progenitor, low SFR & \(1\times 10^{3}\) & \(2.08\times 10^{-24}\) & 80 & 160 & 1000 & 3.9 & 250 & 213 & 1.33 & \(1\times 10^{-3}\) & \(1\times 10^{-2}\) \\ \hline Satellite & \(8\times 10^{2}\) & \(4.67\times 10^{-25}\) & 365 & 800 & 4000 & 15.6 & 500 & 1460 & 1.33 & \(1\times 10^{-3}\) & \(1\times 10^{-2}\) \\ & & & & & & & & & & & \(1\times 10^{-1}\) & \\ & & & & & & & & & & & \(1\times 10^{-2}\) & \(1\times 10^{-2}\) \\ & & & & & & & & & & & \(1\times 10^{-2}\) & \(1\times 10^{-3}\) \\ \end{tabular}
\end{table}
Table 1: Overview of all the simulations presented in this project. The columns are: name of the galaxy patch model, surface density of star formation rate, mid-plane density initial condition, effective scale height of the gaseous disk, maximum vertical height of a cc-SNe, box side length, cell size, total evolution time, turbulent diffusion coefficient (Kolborg et al., 2022), maximum vertical height of an r-process event (as a fraction of the maximum cc-SNe altitude), relative rate of r-process to cc-SNe events, and mass of r-process material per r-process event.
The surface density rate of cc-SNe, \(\Gamma\), is set by the rate of star formation in the galaxy patch, such that \(\Gamma=\frac{\Sigma_{\rm SFR}}{100\,{\rm M}_{\odot}}\). The star formation rate (and hence the cc-SNe event rate) is assumed to be constant. Due to the relatively short time span and the physical extent of the galaxy patches modelled, one can think of this as modelling a short star formation burst early in the history of the galaxy.
The rate of NSM events is set as a rate relative to standard cc-SNe, \(\Gamma_{\rm NSM}=f_{\rm rp}\Gamma\). In this study we explore relative rates \(f_{\rm rp}\) in the range \([10^{-3},10^{-2}]\), which is equivalent to 1 NSM event per 100-1000 cc-SNe.
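As a concrete illustration (a minimal sketch, not part of the simulation pipeline; the numbers follow Table 1 for the MW progenitor with high SFR, and the variable names are ours), these two relations translate into the following event rates:

```python
# Sketch: event rates implied by the SFR surface density, assuming one cc-SN per
# 100 Msun of stars formed (values from Table 1, MW progenitor with high SFR).
SIGMA_SFR = 3.0e4          # Msun / kpc^2 / Myr
PATCH_AREA = 1.0           # kpc^2 (the 1 kpc x 1 kpc patch)
F_RP = 1.0e-3              # relative rate of r-process events to cc-SNe

gamma_ccsne = SIGMA_SFR / 100.0       # cc-SNe / kpc^2 / Myr
gamma_nsm = F_RP * gamma_ccsne        # r-process events / kpc^2 / Myr

print(f"cc-SNe in the patch    : {gamma_ccsne * PATCH_AREA:.0f} per Myr")
print(f"NSM events in the patch: {gamma_nsm * PATCH_AREA:.1f} per Myr "
      f"(~{gamma_nsm * PATCH_AREA * 120:.0f} over the 120 Myr run)")
```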
For the purposes of this project, we are interested in studying the spread of r-process elements relative to elements produced by cc-SNe. We select Fe as our representative cc-SNe element, with each cc-SN yielding \(M_{\rm Fe}=8\times 10^{-2}\,M_{\odot}\) (Kolborg et al., 2022). We model r-process events as NSMs and assume that the entire ejecta is composed of r-process elements, with a total mass \(m_{\rm rp}=M_{\rm ej,NSM}\). That is, r-process events are assumed not to contribute to Fe enrichment in our simulations.
As is common practice in the community, we designate Eu as the representative r-process element. We calculate the mass of Eu by applying the solar r-process abundance pattern (Asplund et al., 2009; Sneden et al., 2008) and computing the mass fraction of Eu relative to all r-process elements.
This is particularly useful when comparing with observations of metal-poor stars (Sneden et al., 2008). For the purpose of this study, we consider atomic mass number 69 as the strict lower limit for neutron capture element production (\(A_{\rm min}=69\)). In Section 7.1, we discuss the implications of this choice and the consequences for the results if \(A_{\rm min}=90\), which corresponds to the second and third r-process peaks (Sneden et al., 2008).
\(\alpha\), iron peak, and r-process elements are tracked as individual passive scalar fields. For the purposes of this study, only iron and r-process elements are of interest, and the individual scalar fields allow us to adjust \(M_{\rm Fe}\) and \(m_{\rm rp}\) in post-processing. This approach allows us to simulate a wide range of r-process masses per event, \(m_{\rm rp}=[10^{-3},10^{-1}]M_{\odot}\), studying their effects on the metal enrichment of the ISM.
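To make the post-processing step concrete, the sketch below (our own illustrative helper, not the actual analysis code) converts passive-scalar mass fractions into the bracket abundances used throughout this paper, adopting the standard Asplund et al. (2009) solar photospheric reference values; the Eu mass fraction of the r-process ejecta is a placeholder, since its exact value depends on the adopted pattern and on \(A_{\rm min}\):

```python
import numpy as np

# Sketch of the abundance conversion used in post-processing (helper names are ours).
X_EU_OF_RP = 1.0e-3                            # placeholder Eu mass fraction of the r-process ejecta
LOG_EPS_FE_SUN, LOG_EPS_EU_SUN = 7.50, 0.52    # Asplund et al. (2009) photospheric values
A_FE, A_EU, A_H = 55.85, 151.96, 1.008         # atomic masses

def bracket_x_h(x_mass_frac, h_mass_frac, a_x, log_eps_sun):
    """[X/H] = log10(N_X/N_H) - log10(N_X/N_H)_sun for a single gas cell."""
    n_ratio = (x_mass_frac / a_x) / (h_mass_frac / A_H)
    return np.log10(n_ratio) - (log_eps_sun - 12.0)

# Example cell: passive-scalar mass fractions of Fe and of total r-process material
fe_frac, rp_frac, h_frac = 1.0e-5, 1.0e-9, 0.74
eu_frac = X_EU_OF_RP * rp_frac

fe_h = bracket_x_h(fe_frac, h_frac, A_FE, LOG_EPS_FE_SUN)
eu_h = bracket_x_h(eu_frac, h_frac, A_EU, LOG_EPS_EU_SUN)
print(f"[Fe/H] = {fe_h:.2f}, [Eu/H] = {eu_h:.2f}, [Eu/Fe] = {eu_h - fe_h:.2f}")
```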
In Kolborg et al. (2022) we showed that the metals injected into the ISM by these cataclysmic events are mixed through the galactic disk by turbulent diffusion, which is in turn driven by the energy and momentum deposition from the most common events. The turbulent diffusion coefficient sets the timescale for metal mixing, which depends mainly on the scale height of the disk and the turbulent velocity dispersion (Kolborg et al., 2022). Martizzi et al. (2016) conducted resolution tests on the velocity dispersion in the MW progenitor with high SFR set-up and found that the turbulent diffusion coefficient is effectively converged at the resolution employed in this work. Furthermore, the turbulent driving scales in the MW progenitor models are \(\gtrsim 100\,{\rm pc}\) in both galaxy set-ups (Martizzi et al., 2016). This characteristic scale is nearly two orders of magnitude greater than the resolution scale of our models and, as a result, we expect the turbulent cascade to be well resolved.
The advantage of galaxy patch simulations is that they allow us to effectively capture the driving of the turbulence by cc-SNe, as well as to accurately study its effects on the mixing of other freshly synthesized metals that are produced by much rarer cataclysmic events. The simulations are designed to isolate the mixing driven by cc-SNe feedback, which is primarily driven by turbulence and by galactic wind launching. By design, other important sources of mixing in galaxies, such as gas inflow, recycling of gas in galactic fountains, and disc shearing, are not present in these simulations.
### Steady State
All simulations are initialized with the gas in hydrostatic equilibrium with the static gravitational potential. As cooling and SNe feedback turn on simultaneously, some of the thermal pressure support is lost and turbulent pressure support increases. Hence, at early times, the disk is primarily supported by thermal pressure, while at later times, the disk is supported by both thermal and turbulent pressure driven by cc-SNe. Since information needs to be effectively communicated to all regions before turbulent motions in the bulk of the disk reach a statistical steady state, an initial transient phase is produced. The duration of this phase is usually well described by the characteristic relaxation time in our disk models, \(t_{r}=4z_{\rm eff}\langle\sigma_{v}\rangle^{-1}\), where \(\langle\sigma_{v}\rangle\) is the time averaged, mass-weighted velocity dispersion of the gas (Kolborg et al., 2022). In what follows, we neglect this initial phase, which lasts \(\approx\frac{1}{5}T_{\rm end}\) in all of our simulations (Table 1). In this work, we only discuss the results from the steady state of the simulations.
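For orientation, the relaxation time can be evaluated as in the minimal sketch below; \(z_{\rm eff}\) is taken from Table 1 for the MW progenitor with high SFR, while the velocity dispersion is a purely illustrative placeholder rather than a measured value:

```python
# Sketch: characteristic relaxation time t_r = 4 * z_eff / <sigma_v>.
PC_IN_KM = 3.086e13      # km per parsec
MYR_IN_S = 3.156e13      # seconds per Myr

z_eff_km = 40.0 * PC_IN_KM     # z_eff = 40 pc (Table 1), converted to km
sigma_v = 10.0                 # km/s, illustrative mass-weighted velocity dispersion

t_r_myr = 4.0 * z_eff_km / sigma_v / MYR_IN_S
print(f"t_r ~ {t_r_myr:.0f} Myr")   # ~16 Myr for these illustrative numbers
```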
## 3 Metal Mixing in High SFR Environments
In this section, we examine the mixing of elements in the MW progenitor model with high SFR. For simplicity, we limit our attention to a single instance of this model where r-process events take place at a rate relative to cc-SNe of \(f_{\rm rp}=10^{-3}\) and each event yields a mass of r-process elements of \(m_{\rm rp}=10^{-2}\,M_{\odot}\). This assumes that cc-SNe occur in a region of \(z_{\rm SNe}=100\,{\rm pc}\), while the r-process events occur in the region defined by \(|z|\leq 1.33z_{\rm SNe}\). We use this representative model to present a detailed account of the metal mixing features that may be present in the interstellar gas. These salient features can result naturally when one examines metal turbulent mixing by events occurring at disparate rates, yet their exact properties depend on the specific galaxy model and more sensitively on \(f_{\rm rp}\) and \(m_{\rm rp}\).
### The Tomography of a Galaxy Patch
In Figure 1 we show the density-weighted projections of the thermodynamic properties of the gas along two axes in the simulation box. The density (left-hand column) and temperature (right-hand column) projections along an axis perpendicular to the galactic disk (top row) clearly illustrate how the momentum deposition from cc-SNe ejects gas from the disk in the form of a hot, rarefied galactic wind. In contrast, the projection along the galactic disk plane (similar to a face-on view of the galaxy, bottom row) distinctly shows individual cc-SN and r-process remnants. They can be observed as nearly spherical regions of lower density and higher temperature gas, which are embedded in the denser, cooler, star forming gas.
In addition to momentum and energy, the individual events inject new metals: iron peak elements in the case of cc-SNe and heavy metals in the case of r-process events. These metals are mixed into the surrounding ISM by the turbulent diffusion, which is driven primarily by cc-SNe. Figure 2 shows the density-weighted projections of the metal abundances in the gas for both Fe and Eu enrichment.
Several key aspects can be noted by examining the maps shown in Figure 2. First, the rarefied galactic wind is highly enriched in Fe and Eu when compared to cold, dense gas in the mid-plane. This is primarily due to the low density of the hot wind within this region, which implies that even the mixing of a small amount of metals can lead to a comparatively high enrichment.
Second, the abundance of [Fe/H] is relatively uniform across the disk and within the galactic wind region. The full span of [Fe/H] variations is generally small. Across the disk, we observe localized [Fe/H] variations where neighboring cc-SN remnants remain compact and fail to overlap (at scales of a few tens of pc). These localized Fe enhancements are then subsequently smoothed out as the enriched gas is given more time to expand and mix throughout the disk (Kolborg et al., 2022). Yet, individual remnants always expand to characteristic lengths that are smaller than the disk scale height.
Lastly, [Eu/H], shown in the middle panels of Figure 2, exhibits significantly larger scatter in abundance than [Fe/H], shown in the left panels. This originates from the comparative rarity of r-process metal injection, which naturally produces a chemically inhomogeneous and unmixed ISM at these early epochs. As more events are injected and r-process material diffuses and mixes, we can expect these metal concentrations to be gradually smoothed out.
### Metal mixing as a function of time
From visual inspection of the abundance maps alone, we can gain a clear insight regarding the differential distribution of Eu and Fe in the galactic disk. In this section, we present a quantitative description of these differences.
Figure 3 depicts the volume weighted mean and \(1\sigma\) spread of the gas abundances when considering all the gas in the simulation volume (global) and the cold, dense gas in the disk. For reference, the snapshots depicted in Figures 1 and 2 correspond to a simulation time \(57.4\mathrm{Myr}\approx 0.475T_{\mathrm{end}}\).
Our scheme does not explicitly include star formation, so, for the purpose of making an informed comparison, we isolate the cold, dense gas within the disk (\(|z|\leq z_{\rm eff}\)) that is the most apt to form stars. We impose a temperature boundary of \(T\leq 10^{4}\,\mathrm{K}\), which is related to the cooling function temperature floor, and a density limit of \(\rho\gtrsim\rho_{0}/4\) (see Table 1).
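A minimal sketch of this selection and of the volume-weighted statistics plotted in Figure 3 is given below; the arrays are random stand-ins for quantities read from the RAMSES snapshots, and the thresholds follow Table 1 for the MW progenitor with high SFR:

```python
import numpy as np

# Sketch: isolate the cold, dense disk gas and compute volume-weighted statistics.
rng = np.random.default_rng(0)
n = 100_000
rho   = 10.0 ** rng.uniform(-26.0, -22.0, n)   # g / cm^3 (stand-in values)
temp  = 10.0 ** rng.uniform(1.0, 7.0, n)       # K (stand-in values)
abs_z = rng.uniform(0.0, 500.0, n)             # pc (stand-in values)
fe_h  = rng.normal(-2.0, 0.3, n)               # stand-in [Fe/H] field
cell_volume = np.full(n, 3.9 ** 3)             # pc^3, uniform 3.9 pc cells

RHO_0, Z_EFF = 3.47e-23, 40.0                  # Table 1, MW progenitor with high SFR
cold_dense_disk = (temp <= 1.0e4) & (rho >= RHO_0 / 4.0) & (abs_z <= Z_EFF)

def volume_weighted_stats(field, volume, mask):
    """Volume-weighted mean and 1-sigma spread of a field over the selected cells."""
    w = volume[mask]
    mean = np.average(field[mask], weights=w)
    spread = np.sqrt(np.average((field[mask] - mean) ** 2, weights=w))
    return mean, spread

mean_fe_h, sigma_fe_h = volume_weighted_stats(fe_h, cell_volume, cold_dense_disk)
print(f"<[Fe/H]> = {mean_fe_h:.2f} +/- {sigma_fe_h:.2f} (cold, dense disk gas)")
```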
As the simulation evolves, the [Fe/H] within the disk is gradually enriched as ensuing cc-SNe increase the mean Fe metallicity of the gas. At the same time, cc-SNe feedback drives outflows which cause Fe to spread from the disk into the rarefied wind. The volume weighted average tends to emphasize the abundance of the hot, rarefied gas, which has a large volume filling factor. This becomes evident when examining the underlying distributions at fixed times shown in Appendix A. The global gas distribution of [Fe/H] has a very narrow peak at high metallicities (which drives the mean
Figure 1: Density-weighted projection plots of the thermodynamic properties of the gas in the MW progenitor with high SFR (left-hand column: density; right-hand column: temperature). The top row presents an edge-on view of the galaxy disk, while the bottom presents a face-on view. The SNe drive turbulent mixing in the disk and a galactic wind. Individual SNR are clearly visible in the plane of the disk.
value) and a fairly long tail at lower metallicities (which influences the spread calculation). By contrast, the global and disk [Fe/H] distributions are rather similar when considering the mass weighted distributions. A similar behavior is seen for the [Eu/H] distributions, albeit with much broader "peaks" given the lesser degree of metal mixing which results from less frequent injections.
Initially, the metal distributions of both [Fe/H] and [Eu/H] are highly inhomogeneous because there are significant regions that contain unmixed material. Due to its much more frequent injection, [Fe/H] is more uniformly distributed than [Eu/H]. The impact of higher metal injection rates on the abundance of [Fe/H] is twofold. First, the spread in the distribution is much narrower than the one seen for [Eu/H] in both the global and the cold, dense gas at all times in the simulation. In other words, there is a higher degree of homogenization of Fe than Eu as cc-SN products migrate and mix across the disk more effectively. Second, the evolution of the mean metallicity with time, which is caused by the rate of metal injection, is much swifter for [Fe/H] than for [Eu/H]. As time evolves, the localized inhomogeneities in both [Fe/H] and [Eu/H] are smoothed out and the evolution of the mean abundance becomes more gradual as each new injection of metals contributes progressively less to the total metal content. Yet r-process products migrate and mix throughout the box much more slowly. This is because large amounts of freshly synthesized r-process material are injected at a rate that is usually faster than the rate at which turbulent mixing smooths them out.
### Mass loading factor
Figure 2: Density weighted projection plots of the metal mixing properties of the gas in the galaxy patch simulation shown in Figure 1. The top row presents an edge-on view of the galaxy disk, while the bottom presents a face-on view. The panels show the abundances of [Fe/H] (left), [Eu/H] (middle) and [Eu/Fe] (right). Both Fe and Eu are efficiently ejected from the galactic disk and drastically increase the metal abundance in the galactic wind region. The effects of the relative rate of metal injection on the metal mixing of Fe and Eu are clearly seen; the less frequently injected Eu is less evenly distributed than Fe, which is injected more frequently.
Through injection of energy and momentum, cc-SNe drive material out of the disk and launch galactic winds. In this section, we consider the mass loading factor of the wind, which is commonly defined as (e.g. Martizzi et al., 2016; Li & Bryan, 2020):
\[\eta(z)=\frac{\dot{M}_{\rm out}(z)}{\rm SFR}, \tag{1}\]
where \(\dot{M}_{\rm out}(z)\) denotes the rate at which mass is leaving the galaxy.
Analogously, the metal mass loading factor is defined as the ratio of the mass in metals leaving the galaxy to the ratio of mass of metals that are injected (Li & Bryan, 2020):
\[\eta_{Z_{i}}(z)=\frac{\dot{M}_{Z_{i},\rm out}(z)}{\dot{M}_{Z_{i},\rm inj}}. \tag{2}\]
Here we calculate the rate of metal injection as
\[\dot{M}_{Z_{i},\rm inj}=f_{Z_{i}}\,\dot{n}_{\rm SNe}M_{\rm ej}Y_{Z_{i}}. \tag{3}\]
Here \(Y_{Z_{i}}\) is the fractional yield of element \(Z_{i}\) produced by a particular event ejecting a mass \(M_{\rm ej}\), so that \(M(Z_{i})=M_{\rm ej}Y_{Z_{i}}\), and \(\dot{n}_{\rm SNe}\) is the rate of cc-SNe. The parameter \(f_{Z_{i}}\) denotes the fraction of events that produce metal \(Z_{i}\) relative to the rate of cc-SNe. For cc-SN elements \(f_{Z_{i}}=1.0\), while for r-process elements \(f_{Z_{i}}=f_{\rm rp}\).
The galaxy patch set-up is uniquely equipped to study the launching of galactic winds, which develop naturally from SNe feedback (Martizzi et al., 2016) and require no sub-grid model. We estimate the rate at which mass leaves the disk by measuring the mass flux that streams through a surface parallel to the disk with an outflowing z-velocity. We measure the outflow rate at three different heights. In order of increasing height from the disk midplane, the chosen distances are as follows: \(|z|=3z_{\rm eff}\), \(|z|=2z_{\rm SNe}\), and \(|z|=\frac{L}{2}-dx\). The last measurement height is one cell size away from the edge of the simulation domain. The physical sizes of these heights are listed in Table 1 for each galaxy model.
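The sketch below illustrates, under our own naming conventions (it is not the analysis pipeline itself), how the outflow rate through one such plane and the loading factors of Eqs. (1)-(3) can be assembled; the total outflow rate at a given height is the sum of the contributions of the planes at \(+|z|\) and \(-|z|\):

```python
import numpy as np

G_PER_MSUN, S_PER_MYR = 1.989e33, 3.156e13

def outflow_rate(rho, vz, cell_area_cm2, z_sign, metal_frac=None):
    """Mass (or metal-mass) flux [g/s] through one plane, keeping only outflowing cells."""
    out = (vz * z_sign) > 0.0                      # moving away from the midplane
    flux = rho[out] * np.abs(vz[out])              # g / cm^2 / s per cell
    if metal_frac is not None:
        flux = flux * metal_frac[out]              # restrict to the traced metal
    return np.sum(flux) * cell_area_cm2

def mass_loading(mdot_out_gs, sfr_msun_per_myr):
    """Eq. (1): eta(z) = Mdot_out(z) / SFR."""
    return (mdot_out_gs / G_PER_MSUN * S_PER_MYR) / sfr_msun_per_myr

def metal_loading(mdot_z_out_gs, ndot_sne_per_myr, metal_mass_per_event_msun, f_event=1.0):
    """Eqs. (2)-(3): eta_Z(z) = Mdot_Z,out(z) / (f_Z * ndot_SNe * M_ej * Y_Z),
    where M_ej * Y_Z is passed as the metal mass per event."""
    mdot_inj = f_event * ndot_sne_per_myr * metal_mass_per_event_msun   # Msun / Myr
    return (mdot_z_out_gs / G_PER_MSUN * S_PER_MYR) / mdot_inj
```

For Fe, for instance, one would use the metal mass per event \(M_{\rm Fe}=8\times 10^{-2}\,M_{\odot}\) with \(f_{Z_{i}}=1\), while for r-process elements one would use \(m_{\rm rp}\) with \(f_{Z_{i}}=f_{\rm rp}\).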
It is important to note that the local boxes we consider in this study do not have a clearly defined escape velocity (Martizzi et al., 2016) and the wind mass loading factor is observed to decline with increasing box height. Martizzi et al. (2016) studied SNe feedback in the galaxy patch simulations and argue that the structure of the galactic winds are not always well captured by these boxes. However, in the case of the MW progenitor with high SFR simulation (equivalent to their model FX-ULTRA-MW-L8), they found that the global wind properties are well modelled within \(|z|\lesssim 200\,\rm pc\), which is equivalent to \(2z_{\rm SNe}\) for that model. Therefore, we proceeded with the analysis of the wind loading, focusing on results from heights comparable to this value. Moreover, Li & Bryan (2020) compared loading factors from a wide range of simulations, including the work by Martizzi et al. (2016), and found similar values reported across different global and local studies. This lends further credence to the robustness of the derived loading factors in this study.
Figure 4 shows the evolution of the loading factors relating to the total mass, cc-SN elements, and r-process elements during the steady state of the simulation. The mass loading factor has a nearly constant value throughout the evolution of the simulation, with values that are consistent with the results of Martizzi et al. (2016). The evolution of the loading factor of Fe closely traces \(\eta_{M}\). This seems natural to expect for a galactic wind driven primarily by momentum injection from the same cc-SNe that are also injecting Fe. Interestingly, \(\eta_{Z_{\rm Fe}}\) indicates that only \(\approx 10\%\) of the injected metals are incorporated into the galactic winds. The evolution of \(\eta_{Z_{\rm rp}}\), on the other hand, is much less smooth and contains large metal outbursts. The locations of these outbursts closely follow the injection timing of r-process events, which are marked by dark vertical lines in Figure 4.
Figure 3: Temporal evolution of the volume weighted mean and \(1\sigma\) spread of the abundances of [Fe/H] (left), [Eu/H] (middle) and [Eu/Fe] (right) for the MW progenitor with high SFR. The global values refer to the mean and spread when considering all the gas in the simulation volume, while the cold, dense, disk material is defined by gas located at \(|z|\leq z_{\rm eff}\) and with the following thermodynamic properties: \(T\leq 1\times 10^{4}\,\rm K\) and \(\rho\geq\rho_{0}/4\approx 1\times 10^{-23}\,\rm g\,cm^{-3}\). The spread in [Fe/H] is always significantly less than the spread of [Eu/X] due to cc-SN metals being injected at much higher rates.
The rise of r-process material in the wind takes place near the local injection sites, which produce large amounts of heavy elements. This freshly synthesized r-process material does not have time to mix effectively within the disk before being expelled. The smoothing of the individual outbursts with \(z\) is caused by r-process metals further mixing with wind material. It is noteworthy that the loading factors of cc-SN and r-process elements are very similar to each other at all times in the simulation, suggesting that r-process metals are carried out of the disk by winds driven by cc-SNe only after mixing significantly with the ISM. On average, the mass loading factor of iron elements is \(\approx 7.5\%\), while the mass loading factor of r-process elements is \(\approx 11\%\) (both measured at \(|z|=2z_{\rm SNe}\)). The slightly higher mass loading factor implies that r-process elements are retained slightly less effectively than cc-SN elements. This is closely related to the r-process metal outbursts captured in Figure 2, which occur because individual r-process events do not have time to mix effectively with the surrounding ISM before being expelled from the disk.
## 4 Metal Mixing in Different Galaxy Types
In this section, we consider the influence of the galaxy potential on the turbulent mixing of the freshly synthesized metals. To facilitate comparison, we assume standard values for all parameters that do not relate to the galaxy type. More explicitly, the relative rate of NSM to cc-SN events is set to \(f_{\rm rp}=10^{-3}\), the mass per event is set to \(m_{\rm rp}=10^{-2}\,M_{\odot}\), and the scale height of NSM is set to \(z_{\rm NSM}=1.33z_{\rm SNe}\).
### Abundance evolution
Figure 5 illustrates the evolution of the (volume-weighted) mean and spread of the [Fe/H], [Eu/H], and [Eu/Fe] abundances as a function of time in each of the three galaxy potentials. Similar to Figure 3, the mean and spread are shown for all the gas in the box (global) and the cold, dense gas within the disk. For all galaxy types, the global spread is observed to be larger than the spread of the cold, dense gas. We also see a significant offset between the mean abundance of the global gas and the cold, dense gas for all galaxy models. This offset is the smallest for the satellite model and is related to the lower gas density in the disk, which naturally leads to a higher volume filling factor of the enriched material in both the wind and the disk regions.
As a whole, galaxy models with lower SFRs take longer to achieve the same mean [Fe/H] metallicity. As such, Fe is given more time to migrate throughout the disk. This is because the lower-density gas associated with a low SFR allows each cc-SN remnant to reach larger physical sizes than remnants in higher-density environments. As such, galactic disks with lower SFRs show less spread in abundances at a comparable average [Fe/H] metallicity. As anticipated, the spread of [Eu/H] is generally larger than that of [Fe/H], particularly in the cold, dense gas phase.
In contrast with [Fe/H], the [Eu/H] spread is primarily driven by the relative rate of r-process to cc-SN events (which is the same in all simulations) and is less dependent on the SFR. It is compelling to note that the evolution of [Eu/H] for the MW progenitor with low SFR is rather odd (see middle panel in Figure 5). This transpires because the SFR is so low in this model that the galaxy patch is host to only two r-process events over its entire evolution. As a result, very metal poor stars in this model will not be enriched with any r-process material until after the first event takes place. This behavior clearly illustrates that galaxies with very low global SFRs might not be enriched with r-process material, as it is commonly expected to be the case for ultra faint dwarf (UFD) galaxies in the MW halo. Most of these systems are very metal poor and exhibit low r-process enhancements (Cowan et al., 2021). A notable exception is Reticulum-II, which has been shown to contain several highly enriched stars (Ji et al., 2016, 2022; Roederer et al., 2016). In this context, it is interesting to consider the likelihood that any one small system hosts an r-process event given the very low SFRs associated with these systems.
### Mass loading factors
One of our goals in this study is to understand the effects that turbulence driven by cc-SNe in galactic disks has on metal mixing. cc-SNe increase the mean metallicity of the gas in the disk and drive turbulent mixing, which prompts the metals to diffuse across the disk. At the same time,
Figure 4: Evolution of the loading factors of the total, iron and r-process mass (see legend) as a function of time in the MW progenitor with high SFR. The loading factor is shown for two different heights in the potential: \(|z|=3z_{\rm eff}\) (solid lines) and \(|z|=2z_{\rm SNe}\) (dashed lines). The dark vertical lines indicate the timing of r-process injections in the disk. The iron loading factor evolves very similarly to the total mass loading factor albeit with a different normalization. The r-process loading factor is strongly correlated with the injection of new r-process events. Interestingly, the loading factors of iron and r-process elements are comparable to each other, suggesting that both metal groups are carried away at similar rates irrespective of their very different injection rates.
Figure 5: The temporal evolution of the volume weighted mean and \(1\sigma\) of [Fe/H] (left), [Eu/H] (middle), and [Eu/Fe] (right) in each of the three galaxy potentials. From top to bottom we present the results for the MW progenitor with high SFR, the MW progenitor with low SFR, and the satellite model. The abundances were calculated for all the gas in the box (global) and for the cold, dense gas in the disk as done in Figure 3. Overall, the evolution of the abundances are similar in all three galaxy potentials. Yet, the difference between the global and the cold, dense gas mean values is smaller for the weaker galactic potentials, implying larger volume filling factors of enriched gas in all gas phases.
Figure 6: Loading factors of total, iron and r-process mass (see legend) measured at \(|z|=2z_{\rm SNe}\) for three different galaxy potentials: MW progenitor with high SFR (left); MW progenitor with low SFR (middle); and satellite galaxy (right). The dark vertical lines along the bottom of each panel denote the times at which r-process events take place. The times are normalized to the run times of the simulations (see Table 1). The initial phase (before steady-state is reached) is indicated by lower opacity color. The loading factors are generally larger for weaker gravitational potentials. The weaker gravitational potentials also display smaller differences in loading factors at greater distances from the disk (see Appendix B), indicating the material streams from the disk more easily.
the momentum and energy that goes into driving turbulence launches galactic winds and drives metals out of the disk. Figure 6 shows the temporal evolution of the loading factors of mass, iron, and r-process metals for the three different galaxy potentials. The loading factors are all measured at \(|z|=2z_{\rm SNe}\) (see Table 1). As expected, the mass loading factor is significantly larger for weaker galaxy potentials, highlighting the greater relative importance of cc-SN energetics in these systems. This result is consistent with what is frequently ascertained in dwarf galaxy simulations (e.g., Fielding et al., 2017).
During the simulations, the satellite galaxy model and the MW progenitor with low SFR release \(\approx 35\%\) and \(\approx 10\%\) of their initial mass to cc-SNe-driven winds, respectively. In contrast, the MW progenitor model with high SFR loses \(\lesssim\)5% of its initial mass (Kolborg et al., 2022). This mass loss is naturally explained by the mass loading factors observed for these less massive systems. The associated larger mass loading factors in low mass systems are, for example, necessary in order to interpret the observed galaxy stellar mass function (e.g., Li & Bryan, 2020).
Across the different galaxy potentials, the loading factors of iron and r-process elements are remarkably similar to each other, with a major caveat being that the MW progenitor with low SFR undergoes very low r-process mass loading before the first event occurs at \(t\approx 0.6T_{\rm end}\).
Models with fewer cc-SNe give the freshly synthesized r-process metals more time to diffuse and travel across the disk and, as a result, show less prominent r-process outbursts when compared with the MW progenitor with high SFR. The average mass loading factors of iron and r-process elements in the satellite galaxy model (MW progenitor with low SFR), at \(|z|=2z_{\rm SNe}\), are \(\approx 70\) % (\(\approx 30\) %) and \(\approx 89\)% (\(\approx 22\) %), respectively, indicating significantly less retention of metals in the ISM than in the MW progenitor with high SFR. In essence, higher mass injection rates of r-process material are necessary in a satellite galaxy in order to reach an average [Eu/H] abundance comparable to that of a more massive system.
## 5 Varying the scale heights of r-process injection and its effect on metal mixing
Here we examine the impact of altering the scale height of r-process metal injection sites on metal mixing. The galaxy patch setup does not lend itself well to studying large offsets from the disk (e.g., Rosswog et al., 2003; Zheng & Ramirez-Ruiz, 2007; Kelley et al., 2010; Behroozi et al., 2014; Zevin et al., 2022), as would be expected for systems with long merger timescales (\(t_{\rm delay}\)) and large velocity kicks (\(v_{\rm kick}\)). This setup can, however, be used to probe the impact of modest event offsets, \(\bar{z}\approx 140(t_{\rm delay}/10{\rm Myr})(v_{\rm kick}/20{\rm km\,s^{-1}})\) pc, on the subsequent mixing efficacy, as could be envisaged for r-process production in fast-merging double neutron stars (Ramirez-Ruiz et al., 2015; Safarzadeh et al., 2019) or rare cc-SNe (Winteler et al., 2012; Nishimura et al., 2015; Mosta et al., 2018). These prompt channels are commonly argued to be more effective at enriching very metal poor stars with r-process products (e.g., Siegel et al., 2019; Cowan & Sneden, 2006), which is the central focus of this study.
In this section, we present results for three different simulations. For these runs, we use the same galaxy type. We select the MW progenitor with high SFR in view of the fact that, as we argued in Section 4, r-process products in this model are given the smallest amount of time to migrate throughout the disk. As such, the effects of varying the scale height of metal injection should be particularly obvious for this model. The relative rate and mass per event of r-process events remain unchanged. That is, \(f_{\rm rp}=10^{-3}\) and \(m_{\rm rp}=10^{-2}\,M_{\odot}\). Yet, the maximum allowed height of NSM is varied between the following three scale heights: \(z_{\rm NSM}=z_{\rm SNe}\), \(z_{\rm NSM}=1.33z_{\rm SNe}\), \(z_{\rm NSM}=2z_{\rm SNe}\). To facilitate comparison, the injection procedure for cc-SNe remains the same in all simulations.
### Mass loading of r-process elements
It is natural to expect that the event scale height will influence the rate at which r-process material is lost via galactic winds. We explore this assumption here by examining the consequences of increasing \(z_{\rm NSM}\) on \(\eta_{Z_{\rm rp}}\).
We expect \(\eta_{M}\) and \(\eta_{Z_{\rm Fe}}\) to remain unchanged in these models as cc-SNe, whose injection properties are the same in all models, dominate the energy and momentum injection in the disk. In essence, cc-SNe are primarily responsible for driving both the turbulent mixing and the resulting galactic wind in these models. This is confirmed by our simulations, which show that the average \(\eta_{M}\) and \(\eta_{Z_{\rm Fe}}\) are similar to within a few percent in all simulations.
In contrast, the average mass loading factor of r-process elements varies perceptibly with \(z_{\rm NSM}\) (see Figure 7). More precisely, the values of \(\eta_{Z_{\rm rp}}\) measured at \(|z|=2z_{\rm SNe}\), are \(\approx 8.2\) % (\(z_{\rm NSM}=z_{\rm SNe}\)), \(\approx 12.3\) % (\(z_{\rm NSM}=1.33z_{\rm SNe}\))2 and \(\approx 12.6\) % (\(z_{\rm NSM}=2z_{\rm SNe}\)). As expected, the model where r-process events are confined to a region that is significantly larger than the one for cc-SNe shows the largest \(\eta_{Z_{\rm rp}}\), indicating slightly less effective retention of r-process metals in the ISM.
Footnote 2: There is a small difference between the average reported here and in Section 3.3. The number reported in this section includes only the time span that is common among the three simulations; this shorter time span leads to small differences in the reported figures.
### Abundance evolution
Having found only a small increase in \(\eta_{Z_{\rm rp}}\) with r-process event offsets, we now consider whether the location of events is capable of altering the metal abundance of the ISM. Figure 8
shows the volume-weighted mean and 1\(\sigma\) of [Eu/Fe] as a function of the mean [Fe/H] for the global (left) and cold, dense, disk gas (right) in the three simulations. The definition of cold, dense disk gas remains unchanged from Sections 3.2 and 4.1.
While there is a trend of slightly higher mean [Eu/Fe] abundances with increasing event offsets in the global measurement, we note no discernible difference when considering only the cold, dense gas. This is to be expected as the retention of r-process material in these models is \(\gtrsim 90\%\). Both the mass loading factors and abundances are found to be similar across models with different allowable scale heights of r-process events. Thus, we conclude that the range of offsets examined here does not produce measurable changes in observed galaxy properties.
We caution the reader that global disk galaxy simulations are necessary in order to study the mixing effects of r-process metals deposited by NSMs with large kicks and long merger times. This will require a large range of scales to be resolved simultaneously, which is exceedingly challenging given the complexity of the interplay between the various galaxy assembly mechanisms at all scales. Considering the relatively early stage of modeling in the field, this study amounts, within the stated limitations, to a sizable improvement in our understanding of how the location of metal injection within the galaxy influences the inhomogeneous enrichment of the ISM and the mixing of r-process elements.
## 6 On the repercussions of the production rate of r-process material on metal mixing
The purpose of this paper is to help isolate some of the key mechanisms that regulate turbulent mixing of r-process elements in galactic disks and, in particular, how the statistics of abundance variations in stars can help constrain astrophysical r-process synthesis models. It seems reasonable to expect that most astrophysical r-process production models would have r-process mass per event, \(m_{\rm rp}\), and (relative) rate of events, \(f_{\rm rp}\), as essential parameters. While a given r-process production mechanism may display a correlation or even interdependence between these two parameters, it is instructive to consider these two independently for understanding the mixing properties. Motivated by this, in this section, we examine more closely how the shape of the distribution of [Eu/Fe] abundances in the ISM changes in response to shifts in \(m_{\rm rp}\) and \(f_{\rm rp}\).
First, we consider changes in \(m_{\rm rp}\), keeping all other parameters fixed. Next, we consider changes in \(f_{\rm rp}\) while keeping \(m_{\rm rp}\) constant. Finally, we address how the statistics of variations of metal poor halo stars in chemical space can provide valuable constraints on \(m_{\rm rp}\) and \(f_{\rm rp}\). Throughout this analysis, we focus on the results derived from the MW progenitor with high SFR model and the satellite galaxy model.
### The role of \(m_{\rm rp}\)
The key to using galaxy patch models productively is to isolate the role of key parameters and then to analyse the
Figure 8: The temporal evolution of the volume weighted mean and 1\(\sigma\) of [Eu/Fe] as a function of the global mean [Fe/H] abundance in MW progenitor with high SFR simulations with varying \(z_{\rm NSM}\) (see legend). The [Eu/Fe] abundance is shown in the left panel for the entire simulation box and in the right panel for the cold, dense gas in the disk (defined as for Figures 3 and 5). We note no discernible difference in the mean abundance of the gas, suggesting that the offset of events does not strongly influence the observed abundances of r-process elements within the limits examined here.
Figure 7: The temporal evolution of the r-process metal loading factor, \(\eta_{\rm rp}\), in units of the characteristic relaxation time, \(t_{r}\), for varying \(z_{\rm NSM}\) in the MW progenitor high SFR model. \(\eta_{\rm rp}\) is measured here at \(|z|=2z_{\rm SNe}\). The dark vertical lines denote the times at which r-process events transpire. The r-process retention is slightly altered by changes in \(z_{\rm NSM}\), with more prominent metal outbursts observed when injection takes place at higher scale heights.
simulations so that we can learn the role of these model ingredients in metal mixing. In this analysis we begin by isolating the role of \(m_{\rm rp}\).
Figure 9 shows the stacked one-dimensional distributions of [Eu/Fe] within the cold, dense gas at each average [Fe/H] metallicity. The color bar notes the fractional mass in each bin. The filled-in contours represent the standard enrichment model (i.e., \(f_{\rm rp}=10^{-3}\) and \(m_{\rm rp}=10^{-2}\,M_{\odot}\)), while the line contours denote the model with \(m_{\rm rp}\) increased by a factor of 10 (i.e., \(m_{\rm rp}=10^{-1}\,M_{\odot}\)). The left-hand panel shows our results from the MW progenitor with high SFR model, while the right-hand panel reveals our outcomes from the satellite galaxy model.
In both galaxy models, as expected, the shape of the contours is unchanged by altering \(m_{\rm rp}\). The main difference is an increase in the mean abundance of [Eu/Fe], as the mass of r-process injected over time is augmented in the box. The metal dispersion, on the other hand, stays the same as the relative rate of metal mixing, which is controlled by the cc-SN rate, remains unchanged.
### The role of \(f_{\rm rp}\)
Our findings in Section 6.1 provide supporting evidence that while the average [Eu/Fe] abundance can be altered by changes in \(m_{\rm rp}\), the dispersion of the [Eu/Fe] abundance can only be modified by varying \(f_{\rm rp}\).
It is to this issue that we now turn our attention. Figure 10 shows how the stacked one-dimensional distributions of [Eu/Fe] as a function of the average [Fe/H] metallicity within the cold, dense gas are altered when \(f_{\rm rp}\) is modified. This result should be directly compared with those presented in Figure 9. For both of the models considered in Figure 10, the relative rate of r-process events is changed from \(f_{\rm rp}=10^{-3}\) to \(f_{\rm rp}=10^{-2}\) while keeping all other inputs constant.
Some conspicuous points should be underscored from Figure 10. First, it is important to note that the [Fe/H] abundance is determined solely by the cc-SNe, whose rate remains unchanged in all simulations. Thus, changes in \(f_{\rm rp}\) lead simply to adding more r-process material, thereby increasing the mean [Eu/Fe] abundance but producing very little change in the overall Fe mass. Second, the relative injection rate, \(f_{\rm rp}\), alters both the evolution of the mean [Eu/Fe] abundance and the [Eu/Fe] spread. This is because the total amount of metals injected is significantly increased with time, but at the expense of augmenting the number of injection sites rather than the total mass per event as was done in Section 6.1. As such, the mean separation between injection sites is substantially decreased, which helps the effectiveness of turbulent mixing in smoothing metal inhomogeneities. As expected, this diminishes the [Eu/Fe] abundance spread. Finally, the mean [Eu/Fe] abundance increases more swiftly when \(f_{\rm rp}\) is augmented, causing the model to spend more time at higher relative abundances in the plane depicted in Figure 10. By its very nature, a mass build-up at higher relative abundances is naturally produced in these models.
Motivated by the results presented in Sections 6.1 and 6.2, we direct our efforts to comparing our models to observations under the assumption that the selected cold gas within the disk is the most likely to form stars.
### Constraints from observations of metal poor halo stars
Having considered the influence of \(m_{\rm rp}\) and \(f_{\rm rp}\) on the mean and spread of [Eu/Fe] as a function of the mean [Fe/H] abundance, we dedicate ourselves to comparing the simulated abundance distributions with the observed stellar abundances in the MW halo. We do this in order to provide useful constraints on the operating conditions of r-process production models in the early Universe.
The metal-poor halo stars for this analysis are taken from Roederer et al. (2014). Our selection has a twofold rationale. First, this data set constitutes the largest homogeneously reduced sample of its kind (about 300 metal-poor halo stars). And, second, the authors have systematically and scrupulously documented the uncertainty of their measurements. This allows us to use the star-to-star [Eu/Fe] scatter and mean as a function of [Fe/H] derived by Roederer et al. (2014) as a constraint for r-process production models in the [\(m_{\rm rp}\),\(f_{\rm rp}\)] plane.
In Figure 11, we show the distributions of [Eu/Fe] as a function of the average abundance of [Fe/H] within the cold, dense gas in the disk for both the MW progenitor with high SFR (left panel) and the satellite galaxy model (right panel). The color indicates the fractional mass in a given bin. Also shown are the abundances and measurement uncertainties of the MW halo stars, as reported by Roederer et al. (2014). The parameters of the r-process events have been adjusted to achieve a reasonable agreement with the observations for both models. Shown is a representative model that effectively recovers both the mean and the spread in [Eu/Fe] as a function of [Fe/H] for the sample of MW halo stars that we selected. The representative values used to generate the models in Figure 11 are \(f_{\rm rp}=10^{-2}\) and \(m_{\rm rp}=5\times 10^{-3}\,M_{\odot}\), respectively. Given that the MW progenitor with high SFR model retains the r-process products more effectively, the mean [Eu/Fe] is slightly more elevated compared to the satellite model, albeit within the uncertainties of the data and the range of applicability of the simulations (e.g., results are only shown after steady state is reached).
From the simple model comparison presented in Figure 11, we find clear evidence that the properties of the metal-poor halo stars are consistent with stellar birth sites in a MW progenitor with a high SFR or a satellite galaxy. It is, however, important to note that model comparison is hindered by the fact that the observational sample used in this analysis is
impaired by incompleteness and selection effects, which are most evident at low [Eu/Fe] abundances.
In the preceding sections we outlined how the abundance distributions of [Eu/Fe] are influenced by changes in \(m_{\rm rp}\) and \(f_{\rm rp}\). In what follows, we briefly consider how these results may be extrapolated beyond the parameter space explored in this paper. The results presented in Section 6.1 indicate that the impact of altering \(m_{\rm rp}\) while keeping \(f_{\rm rp}\) unaltered is straightforward. Changing \(m_{\rm rp}\) only leads to a change in normalization, with an increase of mass per event leading to a larger abundance normalization.
Extrapolating \(f_{\rm rp}\) is more complicated due to the intricate influence of this parameter on the shape of the [Eu/Fe] distribution. The results presented in Section 6.2 clearly indicate that a lower relative rate leads to later r-process enrichment. At \(f_{\rm rp}\) = 10\({}^{-3}\) the simulations struggle to explain the high levels of r-process pollution of low metallicity stars. For this reason, lower relative rates are unlikely to produce reasonable fits to the observed values. Higher relative rates, i.e. more frequent r-process events, seem more likely to reproduce the observational data at low metallicities. On the other hand, as we have shown here, higher relative rates also lead to more efficient metal mixing, quickly reducing the width of the abundance distribution at higher metallicity values. This makes high relative rates unable to reproduce the spread observed in [Eu/Fe]. We expand on these arguments in the following section.
## 7 Discussion
Observations of [Eu/Fe] in metal-poor stars suggest relative yields and variations of yields in r-process production events. Such inherent fluctuations are evident when r-process events inject metals at rates that are significantly reduced when compared to cc-SN rates. These fluctuations are smoothed out by turbulent diffusion driven by cc-SNe, which sets the rate of metal mixing. As we have contended in this paper, the degree of fluctuations along with the mean abundances of [Eu/Fe] are sensitive to the relative mass injection rate of r-process and, to a lesser extent, to the type of galaxy environment.
### Production rate of r-process elements
The total r-process mass in the MW is estimated to be \(M_{\rm tot,rp}\approx 2.3\times 10^{4}\)\(M_{\odot}\)(Hotokezaka et al., 2018). Combining this number with an estimate of the age of the MW (\(t_{\rm MW}\approx 1\times 10^{10}\) yr), one can calculate an average production
Figure 9: Stacked 1d distributions of [Eu/Fe] within the cold, dense gas at each average [Fe/H] metallicity. The truncation at higher [Fe/H] abundance is caused by the different time scales for enrichment in the two galaxy patch models. Contours show the fractional mass per bin in this abundance plane and the effects of altering the mass per event while keeping the relative rate (\(f_{\rm rp}\) = 10\({}^{-3}\)) constant. Filled in contours represent the simulations with \(m_{\rm rp}=1\times 10^{-2}\)\(\rm M_{\odot}\), while line contours represent \(m_{\rm rp}=1\times 10^{-1}\)\(\rm M_{\odot}\). Results are shown for two galaxy models: the MW progenitor high SFR model (left panel) and satellite galaxy model (right panel). Increasing the mass per event shifts the abundance distributions to higher mean [Eu/Fe] at constant mean [Fe/H], while the abundance spread, which is driven by turbulent mixing, remains unchanged.
rate of r-process elements in the galaxy as
\[\dot{m}_{\rm rp}\ \approx\frac{M_{\rm tot,rp}}{t_{\rm MW}}\approx 2.3\times 10^{-6} \,{\rm M_{\odot}}/{\rm yr}. \tag{4}\]
Hotokezaka et al. (2018) used this relationship to estimate the mass production rate of r-process events in the MW as
\[R_{\rm MW}\approx 230\,/{\rm Myr}\left(\frac{m_{\rm rp}}{0.01\,{\rm M_{\odot}}} \right)^{-1}. \tag{5}\]
This relationship is represented by the thick green line in Figure 12, along which the production rate of r-process elements is unchanged. The shaded region below the line is limited by the same relationship but for a lower total r-process mass in the MW. This relationship is set by \(\dot{m}_{\rm rp}\approx 1\times 10^{-7}\,{\rm M_{\odot}}/{\rm yr}\), which is thought to give a strict lower limit to \(\dot{m}_{\rm rp}\) based on the minimum amount of r-process mass in stars in the MW (Kasen et al., 2017; Hotokezaka et al., 2018).
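As a quick numerical check of Equations (4) and (5), the short Python sketch below evaluates the average production rate and the implied Galactic event rate; the specific masses per event fed to it are illustrative choices, not fitted values.

```python
# Average r-process production rate in the MW, Eq. (4), and the implied
# Galactic event rate for a given mass per event, Eq. (5).
M_TOT_RP = 2.3e4     # total r-process mass in the MW [M_sun] (Hotokezaka et al. 2018)
T_MW = 1.0e10        # approximate age of the MW [yr]

mdot_rp = M_TOT_RP / T_MW                 # ~2.3e-6 M_sun/yr, Eq. (4)

def event_rate_per_Myr(m_rp):
    """R_MW [events/Myr] needed to sustain mdot_rp if each event ejects m_rp [M_sun], Eq. (5)."""
    return 230.0 * (m_rp / 0.01) ** (-1)

print(f"average production rate ~ {mdot_rp:.1e} M_sun/yr")
for m_rp in (1e-3, 1e-2, 1e-1):           # illustrative masses per event [M_sun]
    print(f"m_rp = {m_rp:.0e} M_sun  ->  R_MW ~ {event_rate_per_Myr(m_rp):.0f} /Myr")
```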
The shaded-pink region in Figure 12 shows the range of values in the [\(m_{\rm rp}\), \(R_{\rm MW}\)] plane derived from observations of GW170817 (Kasen et al., 2017; Kilpatrick et al., 2017; Waxman et al., 2018; Hotokezaka et al., 2018). We also show in this plane the estimates of the rates of short gamma-ray bursts (\(R_{\rm MW}\)), which are believed to be an observational signature of NSM (Eichler et al., 1989; Lee et al., 2005; Lee & Ramirez-Ruiz, 2007) and the mass constraints (\(m_{\rm rp}\)) from observations of afterglows of sGRBs (Roberts et al., 2011; Hotokezaka et al., 2013; Tanvir et al., 2013; Berger et al., 2013; Yang et al., 2015; Jin et al., 2016; Kasen et al., 2015; Kasliwal et al., 2017; Kilpatrick et al., 2017; Murguia-Berthier et al., 2017; Ascenzi et al., 2019), referred to in Figure 12 as macronova candidates. Finally, the vertical black line marks the lower limit on r-process mass per event as deduced by Macias & Ramirez-Ruiz (2018) based on observations of metal poor halo stars in the MW.
In the left panel of Figure 12 we call attention to the results from our turbulent mixing study, which explores how the abundance distribution of [Eu/Fe] is altered by \(m_{\rm rp}\) and \(f_{\rm rp}\) (i.e., \(R_{\rm MW}\)). As highlighted by the horizontal arrow, changing \(m_{\rm rp}\) while keeping \(f_{\rm rp}\) constant shifts the mean abundance at a fixed [Fe/H] abundance but does not alter the scatter. This implies that the observed spread of abundances cannot be used to constrain \(m_{\rm rp}\). In contrast, as underscored by the vertical arrow, \(f_{\rm rp}\) changes both the spread and the mean of the [Eu/Fe] abundance distributions at a fixed [Fe/H] abundance.
The results of the galaxy patch simulations of the MW progenitor with high SFR model are shown in the right panel of Figure 12. To accurately position our simulations in the [\(m_{\rm rp}\), \(R_{\rm MW}\)] plane, we follow Beniamini et al. (2016) who estimates
Figure 10: Stacked 1d distributions of [Eu/Fe] within the cold, dense gas at each average [Fe/H] metallicity. The truncation at higher [Fe/H] abundance is caused by the different time scales for enrichment in the two galaxy patches, as well as slightly different evolution times between different simulations of the same galaxy patch. Results are shown for the same two galaxy models as in Figure 9. Color indicates the fractional mass per bin in this abundance plane. Filled-in contours represent the simulations with \(f_{\rm rp}=10^{-3}\), while line contours represent \(f_{\rm rp}=10^{-2}\); in both cases \(m_{\rm rp}=1\times 10^{-2}\,{\rm M_{\odot}}\). Changing the relative rate of events from \(f_{\rm rp}=10^{-3}\) to \(f_{\rm rp}=10^{-2}\) reduces the spread of abundances and leads to more cold, dense gas at higher [Eu/Fe] abundances.
Figure 11: Stacked 1d distributions of [Eu/Fe] in the cold, dense gas at each average [Fe/H] metallicity in the MW progenitor with high SFR model (left panel) and satellite galaxy model (right panel). The color indicates the fractional mass in each bin; the white solid (dashed) lines indicate the 1\(\sigma\) (2\(\sigma\)) spread of the distribution. In both models, \(f_{\rm rp}=10^{-2}\) and \(m_{\rm rp}=5\times 10^{-3}\,M_{\odot}\). We consider the cold, dense gas within the disk as the most likely to form stars. The black symbols indicate the abundances of metal poor halo stars with uncertainties as reported by Roederer et al. (2014). The mean [Eu/Fe] abundance in the high SFR model is slightly elevated when compared to the satellite galaxy model, as expected from the ability of this model to more effectively retain r-process material. The truncation at higher [Fe/H] abundance is caused by the different time scales for enrichment in the two galaxy patch models.
Figure 12: The constraints derived for r-process production events in the [\(m_{\rm rp},R_{\rm MW}\)] plane, which has been adapted from Hotokezaka et al. (2018). The left panel includes a compilation of the production rate of r-process element constraints in the MW: the constraints from GW170817, limits on rates from short gamma-ray burst observations, the mass limits from extremely metal-poor stars (Macias & Ramirez-Ruiz 2018), and the mass limits from macronova candidates. The derived relationships between the mean and spread of abundances when the rate or mass per event are altered are highlighted in the left panel. On the right-hand panel we show the constraints derived from turbulent mixing simulations. The red symbols indicate simulations that do not successfully explain the data while the blue symbols are for simulations that give a reasonable description of the [Eu/Fe] abundances (see also Appendix C). The lavender shaded region highlights the parameter space for which simulations provide a reasonable description of the data. The upper limit for the mass production rate (purple line) arises naturally from the highest [Eu/Fe] abundance measurement (see text). Models above this region are stringently ruled out given that there are very few selection effects against uncovering the highest [Eu/Fe] metal poor stars.
an average rate of cc-SNe in the MW to be
\[\tilde{n}_{\rm SNe,MW}\approx 6.4\times 10^{4}\,{\rm Myr}^{-1}. \tag{6}\]
This allows us to chart the relative rate of r-process to cc-SN events from the local boxes to standard MW values by assuming constant relative rates across the galaxy. Then it follows that
\[R_{\rm MW,rp}=f_{\rm rp}\tilde{n}_{\rm SNe,MW}. \tag{7}\]
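A minimal sketch of this conversion, assuming (as above) that the relative rate measured in the local boxes holds across the whole disk; the two values of \(f_{\rm rp}\) are the ones explored in Section 6.

```python
# Map the relative r-process rate f_rp of the galaxy-patch simulations to a
# Galactic event rate via Eqs. (6)-(7).
N_SNE_MW = 6.4e4                          # mean cc-SN rate in the MW [1/Myr], Eq. (6)

def galactic_rp_rate(f_rp):
    """R_MW,rp [events/Myr] for a given r-process-to-cc-SN rate ratio, Eq. (7)."""
    return f_rp * N_SNE_MW

for f_rp in (1e-3, 1e-2):
    print(f"f_rp = {f_rp:.0e}  ->  R_MW,rp ~ {galactic_rp_rate(f_rp):.0f} /Myr")
```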
From Figures 9-11 we know that the SFR does not strongly impact the shape of the distribution. As such, we expect that conclusions based on assuming a constant SFR everywhere in the disk will not be significantly altered if the SFR changes across the disk.
We divide the simulations broadly into two distinct groups by how well they describe the observed abundances of MW halo stars (Roederer et al., 2014). We consider both the mean and the spread of [Eu/Fe] abundance in bins of [Fe/H], as well as, the range of [Fe/H] abundance over which there is rough agreement when making these comparisons. In Appendix C we present the results of the simulations with varying \(m_{\rm rp}\) and \(f_{\rm rp}\) and how they compare to observations. The blue symbols indicate reasonable agreement between observations and simulations, while the red symbols indicate little to no overlap between the abundances predicted by the simulations and those observed in the comparison sample.
Obviously, the comparison presented in Figure 12 is only cursory and should be taken as an order of magnitude estimate at present. Having said this, the results of our metal mixing study allow us to clearly define a model confidence region within the [\(m_{\rm rp}\), \(R_{\rm MW}\)] plane, which is specified by the lavender shaded region in the left panel. The lavender region is bounded by a strict upper limit (purple line). Such a limit follows from the fact that simulations above this boundary will naturally produce stars with a mean abundance of [Eu/Fe] = 1.4, which is higher than the abundance of the most r-process enhanced star in the Roederer et al. (2014) sample: [Eu/Fe] = 1.37. The slope of the line is derived from the mass ratios of Eu and Fe production leading to [Eu/Fe] = 1.4. This value can be written as a production rate with \(\dot{m}_{\rm rp,max}\approx 4.7\times 10^{-4}\,{\rm M}_{\odot}/{\rm yr}\). The lavender region presented in the left panel of Figure 12 is in remarkably good agreement with previous independent constraints derived both theoretically and empirically and provides a significant reduction in the permissible parameter space.
### Co-production of light and heavy r-process elements?
Several astrophysical processes might be required to explain the solar r-process abundance pattern of both the lighter neutron capture elements (between the first and second r-process peaks at \(A\approx 80\) and \(A\approx 130\), respectively) and heavier nuclei such as Eu (Cowan et al., 2021). A fascinating characteristic that emerges when comparing the relative abundances of metal poor halo stars is the robustness of the pattern for elements with \(A\geqslant 137\) (Sneden et al., 2008). In this study we have chosen to model the r-process mass contribution as that comprised of all elements with \(A\geq 69\). If, for example, we change the minimum mass number from \(A_{\rm min}=69\) to \(A_{\rm min}=90\), the average abundances of [Eu/Fe] shift by \(+0.68\,\)dex with no other changes to the results presented in Figure 12. Appendix C includes the simulated distributions for \(m_{\rm rp}\) calculated with and without this shift (i.e., with \(A_{\rm min}=69\) and \(A_{\rm min}=90\)). This simply implies that constraints on \(\dot{m}_{\rm rp}\) are sensitive to whether r-process sites, which are assumed in this paper to be standard, are responsible for producing all r-process elements or only the heavier ones. If Eu is produced by sources that only produce heavier r-process elements (\(A\geq 90\)), this implies that the derived constraints on \(\dot{m}_{\rm rp}\) should be lower by \(0.68\,\)dex when compared to those illustrated in Figure 12.
### On the origin of the metal poor halo stars
The MW halo is expected to have been assembled from stars originally residing in disrupted satellites (e.g. Naidu et al., 2021; Santistevan et al., 2021). The presence of r-process enhanced stars (Beers & Christlieb, 2005; Holmbeck et al., 2020) in several dwarf galaxy systems and the orbital properties of r-process enhanced stars in the halo (Roederer et al., 2018) indicate that such stars were likely accreted from disrupted satellites. From a theoretical perspective, Hirai et al. (2022) recently performed hydrodynamic zoom-in cosmological simulations, focusing on MW-like galaxies. They found that the vast majority (90%) of r-process enhanced stars ([Eu/Fe] \(>0.7\)) are formed early in the evolution of a galaxy and concluded that the majority of these r-process enhanced stars were accreted from disrupted satellite galaxies.
Our results support the hypothesis that both MW-like and dwarf-like galaxy systems are capable of producing stars with r-process abundances similar to those observed in the MW halo. It is, however, important to note that retention of r-process elements in satellite galaxies is found to be smaller than in MW-like progenitors (Section 4). This implies that viable mass production rates in satellite galaxies should be larger (by a factor of a few) than those presented in Figure 12 for a MW-like progenitor. Retention of r-process elements might also be significantly reduced for r-process events with large offsets from the star forming disk, such as expected for NSM with long delay times and large kick velocities. With that said, systems with large displacements are thought to be much less effective at polluting low metallicity gas with r-process material in the early Universe (e.g., Siegel et al., 2019).
In this paper we study the patchy enrichment of the ISM using numerical simulations at kpc scales that are able to resolve the mixing of metals by cc-SNe-driven turbulence. By investigating the statistics of variations of cc-SN and r-process products in these simulations, we are able to derive constraints on the allowed range of the production rate of r-process elements in the MW. By systematically varying the model parameters, we were able to identify some of the physical processes that we believe are most relevant to explain the mean and dispersion of [Eu/Fe] abundances in metal-poor stars. Our salient findings are:
* cc-SNe inject freshly synthesized metals and drive turbulent mixing which causes metals to diffuse across the disk (Figure 1). By virtue of their much more frequent injection, cc-SNe products (for example, Fe) are more evenly distributed than r-process elements (for example, Eu) synthesized in rarer events. This difference naturally produces an ISM with r-process elements that are nonuniform and highly undiluted at early epochs (Figure 2). These congenital variations are most notable when r-process sources inject metals at rates that are considerably decreased compared to cc-SN rates (Sections 3 and 4).
* The momentum and energy that go into driving turbulence in the disk also launch galactic winds and fling metals out of the disk (Figure 4). The rarefied galactic wind is highly enriched in both Fe and Eu compared to the star forming gas in the disk (Figure 2). This result implies that a considerable mass of r-process elements might reside in the hot intergalactic medium (Sections 3 and 4).
* The metal mass loading factors of cc-SN and r-process products are not exactly the same, suggesting that r-process metals are launched by winds before they are able to be efficiently mixed with the ISM (Figure 2). However, models with fewer cc-SNe give r-process elements more time to diffuse and mix in the disk and, as a result, show winds with less prominent Eu-enriched outbursts (Sections 3 and 4).
* Across different galaxy models, the metal mass loading factors of iron and r-process elements are rather similar. However, the magnitude of the loading factors depends on the specific galaxy potential, with larger mass loading factors found in less massive galaxies (Figure 6). This implies that higher mass injection rates of r-process material are required in a satellite galaxy in order to achieve an average [Eu/H] abundance comparable with a MW-like progenitor galaxy (Section 4).
* The r-process metal mass loading factor shows a measurable increase with increasing allowed height of r-process events (Figure 7). Despite this, the [Eu/Fe] abundances, especially in the cold, dense gas phase, are found to be very similar across models (Figure 8). Thus, we conclude that no measurable changes in observed galaxy properties are expected for the range of offsets we are able to probe in these simulations (Section 5).
* The degree of fluctuations in and the mean of [Eu/Fe] abundances in the cold, dense gas are found to be highly responsive to the mass injection rate of r-process (Section 6) and, to a lesser extent, to the type of galaxy (Section 4). Concretely, increasing the r-process mass per event increases the mean [Eu/Fe] abundance (Figure 9), while increasing the rate of r-process events relative to cc-SNe increases the mean [Eu/Fe] abundance and reduces the [Eu/Fe] spread (Figure 10).
* Observations of [Eu/Fe] in metal-poor stars are used to derive constraints on the mass per event and the event rate of r-process sources (Section 7). We find that a production rate of \(6\times 10^{-7}M_{\odot}/{\rm yr}\lesssim\dot{m}_{\rm rp}\ll 4.7\times 10^{-4}M_{\odot}/{\rm yr}\) best explains the data. The constraints presented are in notable agreement with other independently derived constraints and produce a marked reduction in the permitted parameter range (Figure 12).
* Our findings give credence to the idea that stars with r-process abundances similar to those observed in the MW halo can be manufactured by both MW-like and dwarf-like galaxy progenitors (Figure 11). We note, however, that r-process mass retention in satellite galaxies is found to be smaller than in MW-like progenitors and, as such, the viable mass production rates should be correspondingly higher (Section 6).
We thank the referee for helpful comments which improved the quality of this paper. We thank C. Sakari, K. Ruiz-Rocha, A. Ji, D. Kasen, N. Imara and L. Kenoly for insightful discussions. This work made use of an HPC facility funded by a grant from VILLUM FONDEN (project number 16599). A.N.K. and E.R.-R. acknowledge support by the Heising-Simons Foundation, the Danish National Research Foundation (DNRF132) and NSF (AST-2206243, AST-1911206 and AST-1852393). PM gratefully acknowledges support from NASA grant 14-WPS14-0048. M.S.F gratefully acknowledges support provided by NASA through Hubble Fellowship grant HST-HF2-51493.001-A awarded by the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, In., for NASA, under the contract NAS 5-26555. |
2310.03814 | Optimal Control of District Cooling Energy Plant with Reinforcement
Learning and MPC | We consider the problem of optimal control of district cooling energy plants
(DCEPs) consisting of multiple chillers, a cooling tower, and a thermal energy
storage (TES), in the presence of time-varying electricity price. A
straightforward application of model predictive control (MPC) requires solving
a challenging mixed-integer nonlinear program (MINLP) because of the on/off of
chillers and the complexity of the DCEP model. Reinforcement learning (RL) is
an attractive alternative since its real-time control computation is much
simpler. But designing an RL controller is challenging due to myriad design
choices and computationally intensive training.
In this paper, we propose an RL controller and an MPC controller for
minimizing the electricity cost of a DCEP, and compare them via simulations.
The two controllers are designed to be comparable in terms of objective and
information requirements. The RL controller uses a novel Q-learning algorithm
that is based on least-squares policy iteration. We describe the design choices
for the RL controller, including the choice of state space and basis functions,
that are found to be effective. The proposed MPC controller does not need a
mixed integer solver for implementation, but only a nonlinear program (NLP)
solver. A rule-based baseline controller is also proposed to aid in comparison.
Simulation results show that the proposed RL and MPC controllers achieve
similar savings over the baseline controller, about 17%. | Zhong Guo, Aditya Chaudhari, Austin R. Coffman, Prabir Barooah | 2023-10-05T18:07:11Z | http://arxiv.org/abs/2310.03814v1 | # Optimal Control of District Cooling Energy Plant
###### Abstract
We consider the problem of optimal control of district cooling energy plants (DCEPs) consisting of multiple chillers, a cooling tower, and a thermal energy storage (TES), in the presence of time-varying electricity price. A straightforward application of model predictive control (MPC) requires solving a challenging mixed-integer nonlinear program (MINLP) because of the on/off of chillers and the complexity of the DCEP model. Reinforcement learning (RL) is an attractive alternative since its real-time control computation is much simpler. But designing an RL controller is challenging due to myriad design choices and computationally intensive training.
In this paper, we propose an RL controller and an MPC controller for minimizing the electricity cost of a DCEP, and compare them via simulations. The two controllers are designed to be comparable in terms of objective and information requirements. The RL controller uses a novel Q-learning algorithm that is based on least-squares policy iteration. We describe the design choices for the RL controller, including the choice of state space and basis functions, that are found to be effective. The proposed MPC controller does not need a mixed integer solver for implementation, but only a nonlinear program (NLP) solver. A rule-based baseline controller is also proposed to aid in comparison. Simulation results show that the proposed RL and MPC controllers achieve similar savings over the baseline controller, about 17%.
## 1 Introduction
In the U.S., 75% of the electricity is consumed by buildings, and a large part of that is due to heating, ventilation, and air conditioning (HVAC) systems [1]. In university campuses and large hotels, a large portion of the HVAC's share of electricity is consumed by District Cooling Energy Plants (DCEPs), especially in hot and humid climates. A DCEP produces and supplies chilled water to a group of buildings it serves (hence the moniker "district"), and the air handling units in those buildings use the chilled water to cool and de-humidify air before supplying it to building interiors. Figure 1 shows a schematic of such a plant, which consists of multiple chillers that produce chilled water, a cooling tower that rejects the heat extracted from chillers to the environment, and a thermal energy storage system (TES) for storing chilled water. Chillers - the most electricity intensive equipment in the DCEP - can produce more chilled water than buildings' needs when electricity price is low. The extra chilled water is then stored in the TES, and used during periods of high electricity price to reduce the total electricity cost. The District Cooling Energy Plants are also called central plants or chiller plants.
DCEPs are traditionally operated with rule-based control algorithms that use heuristics to reduce electricity cost while meeting the load, such as "chiller priority", "storage priority", and additional control sequencing for the cooling tower operation [2, 3, 4, 5, 6, 7, 8]. But making the best use of the chillers and the TES to keep the electricity cost at the minimum requires non-trivial decision making due to the discrete nature of some control commands, such as chiller on/off actuation, and highly nonlinear dynamics of the equipment in DCEPs. A growing body of work has proposed algorithms for optimal real-time control of DCEPs. Both Model Predictive Control (MPC) [9, 10, 11, 12, 13, 14, 15, 16, 17] and Reinforcement Learning (RL) [18, 19, 20, 21, 22, 23, 24, 25, 26] have been studied.
For MPC, a direct implementation requires solving a high-dimensional mixed-integer nonlinear program (MINLP) that is quite challenging to solve. Various substitutive approaches are thus used, which can be categorized into two groups: NLP approximations [9, 10, 11, 12] and MILP approximations [13, 14, 15, 16, 17]. NLP approximations generally leave the discrete commands to some predetermined control logic and only deal with continuous control commands, which may limit the potential of their savings. MILP approximations mostly adopt a linear DCEP model so that the problem is tractable, though solving large MILPs is also challenging.
An alternative to MPC is Reinforcement Learning (RL): an umbrella term for a set of tools used to approximate an optimal policy using data collected from a physical system, or more frequently, its simulation. Despite a burdensome design and learning phase, real-time control is simpler since control computation is an evaluation of a state-feedback policy. However, designing an RL controller for a DCEP is quite challenging. The performance of an RL controller depends on many design choices and training an RL controller is computationally onerous.
In this paper we propose an RL controller and an MPC controller for a DCEP, and compare their performance with that of a rule-based baseline (BL) controller through simulations. All three controllers are designed to minimize total energy cost while meeting the required cooling load. The main source of flexibility is the TES, which allows a well-designed controller to charge the TES in periods of low electricity price. The proposed RL controller is based on a new learning algorithm that is inspired by the "convex Q-learning" proposed in recent work [27] and the classical least squares policy iteration (LSPI) algorithm [28]. Basis functions are carefully designed to reduce the computational burden in training the RL controller. The proposed MPC controller solves a two-fold non-linear program (NLP) that is transformed from the original MINLP via heuristics. Hence the MPC controller is a "stand-in" for a true optimal controller and provides a sub-optimal solution to the original MINLP. The baseline controller that is used for comparison is designed to utilize the TES and time-varying electricity prices (to the extent possible with heuristics) to reduce energy costs. The RL controller and baseline controller have the same information about electricity price: the current price and a backward moving average.
The objective behind this work is to compare the performance of the two complementary approaches, MPC and RL, for the optimal control of all the principal actuators in a DCEP. The two controllers are designed to be comparable, in terms of objective and information requirements. We are not aware of many works that have performed such a comparison; the only exceptions are [25, 26], but the decision making is limited to a TES or temperature setpoints. Since both RL and MPC approaches have merits and weaknesses, designing a controller with one approach and showing it performs well leaves open the question: would the other have performed better? This paper takes a first step in addressing such questions. To aid in this comparison, both the controllers are designed to be approximations of the same intractable infinite horizon optimal control problem. Due to the large difference in the respective approaches (MPC and RL), it is not possible to ensure exact parallels for an "apples-to-apples" comparison. But the design problems for RL and MPC controllers have been formulated to be as similar as possible.
Simulation results show that both controllers, RL and MPC, lead to significant and similar cost savings (16-18%) over a rule-based baseline controller. These values are comparable to those of MPC controllers with mixed-integer formulation reported in the literature, which vary from 10% to 17% [13, 14, 15, 16, 17]. The cooling load tracking performance is similar between them. The real-time computation burden of the RL controller is trivial compared to that of the MPC controller, but the RL controller leads to more frequent chiller switches (from off to on and vice versa). However, the MPC controller enjoys the advantage of error-free forecasts in the simulations, something the RL controller does not.
The rest of the manuscript is organized as follows. The contribution of the paper over the related literature is discussed in detail in Section 1.1. Section 2 describes the District Cooling Energy Plant and its simulation model as well as the control problem. Section 3 describes the proposed RL controller, Section 4 the proposed MPC controller, and Section 5 describes the baseline controller. Section 6 provides simulation evaluation of the controllers. Section 7 provides an "under-the-hood" view of the design choices for the RL controller. Section 8 concludes the paper.
### Literature Review and Contributions
#### 1.1.1 Prior work on RL for DCEP
There is a large and growing body of work in this area, e.g. [18, 19, 20, 21, 22, 23, 24, 25, 26]. Most of these papers limit the problem to controlling part of a DCEP. For instance, the DCEPs considered in [18, 19, 20, 21, 23] do not have a TES. Refs. [18, 19, 20, 21, 22] optimize only the chilled water loop but not the cooling water loop (at the cooling tower), while [24] only optimize the cooling water loop. The reported energy savings are in the 10-20% range over rule-based baseline controllers; e.g. 15.7% in [23], 11.5% in [18] and around 17% in [21].
The ref. [25] considers a complete DCEP, but the control command computed by the RL agent is limited to TES charging and discharging. It is not clear what control law is used to decide chiller commands and cooling water loop setpoints. The work [26] also considers a complete DCEP, with two chillers, a TES, and a large building with an air handling unit. The RL controller is tasked with commanding only the zone temperature setpoint and TES charging/discharging flowrate whilst the control of the chillers or the cooling tower is not considered. Besides, trajectories of external inputs, e.g., outside air temperature and electricity price, are the same for all training days in [26]. Another similarity of [25, 26] with this paper is that these references compare the performance of RL with that of a model-based predictive control.
#### 1.1.2 Prior work on MPC for DCEP
The works that are closest to us in terms of problem setting are [13, 14, 15], which all reported MILP relaxation-based MPC schemes to optimally operate a DCEP with TES in the presence of time-varying electricity prices. The paper [13] reports an energy cost savings with MPC of about 10% over a baseline strategy that uses a common heuristic (charge TES all night) with some decisions made by optimization. In [14], around 15% savings over the currently installed rule-based controller is achieved in a real DCEP. The study [15] reported a cost savings of 17% over "without load shifting" with the help of the TES in a week-long simulation. The paper [16] also proposes an MILP relaxation based MPC scheme for controlling a DCEP and approximately 10% savings in electricity cost over a baseline controller over a one-day long simulation is reported. But the DCEP model in [16] ignores the effect of weather condition on plant efficiency, and the baseline controller is not purely rule-based; it makes TES and chiller decisions based on a greedy search. The recent paper [17] deserves special mention since it reports an experimental demonstration of MPC applied to a large DCEP; the control objective being manipulation of demand to help with renewable integration and decarbonization. It too uses an MILP relaxation. The decision variables include plant mode (combination of chillers on) and TES operation, but cooling water loop decisions are left to legacy rule-based controllers.
There is another body of work applying MPC to the control of a DCEP, such as [9, 10, 11, 12]. But they either ignore the on/off nature of the chiller control [9, 10] or reformulate the problem using some heuristics [11, 12] so that the underlying optimization problem is naturally an NLP.
### Contribution over Prior Art
#### 1.2.1 Contribution over "RL for DCEP" literature:
Unlike most prior works on RL for DCEPs that only deal with a part of DCEP [18, 19, 20, 21, 22, 23, 24], the control commands in this work consist of all the available commands (five in total) of both the water loops in a full DCEP. To the best of our knowledge no prior work has used RL to command both the water loops and a TES. Second, unlike some of the closely related work such as [26], we treat external inputs such as weather and electricity price as RL states, making the proposed RL controller applicable for any time-varying disturbances that can be measured in real time. Otherwise the controller is likely to work well only for disturbances seen during training. Third, the proposed RL controller commands the on/off status of chillers directly rather than the chilled/cooling water temperature setpoints [19, 21, 23] or zone temperature setpoints [26], which eliminates the need for another control system to translate those setpoints to chiller commands. Fourth, all the works cited above rely on discretizing the state and/or action spaces in order to use the classical tabular learning algorithms with the exception of [22]. The size of the table will become prohibitively large if the number of states and control commands become large and a fine resolution discretization is used. Training a such controller and using it in real time, which will require searching over this table, will become computationally challenging. That is perhaps why only a small number of inputs are chosen as control commands in prior work even though several more setpoints can be manipulated in a real DCEP. Although [22] considers continuous states, its proposed method only controls part of a DCEP with simplified linear plant model, which may significantly limit its potential of cost savings in reality. In contrast, the RL controller proposed in this paper is for a DCEP model consisting of highly nonlinear equations and the states and actions are kept as continuous except for the
Figure 1: Layout of District Cooling Energy Plant.
one command that is naturally discrete (number of chillers that are on).
While there is an extensive literature on learning algorithms and on designing RL controllers, design of an RL controller for practically relevant applications with non-trivial dynamics is quite challenging. RL's performance depends on myriad design choices, not only on the stage cost/reward, function approximation architecture and basis functions, learning algorithm and method of exploration, but also on the choice of the state space itself. A second challenge is that training an RL controller is computationally intensive and brute-force training is beyond the computational means of most researchers. For instance, the hardware cost for a single AlphaGo Zero system in 2017 by DeepMind has been quoted to be around $25 million [29]. Careful selection of the design choices mentioned above is thus required, which leads to the third challenge: if a particular set of design choices leads to a policy that does not perform well, there is no principled method to look for improvement. Although RL is being extensively studied in the control community, most works demonstrate their algorithms on plants with simple dynamics with a small number of states and inputs; e.g. [30, 31]. The model for a DCEP used in this paper, arguably still simple compared to available simulation models (e.g. [32]), is quite complex: it has 8 states, 5 control inputs, 3 disturbance inputs, and requires solving an optimization problem to compute the next state given the current state, control and disturbance.
#### 1.2.2 Contribution over "MPC for DCEP" literature:
The MPC controller proposed here uses a combination of relaxation and heuristics to avoid the MINLP formulation. In contrast to [13, 14, 15, 16, 17], the MPC controller does not use an MILP relaxation. The controller does compute discrete decisions (number of chillers to be on, TES charge/discharge) directly, but it does so by using NLP solvers in conjunction with heuristics. The cost savings obtained are similar to those reported in earlier works that use MILP relaxation. Compared to other NLP formulations [9, 10, 11, 12], our MPC controller determines the on/off actuation of equipment and TES charging/discharging operation directly.
Closed loop simulations are provided for all three controllers, RL, MPC, and baseline, to assess the trade-offs among these controllers, and especially between the model-based MPC controller and the "model-free" RL controller.
#### 1.2.3 Contribution over a preliminary version:
The RL controller described here was presented in a preliminary version of this paper [33]. There are three improvements. Firstly, an MPC controller, which is not presented in [33], was designed, evaluated, and compared with our RL controller. Therefore, the optimality of our control with RL is better assessed. Another difference is that the baseline controller described here is improved over that in [33] so that frequent on/off switching of chillers is reduced. Lastly, a much more thorough discussion of the RL controller design choices and their observed impacts is included here than in [33]. Given the main challenge with designing RL controllers for complex physical systems discussed above, namely, "what knob to tweak when it doesn't work?", we believe this information will be valuable to other researchers.
## 2 System description and control problem
The DCEP contains a TES, multiple chillers and chilled water pumps, a cooling tower and cooling water pumps, and finally a collection of buildings that uses the chilled water to provide air conditioning; see Figure 2. The heat load from the buildings is absorbed by the cold chilled water supplied by the DCEP, and thus the return chilled water temperature is warmer. This part of the water is called _load water_, and the related variables are denoted by superscript lw for "load water". The _chiller loop_ (subscript ch) removes this heat and transmits it to the _cooling water loop_ (subscript cw). The cooling water loop absorbs this heat and sends it to the cooling tower, where this heat is then rejected to the ambient. The cooling tower cools down the cooling water returned from the chillers by passing both the sprayed cooling water and ambient air through a fill. During this process, a small amount of water spray will evaporate into the air, removing heat from the cooling water. The cooling water loss due to its evaporation is replenished by fresh water, thus we assume the supply water flow rate equals the return water flow rate at the cooling tower. A fan or a set of fans is used to maintain the ambient airflow at the cooling tower. Connected to the chilled water loop is a TES tank that stores water (subscript tw). The total volume of the water in the TES tank is constant, but a thermocline separates two volumes: cold water that is supplied by the chiller (subscript twc for "tank water, cold") and warm water returned from the load (subscript tww for "tank water, warm").
### DCEP dynamics
Time is discretized with sampling period \(t_{s}\) with a counter \(k=0,1,...\) denoting the time step. With the consideration of hardware limits and ease of implementation, the control commands are chosen as follows:
1. \(\dot{m}_{k}^{\mathrm{lw}}\), the chilled water flowrate going through the cooling coil, to ensure the required cooling load is met.
2. \(\dot{m}^{\mathrm{tw}}\), the charging/discharging flowrate of the TES, to take advantage of load shifting.
3. \(n^{\mathrm{ch}}\), the number of active chillers to ensure the amount of chilled water required is met and the coldness of the chilled water is maintained.
4. \(\dot{m}^{\mathrm{cw}}\), the flowrate of cooling water going through the condenser of chillers to absorb the heat from the chilled water loop.
5. \(\dot{m}^{\mathrm{oa}}\), the flowrate of ambient air that cools down the cooling water to maintain its temperature within the desired range.
Therefore, the control command \(u_{k}\) is:
\[u_{k}:=[\dot{m}_{k}^{\mathrm{lw}},\dot{m}_{k}^{\mathrm{tw}},n_{k}^{\mathrm{ch}},\dot{m}_{k}^{\mathrm{cw}},\dot{m}_{k}^{\mathrm{oa}}]^{T}\in\mathrm{U}. \tag{1}\]
Each of these variables can be independently chosen as set-points since lower level PI-control loops maintain them. There are limits to these setpoints, which determine the admissible input set \(\mathsf{U}\subset\{0,\ldots,n^{\text{ch}}_{\text{max}}\}\times\mathbb{R}^{4}\):
\[\mathsf{U}\overset{\Delta}{=}\{0,\ldots,n^{\text{ch}}_{\text{max}}\}\times[\dot{m}^{\text{lw}}_{\text{min}},\dot{m}^{\text{lw}}_{\text{max}}]\times[\dot{m}^{\text{tw}}_{\text{min}},\dot{m}^{\text{tw}}_{\text{max}}]\times[\dot{m}^{\text{cw}}_{\text{min}},\dot{m}^{\text{cw}}_{\text{max}}]\times[\dot{m}^{\text{oa}}_{\text{min}},\dot{m}^{\text{oa}}_{\text{max}}]\subset\mathbb{R}^{5}. \tag{2}\]
Since the TES can be charged and discharged, we declare \(\dot{m}^{\text{tw}}>0\) for charging and \(\dot{m}^{\text{tw}}<0\) for discharging as a convention.
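For concreteness, the command vector (1) and the box constraints (2) can be represented as in the sketch below; the numerical bounds are placeholders rather than the limits used in our simulations.

```python
from dataclasses import dataclass

@dataclass
class ControlCommand:
    """One control command u_k of the DCEP, Eq. (1)."""
    mdot_lw: float   # chilled (load) water flowrate through the cooling coil [kg/s]
    mdot_tw: float   # TES flowrate [kg/s]; > 0 charging, < 0 discharging
    n_ch: int        # number of active chillers
    mdot_cw: float   # cooling water flowrate [kg/s]
    mdot_oa: float   # cooling tower air flowrate [kg/s]

# Placeholder actuator limits defining the admissible set U of Eq. (2).
N_CH_MAX = 3
BOUNDS = {"mdot_lw": (0.0, 80.0), "mdot_tw": (-40.0, 40.0),
          "mdot_cw": (0.0, 120.0), "mdot_oa": (0.0, 150.0)}

def is_admissible(u: ControlCommand) -> bool:
    """Check u against the box constraints of Eq. (2)."""
    if not 0 <= u.n_ch <= N_CH_MAX:
        return False
    return all(lo <= getattr(u, name) <= hi for name, (lo, hi) in BOUNDS.items())
```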
The state of the DCEP \(x^{p}\) is:
\[x^{p}_{k}\overset{\Delta}{=}[T^{\text{lw,r}}_{k},S^{\text{tww}}_{k},S^{\text{twc}}_{k},T^{\text{twc}}_{k},T^{\text{tww}}_{k},T^{\text{chw,s}}_{k},T^{\text{cw,r}}_{k},T^{\text{cw,s}}_{k}]^{T}, \tag{3}\]
where \(S^{\text{tww}},S^{\text{twc}}\) are the fractions of the warm water and cold water in the TES tank, \(S^{\text{tww}}+S^{\text{twc}}=1\). The other state variables are temperatures at various locations - supply (subscript "\(,s\)") and return (subscript "\(,r\)") - in each water loop: load water, cooling water, tank water, and chiller; see Figure 2 for details. All the plant state variables \(x^{p}\) can be measured with sensors. The superscript "\(p\)" of \(x\) emphasizes that \(x^{p}\) is the state of the "plant", not the state in the reinforcement learning method that will be introduced in Section 3.3.
The plant state \(x^{p}\) is affected by exogenous disturbances \(w^{p}_{k}:=[T^{\text{oawb}}_{k},\dot{q}^{\text{L,ref}}_{k}]^{T}\in\mathbb{R}^{2}\), where \(\dot{q}^{\text{L,ref}}_{k}\) is the required cooling load, the rate at which heat needs to be removed from buildings, and \(T^{\text{oawb}}_{k}\) is the ambient wet-bulb temperature. The disturbance \(w^{p}_{k}\) cannot be ignored, e.g., the ambient wet-bulb temperature plays a critical role in cooling tower dynamics.
The control command and disturbances affect the state through a highly nonlinear dynamic model:
\[x^{p}_{k+1}=f(x^{p}_{k},u_{k},w^{p}_{k}), \tag{4}\]
that is described in detail in the Appendix. The dynamics (4) are implicit: there is no explicit function \(f(\cdot)\) that can be evaluated to obtain \(x_{k+1}\). The reason is that all the heat exchangers (in each chiller, in the cooling tower, and in the cooling coils in the buildings) have a limited capacity. Depending on the cooling load and the outdoor air wet-bulb temperature, one of the heat exchangers might saturate; it will then only deliver heat exchange up to its capacity, which is less than what the load demands. Which heat exchanger saturates first depends on the current state, disturbance, and control in a complex manner. Hence, some form of iterative computation is required to simulate the dynamics, e.g., the method developed in [34]. A generalized way to perform the iterative update to account for the limits of heat exchange capacities is by solving a constrained optimization problem, which is the method used in this work.
The method is described in detail in the Appendix, but here we provide an outline for use in the sequel. First, define the decision variable \(z_{k}\) as
\[z_{k}\overset{\Delta}{=}\big{[}(x^{p}_{k+1})^{T},\,\dot{q}^{\text{L}}_{k},\, \dot{q}^{\text{ch}}_{k},\,\dot{q}^{\text{ct}}_{k}\big{]}^{T}, \tag{5}\]
where \(x^{p}\) is defined in (3), \(\dot{q}^{\text{L}}\) is the cooling load met by the DCEP, \(\dot{q}^{\text{ch}}\) and \(\dot{q}^{\text{ct}}\) are the cooling provided by chillers and cooling towers. The set \(\Omega(x^{p}_{k},w^{p}_{k},u_{k})\) is defined by the dynamics and constraints of the DCEP system:
\[\Omega(x^{p}_{k},w^{p}_{k},u_{k})\overset{\Delta}{=}\big\{z_{k}:z_{k}\text{ satisfies the DCEP component models and operating constraints given in the Appendix}\big\}.\]
Given \(x^{p}_{k}\), \(w^{p}_{k}\), and \(u_{k}\), the next state is then computed by solving the following optimization problem:
\[z_{k}^{*}=\arg\min_{z_{k}\in\Omega(x_{k}^{p},w_{k}^{p},u_{k})}\;r_{1}\|\dot{q}_{k}^{\mathrm{L}}-\dot{q}_{k}^{\mathrm{L,ref}}\|+r_{2}\|T_{k+1}^{\mathrm{chw,s}}-T_{\mathrm{set}}^{\mathrm{chw,s}}\|+r_{3}\|T_{k+1}^{\mathrm{cw,s}}-T_{\mathrm{set}}^{\mathrm{cw,s}}\|, \tag{6}\]
where \(T_{\mathrm{set}}^{\mathrm{chw,s}}\) and \(T_{\mathrm{set}}^{\mathrm{cw,s}}\) are pre-specified setpoints that reflect nominal working conditions and \(r_{1}\), \(r_{2}\), and \(r_{3}\) are positive design choices, with \(r_{1}\gg r_{2},r_{3}\). When the required cooling load \(\dot{q}_{k}^{\mathrm{L,ref}}\) is within the capacity of all the heat exchangers, then the solution to (6) yields \(\dot{q}_{k}^{\mathrm{L}}=\dot{q}_{k}^{\mathrm{L,ref}}\). When the required load exceeds the capacity of the DCEP, then (6) will lead to a solution that trades off maintaining nominal setpoints and meeting the cooling load, while respecting the limits of the heat exchangers. The solution leads to the next state \(x_{k+1}^{p}\) (as the first component of \(z_{k}^{*}\)), and thus (6) implicitly defines the model \(f(\cdot)\). In this paper, we use CasADi/IPOPT [35, 36] to solve (6) for simulating the plant.
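As an illustration of this simulation step, the sketch below sets up a problem of the form (6) with CasADi's Opti stack; the coupling function `g` and the capacity limits are toy stand-ins for the Appendix equations that define \(\Omega\), and quadratic penalties are used in place of the norms for smoothness.

```python
import casadi as ca

nz = 11                                  # z_k = [x_{k+1}^p (8 states), q_L, q_ch, q_ct]
opti = ca.Opti()
z = opti.variable(nz)

def g(z):                                # toy stand-in for the DCEP heat balances
    return ca.vertcat(z[8] - z[9], z[9] - z[10])

q_L, T_chws, T_cws = z[8], z[5], z[7]    # met load and supply temperatures inside z
q_L_ref, T_chws_set, T_cws_set = 500.0, 7.0, 29.0   # illustrative load and setpoints
r1, r2, r3 = 1e3, 1.0, 1.0               # weights with r1 >> r2, r3

opti.minimize(r1*(q_L - q_L_ref)**2 + r2*(T_chws - T_chws_set)**2
              + r3*(T_cws - T_cws_set)**2 + 1e-6*ca.sumsqr(z))   # tiny regularizer
opti.subject_to(g(z) == 0)                       # stand-in model constraints
opti.subject_to(opti.bounded(0, z[8:], 600.0))   # stand-in capacity limits
opti.solver("ipopt")
sol = opti.solve()
x_next = sol.value(z)[:8]                        # next plant state x_{k+1}^p
```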
### Electrical demand and electricity cost
In the DCEP considered, the only energy used is electricity. The relationship between the thermal quantities and the electricity consumption in chillers and cooling tower are complex. We model the chillers power consumption \(P^{\mathrm{ch}}\) as [37]:
\[P^{\mathrm{ch}}_{k}=(\frac{T_{k}^{\mathrm{cw,s}}}{T_{k}^{\mathrm{chw,s}}}-1) \dot{q}_{k}^{\mathrm{ch}}-\beta_{1}+\beta_{2}T_{k}^{\mathrm{cw,s}}-\beta_{3} \frac{T_{k}^{\mathrm{cw,s}}}{T_{k}^{\mathrm{chw,s}}}. \tag{7}\]
Power consumption of water pumps is modeled using the black-box model in [13]:
\[P^{\mathrm{chw,pump}}_{k}=\alpha_{1}\ln(1+\alpha_{2}\dot{m}_{k}^ {\mathrm{chw}})+\alpha_{3}\dot{m}_{k}^{\mathrm{chw}}+\alpha_{4}, \tag{8}\] \[P^{\mathrm{cw,pump}}_{k}=\gamma_{1}\ln(1+\gamma_{2}\dot{m}_{k}^ {\mathrm{cw}})+\gamma_{3}\dot{m}_{k}^{\mathrm{cw}}+\gamma_{4}. \tag{9}\]
Finally, the electrical power consumption of the cooling tower mainly comes from its fan and is modeled as [38]:
\[P^{\mathrm{ct}}_{k}=\lambda(\dot{m}_{k}^{\mathrm{oa}})^{3}. \tag{10}\]
The constants \(\alpha_{i},\beta_{i},\gamma_{i}\), and \(\lambda\) are empirical parameters. The total electric power consumption of the DCEP is:
\[P^{\mathrm{tot}}_{k}=P^{\mathrm{ch}}_{k}+P^{\mathrm{ct}}_{k}+P^{ \mathrm{chw,pump}}_{k}+P^{\mathrm{cw,pump}}_{k}. \tag{11}\]
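The sketch below assembles (7)-(11) into a single function; the coefficient values are placeholders, not the calibrated parameters of Section 2.3, and absolute (kelvin) temperatures are assumed in the chiller model.

```python
import math

# Placeholder coefficients for Eqs. (7)-(10); the calibrated values are not reproduced here.
beta = (50.0, 2.0, 40.0)                 # chiller model, Eq. (7)
alpha = (5.0, 0.1, 0.2, 1.0)             # chilled-water pump, Eq. (8)
gamma = (4.0, 0.1, 0.15, 1.0)            # cooling-water pump, Eq. (9)
lam = 1e-4                               # cooling tower fan, Eq. (10)

def total_power(T_cws, T_chws, q_ch, mdot_chw, mdot_cw, mdot_oa):
    """Total electric demand P_tot of Eq. (11), in the units of the component models."""
    p_ch = (T_cws/T_chws - 1.0)*q_ch - beta[0] + beta[1]*T_cws - beta[2]*T_cws/T_chws
    p_chw_pump = alpha[0]*math.log(1.0 + alpha[1]*mdot_chw) + alpha[2]*mdot_chw + alpha[3]
    p_cw_pump = gamma[0]*math.log(1.0 + gamma[1]*mdot_cw) + gamma[2]*mdot_cw + gamma[3]
    p_ct = lam*mdot_oa**3
    return p_ch + p_ct + p_chw_pump + p_cw_pump

# Example evaluation at an arbitrary operating point (all numbers illustrative).
print(total_power(T_cws=302.0, T_chws=280.0, q_ch=800.0,
                  mdot_chw=60.0, mdot_cw=90.0, mdot_oa=70.0))
```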
### Model calibration and validation
The parameters of the simulation model in Section 2.1 and electrical demand model in Section 2.2 are calibrated using data from the energy management system in United World College (UWC) of South East Asia Tampines Campus in Singapore, shown in Figure 2(b). The data is publicly available in [39], and details of the data are discussed in [40]. There are three chillers and nine cooling towers in the DCEP. The data from chiller one and cooling tower one are used for model calibration. We use 80% of data for model identification and 20% of data for verification. The out-of-sample prediction results for the total electrical demand are shown in Figure 3. Comparison between data and prediction for other variables are not shown in the interest of space.
### The (ideal) control problem
The electricity cost incurred during the \(k\)-th time step is:
\[c_{k}^{\mathrm{E}}:=t_{\mathrm{s}}\rho_{k}P^{\mathrm{tot}}_{k}, \tag{12}\]
where \(P^{\mathrm{tot}}_{k}\) is the total electric power consumed in \(k\) and is defined in (11). The goal of operating the DCEP to minimize electricity cost while meeting the required cooling load \(\dot{q}_{k}^{\mathrm{L,ref}}\) can be posed as the following infinite horizon optimal control
Figure 3: (Top) Map of the campus with a DCEP whose data is used for model calibration, and (Bottom) Out of sample prediction for \(P^{\mathrm{ch}}\) using the calibrated model (7).
problem.
\[\min_{\{u_{k}\}_{k=0}^{\infty}}\ \sum_{k=0}^{\infty}c_{k}^{\mathrm{E}}, \tag{13}\] \[\text{s.t.}\quad x_{k+1}^{p}=f(x_{k}^{p},u_{k},w_{k}^{p}),\;x_{0}^{p}=x, \tag{14}\] \[x_{k}^{p}\in\mathcal{X}^{p}(w_{k}^{p}),\quad u_{k}\in\mathrm{U}(x_{k}^{p},w_{k}),\quad\dot{q}_{k}^{\mathrm{L}}(x_{k}^{p},u_{k})=\dot{q}_{k}^{\mathrm{L,ref}}, \tag{15}\]
where \(\rho_{k}(\frac{\mathrm{USD}}{\mathrm{KWh}})\) is the electricity price. The state \(x_{k}^{p}\), input \(u_{k}\), and disturbance \(w_{k}^{p}\) of the DCEP are defined in Section 2.1; \(\dot{q}_{k}^{\mathrm{L}}\) ("L" stands for "load") represents the actual cooling load met by the DCEP, which is a function of \(x_{k}^{p}\) and \(u_{k}\). The bounds for \(x_{k}^{p}\) and \(u_{k}\) are \(\mathcal{X}^{p}(w_{k})\) and \(\mathrm{U}(x_{k}^{p},w_{k})\). The reason these sets are dependent on the state or disturbance can be found in the description of the dynamic model of the plant in the Appendix.
Even when truncated to a finite planning horizon, Problem (13) is an MINLP due to \(n_{k}^{\mathrm{ch}}\) being an integer and the nonlinear dynamics (4). In the sequel we propose two controllers that solve approximations of this idealized problem.
## 3 RL basics and proposed RL controller
### RL basics
For the following construction, let \(x\) represent the state with state space \(\mathsf{X}\) and \(u\) the input with input space \(\mathsf{U}(x)\). Now consider the following infinite horizon discounted optimal control problem
\[J^{*}(\bar{x})=\min_{\mathbf{U}}\ \sum_{k=0}^{\infty}\gamma^{k}c(x_{k},u_{k}),\quad x_{0}=\bar{x}, \tag{16}\] \[\text{s.t.}\quad x_{k+1}=F(x_{k},u_{k}),\;u_{k}\in\mathrm{U}(x_{k}),\]
where \(\mathbf{U}\overset{\Delta}{=}\{u_{0},u_{1},\ldots\}\), \(c:\mathsf{X}\times\mathsf{U}\to\mathbb{R}^{\geq 0}\) is the stage cost, \(\gamma\in(0,1)\) is the discount factor, \(F(\cdot,\cdot)\) defines the dynamics, and \(J^{*}:\mathsf{X}\to\mathbb{R}^{+}\) is the optimal value function. The goal of the RL framework is to learn an approximate optimal policy \(\phi:\mathsf{X}\to\mathsf{U}\) for the problem (16) without requiring explicit knowledge of the model \(F(\cdot,\cdot)\). The learning process is based on the \(Q\) function. Given a policy \(\phi\) for the problem (16), the \(Q\) function associated with this policy is defined as
\[Q_{\phi}(x,u)=\sum_{k=0}^{\infty}\gamma^{k}c(x_{k},u_{k}),\quad x_{0}=x,\quad u _{0}=u, \tag{17}\]
where for \(k\geq 0\) we have \(x_{k+1}=F(x_{k},u_{k})\) and for \(k\geq 1\) we have \(u_{k}=\phi(x_{k})\). A well known fact is that the optimal policy satisfies [41]:
\[\phi^{*}(x)=\arg\min_{u\in\mathsf{U}(x)}Q^{*}(x,u),\quad\text{for all}\quad x \in\mathsf{X}, \tag{18}\]
where \(Q^{*}\overset{\Delta}{=}Q_{\phi^{*}}\) is the \(Q\) function for the optimal policy. Further, for any policy \(\phi\) the \(Q\) function satisfies the following fixed point relation:
\[Q_{\phi}(x,u)=c(x,u)+\gamma Q_{\phi}\big{(}x^{+},\phi(x^{+})\big{)}, \tag{19}\]
for all \(u\in\mathsf{U}(x)\), \(x\in\mathsf{X}\), and \(x^{+}=F(x,u)\). The above relation is termed here as the fixed-policy Bellman equation. If the optimal \(\mathsf{Q}\)-function can be learned, the optimal control command \(u_{k}^{*}\) is computed from the \(\mathsf{Q}\)-function as:
\[u_{k}^{*}:=\phi^{*}(x_{k})=\arg\min_{u\in\mathsf{U}(x_{k})}Q^{*}(x_{k},u). \tag{20}\]
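To make the two relations above concrete, the sketch below estimates \(Q_{\phi}(x,u)\) by a truncated rollout of (17) and extracts a greedy action per (18)/(20) over a finite set of candidate inputs; the dynamics \(F\), stage cost \(c\), policy, and candidate set are generic placeholders.

```python
GAMMA = 0.95

def q_of_policy(F, c, phi, x, u, horizon=200):
    """Truncated discounted sum approximating Q_phi(x, u) of Eq. (17)."""
    total, xk, uk = 0.0, x, u
    for k in range(horizon):
        total += GAMMA**k * c(xk, uk)
        xk = F(xk, uk)                   # advance the (placeholder) dynamics
        uk = phi(xk)                     # follow the fixed policy after the first step
    return total

def greedy_action(q_fun, x, candidate_actions):
    """Policy extraction of Eqs. (18)/(20) over a finite candidate set."""
    return min(candidate_actions, key=lambda u: q_fun(x, u))
```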
### Proposed RL algorithm
The proposed learning algorithm has two parts: policy evaluation and policy improvement. First, in policy evaluation, a parametric approximation to the fixed policy \(\mathsf{Q}\)-function is learned by constructing a residual term from (19) as an error to minimize. Second, in policy improvement, the learned approximation is used to define a new policy based on (18). For policy evaluation, suppose for a policy \(\phi\) the \(Q\) function is approximated as:
\[Q_{\phi}(x,u)\approx Q_{\phi}^{\mathsf{\theta}}(x,u), \tag{21}\]
where \(Q_{\phi}^{\mathsf{\theta}}(\cdot,\cdot)\) is the function approximator (e.g., a neural network) and \(\mathsf{\theta}\in\mathbb{R}^{d}\) is the parameter vector (e.g., weights of the network). To fit the approximator, suppose that the system is simulated for \(\mathsf{T}_{\mathrm{sim}}\) time steps so that \(\mathsf{T}_{\mathrm{sim}}\) tuples of \((x_{k},u_{k},x_{k+1})\) are collected to produce \(\mathsf{T}_{\mathrm{sim}}\) values of:
\[d_{k}(\mathsf{\theta})=c(x_{k},u_{k})+\gamma Q_{\phi}^{\mathsf{\theta}}(x_{k+1},\phi(x_{k+1}))-Q_{\phi}^{\mathsf{\theta}}(x_{k},u_{k}), \tag{22}\]
which is the temporal difference error for the approximator. We then obtain \(\mathsf{\theta}^{*}\) by solving the following optimization problem:
\[\begin{split}\theta^{*}\overset{\Delta}{=}&\arg\min_{\theta}\|D(\theta)\|_{2}+\alpha\|\theta-\bar{\theta}\|_{2},\\ &\text{s.t.}\quad Q_{\phi}^{\theta}\geq 0\end{split} \tag{23}\]
where \(D(\mathsf{\theta})\overset{\Delta}{=}[d_{0}(\mathsf{\theta}),\ldots,d_{ \mathsf{T}_{\mathrm{sim}}-1}(\mathsf{\theta})]\). The term \(\|\mathsf{\theta}-\bar{\mathsf{\theta}}\|_{2}\) is a regularizer and \(\alpha\) is a gain. The values of \(\bar{\mathsf{\theta}}\) and \(\alpha\) are specified in step 3) of Algorithm 1. The non-negativity constraint on the approximate \(\mathsf{Q}\)-function is imposed since the \(\mathsf{Q}\)-function is a discounted sum of non-negative terms (17). How it is enforced is described in Section 3.3.3. The solution to (23) results in \(Q_{\phi}^{\mathsf{\theta}^{*}}\), which is an approximation to \(Q_{\phi}\). The quantity \(Q_{\phi}^{\mathsf{\theta}^{*}}\) can be used to obtain an improved policy, denoted \(\phi^{+}\), through
\[\phi^{+}(x)=\arg\min_{u\in\mathsf{U}(x)}Q_{\phi}^{\mathsf{\theta}^{*}}(x,u), \quad\text{for all}\quad x\in\mathsf{X}. \tag{24}\]
This process of policy evaluation (23) and policy improvement (24) is repeated. The iterative procedure is described formally in Algorithm 1, with \(\mathsf{N}_{\text{pol}}\) denoting the number of policy improvements.
```
Result: An approximate optimal policy \(\phi^{\mathsf{N}_{\text{pol}}}(x)\).
Input: \(\mathsf{T}_{\text{sim}}\), \(\theta^{0}\), \(\mathsf{N}_{\text{pol}}\), \(\tilde{\mathsf{B}}>1\)
for \(j=0,\ldots,\mathsf{N}_{\text{pol}}-1\) do
  1) Obtain the input sequence \(\{u_{k}^{j}\}_{k=0}^{\mathsf{T}_{\text{sim}}-1}\), the initial state \(x_{0}^{j}\), and the state sequence \(\{x_{k}^{j}\}_{k=1}^{\mathsf{T}_{\text{sim}}}\).
  2) For \(k=1,\ldots,\mathsf{T}_{\text{sim}}\), obtain: \(\phi^{j}(x_{k})=\arg\min_{u\in\mathsf{U}(x_{k}^{j})}Q_{\phi}^{\theta^{j}}(x_{k}^{j},u)\).
  3) Set \(\bar{\theta}=\theta^{j}\) and \(\alpha=\frac{j}{\tilde{\mathsf{B}}}\) appearing in (23).
  4) Use the samples \(\{u_{k}^{j}\}_{k=0}^{\mathsf{T}_{\text{sim}}-1}\), \(\{x_{k}^{j}\}_{k=0}^{\mathsf{T}_{\text{sim}}}\), and \(\{\phi^{j}(x_{k})\}_{k=1}^{\mathsf{T}_{\text{sim}}}\) to construct and solve (23) for \(\theta^{*}\).
  5) Set \(\theta^{j+1}=\theta^{*}\).
end
```
**Algorithm 1**Data Driven Policy Iteration: Batch mode and off-policy
This algorithm is inspired by: (i) the Batch Convex Q-learning algorithm found in [27, Section III] and (ii) the least-squares policy iteration (LSPI) algorithm [28]. The approach here is simpler than the batch optimization problem that underlies the algorithm in [27, Section III], which has an objective function that itself contains an optimization problem. In comparison to [28], we include a regularization term, inspired by proximal methods in optimization, that aids convergence, and a constraint that ensures the learned Q-function is non-negative.
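To make the policy-evaluation step concrete, the sketch below fits a linear-in-parameters approximation \(Q_{\phi}^{\theta}(x,u)=\theta^{T}\psi(x,u)\) by least squares on the temporal-difference residuals (22). It uses a squared-norm variant of (23) so that the fit has a closed form, and it omits the non-negativity constraint; the feature map `psi`, the stage cost `cost`, and the transition data are placeholders rather than the exact quantities used here (the paper solves (23) with CVX).

```python
import numpy as np

def fit_q_parameters(transitions, phi, psi, cost, theta_bar, alpha, gamma=0.97):
    """Regularized least-squares fit of theta for Q(x,u) ~ theta^T psi(x,u).

    transitions: list of (x_k, u_k, x_next) tuples collected from simulation.
    phi:  current policy, maps a state to an input.
    psi:  feature map, psi(x, u) -> 1-D numpy array of length d.
    cost: stage cost c(x, u) -> float.
    """
    A_rows, b = [], []
    for x, u, x_next in transitions:
        u_next = phi(x_next)                           # on-policy input at the next state
        # TD residual d_k = c + gamma*theta^T psi' - theta^T psi is linear in theta.
        A_rows.append(psi(x, u) - gamma * psi(x_next, u_next))
        b.append(cost(x, u))
    A, b = np.vstack(A_rows), np.asarray(b)
    d = A.shape[1]
    # Minimize ||A theta - b||^2 + alpha ||theta - theta_bar||^2 (proximal regularizer).
    lhs = A.T @ A + alpha * np.eye(d)
    rhs = A.T @ b + alpha * theta_bar
    return np.linalg.solve(lhs, rhs)
```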
### Proposed RL controller for DCEP
We now specify the ingredients required to apply Algorithm 1 to obtain an RL controller (i.e., a state feedback policy) for the DCEP from simulation data. Namely, (1) the state description, (2) the cost function design, (3) the approximation architecture, and (4) the exploration strategy. Parts (1), (2), and (3) refer to the setup of the optimal control problem the RL algorithm is attempting to approximately solve. Part (4) refers to the selection of how the state/input space is explored (step 1 in Algorithm 1).
#### 3.3.1 State space description
In RL, the construction of the state space is an important feature, and the state is not necessarily the same as the plant state. To define the state space for RL, we first denote \(w_{k}\) as the vector of exogenous variables:
\[w_{k}=[(w_{k}^{p})^{T},\rho_{k},\tilde{\rho}_{k}]\in\mathbb{R}^{4}. \tag{25}\]
where \(\tilde{\rho}_{k}=\frac{1}{\tau}\sum_{t=k-\tau}^{k}\rho_{t}\) is a backwards moving average of the electricity price. The expanded state for RL is:
\[x_{k}\overset{\Delta}{=}[x_{k}^{p},w_{k}]^{T}\in\mathsf{X}\subset\mathbb{R}^{12}. \tag{26}\]
Note that with the state defined by (26), _a state feedback policy is implementable_, since all entries of \(x_{k}\) can be measured with commercially available sensors (e.g., the outside wet-bulb temperature \(T^{\text{oa,wb}}\)).
States are obtained sequentially through simulation, starting from state \(x_{0}^{j}\) for each \(j\). The choice of which of the three controllers to use is determined by the probability mass function \(\mathsf{v}_{\text{exp}}^{j}\in\mathbb{R}^{3}\), which depends on the iteration index of the policy iteration loop:
\[\mathsf{v}_{\text{exp}}^{j}=\begin{cases}[0,0.1,0.9]&\text{for $j\leq 5$.}\\ [0.5,0.25,0.25]&\text{for $j>5$.}\end{cases} \tag{30}\]
The entries correspond to the probabilities of using the corresponding control strategies, which appear in the (i)-(iii) order just introduced. The rationale for this choice is that the BL controller provides "reasonable" state-input examples for the RL algorithm in the early learning iterations, so as to steer the parameter values in the correct direction. After this early learning phase, weight is shifted towards the current working policy, so as to force the learning algorithm to update the parameter vector in response to its own actions.
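For illustration, the exploration rule (30) amounts to sampling, at every simulation step, which of the three data-generating strategies to apply. The sketch below assumes the three strategies are supplied as callables in the same (i)-(iii) order referred to above; their definitions are not reproduced here.

```python
import numpy as np

def exploration_pmf(j):
    """Probability mass over the three exploration strategies, per (30)."""
    return np.array([0.0, 0.1, 0.9]) if j <= 5 else np.array([0.5, 0.25, 0.25])

def choose_exploration_input(j, x, controllers, rng=None):
    """controllers: list of three callables, ordered as (i)-(iii) in the text."""
    rng = np.random.default_rng() if rng is None else rng
    idx = rng.choice(3, p=exploration_pmf(j))   # sample a strategy index
    return controllers[idx](x)                  # apply the sampled strategy to the state
```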
The policy evaluation problem (23) during training is solved using CVX [42]. The simulation model (4) used to generate state updates, which requires solving a non-convex NLP, is run using CasADi and IPOPT [35, 36].
The parameters used for RL training are \(\gamma=0.97\), \(d=36\), \(\kappa=500\), \(\beta=100\), \(\mathsf{T}_{\text{sim}}=432\) and \(\mathsf{N}_{\text{pol}}=50\). The parameter \(\tau\) for the backward moving average filter on the electricity price is chosen to represent 4 hours. The choice of the 36 basis functions is a bit involved; they are discussed in Section 7. Because a simulation time step, from \(k\) to \(k+1\), corresponds to a time interval of 10 minutes, \(\mathsf{T}_{\text{sim}}=432\) corresponds to 3 days. The controller was trained with weather and load data for the three days Oct. 10-12, 2011, from the Singapore UWC campus dataset described in Section 2.3. The electricity price data used for training was taken as a scaled version of the locational marginal price from PJM [43] for the three days Aug. 30 - Sept. 1, 2021.
### Real time implementation
Once the RL controller is trained, it computes the control command \(u_{k}\) in real-time as:
\[u_{k}:=\phi^{*}(x_{k})=\arg\min_{u\in\mathsf{U}(x_{k})}\ Q_{\phi}^{\hat{\theta} }(x_{k},u), \tag{31}\]
where \(\hat{\theta}\) is the parameter vector learned in Algorithm 1. This \(\hat{\theta}\) need not be \(\theta^{\mathsf{N}_{\text{pol}}}\); it is the parameter vector with the best closed-loop performance, as explained later in Section 6.2.
Due to the non-convexity of the set \(\mathsf{U}(x_{k})\) and the integer nature of \(n_{k}^{\text{ch}}\), problem (31) is a non-convex mixed-integer problem. We solve it as follows: for each possible value of \(n_{k}^{\text{ch}}\), we solve the corresponding continuous-variable nonlinear program using CasADi/IPOPT [35, 36], and then choose the minimum of the (\(n_{\text{max}}^{\text{ch}}+1\)) solutions by direct search. Direct search is feasible because \(n_{\text{max}}^{\text{ch}}\) is a small number in practice for DCEPs (\(n_{\text{max}}^{\text{ch}}=7\) in our simulated example).
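In code, the direct search amounts to enumerating the chiller count and keeping the best continuous solve. The function `solve_continuous_nlp` below is a placeholder standing in for the CasADi/IPOPT solve of (31) with \(n_{k}^{\text{ch}}\) fixed; it is not an actual API of those libraries.

```python
def solve_policy_by_direct_search(x_k, n_ch_max, solve_continuous_nlp):
    """Direct search over the integer chiller count in problem (31).

    solve_continuous_nlp(x_k, n_ch) is assumed to return (u, objective) for the
    continuous-variable NLP with n^ch fixed, with objective = inf if infeasible.
    """
    best_u, best_n, best_obj = None, None, float("inf")
    for n_ch in range(n_ch_max + 1):            # n_ch_max + 1 candidate subproblems
        u, obj = solve_continuous_nlp(x_k, n_ch)
        if obj < best_obj:
            best_u, best_n, best_obj = u, n_ch, obj
    return best_u, best_n
```

With \(n_{\text{max}}^{\text{ch}}=7\), at most eight continuous NLPs are solved per control update, which keeps the real-time computation modest.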
## 4 Proposed Model Predictive Controller
Recall that a straightforward translation of (13) to MPC will require solving the following problem at every time index \(k\) (here we only describe the one at \(k=0\) to avoid cumbersome notation):
\[\min_{\{u_{k}\}_{k=0}^{\mathsf{T}^{\mathsf{plan}}-1}} \sum_{k=0}^{\mathsf{T}^{\mathsf{plan}}-1}c_{k}^{\text{E}},\] (32) s.t. \[x_{k+1}^{p}=f(x_{k}^{p},u_{k},w_{k}^{p}),\ x_{0}^{p}=x,\] \[x_{k}^{p}\in\mathsf{X}^{p}(w_{k}^{p}),\quad u_{k}\in\mathsf{U}(x _{k}^{p},w_{k})\] \[\dot{q}_{k}^{\text{L}}(x_{k}^{p},u_{k})=\dot{q}_{k}^{\text{L}, \text{ref}},\]
where \(c_{k}^{\text{E}}\) is defined in (12), and \(\mathsf{T}^{\mathsf{plan}}\) is the planning horizon. Even for a moderate planning horizon \(\mathsf{T}^{\mathsf{plan}}\) the optimization problem (32) will be a large MINLP. We now describe an algorithm that uses a dynamic model of the DCEP to approximately solve (32) without needing to solve an MINLP or even an MILP. This algorithm, which we call _MBOC_, for _Model Based (sub) Optimal Controller_, is then used to implement MPC by repeatedly applying it in a receding horizon setting as new forecasts of external disturbances become available.
The first challenge we have to overcome is not related to the mixed-integer nature of the problem but is related to the complex nature of the dynamics. Recall from Section 2.1 that the dynamic model, i.e., the function \(f\) in the equality constraint \(x_{k+1}=f(\cdot)\) in (4) is not available in explicit form; rather the state is propagated in the simulation by solving an optimization problem. Without an explicit form for the function \(f(\cdot)\), modern software tools that reduce the drudgery in nonlinear programming, namely numerical solvers with automatic differentiation, cannot be used.
We address this challenge by substituting the implicit equality constraint \(x_{k+1}^{p}=f(x_{k}^{p},u_{k},w_{k}^{p})\) in (32) with the underlying constraints \(\Omega_{k}(\cdot)\) in (6), and adding the objective of (6) to the objective of (32). The modified problem becomes:
\[\min_{u_{k}} \sum_{k=0}^{\mathsf{T}^{\mathsf{plan}}-1}c_{k}^{\text{E}}+r_{1} \|\dot{q}_{k}^{\text{L}}-\dot{q}_{k}^{\text{L},\text{ref}}\|^{2}+r_{2}\|T_{k+1} ^{\text{chw},s}-T_{\text{set}}^{\text{chw},s}\|^{2} \tag{33}\] \[+r_{3}\|T_{k+1}^{\text{cw},s}-T_{\text{set}}^{\text{cw},s}\|^{2},\] s.t. \[x_{k+1}^{p}\in\Omega_{k}(x_{k}^{p},u_{k},w_{k}^{p}),\ x_{0}^{p}=x,\] \[x_{k}^{p}\in\mathsf{X}^{p}(w_{k}^{p}),\quad u_{k}\in\mathsf{U}(x _{k}^{p},w_{k}).\]
Since the input \(n_{k}^{\text{ch}}\) takes integer value in the set \(\{0,1,\dots,n_{\text{max}}^{\text{ch}}\}\), the problem (33) is still a high-dimensional MINLP.
The proposed algorithm to approximately solve (33) without using an MINLP solver or an MILP relaxation consists of three steps. These are listed below in brief, with more details provided subsequently; a code sketch follows the list.
1. The integer variable \(n^{\text{ch}}\in[0,1,\ldots,n^{\text{ch}}_{\text{max}}]\) is relaxed to a continuous one \(n^{\text{ch},c}\in[0,n^{\text{ch}}_{\text{max}}]\). The relaxed problem, an NLP, is solved using an NLP solver to obtain a locally optimal solution. In this paper we use IPOPT (through CasADi) to solve this relaxed NLP.
2. The continuous solution \(\{n^{\text{ch},c}_{k}\}_{k=0}^{\mathsf{T}^{\text{plan}}-1}\in\mathbb{R}^{\mathsf{T}^{\text{plan}}}\), resulting from Step 1, is processed using Algorithms 2 and 3 to produce a transformed solution that is integer-valued, denoted by \(\{n^{\text{ch},d}_{k}\}_{k=0}^{\mathsf{T}^{\text{plan}}-1}\).
3. In Problem (33), the input \(\{n^{\text{ch}}_{k}\}_{k=0}^{\text{T}^{\text{plan}}-1}=\{n^{\text{ch},d}_{k}\}_ {k=0}^{\text{T}^{\text{plan}}-1}\) is fixed at the values obtained in Step 2, and the resulting NLP is solved again. The resulting solution is called the model-based sub-optimal solution (MBOC).
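The sketch below outlines the three steps. The functions `solve_relaxed_nlp` and `solve_nlp_with_fixed_chillers` are placeholders for the CasADi/IPOPT solves of the relaxed and the integer-fixed versions of (33); `moving_average_round` and `reduce_switching` correspond to Algorithms 2 and 3.

```python
def mboc(x0, forecasts, T_plan, n_ch_max,
         solve_relaxed_nlp, solve_nlp_with_fixed_chillers,
         moving_average_round, reduce_switching, window=12):
    """Model Based (sub)Optimal Controller: approximate solution of problem (33)."""
    # Step 1: relax the chiller count to a continuous variable and solve the NLP.
    relaxed = solve_relaxed_nlp(x0, forecasts, T_plan, n_ch_max)
    n_ch_cont = relaxed["n_ch_continuous"]             # length-T_plan profile in [0, n_ch_max]

    # Step 2: convert the continuous profile into an integer one with few switches.
    n_ch_filtered = moving_average_round(n_ch_cont, window)   # Algorithm 2
    n_ch_discrete = reduce_switching(n_ch_filtered)           # Algorithm 3

    # Step 3: fix the integer chiller profile and re-solve the NLP for the other inputs.
    return solve_nlp_with_fixed_chillers(x0, forecasts, T_plan, n_ch_discrete), n_ch_discrete
```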
In the sequel, we will refer to a vector with non-negative integer components, \(x\in\mathbb{Z}^{n}\), as an \(n\)-length _discrete signal_. For a discrete signal \(x\in\mathbb{Z}^{n}\), the number of switches, \(N_{\text{switch}}\), is defined as the number of times two consecutive entries differ: \(N_{\text{switch}}:=\sum_{i=1}^{n-1}I(x_{i}-x_{i+1})\), where \(I(\cdot)\) is the indicator function: \(I(0)=0\) and \(I(y)=1\) for \(y\neq 0\).
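This switch count translates directly into code; a minimal version:

```python
import numpy as np

def num_switches(x):
    """Number of indices at which consecutive entries of a discrete signal differ."""
    return int(np.count_nonzero(np.diff(np.asarray(x))))
```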
The continuous relaxation in Step 1 is inspired by branch and bound algorithms for solving MINLPs, since such a relaxation is the first step in branch and bound algorithms. However, a simple round-off based method to convert the continuous variable \(n^{\text{ch},c}\) to a discrete one leads to a high number of oscillations in the solution. This corresponds to frequent turning on and off of one or more chillers, which is detrimental to them.
Step 2 converts the continuous solution from Step 1 to a discrete signal, and involves multiple steps in itself. The first step is Algorithm 2, which filters the signal \(n^{\text{ch},c}\) with a modified moving average filter with a two hour window (corresponding to 12 samples with a 10 minute sampling period) and then rounding up the filtered value to the nearest integer. Thus by operating the moving average filter on \(n^{\text{ch},c}\) one obtains a discrete signal for the chiller command \(n^{\text{ch},f}=\text{moving\_average\_round}(n^{\text{ch},c})\).
```
Input: Signal \(\mathbf{x}\in\mathbb{Z}^{n}\), \(w\in\mathbb{Z}^{+}\) (window length)
for i = 1 : w
  \(\mathbf{x}_{d}[i]=\lceil\text{mean}(\mathbf{x}[1:i+w/2])\rceil\)
end
for i = w/2+1 : n-w/2
  \(\mathbf{x}_{d}[i]=\lceil\text{mean}(\mathbf{x}[i-w/2:i+w/2])\rceil\)
end
for i = n-w/2+1 : n
  \(\mathbf{x}_{d}[i]=\lceil\text{mean}(\mathbf{x}[i-w/2:\text{end}])\rceil\)
end
Output: Discrete signal \(\mathbf{x}_{d}\)
```
**Algorithm 2**\(\mathbf{x}_{d}=\text{moving\_average\_round}(\mathbf{x})\)
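A Python transcription of Algorithm 2 might look as follows (0-based indexing, even window length `w`); the boundary handling follows the truncated-window intent of the pseudocode, though the exact index conventions at the edges may differ slightly.

```python
import math
import numpy as np

def moving_average_round(x, w):
    """Modified moving-average filter followed by rounding up (cf. Algorithm 2).

    x: 1-D array (e.g., the relaxed chiller profile); w: even window length in samples.
    """
    x = np.asarray(x, dtype=float)
    n, h = len(x), w // 2
    x_d = np.empty(n, dtype=int)
    for i in range(n):
        lo, hi = max(0, i - h), min(n, i + h + 1)   # truncate the window at the boundaries
        x_d[i] = math.ceil(x[lo:hi].mean())          # round the local average up
    return x_d
```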
The rounding moving average filter typically does not reduce the switching frequency sufficiently. This is why an additional step, Algorithm 3, described below, is used to operate on this signal and produce the output \(n^{\text{ch},d}:=\text{reduce\_switching}(n^{\text{ch},f})\), which has fewer switches.
The need for Step 3 is that the chiller command \(\{n^{\text{ch},d}\}\) at the end of the second step, together with other variables in the solution vector from Step 1, may violate some constraints of the optimization problem (33). Even if \(\{x^{p}_{k+1}\}\) and \(\{n^{\text{ch},d}\}\) are feasible, the resulting control commands may not track the cooling load adequately. Step 3 ensures a feasible solution and improves tracking.
**Forecasts:** Implementation requires availability of forecasts of the disturbance \(w^{p}_{k}\), i.e., the cooling load reference and the electricity price, over the next planning horizon. There is a large literature on estimating and/or forecasting loads for buildings and real-time electricity prices; see [44, 45, 46] and references therein. The forecast of \(T^{\text{oa,wb}}\) is available from the National Weather Service [47]. We therefore assume the forecasts of the three disturbance signals, \(\dot{q}^{\text{L,ref}}_{k}\), \(T^{\text{oa,wb}}_{k}\) and \(\rho_{k}\), are available to the MPC controller at each \(k\).
## 5 Rule-based Baseline Controller
In order to evaluate the performances of the RL and MPC controllers, we compare them to a rule-based baseline controller (BL). The proposed baseline controller is designed to utilize the TES and the time-varying electricity prices (to the extent possible with heuristics) to reduce energy costs. The RL controller and the baseline controller have the same information about the price: the current price \(\rho_{k}\) and a backward moving average \(\bar{\rho}_{k}\). At each timestep \(k\), the baseline controller determines the control command \(u_{k}=[\dot{m}^{\text{lw}}_{k},\dot{m}^{\text{tw}}_{k},n^{\text{ch}}_{k},\dot{m}^{\text{cw}}_{k},\dot{m}^{\text{oa}}_{k}]^{T}\) following the procedure shown in
Figure 4. The flowcharts are explained in Section 5.1 and 5.2. The subscript "sat" indicates the variable is saturated at its upper or lower bound; the numerical values of the bounds used in simulations are shown in Table 1.
### For chilled water loop
1. At time step \(k\), \(n_{k}^{\text{ch}}\), \(\dot{m}_{k}^{\text{lw}}\) and \(\dot{m}_{k}^{\text{tw}}\) are initialized to \(n_{k-1}^{\text{ch}}\), \(\dot{m}_{k-1}^{\text{lw}}\) and \(\dot{m}_{k-1}^{\text{tw}}\).
2. The BL controller increases or decreases \(\dot{m}_{k}^{\text{tw}}\) by a fixed amount (10 kg/sec) if \(\rho_{k}\) is 5% lower or higher, respectively, than \(\bar{\rho}_{k}\), in order to take advantage of the time-varying electricity price (a code sketch of this rule follows the list).
3. The BL controller estimates \(T_{k+1}^{\text{lw,r}}\), \(T_{k+1}^{\text{chw},s}\), and \(S_{k+1}^{\text{twc}}\) by (38), (47), and (40), respectively, under the assumption that \(\dot{m}^{\text{ip}}=0\) and \(\dot{q}_{k}^{\text{ch}}=n_{k}^{\text{ch}}\dot{q}_{\text{indv}}^{\text{ch}}\). If \(T_{k+1}^{\text{lw,r}}\), \(T_{k+1}^{\text{chw},s}\), and \(S_{k+1}^{\text{twc}}\) are within their bounds, the already computed control command for the chilled water loop is executed. Otherwise, the controller repeatedly increases/decreases \(\dot{m}_{k}^{\text{lw}}\) and \(\dot{m}_{k}^{\text{tw}}\) by a fixed amount (10 kg/sec), and \(n_{k}^{\text{ch}}\) by 1, until \(T_{k+1}^{\text{lw,r}}\), \(T_{k+1}^{\text{chw},s}\), and \(S_{k+1}^{\text{twc}}\) are within their bounds. Since \(\dot{m}_{k}^{\text{lw}}+\dot{m}_{k}^{\text{tw}}\) determines the minimum required \(n_{k}^{\text{ch}}\) through (43) and (45), the final \(n_{k}^{\text{ch}}\) is readjusted to meet this minimum.
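Step 2 above, the price-responsive adjustment of the TES flow, is simple enough to state in code. The 10 kg/sec step and the 5% band are the values quoted in the item; the saturation bounds are those of Table 1. This is a sketch of the rule only, not of the full baseline logic.

```python
def adjust_tes_flow(m_tw, price, price_avg, step=10.0, band=0.05,
                    m_tw_min=-30.0, m_tw_max=30.0):
    """Increase the TES flow when electricity is cheap, decrease it when it is expensive."""
    if price < (1.0 - band) * price_avg:       # price at least 5% below its moving average
        m_tw += step
    elif price > (1.0 + band) * price_avg:     # price at least 5% above its moving average
        m_tw -= step
    return min(max(m_tw, m_tw_min), m_tw_max)  # saturate at the bounds ("sat" in Figure 4)
```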
### For cooling water loop
1. \(\dot{m}_{k}^{\text{cw}}\) and \(\dot{m}_{k}^{\text{oa}}\) are initialized to \(\dot{m}_{k-1}^{\text{cw}}\) and \(\dot{m}_{k-1}^{\text{oa}}\).
2. The BL controller estimates \(T_{k+1}^{\text{cw,r}}\) by (50), assuming a fixed \(\eta\). This value is to be estimated from historical data. If \(T_{k+1}^{\text{cw,r}}\) is above/below its bound, \(\dot{m}_{k}^{\text{cw}}\) is increased/decreased by a fixed amount (20 kg/sec) repeatedly until \(T_{k+1}^{\text{cw,r}}\) is within its bound.
3. Once \(\dot{m}_{k}^{\text{cw}}\) is determined, the capacity of the cooling tower \(\dot{q}_{\text{UB,k}}^{\text{ct}}\) and the required cooling \(\dot{q}_{\text{set,k}}^{\text{ct}}\) that cools \(T_{k}^{\text{cw,r}}\) down to \(T_{\text{set}}^{\text{cw,s}}\) are computed. If \(\dot{q}_{\text{set,k}}^{\text{ct}}\leq\dot{q}_{\text{UB,k}}^{\text{ct}}\leq 1.1\dot{q}_{\text{set,k}}^{\text{ct}}\), then the control command for the cooling water loop computed so far is executed. If \(\dot{q}_{\text{UB,k}}^{\text{ct}}<\dot{q}_{\text{set,k}}^{\text{ct}}\) or \(\dot{q}_{\text{UB,k}}^{\text{ct}}>1.1\dot{q}_{\text{set,k}}^{\text{ct}}\), \(\dot{m}_{k}^{\text{oa}}\) is increased or decreased, respectively, by a fixed amount (0.05 kg/sec). Since \(\dot{q}_{\text{UB,k}}^{\text{ct}}\) depends on the ambient wet-bulb temperature \(T_{k}^{\text{oa,wb}}\) (see equation (52)), there can be cases in which \(\dot{q}_{\text{UB,k}}^{\text{ct}}\) cannot satisfy \(\dot{q}_{\text{set,k}}^{\text{ct}}\leq\dot{q}_{\text{UB,k}}^{\text{ct}}\leq 1.1\dot{q}_{\text{set,k}}^{\text{ct}}\) even when \(\dot{m}_{k}^{\text{oa}}\) is already at its bound. In this case \(\dot{m}_{k}^{\text{tw}}\) is varied by a fixed amount (20 kg/sec) repeatedly until \(T_{k+1}^{\text{cw,r}}\) and \(\dot{q}_{\text{UB,k}}^{\text{ct}}\) are within their bounds.
## 6 Performance evaluation
### Simulation setup
Simulations for closed loop control with the RL, MPC and baseline controllers are performed for the week of Sept. 6-12, 2021, which we refer to as the _testing week_ in the sequel. The weather data for the testing week is obtained from the Singapore data set described in Section 2.3. The real-time electricity price used is a scaled version of PJM's locational marginal price for the same week [43]. Other relevant simulation parameters are listed in Table 1. There is no plant-model mismatch in the MPC simulations. In particular, since forecasts of the disturbance signals are available in practice (see the discussion at the end of Section 4), in the simulations the MPC controller is provided error-free forecasts in the interest of simplicity.
_We emphasize that the closed loop results with the RL controller presented here are "out-of-sample" results, meaning the external disturbance \(w_{k}\) (weather, cooling load, and electricity price) used in the closed loop simulations are different from those used in training the RL controller._
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline Parameter & Unit & value & Parameter & Unit & value \\ \hline \(t_{\text{s}}\) & minutes & 10 & \(\frac{t_{\text{s}}\mathsf{T}^{\text{plan}}}{60}\) & hours & 24 \\ \(\frac{\tau t_{\text{s}}}{60}\) & hours & 4 & \(\frac{wt_{\text{s}}}{60}\) & hours & 2 \\ \(n_{\text{max}}^{\text{ch}}\) & N/A & 7 & \(\dot{m}_{\text{max}}^{\text{tw}}/\dot{m}_{\text{min}}^{\text{tw}}\) & \(\frac{\text{kg}}{\sec}\) & 30/-30 \\ \(\dot{m}_{\text{max}}^{\text{lw}}/\dot{m}_{\text{min}}^{\text{lw}}\) & \(\frac{\text{kg}}{\sec}\) & 350/20 & \(\dot{m}_{\text{max}}^{\text{cw}}/\dot{m}_{\text{min}}^{\text{cw}}\) & \(\frac{\text{kg}}{\sec}\) & 300/20 \\ \(S_{\text{max}}^{\text{twc/tww}}\) & N/A & 0.95 & \(S_{\text{min}}^{\text{twc/tww}}\) & N/A & 0.05 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Simulation Parameters
Figure 4: Baseline Controller
Four performance metrics are used to compare the three controllers. The first is the energy cost incurred. The second is the Root Mean Square Error (RMSE) in tracking the cooling load reference:
\[e_{RMSE}:=\left(\frac{1}{N_{\mathrm{sim}}-1}\sum_{k=1}^{N_{\mathrm{sim}}}(\dot{ q}_{k}^{\mathrm{L,ref}}-\dot{q}_{k}^{\mathrm{L}})^{2}\right)^{\frac{1}{2}}, \tag{34}\]
where \(N_{\mathrm{sim}}\) is the duration for which closed loop simulations are carried out, which in this paper is 1008 (corresponding to a week: \(7\times 24\times 6\)). The third is the number of chiller switches over the simulation period:
\[n_{\mathrm{switch}}^{\mathrm{ch}}:=\sum_{k=1}^{N_{\mathrm{sim}}-1}|n_{k+1}^{ \mathrm{ch}}-n_{k}^{\mathrm{ch}}|. \tag{35}\]
The fourth is control computational time during closed loop simulations.
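The tracking and switching metrics (34)-(35) are computed from the closed-loop trajectories; a minimal sketch:

```python
import numpy as np

def tracking_rmse(q_ref, q_actual):
    """Root mean square tracking error (34); inputs are length-N_sim arrays in kW."""
    e = np.asarray(q_ref) - np.asarray(q_actual)
    return float(np.sqrt(np.sum(e ** 2) / (len(e) - 1)))

def chiller_switches(n_ch):
    """Total chiller switching (35): sum of absolute changes in the chiller count."""
    return int(np.sum(np.abs(np.diff(np.asarray(n_ch)))))
```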
### Numerical Results and Discussion
A summary of performance comparisons from the simulations is shown in Table 2. All three controllers meet the cooling load adequately (more on this later), and both the RL and MPC controllers reduce energy cost over the baseline by about the same amount (16.8% for RL vs. 17.8% for MPC). These savings are comparable with those reported in the literature for MPC with MILP relaxation and RL.
In terms of tracking the reference load, both RL and MPC again perform similarly while the baseline controller performs the best in terms of standard deviation of the tracking error; see Figure 5 and Table 2. The worst tracking RMSE is 61 kW, which is a small fraction of the mean load (1313 kW). Thus the tracking performance is considered acceptable for all three controllers. The fact that the baseline performs the best in tracking the cooling load is perhaps not surprising since it is designed primarily to meet the required load and keep chiller switching low, with energy cost a secondary consideration.
In terms of chiller switches, the RL controller performs the worst; see Table 2. This is not surprising because no cost was assigned to higher switching in its design. The MPC performs the best in this metric, again most likely since keeping switching frequency low was an explicit consideration in its design. Ironically, this feature was introduced into the MPC controller after an initial design attempt without it led to extremely high switching frequency.
In terms of real-time computation cost, the baseline performs the best, which is not surprising since no optimization is involved. The RL controller has two orders of magnitude lower computation cost compared to MPC. The computation time for all controllers is well within the time budget, since control commands are updated every 10 minutes.
**Deeper look:** Simulations are done for a week, but the plots below show only two days to avoid clutter. The cost savings of the RL and MPC controllers come from their ability to use the TES to shift the peak electric demand to periods of low price better than the baseline controller; see Figure 6. The MPC controller has knowledge of the electricity price over the whole planning horizon, and thus achieves the most savings. The cost-saving difference between the BL and RL controllers arises because the RL controller learns the variation in the electricity price well, or at least better than the BL controller. This can be seen in Figure 7. The RL controller always discharges the TES (\(S^{\mathrm{twc}}\) drops) during the peak electricity price, while the baseline controller sometimes cannot do so because the volume of cold water is already at its minimum bound. The BL controller discharges the TES as soon as the electricity price rises, which may result in insufficient cold water stored in the TES when the electricity price reaches its maximum. While both the RL and BL controllers are forced to use the same price information (the current price and a backward moving average), the rule-based logic in the baseline controller cannot use that information as effectively as RL.
An alternate view of this behavior can be obtained by looking at the times when the chillers are turned on and off, since using chillers cost much more electricity than using the TES, which only needs a few pumps. We can see from Figure 8 that all controllers shift their peak electricity demand to the times when electricity is cheap. But the rule-based logic
Figure 5: Load tracking performances of the MPC, RL, and BL controllers: The “Ref” is cooling required \(\dot{q}_{k}^{\mathrm{L,ref}}\).
Figure 6: Power consumption vs. real-time electricity price for the MPC, RL, and BL controllers.
of the BL controller is not able to line up electric demand with low price as well as the RL and MPC controllers do.
Another benefit of the RL controller is that it cycles the chillers less than the BL controller, even though the cost of switching the chillers between their on and off states is not incorporated in the cost function; see Figure 9. Fast cycling greatly decreases the life expectancy of a chiller.
## 7 Under the hood of the RL controller
More insights about why the learned policy works under various conditions can be obtained by taking a closer look at the design choices made for the RL controller. All these choices were the result of considerable trial and error.
**Choice of basis functions:** The choice of basis used to approximate the Q-function is essential to the success of the RL controller. It defines the approximate Q-function and, consequently, the policy (31). Redundant basis functions can lead to overfitting, which causes poor out-of-sample performance of the policy. We avoid this effect by selecting a reduced quadratic basis, consisting of the 36 unique non-zero entries shown in Figure 10. Another advantage of reducing the number of basis functions is that it reduces the number of parameters to learn, since the training effort increases dramatically with the number of parameters.
The choices for the basis were based on physical intuition about the DCEP. First, basis functions can be simplified by dropping redundant states. One example is \(S^{\text{tww}}\): since \(S^{\text{twc}}\) and \(S^{\text{tww}}\) are complementary, \(S^{\text{twc}}+S^{\text{tww}}=1\), one of them can be dropped. Considering that \(S^{\text{twc}}\) reflects the amount of cooling saved in the TES, we dropped \(S^{\text{tww}}\). Another example is the term \(T^{\text{tww}}\), which is dropped since it is bounded by \(T^{\text{tw,r}}\), which is already included in the basis. Second, if two terms have a strong causal or dependent relationship, e.g., \(\dot{m}^{\text{tw}}\) and \(T^{\text{tw,r}}\) (see (38)), then the corresponding quadratic term \(\dot{m}^{\text{tw}}T^{\text{tw,r}}\) is selected as an element of the basis. Third, if two terms have minimal causal or dependent relationship, e.g., \(\dot{m}^{\text{oa}}\) and \(\dot{m}^{\text{tw}}\) (they belong to different equipment and water loops), then the corresponding quadratic term \(\dot{m}^{\text{oa}}\dot{m}^{\text{tw}}\) is not selected as an element of the basis.
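To make the idea of a reduced quadratic basis concrete, the sketch below assembles a feature vector from linear terms plus a hand-picked subset of pairwise products. The particular names and pairs are only illustrative; the actual 36 basis functions are those indicated in Figure 10 and follow the three rules above.

```python
import numpy as np

def reduced_quadratic_basis(z, pairs):
    """Constant, linear terms, and selected pairwise products only.

    z:     dict of named scalar state/input quantities, e.g. {"S_twc": 0.4, "m_tw": 12.0}.
    pairs: list of (name_a, name_b) tuples kept as cross terms, e.g. [("m_tw", "T_tw_r")],
           chosen according to the causal-relationship rule described above.
    """
    names = sorted(z)
    feats = [1.0] + [z[k] for k in names]        # constant and linear terms
    feats += [z[a] * z[b] for a, b in pairs]     # only physically related products
    return np.asarray(feats)
```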
**Choice of States:** Exogenous disturbances have to be included in the RL state to make the controller work under cooling load, electricity price, and weather trajectories _that are distinct from what is seen during training_. Without this feature the RL controller would not be applicable in the field.
**Convergence of the learning algorithm:** The learning algorithm appears to converge in training, meaning, \(|\theta_{k}-\theta_{k-1}|\) is seen to reduce as the number of training epochs \(k\) increases; see Figure 11. This convergence should not be confused with convergence to any meaningful optimal pol
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline & Total cost (\$) & \(e_{RMSE}\) (kW) & No. of switches & Control computation time (sec, \(\mu\pm\sigma\)) \\ \hline Baseline & 3308 & 4.14e-4 & 45 & 8.9e-5 \(\pm\) 3.9e-4 \\ \hline RL & 2752 & 1.85 & 114 & 0.32 \(\pm\) 0.01 \\ \hline MPC & 2709 & 61.38 & 65 & 27.33 \(\pm\) 5.99 \\ \hline \end{tabular}
\end{table}
Table 2: Comparison of RL, MPC, and baseline controllers (for a week-long simulation).
Figure 8: Required cooling load vs. real-time electricity price for the MPC, RL, and BL controllers.
Figure 7: TES cold water volume vs. real-time electricity price for the MPC, RL, and BL controllers.
icy. The policy learned in the 40th iteration can be a better-performing controller than the policy obtained in the 50th iteration. We believe the proximal-gradient-type method used in learning helps keep the parameters from diverging, but for the same reason it may prevent them from converging to a faraway optimum. This trade-off is necessary: our initial attempts without the damping proximal term were not successful in learning anything useful. As a result, after a few policy improvement iterations, every new policy obtained had to be tested by running a closed-loop simulation to assess its performance. The best-performing one was selected as "the RL controller", which happened to be the 26th one.
**Numerical considerations for training:** Training the RL controller is an iterative task that required trying many different configurations of the parameters appearing in Table 1. In particular, we found the following considerations useful.
1. If the value of \(\kappa\) is too small, the controller will not learn to track the load \(\dot{q}_{k}^{\text{L,ref}}\). On the other hand, if \(\kappa\) is too large, the controller will not save energy cost.
2. The condition number of (23) significantly affects the performance of Algorithm 1. However, the relative magnitudes of the state and input values are very different, for example, \(\dot{q}^{\text{L}}\in[300,4000]\) (kW) and \(S^{\text{twc}}\in[0.05,0.95]\), which makes the condition number of (23) extremely large. We therefore normalize all state and input values by their average values; a small sketch follows this list. With appropriate scaling of the states/inputs, we reduced the condition number from \(1\times 10^{20}\) to \(1\times 10^{3}\).
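A minimal sketch of the scaling mentioned in item 2: each state/input channel of the regression data is divided by a representative (here, mean absolute) magnitude before (23) is assembled.

```python
import numpy as np

def normalize_columns(Z, scales=None):
    """Scale each column of a data matrix by its mean absolute value (or by given scales)."""
    Z = np.asarray(Z, dtype=float)
    if scales is None:
        scales = np.mean(np.abs(Z), axis=0)
        scales = np.where(scales == 0.0, 1.0, scales)   # guard against all-zero columns
    return Z / scales, scales
```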
## 8 Conclusion
The proposed MPC and RL controllers are able to reduce energy cost significantly, \(\approx 17\%\) in a week-long simulation, over the rule-based baseline controller. Apart from the dramatically lower real-time computational cost of the RL controller compared to the MPC, the tracking and energy cost performance of the two controllers are similar. This similarity in performance is somewhat surprising. Though both controllers are designed to be approximations of the same intractable infinite horizon problem, there are nonetheless significant differences between them, especially in the information the controllers have access to and the objectives they are designed to minimize. It should be noted that the MPC controller has a crucial advantage over the RL controller in our simulations: the RL controller has to implicitly learn to forecast disturbances, while the MPC controller is provided with error-free forecasts. How much MPC's performance will degrade in practice due to inevitable plant-model mismatch is an open question.
Existing work on RL and on MPC tend to lie in their own silos, with comparisons between them for the same application being rare. This paper contributes to such comparisons for a particular application: control of DCEPs. Much more remains to be done, such as examination of robustness to uncertainties.
There are several other avenues for future work. One is to explore nonlinear bases, such as neural networks, for designing an RL controller. Another is to augment the state space with additional signals, especially forecasts, which might improve performance. Of course, such augmentation will also increase the cost and complexity of training the policy. Another avenue for improvement in the RL controller is to reduce the number of chiller switches. In this paper all the chillers are considered to be identical; extending the framework to heterogeneous chillers with distinct performance curves is an area of improvement for both RL and MPC. On the MPC front, an MILP relaxation is a direction to pursue in the future.
|
2307.10519 | Probabilistic Multimodal Depth Estimation Based on Camera-LiDAR Sensor
Fusion | Multi-modal depth estimation is one of the key challenges for endowing
autonomous machines with robust robotic perception capabilities. There have
been outstanding advances in the development of uni-modal depth estimation
techniques based on either monocular cameras, because of their rich resolution,
or LiDAR sensors, due to the precise geometric data they provide. However, each
of these suffers from some inherent drawbacks, such as high sensitivity to
changes in illumination conditions in the case of cameras and limited
resolution for the LiDARs. Sensor fusion can be used to combine the merits and
compensate for the downsides of these two kinds of sensors. Nevertheless,
current fusion methods work at a high level. They process the sensor data
streams independently and combine the high-level estimates obtained for each
sensor. In this paper, we tackle the problem at a low level, fusing the raw
sensor streams, thus obtaining depth estimates which are both dense and
precise, and can be used as a unified multi-modal data source for higher level
estimation problems.
This work proposes a Conditional Random Field model with multiple geometry
and appearance potentials. It seamlessly represents the problem of estimating
dense depth maps from camera and LiDAR data. The model can be optimized
efficiently using the Conjugate Gradient Squared algorithm. The proposed method
was evaluated and compared with the state-of-the-art using the commonly used
KITTI benchmark dataset. | Johan S. Obando-Ceron, Victor Romero-Cano, Sildomar Monteiro | 2023-07-20T01:39:08Z | http://arxiv.org/abs/2307.10519v1 | # Probabilistic Multimodal Depth Estimation Based on Camera-LiDAR Sensor Fusion
###### Abstract
Multi-modal depth estimation is one of the key challenges for endowing autonomous machines with robust robotic perception capabilities. There have been outstanding advances in the development of uni-modal depth estimation techniques based on either monocular cameras, because of their rich resolution, or LiDAR sensors, due to the precise geometric data they provide. However, each of these suffers from some inherent drawbacks, such as high sensitivity to changes in illumination conditions in the case of cameras and limited resolution for the LiDARs. Sensor fusion can be used to combine the merits and compensate for the downsides of these two kinds of sensors. Nevertheless, current fusion methods work at a high level. They process the sensor data streams independently and combine the high-level estimates obtained for each sensor. In this paper, we tackle the problem at a low level, fusing the raw sensor streams, thus obtaining depth estimates which are both dense and precise, and can be used as a unified multi-modal data source for higher level estimation problems.
This work proposes a Conditional Random Field model with multiple geometry and appearance potentials. It seamlessly represents the problem of estimating dense depth maps from camera and LiDAR data. The model can be optimized efficiently using the Conjugate Gradient Squared algorithm. The proposed method was evaluated and compared with the state-of-the-art using the commonly used KITTI benchmark dataset.
## 1 Introduction
Autonomous robots are composed of different modules that allow them to perceive, learn, decide and act within their environment. The perception module processes cues that inform the robot about the appearance and geometry of the environment. Especially when working in outdoor or underground scenarios, these cues must be robust to unseen phenomenon. A fully autonomous robot must execute all operations, monitor itself, and be able to handle all unprecedented events and conditions, such as unexpected objects and debris on the road, unseen environments, adverse weather, etc. Therefore, reliable and robust perception of the surrounding environment is one of the key tasks of autonomous robotics.
Among the main inputs for a perception system are the distances of the robot from multiple points in its environment. This input can be obtained directly from a sensor or estimated by a depth estimation module. Depth estimation can be performed by processing monocular camera images, stereo vision, radar or LiDAR (Light Detector and Ranging) sensors, among others. Although monocular cameras can only be used to generate depth information up-to-a-scale, they are still an important component of a depth estimation system due to their low price and the rich appearance data they provide.
Although a monocular camera is small, low-cost and energy efficient, it is very sensitive to changes in the illumination. Additionally, the accuracy and reliability of depth estimation methods based on monocular images is still far from being practical. For instance, the state-of-the-art RGB-based depth prediction methods [35, 12, 31] produce an average error (measured by the root mean squared error) of over 50 cm in indoor scenarios (e.g., on the NYU-Depth-v2 dataset [59]). Such methods perform even worse outdoors, with at least 4 meters of average error on the Make3D and KITTI datasets [56, 18].
3D LiDAR scanners, on the other hand, can provide accurate geometric information about the environment even when illumination changes occur. Even though, due to their active nature, LiDARs are robust to dark or overexposed scenarios, the generated 3D point cloud is sparse and non-uniformly distributed, which decreases its utility for recognition tasks. In general, each type of sensor has its own weaknesses.
To address the potential fundamental limitations of image-based depth estimation, this paper considers the use of sparse depth measurements, along with RGB data, to reconstruct depth in full resolution. Data fusion techniques have been extensively employed for robust perception systems, where fusing and aggregating data from different sensors is required. Although some approaches to robust perception resort to statistical methods for dealing with data outliers [2], the work presented in this paper belongs to the group that tackles the robust-perception problem by leveraging the complementary nature of passive and active sensor
modalities.
Multi-sensor approaches to robotic perception can be categorised according to the level at which the data from the different sensing modalities is fused in order to obtain the estimate of interest. According to [7], data fusion can be made at the level of symbolic estimates (high level fusion), at the level of features (medium level fusion), or at the level of raw data (low level fusion).
In this paper, a low level fusion method is developed. It explores the complementary relations between the passive and active sensors at the pixel level [14, 53, 8]. The approaches in [49, 52, 61] follow this intuition but require the fused modalities to have similar coverage densities. Our proposed framework provides a procedure for fusing LiDAR and image data independently of the LiDAR data's density.
The main contribution of this paper is a depth regression model that takes both a sparse set of depth samples and RGB images as the inputs, and predicts a full-resolution depth map. This is achieved by modelling the problem of fusing low resolution depth images with high resolution camera images as a Conditional Random Field (CRF).
The intuition behind our CRF formulation is that depth discontinuities in a scene often co-occur with changes in colour or brightness within the associated camera image. Since the camera image is commonly available at much higher resolution, this insight can be used to enhance the resolution and accuracy of the depth image. A depth map will be produced by our approach using three features as illustrated in Fig. 1. The first one is an RGB colour image from the camera sensor, top image Fig. 1 (a). The second one is 2D sparse depth map captured by a LiDAR sensor, middle image Fig. 1 (b). The third feature is a surface normal map generated from the sparse depth samples, bottom image Fig. 1 (c).
The rest of this paper is organized as follows. Section 2 reviews related work on depth estimation. Section 3 explains how the LiDAR points and the camera images were registered. In Section 4, we first introduce our CRF-Fusion framework, then we provide a detailed explanation of the proposed model: the energy potentials that compose our CRF model and its inference machine. The experimental validation, performed on the KITTI dataset, is reported in Section 5. Finally, conclusions and directions for future work are listed in Section 6.
## 2 Related work
Depth estimation from monocular images is a long-standing problem in computer vision. Early works on depth estimation using RGB images usually relied on hand-crafted features and inference on probabilistic graphical models. Classical methods include shape-from-shading [71] and shape-from-defocus [60]. Other early methods were based on hand-tuned models or assumptions about the orientations of the surfaces [4]. Newer methods treat depth estimation as a machine learning problem, most recently using deep artificial neural networks [13, 67]. For instance, Saxena et al. [55] estimated the absolute scales of different image patches and inferred a depth image using a Markov Random Field model. Eigen et al. used a multiscale convolutional network to regress from colour images to depths [13, 12]. Laina et al. used a fully convolutional network architecture based on ResNet [31]. Liu et al. proposed a deep convolutional neural field model that combines deep networks with Markov random fields [36]. Roy et al. combined shallow convolutional networks with regression forests to reduce the need for large training sets [54]. In [68] the proposed attention model is seamlessly integrated with a CRF, allowing end-to-end training of the entire architecture. This approach benefits from a structured attention model which automatically regulates the amount of information transferred between corresponding features at different scales.
The approach of Li et al. [5] combines deep learning features on image patches with hierarchical CRFs defined on a superpixel segmentation of the image. They use pretrained AlexNet [29] features of image patches to predict depth at the centre of the superpixels. A hierarchical CRF refines the depth across individual pixels. Liu et al. [35] also propose a deep structured learning approach that avoids hand-crafted features. They presented a deep structured learning scheme which learns the unary and pairwise potentials of a continuous CRF in a unified deep CNN framework. Liu et al. [37] proposed a discrete-continuous CRF model to take into consideration the relations between adjacent superpixels, e.g., occlusions.
Recent work has also shown the benefit of adopting multi-task learning strategies, e.g. for jointly predicting depth and performing semantic segmentation, ego-motion estimation or surface normal computation [74, 48]. Some
Figure 1: Input features of our framework. We developed a CRF regression model to predict a dense depth image from a single RGB image, and a set of sparse depth samples: (a), (b) and (c) are the input RGB image, a set of sparse depth samples projected on the image plane, and the projected surface normals, respectively
recent papers have proposed unsupervised or weakly supervised methods for reconstructing depth maps [21, 30].
With the rapid development of deep neural networks, monocular depth estimation based on deep learning and computer vision techniques has been widely studied recently and achieved promising performance in terms of accuracy [73]. However, not considering information from other sensors limits the robustness of the estimates. The works mentioned above use different kinds of network frameworks, loss functions, and training strategies, but rely on a single sensory modality. The architecture proposed in this paper uses two sensory modalities.
Fusing data coming from multiple sensors has the potential to improve the robustness of the depth estimates. Ma et al. [41] uses RGB images together with sparse depth information to train a bottleneck network architecture. Compared to imagery-only methods, their approach generates better depth estimation results. Others have investigated depth estimation from colour images augmented with sparse sets of depth measurements using probabilistic graphical models. The techniques described in [70, 11, 3], and [66] are able to fuse the information from both sources to significantly improve the resolution of low quality and sparse range images.
Wang et al. proposed a multi-scale feature fusion method for depth completion [65] using sparse LIDAR data. Ma et al. proposed two methods: a supervised method for depth completion using a ResNet based architecture and a self-supervised method which uses the sparse LiDAR input along with pose estimates to add additional training information based on depth and photometric losses [40].
Although recent methods have achieved impressive progress in terms of evaluation metrics such as the pixel-wise relative error, most of them neglect the geometric constraints in 3D space. This component is considered in our CRF model, which makes this approach different from previous fusion methods.
Providing strong cues from surface information is relevant for improving the accuracy of the depth prediction [32]. Recently, Zhang et al. [72] proposed predicting surface normals and occlusion boundaries using a deep network and further used them to help depth completion in indoor scenes. The works in [51, 69] propose end-to-end deep learning systems to produce dense depth maps from sparse LiDAR data and a colour image taken from outdoor on-road scenes, leveraging surface normals as the intermediate representation. Zhang et al. [72] predicted surface normals by leveraging RGB data, leading to a better prior for depth completion. They ultimately combined these predictions with sparse depth input to generate a complete depth map. Our method takes advantage of the surface normals to improve the performance of the proposed model. [69], in particular, uses a diffusion layer to refine the completions. [38] shows that diffusion models can be interpreted as energy-based models (EBMs). Now, CRFs can also be interpreted as energy-based models parameterized as factor graphs. Therefore, there is a relation between the diffusion and the CRF approach as both can be interpreted as EBMs, but with different formulations and assumptions. For the problem of depth estimation, the CRF approach more naturally captures the prior knowledge of the geometric constraints inherent in the physics underlying the LiDAR data measurements, whereas the diffusion approach relies on a carefully crafted similarity metric, which might make the models unstable and hard to train. In other words, we expect a CRF approach to require less training to achieve equivalent performance as the diffusion-based method.
The problem of fusing LiDAR and image data can be approached as a camera pose estimation problem, where the relation between the 3D LiDAR coordinates and the 2D image coordinates is characterised by camera parameters, such as the position, orientation, and focal length. In [45], an information-theoretic similarity measure was proposed to automatically register 2D optical imagery with 3D LiDAR scans by searching for a suitable camera transformation matrix. The fusion of 3D LiDAR data with stereoscopic images is addressed in [42, 46, 23, 47]. The advantage of stereoscopic depth estimation is its ability to produce dense depth maps of the surroundings by using stereo matching techniques. In [47], for example, a two-stage cascade deep architecture was proposed that first fuses stereo and LiDAR disparity maps and then refines the estimated disparity maps by introducing colour features. In contrast, our method does not rely on a stereo matching algorithm, which tends to be computationally costly.
## 3 Image and LIDAR point cloud registration
This section gives a brief introduction to the process of aligning an image and a LiDAR point cloud, which allows the projection of LiDAR points onto the image plane. As presented in [19], in a robotic platform equipped with both a LiDAR and a camera, the two sensors are synchronized so that simultaneous scans from both sensors are collected. The camera and LiDAR are cross-calibrated so that the point cloud can be projected onto the image plane [19]. Once projected, the LiDAR points are associated with either pixels or groups of pixels, also called superpixels. This section also briefly describes the Simple Linear Iterative Clustering (SLIC) algorithm, by which these superpixels are obtained.
### Point cloud projection
In order to fuse the image and LiDAR data, it is imperative to find mathematical models that represent the spatial correspondence of pixels and 3D points. These models allow us to project the LiDAR point cloud onto the image plane.
The projection \(\mathbf{y}\) of a 3D point \(\mathbf{x}=(x,y,z,1)^{T}\) in rectified and rotated camera coordinates to a point \(\mathbf{y}=(u,v,1)^{T}\) in the \(i^{\prime}\)th camera image is given by
\[\mathbf{y}=\mathbf{P}_{\text{rect}}^{(i)}\mathbf{x}\]
with
\[\mathbf{P}_{rect}^{(i)}=\left(\begin{array}{cccc}f_{u}^{(i)}&0&c_{u}^{(i)}&-f_{u}^{(i)}b_{x}^{(i)}\\ 0&f_{v}^{(i)}&c_{v}^{(i)}&0\\ 0&0&1&0\end{array}\right)\]
being the \(i\)th projection matrix. Here, \(b_{x}^{(i)}\) denotes the baseline with respect to a reference camera. Note that in order to project the 3D point \(\mathbf{x}\), in the camera reference coordinates, to a point \(\mathbf{y}\) on the \(i\)th image plane, the rectifying rotation matrix of the reference camera \(\mathbf{R}_{rect}^{(0)}\) must be considered as well.
\[\mathbf{y}=\mathbf{P}_{rect}^{(i)}\mathbf{R}_{rect}^{(0)}\mathbf{x}\]
Here, \(\mathbf{R}_{rect}^{(0)}\) has been expanded into a 4\(\times\)4 matrix by appending a fourth zero-row and column, and setting \(\mathbf{R}_{rect}^{(0)}(4,4)=1\). We also need to register the laser scanner with respect to the camera's coordinate system. The rigid-body transformation from LiDAR coordinates to camera coordinates is given by
\[\mathbf{T}_{velo}^{cam}=\left(\begin{array}{cc}\mathbf{R}_{velo}^{cam}&\mathbf{t}_{velo}^{cam}\\ 0&1\end{array}\right)\]
Finally, a 3D point \(\mathbf{x}\) in the LiDAR coordinate system gets projected to a point \(y\) in the \(i\)th camera image:
\[\mathbf{y}=\mathbf{P}_{rect}^{(i)}\mathbf{R}_{rect}^{(0)}\mathbf{T}_{velo}^{cam}\mathbf{x}\]
Subsequently, as a preprocessing step, points with a negative value of \(z\) are removed. Then the remaining points can be projected onto the image plane using the projection matrix
\[\left[x^{\prime}\;y^{\prime}\;z^{\prime}\right]^{T}=\mathbf{P}_{rect}^{(i)}\mathbf{R}_{rect}^{(0)}\mathbf{T}_{velo}^{cam}\left[x_{p}\;y_{p}\;z_{p}\;1\right]^{T}\]
The projected pixel coordinates of the LIDAR points can be obtained by
\[\left[x,y\right]=\left[\frac{x^{\prime}}{z^{\prime}},\frac{y^{\prime}}{z^{ \prime}}\right]\]
Fig. 2 portrays the projection of a point cloud onto the image plane.
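A compact sketch of the projection pipeline above, assuming the calibration matrices are given in the KITTI convention; points behind the camera are removed before projecting, as described.

```python
import numpy as np

def project_lidar_to_image(points, P_rect, R_rect, T_velo_cam):
    """Project N x 3 LiDAR points to pixel coordinates of camera i.

    P_rect: 3 x 4 projection matrix; R_rect: 4 x 4 (padded) rectifying rotation;
    T_velo_cam: 4 x 4 rigid transform from LiDAR to camera coordinates.
    Returns pixel coordinates and depths for the points in front of the camera.
    """
    points = np.asarray(points, dtype=float)
    pts_h = np.hstack([points, np.ones((points.shape[0], 1))])  # homogeneous coordinates
    cam = R_rect @ T_velo_cam @ pts_h.T                         # 4 x N, camera frame
    in_front = cam[2, :] > 0.0                                  # discard points with negative depth
    proj = P_rect @ cam[:, in_front]                            # 3 x M
    pixels = (proj[:2, :] / proj[2, :]).T                       # divide by the third row
    return pixels, cam[2, in_front]
```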
### Superpixel segmentation using Simple Linear Iterative Clustering (SLIC)
Considering superpixels rather than individual pixels greatly reduces the complexity of the subsequent image processing tasks. In order to harness the full potential of the use of superpixels, their calculation must be fast, easy to use, and produce high quality segmentations.
Superpixel segmentation algorithms provide an effective way to extract information about local image features. Our framework uses an implementation of the SLIC algorithm to group sparse depth measurements which have been previously projected onto the image plane. SLIC is a simple and parallelisable pixel clustering method, based on the \(k\)-means algorithm, which is used for decomposing an image into a regular grid of visually homogeneous regions or so-called superpixels [1]. As a result, SLIC superpixels provide a regular grouping of image pixels according to their distance both spatially and in the colour space. We use this superpixel segmentation method to assign depth values from a sparse point cloud to all of the pixels within the superpixels.
Applying SLIC to the original image provides a segmentation with as many segments as the number of superpixels set as a hyperparameter. Each segment is identified by an ID that allows individual pixels to be assigned to superpixel segments. Additionally, the coordinates of the segment's centroid and those of its neighbours are also output by a supersegmentation step. Thanks to the superpixel segmentation, each node in our proposed CRF will be associated with a small number of segments, rather than have the extremely large quantity of individual pixels usually present in high definition images.
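As an illustration of this step, the snippet below uses the scikit-image implementation of SLIC and assigns each superpixel the mean of the sparse depths that project into it. It assumes the sparse depth map stores zeros where no LiDAR point projects; superpixels without any LiDAR return keep a depth of zero here.

```python
import numpy as np
from skimage.segmentation import slic

def superpixel_depth(rgb, sparse_depth, n_segments=800, compactness=10.0):
    """Average the sparse LiDAR depths within SLIC superpixels.

    rgb: H x W x 3 image; sparse_depth: H x W array, zero where no LiDAR point projects.
    Returns the superpixel label map and an H x W per-superpixel depth image.
    """
    labels = slic(rgb, n_segments=n_segments, compactness=compactness, start_label=0)
    depth_sp = np.zeros_like(sparse_depth, dtype=float)
    for seg_id in np.unique(labels):
        mask = labels == seg_id
        vals = sparse_depth[mask]
        vals = vals[vals > 0]                    # keep only pixels with a LiDAR measurement
        if vals.size > 0:
            depth_sp[mask] = vals.mean()         # one depth value per superpixel
    return labels, depth_sp
```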
As provided by the super-segmentation step, the number of neighbours around a segment may vary, depending on the spatial homogeneity of the colour in the image and the expected number of segments. After segmentation, the grid-like structure of the original images is lost.
Figure 2: Given an (a) input image and its corresponding depth, where dark blue indicates closer distances, our work is focused on densifying the input sparse depth. We combine the (b) superpixel segmentation with the sparse input depth to obtain our initial superpixel depth. We use a CRF to enhance the resolution and accuracy of the depth image considering information from the 4 neighbours for each superpixel. In (d) we can see how we achieve significant improvements with regard to the sparse input depth. Each pixel inside the superpixels has the same depth value.
The grid-structure of a CRF requires that each node, and therefore each segment, be associated with only four neighbours. Thus it is necessary to find the four nearest neighbours corresponding to each segment, since the grid-like structure is not well defined.
In order to determine which superpixels are the four closest neighbours, the angles between the centroid of each segment and its neighbours are calculated. We select as neighbours the four segments whose angles have the least difference from 0, 90, 180 and 270 degrees, respectively. For two examples of nodes, depicted as red points, Fig. 3 illustrates their closest neighbours, whose centroids are represented by yellow dots.
Fig. 4 shows the 4 closest neighbours selected. These neighbours are represented by dark green lines and dark yellow dots. Note that for superpixels located at the corners or on the edges, only the two or three closest neighbours, respectively, need to be found.
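The neighbour-selection rule can be sketched as follows. The adjacency lists produced by the super-segmentation step, and the removal of duplicate selections at corners and edges, are assumptions of this illustration.

```python
import numpy as np

def four_nearest_neighbours(centroids, adjacency):
    """Select up to four neighbours per superpixel, closest in angle to
    0, 90, 180 and 270 degrees.

    centroids : (N, 2) array of (row, col) superpixel centroids.
    adjacency : list of lists; adjacency[i] holds the ids of superpixels
                touching superpixel i (from the segmentation step).
    """
    targets = np.array([0.0, 90.0, 180.0, 270.0])
    selected = {}
    for i, neigh in enumerate(adjacency):
        if not neigh:
            selected[i] = []
            continue
        neigh = list(neigh)
        dy = centroids[neigh, 0] - centroids[i, 0]
        dx = centroids[neigh, 1] - centroids[i, 1]
        ang = np.degrees(np.arctan2(dy, dx)) % 360.0
        chosen = []
        for t in targets:
            # angular difference wrapped to [0, 180] degrees
            diff = np.abs((ang - t + 180.0) % 360.0 - 180.0)
            chosen.append(neigh[int(np.argmin(diff))])
        # Corner/edge superpixels may pick the same neighbour twice;
        # keeping unique ids leaves 2 or 3 neighbours in those cases.
        selected[i] = list(dict.fromkeys(chosen))
    return selected
```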
## 4 CRF-based camera-LIDAR fusion for depth estimation
In this paper, depth estimation is formulated as a superpixel-level inference task on a modified Conditional Random Field (CRF). Our proposed model is a multi-sensor extension of the classical pairwise CRF. In this section, we first briefly introduce the CRF model. Then we show how to fuse the information of an image and a sparse LIDAR point cloud with our novel CRF framework.
### Overview
The Conditional Random Field (CRF) is a type of undirected probabilistic graphical model which is widely used for solving labeling problems. Formally, let \(\mathbf{X}=\left\{X_{1},X_{2},\ldots,X_{N}\right\}\) be a set of discrete random variables to be inferred from an observation or input tensor \(\mathbf{Y}\), which in turn is composed of the observation variables \(c_{i}\) and \(y_{i}\), where \(i\) is an index over superpixels. For each superpixel \(i\), the variable \(c_{i}\) corresponds to an observed three-dimensional colour value and \(y_{i}\) is an observed range measurement.
The goal of our framework is to infer the depth of each pixel in a single image depicting general scenes. Following the work of [11, 43] we make the common assumption that an image is composed of small homogeneous regions (superpixels) and consider a graphical model composed of nodes defined on superpixels. Note that our framework is flexible and can estimate depth values on either pixels or superpixels.
The remaining question is how to parametrize this undirected graph. Because the interaction between adjacent nodes in the graph is not directed, there is no reason to use a standard Conditional Probability Distribution (CPD), in which one represents the distribution over one node given the others. Rather, we need a more symmetric parametrization. Intuitively, we want our model to capture the affinities between the depth estimates of the superpixels in a given neighbourhood. These affinities can be captured as follows: Let \(\tilde{P}(X,Y)\) be an unnormalized Gibbs joint distribution parametrized as a product of factors \(\Phi\), where
\[\Phi=\left\{\phi_{1}\left(D_{1}\right),\ldots,\phi_{k}\left(D_{k}\right) \right\},\]
and
\[\tilde{P}(X,Y)=\prod_{i=1}^{m}\phi_{i}\left(D_{i}\right).\]
We can then write a conditional probability distribution of the depth estimates \(X\) given the observations \(Y\) using the previously introduced Gibbs distribution, as follows:
\[Pr(X|Y)=\frac{\tilde{P}(X,Y)}{Z(Y)}\]
where,
\[Z(Y)=\sum_{X}\tilde{P}(X,Y).\]
Here, \(Z(Y)\), also known as 'the partition function', works as a normalizing factor which marginalizes \(X\) from \(\tilde{P}(X,Y)\), allowing the calculation of the probability distribution \(P(X|Y)\):
\[P(X|Y)=\frac{1}{\sum_{X}\tilde{P}(X,Y)}\tilde{P}(X,Y).\]
Therefore, similar to conventional CRFs, we model the conditional probability distribution of the data with the following density function:
\[\mathrm{P}(\mathbf{X}|\mathbf{Y})=\frac{1}{Z(\mathbf{Y})}\exp(-E(\mathbf{X},\mathbf{Y}))\]
Figure 4: Selected 4 nearest neighbours for super pixel nodes (red dots) in Fig. 3. Dark green lines connect nodes with their selected neighbours.
Figure 3: Two examples of superpixels and their neighbours. The red dots in A and B are nodes assigned to two different super pixels, while the green lines represent their corresponding nearest neighbours. Note that the super segmentation used allows a superpixel node to have more than four neighbours.
where \(E\) is the energy function and \(Z\) is the partition function defined by
\[\mathrm{Z(Y)}=\int_{\mathrm{Y}}\exp\{-E(\mathrm{X},\mathrm{Y})\}\mathrm{d}Y.\]
Since the variables are continuous, this integral can be solved analytically; this is different from the discrete case, in which approximation methods need to be applied. To predict the depths of a new image, we solve the following maximum a posteriori (MAP) inference problem:
\[\mathbf{x}^{\star}=\operatorname*{arg\,max}_{\mathbf{x}}\mathrm{P(}\mathbf{X} \mathbf{|Y)}.\]
To simplify the solution for the energy function, one can take the negative logarithm of the left hand side and right hand side of the equation of the probability distribution \(\Pr(X|Y)\): then the problem of maximizing the conditional probability becomes an energy minimization problem. Therefore, maximizing the probability distribution \(\Pr(\mathbf{X}\mathbf{|Y)}\) is equivalent to minimizing the corresponding energy function:
\[\mathbf{x}^{\star}=\operatorname*{arg\,min}_{\mathbf{x}}E(\mathbf{X},\mathbf{ Y}).\]
We formulate the energy function as a typical combination of unary potentials \(U\) and pairwise potentials \(V\) over the nodes (superpixels) \(N\) and edges \(S\) of the image \(x\):
\[E(\mathbf{X},\mathbf{Y})=\sum_{p\in\mathcal{N}}U\left(x_{p},\mathbf{y}\right) +\sum_{(p,q)\in S}V\left(x_{p},x_{q},\mathbf{y}\right)\]
The unary term \(U\) aims to regress the depth value from a single superpixel. The pairwise term \(V\) encourages neighbouring superpixels with similar appearances to take similar depths [11, 23].
### Potential functions
The proposed multi-modal depth estimation model is composed of unary and pairwise potentials. For an input image, which has been over-segmented into \(n\) superpixels, we define a unary potential for each superpixel. The pairwise potentials are defined over the four-neighbour vicinity of each superpixel.
The unary potentials are built by aggregating all LiDAR observations inside each superpixel. The pairwise part is composed of similarity vectors, each with \(K\) components, that measure the agreement between different features of neighbouring superpixel pairs. Therefore, we explicitly model the relations between neighbouring superpixels through pairwise potentials. In the following, we describe the details of the potentials involved in our energy function.
#### 4.2.1 Unary potential
The unary potential is constructed from the LiDAR sensor measurements by considering the least square loss between the estimated \(x_{i}\) and observed \(y_{i}\) depth values:
\[\Phi(\mathbf{x},\mathbf{y})=\sum_{i\in\mathcal{L}}\sigma_{i}\left(x_{i}-y_{i}\right)^{2}\] \[\Phi(\mathbf{x},\mathbf{y})=\|\mathbf{W}(\mathbf{x}-\mathbf{y})\|^{2}\]
Figure 5: **Illustration of the proposed model. On the top left is a fused view of the image and LIDAR point cloud on superpixels. On the top right are the normal surface map and RGB inputs used in the pairwise potentials. On the top middle is the graph structure of the CRF: The yellow nodes represent the centroids of the image superpixels and the green branches the connections between them. The outputs of the unary part and the pairwise part are then fed to the CRF structured loss layer, which minimizes the corresponding energy function. On the bottom left is the probabilistic output, a dense depth map and uncertainty estimation map (see text for details).**
where \(\mathcal{L}\) is the set of indices for which a depth measurement is available, and \(\sigma_{i}\) is a constant weight placed on the depth measurements. This potential measures the quadratic distance between the estimated range \(X\) and the measured range \(Y\), where available. Finally, in order to write the unary potential in a more efficient matrix form, we define the diagonal matrix \(W\) with entries
\[\mathbf{W}_{i,i}=\left\{\begin{array}{ll}\sigma_{i}&\text{if }i\in\mathcal{L} \\ 0&\text{otherwise}\end{array}\right.\]
#### 4.2.2 Colour pairwise potential
We construct a pairwise potential from \(K\) types of similarity observations, each of which enforces smoothness by exploiting colour consistency features of the neighbouring superpixels. This pairwise potential can be written as
\[\Psi^{c}(\mathbf{x},\mathbf{I})=\sum_{i}\sum_{j\in\mathcal{N}(i)}e_{i,j}\left(x_{i}-x_{j}\right)^{2}\]
\[\Psi^{c}(\mathbf{x},\mathbf{I})=\|\mathbf{Sx}\|^{2}\]
where \(I\) is an RGB image, \(\mathcal{N}(i)\) is the set of horizontal and vertical neighbours of \(i\), and each row of \(S\) represents the weighting factors for pairs of adjacent range nodes. As the edge strength between nodes, we use an exponentiated \(L_{2}\) norm of the difference in pixel appearance.
\[e_{i,j}=\exp\left(-\frac{\left\|\mathbf{c}_{i}-\mathbf{c}_{j}\right\|^{2}}{\sigma_{d}^{2}}\right)\]
where \(\mathbf{c}_{i}\) is the RGB colour vector of pixel \(i\) and \(\sigma_{d}\) is a tuning parameter. A small value of \(\sigma_{d}\) increases the sensitivity to changes in the image. Thanks to this potential, the lack of content or features in the RGB image is considered by our model as indicative of a homogeneous depth distribution, in other words, a planar surface.
#### 4.2.3 Surface-normal pairwise potential
The mathematical formulation of this potential is similar to that of the previous colour potential. However, the surface-normal potential considers surface-normal similarities instead of colour. The weighting factors \(nr_{i,j}\) for this case are formulated using the cosine similarity, which is a measure of the similarity between two non-zero vectors of an inner product space that employs the cosine of the angle between them. The cosine of 0 is 1, and it is less than 1 for any angle in the interval \((0,\pi]\) radians. It is thus a measurement of orientation instead of magnitude [58]. The cosine of the angle between two non-zero vectors can be found by using the Euclidean dot product formula:
\[\mathbf{A}\cdot\mathbf{B}=\|\mathbf{A}\|\|\mathbf{B}\|\cos\theta\]
Therefore, the cosine similarity can be expressed by
\[\cos(\theta)=\frac{\mathbf{A}\cdot\mathbf{B}}{\|\mathbf{A}\|\|\mathbf{B}\|}=\frac{\sum_{i=1}^{n}A_{i}B_{i}}{\sqrt{\sum_{i=1}^{n}A_{i}^{2}}\sqrt{\sum_{i=1}^{n}B_{i}^{2}}}\]
where \(A_{i}\) and \(B_{i}\) are the components of vectors \(A\) and \(B\), respectively. Finally, we define our surface normal potential by the following equations.
\[\Psi^{n}(\mathbf{x},\mathbf{In})=\sum_{i}\sum_{j\in\mathcal{N}(i)}nr_{i,j}\left(x_{i}-x_{j}\right)^{2}\]
\[\Psi^{n}(\mathbf{x},\mathbf{In})=\|\mathbf{Px}\|^{2}\]
\[nr_{i,j}=\frac{\sum_{k=1}^{n}In_{i,k}In_{j,k}}{\sqrt{\sum_{k=1}^{n}In_{i,k}^{2}}\sqrt{\sum_{k=1}^{n}In_{j,k}^{2}}}\]
#### 4.2.4 Depth pairwise potential
This pairwise potential encodes a smoothness prior over depth estimates which encourages neighbouring superpixels in the image to have similar depths. Usually, pairwise potentials are only related to the colour difference between pairs of superpixels. However, depth smoothness is a valid hypothesis which can potentially enhance depth inference.
To enforce depth smoothness, a distance-aware Potts model was adopted. Neighbouring points with smaller distances are considered to be more likely to have the same depth.
The mathematical formulation of this potential is similar to the colour pairwise potential, as it follows the Potts model:
\[\Psi^{d}(\mathbf{x},\mathbf{D})=\sum_{i}\sum_{j\in\mathcal{N}(i)}dp_{i,j}\left(x_{i}-x_{j}\right)^{2}\]
and the weighting factor \(dp_{i,j}\) for this case is formulated as
\[dp_{i,j}=\exp\left(-\frac{\left\|\mathbf{p}_{i}-\mathbf{p}_{j}\right\|^{2}}{\sigma_{p}^{2}}\right)\]
where \(\mathbf{p}_{i}\) is the 3D location vector of the LiDAR point \(i\) and \(\sigma_{p}\) is a parameter controlling the strength of enforcing close points to have similar depth values.
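The three edge weights introduced above (colour, surface-normal and depth) can be computed per pair of neighbouring superpixels as in the following sketch; the default values of \(\sigma_{d}\) and \(\sigma_{p}\) are placeholders rather than the tuned values used in our experiments.

```python
import numpy as np

def colour_weight(c_i, c_j, sigma_d=0.1):
    """Colour affinity e_ij between the mean RGB vectors of two superpixels."""
    return float(np.exp(-np.sum((c_i - c_j) ** 2) / sigma_d ** 2))

def normal_weight(n_i, n_j):
    """Cosine similarity nr_ij between two mean surface normals."""
    return float(np.dot(n_i, n_j) /
                 (np.linalg.norm(n_i) * np.linalg.norm(n_j) + 1e-12))

def depth_weight(p_i, p_j, sigma_p=1.0):
    """Distance-aware Potts weight dp_ij between two 3D point centroids."""
    return float(np.exp(-np.sum((p_i - p_j) ** 2) / sigma_p ** 2))
```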
#### 4.2.5 Uncertainty potential
Depth uncertainty estimation is important for refining depth estimation [64; 16], and in safety critical systems [28]. It allows an agent to identify unknowns in an environment in order to reach optimal decisions. Our method provides uncertainties for the estimates of the pixel-wise depths by taking into account the number of LiDAR points present for each superpixel. The uncertainty potential is similar to the unary potential. It is constructed from the number of LiDAR points projected onto a superpixel, and employs the following least square loss:
\[U^{e}(\mathbf{x},\mathbf{y})=\sum_{i\in\mathcal{L}}\sigma_{i}\left(x_{i}-unc_{ i}\right)^{2}\]
\[U^{e}(\mathbf{x},\mathbf{y})=\|\mathbf{W}(\mathbf{x}-\mathbf{unc})\|^{2}\]
where \(\mathbf{unc}\) is defined as follows:
\[\mathbf{unc}_{i,i}=\left\{\begin{array}{ll}\sigma_{i}&\text{if the number of points \(P\) projected onto SPx is }0\\ w_{i}&\text{if the number of points \(P\) projected onto SPx is }1\\ \text{mean}&\text{otherwise}\end{array}\right.\]
where \(P\) is a 3D point and SPx is a superpixel. In locations with accurate and sufficiently many LiDAR points, the model will produce depth predictions with a high confidence. This uncertainty estimation provides a measure of how confident the model is about the depth estimation. This results in an overall better performance, since estimates with high uncertainty can be neglected by higher-level tasks that use the estimated depth maps as an input.
### Optimization
With the unary and the pairwise potentials defined, we can now write the energy function as
\[E(\mathbf{X},\mathbf{Y})=\alpha\,\Phi(\mathbf{x},\mathbf{y})+\beta\,\Psi^{\mathrm{c}}(\mathbf{x},\mathbf{I})+\gamma\,\Psi^{\mathrm{n}}(\mathbf{x},\mathbf{In})+\delta\,\Psi^{d}(\mathbf{x},\mathbf{D}) \tag{1}\]
The scalars \(\alpha\), \(\beta\), \(\gamma\), \(\delta\in\) [0,1] are weightings for the four terms. We may further expand the unary and pairwise potentials to
\[\Phi(\mathbf{x},\mathbf{y})=\alpha(\mathbf{x}^{\mathrm{T}}\mathbf{W}^{\mathrm{T}}\mathbf{W}\mathbf{x}-2\mathbf{z}^{\mathrm{T}}\mathbf{W}^{\mathrm{T}}\mathbf{W}\mathbf{x}+\mathbf{z}^{\mathrm{T}}\mathbf{W}^{\mathrm{T}}\mathbf{W}\mathbf{z}) \tag{2}\]
\[\Psi^{\mathrm{c}}(\mathbf{x},\mathbf{I})=\beta(\mathbf{x}^{\mathrm{T}}\mathbf{S}^{\mathrm{T}}\mathbf{S}\mathbf{x}) \tag{3}\]
\[\Psi^{\mathrm{n}}(\mathbf{x},\mathbf{In})=\gamma(\mathbf{x}^{\mathrm{T}}\mathbf{P}^{\mathrm{T}}\mathbf{P}\mathbf{x}) \tag{4}\]
\[\Psi^{d}(\mathbf{x},\mathbf{D})=\delta(\mathbf{x}^{\mathrm{T}}\mathbf{D}^{\mathrm{T}}\mathbf{D}\mathbf{x}) \tag{5}\]
We shall pose the problem as one of finding the optimal range vector \(\mathbf{x}^{*}\) such that:
\[\mathbf{x}^{*}=\operatorname*{argmin}_{\mathbf{x}}\left\{E(\mathbf{X}, \mathbf{Y})\right\}\]
Substituting equations 2, 3, 4 and 5 into 1 and solving for \(x\) reduces the problem to: \(\mathbf{Ax}=\mathbf{b}\) where
\[\mathbf{A}=\alpha(\mathbf{W}^{\mathrm{T}}\mathbf{W})+\beta(\mathbf{S}^{ \mathrm{T}}\mathbf{S})+\gamma(\mathbf{P}^{\mathrm{T}}\mathbf{P})+\delta( \mathbf{D}^{\mathrm{T}}\mathbf{D})\]
\[\mathbf{b}=\alpha(\mathbf{W}^{\mathrm{T}}\mathbf{W}\mathbf{z})\]
All we need to do to perform the optimization is to solve a large sparse linear system. The methods for solving sparse systems are divided into two categories: direct and iterative. Direct methods are robust but require large amounts of memory as the size of the problem grows. On the other hand, iterative methods provide better performance but may exhibit numerical problems [20, 10]. In the present paper, the fast Conjugate Gradient Squared algorithm, a Krylov-subspace variant of the conjugate gradient method of Hestenes and Stiefel [25, 63], is employed to solve the energy minimization problem.
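A possible assembly of the sparse system and its iterative solution with SciPy is sketched below. The construction of the edge matrices (one row per neighbouring pair with \(\pm\sqrt{w}\) at the two nodes, so that \(\|\mathbf{Sx}\|^{2}=\sum w\,(x_{p}-x_{q})^{2}\)) and the default weightings are illustrative assumptions; negative surface-normal similarities are clipped to zero so that the square roots remain real.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import cgs

def solve_depth(n_nodes, obs_idx, obs_depth, edges, e_w, n_w, d_w,
                alpha=1.0, beta=0.5, gamma=0.5, delta=0.5, sigma=1.0):
    """Minimise the CRF energy by solving the sparse system A x = b.

    obs_idx, obs_depth : indices and values of superpixels with LiDAR depth.
    edges              : list of (p, q) neighbouring superpixel pairs.
    e_w, n_w, d_w      : per-edge colour / normal / depth weights.
    """
    # Unary term: diagonal W with sigma on observed entries, 0 elsewhere.
    w_diag = np.zeros(n_nodes)
    w_diag[obs_idx] = sigma
    W = sp.diags(w_diag)
    z = np.zeros(n_nodes)
    z[obs_idx] = obs_depth

    def edge_matrix(weights):
        # Row k encodes sqrt(w_k)(x_p - x_q), so ||Sx||^2 = sum_k w_k (x_p - x_q)^2.
        rows, cols, vals = [], [], []
        for k, (p, q) in enumerate(edges):
            r = np.sqrt(max(float(weights[k]), 0.0))   # clip negative weights
            rows += [k, k]
            cols += [p, q]
            vals += [r, -r]
        return sp.coo_matrix((vals, (rows, cols)),
                             shape=(len(edges), n_nodes)).tocsr()

    S, P, D = edge_matrix(e_w), edge_matrix(n_w), edge_matrix(d_w)

    A = alpha * (W.T @ W) + beta * (S.T @ S) + gamma * (P.T @ P) + delta * (D.T @ D)
    b = alpha * (W.T @ W @ z)
    x, info = cgs(A, b)          # Conjugate Gradient Squared solve
    return x
```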
### Pseudo-code
Algorithm 1 provides the complete pseudo-code for our proposed framework, which has been previously illustrated in Figure 5. In this algorithm, lines 1 to 5 perform the pre-processing, which includes gathering the multi-modal raw data, building a connection graph between pairs of adjacent superpixels, and projecting the clustered LiDAR points onto the image space.
Lines 6 to 11 constitute the core of the approach. They include constructing the cost function using different potentials (unary and pairwise) to obtain the complete CRF for the depth estimation. The objective of the pairwise potentials is to smooth the depth regressed from the unary part based on the neighbouring superpixels. The pairwise potential functions are based on standard CRF vertex and edge feature functions studied extensively in [50] and other papers. Our model uses both the content information of the superpixels and relation information between them to infer the depth.
```
Input: RGB Image and Sparse 3D Point Cloud
Output: Dense Point Cloud
1  Compute point cloud normal vectors
2  Perform superpixel segmentation
3  Build a set of edges and nodes considering the definition of the 4-neighbourhood
4  z \(\leftarrow\) Project clustered PCL onto image plane
5  Initialize uncertainty depth map for the whole image
6  foreach superpixel (node) do
7      Calculate unary potential
8      Calculate Colour pairwise potential
9      Calculate Surface-normal pairwise potential
10     Calculate Depth pairwise potential
11     Calculate Uncertainty potential
12 Infer 2D dense depth map
```
**Algorithm 1** UAOFusion Network: A multimodal CRF-based method for Camera-LiDAR depth estimation
## 5 Results and discussion
We evaluate our approach on the raw sequences of the KITTI benchmark, which is a popular dataset for single image depth map prediction. The sequences contain stereo imagery taken from a car driving in an urban scenario. The dataset also provides 3D laser measurements from a Velodyne laser scanner, which we use as ground-truth measurements (projected into the stereo images using the given intrinsics and extrinsics in KITTI). This dataset has been used to train and evaluate the state-of-the-art methods and allows quantitative comparisons.
First, we evaluate the prediction accuracy of our proposed method with different potentials in Section 5.2. Second, in Section 5.3 we explore the impact on the depth estimation of the number of sparse depth samples and the number of superpixels. Third, Section 5.5 compares our approach to state-of-the-art methods on the KITTI dataset. Lastly, in Sections 5.6 and 5.7, we demonstrate two use cases of our proposed algorithm, one for creating LiDAR super-resolution from sensor data provided by the KITTI dataset and another one for a dataset collected in the context of this work.
### Evaluation Metrics
We evaluate the accuracy of our method in depth prediction using the 3D laser ground truth on the test images. We use the following depth evaluation metrics: root mean squared error (RMSE), mean absolute error (MAE) and mean absolute relative error (REL). Among these, RMSE is the most important indicator and is chosen to rank submissions on the leader-board, since it measures error directly on depth and penalizes more at larger distances, where depth measurement is more challenging. These metrics were used by [13, 30, 51, 17] to estimate the accuracy of monocular depth prediction.
\[RMSE=\sqrt{\frac{1}{|T|}\sum_{d\in T}\|\hat{d}-d\|^{2}}\]
\[MAE=\frac{1}{|T|}\sum_{d\in T}\left|\hat{d}-d\right|\]
\[REL=\frac{1}{|T|}\sum_{d\in T}\left(\frac{\left|\hat{d}-d\right|}{\hat{d}}\right)\]
Here, \(d\) is the ground truth depth, \(\hat{d}\) is the estimated depth, and \(T\) denotes the set of all points in the test set images. In order to compare our results with those of Eigen et al. [13] and Godard et al. [21], we crop our image to the evaluation crop applied by Eigen et al. We also use the same resolution of the ground truth depth image and cap the predicted depth at 80 m [21].
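These metrics, together with the evaluation cap, can be implemented as in the sketch below; the masking convention (evaluating only pixels with a positive ground-truth depth) is an assumption, and REL is normalised by the estimated depth, following the definition given above.

```python
import numpy as np

def depth_metrics(pred, gt, cap=80.0):
    """RMSE, MAE and REL over pixels with a valid ground-truth depth."""
    mask = gt > 0                         # evaluate only where ground truth exists
    d_hat = np.minimum(pred[mask], cap)   # cap the predicted depth at 80 m
    d = gt[mask]
    rmse = np.sqrt(np.mean((d_hat - d) ** 2))
    mae = np.mean(np.abs(d_hat - d))
    rel = np.mean(np.abs(d_hat - d) / d_hat)   # normalised by the estimate, as above
    return rmse, mae, rel
```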
### Architecture Evaluation
This section presents an empirical study of the impact on the accuracy of the depth prediction of different choices for the potential functions and hyperparameters. In the first experiment, we compare the impact of sequentially adding our proposed pairwise potentials. We first evaluate a model with only unary and colour pairwise potentials. Then we added the surface-normal pairwise potential, and finally the depth pairwise potential is included. As shown in Table 1, the RMSE is improved after adding each pairwise potential.
### The number of superpixels
In this section, we explore the relation between the prediction accuracy and the number of available depth samples and the number of superpixels.
As displayed in Fig. 8 and Table 2, a greater number of superpixels yields better results in the error measurements. Although a larger number of sparse depth observations improves the quality of the depth map, the performance converges when the number of superpixels is more than 5000, which is about 1.5% of the total number of pixels. We ran an exhaustive evaluation of our method for different numbers of superpixels. Fig. 8 clearly shows that our method's error decreases with an increased number of superpixels.
### Sub-sampling 3D depth points
We performed a quantitative analysis of the impact of observed 3D point sparsity on the error of our proposed method, by decreasing the amount of 3D points considered during inference. As shown in Fig. 9, our method's error decreases with an increased number of 3D depth points,
\begin{table}
\begin{tabular}{l l l} \hline \hline Algorithm & Potential functions & RMSE \\ \hline
**Ours** & I & 865.31 \\
**Ours** & II & 854.24 \\
**Ours** & III & 849.39 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Depth completion errors [mm] after adding pairwise potentials (lower is better)
Figure 6: Qualitative evaluation of the impact of the pairwise potentials defined as CRF terms. In row order: 1st: Pairwise potential I, penalizes dissimilar depth estimates of neighbouring pixels which have similar colours in the RGB image, 2nd: pairwise potential II, penalizes the depth differences between neighbouring superpixels whose normal surface vectors have large cosine similarities, 3rd: pairwise potential III, penalizes neighbouring superpixels with large observed depth differences.
enabling a better dense depth map estimation. From Fig. 9, we can argue that the amount of 3D depth points is really important for an accurate depth map estimation. Even though our estimation error increases with the sparsity of the depth observations, our method manages to provide state-of-the-art performance even when the depth observations are sampled down to 40%.
### Algorithm Evaluation for Depth Completion
The distances in the KITTI dataset are larger than in other depth estimation datasets, e.g. the NYU-Depth-V2 dataset, which makes the KITTI odometry dataset more challenging for the depth estimation task.
The performance of our method and those of other existing methods on the KITTI dataset are shown in Table 3. Table 3 shows that the proposed method performs competitively with, and in some cases outperforms, other depth map estimation approaches that are well-accepted in the robotics community. Our model relies on the number of superpixels and the resolution of the input data sources. This means that the model's performance will increase if we increase the number of superpixels, the image resolution, and the density of the LiDAR data.
### Algorithm Evaluation for LiDAR Super-Resolution
We present another demonstration of our method in super-resolution of LiDAR measurements. 3D LiDARs have a low vertical angular resolution and thus generate a
\begin{table}
\begin{tabular}{l l l} \hline \hline Algorithm & \#Superpixels & RMSE \\ \hline
**Ours** & 1200 & 1370.27 \\
**Ours** & 2400 & 1050.55 \\
**Ours** & 5500 & 849.39 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Depth completion errors [mm] for different number of superpixels (lower is better)
\begin{table}
\begin{tabular}{l l l} \hline \hline Algorithm & RMSE & MAE \\ \hline Semantically guided & & \\ depth upsampling [57] & 2312.57 & 605.47 \\ Convolutional spatial & & \\ propagation network (CSPN) [9] & 1019.64 & 279.46 \\ Hierarchical multi-scale & & \\ sparsity-invariant network (HMS-Net) [27] & 841.77 & 253.47 \\ End-to-end sparse-to-dense & & \\ network (S2DNet) [22] & 830.57 & 247.85 \\ Self-supervised & & \\ sparse-to-dense network [40] & 814.73 & 249.95 \\
**Ours** & 849.39 & 263.31 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Depth completion errors [mm] by different methods on the test set of KITTI depth completion benchmark (lower is better)
Figure 8: Convergence for different number of superpixels.
Figure 7: Visual comparison of dense depth maps produced by the CRF framework when varying the size of the superpixels. From top to bottom, 1200, 2400 and 5500 superpixels.
Figure 9: Convergence for different number of 3D depth points projected into 2D RGB images. The amount of 3D depth points used is represented as a percentage (%). 100% means we are using all the 3D depth points projected into the 2D image.
vertically sparse point cloud. We use all measurements in the sparse depth image and RGB images as input to our framework. An example is shown in Fig. 11. The cars are much more recognizable in the prediction than in the raw scans.
On the other hand, starting from a LiDAR Super-Resolution map we can generate a 3D reconstruction of the scene. The reconstruction of three-dimensional (3D) scenes has many important applications, such as autonomous navigation [24], environmental monitoring [44] and other computer vision tasks [26; 62].
Therefore, a dense and accurate model of the environment is crucial for autonomous vehicles. In fact, imprecise representations of the vehicle's surroundings may lead to unexpected situations that could endanger the passengers. In this paper, the 3D model is generated using a combination of image and range data, a sensor fusion approach that exploits the strengths of each modality in order to overcome their individual limitations. Images normally have higher resolution and richer visual information than range data, whereas range data are noisy and sparse but directly contain 3D information.
The qualitative and quantitative results presented here suggest that our system provides 3D reconstructions of reasonable quality. Following [41], we use a random subset of 2000 images from the test sequences for evaluation. We evaluate only the bottom 912\(\times\)228 crop, since no depth is available at the top of the image, and only evaluate the pixels with ground truth. The performance of our approach and state-of-the-art depth completion methods are presented in Table 4.
In Table 4, the RMSE values of the Sparse-to-dense, CSPN and S2DNet methods are slightly better (lower) than that of our approach. All three methods solve the depth estimation task through deep learning algorithms, which require large amounts of data, which can be very costly in practice, and additional techniques like data augmentation to improve their performance. For example, S2DNet uses a dataset composed of 1,070,568 images, while our approach only used 42% of that amount of data plus the LiDAR information. Additionally, these deep learning methods make use of advanced computing resources, which limits their use in real-world applications.
Our method is very competitive with deep learning models, as is shown in Table 4, without demanding a large amount of data or additional strategies. Moreover, we highlight the fact that our method's evaluation does not consider the full image data. As displayed in Fig. 8, our method converges
\begin{table}
\begin{tabular}{l l l l} \hline \hline Algorithm & RMSE[m] & REL & Log10 \\ \hline Multi-modal Auto-Encoders [6] & 7.14 & 0.179 & - \\ Residual of residual network [33] & 4.51 & 0.113 & 0.049 \\ Residual Up-Projection [15] & 3.67 & 0.072 & - \\ Sparse-to-dense [41] & 3.37 & 0.073 & - \\ CSPN [9] & 3.24 & 0.059 & - \\ S2DNet [22] & 3.11 & 0.069 & 0.038 \\
**Ours** & 3.59 & 0.072 & 0.041 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Depth estimation errors [m] by different methods on the test set of KITTI depth estimation benchmark (lower is better)
Figure 11: LiDAR super-resolution. Creating dense point clouds from sparse raw measurements. From top to bottom: RGB image, raw depth map, predicted depth and ground truth depth map. Distant cars are almost invisible in the raw depth map, but are easily recognizable in the predicted depth map
Figure 10: Depth completion and uncertainty estimates of our approach on the KITTI raw test set. From top to bottom: RGB and raw depth projected onto the image; high-resolution depth map; raw uncertainty; and estimated uncertainty map.
to a better solution when we use 6000 superpixels instead of 5500. We do not evaluate further because the trend is clear: the more superpixels we use, the better the predicted dense depth map.
### Depth estimation with very sparse LiDAR data: The UAO LiDAR-RGB Dataset
Thus far, we have sampled the depth from high-quality LiDAR depth maps, but in practice, sparse depth inputs may come from less reliable sources. Therefore we provide a qualitative evaluation of this model on our own well-calibrated LiDAR and RGB dataset. We use a 16-beam LiDAR along with a Stereolabs ZED Mini camera with 1280\(\times\)720 resolution. This dataset enables us to demonstrate the stability and robustness of the proposed model in particularly challenging scenarios. The scenes were recorded with a lower resolution of the camera and the LiDAR sensor in comparison to the KITTI benchmark.
Notably, the proposed algorithm is able to estimate a dense depth map of indoor and outdoor environments using colour and sparse depth data. The experimental results are shown in Fig. 12, Fig. 13 and Fig. 14. Dark red indicates farther distances and dark blue indicates closer distances.
Despite the lower number of LiDAR channels, the proposed method has provided accurate depth information even under challenging outdoor conditions, as shown in Fig. 14. In this scene there is a lot of variability in terms of the light and shadows generated by the environment and the weather itself.
After a close look at Fig. 12, Fig. 13 and Fig. 14, it is noticeable that no depth observations from the LiDAR are available at the top and bottom locations of the colour image. After inference, the depth estimates at those locations, shown in the bottom images, are consistent with the information provided by the image. We can conclude that the framework proposed here works reliably for the depth prediction task. Additionally, it also solves the depth completion problem, as it is able to deal with highly sparse input point clouds projected onto the image space.
of data, such as time series or videos. This can be useful for depth estimation using camera-LiDAR fusion, as it can take into account temporal information in the data. However, both higher-order and dynamic CRFs require more computational resources and are more difficult to train than pairwise CRFs (also known as Markov CRFs), which only consider interactions between adjacent variables. Since our work focuses on robotics applications, in which computational performance is a hard constraint, instead of trying to capture long-term dependencies in the data, we leverage the potential of modeling both appearance and geometric constraints from multimodal sensor observations. A promising future research direction may tackle the challenges of developing efficient and real-time inference solutions for higher-order CRFs in the context of multi-modal depth estimation. Another promising research avenue is the parallel implementation of our method, which will bring benefits when deploying it on real robotic platforms. Additionally, the inclusion of other sensor modalities, such as radar [34, 39], could be explored as a way of improving our system's robustness to challenging environmental conditions.
## 7 Acknowledgment
This work was supported by Universidad Autonoma de Occidente (UAO). The authors would like to thank the Research incubator in robotics and autonomous systems (RAS), the Research group on remote and distributed control systems (GITCoD) at UAO, Walter Mayor, Nicolas Llanos Neuta and Juan Carlos Perafan for their feedback and helpful discussions.
|
2305.06502 | Untargeted Bayesian search of anisotropic gravitational-wave backgrounds
through the analytical marginalization of the posterior | We develop a method to perform an untargeted Bayesian search for anisotropic
gravitational-wave backgrounds that can efficiently and accurately reconstruct
the background intensity map. Our method employs an analytic marginalization of
the posterior of the spherical-harmonic components of the intensity map,
without assuming the background possesses any specific angular structure. The
key idea is that the likelihood function of the spherical-harmonic components
is a multivariate Gaussian when the intensity map is expressed as a linear
combination of the spherical-harmonic components and the noise is stationary
and Gaussian. If a uniform and wide prior of these spherical-harmonic
components is prescribed, the marginalized posterior and the Bayes factor can
be well approximated by a high-dimensional Gaussian integral. The analytical
marginalization allows us to regard the spherical-harmonic components of the
intensity map of the background as free parameters, and to construct their
individual marginalized posterior distribution in a reasonable time, even
though many spherical-harmonic components are required. The marginalized
posteriors can, in turn, be used to accurately construct the intensity map of
the background. By applying our method to mock data, we show that we can
recover precisely the angular structures of various simulated anisotropic
backgrounds, without assuming prior knowledge of the relation between the
spherical-harmonic components predicted by a given model. Our method allows us
to bypass the time-consuming numerical sampling of a high-dimensional
posterior, leading to a more model-independent and untargeted Bayesian
measurement of the angular structures of the gravitational-wave background. | Adrian Ka-Wai Chung, Nicolas Yunes | 2023-05-11T00:51:44Z | http://arxiv.org/abs/2305.06502v2 | # Untargeted Bayesian search of anisotropic gravitational-wave backgrounds
###### Abstract
We develop a method to perform an untargeted Bayesian search for anisotropic gravitational-wave backgrounds that can efficiently and accurately reconstruct the background intensity map. Our method employs an analytic marginalization of the posterior of the spherical-harmonic components of the intensity map, without assuming the background possesses any specific angular structure. The key idea is to realize that the likelihood function is a multivariable Gaussian of the spherical-harmonic components of the energy spectrum of the gravitational-wave background. If a uniform and wide prior of these spherical-harmonic components is prescribed, the marginalized posterior and the Bayes factor can be well approximated by a high-dimensional Gaussian integral. The analytical marginalization allows us to regard the spherical-harmonic components of the intensity map of the background as free parameters, and to construct their individual marginalized posterior distribution in a reasonable time, even though many spherical-harmonic components are required. The marginalized posteriors can, in turn, be used to accurately construct the intensity map of the background. By applying our method to mock data, we show that we can recover precisely the angular structures of various simulated anisotropic backgrounds, without assuming prior knowledge of the relation between the spherical-harmonic components predicted by a given model. Our method allows us to bypass the time-consuming numerical sampling of a high-dimensional posterior, leading to a more model-independent and untargeted Bayesian measurement of the angular structures of the gravitational-wave background.
## I Introduction
The direct detection of gravitational waves (GWs) emitted by compact binary coalescence (CBC) is a milestone in GW astrophysics [1; 2; 3; 4; 5; 6; 7; 8; 9; 10]. The detection of a GW background (GWB), formed by the random and incoherent superposition of numerous individually unresolvable GW signals emitted by different types of sources, may be the milestone that can be achieved next, in the foreseeable future [11; 12; 13; 14; 15]. Astrophysical sources, including CBCs [16; 17; 18; 19], rapidly rotating asymmetric neutron stars [20; 21; 22; 23; 24] and core-collapse supernova [25; 26; 27; 28], can generate GWs that form a GWB. Alternatively, a GWB can also be generated by GWs emitted by cosmological sources, like cosmological inflation [29; 30; 31; 32; 33; 34; 35; 36], the phase transitions that may have occurred in the early Universe [37; 38; 39; 40; 41; 42; 43; 44; 45; 46], and cosmic strings [47; 48; 49; 50; 51; 52; 53; 54; 55], if they exist. A GWB may even be generated by physics that has yet to be fully explored, such as ultralight bosons [56; 57; 58; 59; 60] and primordial black holes [61; 62; 63; 64], which are candidates to explain dark matter. As a GWB can be formed by sources significantly different from those generating individually detectable GW signals, detecting a GWB constitutes a unique probe of the Universe [15].
While a GWB is expected to be dominantly isotropic, it should also contain angular structures. In general, different sources and generation mechanisms could form GWBs with different angular structures [19; 53; 65; 66; 67; 68; 69; 70]. This source and mechanism dependence suggests that accurately mapping the angular structure of the GWB could be very informative, allowing us to pinpoint GWB sources and deduce their properties [71]. To this end, several methods have been developed to extract the angular distribution of the GWB power spectrum. Broadly speaking, these methods can be classified as either _frequentist_ or _Bayesian_. The frequentist approach amounts to constructing some maximum-likelihood estimator with different basis to characterize GWB anisotropies. Examples of the frequentist approach include radiometer search [72] and spherical-harmonic decomposition [73], which have been widely used in analyzing the actual data measured by the LIGO and Virgo detectors [74]. The Bayesian approach amounts to constructing the posterior of random variables related to GWB anisotropies, such as done very recently in [75; 76].
These two approaches are useful in probing GWB anisotropies, but they also have limitations. For example, since the radiometer search works in the pixel basis, it is not suitable for searching for extended sources [77; 78]. Working in the spherical-harmonic basis, one can use a spherical-harmonic decomposition to search for widespread sources and probe the anisotropies in a model-independent way, but it may lead to some nonphysical maximum likelihood estimates, such as complex estimates for some coefficients that, on physical grounds, should be real.
One way to remedy the drawback of the spherical-harmonic decomposition is to perform a model-independent Bayesian search for an anisotropic GWB. However, to describe an anisotropic GWB without assum
ing any source models, one needs a model-independent framework that is typically characterized by many parameters, such as (the formally infinite number of) the spherical-harmonic components required in the spherical-harmonic decomposition. This large number of variables makes the construction of the posterior of these variables computationally untenable, even if the posterior is estimated through numerical sampling [79]. Thus, the Bayesian search of an anisotropic GWB has been restricted to either a _targeted_ Bayesian analysis [75], inferring the overall amplitude of a GWB whose angular structures are given by a specific model, or to a model-independent framework characterized by only a few parameters [76].
The goal of this paper is to develop a computationally efficient, fast and _untargeted_ Bayesian search that can construct the Bayesian marginalized posterior of the spherical-harmonic components of the angular structure of the anisotropic GWB without prior knowledge of the relation among the spherical harmonic components. We achieve this through analytical marginalization of the posterior that exploits the Gaussian nature of the likelihood function. In particular, since the likelihood is a multivariable Gaussian of the spherical-harmonic components, the posterior of the latter and the Bayes factor (between an anisotropic GWB and a non-detection hypotheses) can be well approximated by a high-dimensional Gaussian integral. After evaluating this integral, the marginalized posterior of the real or imaginary part of a particular spherical-harmonic component is also a Gaussian function, whose mean and variance are given by the convolution between the cross-spectral density of the data (i.e. the product of the frequency-domain data measured by a detector and the complex conjugate of the frequency-domain data measured by another detector, see Eq. (20)) and the spherical-harmonic component of the overlap reduction function.
To fully illustrate the power of our analysis, we apply our scheme to mock data containing (i) no GWB signal, (ii) a time-independent dipole GWB signal, and (iii) a GWB formed by Galactic plane binaries. We show that our analysis is capable of extracting the angular structures of all of these signals, despite each type corresponding to different levels of anisotropy. In particular, we show that, in the strong signal-to-noise ratio limit, our analysis can recover an accurate sky map that is almost identical to the simulated Galactic plane signal without bias. Through our Bayes factor calculations, we show that the data can be used to infer the suitable angular length scale of anisotropies that should be included in the analysis.
Our marginalization scheme has several advantages compared to other existing search methods of anisotropic GWBs. First, the analytical formulae derived in this work allow us to reconstruct the intensity map of a GWB extremely accurately and rapidly, and compute the Bayes factors efficiently, completely bypassing the numerical sampling of an extremely high-dimensional posterior, which creates severe computational challenges. Second, since our analysis does not require prior knowledge about the relationship between various spherical-harmonic components, our work represents a major step toward a model-independent search for anisotropic GWBs, which is crucial for understanding the properties of their sources.
The remainder of this paper presents the details of the calculations summarized above, and it is organized as follows. Section II lays the foundation of our analysis by first reviewing the basic properties of GWBs. Section III explains the method we develop and defines different probability distribution functions and hypothesis ranking for the Bayesian search of GWBs. Section IV presents the details of the analytic marginalization and of the evaluation of the Bayes factor. Section V applies the marginalization to mock data. Section VI concludes and points to future research. Throughout this paper, we adopt the following conventions: bold lowercase characters represent a vector; bold uppercase characters represent a square matrix, and their corresponding italic unbolded characters with subscript(s) represent the elements of this matrix. For example, \(a_{i}\) is the \(i\)-th element of the vector \(\mathbf{a}\) and \(A_{ij}\) is the \((i,j)\)-th element of the square matrix \(\mathbf{A}\). Following [11; 12; 74; 80], we take the value of the Hubble constant to be \(H_{0}=67.9\) kms\({}^{-1}\)Mpc\({}^{-1}\), which is the _Planck_ 2015 value [81], although our conclusions will not depend on this choice.
## II Properties of anisotropic gravitational-wave background
In this section, we will briefly review the properties and Bayesian analysis of an anisotropic GWB. Only GW properties that are strictly relevant to our work will be reviewed. We refer the reader to, for example, [37; 73; 82] for a more detailed and exhaustive review of anisotropic GWB.
In general, metric perturbations at a given spacetime position can be written as a sum of contributions coming from all directions in the sky through a plane-wave expansion [82; 83; 84],
\[h_{ij}(t,\mathbf{x})\] \[=\sum_{A=+,x}\int_{-\infty}^{\infty}df\int d^{2}\hat{\Omega}\ \tilde{h}_{A}(f,\hat{\Omega})e^{A}_{ij}(\hat{\Omega})e^{-2\pi if(t-\hat{\Omega} \cdot\mathbf{x})}, \tag{1}\]
where \(A=+\) and \(\times\) stand for the GW polarization, \(\hat{\Omega}\) is a unit vector pointing in a sky direction, \(e^{A}_{ij}(\hat{\Omega})\) are the GW polarization tensors, and the overhead tilde stands for the Fourier transform. Without loss of generality, the expectation value of \(h_{ij}\) produced by a GWB is usually assumed to be zero [85],
\[\langle h_{ij}(t,\mathbf{x})\rangle=0. \tag{2}\]
However, the quadratic expectation value of \(\tilde{h}_{\mathrm{A}}\) is not zero [82],
\[\left\langle\tilde{h}_{A}(f,\hat{\Omega})\tilde{h}_{A^{\prime}}^{ \dagger}(f^{\prime},\hat{\Omega}^{\prime})\right\rangle=\frac{1}{4}\delta(f-f^ {\prime})\delta^{2}(\hat{\Omega},\hat{\Omega}^{\prime})\delta_{AA^{\prime}} \mathcal{H}(f,\hat{\Omega}), \tag{3}\]
where \(\delta(\cdot)\) and \(\delta^{2}(\cdot)\) are one- and two-dimensional Dirac delta functions respectively, \(\delta_{AA^{\prime}}\) is a Kronecker delta, and \(\mathcal{H}(f,\hat{\Omega})\) is a function of frequency and the sky position \(\hat{\Omega}\) that is related to the one-sided strain power spectral density (PSD) of the GWB via [82; 73; 83],
\[S_{h}(f)=\int_{S^{2}}d\hat{\Omega}\ \mathcal{H}(f,\hat{\Omega}). \tag{4}\]
In other words, \(\mathcal{H}(f,\hat{\Omega})\) characterizes the angular distribution of the GWB power in different sky directions. The one-sided PSD is related to the dimensionless energy density (also known as the "spectrum") of the GWB via
\[\Omega_{\mathrm{gw}}(f)\equiv\frac{f}{\rho_{c}}\frac{d\rho_{\mathrm{gw}}}{df} =\frac{2\pi^{2}}{3H_{0}^{2}}f^{3}S_{h}(f), \tag{5}\]
where \(d\rho_{\mathrm{gw}}\) is the energy density of GWs of frequencies between \(f\) and \(f+df\), \(\rho_{c}\) is the cosmological critical energy density (\(\rho_{c}=3H_{0}^{2}/8\pi G\)).
For a GWB that can be detected by ground-based GW interferometers, like advanced LIGO, advanced Virgo and KAGRA [74; 77; 86], the spectral content should be independent of the angular distribution. Hence, to a good approximation, we can decompose \(\mathcal{H}(f,\hat{\Omega})\) into a product [82; 73; 83; 75]
\[\mathcal{H}(f,\hat{\Omega})=H(f)\mathcal{P}(\hat{\Omega}), \tag{6}\]
where \(H(f)\) represents the spectral shape of the GWB and \(\mathcal{P}(\hat{\Omega})\) encapsulates the strength and angular distribution of the intensity of the GWB, a function of the sky position. As in any other GWB search, we need to specify the spectral shape of the GWB, \(H(f)\), that we are trying to detect. Within the sensitivity band of ground-based detectors, the energy spectrum of many GWBs can be well approximated by a power law in frequency [11; 12; 13; 14], which means we can choose \(H(f)\) to also have a power law structure, namely
\[H(f)=H_{\alpha}(f)=\left(\frac{f}{f_{\mathrm{ref}}}\right)^{\alpha-3}, \tag{7}\]
where \(f_{\mathrm{ref}}\) is a reference frequency and \(\alpha\) is the tilt index. Following existing searches for a GWB in actual data, we will fix \(\alpha\) and infer \(\mathcal{P}(\hat{\Omega})\). Throughout this paper, we also follow the existing LIGO/Virgo searches of a GWB and choose \(f_{\mathrm{ref}}=25\) Hz and \(\alpha=0,2/3\) or \(3\) [11; 12; 13; 14]. The choice of \(f_{\mathrm{ref}}\) does not affect the rest of the analysis because it only provides the overall normalization of \(\Omega_{\mathrm{GW}}\).
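For concreteness, the adopted power-law spectral shape of Eq. (7) and the conversion between the one-sided PSD and \(\Omega_{\rm gw}\) of Eq. (5) can be encoded as follows; the unit conversions are standard and the function names are ours.

```python
import numpy as np

H0 = 67.9 * 1.0e3 / 3.0857e22    # Hubble constant in s^-1 (67.9 km/s/Mpc)
F_REF = 25.0                      # reference frequency in Hz

def spectral_shape(f, alpha):
    """Power-law spectral shape H_alpha(f) = (f / f_ref)^(alpha - 3), Eq. (7)."""
    return (f / F_REF) ** (alpha - 3.0)

def omega_gw_from_sh(f, S_h):
    """Dimensionless energy density Omega_GW(f) from the one-sided PSD, Eq. (5)."""
    return 2.0 * np.pi ** 2 * f ** 3 * S_h / (3.0 * H0 ** 2)
```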
To extract the angular structure of the GWB from data, we perform a spherical-harmonic decomposition to express \(\mathcal{P}(\hat{\Omega})\) as a linear combination of (scalar) spherical harmonics \(Y_{\ell m}(\hat{\Omega})\),
\[\mathcal{P}(\hat{\Omega})=\sum_{\ell=0}^{\ell_{\mathrm{max}}}\sum_{m=-\ell}^{ +\ell}\mathcal{P}_{\ell m}Y_{\ell m}(\hat{\Omega}), \tag{8}\]
where \(\mathcal{P}_{\ell m}\) are referred to as _spherical-harmonic components_ of the spectrum of the GWB, in units of strain\({}^{2}\) Hz\({}^{-1}\) rad\({}^{-1}\). In principle, this sum must include an infinite number of \(\ell\) terms, but in practice, one must truncate the sum at some \(\ell=\ell_{\mathrm{max}}\). The value of \(\ell_{\mathrm{max}}\) will be specified in subsequent calculations. From Eq. (8), we notice two things. First, upon sky averaging (i.e. integration over sky angle), all terms vanish except \(\mathcal{P}_{00}\). This implies that
\[\mathcal{P}_{00}=\frac{S_{h}(f_{\mathrm{ref}})}{\sqrt{4\pi}}=\frac{3H_{0}^{2}} {2\pi^{2}f_{\mathrm{ref}}^{3}}\frac{\Omega_{\mathrm{GW}}(f_{\mathrm{ref}})}{ \sqrt{4\pi}}. \tag{9}\]
Second, the real nature of \(\mathcal{P}(\hat{\Omega})\) and \(Y_{\ell 0}\) and the complex conjugation property of \(Y_{\ell,m}\) imply that
\[\begin{split} Y_{\ell 0}\in\mathbb{R}&\Rightarrow \mathcal{P}_{\ell 0}\in\mathbb{R},\\ Y_{\ell,-m}=(-1)^{m}Y_{\ell m}^{\dagger}&\Rightarrow \mathcal{P}_{\ell,-m}=(-1)^{m}\mathcal{P}_{\ell m}^{\dagger}.\end{split} \tag{10}\]
These requirements imply that, in order to specify the angular distribution of a GWB, we only need \((\ell_{\mathrm{max}}+1)^{2}\) real numbers,
\[\begin{split}&\mathcal{P}_{00},\mathcal{P}_{10},....,\mathcal{P}_{ \ell 0},\\ &\mathcal{P}_{11}^{\mathrm{Re}},\mathcal{P}_{21}^{\mathrm{Re}},...., \mathcal{P}_{\ell m}^{\mathrm{Re}},\\ &\mathcal{P}_{11}^{\mathrm{Im}},\mathcal{P}_{21}^{\mathrm{Im}},....,\mathcal{P}_{\ell m}^{\mathrm{Im}},\end{split} \tag{11}\]
where \(\mathcal{P}_{\ell m}^{\mathrm{Re}}\) and \(\mathcal{P}_{\ell m}^{\mathrm{Im}}\) are respectively the real and imaginary parts of \(\mathcal{P}_{\ell m}\). For the sake of clarity, we introduce a \((\ell_{\mathrm{max}}+1)^{2}\)-vector to denote these numbers,
\[\begin{split}\mathbf{w}=&(\mathcal{P}_{00},\mathcal{P} _{10},....,\mathcal{P}_{\ell 0},\\ &\mathcal{P}_{11}^{\mathrm{Re}},\mathcal{P}_{21}^{\mathrm{Re}},....,\mathcal{P}_{\ell m}^{\mathrm{Re}},\\ &\mathcal{P}_{11}^{\mathrm{Im}},\mathcal{P}_{21}^{\mathrm{Im}},....,\mathcal{P}_{\ell m}^{\mathrm{Im}})^{\mathrm{T}}.\end{split} \tag{12}\]
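Given the real parameter vector \(\mathbf{w}\), the intensity map \(\mathcal{P}(\hat{\Omega})\) of Eq. (8) can be reconstructed on a grid of sky directions as sketched below. The ordering of the \(m>0\) entries (looping over \(\ell\), then \(m\)) and the use of SciPy's spherical harmonics, whose calling convention is \({\rm sph\_harm}(m,\ell,{\rm azimuth},{\rm polar})\), are assumptions of this illustration.

```python
import numpy as np
from scipy.special import sph_harm

def intensity_map(w, l_max, theta, phi):
    """Evaluate P(Omega) = sum_lm P_lm Y_lm(Omega) on a grid of directions.

    w     : real parameter vector ordered as in Eq. (12): the m = 0
            components first, then Re(P_lm) and Im(P_lm) for m > 0.
    theta : polar angles of the grid points (radians).
    phi   : azimuthal angles of the grid points (radians).
    """
    n_m0 = l_max + 1
    n_pos = (len(w) - n_m0) // 2
    p_m0, re, im = w[:n_m0], w[n_m0:n_m0 + n_pos], w[n_m0 + n_pos:]

    P = np.zeros_like(theta, dtype=float)
    k = 0
    for l in range(l_max + 1):
        # m = 0 term, real by Eq. (10); SciPy's convention is sph_harm(m, l, azimuth, polar)
        P += p_m0[l] * np.real(sph_harm(0, l, phi, theta))
        for m in range(1, l + 1):
            plm = re[k] + 1j * im[k]
            ylm = sph_harm(m, l, phi, theta)
            # the m and -m terms combine into 2 Re(P_lm Y_lm) by Eq. (10)
            P += 2.0 * np.real(plm * ylm)
            k += 1
    return P
```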
## III Basics of Bayesian Search for Gravitational-Wave Background
Searching for an anisotropic GWB amounts to determining the spherical-harmonic components of the intensity map from the data. In the presence of overwhelming noise, the spherical-harmonic components, just as the parameters of other GW signals, should be determined by Bayesian inference. This section is devoted to reviewing the basics of a Bayesian search for an anisotropic GWB. Explicitly, we will define the likelihood, prior and posterior for a Bayesian search. In terms of the spherical-harmonic components defined in the last section, we will explicitly write down the likelihood function as a Gaussian of the real and imaginary parts of the spherical-harmonic components of the GWB. Finally, we also define the Bayes factor, which compares the hypothesis that an anisotropic GWB is detected against the hypothesis that the data consist of noise only.
### Likelihood and posterior in Bayesian GWB analysis
When a GWB is present, it induces responses on GW detectors that force the latter to measure strain data consisting of two parts:
\[\tilde{s}_{I}(f,t)=\tilde{n}_{I}(f,t)+\tilde{h}_{I}(f,t), \tag{13}\]
where \(I\) labels the detector, \(\tilde{s}_{I}(f,t)\), \(\tilde{n}_{I}(f,t)\) and \(\tilde{h}_{I}(f,t)\) are the finite- or "short"-time Fourier transform of the time-domain data, of the instrumental noise and of the GWB-induced response on detector \(I\) centered at time \(t\), respectively. The time-domain GWB-induced response on detector \(I\) is related to the metric perturbations of the GWB (Eq. (1)) via
\[h_{I}(t)=D_{I}^{ij}h_{ij}(t,\mathbf{x}_{I}), \tag{14}\]
where \(D_{I}^{ij}\) is a tensor (known as the detector response tensor) that encapsulates the geometry and orientation of detector \(I\), and \(\mathbf{x}_{I}\) is the position vector of detector \(I\).
The expectation value of the GWB-induced response satisfies
\[\langle\tilde{h}_{I}(f,t)\rangle=0\,, \tag{15}\]
which descends directly from Eq. (2). As \(\tilde{h}_{I}(f,t)\) is random and has zero mean, the GWB-induced response just looks like noise within individual detectors. However, the responses induced on two GW detectors, say \(I\) and \(J\), should be proportional to each other (in the time-domain), which means that they should be correlated among detectors [73],
\[\langle\tilde{h}_{I}(f,t)\tilde{h}_{J}^{\dagger}(f,t)\rangle=\frac{\tau}{2}H_ {\alpha}(f)\sum_{\ell m}\gamma_{\ell m}^{(IJ)}(f,t)\mathcal{P}_{\ell m}, \tag{16}\]
where \(\tau\) is the time length of the data segment analyzed, and \(\gamma^{(IJ)}_{\ell m}(f,t)\) are the spherical-harmonic components of the overlap reduction function of detectors \(I\) and \(J\), defined by [82]
\[\begin{split}&\gamma^{(IJ)}(f,t,\hat{\Omega})=\frac{1}{2}\sum_{A}R_{A} ^{(I)}(f,t,\hat{\Omega})\left[R_{A}^{(J)}(f,t,\hat{\Omega})\right]^{\dagger},\\ &\gamma^{(IJ)}_{\ell m}(f,t)=\int d^{2}\hat{\Omega}\;\gamma^{(IJ )}(f,t,\hat{\Omega})\;Y_{\ell m}(\hat{\Omega}),\end{split} \tag{17}\]
where \(A=+,\times\) stands for the GW polarization, and \(R_{A}^{(I)}(f,t,\hat{\Omega})\) is the polarization-basis response function of detector \(I\). The latter depends on time because of Earth's rotation. As the definition suggests, \(\gamma^{(IJ)}_{\ell m}(f,t)\) encapsulates information about the detectors' geometry, location, orientation and antenna pattern, and they should not be confused with the spherical-harmonic components of the spectrum of the GWB, \(\mathcal{P}_{\ell m}\). Instrumental noise, on the other hand, has very different properties: if \(I\) and \(J\) are well separated, their instrumental noise should be uncorrelated,
\[\langle\tilde{n}_{I}(f,t)\tilde{n}_{J}^{\dagger}(f,t)\rangle=0. \tag{18}\]
By the same token
\[\begin{split}&\langle\tilde{n}_{I}(f,t)\tilde{h}_{J}^{\dagger}(f,t )\rangle=\langle\tilde{n}_{J}(f,t)\tilde{h}_{I}^{\dagger}(f,t)\rangle\\ &=\langle\tilde{n}_{I}(f,t)\tilde{h}_{I}^{\dagger}(f,t)\rangle= \langle\tilde{n}_{J}(f,t)\tilde{h}_{J}^{\dagger}(f,t)\rangle=0.\end{split} \tag{19}\]
These correlation properties suggest that a GWB can be searched for by cross-correlating the strain data measured by different detectors. To this end, we define the cross-spectral density, \(C(f,t)\), between two detectors, \(I\) and \(J\), via
\[C(f,t)\equiv\frac{2}{\tau}\tilde{s}_{I}(f,t)\tilde{s}_{J}^{\dagger}(f,t). \tag{20}\]
If a GWB is present, the expectation value of \(C(f,t)\) is,
\[\langle C(f,t)\rangle=H_{\alpha}(f)\sum_{\ell m}\gamma_{\ell m}(f,t)\mathcal{P }_{\ell m}. \tag{21}\]
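As an illustration, the cross-spectral density of Eq. (20) for a single data segment can be computed from the short-time Fourier transforms of the two detectors' strain as follows; the inputs are assumed to share the same frequency bins.

```python
import numpy as np

def cross_spectral_density(s_I, s_J, tau):
    """Cross-spectral density C(f, t) of one data segment, Eq. (20).

    s_I, s_J : short-time Fourier transforms of the strain of detectors I
               and J for the segment centred at time t (same frequency bins).
    tau      : duration of the segment in seconds.
    """
    return (2.0 / tau) * s_I * np.conj(s_J)
```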
The Bayesian search of an anisotropic GWB amounts to determining the posterior of \(\mathbf{w}\), given the cross-spectral density of all data segments \(\{C\}\). According to Bayes' theorem, the posterior is related to the likelihood by
\[p(\mathbf{w}|\left\{C\right\},H)=\frac{p(\mathbf{w}|H)p(\{C\}\left|\mathbf{w},H)}{p(\{C\}\left|H\right.\right)}. \tag{22}\]
Here, \(p(\mathbf{w}|\left\{C\right\},H)\) is the posterior of \(\mathbf{w}\), given the cross-spectral density and the hypothesis \(H\) (e.g. that the measured data contains a GWB signal, which will be more precisely defined in Eq. (26)). The quantity \(p(\{C\}\left|H\right)\) is the Bayesian evidence, which is a normalization constant of the posterior. The quantity \(p(\mathbf{w}|H)\) is the prior of \(\mathbf{w}\), prescribed according to our hypothesis. The quantity \(p(\{C\}\left|\mathbf{w},H)\) is the likelihood that we will measure \(\{C\}\), given that there is a GWB with spherical-harmonic components \(\mathbf{w}\). Explicitly, in the weak-signal approximation, the likelihood \(p(\{C\}\left|\mathbf{w},H)\) is the noise model, and assuming Gaussian noise and two detectors \(I\) and \(J\), can be modelled by [73; 75; 82]
\[p(\{C\}|\mathbf{w},H)=\mathcal{N}\exp\left\{-\frac{1}{2}\sum_{f,t}\frac{\left|C(f,t)-H(f)\gamma_{\ell m}\mathcal{P}_{\ell m}\right|^{2}}{N_{I}(f,t)N_{J}(f,t)}\right\}, \tag{23}\]
where we have converted the integral into a Fourier sum and where \(\mathcal{N}\) is a proportionality constant that does not depend on \(\mathcal{P}_{\ell m}\), and \(\gamma_{\ell m}\mathcal{P}_{\ell m}\) is a shorthand notation for
\[\gamma_{\ell m}\mathcal{P}_{\ell m}=\sum_{\ell=0}^{\ell^{\text{(inf)}}_{\text{max}}}\sum_{m=-\ell}^{+\ell}\gamma_{\ell m}(f,t)\mathcal{P}_{\ell m}, \tag{24}\]
\(\ell^{\text{(inf)}}_{\text{max}}\) is the maximum \(\ell\) that we include in the inference analysis, and \(N_{I,J}(f,t)\) are the one-sided PSDs of the outputs of detectors \(I\) and \(J\), respectively.
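As a minimal illustration of Eqs. (23) and (24), the log-likelihood reduces to a few array operations once \(C(f,t)\), \(\gamma_{\ell m}(f,t)\), \(H(f)\) and the PSDs are available on a common segment/frequency grid; the array shapes below are assumptions made for the sketch.

```python
import numpy as np


def log_likelihood(C, H, gamma, P, N_I, N_J):
    """Unnormalized log-likelihood of Eq. (23).

    C        : complex array, shape (n_seg, n_f), cross-spectral densities C(f, t)
    H        : real array, shape (n_f,), spectral shape H(f)
    gamma    : complex array, shape (n_seg, n_f, n_lm), overlap components gamma_lm(f, t)
    P        : complex array, shape (n_lm,), spherical-harmonic components P_lm
    N_I, N_J : real arrays, shape (n_seg, n_f), one-sided PSDs of the two detectors
    """
    # H(f) * sum_lm gamma_lm(f, t) P_lm, per segment and frequency bin
    model = H[None, :] * np.einsum("tfl,l->tf", gamma, P)
    resid = C - model
    return -0.5 * np.sum(np.abs(resid) ** 2 / (N_I * N_J))
```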
When searching for an anisotropic GWB, Eq. (22) represents a high-dimensional posterior probability distribution function, which is difficult to visualize. Thus, it is very convenient to present the marginalized posterior of a particular spherical-harmonic component. To this end, one can marginalize the posterior (Eq. (22)) over all components of \(\mathbf{w}\) that one is not interested in (at the moment) to obtain the marginalized posterior of, say, \(w_{i}\),
\[p(w_{i}|\left\{C\right\},H)=\prod_{j\neq i}\int dw_{j}p(\mathbf{w}|\left\{C \right\},H). \tag{25}\]
The lower and upper limits of the integrals involved in the marginalization depend on \(p(\mathbf{w}|H)\), which will be prescribed in Sec. IV.
### Bayes Factor
Other than constructing the marginalized posterior, Bayesian theory also provides a framework to compute the so-called "Bayes factor." The latter is a measure that allows one to compare two hypotheses in light of the data within Bayesian inference. In the context of GWBs, the Bayes factor can be used to quantify whether an anisotropic GWB has been detected or not by comparing the following two hypotheses:
\[\begin{split} H_{\ell_{\text{max}}}&:\text{the data }\{C\}\text{ contain a GWB signal whose}\\ &\mathcal{P}_{\ell_{\text{max}}m}\neq 0\text{ for at least one }m\in\\ &\left[-\ell_{\text{max}},-\ell_{\text{max}}+1,...,\ell_{\text{ max}}-1,\ell_{\text{max}}\right]\text{, and}\\ H_{\text{null}}&:\text{the data }\{C\}\text{ contain only noise.}\end{split} \tag{26}\]
In Bayesian inference, we can compare these two hypotheses by computing their odds ratio, namely, the ratio of their respective evidences given the data:
\[\mathcal{O}(\ell_{\text{max}})=\frac{p(H_{\ell_{\text{max}}}|\{C\})}{p(H_{ \text{null}}|\{C\})}=\frac{p(H_{\ell_{\text{max}}})}{p(H_{\text{null}})}\frac {p(\{C\}|H_{\ell_{\text{max}}})}{p(\{C\}|H_{\text{null}})}. \tag{27}\]
The term \(p(H_{\ell_{\text{max}}})/p(H_{\text{null}})\) is known as the prior odds, and it represents our prior belief in one hypothesis over the other. The second term in the above equation is known as the Bayes factor,
\[\mathcal{B}(\ell_{\text{max}})=\frac{p(\{C\}|H_{\ell_{\text{max}}})}{p(\{C\}|H _{\text{null}})}\,, \tag{28}\]
which implies that the odds ratio is nothing but the product of the prior odds with the Bayes factor.
One can think of the Bayes factor as the odds ratio between two hypotheses under the assumption of equal prior belief between them. As we have no information about whether we have detected a GWB before we analyze the data, we naturally assume the two hypotheses are equally likely. Thus,
\[p(H_{\ell_{\text{max}}})=p(H_{\text{null}})\Rightarrow\mathcal{O}(\ell_{\text {max}})=\mathcal{B}(\ell_{\text{max}}). \tag{29}\]
If \(\mathcal{B}(\ell_{\text{max}})>1\), hypothesis \(H_{\ell_{\text{max}}}\) is favored over hypothesis \(H_{\text{null}}\), which implies it is more likely that we have detected a GWB than not; the opposite is true, of course, if \(\mathcal{B}(\ell_{\text{max}})\leq 1\). For convincing evidence that we have indeed detected a GWB, one typically requires that \(\mathcal{B}(\ell_{\text{max}})\gg 1\), where precisely how much larger than unity is required depends on the statistician's definition of "convincing" [82].
## IV Analytic Marginalization of the posterior and Bayes Factor
As pointed out in the last section, the posterior of the spherical-harmonic components is a probability distribution function of high dimension. In principle, one can numerically sample the posterior using nested sampling or Markov-Chain Monte-Carlo techniques. But given the high dimensionality of the distribution, both sampling approaches will take an extremely long time to complete in the GWB case. In this section, we will show that, if a wide-enough uniform prior is prescribed, the marginalized posterior and Bayes factor for the search for an anisotropic GWB can be _analytically_ evaluated as a high-dimensional Gaussian integral, with the former also a Gaussian function.
### Marginalized posterior
Let us begin by explicitly writing down the exponent of the likelihood as a quadratic form of \(\mathbf{w}\). To start, we rewrite the likelihood as
\[p(\{C\}\,|\mathbf{w},H_{\ell^{\text{inf}}_{\text{max}}})\propto\exp\left\{- \frac{1}{2}\sum_{f,t}\frac{R(f,t)R^{\dagger}(f,t)}{N_{I}(f,t)N_{J}(f,t)}\right\} \tag{30}\]
where \(R(f,t)\) stands for the residual
\[R(f,t)=C(f,t)-H(f)\sum_{\ell=0}^{\ell^{\text{inf}}_{\text{max}}}\sum_{m=-\ell }^{\ell}\gamma_{\ell m}(f,t)\mathcal{P}_{\ell m}. \tag{31}\]
Explicitly writing out the summation over \(\ell\) and \(m\), we have
\[\begin{split}&\sum_{\ell=0}^{\ell_{\text{max}}^{\text{(inf)}}}\sum_{m=-\ell}^{+\ell}\gamma_{\ell m}(f,t)\mathcal{P}_{\ell m}\\ &=\sum_{\ell=0}^{\ell_{\text{max}}^{\text{(inf)}}}\gamma_{\ell 0}(f)\mathcal{P}_{\ell 0}+\sum_{\ell=0}^{\ell_{\text{max}}^{\text{(inf)}}}\sum_{m=1}^{+\ell}\left[\gamma_{\ell m}(f,t)\mathcal{P}_{\ell m}+\gamma_{\ell,-m}(f,t)\mathcal{P}_{\ell,-m}\right]\\ &=\sum_{\ell=0}^{\ell_{\text{max}}^{\text{(inf)}}}\gamma_{\ell 0}(f)\mathcal{P}_{\ell 0}+\sum_{\ell=0}^{\ell_{\text{max}}^{\text{(inf)}}}\sum_{m=1}^{+\ell}\left[\gamma_{\ell m}(f,t)\mathcal{P}_{\ell m}+(-1)^{\ell}\gamma_{\ell m}^{\dagger}(f,t)\mathcal{P}_{\ell m}^{\dagger}\right],\end{split} \tag{32}\]
where in the last line, we have used Eqs. (12) and (13) of [73], namely
\[\begin{split}\gamma_{\ell m}^{\dagger}(f,t)&=(-1)^{ \ell+m}\gamma_{\ell,-m}(f,t),\\ \mathcal{P}_{\ell m}^{\dagger}&=(-1)^{m}\mathcal{P}_{ \ell,-m}.\end{split} \tag{33}\]
We further decompose \(\gamma_{\ell m}\mathcal{P}_{\ell m}\) into its real and imaginary parts,
\[\begin{split}\Re\left[\gamma_{\ell m}\mathcal{P}_{\ell m}\right]&=\sum_{\ell=0}^{\ell_{\text{max}}^{\text{(inf)}}}\gamma_{\ell 0}^{\text{Re}}(f)\mathcal{P}_{\ell 0}+\sum_{\ell=0}^{\ell_{\text{max}}^{\text{(inf)}}}\sum_{m=1}^{+\ell}\left[1+(-1)^{\ell}\right]\left[\gamma_{\ell m}^{\text{Re}}(f,t)\mathcal{P}_{\ell m}^{\text{Re}}-\gamma_{\ell m}^{\text{Im}}(f,t)\mathcal{P}_{\ell m}^{\text{Im}}\right],\\ \Im\left[\gamma_{\ell m}\mathcal{P}_{\ell m}\right]&=\sum_{\ell=0}^{\ell_{\text{max}}^{\text{(inf)}}}\gamma_{\ell 0}^{\text{Im}}(f)\mathcal{P}_{\ell 0}+\sum_{\ell=0}^{\ell_{\text{max}}^{\text{(inf)}}}\sum_{m=1}^{+\ell}\left[1-(-1)^{\ell}\right]\left[\gamma_{\ell m}^{\text{Im}}(f,t)\mathcal{P}_{\ell m}^{\text{Re}}+\gamma_{\ell m}^{\text{Re}}(f,t)\mathcal{P}_{\ell m}^{\text{Im}}\right].\end{split} \tag{34}\]
These expressions can be more compactly expressed if we define two \((\ell^{\text{(inf)}}_{\text{max}}+1)^{2}\)-vectors, \(\mathbf{u}(f,t)\) and \(\mathbf{v}(f,t)\) such that
\[\begin{split}&\Re\left[R(f,t)\right]=C^{\text{Re}}(f,t)-\mathbf{u }^{\text{T}}(f,t)\cdot\mathbf{w},\\ &\Im\left[R(f,t)\right]=C^{\text{Im}}(f,t)-\mathbf{v}^{\text{T}}( f,t)\cdot\mathbf{w},\end{split} \tag{35}\]
where
\[\begin{split}\mathbf{u}(f,t)=H(f)\big(&\gamma_{00}^{\text{Re}},\gamma_{10}^{\text{Re}},\ldots,\gamma_{\ell 0}^{\text{Re}},\\ &\left[1+(-1)^{1}\right]\gamma_{11}^{\text{Re}},\left[1+(-1)^{2}\right]\gamma_{21}^{\text{Re}},\ldots,\left[1+(-1)^{\ell}\right]\gamma_{\ell m}^{\text{Re}},\\ &-\left[1+(-1)^{1}\right]\gamma_{11}^{\text{Im}},-\left[1+(-1)^{2}\right]\gamma_{21}^{\text{Im}},\ldots,-\left[1+(-1)^{\ell}\right]\gamma_{\ell m}^{\text{Im}}\big)^{\text{T}}, \end{split} \tag{36}\]
The vector \(\mathbf{v}(f,t)\) is assembled in the same way from the coefficients of the imaginary part in Eq. (34). Note that each element of \(\mathbf{u}(f,t)\) and \(\mathbf{v}(f,t)\) is a function of \(f\) and \(t\) because they inherit the frequency and time dependence from \(H(f)\) and \(\gamma_{\ell m}(f,t)\).
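A minimal sketch of assembling \(\mathbf{u}(f,t)\) and \(\mathbf{v}(f,t)\) at a single frequency and time is given below. The ordering of the components of \(\mathbf{w}\) (first the \(\mathcal{P}_{\ell 0}\), then the real parts, then the imaginary parts of the \(m\geq 1\) components) is an assumption of the sketch, since any ordering is acceptable as long as it is used consistently for \(\mathbf{w}\), \(\mathbf{u}\) and \(\mathbf{v}\); the helper `gamma_of(l, m)` is hypothetical.

```python
import numpy as np


def build_u_v(gamma_of, H_f, ell_max):
    """Assemble u(f,t) and v(f,t) of Eq. (35) from the overlap components.

    gamma_of(l, m) : hypothetical helper returning the complex gamma_lm(f, t)
                     at the frequency/time of interest.
    H_f            : the value of H(f) at that frequency.
    Assumed ordering of w: [P_l0 (l=0..lmax), Re P_lm (m>=1), Im P_lm (m>=1)].
    """
    lm_pairs = [(l, m) for m in range(1, ell_max + 1) for l in range(m, ell_max + 1)]
    u, v = [], []
    # block 1: coefficients multiplying the (real) P_l0
    for l in range(ell_max + 1):
        g = gamma_of(l, 0)
        u.append(g.real)
        v.append(g.imag)
    # block 2: coefficients multiplying Re P_lm, m >= 1
    for l, m in lm_pairs:
        g = gamma_of(l, m)
        u.append((1 + (-1) ** l) * g.real)
        v.append((1 - (-1) ** l) * g.imag)
    # block 3: coefficients multiplying Im P_lm, m >= 1
    for l, m in lm_pairs:
        g = gamma_of(l, m)
        u.append(-(1 + (-1) ** l) * g.imag)
        v.append((1 - (-1) ** l) * g.real)
    return H_f * np.array(u), H_f * np.array(v)
```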
With \(\mathbf{u}\) and \(\mathbf{v}\) defined, the square of the modulus of \(R(f,t)\) can be computed as a quadratic function of \(\mathbf{w}\)
\[\begin{split}& R(f,t)R^{\dagger}(f,t)\\ &=C(f,t)C^{\dagger}(f,t)-2\mathbf{g}^{\text{T}}(f,t)\cdot\mathbf{w} +\mathbf{w}^{\text{T}}\cdot\mathbf{K}(f,t)\cdot\mathbf{w},\end{split} \tag{37}\]
where \(\mathbf{g}(f,t)\) is a \((\ell^{\text{(inf)}}_{\text{max}}+1)^{2}\)-vector,
\[\mathbf{g}(f,t)=C^{\text{Re}}(f,t)\mathbf{u}(f,t)+C^{\text{Im}}(f,t)\mathbf{v }(f,t), \tag{38}\]
and \(\mathbf{K}(f,t)\) is a symmetric-square matrix of order of \((\ell^{\text{(inf)}}_{\text{max}}+1)^{2}\), whose elements are given by
\[K_{ij}(f,t)=u_{i}u_{j}+v_{i}v_{j}. \tag{39}\]
Similarly, we can also write the exponent of the likelihood as a quadratic function of \(\mathbf{w}\),
\[\begin{split}&\sum_{f,t}\frac{R(f,t)R^{\dagger}(f,t)}{N_{I}(f,t)N_{J}( f,t)}\\ &=\sum_{f,t}\frac{|C(f,t)|^{2}}{N_{I}(f,t)N_{J}(f,t)}-2\mathbf{j}^ {\text{T}}\cdot\mathbf{w}+\mathbf{w}^{\text{T}}\cdot\mathbf{Q}\cdot\mathbf{w},\end{split} \tag{40}\]
where \(\mathbf{j}\) is a \([\ell^{\text{(inf)}}_{\text{max}}+1]^{2}\)-vector and \(\mathbf{Q}\) is another symmetric-square matrix of order of \([\ell^{\text{(inf)}}_{\text{max}}+1]^{2}\). Explicitly, their elements are
\[\begin{split}& j_{i}=\sum_{f,t}\frac{g_{i}(f,t)}{N_{I}(f,t)N_{J}( f,t)},\\ & Q_{ij}=\sum_{f,t}\frac{K_{ij}(f,t)}{N_{I}(f,t)N_{J}(f,t)}.\end{split} \tag{41}\]
Unlike \(\mathbf{g}\) and \(\mathbf{K}\), \(\mathbf{j}\) and \(\mathbf{Q}\) are constant. In terms of \(\mathbf{j},\mathbf{w}\) and \(\mathbf{Q}\), the likelihood and posterior are respectively given
by
\[p(\left\{C\right\}|\mathbf{w},H_{\ell_{\max}^{\text{(int)}}}) =\bar{\mathcal{N}}\exp\left(\mathbf{j}^{\text{T}}\cdot\mathbf{w}- \frac{1}{2}\mathbf{w}^{\text{T}}\cdot\mathbf{Q}\cdot\mathbf{w}\right),\] \[p(\mathbf{w}|\left\{C\right\},H_{\ell_{\max}^{\text{(int)}}}) =\bar{\mathcal{N}}\frac{p(\mathbf{w}|H_{\ell_{\max}^{\text{(int)}}} )}{p(\left\{C\right\}|H_{\ell_{\max}^{\text{(int)}}})}\] \[\qquad\times\exp\left(\mathbf{j}^{\text{T}}\cdot\mathbf{w}-\frac {1}{2}\mathbf{w}^{\text{T}}\cdot\mathbf{Q}\cdot\mathbf{w}\right), \tag{42}\]
where
\[\bar{\mathcal{N}}=\mathcal{N}\exp\left(-\frac{1}{2}\sum_{f,t}\frac{|C(f,t)|^{2}}{N_{I}(f,t)N_{J}(f,t)}\right). \tag{43}\]
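Given \(\mathbf{u}(f,t)\) and \(\mathbf{v}(f,t)\) stacked over segments and frequency bins, the constant vector \(\mathbf{j}\) and matrix \(\mathbf{Q}\) of Eqs. (38)-(41) reduce to weighted sums; a possible sketch (the array shapes are assumptions) is:

```python
import numpy as np


def accumulate_j_Q(C, U, V, N_I, N_J):
    """Accumulate the constant vector j and matrix Q of Eq. (41).

    C        : complex array, shape (n_seg, n_f), cross-spectral densities
    U, V     : real arrays, shape (n_seg, n_f, dim), the vectors u(f,t) and v(f,t)
               stacked over segments and frequencies, with dim = (l_max + 1)**2
    N_I, N_J : real arrays, shape (n_seg, n_f), one-sided PSDs
    """
    w8 = 1.0 / (N_I * N_J)                                  # weights 1 / (N_I N_J)
    g = C.real[..., None] * U + C.imag[..., None] * V       # Eq. (38)
    j = np.einsum("tf,tfi->i", w8, g)                       # Eq. (41), first line
    Q = (np.einsum("tf,tfi,tfj->ij", w8, U, U)
         + np.einsum("tf,tfi,tfj->ij", w8, V, V))           # Eqs. (39) and (41)
    return j, Q
```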
We are now ready to marginalize the posterior. If we are particularly interested in knowing the posterior of \(w_{i}\), the argument of the exponential of the posterior can be written as
\[\mathbf{j}^{\text{T}}\cdot\mathbf{w}-\frac{1}{2}\mathbf{w}^{ \text{T}}\cdot\mathbf{Q}\cdot\mathbf{w}\] \[= j_{i}w_{i}-\frac{1}{2}Q_{ii}w_{i}^{2}\] \[+\sum_{k\neq i}\left[j_{k}-\frac{1}{2}w_{i}(Q_{ki}+Q_{ik})\right] w_{k}-\frac{1}{2}\sum_{k\neq i}\sum_{l\neq i}w_{k}Q_{kl}w_{l}, \tag{44}\]
where the index \(i\) in the first two terms, \(j_{i}w_{i}\) and \(Q_{ii}w_{i}^{2}\), does not imply summation. To facilitate subsequent calculations, we define the following \([(\ell_{\text{max}}^{\text{(inf)}}+1)^{2}-1]\)-vectors and square matrices of order \([(\ell_{\text{max}}^{\text{(inf)}}+1)^{2}-1]\):
\[\begin{split}\tilde{\mathbf{w}}^{(i)}&=\text{the vector }\mathbf{w}\text{ with the }i\text{-th element removed},\\ \mathbf{b}^{(i)}&=\text{the vector }\mathbf{j}\text{ with the }i\text{-th element removed},\\ \mathbf{a}^{(i)}&=\text{the vector whose }k\text{-th element is }a_{k}=Q_{ik}\text{ (for }k\neq i\text{, at fixed }i\text{)},\\ \mathbf{n}^{(i)}&=\mathbf{b}^{(i)}-w_{i}\mathbf{a}^{(i)},\\ \tilde{\mathbf{Q}}^{(i)}&=\text{the matrix }\mathbf{Q}\text{ with the }i\text{-th row and the }i\text{-th column removed},\\ \mathbf{M}^{(i)}&=\text{the inverse of }\tilde{\mathbf{Q}}^{(i)}.\end{split}\]
Note that since \(\tilde{\mathbf{Q}}^{(i)}\) is symmetric, so is \(\mathbf{M}^{(i)}\). The argument of the exponential of the posterior can then be more compactly written as
\[\mathbf{j}^{\text{T}}\cdot\mathbf{w}-\frac{1}{2}\mathbf{w}^{\text {T}}\cdot\mathbf{Q}\cdot\mathbf{w}\] \[= j_{i}w_{i}-\frac{1}{2}Q_{ii}w_{i}^{2}+\mathbf{n}^{(i)\text{T}} \cdot\tilde{\mathbf{w}}^{(i)}-\frac{1}{2}\tilde{\mathbf{w}}^{(i)\text{T}} \cdot\tilde{\mathbf{Q}}^{(i)}\cdot\tilde{\mathbf{w}}^{(i)}\,, \tag{45}\]
where we recall that \(\mathbf{n}^{(i)}\) depends on both the index \(i\) and \(w_{i}\).
The posterior can be analytically marginalized if we choose a prior for \(\mathbf{w}\) with the following properties:
1. The prior is factorized as a product of the prior of individual \(w_{i}\), \[p(\mathbf{w}|H_{\ell_{\max}^{\text{(int)}}})=\prod_{i=1}^{[\ell_{\max}^{\text {(int)}}+1]^{2}}\mathcal{P}_{i}(w_{i}|H_{\ell_{\max}^{\text{(int)}}}),\] (46) where \(\mathcal{P}_{i}(w_{i}|H_{\ell_{\max}^{\text{(int)}}})\) is the prior of \(w_{i}\). By choosing a factorized prior for \(\mathbf{w}\), we are assuming that different \(w_{i}\) are independent of each other.
2. Each \(\mathcal{P}_{i}(w_{i}|H_{\ell_{\max}^{\text{(int)}}})\) is uniform for \(w_{i}\in[-\Delta^{(i)},\Delta^{(i)}]\), where \(\Delta^{(i)}>0\) is the width of the prior of \(w_{i}\).
3. When \(w_{i}=\pm\Delta^{(i)}\), \(\mathbf{j}^{\text{T}}\cdot\mathbf{w}-\frac{1}{2}\mathbf{w}^{\text{T}}\cdot \mathbf{Q}\cdot\mathbf{w}\) is very negative, regardless of the value of the other \(w_{j\neq i}\). This condition can always be met if we choose a large enough \(\Delta^{(i)}\) such that \[p(\left\{C\right\}|w_{i}=\pm\Delta^{(i)},H_{\ell_{\max}^{\text{(int)}}})\approx 0.\] (47)
This prior corresponds to a square centered at the origin in the complex \(\mathcal{P}_{\ell m}\) plane for \((\ell,m)\neq 0\). One may think that a more natural prior would be one that is uniform for, say, \(|\mathcal{P}_{\ell m}|\leq\Delta\) with some \(\Delta>0\), which corresponds to a circle centered at the origin in the complex plane. However, if \(\Delta\) is large enough, both the square and circle priors will lead to similar parameter estimation results. This is because, in the region between the square and the circle priors, the argument of the exponential in the posterior is very negative, and thus, the contribution to the posterior can be well approximated by zero. This condition is not contradictory to the weak signal approximation, because it can be met by a smaller \(\Delta^{(i)}\), corresponding to a weaker signal if we have more data.
With these properties in place, the marginalized posterior of \(w_{i}\) can be evaluated as
\[p(w_{i}|\left\{C\right\},H_{\ell_{\max}^{\text{(int)}}})=\int d\tilde{\mathbf{w} }^{(i)}p(\mathbf{w}|\left\{C\right\},H_{\ell_{\max}^{\text{(int)}}})\]
\[=\frac{1}{p(\{C\}\,|H_{\ell^{(\text{int})}_{\text{max}}})}\prod_{j \neq i}\int_{-\Delta^{(i)}}^{\Delta^{(i)}}dw_{j}p(\mathbf{w}|H)p(\{C\}\,|\mathbf{ w},H_{\ell^{(\text{int})}_{\text{max}}})\] \[=\frac{1}{p(\{C\}\,|H_{\ell^{(\text{int})}_{\text{max}}})}\prod_{j \neq i}\int_{-\Delta^{(i)}}^{\Delta^{(i)}}\frac{dw_{j}}{2\Delta^{(i)}}p(\{C\}\,| \mathbf{w},H_{\ell^{(\text{int})}_{\text{max}}})\] \[\propto\prod_{j\neq i}\int_{-\Delta^{(i)}}^{\Delta^{(i)}}dw_{j} \exp\left(j_{i}w_{i}-\frac{1}{2}Q_{ii}w_{i}^{2}+\mathbf{n}^{(i)\text{T}}\cdot \tilde{\mathbf{w}}^{(i)}-\frac{1}{2}\tilde{\mathbf{w}}^{(i)\text{T}}\cdot \tilde{\mathbf{Q}}^{(i)}\cdot\tilde{\mathbf{w}}^{(i)}\right)\] \[\approx\prod_{j\neq i}\int_{-\infty}^{+\infty}dw_{j}\exp\left(j_{ i}w_{i}-\frac{1}{2}Q_{ii}w_{i}^{2}+\mathbf{n}^{(i)\text{T}}\cdot\tilde{\mathbf{w}}^{(i )}-\frac{1}{2}\tilde{\mathbf{w}}^{(i)\text{T}}\cdot\tilde{\mathbf{Q}}^{(i)} \cdot\tilde{\mathbf{w}}^{(i)}\right)\] \[\propto\exp\left(j_{i}w_{i}-\frac{1}{2}Q_{ii}w_{i}^{2}+\frac{1}{ 2}\mathbf{n}^{(i)\text{T}}\cdot\mathbf{M}^{(i)}\cdot\mathbf{n}\right)\] \[=\exp\left(j_{i}w_{i}-\frac{1}{2}Q_{ii}w_{i}^{2}+\frac{1}{2}( \mathbf{b}^{(i)}-w_{i}\mathbf{a}^{(i)})^{\text{T}}\cdot\mathbf{M}^{(i)}\cdot( \mathbf{b}^{(i)}-w_{i}\mathbf{a}^{(i)})\right)\] \[\propto\exp\left[\left(j_{i}-\mathbf{a}^{(i)\text{T}}\cdot \mathbf{M}^{(i)}\cdot\mathbf{b}^{(i)}\right)w_{i}-\frac{1}{2}\left(Q_{ii}- \mathbf{a}^{(i)\text{T}}\cdot\mathbf{M}^{(i)}\cdot\mathbf{a}^{(i)}\right)w_{ i}^{2}\right], \tag{48}\]
where in going from the fourth to the fifth line we have made use of the third property of the uniform prior of \(\mathbf{w}\), and from the sixth to the seventh line we have used that \(\mathbf{M}^{(i)}\) is symmetric, so that \(\mathbf{b}^{(i)\text{T}}\cdot\mathbf{M}^{(i)}\cdot\mathbf{a}^{(i)}=\mathbf{a} ^{(i)\text{T}}\cdot\mathbf{M}^{(i)}\cdot\mathbf{b}^{(i)}\). We see that the marginalized posterior of \(w_{i}\) is a Gaussian function of \(w_{i}\). The mean, \(\mu_{i}\), and the standard deviation, \(\sigma_{i}\), of \(w_{i}\) can be read from the marginalized posterior of \(w_{i}\) readily, namely
\[\mu_{i} =\frac{j_{i}-\mathbf{a}^{(i)\text{T}}\cdot\mathbf{M}^{(i)}\cdot \mathbf{b}^{(i)}}{Q_{ii}-\mathbf{a}^{(i)\text{T}}\cdot\mathbf{M}^{(i)}\cdot \mathbf{a}^{(i)}}, \tag{49}\] \[\sigma_{i} =\left(Q_{ii}-\mathbf{a}^{(i)\text{T}}\cdot\mathbf{M}^{(i)}\cdot \mathbf{a}^{(i)}\right)^{-\frac{1}{2}}.\]
The marginalization procedure described above, and in particular, the two equations presented here, are some of the key results of this paper. Note that \(\mu_{i}\) is the value of \(w_{i}\) that maximizes the marginalized posterior of \(w_{i}\), but it is not a component of the maximum-posterior \(\mathbf{w}\), the latter of which is defined as
\[\mathbf{w}_{\text{MP}}=\arg\max_{\mathbf{w}}p(\mathbf{w}|\{C\},H_{\ell^{(\text{ int})}_{\text{max}}}). \tag{50}\]
However, in the large signal-to-noise ratio limit, \(\mu_{i}\) can be regarded as a good approximation of the \(i\)-th component of \(\mathbf{w}_{\text{MP}}\) because the covariance of different components of \(\mathbf{w}\) can be ignored.
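Equation (49) can be evaluated directly for every component. The following sketch loops over \(i\), deletes the corresponding row and column of \(\mathbf{Q}\), and inverts the reduced matrix, which is practical because \(\mathbf{Q}\) has only \((\ell_{\max}^{\text{(inf)}}+1)^{2}\) rows.

```python
import numpy as np


def marginalized_mean_std(j, Q):
    """Mean and standard deviation of the Gaussian marginalized posterior of
    each w_i (Eq. (49)), given the constant vector j and matrix Q of Eq. (41)."""
    dim = len(j)
    mu = np.empty(dim)
    sigma = np.empty(dim)
    for i in range(dim):
        keep = np.arange(dim) != i
        a = Q[i, keep]                            # a^(i): i-th row of Q, entry i removed
        b = j[keep]                               # b^(i): j with the i-th entry removed
        M = np.linalg.inv(Q[np.ix_(keep, keep)])  # M^(i): inverse of the reduced Q
        denom = Q[i, i] - a @ M @ a
        mu[i] = (j[i] - a @ M @ b) / denom
        sigma[i] = denom ** -0.5
    return mu, sigma
```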
### Bayes factor
Given a large enough \(\Delta^{(i)}\), the Bayes factor between the two hypotheses defined in Eq. (26) can also be analytically evaluated in a similar manner. To calculate the Bayes factor, we need to evaluate \(p(\{C\}|H_{\ell^{(\text{int})}_{\text{max}}})\) and \(p(\{C\}|H_{\text{null}})\). We first evaluate \(p(\{C\}|H_{\ell^{(\text{int})}_{\text{max}}})\) using Bayes theorem,
\[p(\{C\}|H_{\ell^{(\text{int})}_{\text{max}}})=\int d\mathbf{w}\;p (\{C\}\,|\mathbf{w},H_{\ell^{(\text{int})}_{\text{max}}})\;p(\mathbf{w}|H_{\ell^ {(\text{int})}_{\text{max}}})\] \[=\bar{\mathcal{N}}\prod_{i=1}^{(\ell^{(\text{int})}_{\text{max}}+1 )^{2}}\int_{-\Delta^{(i)}}^{\Delta^{(i)}}\frac{dw_{i}}{2\Delta^{(i)}}\exp\left( \mathbf{j}^{\text{T}}\cdot\mathbf{w}-\frac{1}{2}\mathbf{w}^{\text{T}}\cdot \mathbf{Q}\cdot\mathbf{w}\right)\] \[\approx\bar{\mathcal{N}}\prod_{i=1}^{(\ell^{(\text{int})}_{\text{ max}}+1)^{2}}\int_{-\infty}^{+\infty}\frac{dw_{i}}{2\Delta^{(i)}}\exp\left( \mathbf{j}^{\text{T}}\cdot\mathbf{w}-\frac{1}{2}\mathbf{w}^{\text{T}}\cdot \mathbf{Q}\cdot\mathbf{w}\right)\] \[=\frac{\bar{\mathcal{N}}}{|\mathbf{Q}|^{1/2}\Delta(\ell^{(\text{ inf})}_{\text{max}})}\left(\frac{\pi}{2}\right)^{\frac{(\ell^{(\text{int})}_{\text{max}}+1)^{2}}{2}} \exp\left(\frac{1}{2}\mathbf{j}^{\text{T}}\cdot\mathbf{Q}^{-1}\cdot\mathbf{j} \right), \tag{51}\]
where from the third to the fourth line we have again made use of the third property of the uniform prior of \(\mathbf{w}\), \(|\mathbf{Q}|\) is the determinant of \(\mathbf{Q}\), and
\[\Delta(\ell^{(\text{inf})}_{\text{max}})=\prod_{i=1}^{[\ell^{(\text{int})}_{ \text{max}}+1]^{2}}\Delta^{(i)}. \tag{52}\]
When the hypothesis is \(H_{\text{null}}\), the evidence simplifies significantly, as we show below:
\[p(\{C\}|H_{\text{null}}) =\int d\mathbf{w}\;p(\{C\}\,|\mathbf{w},H_{\text{null}})\;p( \mathbf{w}|H_{\text{null}})\] \[=\int d\mathbf{w}\;p(\{C\}\,|\mathbf{w}=\mathbf{0},H_{\ell^{(\text{ int})}_{\text{max}}})\;p(\mathbf{w}|H_{\ell^{(\text{int})}_{\text{max}}})\] \[=p(\{C\}\,|\mathbf{w}=\mathbf{0},H_{\ell^{(\text{int})}_{\text{ max}}})\] \[=\bar{\mathcal{N}}. \tag{53}\]
Thus, the Bayes factor can be analytically evaluated as
\[\mathcal{B}(\ell_{\max}^{\rm(inf)}|\{\Delta^{(i)}\})\] \[=\frac{1}{|\mathbf{Q}|^{1/2}\Delta(\ell_{\max}^{\rm(inf)})}\left( \frac{\pi}{2}\right)^{\frac{|\ell_{\max}^{\rm(inf)}+1|^{2}}{2}}\exp\left(\frac{ 1}{2}\mathbf{j}^{\rm T}\cdot\mathbf{Q}^{-1}\cdot\mathbf{j}\right). \tag{54}\]
At this juncture, a word of caution is necessary. Equation (54) is valid only if a large enough \(\Delta^{(i)}\) is chosen, because otherwise one cannot extend the limits of integration in the fifth line of Eq. (48) and in the third equality of Eq. (51). Apart from this criterion, the width of the prior of \(\mathbf{w}\) is arbitrary, which means that the Bayes factor is also, in this sense, arbitrary. This is because the Bayes factor depends on the prior volume of the parameters that characterize the hypothesis that is being compared. Thus, when computing the Bayes factor using Eq. (54), one should also be careful to report the chosen \(\Delta^{(i)}\). This is also the reason why the Bayes factor in Eq. (54) is written as \(\mathcal{B}(\ell_{\max}^{\rm(inf)}|\{\Delta^{(i)}\})\), to emphasize its dependence on both \(\ell_{\max}^{\rm(inf)}\) and \(\Delta^{(i)}\), both specified according to our hypothesis. As we will show in the next section, however, for any reasonably large choice of \(\Delta^{(i)}\), the effects of the value of \(\Delta^{(i)}\) on the Bayes factor are not significant and will not affect the ranking between the two hypotheses. Therefore, whether we choose \(\Delta^{(i)}=1\) or \(\Delta^{(i)}=10\), both of which are much larger than the astrophysically motivated value of \(\mathcal{P}_{\ell m}\), corresponding to \(|\mathcal{P}_{\ell m}|\sim\mathcal{O}(10^{-48})\), our conclusions will be unaffected.
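Equation (54) is likewise a closed-form expression; a minimal sketch of its natural logarithm, using a log-determinant for numerical stability and taking the prior half-widths \(\Delta^{(i)}\) as explicit inputs, is:

```python
import numpy as np


def log_bayes_factor(j, Q, delta):
    """Natural log of the Bayes factor of Eq. (54).

    j, Q  : the vector and matrix of Eq. (41)
    delta : array of prior half-widths Delta^(i), one per component of w
    """
    dim = len(j)
    _, logdetQ = np.linalg.slogdet(Q)              # log |Q|
    quad = 0.5 * j @ np.linalg.solve(Q, j)         # (1/2) j^T Q^{-1} j
    return (quad - 0.5 * logdetQ - np.sum(np.log(delta))
            + 0.5 * dim * np.log(np.pi / 2.0))
```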
Let us conclude this section by pointing out that the above calculations can be easily extended to a detector network that contains more detectors. To apply the method to a detector network, one just sums over the detector pairs when calculating the following quantities [87]
\[\mathbf{j} =\sum_{I}\sum_{J>I}\mathbf{j}^{(IJ)}, \tag{55}\] \[\mathbf{Q} =\sum_{I}\sum_{J>I}\mathbf{Q}^{(IJ)},\]
where \(\mathbf{j}^{(IJ)}\) and \(\mathbf{Q}^{(IJ)}\) are, respectively, the \(\mathbf{j}\) vector and the \(\mathbf{Q}\) matrix of the detector pair \(I\) and \(J\) (c.f. Eq. (41)).
## V Mock data analysis
In this section, we illustrate the accuracy of our analysis in extracting the angular structures of a GWB by applying it to mock data. We will first explain the general set-up of different mock data analyses. Then, we will apply our analysis to different sets of mock data, each corresponding to a different level of anisotropy. We will show that our analysis can extract the angular structure of different types of anisotropic sources with excellent accuracy.
### General set up
As the likelihood (Eq. (23)) does not explicitly depend on the strain data measured by individual detectors but on their correlation, we follow [75] and directly simulate the cross-spectral density of data segments in the frequency domain,
\[C_{\rm inj}(f,t)=C_{n}(f,t)+H_{\alpha}(f)\sum_{\ell=0}^{\ell_{\max}^{\rm( inj)}}\sum_{m=-\ell}^{\ell}\gamma_{\ell m}(f,t)\mathcal{P}_{\ell m}^{\rm( inj)}. \tag{56}\]
In this expression, \(\ell_{\max}^{\rm(inj)}\) and \(\mathcal{P}_{\ell m}^{\rm(inj)}\) are, respectively, the maximum \(\ell\) and the spherical-harmonic components of the simulated GWB contained in the mock data. Note that \(\ell_{\max}^{\rm(inj)}\) is in general different from \(\ell_{\max}^{\rm(inf)}\) because the maximum \(\ell\) that a GWB corresponds to can, in general, be different from the maximum \(\ell\) that we choose to infer. Throughout the mock data analyses, we choose \(\ell_{\max}^{\rm(inf)}=1,2,...,10\), but in general \(\ell_{\max}^{\rm(inf)}\) can be freely adjusted for analyzing actual data.
We study the effects of noise fluctuations by including \(C_{n}(f,t)\) in the injected cross-spectral density of data segments in Eq. (56). In particular, \(C_{n}(f,t)\) represents the cross-spectral density of the stationary Gaussian noise contained in the data. We simulate \(C_{n}(f,t)\) by generating a random complex frequency sequence of zero mean and variance that satisfies [75]
\[\left\langle|C_{n}(f,t)|^{2}\right\rangle-|\langle C_{n}(f,t)\rangle|^{2}\approx\frac{N_{I}^{\rm(n)}(f,t)N_{J}^{\rm(n)}(f,t)}{\tau\Delta f}, \tag{57}\]
where recall that \(\tau\) is the length of the data segments, \(\Delta f\) is the frequency resolution and \(N_{I,J}^{\rm(n)}(f,t)\) are the noise PSDs of the detectors \(I\) and \(J\) respectively. Since the measured strain data contain both the instrumental noise and the signal when a GWB is present, the PSD of the strain data measured by individual detectors will contain both the instrumental-noise PSD, \(N_{I,J}^{\rm(n)}(f,t)\), and the auto-correlated power of the responses due to a GWB,
\[N_{I,J}(f,t)=N_{I,J}^{\rm(n)}(f,t)+S_{h}(f,t). \tag{58}\]
Hence, in practice, this \(N_{I,J}^{\rm(n)}\) is not the same as the PSD in Eq. (23). These two PSDs are extremely difficult to separate in an actual detection. Since we expect the signal to be weak, we just regard the measured strain PSD as the noise PSDs for the evaluation of the likelihood, at the cost of slightly reducing our search sensitivity [88]. To account for this effect, in our mock-data analyses we include both PSDs in our search when evaluating the likelihood and marginalized posteriors, but we only include \(N_{I,J}^{\rm(n)}(f,t)\) when simulating \(C_{n}(f,t)\).
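A minimal sketch of the simulation step of Eqs. (56) and (57) is shown below; the real and imaginary parts of the noise are drawn with half the total variance each, and the array shapes are assumptions of the sketch.

```python
import numpy as np


def simulate_cross_spectral_density(H, gamma, P_inj, N_I, N_J, tau, df, rng=None):
    """Simulate C_inj(f,t) of Eq. (56): Gaussian noise with the variance of
    Eq. (57) plus the injected anisotropic signal.

    gamma      : complex array, shape (n_seg, n_f, dim)
    P_inj      : complex array, shape (dim,), injected spherical-harmonic components
    N_I, N_J   : noise PSDs N^(n)_{I,J}(f,t), shape (n_seg, n_f)
    tau, df    : segment length and frequency resolution
    """
    rng = np.random.default_rng() if rng is None else rng
    var = N_I * N_J / (tau * df)                      # total variance, Eq. (57)
    C_n = (rng.normal(scale=np.sqrt(var / 2.0))
           + 1j * rng.normal(scale=np.sqrt(var / 2.0)))
    signal = H[None, :] * np.einsum("tfl,l->tf", gamma, P_inj)
    return C_n + signal
```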
Other properties of the injection are chosen to remain in line with current GWB searches with advanced LIGO and Virgo detectors [86, 74]. More specifically, for each mock analysis, we simulate data that consist of segments
of equal time length \(\tau=192\)s. Since these mock data analyses are meant to represent proof-of-principle demonstrations, we only simulate data measured by the advanced LIGO Hanford and Livingston detectors at their design sensitivity. The PSDs of the detectors are estimated with the exact frequency resolution of the cross-spectral density segments to avoid the need for coarse-graining data [11; 12; 13; 14]. As the mock data contain only stationary Gaussian noise and the responses induced by the simulated stationary GWB, we drop the time-dependence of the PSDs, so that \(N_{I,J}(f,t)=N_{I,J}(f)\), and we do not notch the data at particular frequency bins. We assume the data start at the starting time of the third observing run of the advanced LIGO and Virgo detectors. We also focus on simulating and searching for GWBs with \(\alpha=0\), \(2/3\) and \(3\) because GWBs characterized by these \(\alpha\) are under extensive search and correspond to astrophysically interesting sources. More explicitly, \(\alpha=0\) describes the GWB produced by cosmic strings formed during the end of cosmological inflation [45; 46; 47; 48; 49; 50; 51; 52; 53; 54; 55]. The spectral tilt \(\alpha=2/3\) characterizes the GWB produced by CBCs [16; 17; 18; 19], and \(\alpha=3\) approximately describes the GWB produced by supernovae [25; 26; 27; 28].
To gauge the accuracy of measuring \(w_{i}\) from the simulated data, for different \(i\), we define the measurement error in units of \(\sigma_{i}\) as
\[\delta_{i}=\frac{\mu_{i}-w_{i}^{\rm(inj)}}{\sigma_{i}}\,, \tag{59}\]
where recall that \(\mu_{i}\) is the mean of the marginalized posterior of \(w_{i}\), while \(\sigma_{i}\) is its standard deviation. If \(\delta_{i}=N\), then the best-fit \(w_{i}\) is \(N\sigma\) away from the injected value. Therefore, when \(\delta_{i}\) is close to zero, the recovered \(w_{i}\) is perfectly consistent with the injected \(w_{i}^{\rm(inj)}\). However, due to the presence of noise fluctuations, we expect that \(|\delta_{i}|\) can occasionally be as large as \(\sim 3\) (see e.g. [74], where the SNR of a GWB is \(3.6\), but one still cannot claim a detection). In what follows, we calculate the marginalized posterior of many parameters, but we will only show results (e.g. \(\delta_{i}\) and \(\sigma_{i}\)) for a subset of them.
### Pure-noise injection
We first apply our formalism to \(365\) days of mock data that contain only pure noise. The left panel of Fig. 1 shows \(\delta_{i}\) and the right panel the base-10 logarithm of \(\sigma_{i}\), both as a function of \(\ell_{\rm max}^{\rm(inf)}\). To illustrate, we show only the marginalized posterior of \(\mathcal{P}_{00},\mathcal{P}_{10},\mathcal{P}_{11}^{\rm Re},\mathcal{P}_{11}^{\rm Im},\mathcal{P}_{22}^{\rm Re}\) and \(\mathcal{P}_{22}^{\rm Im}\). Since the results of different \(\alpha\) are quantitatively the same, for illustration, we only show \(\alpha=2/3\), corresponding to the GWB formed by CBCs. First, we observe that for all \(\ell_{\rm max}^{\rm(inf)}\), \(|\delta_{i}|<3\); this means that \(\mu_{i}\) is consistent with \(w_{i}^{\rm(inj)}=0\) to \(3\sigma_{i}\), indicating that we can accurately pinpoint the fact that the mock data contain no GWB. Second, we observe that \(\sigma_{i}\) increases with \(\ell_{\rm max}^{\rm(inf)}\), which is expected. Increasing \(\ell_{\rm max}^{\rm(inf)}\) introduces more (unnecessary) free parameters whose measurement uncertainties correlate with those associated with the spherical-harmonic components of smaller \(\ell\), deteriorating the overall measurement accuracy.
We also check that \(\tilde{\mathbf{Q}}^{(i)}\) is numerically well-conditioned because the evaluation of \(\mu_{i}\) and \(\sigma_{i}\) involves the inverse of \(\tilde{\mathbf{Q}}^{(i)}\). To this end, we compute the individual condition
Figure 1: The measurement bias \(\delta_{i}\) (left, see Eq. (59) in the main text) and measurement uncertainty \(\sigma_{i}\) (right) of some \(\mathcal{P}_{\ell m}\), obtained by applying our analysis to one year of mock data, which contain solely stationary Gaussian noise, as a function of \(\ell_{\rm max}^{\rm(inf)}\). Note that \(\delta_{i}\) has been scaled by \(\sigma_{i}\) in its definition. For the purpose of illustration, we only show \(\delta_{i}\) and \(\sigma_{i}\) for \(\mathcal{P}_{00},\mathcal{P}_{10},\mathcal{P}_{11}\) and \(\mathcal{P}_{22}\). Observe that \(|\delta_{i}|<3\) for all \(\mathcal{P}_{\ell m}\), indicating that the results are consistent with the fact that the mock data contain no signal to \(3\sigma_{i}\) confidence.
number \(\kappa_{i}\) of the matrix \(\tilde{\mathbf{Q}}^{(i)}\), which is defined by 2
Footnote 2: This is not the usual definition of the condition number of a matrix, which is defined as the ratio between the eigenvalue of the largest modulus and that of the least modulus. The definition in this paper follows the convention in the literature of the search of anisotropic GWB, e.g., [73] and [89].
\[\kappa_{i}=\frac{\lambda_{i}^{\min}}{\lambda_{i}^{\max}}, \tag{60}\]
where \(\lambda_{i}^{\min}\) and \(\lambda_{i}^{\max}\) are the eigenvalues of \(\tilde{\mathbf{Q}}^{(i)}\) that have the smallest and largest modulus, respectively. A larger \(\kappa_{i}\) implies that \(\tilde{\mathbf{Q}}^{(i)}\) is easier to invert numerically and \(\kappa_{i}=0\) means that \(\tilde{\mathbf{Q}}^{(i)}\) is singular. Then, we define the overall condition number \(\kappa\) as
\[\kappa=\min_{i}\kappa_{i}. \tag{61}\]
Since \(\kappa\) is essentially the lower bound of \(\kappa_{i}\), a larger \(\kappa\) implies that \(\tilde{\mathbf{Q}}^{(i)}\) is easier to numerically invert for all \(i\). Figure 2 shows \(\kappa\) as a function of \(\ell_{\max}^{\rm(inf)}\) for \(\alpha=0,2/3\) and \(3\). Observe that, for \(\alpha=0,2/3\) and \(3\), \(\kappa>10^{-6}\) for \(\ell_{\max}^{\rm(inf)}=10\), the upper limit of \(\ell_{\max}\) considered throughout the paper. This means that \(\tilde{\mathbf{Q}}^{(i)}\) for all \(i\) and \(\alpha\) can be inverted within double precision without numerical issues3. To further check that \(\tilde{\mathbf{Q}}^{(i)}\) is properly inverted, we compute the max norm, the maximum of the modulus of the elements of a matrix, of the following error matrix,
Footnote 3: In principle, a regularization scheme, such as that presented in [73, 82, 90], can also be applied when inverting \(\tilde{\mathbf{Q}}^{(i)}\), but such regularization may bias results [72, 73, 91, 92, 93, 94].
\[E^{(i)}=I-\tilde{\mathbf{Q}}^{(i)}\mathbf{M}^{(i)}, \tag{62}\]
which should be a zero matrix if \(\mathbf{M}^{(i)}\) is exactly equal to the inverse of \(\tilde{\mathbf{Q}}^{(i)}\). We find that the max norm of \(E_{i}\) is at most \(10^{-10}\) for different \(i\) and \(\alpha\), confirming that \(\tilde{\mathbf{Q}}^{(i)}\) can be inverted within double precision without numerical issues.
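The two diagnostics used here, the condition numbers of Eqs. (60)-(61) and the max norm of the error matrix of Eq. (62), can be checked with a short routine such as the following sketch:

```python
import numpy as np


def condition_check(Q):
    """Condition numbers kappa_i of Eq. (60) and the max norm of the error
    matrix E^(i) of Eq. (62), for every reduced matrix Q^(i)."""
    dim = Q.shape[0]
    kappas, max_norms = [], []
    for i in range(dim):
        keep = np.arange(dim) != i
        Qi = Q[np.ix_(keep, keep)]
        eigs = np.abs(np.linalg.eigvals(Qi))
        kappas.append(eigs.min() / eigs.max())          # Eq. (60)
        E = np.eye(dim - 1) - Qi @ np.linalg.inv(Qi)    # Eq. (62)
        max_norms.append(np.abs(E).max())
    # kappa of Eq. (61) and the worst-case inversion error across all i
    return min(kappas), max(max_norms)
```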
### Time-independent dipole
We now validate our method by recovering a simulated time-independent dipole with \(\alpha=0,2/3\) and \(3\) from \(365\) days of mock data. The simulated dipole signals are motivated by the dipole produced by the peculiar motion of the solar-system barycenter relative to the cosmic rest frame 4. For all \(\alpha\), the non-zero spherical-harmonic components from the mock data injections are
Footnote 4: The orbit of the Earth around the Solar System barycenter induces a smaller time-dependent dipole signal, which requires special approaches to extract [95, 96, 97].
\[\mathcal{P}_{00}^{\rm(inj)} =4.69\times 10^{-46}, \tag{63}\] \[\mathcal{P}_{10}^{\rm(inj)} =-1.16\times 10^{-47},\] \[\mathcal{P}_{11}^{\rm(inj)} =(6.60+1.41i)\times 10^{-47},\]
and \(\ell_{\max}^{\rm(inj)}=1\). These spherical-harmonic components are chosen so that their value is significantly larger than the corresponding measurement uncertainty, facilitating the validation of our analysis. The monopole signal is included so that the intensity map is positive in all sky directions.
Figure 3 shows \(\delta_{i}\) and \(\sigma_{i}\) for \(\mathcal{P}_{00},\mathcal{P}_{10},\mathcal{P}_{11}^{\rm Re},\mathcal{P}_{11}^{\rm Im},\mathcal{P}_{22}^{\rm Re}\) and \(\mathcal{P}_{22}^{\rm Im}\) with \(\alpha=2/3\), obtained by analyzing the mock data with the simulated dipole signal. Observe that \(|\delta_{i}|<3\) for different \(\ell_{\max}^{\rm(inf)}\), which shows the robustness of our analysis in two ways. First, our analysis can correctly infer different \(\mathcal{P}_{\ell m}\) to \(3\sigma_{i}\) confidence. In other words, our analysis does not mistake the angular structure of \(\ell\leq\ell_{\max}^{\rm(inj)}\) with the angular structure of \(\ell_{\max}^{\rm(inj)}<\ell\leq\ell_{\max}^{\rm(inf)}\). Second, choosing different \(\ell_{\max}^{\rm(inf)}\) does not significantly affect our measurement of \(\mathcal{P}_{\ell m}^{\rm(inj)}\). Thus, one can adjust \(\ell_{\max}^{\rm(inf)}\) for the search of different GWBs without having to worry that the results will be significantly affected by this choice. Note that the measurement uncertainties for different \(i\) are slightly larger than those shown in Fig. 1 due to the contribution to the detectors' PSDs from the monopole of the simulated GWB.
### Galactic plane distribution
Our last mock data analysis concerns the GWB emitted by sources populating the galactic plane. For the mock-data challenge of the galactic-plane signal, we focus on \(\alpha=2/3\) because we expect that the results obtained with other choices of \(\alpha\) will be quantitatively similar. We choose to focus on \(\alpha=2/3\) because this spectral index corresponds to the background due to CBCs, the only type of GW sources that the Advanced LIGO and Virgo detectors have detected so far. To investigate the performance of our analysis when extracting anisotropic GWB signals of different signal-to-noise-ratio (SNR, \(\rho\)), we simulate galactic-plane signals of different SNRs but we reduce the total time length of the mock data of each SNR to 30 days. The measurement results from analyzing data of longer time length can be estimated by scaling the SNR, which is proportional to the square root of the integration time. The \(\mathcal{P}_{\ell m}\) of the galactic-plane signal that we simulate are
\[\mathcal{P}_{\ell m}^{\rm(inj)}=\epsilon\mathcal{P}_{\ell m}^{\rm(GP)}, \tag{64}\]
where \(\epsilon\) controls the overall amplitude (and SNR) of the galactic-plane signal and \(\mathcal{P}_{\ell m}^{\rm(GP)}\) are explicitly given in Appendix A. When choosing these \(\mathcal{P}_{\ell m}^{\rm(inj)}\), we set \(\ell_{\rm max}^{\rm(inj)}=7\), following [75], because this is sufficient to capture the fine angular structures of such a galactic-plane signal.
The intensity map of the simulated galactic signal is visualized in the top left panel of Fig. 4, produced using HEALPix[98, 99]. The brightness of the color map in all panels represents the intensity, and the intensity of all maps is scaled by a number such that the maximum intensity of each panel is normalized to one. The top-right, middle-left, middle-right, bottom-left and bottom-right panels show intensity maps when \(\epsilon=1,10^{0.5},10^{1},10^{1.5},10^{2}\) respectively, constructed using our analysis, with the spherical harmonic components of the recovered background taken to be the \(\mu_{i}\) of Eq. (49). As we increase \(\epsilon\), the SNR of the signal increases (see the top horizon axis of Fig. 5 for the monopole SNR of each \(\epsilon\)), and the reconstructed intensity map is increasingly consistent with the simulated intensity map. At \(\epsilon=10^{2}\), the reconstructed intensity map shows almost no visual differences from the original intensity map. The close consistency between the simulated and recovered intensity maps demonstrates the ability of our formalism to resolve detailed and sophisticated angular structures of GWBs.
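For reference, reconstructing an intensity map like those in Fig. 4 from the recovered means \(\mu_{i}\) only requires repacking them into harmonic coefficients. The sketch below assumes the component ordering used in the earlier sketches and relies on the reality condition of Eq. (33), which (up to the spherical-harmonic phase convention) matches the real-field convention assumed by `healpy.alm2map`.

```python
import numpy as np
import healpy as hp


def intensity_map(mu, ell_max, nside=32):
    """Reconstruct sum_lm P_lm Y_lm from the recovered means mu of Eq. (49),
    assuming the ordering [P_l0, Re P_lm (m>=1), Im P_lm (m>=1)]."""
    n0 = ell_max + 1
    lm_pairs = [(l, m) for m in range(1, ell_max + 1) for l in range(m, ell_max + 1)]
    alm = np.zeros(hp.Alm.getsize(ell_max), dtype=complex)
    for l in range(ell_max + 1):
        alm[hp.Alm.getidx(ell_max, l, 0)] = mu[l]        # real m = 0 components
    for k, (l, m) in enumerate(lm_pairs):
        # m > 0 components; m < 0 follow from the reality condition of Eq. (33)
        alm[hp.Alm.getidx(ell_max, l, m)] = mu[n0 + k] + 1j * mu[n0 + len(lm_pairs) + k]
    return hp.alm2map(alm, nside)
```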
Despite the close visual consistency, we also quantitatively assess the consistency between the simulated and reconstructed intensity maps by defining the match,
\[\mathcal{M}=\frac{\sum_{i}w_{i}^{\rm(inj)}\mu_{i}}{\sqrt{\sum_{i}\left(w_{i}^ {\rm(inj)}\right)^{2}}\sqrt{\sum_{i}\mu_{i}^{2}}}, \tag{65}\]
where \(w_{i}^{\rm(inj)}\) is the value of the real or imaginary parts of \(\mathcal{P}_{\ell m}^{\rm(inj)}\) corresponding to the index \(i\), and \(\mu_{i}\) is the recovered value, given by Eq. (49). A match closer to unity implies a more faithful recovery. If the reconstructed intensity map is identical to the simulated intensity map, \(\mathcal{M}=1\). Figure 5 shows \(\mathcal{M}\) of the simulated galactic-plane signal as a function of \(\epsilon\), with the top horizontal axis denoting the SNR of the monopole part of the simulated background of the corresponding \(\epsilon\). Observe that \(\mathcal{M}\) increases to \(\sim 1\) as \(\epsilon\) increases. This is reasonable because, as the background SNR increases, the angular structures of the simulated anisotropic background can also be more clearly detected. Moreover, when the SNR reaches \(\sim 200\), which is an SNR that can be achieved within a reasonable time frame with the next-generation detectors if a GWB is present [95], our analysis can recover the intensity map with a match very close to one, indicating its applicability to the realistic detection of a GWB.
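The match of Eq. (65) is simply a normalized dot product of the injected and recovered component vectors; a minimal sketch is:

```python
import numpy as np


def match(w_inj, mu):
    """Match of Eq. (65) between injected components w_inj and recovered means mu."""
    return np.dot(w_inj, mu) / (np.linalg.norm(w_inj) * np.linalg.norm(mu))
```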
Figure 4: The top, left panel shows the angular distribution of gravitational-wave backgrounds produced by sources populating the galactic plane, which we simulate assuming \(\alpha=2/3\) and that the signal lasts for 30 sidereal days. The rest of the figures are the recovered intensity maps from mock data containing signals of different strengths, as characterized by \(\epsilon\) (see Eq. (64) for definition). The signal-to-noise ratio of the monopole of the background when \(\epsilon=1\) is \(7.94\) and that when \(\epsilon=10^{2}\) is \(524\). To show the intensity contrast across different sky directions, the brightness of the color in all panels represents the intensity, and the intensity of all maps is scaled by a number such that the maximum intensity of the simulated map is normalized to one. Observe that, as the signal-to-noise ratio of the gravitational-wave background increases, our analysis can recover an intensity map that is increasingly accurate and consistent with the simulated angular distribution. Moreover, when the signal-to-noise ratio of the monopole part of the background has reached \(\sim 10^{2}\), the reconstructed intensity maps show almost no visual difference relative to the simulated map. This close consistency shows that our formalism is capable of resolving detailed and sophisticated angular structures of a gravitational-wave background.
Besides the reconstruction of the intensity map, we also compute the Bayes factor between the hypotheses that there is an anisotropic GWB in the signal and that there is only noise (see Eq. (26)), given an injection of an anisotropic GWB from the mock galactic-plane signal. The left panels of Fig. 6 show the natural logarithm of the Bayes factor as a function of \(\ell_{\rm max}^{\rm(inf)}\), obtained by analyzing the galactic-plane signals of different \(\epsilon\), choosing \(\Delta^{(i)}=1\) (inverted blue triangles) and \(\Delta^{(i)}=10\) (red triangles). Both of these choices of \(\Delta^{(i)}\) correspond to a prior of width much larger than the astrophysically motivated value of \(w_{i}\), which should be of \(\mathcal{O}(10^{-48})\) (see Figs. 1 and 3). The dashed vertical line denotes the \(\ell_{\rm max}^{\rm(inj)}\) of the simulated galactic-plane signal. Observe that, in general, for all \(\epsilon\), \(\log\mathcal{B}(\ell_{\rm max}^{\rm(inf)}|\Delta^{(i)}=1)\) is slightly larger than \(\log\mathcal{B}(\ell_{\rm max}^{\rm(inf)}|\Delta^{(i)}=10)\) because the former has a narrower prior. Nonetheless, despite these slight differences, both choices of \(\Delta^{(i)}\) lead to a similar Bayes factor. This suggests that, for a reasonably large \(\Delta^{(i)}\), the explicit choice of the prior width does not significantly affect the Bayes factor and the hypothesis ranking for the search of anisotropic GWBs. In this sense, our analysis is robust against different choices of \(\Delta^{(i)}\), provided that \(\Delta^{(i)}\) is reasonably large. Individually, we observe that for a given \(\epsilon\), the Bayes factor first increases until it reaches a maximum at a given \(\ell_{\rm max}^{\rm(inf)}\), and then it decreases. Let us denote the \(\ell_{\rm max}^{\rm(inf)}\) that maximizes the Bayes factor by \(\ell_{\rm max}^{\mathcal{B}}\) and show it in Fig. 6 with a dotted vertical line. Observe further that \(\ell_{\rm max}^{\mathcal{B}}\) depends on \(\epsilon\). For a larger \(\epsilon\), corresponding to a louder signal, \(\ell_{\rm max}^{\mathcal{B}}\) is more consistent with \(\ell_{\rm max}^{\rm(inj)}\), until eventually \(\ell_{\rm max}^{\mathcal{B}}\) coincides with \(\ell_{\rm max}^{\rm(inj)}\) in the high signal-to-noise ratio scenario. This behavior is reasonable if one interprets \(\ell_{\rm max}^{\mathcal{B}}\) as the maximal resolvable angular scale of the background. As we increase \(\ell_{\rm max}^{\rm(inf)}\) up to \(\ell_{\rm max}^{\mathcal{B}}\), we are introducing more parameters in the model that are necessary for a more faithful description of the detectable anisotropic GWB signal. More precisely, even if we increase the number of inference parameters, the increase in the marginalized likelihood (the numerator of the Bayes factor) still compensates for the increase in the prior volume. Thus, the hypothesis that the detected GWB has non-zero \(\mathcal{P}_{\ell_{\rm max}^{\mathcal{B}}m}\) for at least one \(m\) between \(-\ell_{\rm max}^{\mathcal{B}}\) and \(\ell_{\rm max}^{\mathcal{B}}\) is increasingly favored by the data. But as we further increase \(\ell_{\rm max}^{\rm(inf)}\), the new model parameters are redundant because the detected background shows no resolvable angular structures on the corresponding angular scale.
The hypothesis that a GWB signal of \(\ell_{\rm max}^{\rm(inf)}>\ell_{\rm max}^{\mathcal{B}}\) is detected in the data is now _no longer_ better supported by the data than the hypothesis that the signal contains angular structure only up to \(\ell_{\rm max}^{\mathcal{B}}\), which explains the decrease. Finally, if the signal is louder, we can naturally detect the finer angular structures (corresponding to a larger \(\ell\)) of the simulated background more confidently. This explains the increasing consistency between \(\ell_{\rm max}^{\rm(inj)}\) and \(\ell_{\rm max}^{\mathcal{B}}\) as \(\epsilon\) increases, until \(\ell_{\rm max}^{\mathcal{B}}\) essentially coincides with \(\ell_{\rm max}^{\rm(inj)}\) in the high SNR limit, when \(\epsilon\) is large. This behavior could be used to decide which \(\ell_{\rm max}^{\rm(inf)}\) is suitable for a particular search, which is also consistent with the discussion in [75].
Apart from comparing \(H_{\ell_{\rm max}^{\rm(inf)}}\) against \(H_{\rm null}\), we can also compare \(H_{\ell_{\rm max}^{\rm(inf)}}\) against \(H_{\tilde{\ell}_{\rm max}^{\rm(inf)}}\), where \(\tilde{\ell}_{\rm max}^{\rm(inf)}\) is another maximum angular scale included in the inference. This can be done by computing the Bayes factor between \(H_{\ell_{\rm max}^{\rm(inf)}}\) and \(H_{\tilde{\ell}_{\rm max}^{\rm(inf)}}\), which is simply
\[\mathcal{B}_{\ell_{\rm max}^{\rm(inf)}}^{\tilde{\ell}_{\rm max}^{\rm(inf)}}=\frac{p(\{C\}|H_{\tilde{\ell}_{\rm max}^{\rm(inf)}})}{p(\{C\}|H_{\ell_{\rm max}^{\rm(inf)}})}=\frac{\mathcal{B}(\tilde{\ell}_{\rm max}^{\rm(inf)})}{\mathcal{B}(\ell_{\rm max}^{\rm(inf)})}. \tag{66}\]
If \(\mathcal{B}_{\ell_{\rm max}^{\rm(inf)}}^{\tilde{\ell}_{\rm max}^{\rm(inf)}}>1\), \(H_{\tilde{\ell}_{\rm max}^{\rm(inf)}}\) is favored by the data.
The right panels of Fig. 6 show \(\log\mathcal{B}_{\ell_{\rm max}^{\rm(inf)}}^{\tilde{\ell}_{\rm max}^{\rm(inf)}}\) for \(\tilde{\ell}_{\rm max}^{\rm(inf)}=1,4\) and \(7\) as a function of \(\ell_{\rm max}^{\rm(inf)}\), obtained by analyzing the galactic-plane signal of \(\epsilon=1\) (top right), \(10\) (middle right) and \(10^{2}\) (bottom right). We only show the results when \(\Delta^{(i)}=1\) because the \(\log\mathcal{B}_{\ell_{\rm max}^{\rm(inf)}}^{\tilde{\ell}_{\rm max}^{\rm(inf)}}\) of \(\Delta^{(i)}=10\) are qualitatively the same. From these panels, we observe
Figure 5: The match between the simulated intensity map of the gravitational-wave background and that recovered using the untargeted Bayesian search (see Eq. (65) for the definition). The lower horizontal axis represents the base-10 logarithm of \(\epsilon\), a proportionality constant that regulates the amplitude of the simulated gravitational-wave background. The upper horizontal axis represents the signal-to-noise ratio of the monopole of the simulated background of the corresponding \(\epsilon\). A match closer to one indicates a more faithful recovery of the intensity map of the gravitational-wave background using our analysis. Observe that as \(\epsilon\) increases (or equivalently, as the signal-to-noise ratio increases), \(\mathcal{M}\) also increases, showing that the recovered intensity map is increasingly accurate for louder signals. Moreover, the match is close to one when the monopole signal-to-noise ratio is about \(200\), which is a ratio that can be obtained if a GWB is detected with next-generation instruments. This suggests that our untargeted Bayesian search can indeed be applied to actual gravitational-wave detection in the future.
Figure 6: To rank the hypotheses that we have detected a gravitational-wave background having angular structures up to the angular scale \(\ell_{\rm max}^{\rm(inf)}\) (\(H_{\ell_{\rm max}^{\rm(inf)}}\)) from the data and that the data contain pure noise (\(H_{\rm null}\)), we compute the Bayes factor between \(H_{\ell_{\rm max}^{\rm(inf)}}\) and \(H_{\rm null}\) (left panels) and that between \(H_{\ell_{\rm max}^{\rm(inf)}}\) and \(H_{\ell_{\rm max}^{\rm(inf)}}\) (right panels), assuming different widths of the prior (\(\Delta^{(i)}\)). To facilitate the reading of the figures, we represent the maximal angular scale of the simulated background, \(\ell_{\rm max}^{\rm(inj)}=7\), with a dashed vertical line, and the angular scale at which the Bayes factor is maximized, \(\ell_{\rm max}^{\rm B}\), with a dotted vertical line. Observe that assuming different \(\Delta^{(i)}\) does not significantly affect the resulting logarithm of the Bayes factor, indicating that our analysis is robust against the choice of prior. Observe also that as the amplitude of the background increases, as characterized by \(\epsilon\), \(\ell_{\rm max}^{\rm B}\) is increasingly consistent with \(\ell_{\rm max}^{\rm(inj)}\), until they eventually coincide in the high signal-to-noise ratio scenario. This feature is reasonable if we interpret \(\ell_{\rm max}^{\rm B}\) as the maximum resolvable angular scale of the background. This pattern suggests that we can determine the angular scale that should be included in the inference analysis by locating the angular scale at which the Bayes factor is maximized, which is consistent with the finding of [75].
the following 4 patterns in the behavior of \(\log\mathcal{B}_{\ell_{\rm max}^{\rm(inf)}}^{\tilde{\ell}_{\rm max}^{\rm(inf)}}\) as a function of \(\ell_{\rm max}^{\rm(inf)}\):
1. \(\log\mathcal{B}_{\ell_{\rm max}^{\rm(inf)}}^{\tilde{\ell}_{\rm max}^{\rm(inf)}}\) _increases with \(\ell_{\rm max}^{\rm(inf)}\)_, e.g. when \(\epsilon=1\), indicating that \(H_{\tilde{\ell}_{\rm max}^{\rm(inf)}}\) with \(\tilde{\ell}_{\rm max}^{\rm(inf)}\leq\ell_{\rm max}^{\rm(inf)}\) is better preferred by the data.
2. \(\log\mathcal{B}_{\ell_{\rm max}^{\rm(inf)}}^{\tilde{\ell}_{\rm max}^{\rm(inf)}}\) _decreases with \(\ell_{\rm max}^{\rm(inf)}\)_, e.g. for \(\log\mathcal{B}_{\ell_{\rm max}^{\rm(inf)}}^{\tilde{\ell}_{\rm max}^{\rm(inf)}=1}\) when \(\epsilon=10\) and for \(\log\mathcal{B}_{\ell_{\rm max}^{\rm(inf)}}^{\tilde{\ell}_{\rm max}^{\rm(inf)}=1}\) and \(\log\mathcal{B}_{\ell_{\rm max}^{\rm(inf)}}^{\tilde{\ell}_{\rm max}^{\rm(inf)}=4}\) when \(\epsilon=10^{2}\), indicating that \(H_{\ell_{\rm max}^{\rm(inf)}}\) is better preferred by the data.
3. \(\log\mathcal{B}_{\ell_{\rm max}^{\rm(inf)}}^{\tilde{\ell}_{\rm max}^{\rm(inf)}}\) _first decreases, then increases with \(\ell_{\rm max}^{\rm(inf)}\) and changes sign at some intermediate \(\ell_{\rm max}^{\rm(inf)}\)_, e.g. for \(\log\mathcal{B}_{\ell_{\rm max}^{\rm(inf)}}^{\tilde{\ell}_{\rm max}^{\rm(inf)}=7}\) when \(\epsilon=10\), indicating that \(H_{\ell_{\rm max}^{\rm(inf)}}\) is preferred over \(H_{\tilde{\ell}_{\rm max}^{\rm(inf)}}\) for some intermediate \(\ell_{\rm max}^{\rm(inf)}\). In other words, \(H_{\tilde{\ell}_{\rm max}^{\rm(inf)}}\) is not the hypothesis most preferred by the data.
4. \(\log\mathcal{B}_{\ell_{\rm max}^{\rm(inf)}}^{\tilde{\ell}_{\rm max}^{\rm(inf)}}\) _first decreases, then increases with \(\ell_{\rm max}^{\rm(inf)}\) but remains non-negative_, e.g. for \(\log\mathcal{B}_{\ell_{\rm max}^{\rm(inf)}}^{\tilde{\ell}_{\rm max}^{\rm(inf)}=4}\) when \(\epsilon=10\) and for \(\log\mathcal{B}_{\ell_{\rm max}^{\rm(inf)}}^{\tilde{\ell}_{\rm max}^{\rm(inf)}=7}\) when \(\epsilon=10^{2}\), indicating that \(H_{\tilde{\ell}_{\rm max}^{\rm(inf)}}\) is the hypothesis that is best supported by the data.
By analyzing these patterns in the behavior of the log Bayes factor ratio shown in the right panels of Fig. 6, we again conclude that \(\mathcal{B}(\ell^{\rm(inf)}_{\rm max})\) peaks at an angular scale that is increasingly consistent with the maximum angular scale contained in the injected background, which is consistent with what we observed from Fig. 5 and the left panels of Fig. 6.
## VI Concluding remarks
In this paper, we presented a novel formalism to analytically marginalize the posterior of the spherical-harmonic components of the intensity map of a GWB in an untargeted Bayesian search. By prescribing a wide uniform prior for the real and imaginary parts of the spherical-harmonic components, we approximated the marginalized posterior (or likelihood) and Bayes factor as a Gaussian integral. The resulting marginalized posterior is also a Gaussian function. By reading off the mean and variance of the marginalized posterior, we can immediately determine the individual maximum posterior value of _many_ spherical-harmonic components of the angular distribution of a GWB and gauge the associated measurement uncertainties. We validated our formalism by applying it to recover various anisotropic GWBs injections. For each simulated anisotropic GWB, our analysis accurately extracted the angular structures of the GWB within a \(3\sigma\) interval. Furthermore, we are able to immediately evaluate the Bayes factor, which is largely unaffected by the width of the uniform prior. We showed that the Bayes factor is a reliable indicator of the angular scale that should be included in inference studies in a self-consistent way, which is also consistent with the findings of [75]. As the data products required for our analysis are similar and closely related to those used for existing spherical-harmonic decompositions of the actual data [73, 74, 77, 78, 82, 100], we expect that, with minor modifications, our analysis can be applied to actual data to efficiently extract GWB anisotropies along with other existing pipelines. Our analysis can also be applied to cross-check the results produced by other existing pipelines that search for anisotropic GWBs.
Our formalism presents several advantages in the detection of GWBs. First, our scheme makes possible Bayesian inference of a larger number of spherical-harmonic components of the angular distribution of a GWB in a reasonable timescale, leading to a much more model-independent Bayesian search of anisotropic GWBs. Prior to this work, in principle, we could treat all the spherical-harmonic components of interest as free parameters and attempt to infer them through Bayesian methods, but the computational cost and time needed to numerically sample the posterior would be huge [79]. To keep the computational time reasonable, previous Bayesian searches of anisotropic backgrounds either limited the number of spherical-harmonic components inferred (such as in [76]) or precomputed the spherical-harmonic components according to a given model and only inferred the overall amplitude of the anisotropic background (such as in [75]). By analytically marginalizing the posterior, we transform the problem into that of evaluating Gaussian integrals, greatly reducing the time needed to construct the marginalized posterior of spherical-harmonic components and compute the Bayes factor through Bayesian inference. The marginalized posterior of individual spherical-harmonic components can be used to construct an accurate intensity map of the GWB. The recovered intensity map can be compared with different GWB models, making the studies of GWBs more efficient. Second, our formalism is sufficiently flexible that it can be modified for the search of GWBs in various situations. Although this paper lays out the formalism of our method and presents a proof-of-principle analysis of synthetic data, considering only the joint detection of the LIGO Hanford and Livingston detectors, our approach can be straightforwardly extended to a network of detectors. Moreover, although this paper focused on searching for the GWB of a power-law spectrum, our approach can easily be adapted to the search for anisotropic GWBs of more sophisticated energy densities, such as those described by a broken-power law (such as in [101]).
Several aspects of our mock-data analyses differ from those carried out in real searches, but these differences do not undermine the performance of our method when applied to a future search. First, in our mock-data analyses, we only considered observations with the LIGO Hanford and Livingston detectors. In an actual search, the Virgo detector is operational, and while KAGRA is
currently under development, this detector will join the network soon. Moreover, next-generation detectors, such as Cosmic Explorer [102] and Einstein Telescope [103], are also being planned. Our formalism can be easily extended to include these detectors in a future analysis. With Virgo and future detectors included, the actual search sensitivity for the detection of a GWB will be greatly improved (assuming the LIGO-Virgo detectors are operating at their design sensitivity), which will also improve the accuracy and performance of our analysis. Hence, the results reported in this paper can be regarded as _conservative_ estimates of what the future may hold. Second, when performing the short-time Fourier transform of the actual data in the time domain, this data will be Hann-windowed to avoid spectral leakage [73; 95]. To account for the windowing, we need to multiply the mean and variance by windowing factors [104]. The full use of the windowed data will then require that any windowed segment has an overlap of 50% with the Hann window and then be optimally combined. In this paper, since we are simulating the data in the frequency domain, we did not need to apply these procedures, but the results we obtained should not be affected by working in the time domain. Third, the noise we considered in our mock-data challenges was stationary. In realistic data, non-stationary and/or non-Gaussian noise transients, also commonly known as "glitches", may occasionally occur and individual GW signals from CBCs may be present. When analyzing the actual data, data segments containing glitches and individual GW signals will be removed upon applying data-quality cuts [105; 106; 107; 108; 109]. Once these data segments are removed, our formalism can be applied as explained in this paper. Fourth, to fully demonstrate the accuracy of resolving the angular structures of GWBs with our method, we assumed strong GWB signals. In an actual detection scenario, we expect that GWBs to be much weaker. Nonetheless, the signal-to-noise ratio of a GWB detection is approximately proportional to the square root of the detection time [82; 83]. Thus, in an actual detection, as the integration time is long enough, in principle, we can accumulate a sufficiently large signal-to-noise ratio so that the angular structures of the GWB can be accurately resolved by our analysis.
Several adaptations or explorations of our method can be carried out in the future to facilitate its implementation and improve its efficiency in the search for anisotropic GWBs in actual data. First, when no confident detection of a stochastic background is made, it is insightful to derive the 95% upper limit on the angular power spectrum, i.e. the 95% confidence region of
\[C_{\ell}=\left(\frac{2\pi^{2}f_{\text{ref}}^{3}}{3H_{0}^{2}}\right)^{2}\frac{ 1}{2\ell+1}\sum_{m=-\ell}^{\ell}\left[|\mathcal{P}_{\ell m}^{\text{Re}}|^{2}+ |\mathcal{P}_{\ell m}^{\text{Im}}|^{2}\right]. \tag{67}\]
Since the individual \(\mathcal{P}_{\ell m}^{\text{Re}}\) and \(\mathcal{P}_{\ell m}^{\text{Im}}\) follow a Gaussian marginalized posterior whose mean is non-zero in general, as shown by our calculations, \(C_{\ell}\) follows a _generalized_ chi-squared distribution, which does not admit a simple closed-form analytic expression for its cumulative probability distribution function. Instead, numerical means are still required for constructing the cumulative probability distribution function of a generalized chi-squared distribution. Further effort must be devoted either to deriving analytic results or to developing efficient numerical schemes that rapidly reconstruct the upper limit on \(C_{\ell}\) when there is no GWB detection. Second, our analysis can be sped up further if we can reduce the number of marginalized posteriors that we need to construct. One possible way to reduce the number of marginalized posteriors is to make use of Clebsch-Gordan coefficients to parameterize the intensity map of a GWB [76]. However, the exponent of the likelihood in terms of Clebsch-Gordan coefficients becomes quartic in the relevant parameters. The analytical marginalization of such a posterior may be possible through an appropriate change of variables, but this requires further exploration. Third, the marginalization of the likelihood in joint inferences of a GWB and individually resolvable GW signals requires further investigation. As mentioned here and also pointed out by [76], a motivation to measure the angular structure of GWBs in a Bayesian way is its integration with the existing search of other GW signals, such as those emitted by CBCs. One formalism that is capable of simultaneously searching for GWBs and individual GW signals is the "master-likelihood" method (also known as the hyper-likelihood approach) [111; 112]. The marginalization of the master likelihood over the spherical-harmonic components is certainly worth exploring to unite the search approaches of different types of GW signals for search efficiency reasons. Finally, our formalism essentially assumes that we are searching for stationary GWBs. However, the kinematic dipole of a GWB induced by the proper motion of the Earth around the solar-system barycenter, a guaranteed anisotropic signal of GWBs [83; 96; 87], is time-dependent and requires a specially targeted method to implement in a search [95]. As this type of GWB signal varies over a timescale that is much longer than a sidereal day, we expect that our formalism can be straightforwardly adapted, say, by including this mild time dependence of the signal into the likelihood (Eq. (23)) before marginalization, to search for these GWB signals. Nonetheless, more exploration is still needed to determine the optimal way to modify our formalism to search for GWB signals with time dependence.
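Pending a faster analytic treatment, one simple numerical scheme for the \(C_{\ell}\) upper limit is a direct Monte Carlo over the Gaussian marginalized posteriors, as sketched below. The sketch assumes the components are independent (correlated components would instead be drawn from the full multivariate normal) and leaves the prefactor \(2\pi^{2}f_{\rm ref}^{3}/3H_{0}^{2}\) as an input.

```python
import numpy as np

def c_ell_posterior(means, stds, ell, prefactor=1.0, n_draws=200_000, seed=0):
    """Monte Carlo construction of the posterior of C_ell in Eq. (67).

    `means` and `stds` hold the marginalized Gaussian posteriors of the real
    and imaginary parts of the P_lm entering the m-sum for this ell."""
    rng = np.random.default_rng(seed)
    draws = rng.normal(means, stds, size=(n_draws, np.size(means)))
    c_ell = prefactor**2 / (2 * ell + 1) * np.sum(draws**2, axis=1)
    upper95 = np.quantile(c_ell, 0.95)   # 95% upper limit when no detection is made
    return np.sort(c_ell), upper95
```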
## Acknowledgements
The authors would like to thank Erik Floden, Vuk Mandic and Leo Tsukada for insightful discussions, Sharan Banagiri, Sanjit Mitra, and Joseph Romano for providing the spherical-harmonic components of the galactic plane for the mock data analyses, and Neil Cornish, Alexander Jenkins, Xavier Siemens, and Leo Tsukada for comments on the initial manuscript. NY and AC acknowledge support from the Simons Foundation through Award No.
896696 and the NSF through award PHY-2207650. The numerical results reported in this paper were produced using the workstation of the CUHK GW working group and the Illinois Campus Cluster, a computing resource that is operated by the Illinois Campus Cluster Program (ICCP) in conjunction with NCSA, and is supported by funds from the University of Illinois at Urbana-Champaign.
## Appendix A \(\mathcal{P}_{\ell m}\) of the galactic-plane signal injection
Below we provide the \(\mathcal{P}_{\ell m}\) for the mock galactic-plane signals that we simulated. The \(\mathcal{P}_{\ell m}\) are ordered by \(m\), in accordance with the convention of HEALPix. The relative intensity map is not altered if one scales all \(\mathcal{P}_{\ell m}\) by the same constant.
\[\mathcal{P}_{00}^{\rm(GP)} =3.12\times 10^{-48},\]
\[\mathcal{P}_{10}^{\rm(GP)} =-1.92\times 10^{-50},\]
\[\mathcal{P}_{20}^{\rm(GP)} =1.28\times 10^{-49},\]
\[\mathcal{P}_{30}^{\rm(GP)} =-1.78\times 10^{-49},\]
\[\mathcal{P}_{40}^{\rm(GP)} =-1.03\times 10^{-49},\]
\[\mathcal{P}_{50}^{\rm(GP)} =-8.89\times 10^{-50},\]
\[\mathcal{P}_{60}^{\rm(GP)} =-3.63\times 10^{-49},\]
\[\mathcal{P}_{70}^{\rm(GP)} =-4.82\times 10^{-50},\]
\[\mathcal{P}_{11}^{\rm(GP)} =-2.90\times 10^{-52}-5.54\times 10^{-50}i,\]
\[\mathcal{P}_{21}^{\rm(GP)} =-9.05\times 10^{-49}-1.20\times 10^{-49}i,\]
\[\mathcal{P}_{31}^{\rm(GP)} =-1.39\times 10^{-50}+2.90\times 10^{-49}i,\]
\[\mathcal{P}_{41}^{\rm(GP)} =-5.72\times 10^{-50}+9.50\times 10^{-50}i,\]
\[\mathcal{P}_{51}^{\rm(GP)} =6.89\times 10^{-51}-6.96\times 10^{-50}i,\]
\[\mathcal{P}_{61}^{\rm(GP)} =3.86\times 10^{-51}-4.58\times 10^{-50}i,\]
\[\mathcal{P}_{71}^{\rm(GP)} =-2.10\times 10^{-50}+3.32\times 10^{-50}i,\]
\[\mathcal{P}_{22}^{\rm(GP)} =-9.95\times 10^{-49}-2.92\times 10^{-49}i,\]
\[\mathcal{P}_{32}^{\rm(GP)} =-8.70\times 10^{-50}-1.36\times 10^{-49}i,\]
\[\mathcal{P}_{42}^{\rm(GP)} =3.92\times 10^{-49}+8.90\times 10^{-50}i,\]
\[\mathcal{P}_{52}^{\rm(GP)} =7.52\times 10^{-50}-2.21\times 10^{-49}i,\]
\[\mathcal{P}_{62}^{\rm(GP)} =3.31\times 10^{-49}-2.03\times 10^{-50}i,\]
\[\mathcal{P}_{72}^{\rm(GP)} =3.46\times 10^{-50}-5.64\times 10^{-50}i,\]
\[\mathcal{P}_{33}^{\rm(GP)} =-1.85\times 10^{-50}+1.53\times 10^{-50}i,\]
\[\mathcal{P}_{43}^{\rm(GP)} =5.59\times 10^{-49}+9.57\times 10^{-50}i,\]
\[\mathcal{P}_{53}^{\rm(GP)} =1.14\times 10^{-50}-1.19\times 10^{-50}i,\]
\[\mathcal{P}_{63}^{\rm(GP)} =1.69\times 10^{-49}-1.29\times 10^{-50}i,\]
\[\mathcal{P}_{73}^{\rm(GP)} =-4.30\times 10^{-50}+1.01\times 10^{-49}i,\]
\[\mathcal{P}_{44}^{\rm(GP)} =3.46\times 10^{-49}+2.89\times 10^{-49}i,\]
\[\mathcal{P}_{54}^{\rm(GP)} =1.00\times 10^{-49}+1.47\times 10^{-49}i,\]
\[\mathcal{P}_{64}^{\rm(GP)} =-2.59\times 10^{-49}-1.33\times 10^{-49}i,\]
\[\mathcal{P}_{74}^{\rm(GP)} =-1.68\times 10^{-50}+5.37\times 10^{-50}i,\]
\[\mathcal{P}_{55}^{\rm(GP)} =1.29\times 10^{-50}+1.42\times 10^{-49}i,\]
\[\mathcal{P}_{65}^{\rm(GP)} =-3.35\times 10^{-49}-1.58\times 10^{-49}i,\]
\[\mathcal{P}_{75}^{\rm(GP)} =-7.04\times 10^{-50}-4.37\times 10^{-51}i,\]
\[\mathcal{P}_{66}^{\rm(GP)} =-1.66\times 10^{-48}-1.51\times 10^{-49}i,\]
\[\mathcal{P}_{76}^{\rm(GP)} =3.05\times 10^{-50}-1.95\times 10^{-49}i,\]
\[\mathcal{P}_{77}^{\rm(GP)} =-5.54\times 10^{-51}-1.72\times 10^{-49}i\]
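For reference, a possible way to turn these coefficients back into a sky map is sketched below with the `healpy` package (assumed available); the \(\ell_{\rm max}=7\) value follows the list above, while the map resolution is an arbitrary illustrative choice.

```python
import numpy as np
import healpy as hp

def galactic_plane_map(plm, lmax=7, nside=32):
    """Rebuild a HEALPix intensity map from the coefficients listed above.

    `plm` maps (ell, m) with m >= 0 to the complex P_lm; healpy stores a_lm
    ordered by m, matching the ordering of the list above. A global rescaling
    of all coefficients leaves the relative map unchanged."""
    alm = np.zeros(hp.Alm.getsize(lmax), dtype=np.complex128)
    for (ell, m), value in plm.items():
        alm[hp.Alm.getidx(lmax, ell, m)] = value
    return hp.alm2map(alm, nside)

# A few of the coefficients above, as an example (units arbitrary):
plm_gp = {(0, 0): 3.12e-48, (1, 0): -1.92e-50, (2, 0): 1.28e-49,
          (1, 1): -2.90e-52 - 5.54e-50j, (2, 2): -9.95e-49 - 2.92e-49j}
gp_map = galactic_plane_map(plm_gp)
```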
|
2304.11357 | Learning Symbolic Representations Through Joint GEnerative and
DIscriminative Training | We introduce GEDI, a Bayesian framework that combines existing
self-supervised learning objectives with likelihood-based generative models.
This framework leverages the benefits of both GEnerative and DIscriminative
approaches, resulting in improved symbolic representations over standalone
solutions. Additionally, GEDI can be easily integrated and trained jointly with
existing neuro-symbolic frameworks without the need for additional supervision
or costly pre-training steps. We demonstrate through experiments on real-world
data, including SVHN, CIFAR10, and CIFAR100, that GEDI outperforms existing
self-supervised learning strategies in terms of clustering performance by a
significant margin. The symbolic component further allows it to leverage
knowledge in the form of logical constraints to improve performance in the
small data regime. | Emanuele Sansone, Robin Manhaeve | 2023-04-22T09:35:51Z | http://arxiv.org/abs/2304.11357v1 | # Learning Symbolic Representations Through Joint Generative and Discriminative Training
###### Abstract
We introduce **GEDI**, a Bayesian framework that combines existing self-supervised learning objectives with likelihood-based generative models. This framework leverages the benefits of both **GE**nerative and **DI**scriminative approaches, resulting in improved symbolic representations over standalone solutions. Additionally, GEDI can be easily integrated and trained jointly with existing neuro-symbolic frameworks without the need for additional supervision or costly pre-training steps. We demonstrate through experiments on real-world data, including SVHN, CIFAR10, and CIFAR100, that GEDI outperforms existing self-supervised learning strategies in terms of clustering performance by a significant margin. The symbolic component further allows it to leverage knowledge in the form of logical constraints to improve performance in the small data regime.
## 1 Introduction
Recently, neuro-symbolic learning has received attention as a new approach for integrating symbolic-based and sub-symbolic methods based on neural networks. This integration provides new capabilities in terms of perception and reasoning. Currently, neuro-symbolic solutions rely either on costly pre-training methods or on additional supervision at the symbolic representation level provided by the neural network, in order to effectively utilize subsequent learning feedback from the logical component (Manhaeve et al., 2018). This traditional top-down learning paradigm is subject to the problem of _representational collapse_. To gain a clearer understanding of the problem, imagine we have a tuple of three images, each of which contains a single digit (e.g., \(<3,5,8>\)). Along with this, we have information about the logical relationships between these digits (e.g., the third digit is the sum of the first two). Note that this task introduces less supervision compared to the digit addition experiment typically used in neuro-symbolic systems (i.e. the information about the sum is not provided). Current neuro-symbolic solutions can trivially satisfy this task by mapping all input data onto the same symbol 0, a logically valid but collapsed solution that carries no information about the digits.
In this study, we present a bottom-up representation learning approach that can naturally integrate with, and leverage the information in, logical constraints. We demonstrate that several existing self-supervised learning techniques and likelihood-based generative models can be unified within a coherent Bayesian framework called GEDI (Sansone & Manhaeve, 2022). The model leverages the complementary properties of discriminative approaches, which are suitable for representation learning, and of generative approaches, which capture information about the underlying density function generating the data, to learn better symbolic representations and support logical reasoning. Importantly, GEDI has two main advantages: it can be easily extended to the neuro-symbolic setting to address the collapse problem and it can also allow for learning symbolic representations in the small data regime, which is currently out of reach for existing self-supervised learning techniques.
## 2 GEDI Model
**Model.** Let us introduce the random quantities used in the model shown in Figure 1: (i) \(x\in\Omega\), where \(\Omega\) is a compact subset of \(\mathbb{R}^{d}\), represents a data vector drawn independently from an un
known distribution \(p(x)\) (for instance an image), (ii) \(x^{\prime}\in\Omega\) represents a transformed version of \(x\) using a stochastic data augmentation strategy \(\mathcal{T}(x^{\prime}|x)\) (obtained by adding for instance noisy or cropping the original image) (iii) \(\xi\in\mathbb{R}^{h}\) is the latent representation of an input data point obtained from an encoder network (the latent representation of the original image) (iv) \(w\in\mathcal{S}^{h-1}\), where \(\mathcal{S}^{h-1}\) is a \(h-1\) dimensional unit hypersphere, is the embedding vector of an input data point (obtained from the latent representation using a network called projection head), while (v) \(y\in\{1,\dots,c\}\) is the symbolic representation of an input data point defined over \(c\) categories (namely the cluster label obtained by an output layer defined over the embedding representation).
The corresponding probabilistic graphical model is given in Figure 1(a). Importantly, the generative process (solid arrows) is defined using the following conditional densities, namely: \(p(w|x)=\mathcal{N}(w|0,I)\), viz. a multivariate Gaussian with zero mean and identity covariance, \(p(\xi)=\mathcal{N}(\xi|0,I)\), \(p(x^{\prime}|x,\xi)=\mathcal{T}(x^{\prime}|x)\) and \(p(y|x)=\text{Softmax}(out(proj(enc(x))))\), where \(enc:\Omega\rightarrow\mathbb{R}^{h}\) is an encoder used to compute the latent representation, \(proj:\mathbb{R}^{h}\rightarrow\mathcal{S}^{h-1}\) is a projector head used to compute the embedding representation, and \(out\) computes the cosine similarity between the embedding representation \(w\) and the column vectors of a matrix of parameters \(U\in\mathbb{R}^{h\times c}\) known as the cluster centers/prototypes (Caron et al., 2020). The inference process (dashed arrows) is given by the following conditional densities: \(q(w|x)=\mathcal{N}(w|0,\Sigma)\), where \(\Sigma=\sum_{i=1}^{n}(w_{i}-\bar{w})(w_{i}-\bar{w})^{T}+\beta I\) is an unnormalized sample covariance matrix computed over the embedding representations of \(n\) data points, \(\beta\) is a positive scalar used to ensure that \(\Sigma\) is positive-definite, and \(\bar{w}=1/n\sum_{i=1}^{n}w_{i}\) is the mean of the embedding representations; \(q(\xi|x^{\prime})=\mathcal{N}(\xi|enc(x)-enc(x^{\prime}),I)\) assesses the level of invariance between the latent representation of the input data and its augmented version; finally \(q(y|x^{\prime})=\text{SK}(out(proj(enc(x^{\prime}))))\) defines a distribution over cluster/prototype assignments leveraging the Sinkhorn-Knopp algorithm (SK). Please refer to the work of Caron et al. (2020) for further details.
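As a rough illustration of the Sinkhorn-Knopp step used for \(q(y|x^{\prime})\), the following NumPy sketch follows the balanced-assignment recipe of Caron et al. (2020); the temperature and the number of iterations are illustrative choices, not the values used in this work.

```python
import numpy as np

def sinkhorn_assignments(scores, eps=0.05, n_iters=3):
    """Balanced soft cluster assignments from prototype scores.

    `scores` has shape (batch, c): cosine similarities between embeddings w and
    the prototype matrix U. The returned rows are SK-normalized distributions
    q(y|x'); the column marginals are pushed towards uniformity, which prevents
    all samples from being assigned to a single cluster."""
    q = np.exp(scores / eps).T          # shape (c, batch)
    q /= q.sum()
    c, b = q.shape
    for _ in range(n_iters):
        q /= q.sum(axis=1, keepdims=True)   # each prototype gets equal total mass
        q /= c
        q /= q.sum(axis=0, keepdims=True)   # each sample sums to 1/b
        q /= b
    return (q * b).T                    # shape (batch, c), rows sum to one
```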
**Objective**. Our training objective is based on an evidence lower bound on the negative entropy, derived from the probabilistic graphical model of Figure 1(a), namely:
\[E_{p(x)}\{\log p(x)\}\geq\underbrace{-CE(p,p_{\Psi})}_{\text{Generative term}}+\underbrace{\mathcal{L}_{NF}(\Theta)+\mathcal{L}_{DI}(\Theta)}_{\text{Self-supervised learning terms}} \tag{1}\]
where \(CE(p,p_{\Psi})\) is the cross-entropy between the unknown distribution \(p\) and a generative model \(p_{\Psi}\), equivalently seen as the negative data log-likelihood of the generative model \(p_{\Psi}\). We define \(p_{\Psi}(x)=e^{-u^{T}enc(x)}/\Gamma(\Psi)\) as an energy-based model, where \(\Psi\) includes both \(u\in\mathbb{R}^{h}\) and the encoder parameters. Additionally,
\[\mathcal{L}_{NF}(\Theta)=-\underbrace{\mathbb{E}_{p(x)}\{KL(q(w|x)\|p(w))\}}_{ \text{Decorrelation term}}-\underbrace{\mathbb{E}_{p(x)\mathcal{T}(x^{\prime}|x)} \{KL(q(\xi|x^{\prime})\|p(\xi))\}}_{\text{Invariance term}} \tag{2}\]
where the first and the second addends promote decorrelated features in the embedding representation and latent representations that are invariant to data augmentations, respectively. Finally,
\[\mathcal{L}_{DI}(\Theta)\geq\mathbb{E}_{p(x)\mathcal{T}(x^{\prime}|x)}\{ \mathbb{E}_{q(y|x^{\prime})}\{\log p(y|x;\Theta)\}+H_{q}(y|x^{\prime})\} \tag{3}\]
where \(H_{q}(y|x^{\prime})\) is the entropy computed over \(q(y|x^{\prime})\) and \(\Theta\) includes all parameters of the encoder, projector head and the output layer of our model. Intuitively, the first addend in Eq. 3 forces the symbolic representations of the input data and its augmented version to be similar, whereas the second addend enforces uniformity on the cluster assignments, so as to avoid all representations collapsing onto a single cluster. It is important to mention that the two objectives in Eqs. 2 and 3 are general enough to cover several proposed criteria in the literature of negative-free and cluster-based self-supervised learning (cf. Appendix A) (Sansone and Manhaeve, 2022). Interestingly, the objective in Eq. 1 provides a natural unification between generative and discriminative models based on self-supervised learning. Learning the GEDI model proceeds by maximizing Eq. 1 with standard gradient-based optimization (more details about the training procedure are provided in Appendix C).
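For intuition, a minimal PyTorch sketch of the two addends of Eq. 2 is given below, written as a quantity to minimize; the normalization of \(\Sigma\) and the relative weighting of the two terms are assumptions of this sketch, not the released implementation.

```python
import torch

def gedi_nf_loss(w, z, z_aug, beta=1e-4):
    """Sketch of the two addends of Eq. (2), with signs flipped into a loss.

    w:        (n, h) batch of embeddings on the unit hypersphere
    z, z_aug: (n, h) latent representations of the inputs and their augmentations
    beta:     regularizer making the covariance positive-definite, as in the text
    """
    n, h = w.shape
    w_c = w - w.mean(dim=0, keepdim=True)
    sigma = w_c.T @ w_c + beta * torch.eye(h, device=w.device)
    # KL( N(0, Sigma) || N(0, I) ): penalizes correlated / collapsed embeddings
    decorrelation = 0.5 * (torch.trace(sigma) - h - torch.logdet(sigma))
    # KL( N(z - z_aug, I) || N(0, I) ): penalizes augmentation-sensitive latents
    invariance = 0.5 * (z - z_aug).pow(2).sum(dim=1).mean()
    return decorrelation + invariance
```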
Figure 1: GEDI model. (a) shows the corresponding probabilistic graphical model (PGM). (b) shows the different modules of GEDI, namely the encoder, the projector head and an output module computing the cosine similarity between the embedding representation and the cluster centers.
## 3 Experiments
We perform experiments to evaluate the discriminative performance of GEDI and its competitors, namely an energy-based model JEM (Grathwohl et al., 2020), which is trained with persistent contrastive divergence (similarly to our approach) and 2 self-supervised baselines, viz. a negative-free approach based on Barlow Twins (Zbontar et al., 2021) and a discriminative one based on SwAV (Caron et al., 2020). The whole analysis is divided into two main experimental settings, the first one based on real-world data, including SVHN, CIFAR-10 and CIFAR-100, and the second one based on a neural-symbolic learning task in the small data regime constructed from MNIST. We use existing code both as a basis to build our solution and also to run the experiments for the different baselines. In particular, we use the code from Duvenaud et al. (2021) for training energy-based models and the repository from da Costa et al. (2022) for all self-supervised baselines. Implementation details as well as additional experiments are reported in the Appendices.
### Svhn, CIFAR-10, CIFAR-100
We consider three well-known computer vision benchmarks, namely SVHN, CIFAR-10 and CIFAR-100. We use a simple 8-layer ResNet for the backbone encoder for both SVHN and CIFAR-10 (around 1M parameters) and increase the hidden layer size for CIFAR-100 (around 4.1M parameters) following Duvenaud et al. (2021). We use an MLP with a single hidden layer for \(proj\) (the number of hidden neurons is twice the size of the input vector), and we choose \(h=256\) for CIFAR-100 and \(h=128\) for all other cases. Additionally, we use data augmentation strategies commonly used in the self-supervised learning literature, including color jitter and grayscale conversion, among others. We train JEM, Barlow, SwAV and GEDI for \(100\) epochs using the Adam optimizer with learning rate \(1e-4\) and batch size \(64\). Further details about the hyperparameters are available in Appendix F. We evaluate the clustering performance against the ground truth labels by using the Normalized Mutual Information (NMI) score.
We report all quantitative performance in Table 1. Specifically, we observe that JEM fails to solve the clustering task for all datasets. This is quite natural, as JEM is a purely generative approach, mainly designed to perform implicit density estimation. Barlow Twins achieves inferior performance to SwAV, due to the fact that it is not a cluster-based self-supervised learning approach. On the contrary, we observe that GEDI is able to outperform all other competitors, thanks to the exploitation of the complementary properties of both generative and self-supervised models. Indeed, the discriminative component in GEDI leverages the information about the underlying data manifold structure learnt by the generative part, thus improving the learning of the symbolic representation. In Appendix G we provide an ablation study to assess the importance of the different loss terms involved in Eq. 1. Additionally, we conduct experiments on linear probe evaluation, generation and OOD detection tasks commonly used in the literature of self-supervised learning and energy-based models. Results are reported in Appendix H.
### Neural-symbolic setting
For the final task, we consider applying the proposed method to a neural-symbolic setting. For this, we borrow an experiment from DeepProbLog Manhaeve et al. (2018). In this task, each example consists of three MNIST images such that the value of the last one is the sum of the first two, e.g. \(\mathbf{\boxed{E}}+\mathbf{\boxed{E}}=\mathbf{\boxed{E}}\). This can thus be considered a minimal neural-symbolic task, as it requires a minimal reasoning step (a single addition) on top of the image classification task. This task only
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline
**Dataset** & **JEM** & **Barlow** & **SwAV** & **GEDI** & **Gain** \\ \hline SVHN & 0.04 & 0.20 & 0.24 & **0.39** & **+0.15** \\ CIFAR-10 & 0.04 & 0.22 & 0.39 & **0.41** & **+0.02** \\ CIFAR-100 & 0.05 & 0.46 & 0.69\({}^{*}\) & **0.72** & **+0.03** \\ \hline \hline \end{tabular}
\end{table}
Table 1: Clustering performance in terms of normalized mutual information on test set (SVHN, CIFAR-10, CIFAR-100). Higher values indicate better clustering performance. We observe unstable training for SwAV on CIFAR-100. We report the best performance achieved out of 10 experiments.
contains positive examples and requires a minimal modification to the probabilistic graphical model, as shown in Figure 2. We use the inference mechanism from DeepProbLog to calculate the probability that this sum holds, and optimize this probability using a cross-entropy loss, which is added to the other loss functions. For this setting, this coincides with the Semantic Loss function (Xu et al., 2018). To be able to calculate the probability of this addition constraint, we need the classification probabilities for each digit.
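As an illustration of this computation, the sketch below (a minimal PyTorch version, not the DeepProbLog implementation itself) obtains the constraint probability by summing the joint digit probabilities over all assignments consistent with the third image.

```python
import torch

def addition_constraint_nll(p1, p2, p3, eps=1e-12):
    """Negative log-probability that digit3 = digit1 + digit2.

    p1, p2, p3: (batch, 10) class distributions predicted for the three images.
    The constraint probability marginalizes the predicted distributions over
    all digit assignments that satisfy the sum."""
    batch = p1.shape[0]
    p_sum = p1.new_zeros(batch, 19)              # possible sums 0..18
    for a in range(10):
        for b in range(10):
            p_sum[:, a + b] += p1[:, a] * p2[:, b]
    p_constraint = (p_sum[:, :10] * p3).sum(dim=1)   # only sums 0..9 can match digit3
    return -torch.log(p_constraint + eps).mean()
```

Note that predicting every digit as 0 makes this loss vanish, which is exactly the collapsed solution discussed next.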
It is an especially interesting use case for neural-symbolic learning, since when only the probability is optimized, the neural network tends to collapse onto the trivial solution of classifying each digit as a \(0\) (i.e. \(y_{1}=y_{2}=y_{3}=0\) in Figure 2). This is a logically correct but undesirable solution. Optimizing the discriminative objective should prevent this collapse. We hypothesize that a neural network can be trained to correctly classify MNIST digits by using both GEDI and the logical constraint. Since MNIST is an easy dataset, we focus on the small data regime, and see whether the logical constraint is able to provide additional information. The hyperparameters are identical to those used in Section 3.1; those that depend on the data regime are detailed in Appendix I.
We evaluate the model by measuring the accuracy and NMI of the ResNet model on the MNIST test dataset for different numbers of training examples. The results are shown in Table 2. Here, N indicates the number of addition examples, each of which contains \(3\) MNIST digits. As expected, the DeepProbLog baseline from Manhaeve et al. (2018) completely fails to classify MNIST images. It has learned to map all images to the class \(0\), as this results in a very low loss when considering only the logic, yielding an accuracy of \(0.10\) and an NMI of \(0.0\). The results also show that, without the NeSy constraint, the accuracy is low for all settings. The NMI is higher, however, and increases as there is more data available. This shows that the model is able to learn how to cluster from the data. However, it is unable to correctly classify, as there is no signal in the data that is able to assign the correct label to each cluster. By including the constraint loss, the accuracy improves, as the model now has information on which cluster belongs to which class. Furthermore, it also has a positive effect on the NMI, as we have additional information on the clustering which is used by the model. These results show us that the proposed method is beneficial for learning to correctly recognize MNIST images using only a weakly-supervised constraint, whereas other NeSy methods fail without additional regularization. Furthermore, we show that the proposed method can leverage the information offered by the constraint to further improve the NMI and classification accuracy.
\begin{table}
\begin{tabular}{r c c c c c c} \hline \hline & \multicolumn{2}{c}{**Without GEDI**} & \multicolumn{2}{c}{**Without constraint**} & \multicolumn{2}{c}{**With constraint**} \\
**N** & **Acc.** & **NMI** & **Acc.** & **NMI** & **Acc.** & **NMI** \\ \hline
100 & \(0.10\pm 0.00\) & \(0.00\pm 0.00\) & \(0.08\pm 0.03\) & \(0.28\pm 0.03\) & \(\mathbf{0.25\pm 0.03}\) & \(\mathbf{0.41\pm 0.03}\) \\
1000 & \(0.10\pm 0.00\) & \(0.00\pm 0.00\) & \(0.09\pm 0.02\) & \(0.47\pm 0.10\) & \(\mathbf{0.52\pm 0.26}\) & \(\mathbf{0.86\pm 0.06}\) \\
10000 & \(0.10\pm 0.00\) & \(0.00\pm 0.00\) & \(0.17\pm 0.12\) & \(0.68\pm 0.09\) & \(\mathbf{0.98\pm 0.00}\) & \(\mathbf{0.97\pm 0.01}\) \\ \hline \hline \end{tabular}
\end{table}
Table 2: The accuracy and NMI of GEDI on the MNIST test set after training on the addition dataset, both with and without the NeSy constraint. Additionally, we use DeepProbLog (Manhaeve et al., 2018) as a baseline without using our GEDI model. We trained each model 5 times and report the mean and standard deviation.
Figure 2: Neuro-symbolic task requiring to learn the correct symbolic representation of the digits given only a tuple of images and the corresponding logical constraint. In this setting, \(n=3\) data points and \(q\) is a Boolean random variable used to detect if the logical constraint is satisfied.
#### Acknowledgments
This research is funded by TAILOR, a project from the EU Horizon 2020 research and innovation programme under GA No 952215. This research also received funding from the Flemish Government under the "Onderzoeksprogramma Artificiele Intelligente (AI) Vlaanderen" programme.
|
2307.07065 | Energization of charged test particles in magnetohydrodynamic fields:
waves vs turbulence picture | Direct numerical simulations of 3D compressible MHD turbulence were performed
in order to study the relation between waves modes and coherent structures and
the consequent energization of test particles. Moreover, the question of which
is the main mechanism of this particle energization is rigorously discussed. In
particular, using the same initial conditions, we analyzed the non-linear and
linear evolution of a turbulent state along with the case of randomized phases.
Then, the behavior of the linear and non-linear simulations was compared
through the study of time evolution of particle kinetic energy and preferential
concentration. Also, spatio temporal spectra were used to identify the presence
of wave modes and quantify the fraction of energy around the MHD modes in
linear and non-linear simulations. Finally, the variation of the correlation
time of the external forcing is studied in detail along with the effect on the
particle energization (and clustering) and the presence of wave modes. More
specifically, particle energization tends to decrease when the fraction of
linear energy increases, supporting the idea that energization by structures is
the dominant mechanism for particle energization instead of resonating with
wave modes as suggested by Fermi energization theory. | F. Pugliese, M. Brodiano, N. Andrés, P. Dmitruk | 2023-07-13T21:14:20Z | http://arxiv.org/abs/2307.07065v1 | # Energization of charged test particles in magnetohydrodynamic fields: waves vs turbulence picture
###### Abstract
Direct numerical simulations of 3D compressible MHD turbulence were performed in order to study the relation between waves modes and coherent structures and the consequent energization of test particles. Moreover, the question of which is the main mechanism of this particle energization is rigorously discussed. In particular, using the same initial conditions, we analyzed the non-linear and linear evolution of a turbulent state along with the case of randomized phases. Then, the behavior of the linear and non-linear simulations were compared through the study of time evolution of particle kinetic energy and preferential concentration. Also, spatio temporal spectra were used to identify the presence of wave modes and quantify the fraction of energy around the MHD modes in linear and non-linear simulations. Finally, the variation of the correlation time of the external forcing is studied in detail along with the effect on the particle energization (and clustering) and the presence of wave modes. More specifically, particle energization tends to decrease when the fraction of linear energy increase, supporting the idea that energization by structures is the dominant mechanism for particle energization instead of resonating with wave modes as suggested by Fermi energization theory.
F. Pugliese (ORCID: 0000-0002-4880-7880)
## 1 Introduction
One of the first and most influential explanations for the origin of cosmic ray radiation was proposed by Enrico Fermi in 1949 (Fermi, 1949). In that work, he argued that particles could interact and resonate with passing Alfven waves to achieve high kinetic energies. This explanation was later refined into the Quasi Linear Theory (QLT), which gave clear conditions for this resonance (Stix, 1992). In terms of particle energization, QLT provides an estimation for diffusion coefficients in momentum space. However, these calculations rely on the assumption of weak turbulence (Chandran, 2008, 2005), where each field is decomposed as a mean background value plus a small amplitude fluctuation described as a collection of weakly interacting waves.
For plasmas in a fully developed turbulent regime, such as those present in astrophysical environments, these assumptions are not fulfilled and QLT is no longer adequate. Moreover, in the so-called strong turbulence regime, the main role in particle energization is played by self-consistent structures (Dmitruk
et al., 2004; Dmitruk & Matthaeus, 2006; Gonzalez et al., 2017; Lemoine, 2021; Pugliese & Dmitruk, 2022; Pezzi et al., 2022; Balzarini et al., 2022). Particles are able to exploit the electric fields present in these structures to obtain a net energization during a whole gyroperiod (Dmitruk et al., 2004; Pugliese & Dmitruk, 2022). In the presence of a guide field, this energization is mainly perpendicular for protons and other heavy ions. Recent theoretical (Lemoine, 2021) and numerical studies (Greco et al., 2014; Pezzi et al., 2022; Pugliese & Dmitruk, 2022) have shown that some structures are also able to capture protons, which greatly enhances their energization capabilities.
At larger spatial scales, the magnetohydrodynamic (MHD) model provides a comprehensive explanation of interplanetary space plasmas, including the solar wind and planetary magnetospheres (Goedbloed et al., 2010). Within this framework, two distinct descriptions of the turbulence phenomenon can be considered. On one hand, there is the wave description, in which field fluctuations are seen as a collection of waves allowed by the MHD approximation, such as Alfven waves or magneto-acoustic waves in the compressible case (see, Galtier, 2006). On the other hand, there is the well-known strong turbulence regime, in which a broadband spectrum of fluctuations, with coherent structures and intermittency, including non-propagating fluctuations, is present in wavenumber and frequency space (see, Pouquet & Yokoi, 2022). To analyze the wave versus strong turbulence regimes, a spatiotemporal spectrum analysis of the fields can be performed (see, Meyrand et al., 2015; Clark di Leoni et al., 2015). The scenario emerging from this analysis is one in which both the wave and turbulent behaviors of the fluctuations could coexist without contradictions (Dmitruk & Matthaeus, 2009). This is supported by in situ observational data, such as those of the solar wind, which show both wave and turbulent behaviors in fluctuations (Barnes et al., 1979; C. Y. Tu, 1995; Sahraoui et al., 2020). It is worth mentioning that the spatiotemporal analysis has been used to investigate incompressible hydrodynamic and incompressible and compressible MHD turbulence from experimental and numerical data (Clark di Leoni et al., 2014; Andres et al., 2017; Lugones et al., 2016, 2019; Brodiano et al., 2021; Kawachi et al., 2022).
Charged test particles provide a useful representation of a small fraction of charged particles in a plasma that can be energized by the fields; however, neglecting the feedback of the particles into the electromagnetic field - which would require a kinetic description of the plasma - limits their application. Despite this, charged test particles are commonly used to study the energization of charged particles in different scenarios. In any case, electromagnetic fields need to be prescribed and it is usual to do so in Fourier space (Dalena et al., 2012; Tautz & Dosch, 2013). Multiple models are available to provide the amplitudes simulating turbulent spectra such as Slab or 2D (Shalchi, 2009), while phases are usually taken as random and uncorrelated. For example, in solar wind a combination of 20% Slab / 80% 2D is realistic at 1AU heliocentric distance (Bieber et al., 1996). Another approach is to use electromagnetic fields obtained from the evolution of MHD equations (Dmitruk et al., 2004; Dalena et al., 2014; Gonzalez et al., 2016; Pugliese & Dmitruk, 2022).
One important question is whether the wave or strong turbulent picture of the background plasma is more pertinent for the energization of the charged particles. In the present work, we concentrate specifically on this issue, analyzing the behavior of charged test particles with different techniques to determine the effect of the full MHD evolution of the field versus an evolution where non-linearities are artificially suppressed, resulting in only linear (wave) behavior. Our study provides important insights into the behavior of charged particles in plasmas and how they are energized by the surrounding fields.
The paper is organized as follows: in section 2, we introduce the theoretical CMHD set of equations and we present the linearized and dimensionless versions. Then, we show the dispersion relations of the MHD wave modes, along with a brief explanation of the spatio-temporal spectrum technique. Finally, we present the equations for charged test particles and describe the Voronoi tessellation method, used to quantify the concentration of particles in space. In section 3, we describe the numerical set up for non-linear and linear simulations along with some details about phase randomization for the linear runs and particle integration. In section 4, we present our results. Finally, in section 5 we summarize our main findings.
## 2 Theory
### Compressible MHD equations
The three dimensional (3D) compressible MHD (CMHD) model is given by the mass continuity equation, the induction equation for the magnetic field, the momentum Navier-Stokes equation and a polytropic state equation (relating pressure and density) and involves fluctuations of the velocity field \(\mathbf{u}\), density \(\rho\) and magnetic field \(\mathbf{B}=\mathbf{B_{0}}+\mathbf{b}\), where \(\mathbf{B_{0}}\) is the mean field and \(\mathbf{b}\) is the fluctuating part,
\[\frac{\partial\rho}{\partial t}+\boldsymbol{\nabla}\cdot(\rho\mathbf{u})=0, \tag{1}\]
\[\frac{\partial\mathbf{B}}{\partial t}=\boldsymbol{\nabla}\times(\mathbf{u} \times\mathbf{B})+\eta\nabla^{2}\mathbf{B}, \tag{2}\]
\[\frac{\partial\mathbf{u}}{\partial t}+\mathbf{u}\cdot\boldsymbol{\nabla} \mathbf{u}=-\frac{\boldsymbol{\nabla}P}{\rho}+\frac{\mathbf{J}\times\mathbf{ B}}{4\pi\rho}+\frac{\mu}{\rho}\bigg{[}\nabla^{2}\mathbf{u}+\frac{\boldsymbol{ \nabla}(\boldsymbol{\nabla}\cdot\mathbf{u})}{3}\bigg{]}, \tag{3}\]
\[\frac{P}{\rho^{\gamma}}=\text{constant}. \tag{4}\]
Note that changing the absolute value of the magnetic field \(B_{0}\) modifies the relative amplitude between the initial fluctuations \(b\) and \(B_{0}\) (with the initial conditions otherwise untouched). In addition, \(P\) is the scalar isotropic pressure, \(\mathbf{J}=(c/4\pi)\boldsymbol{\nabla}\times\mathbf{B}\) is the electric current, \(\gamma=5/3\) is the polytropic index, and \(\mu\) and \(\eta\) are the dynamic viscosity and magnetic diffusivity, respectively. The main purpose of these last terms is to dissipate energy at scales smaller than MHD scales, while allowing us to study compressible effects at the largest scales with an adequate scale separation.
The set of equations (1)-(4) can be written in a dimensionless form in terms of a characteristic length scale \(L_{0}\), a mean scalar density \(\rho_{0}\) and pressure \(P_{0}\), and a typical magnetic and velocity field magnitude \(b_{rms}\) and \(v_{0}=b_{rms}/\sqrt{4\pi\rho_{0}}\) (i.e., the r.m.s. Alfven velocity), respectively. Then, the unit time is \(t_{0}=L_{0}/u_{rms}\), which for MHD becomes the Alfven crossing time. The resulting dimensionless equations are,
\[\frac{\partial\rho}{\partial t}+\boldsymbol{\nabla}\cdot(\rho\mathbf{u})=0, \tag{5}\]
\[\frac{\partial\mathbf{u}}{\partial t}+\mathbf{u}\cdot\boldsymbol{\nabla} \mathbf{u}=-\frac{1}{\gamma M_{s}^{2}}\frac{\boldsymbol{\nabla}P}{\rho}+ \frac{\mathbf{J}\times\mathbf{B}}{\rho}+\frac{1}{\rho R_{e}}\bigg{[}\nabla^{ 2}\mathbf{u}+\frac{\boldsymbol{\nabla}(\boldsymbol{\nabla}\cdot\mathbf{u})}{ 3}\bigg{]}, \tag{6}\]
\[\frac{\partial\mathbf{B}}{\partial t}=\boldsymbol{\nabla}\times(\mathbf{u} \times\mathbf{B})+\frac{1}{R_{m}}\nabla^{2}\mathbf{B}, \tag{7}\]
\[P=\rho^{\gamma} \tag{8}\]
where \(M_{s}=v_{0}/C_{s}\) is the sonic Mach number and \(C_{s}^{2}=\gamma P_{0}/\rho_{0}\) is the sound speed. The kinetic and magnetic nominal Reynolds numbers are also defined as \(R_{e}=L_{0}v_{0}/\nu\) and \(R_{m}=L_{0}v_{0}/\eta\), respectively, with \(\nu=\mu/\rho_{0}\) the kinematic viscosity. Considering a static equilibrium (\(u_{0}=0\)) with homogeneous external magnetic field \(\mathbf{B_{0}}=B_{0}\mathbf{\hat{z}}\), a constant density \(\rho_{0}\), and a constant pressure \(P_{0}\), we can linearize Eqs. (5)-(8),
\[\frac{\partial\rho}{\partial t}+\mathbf{\nabla}\cdot\mathbf{u}=0, \tag{9}\] \[\frac{\partial\mathbf{u}}{\partial t}=-\frac{1}{M_{s}^{2}}\mathbf{ \nabla}\rho+\mathbf{J}\times\mathbf{B}_{0}+\frac{1}{\rho_{0}R_{e}}\bigg{[} \nabla^{2}\mathbf{u}+\frac{\mathbf{\nabla}(\mathbf{\nabla}\cdot\mathbf{u})}{3}\bigg{]},\] (10) \[\frac{\partial\mathbf{b}}{\partial t}=\mathbf{\nabla}\times(\mathbf{ u}\times\mathbf{B}_{0})+\frac{1}{R_{m}}\nabla^{2}\mathbf{b}, \tag{11}\]
where the polytropic equation (4) was used to replace the pressure in the equations.
### Compressible MHD waves modes
In order to study the CMHD normal modes, we used Eqs. (9)-(11) and obtain the dispersion relation \(\omega(\mathbf{k})\) of small amplitude waves propagation in a plasma. It is straightforward to show that there are three independent propagating modes (or waves), which correspond to the so-called Alfven waves (A), fast (F), and slow (S) magnetosonic waves (e.g., Fitzpatrick, 2014),
\[\omega_{A}^{2}(k) =k_{\parallel}^{2}u_{A}^{2}, \tag{12}\] \[\omega_{F,S}^{2}(k) =k^{2}u_{A}^{2}\left[\frac{\left(1+\beta\right)}{2}\pm\sqrt{ \frac{\left(1+\beta\right)^{2}}{4}-\beta\left(\frac{k_{\parallel}}{k}\right)^ {2}}\right], \tag{13}\]
where \(\beta=\left(C_{s}/u_{A}\right)^{2}\) is the plasma beta, i.e., the ratio of plasma pressure to magnetic pressure, which can be expressed as \(\beta=1/(M_{s}B_{0})^{2}\), with \(u_{A}=B_{0}/\sqrt{4\pi\rho_{0}}\) the Alfven velocity, \(C_{s}\) as defined above and \(k=|\mathbf{k}|=\sqrt{k_{\parallel}^{2}+k_{\perp}^{2}}\), with \(\parallel\) and \(\perp\) denoting the wavenumber components along and perpendicular to the external magnetic field, respectively. It is worth noting that \(B_{0}\) is a parameter and has no relation with the initial fluctuations of the system. On one hand, Alfven waves are incompressible fluctuations transverse to the external magnetic guide field \(\mathbf{B}_{0}\). On the other hand, both fast and slow modes, unlike Alfven modes, carry density fluctuations and their magnetic field perturbations have longitudinal and transverse components. In the case of fast modes, the magnetic field and the plasma are compressed by the wave motion, such that the restoring force is large and hence the frequency and the propagation speed are high. For slow modes, in contrast, the magnetic field oscillation and the pressure variation are anti-correlated with each other, such that the restoring force acting on the medium is weaker than that for the fast mode. For this reason the frequency and the propagation speed are the lowest among the three MHD wave branches. Note that for perpendicular propagation (i.e., \(k_{\parallel}=0\) and \(k_{\perp}\neq 0\)), the Alfven and slow modes become non-propagating modes (i.e., \(\omega_{A,S}=0\)) and are degenerate, but they can be distinguished using their different polarization, since \(\delta B_{\parallel A}=0\) and \(\delta B_{\parallel S}\neq 0\) (Andres et al., 2017). Finally, it is worth mentioning that we adopt the assumption that energy concentrated close to the linear dispersion relation can be explained by linear and weak turbulence theories, while any spread away from those linear curves is a sign of strong turbulence that requires fully nonlinear theories to be understood (see, Andres et al., 2017; Brodiano et al., 2021).
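For concreteness, the short NumPy sketch below evaluates the three branches of Eqs. (12)-(13); the Alfven speed and plasma beta are free inputs of this example.

```python
import numpy as np

def cmhd_branches(k_par, k_perp, u_a=1.0, beta=0.25):
    """Frequencies of the Alfven, fast and slow branches (Eqs. 12-13)."""
    k_par, k_perp = np.asarray(k_par, float), np.asarray(k_perp, float)
    k = np.hypot(k_par, k_perp)
    omega_a = np.abs(k_par) * u_a
    half = 0.5 * (1.0 + beta)
    ratio2 = np.divide(k_par, k, out=np.zeros_like(k), where=k > 0) ** 2
    root = np.sqrt(half**2 - beta * ratio2)
    omega_f = k * u_a * np.sqrt(half + root)
    omega_s = k * u_a * np.sqrt(np.clip(half - root, 0.0, None))
    return omega_a, omega_f, omega_s
```

At perpendicular propagation (\(k_{\parallel}=0\)) the sketch returns \(\omega_{A}=\omega_{S}=0\), consistent with the degeneracy discussed above.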
### Spatio-temporal spectrum
The spatio-temporal spectrum consists of calculating the complete spectrum in wavenumber and frequency for all available Fourier modes in a numerical simulation or an experiment (Clark di Leoni et al., 2014, 2015). As a result, it can distinguish modes that satisfy a given dispersion relation (and are thus associated with waves) from those associated with nonlinear structures or turbulent eddies, and quantify the amount of energy carried by each of them at different spatial and temporal scales. It is worth mentioning that this method does not require the pre-existence of wave modes or eddies. In the following, the spatio-temporal magnetic energy spectral density tensor is defined as:
\[E_{ij}(\mathbf{k},\omega)=\frac{1}{2}\mathbf{\hat{B}}_{i}^{*}(\mathbf{k}, \omega)\mathbf{\hat{B}}_{j}(\mathbf{k},\omega), \tag{14}\]
where \(\mathbf{\hat{B}}_{i}(\mathbf{k},\omega)\) is the Fourier transform in space and time of the \(i\)-component of the magnetic field \(\mathbf{B}(\mathbf{x},t)\) and the asterisk implies the complex conjugate. The magnetic energy is associated with the trace of \(E_{ij}(\mathbf{k},\omega)\).
As the external magnetic field \(\mathbf{B}_{0}\) in the simulations points in \(\mathbf{\hat{z}}\), in practice, we will consider either \(i=j=y\) or \(i=j=z\), to identify different waves based on their polarization (either transverse or longitudinal with respect to the guide field). In all cases, the acquisition frequency was at least two times larger than the frequency of the fastest wave, and the total time of acquisition was larger than the period of the slowest wave in the system. It is worth mentioning that spatio-temporal spectra have been used before in numerical simulations and experiments of rotating turbulence (Clark di Leoni et al., 2014), stratified turbulence (Clark di Leoni & Mininni, 2015), quantum turbulence (Clark di Leoni et al., 2015), and IMHD turbulence simulations (Meyrand et al., 2015, 2016; Lugones et al., 2016), compressible MHD turbulence (Andres et al., 2017; Brodiano et al., 2021) and in spacecraft observations (Sahraoui et al., 2003, 2010). Quantifying the relative presence of waves and/or nonlinear structures is the main outcome expected from the spatio-temporal analysis. In particular, we use the spatio-temporal spectra to search for the presence of waves and quantify the energy found near their respective dispersion relation in a linear and nonlinear MHD turbulence.
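Schematically, and leaving aside windowing and normalization conventions, the spectral density of Eq. (14) can be obtained from a time series of snapshots as in the sketch below; the array shapes are assumptions of this example.

```python
import numpy as np

def spatio_temporal_spectrum(b_i, dt):
    """E_ii(k, omega) = |B_i(k, omega)|^2 / 2 for one field component.

    b_i: array of shape (n_t, n_x, n_y, n_z) with snapshots of B_i sampled
    every dt (at least twice the frequency of the fastest wave, over a total
    time longer than the slowest wave period, as described above)."""
    b_k_omega = np.fft.fftn(b_i, axes=(0, 1, 2, 3))
    energy = 0.5 * np.abs(b_k_omega) ** 2          # indexed by (omega, kx, ky, kz)
    omega = 2.0 * np.pi * np.fft.fftfreq(b_i.shape[0], d=dt)
    return energy, omega
```

Energy concentrated along the curves \(\omega(\mathbf{k})\) of Eqs. (12)-(13) is then attributed to wave modes, while the spread away from them quantifies the strong turbulence contribution.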
### Test particle equations
We studied charged particles evolving with the dynamical MHD fields, but those fields are unaffected by the particles (i.e., test particles). The dynamics of a test particle in an electromagnetic field are given by the non-relativistic equation of motion:
\[\frac{d\mathbf{r}}{dt}=\mathbf{v},\quad\frac{d\mathbf{v}}{dt}=\alpha\left( \mathbf{E}+\mathbf{v}\times\mathbf{B}\right), \tag{15}\]
where the electric field \(\mathbf{E}\) is obtained from Ohm's Law and its dimensionless (using a characteristic electric field \(E_{0}=v_{0}B_{0}/c\)) expression is,
\[\mathbf{E}=\frac{\mathbf{J}}{R_{m}}-\mathbf{u}\times\mathbf{B}. \tag{16}\]
While a more general version of this law includes the electron pressure (i.e., \(\nabla P_{e}/\rho\)) and the Hall term (i.e., \(\mathbf{J}\times\mathbf{B}/\rho\)), here we neglect them in order to simplify the analysis and interpretation of the results and to maintain consistency with the compressible MHD equations introduced in the previous
sections. Several works have studied the effect of these terms on test particles (Dmitruk & Matthaeus, 2006; Pugliese & Dmitruk, 2022; Balzarini et al., 2022). As such, density fluctuations will not directly affect particle motion through Eq. (16), but could allow the existence of magnetosonic wave modes for the particles to interact with.
The parameter \(\alpha\) is related to the charge-to-mass ratio and represents the gyrofrequency \(\omega_{g}\) in a magnetic field of intensity \(b_{rms}\) (Pugliese & Dmitruk, 2022). Its inverse value \(1/\alpha\) represents the nominal gyroradius \(r_{g}\) (in units of \(L_{0}\)) for particles with velocity \(v_{0}\) in a magnetic field \(b_{rms}\), and gives a measurement of the range of scales involved in the system (from the outer scale of turbulence to the particle gyroradius). For plasmas with a strong magnetic field amplitude \(B_{0}\), we can define an average gyroperiod \(\tau_{p}=2\pi/\omega_{g}=2\pi/(\alpha B_{0})\). In the case of protons, we can relate \(\alpha\) to the MHD field parameters through,
\[\alpha=\frac{L_{0}}{d_{i}}, \tag{17}\]
where \(d_{i}=m_{p}c/\sqrt{4\pi\rho_{0}e^{2}}\) is the proton inertial length, with \(m_{p}\) and \(e\) the proton mass and charge, respectively. Furthermore, in the present paper, we identify the proton inertial length with the dissipation scale \(d_{i}=l_{d}\), given the solar wind observations supporting \(d_{i}\sim l_{d}\) (e.g., Leamon et al., 1998).
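A minimal sketch of the test-particle integration of Eq. (15) with a second-order Runge-Kutta step is given below; `fields_at` stands for the cubic-spline interpolation of (E, B) from the MHD grid and is assumed to be provided.

```python
import numpy as np

def push_particles(r, v, fields_at, alpha, dt, n_steps):
    """Advance an ensemble of test particles (r, v of shape (n_p, 3))."""
    def accel(r_, v_):
        E, B = fields_at(r_)                       # interpolated fields at r_
        return alpha * (E + np.cross(v_, B))

    for _ in range(n_steps):
        a1 = accel(r, v)
        r_mid, v_mid = r + dt * v, v + dt * a1     # Euler predictor
        a2 = accel(r_mid, v_mid)
        r = r + 0.5 * dt * (v + v_mid)             # trapezoidal corrector
        v = v + 0.5 * dt * (a1 + a2)
    return r, v
```

In practice the time step must resolve the gyroperiod \(\tau_{p}\), which is the natural time unit used below when analyzing the particle statistics.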
### Particle energization and clustering
Recently, Pugliese & Dmitruk (2022) reported a link between high energization of test particles and their preferential concentration (clustering) in space. In the present work, to quantify this preferential concentration, we employ the Voronoi tessellation method and compare volume statistics against a Random Poisson Process (RPP) (see, Monchaux et al., 2010; Obligado et al., 2014; Reartes & Mininni, 2021). In particular, this process provides a cell for each particle, whose volume \(\mathcal{V}_{i}\) can be interpreted as the inverse of the local particle density. Therefore, by studying the statistics of these volumes, we can quantify any preferential concentration that may be present in the system, comparing against the statistics of a uniform distribution in space, i.e. an RPP (Uhlmann, 2020). One of these statistical tools is the standard deviation of the volumes \(\sigma_{\mathcal{V}}\), which should coincide with \(\sigma_{RPP}\) for a uniform distribution and increases as the preferential concentration increases.
In the aforementioned work it was found that particles accumulated in regions where \(\nabla_{\perp}\cdot\mathbf{u}_{\perp}=\partial_{x}u_{x}+\partial_{y}u_{y}<0\). Through Eq. (16) this also implies \((\nabla\times\mathbf{E})_{z}<0\) and thus a clockwise rotating electric field. Therefore, in these regions the electric force and particle gyration are aligned and there is positive energization after a whole gyroperiod. This, in combination with the trapping effect of \(\nabla_{\perp}\cdot\mathbf{u}_{\perp}<0\), makes this mechanism very effective at energizing particles.
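As an illustration of the volume statistics, the sketch below uses `scipy.spatial`; unbounded cells are simply skipped here, whereas in the periodic simulation box they are handled by replicating particles across the boundaries.

```python
import numpy as np
from scipy.spatial import Voronoi, ConvexHull

def voronoi_volumes(positions):
    """Voronoi cell volumes of a set of particle positions (shape (n, 3)).

    The standard deviation of the returned volumes is the statistic compared
    against the value obtained for uniformly distributed particles (RPP)."""
    vor = Voronoi(positions)
    volumes = []
    for region_index in vor.point_region:
        region = vor.regions[region_index]
        if len(region) == 0 or -1 in region:       # unbounded cell: skip
            continue
        volumes.append(ConvexHull(vor.vertices[region]).volume)
    return np.asarray(volumes)
```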
## 3 Numerical set up
### Nonlinear simulations
The nonlinear simulations are performed by numerically solving Eqs. (1)-(4) using a pseudospectral method with periodic boundary conditions in a cube of size \(L_{box}=2\pi\). This scheme ensures exact energy conservation for the continuous time spatially discrete equations (Mininni et al., 2011), where we use a spatial resolution of \(N^{3}=512^{3}\) Fourier modes. Time integration is achieved through a second order Runge-Kutta method. Aliasing is removed using the two-thirds truncation method (Orszag, 1971), such that the maximum wavenumber resolved is \(\kappa\equiv N/3=170\). To ensure the resolution of
the smallest scales (\(\kappa>k_{d}\)), we use kinematic and magnetic Reynolds numbers of \(R_{e}=R_{m}\approx 2400\). Here, \(k_{d}=(\epsilon_{d}/\nu^{3})^{1/4}\) is the Kolmogorov dissipation wavenumber (which defines the dissipation scale \(l_{d}=2\pi/k_{d}\)), with \(\nu=\mu/\rho_{0}\) the kinetic viscosity and \(\epsilon_{d}\) the energy dissipation rate.
In order to reach a steady turbulent state, the system is forced using mechanical and electromotive forces \(\mathbf{f}\) and \(\nabla\times\mathbf{m}\), respectively. These forces are generated with random phases in the Fourier \(k\)-shells \(2\leq|\mathbf{k}|\leq 3\) every correlation time \(\tau_{c}\), which is also a controlled parameter in the simulations. Forces at intermediate times are obtained by linearly interpolating the previous and next steps. We repeat this procedure for three different values of \(\tau_{c}\), yielding the three different stationary states summarized in Table 1. The length and time scales are defined individually for each simulation. For the characteristic length scale \(L_{0}\), we use the energy-containing scale \(L_{0}=2\pi\int(E(k)/k)dk/\int E(k)dk\), where \(E(k)\) is the isotropic energy spectrum. For the velocity scale, we use the Alfven velocity of the magnetic field fluctuations \(v_{0}=b_{rms}/\sqrt{4\pi\rho_{0}}\). For the time scales, we alternatively use the large eddy turnover time \(t_{0}=L_{0}/v_{0}\) and the particle gyroperiod \(\tau_{p}=2\pi/(\alpha B_{0})\), depending on whether we analyze field or particle properties. Finally, as seen in Table 1, we have a ratio of mean magnetic field to magnetic field fluctuation (\(B_{0}/b_{rms}\)) of about \(9:1\), similar to values reported in the solar wind (Hadid et al., 2017; Andres et al., 2022).
### Linear simulations
The linear simulations are performed in the same way as the nonlinear ones, but we cancel all the non-linear terms in the MHD equations, as shown in Eqs. (9)-(11). In the absence of nonlinear terms, there is no energy cascade from the injection scale \(L_{0}\) to smaller scales. Therefore, modes with wavenumber \(\mathbf{k}\) outside the forced shell would quickly vanish in the presence of the dissipation terms in Eqs. (10)-(11). To counter this, we evolve the fields in Eqs. (9)-(11) without forcing and without dissipation (i.e. \(R_{e},R_{m}\rightarrow\infty\)), thus ensuring a constant energy spectrum. Furthermore, we use a fourth order Runge-Kutta scheme for temporal integration, as the second order scheme becomes unstable in the absence of dissipation terms.
The initial conditions for the linear simulations are given by different variations of the initial conditions used in NL1, as summarized in Fig. 1. For the L run, we use these initial conditions unchanged, while for the LR run we perform a phase randomization. We achieve this by transforming
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline Run & \(\tau_{c}/\tau_{p}\) & \(L_{box}/L_{0}\) & \(L_{0}/l_{d}\) & \(B_{0}/b_{rms}\) & \(u_{rms}/v_{0}\) \\ \hline NL1 & \(1.146\times 10^{1}\) & 2.53 & 59.79 & 8.97 & 1.73 \\ NL2 & \(1.146\times 10^{0}\) & 2.43 & 54.69 & 9.16 & 1.71 \\ NL3 & \(2.865\times 10^{-1}\) & 2.83 & 65.86 & 8.49 & 1.49 \\ \hline \end{tabular} Note. – Energy injection scale, dissipation scale, mean magnetic field and characteristic velocity obtained with different forcing correlation times. All magnitudes are similar, allowing us to compare the simulations on equal footing.
\end{table}
Table 1: Global quantities for nonlinear simulations
each Fourier coefficient \(\psi_{\mathbf{k}}\mapsto e^{i\phi_{\mathbf{k}}}\psi_{\mathbf{k}}\) for all the scalar fields \(\rho\), \(u_{j}\) and \(b_{j}\), where \(\phi_{\mathbf{k}}\) are random phases independently chosen for each \(\mathbf{k}\) and \(\psi\). We then enforce the necessary conditions on the resulting fields (i.e. hermiticity, Coulomb gauge). These first two runs are built to have the same energy spectra as NL1, but thoroughly destroy any cross-field correlation and structures present in the initial conditions of NL1 for the LR run (Alexakis et al., 2007).
The final two runs LR80 and LR40 use the same Fourier coefficients as LR, but impose \(\psi_{\mathbf{k}}=0\) for \(|\mathbf{k}|>80\) and \(|\mathbf{k}|>40\), respectively. As this process slightly reduces the total energy, we compensate by uniformly re-scaling all the Fourier coefficients, thus preserving any power-law behaviour.
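A minimal sketch of the phase randomization and spectral truncation applied to one real scalar field on the periodic grid is given below; the grid conventions are assumptions of this example. Strictly, the phases of the self-conjugate planes should be constrained to preserve hermitian symmetry, which `irfftn` enforces in any case by returning a real field.

```python
import numpy as np

def randomize_phases(field, k_max=None, seed=0):
    """Randomize the Fourier phases of a real 3-D field, optionally zeroing |k| > k_max."""
    rng = np.random.default_rng(seed)
    f_k = np.fft.rfftn(field)
    f_k *= np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, size=f_k.shape))
    if k_max is not None:
        nx, ny, nz = field.shape
        kx = np.fft.fftfreq(nx) * nx
        ky = np.fft.fftfreq(ny) * ny
        kz = np.fft.rfftfreq(nz) * nz
        k2 = kx[:, None, None]**2 + ky[None, :, None]**2 + kz[None, None, :]**2
        f_k[k2 > k_max**2] = 0.0                   # spectral truncation (LR80, LR40)
    return np.fft.irfftn(f_k, s=field.shape)
```

A uniform rescaling of the truncated coefficients, as described above, then restores the total energy while preserving the shape of the spectrum.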
### Particle integration
Initially, particles were uniformly distributed in the box, with a Gaussian velocity distribution function with a root mean square (rms) value of \(\langle v_{i}^{2}\rangle^{1/2}\approx 1.2v_{0}\). As shown in Table 1, this value lies between the Alfven velocity \(v_{0}\) and the rms value of the velocity field \(u_{rms}\). Furthermore, we chose \(\alpha=60\) for the particles, which is consistent with the values of \(L_{0}/l_{d}\) for all simulations. The numerical integration of Eq. (15) was done by a Runge-Kutta method of the same order as that used for field integration (see above). The values of the fields at each particle position are obtained by cubic splines in space from the three-dimensional grid of the MHD simulation. The particle trajectories were integrated for \(\sim 300\tau_{p}\).
## 4 Results
### Linear vs nonlinear
In this section, we compare the dynamics of test particles in the nonlinear run NL1 versus the dynamics in the linear counterparts L, LR, LR80 and LR40. We begin with the evolution of the mean particle energy, showing separately the perpendicular and parallel components with respect to the magnetic guide field in Fig. 2(a) and Fig. 2(b), respectively. As is known for protons, energization is much higher in the perpendicular direction (represented here by the \(x\) component) than in the parallel direction (represented here by the \(z\) direction). For both directions, it is clear that the linear case L has significantly smaller energization when compared to the full nonlinear simulation NL1. However, for the phase-randomized case (LR run), the energization increases greatly and surpasses that of the NL1 run. Truncating the spectra at \(|\mathbf{k}|=80\) (LR80 run) has very little effect, but truncating at \(|\mathbf{k}|=40\) (LR40 run) greatly reduces the energization, obtaining values similar to the linear L run. Furthermore, in the perpendicular case shown in Fig. 2(a) all the linear runs seem to display a diffusive or subdiffusive behaviour (in momentum space) \(\langle v_{x}^{2}\rangle\sim t^{\alpha}\) with \(\alpha\lesssim 1\). This fact suggests that the underlying mechanism for energization (in the linear case) is analogous to that of Brownian motion, with delta-correlated energy increments. In this analogy, particles would have a very strong and time-localized interaction with the fields. In the context of QLT, this very strong interaction could be understood as a resonance with some specific wave present in the system. This high energization
Figure 1: Schematic diagram showing how the initial conditions of each linear run are constructed from the initial conditions of the NL1 run.
quickly removes the particle from the resonance condition within a few gyroperiods, thus ensuring that the interaction is localized in time.
On the other hand, energization in the nonlinear case is superdiffusive, which is related to interaction between particles and self-consistent structures present in the plasma, as discussed in Section 2.5. With this in mind, we could relate the drop in energization as NL1\(\rightarrow\)L to the disappearance of structures in the plasma due to the linear evolution. In Eqs. (12) and (13) we see that waves in the CMHD model are indeed dispersive and as such they will destroy any structure initially present in the system. To investigate this, we calculated the radial two-point autocorrelation \(\Gamma\) of \(\nabla_{\perp}\cdot\mathbf{u}_{\perp}\) in the plane perpendicular to the magnetic guide field. In Fig. 3(a) we show the autocorrelation at multiple times (darker colors represent later times). From these curves, we calculated the correlation length \(\ell_{c}\), defined in this case as the displacement at which the autocorrelation drops below \(10\%\) (horizontal dotted black line). In Fig. 3(b) we show the autocorrelation length \(\ell_{c}\) as a function of time, where we note that the NL1 run holds \(\ell_{c}\approx 18l_{d}\) during the whole simulation. On the other hand, the linear run L starts with a similar value to the NL1 run but very quickly drops to \(\ell_{c}\approx 8l_{d}\) and then slowly decreases toward that of the LR run (\(\ell_{c}\approx 2l_{d}\)), which by construction should have negligible correlation (see Sec. 3.2). Therefore, this confirms that the linear evolution eliminates most structures initially present in the field in less than \(20\tau_{p}\), preventing particles from exploiting the energization mechanism described in Sec. 2.5. To further check this claim, in Fig. 4 we show the standard deviations of the Voronoi volumes as a function of time. We see that the NL1 run quickly increases its clustering while all the linear runs remain very close to uniformity. The slight increase in their \(\sigma_{\mathcal{V}}\) could mean that this mechanism still survives in some small scale regions, but given the diffusive energization of Fig. 2 we could consider this effect secondary at best.
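For completeness, a sketch of the FFT-based radial autocorrelation and of the 10% correlation length used above is given below; the periodic wrap-around of the FFT matches the periodic boundary conditions of the runs.

```python
import numpy as np

def radial_autocorrelation(field_2d, n_bins=64):
    """Direction-averaged two-point autocorrelation of a 2-D periodic field
    (e.g. a plane of div_perp u_perp) and the 10% correlation length."""
    f = field_2d - field_2d.mean()
    corr = np.fft.ifft2(np.abs(np.fft.fft2(f))**2).real
    corr /= corr[0, 0]
    nx, ny = f.shape
    dx = np.minimum(np.arange(nx), nx - np.arange(nx))
    dy = np.minimum(np.arange(ny), ny - np.arange(ny))
    r = np.hypot(dx[:, None], dy[None, :])
    bins = np.linspace(0.0, r.max(), n_bins + 1)
    idx = np.clip(np.digitize(r.ravel(), bins) - 1, 0, n_bins - 1)
    counts = np.maximum(np.bincount(idx, minlength=n_bins), 1)
    gamma = np.bincount(idx, weights=corr.ravel(), minlength=n_bins) / counts
    r_centers = 0.5 * (bins[:-1] + bins[1:])
    below = np.where(gamma < 0.1)[0]
    ell_c = r_centers[below[0]] if below.size else r_centers[-1]
    return r_centers, gamma, ell_c
```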
We now turn to the comparison of the linear runs among themselves. Having discarded structure interaction in the linear cases, we are left only with resonant wave-particle interaction. The first observation is that the phase randomization L\(\rightarrow\)LR seems to vastly increase particle
Figure 2: Time evolution of the mean kinetic energy for the perpendicular (left) and parallel (right) components, for the non-linear and linear simulations. For the time scale, we used the particle gyroperiod \(\tau_{p}\).
energization. This fact mainly shows the importance of the phase mixing hypothesis in QLT. The L run clearly does not fulfill this hypothesis, as its initial conditions are derived from a fully nonlinear turbulent simulation and as such display high phase correlation. Alternatively, we could visualize the wave-particle interaction as the conjunction of a resonance in frequency and an initial alignment between the field and the particle velocity. This alignment is more easily achieved under the phase
Figure 4: Standard deviation of the Voronoi volumes normalized by the standard deviation for a uniform distribution, as a function of time for non-linear and linear simulations.
Figure 3: (a) Radial two-point autocorrelation \(\Gamma\) of \(\nabla_{\perp}\cdot\mathbf{u}_{\perp}\) in the plane perpendicular to the magnetic guide field at multiple times (later times are represented by darker colors) as a function of the displacement length \(\ell\) normalized by the dissipation scale \(\ell_{d}\), for non-linear and linear simulations. (b) Correlation length \(\ell_{c}\), normalized by \(\ell_{d}\), as a function of time; the correlation length \(\ell_{c}\) is obtained as the displacement length \(\ell\) at which the autocorrelation \(\Gamma\) drops below 10% of its maximum value (indicated by the dotted line in part (a) of this figure).
mixing hypothesis, whereas in a turbulent state the phases do not actually fill every possible value at all frequencies.
We can visualize these different energization mechanisms by computing the two-dimensional PDF of particle velocities at the end of each simulation. The clear geometrical difference between the PDFs shown in Fig. 5 suggests that each mechanism is fundamentally different, since the data are normalized to unit variance. Close to the origin, all distributions are elliptic (although with different foci). As we move away from the origin, each distribution changes significantly. In particular, for the NL1 run, the contours tend to rhombi reminiscent of a constant 1-norm (i.e., \(|v_{x}|+|v_{z}|=\)const.), suggesting that simultaneously high perpendicular and parallel energization is unlikely, as expected from this structure energization (high \(v_{z}\) particles are difficult to trap). For the L run, the shapes are clearly circular, corresponding to a constant 2-norm (\(v_{x}^{2}+v_{z}^{2}=\)const.) or kinetic energy. This is consistent with two independent Gaussian variables, as expected from a diffusive process in momentum space. Finally, the LR run is less clear, with shapes similar to rectangles reminiscent of a constant \(\infty\)-norm (\(\max\{|v_{x}|,|v_{z}|\}=\)const.).
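A joint velocity PDF of this kind can be assembled along the following lines; the arrays `vx` and `vz` are placeholders for the final particle velocities, and only the normalization to unit variance and the contouring reflect the description above.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
vx = rng.standard_normal(100_000)   # placeholder for perpendicular velocities
vz = rng.standard_normal(100_000)   # placeholder for parallel velocities

# normalize each component to unit variance before histogramming
vx_n, vz_n = vx / vx.std(), vz / vz.std()
H, xedges, zedges = np.histogram2d(vx_n, vz_n, bins=100, density=True)
xc = 0.5 * (xedges[:-1] + xedges[1:])
zc = 0.5 * (zedges[:-1] + zedges[1:])

# logarithmically spaced contour levels highlight the geometry of the tails
plt.contour(xc, zc, H.T, levels=np.logspace(-4, 0, 9))
plt.xlabel(r"$v_x/\sigma_{v_x}$")
plt.ylabel(r"$v_z/\sigma_{v_z}$")
plt.gca().set_aspect("equal")
plt.show()
```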
By comparing the LR run with its truncated counterparts LR80 and LR40, we first see that the first truncation (i.e., LR\(\rightarrow\)LR80) has very little effect on particle energy, implying that particles resonate mainly with waves of \(|\mathbf{k}|<80\). Furthermore, the second truncation LR80\(\rightarrow\)LR40 reduces the energization to a level even lower than that of the L run, showing that particles resonate mainly with waves of \(|\mathbf{k}|>40\). Two-dimensional velocity distributions for the LR80 and LR40 runs (not shown here) are very similar to those of LR and L in Fig. 5, respectively. While the similarity between LR80 and LR is to be expected, that between LR40 and L is not obvious and seems to show that both runs share the same energization mechanism. This last fact implies that changes in the phase distribution of the waves could be as important as changes in the energy spectrum.
In order to distinguish between the presence of waves and structures in a plasma, and to investigate which of them dominate the dynamics, we used spatio-temporal spectra (e.g., Clark di Leoni et al., 2015; Andres et al., 2017). We therefore resolved the waves in time and space by storing the magnetic field at a very high cadence; in particular, we used \(dt=2.5\times 10^{-4}\) as the temporal
Figure 5: Two-dimensional PDF of particle velocities normalized by its standard deviation at the end of the simulation for the NL1, L and LR runs, respectively. The contours display different geometry, implying different underlying mechanism for energization.
sampling rate. It is worth mentioning that we assumed that the energy concentrated around the linear dispersion relation can be explained by linear and weak turbulence theories (Chandran, 2005, 2008), while any spread away from the dispersion relation is a sign of strong turbulence that requires fully nonlinear theories to be understood. Figure 6 shows the spatio-temporal spectrum of the parallel magnetic field \(E_{zz}(k_{x}=0,k_{y},k_{z}=0,\omega)\) for the NL1 and L runs (the rest of the linear runs have spectra similar to the L run), i.e., we separate the linear from the nonlinear case. The dispersion relations for the fast and slow magnetosonic waves given by Eq. (13) are shown as green solid and green dashed-dotted lines, respectively. We also include the particle gyrofrequency \(\omega_{g}\), which lies between wavenumbers \(40<|\mathbf{k}|<80\), and the characteristic sweeping frequency given by \(\omega_{sw}\sim U_{rms}\sqrt{k_{\perp}^{2}+k_{\parallel}^{2}}\) (see Lugones et al., 2016). As expected, for the linear run L the magnetic energy is located around the magnetosonic branches, while for the nonlinear run NL1 the energy accumulates mainly in the slow waves (with \(\omega/B_{0}=0\), or non-propagating modes) for all wavenumbers. We also noted a small portion of energy spread along the fast branch for low wavenumbers. However, in the nonlinear case, most of the energy is spread across the spectrum due to its turbulent dynamics.
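The spatio-temporal spectrum itself can be assembled as in the following sketch: the spatially Fourier-transformed field stored at every output time is Fourier transformed in time, mode by mode, to give \(E(\mathbf{k},\omega)\). The array `bz_t`, the cadence and the Hann window are illustrative assumptions, not the analysis pipeline actually used.

```python
import numpy as np

def spatio_temporal_spectrum(field_kt, dt):
    """field_kt: complex time series of one spatial Fourier mode of the field.
    Returns the angular frequencies and the power |E(k, omega)|^2."""
    nt = field_kt.size
    window = np.hanning(nt)                       # reduce leakage from the finite record
    f_hat = np.fft.fft(field_kt * window)
    omega = 2.0 * np.pi * np.fft.fftfreq(nt, d=dt)
    return np.fft.fftshift(omega), np.fft.fftshift(np.abs(f_hat) ** 2)

# toy usage: bz_t stands for hat{b}_z(kx=0, ky, kz=0) stored at every output time
nt, nky, dt = 4096, 128, 2.5e-4
rng = np.random.default_rng(2)
bz_t = rng.standard_normal((nt, nky)) + 1j * rng.standard_normal((nt, nky))

spectrum = np.empty((nt, nky))
for iky in range(nky):
    omega, spectrum[:, iky] = spatio_temporal_spectrum(bz_t[:, iky], dt)
# spectrum[:, iky] approximates E_zz(kx=0, ky=iky, kz=0, omega), as plotted in Fig. 6
```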
Furthermore, the absence of sweeping in the linear case also helps to explain the low energization in the L run. As shown in Fig. 3, the correlation quickly drops during the linear evolution but
Figure 6: Spatio-temporal spectrum \(E_{zz}(k_{x}=0,k_{y},k_{z}=0)\) for the magnetic field fluctuations parallel to \(B_{0}\), for the runs NL1 (left) and L (right). The spectrum is shown as a function of \(\omega\) and \(k_{y}\) for fixed \(k_{x}=k_{\parallel}=0\). The green solid and the green dashed-dotted lines correspond to the linear dispersion relation of fast magnetosonic waves (\(\omega_{F}\)) and of slow magnetosonic waves (\(\omega_{S}\)), respectively. We include the particle gyrofrequency \(\omega_{g}\) (dashed line) and the sweeping frequency \(\omega_{sw}\) (blue line).
not quite enough to reach the LR case, which implies that some structures may survive longer. The absence of sweeping produced by the linear evolution means these surviving structures are not advected by the flow. However, test particles are advected, thus making trapping more difficult and preventing any surviving structure from effectively energizing particles.
### The effect of \(\tau_{c}\) on particle energization
For this section, we repeated the analysis of the previous section with the simulations NL2 and NL3, which differ from the NL1 run in the correlation time of the forcing \(\tau_{c}\) (see Table 1). Energization is qualitatively similar to that of NL1 (as shown in Fig. 2), but not quantitatively. The same occurs for the correlation length and the volume deviation \(\sigma_{\mathcal{V}}\), as summarized in Table 2 (energization and \(\sigma_{\mathcal{V}}\) are calculated at the end of the simulation). We see that both the energization and the correlation length decrease with \(\tau_{c}\), while \(\sigma_{\mathcal{V}}\) displays no clear tendency and remains mostly similar. First, the reduction of \(\ell_{c}/l_{d}\) could be expected, as a faster forcing (lower correlation time) prevents structures from being stable, either for lack of size or intensity. Secondly, we could conclude that particles tend to accumulate similarly in all cases, but their ability to exploit the energization mechanism diminishes for lower \(\tau_{c}\).
Therefore, we need another way to quantify particle interaction with structures, one that includes some dynamical information. For this purpose, we performed a Voronoi tessellation at each time step and determined whether each particle was clustered. We define a particle as clustered when its cell volume \(\mathcal{V}\) is lower than some threshold value \(\mathcal{V}_{th}\), which is obtained by comparing the cell-volume PDF with that of a RPP (see references in Section 2.5 for more details). As seen in Fig. 4, clustering takes some time to settle, and as such the label clustered may not mean much initially. Towards the end of the simulation, clustered is almost equivalent to trapped inside a structure. We can then determine when and where particles accumulate and therefore calculate the amount of consecutive time they spend clustered. We define a streak as an interval of (discrete) time during which the particle is clustered, and its duration as \(\mathcal{S}\). In particular, we calculated the mean streak time \(\langle\mathcal{S}\rangle\) for each simulation by averaging over all streaks from all particles, and display it in the last column of Table 2. Particles are clustered for around 2 full gyroperiods in all cases, but the mean time decreases with \(\tau_{c}\). This shows that, while the clustering is very similar instantaneously for all runs, there is more exchange (between clustered and non-clustered particles) in low-\(\tau_{c}\) simulations, suggesting that clusters become more feeble and unstable with lower correlation time. For lower values of \(\tau_{c}\), the rapidly changing forcing may be experienced by particles as kicks that remove them from the structures trapping them.
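A possible implementation of this streak bookkeeping is sketched below: given a boolean array flagging, for each particle and each stored time, whether its Voronoi cell volume is below \(\mathcal{V}_{th}\), it extracts the durations of all consecutive clustered intervals and their mean \(\langle\mathcal{S}\rangle\). The clustering flags and the output cadence are placeholders.

```python
import numpy as np

def streak_durations(clustered_1p):
    """Durations (in time steps) of consecutive clustered intervals for one particle."""
    padded = np.concatenate(([0], clustered_1p.astype(int), [0]))
    edges = np.diff(padded)
    starts = np.where(edges == 1)[0]
    ends = np.where(edges == -1)[0]
    return ends - starts

def mean_streak_time(clustered, dt_out, tau_p):
    """Mean streak duration <S>/tau_p over all streaks of all particles.
    clustered: boolean array of shape (n_particles, n_times)."""
    all_streaks = np.concatenate([streak_durations(c) for c in clustered])
    return all_streaks.mean() * dt_out / tau_p if all_streaks.size else 0.0

# toy usage with random clustering flags
rng = np.random.default_rng(3)
clustered = rng.random((1000, 500)) < 0.3
print(mean_streak_time(clustered, dt_out=0.1, tau_p=1.0))
```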
To take this streak analysis further, we related it directly to particle energization. To do so, we separated our data into intervals of duration \(n\tau_{p}\) (\(n=1,2,3\)) and selected the particles that are clustered during each interval (i.e., whose streaks contain the interval under study). For these particles, we calculated the mean perpendicular energization during these intervals and display it in Fig. 7. We can see that after \(\sim 100\tau_{p}\) (long enough for clustering to settle), the energization becomes exponential in time.
In order to understand this, we propose a very simple model for the interaction with these structures. If we write the particle velocity in terms of its parallel and perpendicular components \(\mathbf{v}=\mathbf{v}_{\perp}+\mathbf{v}_{\parallel}\), we can define the perpendicular energy as \(\varepsilon_{\perp}=|\mathbf{v}_{\perp}|^{2}/2\). Using Eq. (15), we could derive an evolution
equation for the perpendicular energy \(\varepsilon_{\perp}\),
\[\frac{\mathrm{d}\varepsilon_{\perp}}{\mathrm{d}t}=\alpha\left[\mathbf{E}_{\perp} \cdot\mathbf{v}_{\perp}-\left(\mathbf{v}_{\parallel}\times\mathbf{v}_{\perp} \right)\cdot\mathbf{b}_{\perp}\right]\equiv\mathcal{P}_{\perp}-\mathcal{P}_{ \times}. \tag{18}\]
The first term \(\mathcal{P}_{\perp}=\alpha\mathbf{E}_{\perp}\cdot\mathbf{v}_{\perp}\) corresponds to net energization while the second relates to exchange with the parallel energy, such as pitch-angle scattering. Averaging over one gyroperiod,
\[\left\langle\mathcal{P}_{\perp}\right\rangle_{\tau_{p}}=\frac{1}{\tau_{p}}\int _{0}^{\tau_{p}}\alpha\mathbf{E}_{\perp}\cdot\mathbf{v}_{\perp}\mathrm{d}t\approx \frac{\alpha}{\tau_{p}}\oint_{\mathcal{C}}\mathbf{E}_{\perp}\cdot\mathrm{d} \mathbf{l}=-\frac{\alpha}{\tau_{p}}\iint_{\mathcal{S}(\mathcal{C})}\mathbf{\nabla \times}\mathbf{E}_{\perp}\cdot\mathrm{d}\mathbf{S} \tag{19}\]
where we have approximated the trajectory on the perpendicular plane as circular and then used Stokes' theorem, with a minus sign accounting for the clockwise circulation. For a strong guide field, we can approximate Eq. (16) as \(\mathbf{E}_{\perp}\approx-\mathbf{u}_{\perp}\times\mathbf{B}_{0}\), and therefore the parallel component of its curl is \((\mathbf{\nabla\times}\mathbf{E}_{\perp})_{\parallel}\approx B_{0}\mathbf{\nabla}_{ \perp}\cdot\mathbf{u}_{\perp}\). Expanding the integral at the lowest order in the gyroradius \(R_{g}\), we obtain
\[\left\langle\mathcal{P}_{\perp}\right\rangle_{\tau_{p}}\approx-\frac{\alpha^{2 }B_{0}^{2}}{2\pi}\iint_{\mathcal{S}(\mathcal{C})}\nabla_{\perp}\cdot\mathbf{u} _{\perp}\mathrm{d}S\approx-\frac{\alpha^{2}B_{0}^{2}}{2\pi}\pi R_{g}^{2}\mathbf{ \nabla}_{\perp}\cdot\mathbf{u}_{\perp} \tag{20}\]
Using that \(R_{g}=|\mathbf{v}_{\perp}|/\alpha B_{0}\) and recalling the definition of \(\varepsilon_{\perp}\), we can substitute in Eq. (18) to obtain,
\[\frac{\mathrm{d}\varepsilon_{\perp}}{\mathrm{d}t}\approx\lambda\varepsilon_{ \perp},\quad\lambda=-\mathbf{\nabla}_{\perp}\cdot\mathbf{u}_{\perp}, \tag{21}\]
which for approximately constant \(\mathbf{\nabla}_{\perp}\cdot\mathbf{u}_{\perp}<0\) predicts exponential increase for \(\varepsilon_{\perp}\).
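Integrating Eq. (21) over an interval \([t_{0},t]\) during which the particle remains inside the same compressive region (so that \(\lambda\) is approximately constant) gives the exponential law

\[\varepsilon_{\perp}(t)\approx\varepsilon_{\perp}(t_{0})\,e^{\lambda(t-t_{0})}.\]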
The slopes in Fig. 7 represent \(\lambda\) and are very similar, suggesting that clustered particles are energized at the same rate. The difference in energization observed in Table 2 must therefore be related to the time each particle spends clustered, as shown by \(\left\langle\mathcal{S}\right\rangle\). Particles can leave structures in essentially three ways: (a) escaping vertically due to a high \(v_{z}\), (b) reaching the maximum allowed gyroradius, or (c) being pushed out by some fluctuation. We disregard option (a) as particles in all simulations have very similar parallel energization (not shown). Option (b) implies that the gyroradius \(R_{g}=v_{\perp}/\alpha B_{0}\) is comparable to the size of the structures, which could be taken as \(\ell_{c}\) and would be reasonable
Figure 7: Mean of the structure function of the perpendicular particle energization for intervals with time duration \(n\tau_{p}\) (where \(n=1,2,3\)). We selected particles that are clustered during this time.
considering how it depends on \(\tau_{c}\) (see Table 2). We can calculate the required kinetic energy as \(v_{\perp}^{2}\sim(\alpha B_{0}\ell_{c})^{2}\sim 1600v_{0}^{2}\), which is achieved by practically no particle (less than 1 particle in \(10^{4}\)) and as such cannot be the dominant cause. This leaves option (c) as the last and main possibility, suggesting that structures are less robust against fluctuations and thus weaker. In the wave/turbulence dichotomy, we could attribute this weakness to the prevalence of waves over structures in the system, to which we will dedicate the next section.
### The effect of \(\tau_{c}\) on spatio-temporal spectra
For the study of the relevance of waves in the system for different \(\tau_{c}\), we quantitatively analyze the spatio-temporal magnetic spectra. Figure 8 shows the spatio-temporal spectrum of the perpendicular magnetic field fluctuation component \(E_{xx}(k_{x}=0,k_{y}=15,k_{z},\omega)\) for the NL1, NL2 and NL3 runs. For ease of comparison, we include the dispersion relations for the Alfven, fast and slow magnetosonic waves as blue dashed, green dashed-dotted and orange solid lines, respectively. We also added the particle gyrofrequency and the sweeping frequency in red dashed and solid blue lines, respectively. We observed that the energy is mainly located around the slow branch and, to a lesser extent, around Alfven waves. Moreover, as \(\tau_{c}\) decreases (i.e., as we move from NL1 to NL3), the energy around wave modes increases for lower values of \(k_{z}\). In fact, it is worth noting that as we increase the correlation time the energy tends to spread slightly towards the sweeping frequency. This result is indeed compatible with the behaviour of \(\ell_{c}/l_{d}\), as the sweeping energy is mostly related to the nonlinear structures (see Table 2).
In order to analyze the magnetosonic modes, we studied the spatio-temporal spectrum of the parallel magnetic field fluctuations \(E_{zz}(k_{x}=0,k_{y},k_{z}=15)\). Figure 9 shows the same trend that we observed in Fig. 8. In fact, the energy around the fast branch decreases noticeably as we increase \(\tau_{c}\). It is worth mentioning that the slow branch is completely immersed in the sweeping region; therefore, we cannot draw a conclusion about the increase (or decrease) of the magnetic energy around the slow mode. However, in the case of NL1, the energy is more concentrated around \(\omega=0\) than for lower values of the correlation time, and the sweeping frequency correctly delimits this energy.
Table 2: Particle related quantities in nonlinear simulations.

| Run | \(\tau_{c}/\tau_{p}\) | \(\ell_{c}/l_{d}\) | \(\langle\Delta v_{x}^{2}\rangle/v_{0}^{2}\) | \(\sigma_{\mathcal{V}}/\sigma_{RPP}\) | \(\langle\mathcal{S}\rangle/\tau_{p}\) |
| --- | --- | --- | --- | --- | --- |
| NL1 | \(1.146\times 10^{1}\) | 18.0 | 24.3 | 1.87 | 2.23 |
| NL2 | \(1.146\times 10^{0}\) | 15.2 | 20.3 | 1.98 | 2.13 |
| NL3 | \(2.865\times 10^{-1}\) | 14.4 | 15.6 | 1.70 | 1.85 |

Note. – Forcing correlation time, field correlation length, mean value of the perpendicular energization, volume deviation and clustered time of the particles for the nonlinear simulations. While correlation length, energization and clustered time decrease with \(\tau_{c}\), the volume deviation remains mostly constant.
Figure 8: Spatio-temporal spectrum \(E_{xx}(k_{x}=0,k_{y}=15,k_{z})\) for the magnetic field fluctuations perpendicular to \(B_{0}\). The spectrum is shown as a function of \(\omega\) and \(k_{\parallel}\) (\(k_{z}\)) for fixed \(k_{x}=0\) and \(k_{y}=15\). The blue dashed, green dashed-dotted and orange solid lines correspond to the linear dispersion relation of Alfvén waves (\(\omega_{A}\)), of fast magnetosonic waves (\(\omega_{F}\)) and of slow magnetosonic waves (\(\omega_{S}\)), respectively. We include the particle gyrofrequency \(\omega_{g}\) and the sweeping frequency in red dashed and blue solid lines, respectively.
Figure 9: Spatio-temporal spectrum \(E_{zz}(k_{x}=0,k_{y},k_{z}=15)\) for the magnetic field fluctuations parallel to \(B_{0}\), for the runs NL1, NL2 and NL3. The spectrum is shown as a function of \(\omega\) and \(k_{y}\) for fixed \(k_{x}=0\) and \(k_{\parallel}=15\). The green solid and the green dashed-dotted lines correspond to the linear dispersion relation of fast magnetosonic waves (\(\omega_{F}\)) and of slow magnetosonic waves (\(\omega_{S}\)), respectively. We include the particle gyrofrequency \(\omega_{g}\) and the sweeping frequency in red dashed and blue solid lines, respectively.
For a more precise study, we used an integration method to quantify the amount of energy near the different wave modes in each spatio-temporal spectrum. Clark di Leoni et al. (2015) calculated the ratio of the energy accumulated near these wave modes to the total energy in the same wavenumber as,
\[F(k_{z})=\frac{E_{xx}(k_{x}=0,k_{y}=0,k_{z},\omega=\omega_{A,F,S})}{\Sigma_{j}E_{ xx}(k_{x}=0,k_{y}=0,k_{z},\omega_{j})}, \tag{22}\]
with \(\omega_{A,F,S}\) the frequencies that satisfy a given dispersion relation (the \(E_{xx}(k_{z})\) component is used as an illustrative example only). Figure 10 shows the energy around (a) the Alfven and (b) the slow magnetosonic waves for runs NL1, NL2 and NL3. In particular, we observed that the amount of energy near the slow branch remains similar as we increase the correlation time. However, for small scales (\(k_{z}\sim 100\)), the energy in run NL1 tends to be smaller than in the rest of the simulations, as a result of the energy moving towards the sweeping frequency. In the case of the Alfven modes, most of the energy is concentrated around low wavenumbers and tends to decrease for higher \(k_{z}\). It is worth noting that we are not including the fast branch here since most of the energy is not located around this wave mode. Figure 11 shows (a) the energy near the fast magnetosonic waves for runs NL1, NL2 and NL3 and (b) the difference between NL3 and the first two nonlinear simulations. We observe that the energy located around the fast branch reaches its maximum value for the smallest wavenumbers. Moreover, we obtained a significant growth of this energy as the correlation time decreases. Finally, the NL1 run differs from the NL3 run by 11%.
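To make the quantification in Eq. (22) concrete, the following sketch computes, for every wavenumber, the fraction of spectral power lying within a band around a prescribed dispersion relation; the spectrum array, the frequency grid and the bandwidth `half_width` are placeholder choices for illustration.

```python
import numpy as np

def energy_fraction_near_branch(spectrum, omega, omega_branch, half_width):
    """spectrum: array of shape (n_omega, n_k); omega: frequency grid;
    omega_branch: dispersion relation omega(k) evaluated on the k grid.
    Returns F(k): power with |omega - omega_branch(k)| <= half_width over total power."""
    n_k = spectrum.shape[1]
    F = np.empty(n_k)
    for ik in range(n_k):
        mask = np.abs(omega - omega_branch[ik]) <= half_width
        F[ik] = spectrum[mask, ik].sum() / spectrum[:, ik].sum()
    return F

# toy usage: Alfven branch omega_A = k_z * v_A for a placeholder spectrum
n_omega, n_k, v_A = 512, 64, 1.0
omega = np.linspace(0.0, 50.0, n_omega)
kz = np.arange(n_k)
rng = np.random.default_rng(4)
spectrum = rng.random((n_omega, n_k))
F_alfven = energy_fraction_near_branch(spectrum, omega, v_A * kz, half_width=1.0)
```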
## 5 Conclusions
In the present work, we investigated the interplay of linear waves and coherent structures in compressible MHD turbulence and how this affects test particle (proton) energization. We compared a nonlinear evolution against a linear evolution of the same initial turbulent state. This initial state is obtained from a fully nonlinear evolution, which is more realistic than the usual spectra models with random phases. We found that under this initial state, energization in the linear case is much lower than the nonlinear case. However, this situation inverts after the phases of the initial state are
Figure 10: Quantification of the amount of energy around (a) Alfvén and (b) slow magnetosonic branches presented in spatio-temporal spectrum \(E_{xx}(k_{x}=0,k_{y}=15,k_{z})\).
randomized, showing the relevance of the phase distribution in particle energization. This last numerical result allows us to reinterpret the role of phases in particle energization, which are usually treated as secondary with respect to the spectra. Additionally, this result shows that a linear evolution, no matter how realistic its spectrum is, cannot faithfully reproduce the energization observed in a nonlinear evolution.
We showed that wave-particle interactions are mainly in modes \(40<|\mathbf{k}|<80\), which for Alfven waves is consistent with \(\omega_{A}\approx\omega_{g}\). This could be related to second order Fermi energization \(\omega\pm k_{z}v_{z}=\omega_{g}\) in the reasonable limit \(\omega_{g}\gg k_{z}v_{z}\). First order Fermi energization requires \(|v_{z}|\approx v_{A}\approx 9v_{0}\), which we observed from the parallel energization in Fig. 2 is not likely. A similar reasoning follows for the fast magnetosonic waves, whose frequency \(\omega\) and resonant velocity \(\omega/k_{z}\) are even higher. On the other hand, first order Fermi energization is more likely to happen in the slow magnetosonic waves and is the only possible mechanism left for the LR40 run.
The fact that the L and LR40 runs have similar energization mechanisms and rates is striking, as it further reinforces the role of phases in particle energization. If slow magnetosonic waves are responsible for the energization in the LR40 run, they must also be responsible for that of the L run. However, this is unlikely, as their phase and energy distributions are different in each run, because phase randomization decorrelates the phases and tends to distribute energy evenly between the different branches. According to our numerical results, it would seem that the phase distribution is very important for high-wavenumber waves and becomes practically irrelevant for low-wavenumber waves. This result suggests that the simple picture of particles resonating with one wave at a time may be misleading, and perhaps resonance broadening is enhanced for such complex spectra. The PDFs of Fig. 5 seem to imply that the energization of L/LR40 and LR/LR80 are fundamentally different.
Furthermore, we analysed the effect of the correlation time of the external forcings \(\tau_{c}\) on the spatio-temporal spectra and the particle energization. Here, we further discussed the trapping and energization mechanism of the structures, showing that clustered particles energize exponentially.
Figure 11: (a) Quantification of the amount of energy around the fast magnetosonic branch in the spatio-temporal spectrum \(E_{zz}(k_{x}=0,k_{y},k_{z}=15)\). (b) Difference between the NL3 run and the rest of the nonlinear simulations.
Evolutions with higher \(\tau_{c}\) tend to trap particles for longer periods of time, thus allowing them to better exploit this mechanism. Based on spatio-temporal spectra, we showed that higher values of \(\tau_{c}\) reduce the energy in the fast magnetosonic and Alfven branches, while also moving energy closer to the sweeping region. This shows that higher frequency forcings (low \(\tau_{c}\)) induce relatively more linear waves in the system, taking away energy from nonlinear structures. As a result, trapping is less effective because the fluctuations find it easier to remove particles from coherent structures.
Therefore, we could argue that particle energization decreases as the fraction of linear energy increases. This further confirms that energization by structures is dominant, as the increase in wave energy is not enough to compensate for the loss in particle energization due to weaker structures. We also showed strong evidence of the importance of trapping in the particle energization process and, in particular, of the relationship between this effect and the persistence of structures with the correlation time of the forcing. An interesting follow-up could be the study of particles with different charge-to-mass ratios (see previous work in Pugliese & Dmitruk (2022)). Finally, we believe that the present work can shed light on the understanding of the complex process of particle energization in turbulent scenarios (Marino & Sorriso-Valvo, 2023), like those found in interplanetary space and more general astrophysical contexts (Bruno & Carbone, 2013).
## 6 Acknowledgment
The authors acknowledge financial support from CNRS/MINCyT ECOS SUD 2022 No. A22U02. N.A. acknowledges financial support from the following grants: UBACyT 20020190200035BA. P.D. acknowledges financial support from PIP Grant No. 11220150100324 and 11220200101752 and PICT Grant No. 2018-4298.
|
2306.08719 | Off-policy Evaluation in Doubly Inhomogeneous Environments | This work aims to study off-policy evaluation (OPE) under scenarios where two
key reinforcement learning (RL) assumptions -- temporal stationarity and
individual homogeneity are both violated. To handle the ``double
inhomogeneities", we propose a class of latent factor models for the reward and
observation transition functions, under which we develop a general OPE
framework that consists of both model-based and model-free approaches. To our
knowledge, this is the first paper that develops statistically sound OPE
methods in offline RL with double inhomogeneities. It contributes to a deeper
understanding of OPE in environments, where standard RL assumptions are not
met, and provides several practical approaches in these settings. We establish
the theoretical properties of the proposed value estimators and empirically
show that our approach outperforms competing methods that ignore either
temporal nonstationarity or individual heterogeneity. Finally, we illustrate
our method on a data set from the Medical Information Mart for Intensive Care. | Zeyu Bian, Chengchun Shi, Zhengling Qi, Lan Wang | 2023-06-14T19:48:30Z | http://arxiv.org/abs/2306.08719v4 | # Off-policy Evaluation in Doubly Inhomogeneous Environments
###### Abstract
This work aims to study off-policy evaluation (OPE) under scenarios where two key reinforcement learning (RL) assumptions - temporal stationarity and individual homogeneity are both violated. To handle the "double inhomogeneities", we propose a class of latent factor models for the reward and observation transition functions, under which we develop a general OPE framework that consists of both model-based and model-free approaches. To our knowledge, this is the first paper that develops statistically sound OPE methods in offline RL with double inhomogeneities. It contributes to a deeper understanding of OPE in environments, where standard RL assumptions are not met, and provides several practical approaches in these settings. We establish the theoretical properties of the proposed value estimators and empirically show that our approach outperforms competing methods that ignore either temporal nonstationarity or individual heterogeneity. Finally, we illustrate our method on a data set from the Medical Information Mart for Intensive Care.
_Keywords:_ Double Inhomogeneities; Off-policy Evaluation; Reinforcement Learning; Two-way Fixed Effects Model.
## 1 Introduction
Reinforcement learning (RL, Sutton and Barto, 2018) aims to optimize an agent's long-term reward by learning an optimal policy that determines the best action to take under every circumstance. RL is closely related to the dynamic treatment regimens (DTR) or adaptive treatment strategies in statistics research for precision medicine (Murphy, 2003; Robins, 2004; Qian and Murphy, 2011; Zhang et al., 2012; Chakraborty and Moodie, 2013; Kosorok and Moodie, 2015; Zhu et al., 2017; Tsiatis et al., 2019; Kosorok and Laber, 2019; Qi et al., 2020; Zhou et al., 2022), which seeks to obtain the optimal treatment policy in finite horizon settings with a few treatment stages that maximizes patient's expected outcome. Nevertheless, statistical methods for DTR mentioned above normally cannot handle large or infinite horizon settings. They require the number of trajectories to tend to infinity to achieve a consistent estimation, unlike RL, which can work even with finite sample size under certain conditions. In addition to precision medicine, RL has been applied to various fields, such as games (Silver et al., 2016), ridesharing (Xu et al., 2018), mobile health (Liao et al., 2021) and robotics (Levine et al., 2020).
In this article, we focus on the topic of off-policy evaluation (OPE) in RL, whose objective is to evaluate the value function of a given target policy using data collected from a potentially different policy, known as the behavior policy. OPE is important in applications in which directly implementing the target policy involves potential risks and high costs. For instance, in healthcare, it would be expensive (in terms of both time and budget) to conduct a randomized experiment to recruit a large number of individuals and follow them up for the duration of the entire experiment. Meanwhile, it might be unethical to directly apply a new treatment policy to some individuals without offline validation. It is therefore important to study and develop RL methods only using the observed and historical data (offline RL), and OPE appears to be particularly vital in offline RL. Generally speaking, the existing OPE methods can be divided into four categories: model-based methods (Yin and Wang,
2020), importance sampling-based methods (Liu et al., 2018; Wang et al., 2021), value function-based methods (Luckett et al., 2020; Hao et al., 2021; Liao et al., 2021; Shi et al., 2022), and doubly robust estimation methods (Jiang and Li, 2016; Thomas and Brunskill, 2016; Bibaut et al., 2021; Kallus and Uehara, 2022; Liao et al., 2022), which typically are a fusion of the former two approaches. See Uehara et al. (2022) and the references therein for an overview.
_Motivation_. Most methods in the RL literature would require the following two critical assumptions in order to obtain a valid estimator: temporal stationarity and individual homogeneity. The temporal stationarity assumption requires that the system dynamics for each subject do not depend on time, whereas individual homogeneity requires the system dynamics at each time to be identical across all subjects. Nonetheless, both conditions are likely to be violated in many RL applications, e.g., mobile health and infectious disease control; see Hu et al. (2022) for a thorough discussion and examples. This work draws partial motivation from the longitudinal data of septic patients obtained from the Medical Information Mart for Intensive Care (MIMIC-III, Johnson et al., 2016), a database that contains information on critical care patients. Sepsis is a severe and potentially fatal condition that occurs when the human body's response to an infection injures its own tissues and organs (Singer et al., 2016). It can progress rapidly and cause multiple organ failures, resulting in a decline in a patient's health and an increased risk of death. Prompt treatment of sepsis is thus essential for improving patient outcomes and reducing mortality rates. However, the heterogeneity in patients' responses to sepsis treatment (Evans et al., 2021), as well as a potentially non-stationary environment (the data include patients' medical information over 10 years), make it extremely challenging to effectively manage the illness using existing statistical methods. Our analysis provides insights into the impact of different treatment options on patient outcomes, which helps develop more effective and personalized approaches to sepsis care.
In the statistical literature, Li et al. (2022) developed a hypothesis testing procedure to
assess the stationarity assumption in RL, based on which a policy learning procedure is proposed to handle possibly nonstationary environments. Chen et al. (2022a,b) developed a transferred Q-learning algorithm and an auto-clustered policy iteration algorithm to handle heterogeneous data. However, these methods require either temporal stationarity or individual homogeneity, and would fail in doubly inhomogeneous environments when both assumptions are violated. Hu et al. (2022) proposed an algorithm to adaptively split the data into rectangles in which the system dynamics are the same over time and across individuals. They studied policy learning instead of OPE. In addition, they imposed a latent group structure over time and population. This structural assumption can be violated when the dynamics vary smoothly over both population and time.
_Challenges_. OPE is substantially more challenging under the doubly inhomogeneous environments. First, the evaluation target is different. In particular, most existing solutions developed in doubly homogeneous environments have predominantly focused on evaluating the expected long-term reward following the target policy aggregated over time and population. In contrast, the following four time- and/or individual-specific values are of particular interest in the presence of double inhomogeneities:
1. The expected long-term reward aggregated over both time and population;
2. The expected long-term reward aggregated over time for a given subject;
3. The expected reward at a given time aggregated over population;
4. The expected reward at a given time for a given subject.
Second, an unresolved challenge is how to efficiently borrow information over time and population for OPE. On one hand, to account for the subject heterogeneity or temporal nonstationarity, one could conduct OPE based on the data within each individual trajectory or at a given time. However, this approach may result in estimators with high variance. On the other hand, naively pulling data over population and time without careful considerations would lead to biased estimators.
_Contributions_. This work makes the following contributions. First, to the best of our
knowledge, it is the first study to investigate OPE in doubly inhomogeneous RL domains. Unlike prior works that primarily focused on evaluating the average effect over time and population, we provide a systematic approach by examining values that are specific to time and/or individuals. These values hold particular importance in the context of double inhomogeneities.
Second, we present a comprehensive framework for doubly inhomogeneous OPE which comprises both model-free and model-based methods. To effectively utilize information in the presence of temporal nonstationarity and individual heterogeneity, we introduce a class of two-way doubly inhomogeneous decision process (TWDIDP) models and develop policy value estimators based on these models. Our proposal shares similar spirits with the two-way fixed effects model that is widely studied in the economics and social science literature (Angrist and Pischke, 2009; Wooldridge, 2010; De Chaisemartin and d'Haultfoeuille, 2020; Arkhangelsky et al., 2021; Imai and Kim, 2021). Nonetheless, our model is more complicated in that the current treatment not only affects its immediate outcome, but impacts the future outcomes as well through its effect on the future observation via the transition function. In contrast, the fixed effects models commonly employed in the panel data literature tend to exclude the possibility of carryover effects (Imai and Kim, 2019; Arkhangelsky et al., 2021).
Finally, we systematically explore the theoretical properties of the proposed model-free method. In particular, we derive the convergence rates of various proposed value estimators, showing that the estimated average effect, individual-specific effect, time-specific effect and individual- and time-specific effect converge at a rate of \((NT)^{-1/2}\), \(T^{-1/2}\), \(N^{-1/2}\) and \(\min^{-1/2}(N,T)\) respectively, up to some logarithmic factors, where \(N\) is the number of trajectories and \(T\) is the number of time points. We further establish the limiting distributions of these estimators.
_Organization_. The rest of this paper is organized as follows. In Section 2, we introduce
the proposed doubly inhomogeneous decision process model to incorporate temporal non-stationarity and individual heterogeneity. In Sections 3 and 4, we present our proposed model-free and model-based methods. We analyze their statistical properties in Section 5. A series of comprehensive simulation studies are conducted in Section 6. Finally, in Section 7, we illustrate the proposed approach using the MIMIC-III dataset.
## 2 Two-way Doubly Inhomogeneous Decision Processes
_Data_. We first describe the dataset. We assume the offline data consist of \(N\) independent trajectories, each with \(T\) time points, and can be summarized as the following observation-action-reward triplets \(\{(O_{i,t},A_{i,t},R_{i,t}):1\leq i\leq N,1\leq t\leq T\}\), where \(i\) indexes the \(i\)th individual and \(t\) indexes the \(t\)th time point. For example, in mobile health applications, \(O_{i,t}\in\mathbb{R}^{d}\) denotes the vector of covariates measured from the \(i\)th individual at time \(t\), \(A_{i,t}\) denotes the treatment assigned to the \(i\)th individual at time \(t\), and \(R_{i,t}\in\mathbb{R}\) denotes the \(i\)th individual's clinical outcome at time \(t\). Let \(\mathcal{O}\) and \(\mathcal{A}\) denote the observation and action spaces, respectively. We assume \(\mathcal{A}\) is a discrete space, whereas \(\mathcal{O}\) is a compact subspace of \(\mathbb{R}^{d}\), where \(d\) denotes the dimension of the observation, and that the reward is uniformly bounded. The bounded-reward assumption is commonly imposed in the RL literature (see e.g., Fan et al., 2020; Li et al., 2023).
_Model_. We next present the proposed two-way doubly inhomogeneous decision process model. In the RL literature, a common practice is to employ the Markov decision process (MDP, Puterman, 2014) to model the data generating process. Assuming both the observation and reward spaces are discrete, the MDP model essentially requires the reward and future observation to be conditionally independent of the past data history given the current observation-action pair so that the system dynamics are uniquely determined by the following Markov transition function \(p\),
\[\mathbb{P}(O_{i,t+1}=o^{\prime},R_{i,t}=r|A_{i,t}=a,O_{i,t}=o,\{O_{i,j},A_{i,j },R_{i,j}\}_{1\leq j<t})=p(o^{\prime},r|a,o), \tag{2.1}\]
which is assumed to be doubly homogeneous, i.e., constant over time and population.
The proposed model relies on two key assumptions. First, to model double inhomogeneities, we assume the existence of a set of individual- and time-specific latent factors \(\{U_{i}\}_{i=1}^{N}\) and \(\{V_{t}\}_{t=1}^{T}\) conditional on which the Markov assumption holds. More specifically, for any \(i\) and \(t\), we assume
\[\mathbb{P}(O_{i,t+1}=o^{\prime},R_{i,t}=r|U_{i}=u_{i},V_{t}=v_{t},A_{i,t}=a,O_{i,t}=o,\{O_{i,j},A_{i,j},R_{i,j},V_{j}\}_{1\leq j<t}) \tag{2.2}\] \[= p(o^{\prime},r|u_{i},v_{t},a,o).\]
**Remark 1**.: _Unlike (2.1), the transition function in (2.2) is both individual- and time-dependent due to the inclusion of \(U_{i}\) and \(V_{t}\). The individual-specific factors can be viewed as certain individual baseline information (e.g., educational background, salary) that does not vary over time whereas the time-specific factors correspond to certain external factors (e.g., holidays) that have common effects on all individuals._
**Remark 2**.: _Both \(\{U_{i}\}_{i=1}^{N}\) and \(\{V_{t}\}_{t=1}^{T}\) are unobserved in practice, leading to the violation of the Markov assumption. Indeed, the proposed data generating process can be viewed as a special class of partially observable MDPs (POMDPs, Sutton and Barto, 2018) where the unobserved factors either do not evolve over time (e.g., \(\{U_{i}\}_{i=1}^{N}\)) or do not vary across individuals (e.g., \(\{V_{t}\}_{t=1}^{T}\)). More generally, one may allow the latent factors to evolve over both time and population. However, this makes the subsequent policy evaluation extremely challenging. In contrast, our proposal decomposes these factors into individual-only and time-only effects, which can be consistently estimated when both \(N\) and \(T\) diverge to infinity. We also remark that latent factor models are widely used in finance (Ross, 1976), economics (Bai and Ng, 2002), psychology (Bollen, 2002) and machine learning (Nickel et al., 2011)._
**Remark 3**.: _When the observation and reward spaces are continuous, we use \(p(\bullet,\bullet|U_{i},V_{t},A_{i,t},O_{i,t})\) to denote the conditional density function of \((O_{i,t+1},R_{i,t})\) given \(U_{i}\), \(V_{t}\), \(A_{i,t}\) and \(O_{i,t}\). Our proposal is applicable to both discrete and continuous observation/reward spaces._
Second, we impose an additivity assumption, which requires the transition function \(p\) to be additive in \(u\), \(v\) and \((a,o)\), i.e.,
\[p(o^{\prime},r|u_{i},v_{t},a,o)=\pi_{u}p_{u_{i}}(o^{\prime},r|u_{i})+\pi_{v}p_{v _{t}}(o^{\prime},r|v_{t})+\pi_{0}p_{0}(o^{\prime},r|a,o) \tag{2.3}\]
for some non-negative constants \(\pi_{u}\), \(\pi_{v}\), and \(\pi_{0}\) that satisfy \(\pi_{u}+\pi_{v}+\pi_{0}=1\) and some unknown conditional probability density (mass) functions \(p_{u}\), \(p_{v}\) and \(p_{0}\).
The additivity assumption in (2.3) essentially assumes that the transition function corresponds to a mixture of \(p_{u}\), \(p_{v}\) and \(p_{0}\), with the mixing weights given by \(\pi_{u}\), \(\pi_{v}\) and \(\pi_{0}\), respectively. Under the additivity assumption, \(p_{u}\) and \(p_{v}\) correspond to the individual- and time-specific effects, and are independent of the current observation-action pair. The function \(p_{0}\) corresponds to the main effect shared over time and subjects.
Multiplying both sides of (2.3) by \(r\) and integrating with respect to \(r\) and \(o^{\prime}\), we obtain that
\[R_{i,t}=\theta_{i}+\lambda_{t}+r_{1}(a,o)+\varepsilon_{i,t}, \tag{2.4}\]
where \(\theta_{i}=\pi_{u}\int rp_{u_{i}}(o^{\prime},r|u_{i})drdo^{\prime}\), \(\lambda_{t}=\pi_{v}\int rp_{v_{t}}(o^{\prime},r|v_{t})drdo^{\prime}\), \(r_{1}(a,o)=\pi_{0}\int rp_{0}(o^{\prime},r|a,o)drdo^{\prime}\), and \(\varepsilon_{i,t}=R_{i,t}-\mathbb{E}(R_{i,t}|A_{i,t}=a,O_{i,t}=o)\) has conditional mean zero. To ease notation, in what follows, we assume the latent factors \(\{U_{i}\}_{i}\) and \(\{V_{t}\}_{t}\) are fixed, and use \(\{u_{i}\}_{i}\) and \(\{v_{t}\}_{t}\) to denote their realizations. All the expectations mentioned below are assumed to be implicitly conditional on \(\{U_{i}\}_{i}\) and \(\{V_{t}\}_{t}\). Notably, the policy value of interest is also defined as the conditional expectation of the reward given \(\{U_{i}\}_{i}\) and \(\{V_{t}\}_{t}\). Models of this type (Equation 2.4) are referred to as the two-way fixed-effects model in the panel data literature (see e.g., Angrist and Pischke, 2009; Imai and Kim, 2021). Nonetheless, our model has the distinctive feature that it allows the current treatment to not only affect the immediate outcome, but also to impact the future outcomes through its effect on the future observation via the transition function in (2.3).
**Remark 4**.: _Our additivity assumption (2.3) is motivated by the increased popularity of the fixed-effect models in the panel data literature, due to its ability to account for unobserved
variables. As commented by Green et al. (2001), fixed effects regression can scarcely be faulted for being the bearer of bad tidings. On the other hand, when \(\pi_{0}=1\), the proposed model reduces to the standard MDP studied in doubly homogeneous environments._
_Estimand_. Finally, we define our target estimand of interest. A policy prescribes how an agent should act and make decisions. Mathematically, it maps the space of observed data history to a probability mass function on the action space, determining the probability that a given individual receives a given treatment at each time point. Throughout this paper, we focus on evaluating _stationary_ policies where the action selection probability depends on history only through the current observation and this dependence is stationary over time. More specifically, following a given stationary policy \(\pi\), the \(i\)th individual will receive treatment \(a\) with probability \(\pi(a|O_{i,t})\). This policy class includes observation-agnostic policies, i.e., \(\pi(a|O_{i,t})=\pi(a)\) for any \(a\). Meanwhile, the proposed method can be extended to evaluate possibly nonstationary policies.
For a given target policy \(\pi\), we define the following four estimands of interest: (i) the average effect \(\eta^{\pi}\equiv(NT)^{-1}\sum_{i=1}^{N}\sum_{t=1}^{T}\mathbb{E}^{\pi}(R_{i,t})\); (ii) the individual-specific effect given the observed initial observation \(\eta^{\pi}_{i}\equiv T^{-1}\sum_{t=1}^{T}\mathbb{E}^{\pi}(R_{i,t}|O_{i,1})\) (which is a scalar instead of a function of the initial observation); (iii) the time-specific effect \(\eta^{\pi}_{t}\equiv N^{-1}\sum_{i=1}^{N}\mathbb{E}^{\pi}(R_{i,t})\) and (iv) the individual- and time-specific effect \(\eta^{\pi}_{i,t}\equiv\mathbb{E}^{\pi}(R_{i,t}|O_{i,1})\). Here, the notation \(\mathbb{E}^{\pi}\) means that the expectation is taken by assuming the system dynamics follow the target policy \(\pi\). In defining \(\eta^{\pi}_{i}\) and \(\eta^{\pi}_{i,t}\), we include \(O_{i,1}\) in the conditioning set to eliminate their variability resulting from marginalizing over the initial observation distribution. This is reasonable, as the initial observation distribution may no longer be identical across different subjects due to individual heterogeneity, making it impossible to infer consistently from the data.
We focus on estimating (iv) \(\eta^{\pi}_{i,t}\) in the next two sections, based on which estimators for (i)-(iii) can be easily derived by taking the average over time and/or population.
**Remark 5**.: _In stationary environments with certain mixing conditions (see e.g., Bradley,
2005), the system will reach its stationary distribution. In that case, it is immediate to see that \(\lim_{T}\eta_{i}^{\pi}=\lim_{t}\eta_{i,t}^{\pi}\) and \(\lim_{T}\eta^{\pi}=\lim_{t}\eta_{t}^{\pi}\). Meanwhile, under individual homogeneity, we obtain \(\lim_{T}\eta^{\pi}=\lim_{T}\eta_{i}^{\pi}\). As such, the aforementioned four targets are asymptotically the same in the classical doubly homogeneous environments._
## 3 Model-free OPE
In this section, we develop model-free methodologies to learn \(\eta_{i,t}^{\pi}\): the \(i\)th subject's average reward at time \(t\) given \(O_{i,1}\). Model-free methods construct the policy value estimator without directly learning the transition function. Compared to model-based methods which directly learn the transition function to derive the estimator, they are preferred in settings with large observation space, or where the transition function is highly complicated and can be easily misspecified.
_Challenge_. Before presenting our proposal, we outline the challenges in consistently estimating the policy value. First, existing model-free methods developed in the RL literature (see e.g., Luckett et al., 2020; Liao et al., 2021; Kallus and Uehara, 2022; Liao et al., 2022; Shi et al., 2022) focused on learning the long-term reward in a stationary environment. These methods are not applicable to learn the expected reward at a given time with nonstationary transition functions. Second, in the DTR literature, backward induction or dynamic programming (Bellman, 1957; Bather, 2000) is widely employed to evaluate the value function in the sparse reward setting where the reward is obtained at the last stage and all the immediate rewards equal zero. It is applicable to evaluate \(\mathbb{E}^{\pi}(R_{i,t})\) in nonstationary environments. Nonetheless, it requires all individual trajectories to follow the same distribution and is thus inapplicable to our setting.
_Q-function_. Our proposal extends the classical backward induction method to the doubly inhomogeneous environments. To begin with, we define the following individual- and
time-specific Q-function
\[Q_{i,t_{1},t_{2}}^{\pi}(o,a)=\mathbb{E}^{\pi}(R_{i,t_{2}}|A_{i,t_{1}}=a,O_{i,t_{ 1}}=o), \tag{3.1}\]
for any \(1\leq i\leq N\) and \(1\leq t_{1}\leq t_{2}\leq T\). To elaborate on this definition, we consider two particular choices of \(t_{1}\). First, when \(t_{1}=t_{2}\), (3.1) reduces to the conditional mean of \(R_{i,t_{2}}\) given (\(A_{i,t_{2}}\), \(O_{i,t_{2}}\)), which equals \(\theta_{i}+\lambda_{t_{2}}+r_{1}(A_{i,t_{2}},O_{i,t_{2}})\) (see Equation (2.4)) under additivity. Second, when \(t_{1}=1\), it is immediate to see that
\[\eta_{i,t_{2}}^{\pi}=\sum_{a}Q_{i,1,t_{2}}^{\pi}(O_{i,1},a)\pi(a|O_{i,1}). \tag{3.2}\]
As such, it suffices to learn \(Q_{i,1,t}^{\pi}\) to construct estimators for \(\eta_{i,t}^{\pi}\).
**Remark 6**.: _In both the DTR and RL literature, the Q-function is typically defined as the cumulative reward starting from a given time \(t_{1}\). Our Q-function in (3.1) differs in that: (i) it is individual-specific where the subscript \(i\) encodes its dependence upon the latent factor \(u_{i}\); (ii) it is the conditional mean of the immediate reward at time \(t_{2}\) only instead of the cumulative reward since our objective here lies in evaluating \(\mathbb{E}^{\pi}(R_{i,t_{2}})\)._
_Backward induction_. We propose to use backward induction to compute an estimated Q-function \(\widehat{Q}_{i,1,t}^{\pi}\) for \(Q_{i,1,t}^{\pi}\) and then plug the estimator into (3.2) to construct the policy value estimator. To begin with, consider the reward function \(\{Q_{i,t,t}^{\pi}\}_{i,t}\). As shown in (2.4), under the two-way fixed-effect model, we have \(Q_{i,t,t}^{\pi}(o,a)=r_{1}(o,a)+\theta_{i}+\lambda_{t}\) for any \(i\) and \(t\). This motivates us to consider the following optimization problem:
\[(\widehat{\mathbf{\theta}},\widehat{\mathbf{\lambda}},\widehat{r}_{1})=\operatorname* {arg\,min}_{\mathbf{\theta},\mathbf{\lambda},r_{1}}\sum_{i,t}[R_{i,t}-\theta_{i}- \lambda_{t}-r_{1}(O_{i,t},A_{i,t})]^{2}, \tag{3.3}\]
where \(\mathbf{\theta}=(\theta_{1},\ldots,\theta_{N})^{\top}\in\mathbb{R}^{N}\), \(\mathbf{\lambda}=(\lambda_{1},\ldots,\lambda_{T})^{\top}\in\mathbb{R}^{T}\).
To guarantee the uniqueness of the solution to (3.3), one could impose the identifiability constraints \(\sum_{i}\theta_{i}=\sum_{t}\lambda_{t}=0\). Other normalization constraints could be used instead, but they all lead to the same estimated Q-function. Many supervised learning algorithms can be employed to solve (3.3). We provide a concrete proposal based on the method of sieves
(Grenander, 1981) in Section 3.1. Alternatively, one can model \(r_{1}\) via deep neural networks (DNN) and estimate the parameters involved in the DNN as well as \(\{\theta_{i}\}_{i}\), \(\{\lambda_{t}\}_{t}\) via e.g., the Adam algorithm (Kingma and Ba, 2015).
We next estimate \(\{Q_{i,t-1,t}\}_{i,t}\). According to the Bellman equation, we obtain
\[Q_{i,t-1,t}(O_{i,t-1},A_{i,t-1})=\mathbb{E}\Big{[}\sum_{a}\pi(a|O_{i,t})Q_{i,t,t}(O_{i,t},a)\big{|}A_{i,t-1},O_{i,t-1}\Big{]}.\]
Under the additivity assumption, we can similarly obtain a two-way decomposition for \(Q_{i,t-1,t}\); see Proposition 1 below for a formal statement. This allows us to solve a similar constrained optimization problem to (3.3) to estimate \(Q_{i,t-1,t}\). We next repeat this procedure to recursively estimate \(\{Q_{i,t-2,t}\}_{i,t}\), \(\{Q_{i,t-3,t}\}_{i,t}\), \(\cdots\), \(\{Q_{i,1,t}\}_{i,t}\) based on the Bellman equation and finally construct the policy value estimator via (3.2). We summarize our estimating procedure in Algorithm 1. The following proposition formally states the two-way structure of these Q-functions.
**Proposition 1**.: _For any integer \(k\) such that \(1\leq k<t\), the Q-function \(Q^{\pi}_{i,t-k+1,t}(o,a)\) satisfies_
\[Q^{\pi}_{i,t-k+1,t}(o,a)=r^{\pi}_{k}(o,a)+\theta^{\pi}_{k,i}+\lambda^{\pi}_{k, t},\]
_where \(\theta^{\pi}_{k,i}\) and \(\lambda^{\pi}_{k,t}\) depend only on \(i,k,\pi\) and \(t,k,\pi\) respectively._
In what follows, we will omit the superscript \(\pi\) in \(r^{\pi}_{k}(o,a),\ \theta^{\pi}_{k,i}\), and \(\lambda^{\pi}_{k,t}\) when there is no confusion.
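To make the recursion concrete, the following Python sketch mirrors the backward induction just described; the two-way least-squares solver `fit_two_way`, the policy interface `pi` and the array shapes are assumptions introduced for illustration and are not part of the original algorithm statement.

```python
import numpy as np

def backward_induction_ope(O, A, R, pi, fit_two_way, K):
    """Hedged sketch of the backward-induction recursion described above.

    O: (N, T, d) observations; A: (N, T) actions; R: (N, T) rewards.
    pi(a, o): target-policy probability of action a given observation o.
    fit_two_way(y, obs, act, idx_i, idx_t): assumed user-supplied solver of the
        constrained least squares in (3.3); it should return (theta, lam, r_hat),
        with theta of length N, lam of length T (indexed by the time label) and
        r_hat(o, a) a callable estimate of the main effect.
    K: number of backward steps; K = t recovers Q_{i,1,t} for target time t.
    """
    N, T = R.shape
    actions = np.unique(A)

    # step k = 1: regress the observed rewards, Q_{i,t,t}(o, a) = theta_i + lam_t + r_1(o, a)
    ii, tt = np.meshgrid(np.arange(N), np.arange(T), indexing="ij")
    Q = {1: fit_two_way(R.ravel(), O.reshape(N * T, -1), A.ravel(),
                        ii.ravel(), tt.ravel())}

    # steps k = 2, ..., K: Bellman recursion with pseudo-responses built from step k - 1
    for k in range(2, K + 1):
        th_prev, lam_prev, r_prev = Q[k - 1]
        y, obs, act, idx_i, idx_t = [], [], [], [], []
        for i in range(N):
            for t in range(k - 1, T):          # target time t (0-based), needs t - k + 1 >= 0
                o_next = O[i, t - k + 2]       # observation entering the lag-(k-1) Q-function
                y.append(sum(pi(a, o_next) * (th_prev[i] + lam_prev[t] + r_prev(o_next, a))
                             for a in actions))
                obs.append(O[i, t - k + 1])
                act.append(A[i, t - k + 1])
                idx_i.append(i)
                idx_t.append(t)
        Q[k] = fit_two_way(np.array(y), np.array(obs), np.array(act),
                           np.array(idx_i), np.array(idx_t))
    return Q
```

The estimator \(\widehat{\eta}^{\pi}_{i,t}\) in (3.2) then follows by evaluating the lag-\(t\) fit at the initial observation, i.e., \(\sum_{a}\pi(a|O_{i,1})\widehat{Q}^{\pi}_{i,1,t}(O_{i,1},a)\).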
To conclude this section, we draw a comparison with the classical backward induction in the DTR literature (see e.g., Murphy, 2003; Robins, 2004). First, the classical backward induction algorithm aims to learn the Q-function under an optimal policy and derive the optimal policy as the greedy policy with respect to the estimated Q-function. To the contrary, the proposed algorithm learns the Q-function under a fixed target policy for the purpose of policy evaluation.
Second, classical backward induction requires the computation of the Q-function recursively
from time \(t\) till the beginning. However, it is worth mentioning that the Q-function converges exponentially fast to a constant function with respect to the lag \(k\) (see Section C.2 of the supplementary material). We refer to this phenomenon as Q-function degeneracy. As such, early stopping can be potentially employed in Algorithm 1 to facilitate the computation.
Third, the proposed backward induction allows us to efficiently borrow information under the additivity assumption. Specifically, during each iteration, we pull all the relevant data together to estimate the Q-function. This allows us to consistently estimate the main effect (shared by all observations) at a rate of \((NT)^{-\alpha}\), which depends on both \(N\) and \(T\), where the exponent \(0<\alpha\leq 1/2\) depends on the nonparametric method being used to solve the constrained optimization problem. Meanwhile, the two-way fixed effects \(\theta\)s and \(\lambda\)s converge at rates \(T^{-1/2}\) and \(N^{-1/2}\) respectively, up to some logarithmic factors. To the contrary, the estimator obtained via the classical backward induction algorithm typically converges at a rate of \(N^{-\alpha^{\prime}}\) for some \(0<\alpha^{\prime}\leq 1/2\) in individual-homogeneous and history-dependent1 environments.
Footnote 1: The transition function depends on the entire history instead of the current observation-action pair.
### A Linear Sieve Estimator for Two-way Fixed Effects Model
_Notation_. Given arbitrary \(\{x_{i,t}\}_{1\leq i\leq N,1\leq t\leq T}\), let \(\mathbf{x}\in\mathbb{R}^{NT}\) denote the vector whose \(((t-1)N+i)\)-th element equals \(x_{i,t}\). That is, \(\mathbf{x}\) is constructed by stacking the \(N\) elements at the first time point, followed by the \(N\) elements at the second time point, and so on, until the \(N\) elements at the final time point, e.g.,
\[\mathbf{x}=(x_{1,1},x_{2,1},\ldots,x_{N,1},x_{1,2},\ldots,x_{N-1,T},x_{N,T})^{\top}.\]
Similarly, given a set of vectors \(\{\mathbf{x}_{i,t}\}_{i,t}\), let \(\mathbf{X}\) denote the matrix whose \(((t-1)N+i)\)th row equals \(\mathbf{x}_{i,t}\).
To implement Algorithm 1, we need to solve two-way fixed effects models repeatedly for
value function estimation. To simplify the presentation, we focus on the estimation of \(Q_{i,t,t}^{\pi}(O_{i,t},A_{i,t})\) (see Equation (3.3)). We propose to approximate the main effect function \(r_{1}(o,a)\) using linear sieves (Huang, 1998; Chen and Christensen, 2015), i.e., \(r_{1}(o,a)\approx\mathbf{\Phi}_{L}^{\top}(o)\mathbf{\beta}_{a}\), where \(\mathbf{\Phi}_{L}(o)\) is a vector consisting of \(L\) sieve basis functions, e.g., splines or wavelet bases. Under mild conditions, for spline basis functions, there exists a vector \(\{\mathbf{\beta}_{a}^{*}\}\) such that the approximation error is negligible, i.e., \(\sup_{o,a}|r_{1}(o,a)-\mathbf{\Phi}_{L}(o)^{\top}\mathbf{\beta}_{a}^{*}|=O(L^{-p/d})\), where \(p>0\) measures the smoothness of the system dynamics; see Assumption 1 in Section 5 (Schumaker, 2007). For simplicity, we now focus on the binary action space setting, in which \(\mathcal{A}=\{0,1\}\).
The two-way fixed effects model in (2.3) can be represented in the following matrix form: \(\mathbf{R}=\mathbf{B}\mathbf{\alpha}+\mathbf{M}+\mathbf{\varepsilon}\), where \(\mathbf{R}=(R_{1,1},R_{2,1},\ldots,R_{N,1},R_{1,2},\ldots,R_{N-1,T},R_{N,T})^{\top}\in\mathbb{R}^{NT}\), \(\mathbf{M}\in\mathbb{R}^{NT}\) stacks the main effects \(r_{1}(O_{i,t},A_{i,t})\) in the same order,
\(\mathbf{\alpha}=(\mathbf{\theta}^{\top},\mathbf{\lambda}^{\top})^{\top}\), \(\mathbf{B}=(\mathbf{1}_{T}\otimes\mathbf{I}_{N},\mathbf{I}_{T}\otimes\mathbf{1}_{N})\in\mathbb{R}^{NT\times(N+T)}\) is the design matrix, \(\mathbf{I}_{N}\) is an \(N\times N\) identity matrix, \(\mathbf{1}_{T}\) is a vector of length \(T\) with all elements one, and \(\otimes\) is the Kronecker product. In what follows, we will omit the indices of these matrices and vectors when there is no confusion.
Let \(\mathbf{\Phi}_{i,t}=((1-A_{i,t})\mathbf{\Phi}_{L}^{\top}(O_{i,t}),A_{i,t}\mathbf{\Phi}_{L }^{\top}(O_{i,t}))^{\top}\), and let \(\mathbf{\Phi}\) be the \(\mathbb{R}^{NT\times 2L}\) matrix
\[(\mathbf{\Phi}_{1,1}^{\top},\mathbf{\Phi}_{2,1}^{\top},\ldots,\mathbf{\Phi}_{N,1}^{\top}, \mathbf{\Phi}_{1,2}^{\top},\ldots,\mathbf{\Phi}_{N-1,T}^{\top},\mathbf{\Phi}_{N,T}^{\top} )^{\top}.\]
By the Frisch-Waugh-Lovell theorem (Frisch and Waugh, 1933; Lovell, 1963), a closed-form estimator of \(\mathbf{\beta}=(\mathbf{\beta}_{0}^{\top},\mathbf{\beta}_{1}^{\top})^{\top}\) can be obtained accordingly:
\[\widehat{\mathbf{\beta}}=(\mathbf{\Phi}^{\top}(\mathbf{I}-\mathbf{P})\mathbf{\Phi})^{-1}\mathbf{\Phi}^ {\top}(\mathbf{I}-\mathbf{P})\mathbf{R}, \tag{3.4}\]
where \(\mathbf{P}\) is the projection matrix: \(\mathbf{P}=\mathbf{B}(\mathbf{B}^{\top}\mathbf{B})^{+}\mathbf{B}^{\top}\), and \((\mathbf{B}^{\top}\mathbf{B})^{+}\) is the Moore-Penrose inverse of the matrix \(\mathbf{B}^{\top}\mathbf{B}\).
Given the estimated \(\mathbf{\beta}\), the estimator of main effect \(r_{1}(O_{i,t},A_{i,t})\) (denoted by \(\widehat{r}_{1}\)) can also be obtained, based on which the fixed effects can be calculated. Under the constraints that \(\sum_{i=1}^{N}\theta_{i}=\sum_{t=1}^{T}\lambda_{t}=0\), we have
\[\widehat{\theta}_{i}=T^{-1}\sum_{t=1}^{T}(R_{i,t}-\widehat{r}_{1}(O_{i,t},A_{ i,t})),\text{ and }\widehat{\lambda}_{t}=N^{-1}\sum_{i=1}^{N}(R_{i,t}-\widehat{r}_{1}(O_{i,t},A_{ i,t})).\]
The resulting estimated Q-function is given by
\[\widehat{Q}_{i,t,t}^{\pi}(O_{i,t},a)=\widehat{\theta}_{i}+\widehat{\lambda}_{t}+\mathbf{\Phi}_{L}(O_{i,t})^{\top}\widehat{\mathbf{\beta}}_{a}.\]
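To make this estimation step concrete, the following is a minimal Python sketch of (3.4) together with the subsequent fixed-effect and Q-function estimates, for a scalar observation and a raw polynomial sieve basis; all variable names are illustrative, and spline or wavelet bases would be used in practice.

```python
import numpy as np

def fit_two_way_sieve(O, A, R, L=4):
    """Sketch of the estimator in (3.4): O, A, R are (N, T) panels with a
    scalar observation; returns (beta_hat, theta_hat, lambda_hat)."""
    N, T = R.shape
    o = O.reshape(-1, order="F")          # stack observations as in the notation above
    a = A.reshape(-1, order="F")
    r = R.reshape(-1, order="F")

    # Polynomial sieve basis Phi_L(o): 1, o, o^2, ..., o^{L-1}
    basis = np.column_stack([o**k for k in range(L)])                 # (NT, L)
    Phi = np.hstack([(1 - a)[:, None] * basis, a[:, None] * basis])   # (NT, 2L)

    # Design matrix B for the two-way fixed effects: (1_T kron I_N, I_T kron 1_N)
    B = np.hstack([np.tile(np.eye(N), (T, 1)),
                   np.repeat(np.eye(T), N, axis=0)])
    P = B @ np.linalg.pinv(B.T @ B) @ B.T          # projection onto the fixed effects
    resid_maker = np.eye(N * T) - P                # I - P

    # Least squares on the projected data; equivalent to (3.4) when invertible
    beta_hat, *_ = np.linalg.lstsq(resid_maker @ Phi, resid_maker @ r, rcond=None)

    # Main-effect fit and fixed effects under the sum-to-zero convention
    r1_hat = Phi @ beta_hat
    resid = (r - r1_hat).reshape(N, T, order="F")
    theta_hat = resid.mean(axis=1)                 # individual-specific effects
    lambda_hat = resid.mean(axis=0)                # time-specific effects
    return beta_hat, theta_hat, lambda_hat
```

The least-squares step on the projected data coincides with (3.4) whenever the Gram matrix is invertible and remains well defined otherwise, in line with the use of the Moore-Penrose inverse in Remark 7 below.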
**Remark 7**.: _By design, the matrix \(\mathbf{B}\) is a singular matrix, since the sum of the first \(N\) columns is equal to the sum of the last \(T\) columns. Consequently, the generalized inverse \((\mathbf{B}^{\top}\mathbf{B})^{+}\) is used to compute the matrix \(\mathbf{P}\)._
**Remark 8**.: _When \(N\) or \(T\) is large, solving the two-way fixed effects models directly can be computationally burdensome. An alternative approach is to separate the estimation procedure into two tasks so that the main effect can be computed first, followed by the estimation of
time and individual-specific effects. This avoids inverting a large \((N+T+2L)\times(N+T+2L)\) square matrix, making it computationally efficient (see Algorithm 2 in the Appendix)._
## 4 Model-based OPE
In this section, we develop model-based methods that derive the off-policy value estimator by learning the system dynamics. Recall that under the additivity assumption,
\[R_{i,t}=\theta_{i}+\lambda_{t}+r_{1}(O_{i,t},A_{i,t})+\varepsilon_{i,t}.\]
As we discussed in Section 3, the main effect \(r_{1}\), as well as the individual- and time-specific effects can be estimated by solving the following optimization,
\[\operatorname*{arg\,min}_{\{\theta_{i}\}_{i},\{\lambda_{t}\}_{t},r_{1}}\sum_{ i,t}[R_{i,t}-\theta_{i}-\lambda_{t}-r_{1}(O_{i,t},A_{i,t})]^{2}.\]
In addition, we need to estimate the mixing probabilities \(\pi_{u}\), \(\pi_{v}\), \(\pi_{0}\) as well as the distribution functions \(p_{u_{i}}(o^{\prime}|u_{i})\), \(p_{v_{t}}(o^{\prime}|v_{t})\), \(p_{0}(o^{\prime}|a,o)\), obtained by marginalizing over \(p_{u_{i}}(o^{\prime},r|u_{i})\), \(p_{v_{t}}(o^{\prime},r|v_{t})\), \(p_{0}(o^{\prime},r|a,o)\) in Equation (2.3).
Given these estimators, we employ a simulation-based method to construct the policy value. To be more specific, based on the estimated transition function, we can simulate an observation \(O_{i,2}^{*}\) based on the observed \(O_{i,1}\) under the target policy \(\pi\). We next sequentially simulate a sequence of observations \(\{O_{i,t}^{*}\}_{t}\) under \(\pi\) and compute the estimated reward \(\sum_{a}\pi(a|O_{i,t}^{*})(\widehat{r}_{1}(O_{i,t}^{*},a)+\widehat{\theta}_{i}+\widehat{\lambda}_{t})\). Finally, we repeat this procedure sufficiently many times and average all the estimated rewards across different simulations.
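A minimal sketch of this simulation step is given below; the helpers `sample_next_obs`, `r1_hat`, `theta_hat` and `lambda_hat` stand for the fitted transition sampler and reward components and are assumed to be supplied by the estimation step. The function returns, for each subject, the time-averaged expected reward, i.e., a Monte Carlo estimate of \(\eta_{i}^{\pi}\).

```python
import numpy as np

def simulate_policy_value(O1, pi, sample_next_obs, r1_hat, theta_hat, lambda_hat,
                          T, n_rep=500, rng=None):
    """Monte Carlo value estimate under the target policy `pi` for each subject.

    O1: length-N array of initial observations; pi(o) returns P(A=1|o);
    sample_next_obs(o, a, i, t, rng) draws the next observation from the fitted
    mixture transition model (an assumed helper, not defined here)."""
    rng = np.random.default_rng(rng)
    N = len(O1)
    values = np.zeros(N)
    for _ in range(n_rep):
        for i in range(N):
            o = O1[i]
            total = 0.0
            for t in range(T):
                p1 = pi(o)
                # expected reward under pi: sum_a pi(a|o) * (r1(o,a) + theta_i + lambda_t)
                total += ((1 - p1) * r1_hat(o, 0) + p1 * r1_hat(o, 1)
                          + theta_hat[i] + lambda_hat[t])
                a = rng.binomial(1, p1)                   # action drawn from pi
                o = sample_next_obs(o, a, i, t, rng)      # simulated next observation
            values[i] += total / T
    return values / n_rep
```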
_Likelihood_. It remains to estimate \(p_{u}\), \(p_{v}\), \(p_{0}\) and \(\pi_{u}\), \(\pi_{v}\), \(\pi_{0}\). Given the latent factors, the likelihood function is proportional to the following,
\[\prod_{i=1}^{N}\prod_{t=2}^{T}p(O_{i,t}|u_{i},v_{t-1},A_{i,t-1},O _{i,t-1};\mathbf{\Theta})\] \[= \prod_{i=1}^{N}\prod_{t=2}^{T}[\pi_{u}p_{u_{i}}(O_{i,t}|u_{i}; \mathbf{\Theta})+\pi_{v}p_{v_{t}}(O_{i,t}|v_{t-1},\mathbf{\Theta})+\pi_{0}p_{0}(O_{i, t}|A_{i,t-1},O_{i,t-1};\mathbf{\Theta})], \tag{4.1}\]
where we parameterize the transition model by \(\mathbf{\Theta}=\{\pi_{0},\pi_{u},\pi_{v},\mathbf{\Theta}_{0},\mathbf{\Theta}_{v},\mathbf{\Theta}_{u}\}\), and \(\mathbf{\Theta}_{0},\mathbf{\Theta}_{v},\mathbf{\Theta}_{u}\) are the parameters associated with the models \(p_{0},p_{v}\) and \(p_{u}\) respectively.
We introduce a latent variable \(Z_{i,t}\in\{0,1,2\}\) such that
\[Z_{i,t}=\begin{cases}0,\text{ if }O_{i,t}\text{ is generated by }p_{0},\\ 1,\text{ if }O_{i,t}\text{ is generated by }p_{v},\\ 2,\text{ otherwise}.\end{cases}\]
However, directly maximizing the likelihood in (4.1) is challenging, since it requires marginalizing over \(Z_{i,t}\). Toward that end, we employ the expectation-maximization (EM, Dempster et al., 1977; Wu, 1983; Meng and Van Dyk, 1997) algorithm for parameter estimation. The EM algorithm is a recursive algorithm which alternates between an E-step for computing conditional expectations and an M-step for maximizing the likelihood. It has been widely used in various statistical problems including missing data, mixture models and clustering (Little and Rubin, 2019). We detail the two steps below.
_E-step_. Similar to (4.1), the complete log-likelihood involving both \(\{O_{i,t}\}_{i,t}\) and \(\{Z_{i,t}\}_{i,t}\) is given by
\[l(\mathbf{O},\mathbf{Z}|\mathbf{U},\mathbf{V},\mathbf{A};\mathbf{\Theta}) \propto\sum_{i=1}^{N}\sum_{t=2}^{T}\log p(O_{i,t}|Z_{i,t},u_{i},v_ {t-1},A_{i,t-1},O_{i,t-1};\mathbf{\Theta})p(Z_{i,t};\mathbf{\Theta})\] \[= \sum_{i=1}^{N}\sum_{t=2}^{T}[\mathbb{I}(Z_{i,t}=0)\log(\pi_{0}p_ {0}(O_{i,t}|A_{i,t-1},O_{i,t-1};\mathbf{\Theta}_{0}))\] \[+\mathbb{I}(Z_{i,t}=1)\log(\pi_{v}p_{v_{t}}(O_{i,t}|v_{t};\mathbf{ \Theta}_{v}))+\mathbb{I}(Z_{i,t}=2)\log(\pi_{u}p_{u_{i}}(O_{i,t}|u_{i};\mathbf{ \Theta}_{u}))]. \tag{4.2}\]
Given a current guess of \(\mathbf{\Theta}\), say \(\widetilde{\mathbf{\Theta}}\), define \(\Gamma(\mathbf{\Theta}|\widetilde{\mathbf{\Theta}})\) as the expected value of \(l(\mathbf{O},\mathbf{Z}|\mathbf{U},\mathbf{V},\mathbf{A};\mathbf{\Theta})\) with respect to the currently estimated conditional distribution \(p(\mathbf{Z}|\mathbf{U},\mathbf{V},\mathbf{O},\mathbf{A};\widetilde{\mathbf{\Theta}})\), i.e., \(\Gamma(\mathbf{\Theta}|\widetilde{\mathbf{\Theta}})=\mathbb{E}_{Z\sim p(\mathbf{Z}|\mathbf{U},\mathbf{V},\mathbf{O},\mathbf{A};\widetilde{\mathbf{\Theta}})}l(\mathbf{O},\mathbf{Z}|\mathbf{U},\mathbf{V},\mathbf{A};\mathbf{\Theta})\). We aim to calculate \(\Gamma\) in this step. It
follows from (4.2) that
\[\begin{split}\Gamma(\mathbf{\Theta}|\widetilde{\mathbf{\Theta}})=& \sum_{i=1}^{N}\sum_{t=2}^{T}[p(Z_{i,t}=0|O_{i,t},A_{i,t-1},O_{i,t-1}; \widetilde{\mathbf{\Theta}}_{0})\log(\pi_{0}p_{0}(O_{i,t}|A_{i,t-1},O_{i,t-1};\mathbf{ \Theta}_{0}))\\ &+p(Z_{i,t}=1|O_{i,t},v_{t};\widetilde{\mathbf{\Theta}}_{v})\log(\pi_ {v}p_{v_{t}}(O_{i,t}|v_{t};\mathbf{\Theta}_{v}))\\ &+p(Z_{i,t}=2|O_{i,t},u_{i};\widetilde{\mathbf{\Theta}}_{u})\log(\pi_ {u}p_{u_{i}}(O_{i,t}|u_{i};\mathbf{\Theta}_{u}))].\end{split} \tag{4.3}\]
_M-step_. We aim to update the model parameter \(\mathbf{\Theta}_{new}\) that maximizes \(\Gamma(\mathbf{\Theta}|\widetilde{\mathbf{\Theta}})\) with respect to \(\mathbf{\Theta}\), i.e., \(\mathbf{\Theta}_{new}=\arg\max_{\mathbf{\Theta}}\Gamma(\mathbf{\Theta}|\widetilde{\mathbf{ \Theta}})\). It follows from (4.3) that
\[\pi_{0,new} = \frac{1}{N(T-1)}\sum_{i=1}^{N}\sum_{t=2}^{T}p(Z_{i,t}=0|O_{i,t},A _{i,t-1},O_{i,t-1};\widetilde{\mathbf{\Theta}}_{0}),\] \[\pi_{v,new} = \frac{1}{N(T-1)}\sum_{i=1}^{N}\sum_{t=2}^{T}p(Z_{i,t}=1|O_{i,t},v _{t};\widetilde{\mathbf{\Theta}}_{v}),\] \[\pi_{u,new} = \frac{1}{N(T-1)}\sum_{i=1}^{N}\sum_{t=2}^{T}p(Z_{i,t}=2|O_{i,t},u _{i};\widetilde{\mathbf{\Theta}}_{u}).\]
The rest of the parameters can be updated using any derivative-based (e.g., quasi-Newton) or derivative-free (e.g., Nelder-Mead) algorithm. Our final estimator is obtained by repeating the E-step and the M-step until convergence.
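For concreteness, one EM iteration for the mixing probabilities can be sketched as follows; the arrays of component densities are assumed to be pre-evaluated under the current parameter guess, and the closed-form M-step updates mirror the displayed formulas.

```python
import numpy as np

def em_mixing_step(dens0, densv, densu, pi0, piv, piu):
    """One EM update of the mixing probabilities (pi_0, pi_v, pi_u).

    dens0, densv, densu are (N, T-1) arrays holding the component densities
    p_0(O_{i,t}|A_{i,t-1},O_{i,t-1}), p_{v_t}(O_{i,t}|v_t) and p_{u_i}(O_{i,t}|u_i),
    pre-evaluated at the current parameter guess."""
    # E-step: posterior responsibilities of the three mixture components
    w0, wv, wu = pi0 * dens0, piv * densv, piu * densu
    total = w0 + wv + wu
    g0, gv, gu = w0 / total, wv / total, wu / total

    # M-step: closed-form updates, i.e., averages of the responsibilities over i and t
    denom = dens0.size                       # equals N * (T - 1)
    return g0.sum() / denom, gv.sum() / denom, gu.sum() / denom
```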
_Choice of the parametric family_. In our implementation, when the observation is continuous, we posit normal distribution functions for \(p_{u_{i}}(o^{\prime}|u_{i})\), \(p_{v_{t}}(o^{\prime}|v_{t})\), \(p_{0}(o^{\prime}|a,o)\), i.e., \(p_{u_{i}}(o^{\prime}|u_{i})=\phi(o^{\prime};\mu_{u_{i}},\Sigma_{u_{i}})\), \(p_{v_{t}}(o^{\prime}|v_{t})=\phi(o^{\prime};\mu_{v_{t}},\Sigma_{v_{t}})\) and \(p_{0}(o^{\prime}|a,o)=\phi(o^{\prime};\mu_{0}(a,o),\Sigma_{0}(a,o))\) where \(\phi(\bullet;\mu,\Sigma)\) denotes a \(d\)-dimensional multivariate normal density function with mean vector \(\mu\) and covariance matrix \(\Sigma\). We further use a linear model for the mean function \(\mu_{0}\), i.e., \(\mu_{0}(a,o)=\Lambda o+\psi a\) and a constant model for the covariance function, i.e., \(\Sigma_{0}(a,o)=\Sigma_{0}\) for any \(a\) and \(o\). As such, the set of parameters \(\mathbf{\Theta}\) can be summarized by \(\{\pi_{0},\pi_{v},\pi_{u},\{\mu_{u_{i}}\}_{i},\{\Sigma_{u_{i}}\}_{i},\{\mu_{v_{t}}\}_{t},\{\Sigma_{v_{t}}\}_{t},\Lambda,\psi,\Sigma_{0}\}\).
## 5 Theoretical Results
In this section, we focus on investigating the theoretical properties of our proposed model-free estimators. Consistencies and convergence rates of the model-based estimators can be
established based on existing analyses of EM algorithms (see e.g., Wu, 1983; Balakrishnan et al., 2017) and we omit the details to save space.
We begin with a summary of our theoretical results. Theorem 1 is concerned with the convergence rates of the proposed value estimators. In particular, for a sufficiently large \(L\), we show that the estimated average effect \(\widehat{\eta}^{\pi}\), individual-specific effect \(\widehat{\eta}^{\pi}_{i}\), time-specific effect \(\widehat{\eta}^{\pi}_{t}\) and individual- and time-specific effect \(\widehat{\eta}^{\pi}_{i,t}\) converge at rates of \((NT)^{-1/2}\), \(T^{-1/2}\), \(N^{-1/2}\) and \(\{\min(N,T)\}^{-1/2}\), respectively, up to some logarithmic factors. Theorem 2 establishes the limiting distributions of these estimators.
We next impose some technical assumptions.
**Assumption 1** (Holder smoothness).: _Assume that there exists some constants \(p\) and \(C\), such that for any \(a\in\mathcal{A}\) and \(o\in\mathcal{O}\), both the function \(r_{1}(\cdot,a)\) and \(p_{0}(o^{\prime}|\cdot,a)\) belong to the class of \(p\)-smooth functions \(\Lambda(p,C)\); see Appendix C.1 for the detailed definition._
**Assumption 2** (Basis functions).: _(i) \(\sup_{o}\|\boldsymbol{\Phi}_{L}(o)\|_{2}=O(\sqrt{L})\) and \(\lambda_{\max}[\int_{o\in\mathcal{O}}\boldsymbol{\Phi}_{L}(o)\boldsymbol{\Phi }_{L}^{\top}(o)do]=O(1)\); (ii) For any \(C>0\), \(\sup_{f\in\Lambda(p,C)}\inf_{\beta\in\mathbb{R}^{L}}\sup_{o}|\Phi_{L}^{\top}(o )\beta-f(o)|=O(L^{-p/d})\); (iii) \(L\ll\min(N,T)/\log(NT)\)._
**Assumption 3** (System dynamics).: _(i) Assume that there exist random errors \(\{e_{i,t}\}_{i,t}\) that are i.i.d copies of \(E\) such that the future observation \(O_{i,t+1}\) can be represented as \(\kappa(O_{i,t},A_{i,t},u_{i},v_{t},e_{i,t})\) for some function \(\kappa\) that satisfies_
\[\sup_{a,u,v}\mathbb{E}\|\kappa(o,a,u,v,E)-\kappa(o^{\prime},a,u,v,E)\|_{2}\leq q\|o-o^{\prime}\|_{2},\] \[\sup_{o,a}\|\kappa(o,a,u,v,E)-\kappa(o,a,u,v,E^{\prime})\|_{2}=O (\|E-E^{\prime}\|_{2}),\]
_for some \(0\leq q<1\); (ii) each element of the error vector \(E\) has a sub-exponential tail, i.e., \(\max_{j}\mathbb{E}\exp(t|E_{j}|)<\infty\) for some \(t>0\), where \(E_{j}\) denotes the \(j\)th element of \(E\); (iii) the density functions \(p_{u}\), \(p_{v}\) and \(N^{-1}\sum_{i=1}^{N}p_{O_{i,1}}\) (\(p_{O_{i,1}}\) denotes the density function of \(O_{i,1}\)) are uniformly bounded._
**Assumption 4** (Stability).: _For any backward step \(k\) (the kth iteration in Algorithm 1),_
\[\lambda_{\min}[\mathbb{E}(\mathbf{\Phi}_{k}^{\top}\mathbf{S}_{k}\mathbf{\Phi}_{k-1}^{new})] \geq(NT)\rho_{0}\text{ and }\|[\mathbb{E}(\mathbf{\Phi}_{k}^{\top}\mathbf{S}_{k}\mathbf{\Phi}_{k})]^{-1} \mathbb{E}(\mathbf{\Phi}_{k}^{\top}\mathbf{S}_{k}\mathbf{\Phi}_{k-1}^{new})\|_{2}\leq\rho_{ 1},\]
_for some constants \(\rho_{0}>0\) and \(0<\rho_{1}<1\), where \(\mathbf{\Phi}_{k}\) is the matrix consisting of the first \(N(T-k+1)\) rows of \(\mathbf{\Phi}\), and \(\mathbf{S}_{k}\) and \(\mathbf{B}_{k}\) denote the residual maker matrix and the design matrix for the fixed effects at step \(k\), respectively (they are constructed analogously from the first \(N(T-k+1)\) rows of \(\mathbf{S}\) and \(\mathbf{B}\); see Section C.4.5 of the supplementary article for the detailed formulation), and \(\mathbf{\Phi}_{k}^{new}\) is the design matrix such that_
\[\mathbf{\Phi}_{k,i,t}^{new}=\frac{1}{T-k+1}\sum_{s=1}^{T-k+1}\mathbf{\Phi}_{k,i,s}+\frac{1}{N}\sum_{j=1}^{N}\mathbf{\Phi}_{k,j,t}+\sum_{a\in\mathcal{A}}\pi(a|O_{k,i,t})\mathbf{\Phi}_{k,i,t}. \tag{5.1}\]
Assumption 1 is frequently imposed in the sieve estimation literature (see e.g., Huang, 1998; Chen and Christensen, 2015). It has been recently employed in the RL literature to obtain sharp convergence rate for the estimated Q- and value estimator (Fan et al., 2020; Chen and Qi, 2022; Li et al., 2022; Shi et al., 2022). In doubly inhomogeneous environments, it allows us to show that \(Q_{i,t_{1},t_{2}}^{\pi}\) is \(p\)-smooth for any \(i,t_{1},t_{2}\) and \(\pi\); see Lemma 1 in Section C.1 of the supplementary article.
Assumption 2(i) and (ii) are automatically satisfied when tensor product B-spline or wavelet bases are employed; see Section 6 of Chen and Christensen (2015) for a review of these basis functions. Assumption 2(iii) is to guarantee the consistency of the sieve estimator.
Assumption 3(i) and (ii) allow us to establish concentration inequalities in doubly inhomogeneous environments. Under the additivity assumption (2.3), there exist functions \(\kappa_{0}\), \(\kappa_{u}\), \(\kappa_{v}\) and random errors \(E_{0}\), \(E_{u}\) and \(E_{v}\) such that \(\kappa(o,a,u,v,E)\overset{d}{=}\mathbb{I}(Z=0)\kappa_{0}(o,a,E_{0})+\mathbb{I} (Z=1)\kappa_{v}(v,E_{v})+\mathbb{I}(Z=2)\kappa_{u}(u,E_{u})\) where the latent variable \(Z\) is independent of \((E_{0},E_{u},E_{v})\) and satisfies that \(\mathbb{P}(Z=0)=\pi_{0}\), \(\mathbb{P}(Z=1)=\pi_{v}\), \(\mathbb{P}(Z=2)=\pi_{u}\). As such, Assumption 3 is automatically satisfied if \(E_{0},E_{v}\) and \(E_{u}\) have sub-exponential tails, \(\kappa_{0}\), \(\kappa_{u}\) and \(\kappa_{v}\) are Lipschitz continuous as functions of the error term and
\[\sup_{a}\mathbb{E}\|\kappa_{0}(o,a,E_{0})-\kappa_{0}(o^{\prime},a,E_{0})\|_{2 }\leq q\|o-o^{\prime}\|_{2}, \tag{5.2}\]
for some \(0\leq q<1\). Notice that (5.2) is automatically satisfied for the auto-regressive model \(O^{\prime}=f(O,A)+g(E_{0})\) for any \(f\) such that \(\sup_{a}|f(o,a)-f(o^{\prime},a)|\leq q\|o-o^{\prime}\|_{2}\). Other examples are provided in Diaconis and Freedman (1999). Notice that Assumption 1 implicitly implies the boundedness of the transition density function \(p_{0}\). This together with Assumption 3(iii) yields the uniform boundedness of marginal density functions of \(\{O_{i,t}\}_{i,t}\).
The first part of Assumption 4 essentially requires \((NT)^{-1}\mathbb{E}(\mathbf{\Phi}_{k}^{\top}\mathbf{S}_{k}\mathbf{\Phi}_{k})\) to be invertible. The second part is closely related to the irrepresentable or mutual incoherence condition in the variable selection literature for the selection consistency of the least absolute shrinkage and selection operator (Meinshausen and Buhlmann, 2006; Zhao and Yu, 2006; Zou, 2006). This type of assumption is necessary to ensure the consistency of the subsequent value estimator (Perdomo et al., 2022). Similar assumptions have been imposed in the statistics literature (Ertefaie and Strawderman, 2018; Luckett et al., 2020; Shi et al., 2022).
Finally, we present our theories. Recall that \(\eta_{i}^{\pi}\) and \(\eta_{i,t}^{\pi}\), as well as their estimators, implicitly depend on the initial observation \(O_{i,1}\). As such, it is more appropriate to write them as functions of \(O_{i,1}\), e.g., \(\eta_{i,t}^{\pi}(o)=\mathbb{E}^{\pi}(R_{i,t}|O_{i,1}=o)\), \(\widehat{\eta}_{i,t}^{\pi}(o)=\sum_{a}\widehat{Q}_{i,1,t}^{\pi}(o,a)\pi(a|o)\) (\(\eta_{i}^{\pi}(o)\) and \(\widehat{\eta}_{i}^{\pi}(o)\) can be similarly defined). For these effects, instead of considering the differences \(\widehat{\eta}_{i,t}^{\pi}(O_{i,1})-\eta_{i,t}^{\pi}(O_{i,1})\) and \(\widehat{\eta}_{i}^{\pi}(O_{i,1})-\eta_{i}^{\pi}(O_{i,1})\), we focus on the aggregated differences \(\int_{o\in\mathcal{O}}[\widehat{\eta}_{i,t}^{\pi}(o)-\eta_{i,t}^{\pi}(o)]do\) and \(\int_{o\in\mathcal{O}}[\widehat{\eta}_{i}^{\pi}(o)-\eta_{i}^{\pi}(o)]do\) to eliminate the variability due to \(O_{i,1}\).
**Theorem 1** (Rates of Convergence).: _Assume Assumptions 1 - 4 hold. Then with probability approaching \(1\), we have for any \(1\leq i\leq N\) and \(1\leq t\leq T\) that_
\[\max_{i,t}\left|\int_{o\in\mathcal{O}}[\widehat{\eta}_{i,t}^{\pi} (o)-\eta_{i,t}^{\pi}(o)]do\right|=O(L^{-p/d})+O(N^{-1/2}\sqrt{\log(NT)})+O(T^ {-1/2}\sqrt{\log(NT)}),\] \[\max_{i}\left|\int_{o\in\mathcal{O}}[\widehat{\eta}_{i}^{\pi}(o)- \eta_{i}^{\pi}(o)]do\right|=O(L^{-p/d})+O(T^{-1/2}\sqrt{\log(NT)}),\] \[\max_{t}|\widehat{\eta}_{t}^{\pi}-\eta_{t}^{\pi}|=O(L^{-p/d})+O( N^{-1/2}\sqrt{\log(NT)})\text{ and }\ |\widehat{\eta}^{\pi}-\eta^{\pi}|=O(L^{-p/d})+O(\sqrt{\log(NT)/NT}).\]
Theorem 1 highlights a noteworthy property of our method: the error bounds of the value
estimators depend solely on \(p\), \(d\), \(L\), \(N\) and \(T\), and are independent of the number of backward inductions conducted. This is due to an important feature of our approach: the error term at the \(k\)th backward stage is of the order \(O(\pi_{0}^{k})\) (as demonstrated by Lemma 2 in the supplementary article). Specifically, the error bounds for each value estimator comprise two components: the bias term \(O(L^{-p/d})\) and the variance term \(O(N^{-1/2}\sqrt{\log(NT)})\), \(O(T^{-1/2}\sqrt{\log(NT)})\) or \(O(\sqrt{\log(NT)/NT})\). Moreover, for sufficiently large \(L\), it is evident that due to aggregation over time and population, the convergence of the average effect \(\widehat{\eta}^{\pi}\) is the fastest, whereas the individual- and time-specific effect \(\widehat{\eta}^{\pi}_{i,t}\) demonstrates the slowest convergence.
**Theorem 2** (Asymptotic Normality).: _Assume Assumptions 1 - 4 hold, and there exists some constant \(c\geq 1\) such that \(\mathbb{E}(\varepsilon_{i,t}^{2}|O_{i,t}=o,A_{i,t}=a)>c^{-1}\), for any \(i\), \(t\), \(o\in\mathcal{O}\) and \(a\in\mathcal{A}\). Furthermore, assume that the number of basis functions \(L\) satisfies \(NT\ll L^{2p/d}\). Then when both \(N\) and \(T\) go to infinity,_
\[\sqrt{\min(N,T)}\sigma_{\eta_{i,t}^{\pi}}^{-1}\int_{o\in\mathcal{ O}}\left(\widehat{\eta}_{i,t}^{\pi}-\eta_{i,t}^{\pi}\right)do\stackrel{{ d}}{{\longrightarrow}}\mathcal{N}(0,1)\text{, }\sqrt{T}\sigma_{\eta_{i}^{\pi}}^{-1}\int_{o\in\mathcal{O}}(\widehat{\eta}_{i} ^{\pi}-\eta_{i}^{\pi})do\stackrel{{ d}}{{\longrightarrow}} \mathcal{N}(0,1),\] \[\sqrt{N}\sigma_{\eta_{t}^{\pi}}^{-1}(\widehat{\eta}_{t}^{\pi}- \eta_{t}^{\pi})\stackrel{{ d}}{{\longrightarrow}}\mathcal{N}(0,1)\text{, and }\sqrt{NT}\sigma_{\eta^{\pi}}^{-1}(\widehat{\eta}^{\pi}-\eta^{\pi})\stackrel{{ d}}{{\longrightarrow}}\mathcal{N}(0,1),\]
_where \(\sigma_{\eta_{i,t}^{\pi}},\sigma_{\eta_{i}^{\pi}},\sigma_{\eta_{t}^{\pi}}\) and \(\sigma_{\eta^{\pi}}\) are some quantities bounded from below and above (for a detailed formulation, please refer to Section D in the Appendix)._
Theorem 2 establishes the asymptotic normality of the value estimators when both \(N\) and \(T\) diverge. Specifically, in Section D of the Appendix, we demonstrate that \(\sqrt{\min(N,T)}(\widehat{\eta}_{i,t}-\eta_{i,t})\) can be expressed as a sum of martingale difference sequences plus some asymptotically negligible term, and a direct application of the martingale difference central limit theorem (McLeish, 1974) yields the above results.
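In particular, once a consistent plug-in estimate of the relevant asymptotic standard deviation is available (its exact form is given in Section D of the Appendix and not reproduced here), Theorem 2 licenses Wald-type confidence intervals; the snippet below is a minimal sketch of this construction, with all numeric inputs purely illustrative.

```python
from math import sqrt

def wald_ci(eta_hat, sigma_hat, n_eff, z=1.96):
    """Normal-approximation confidence interval implied by Theorem 2.

    n_eff is the rate appearing in the theorem: N*T for the average effect,
    T for eta_i, N for eta_t and min(N, T) for eta_{i,t}; sigma_hat is an
    assumed plug-in estimate of the corresponding asymptotic standard deviation."""
    half = z * sigma_hat / sqrt(n_eff)
    return eta_hat - half, eta_hat + half

# Illustrative call for the average effect with N = T = 80 (made-up numbers)
lo, hi = wald_ci(eta_hat=1.3, sigma_hat=0.8, n_eff=80 * 80)
```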
## 6 Numerical Studies
In this section, we evaluate our proposed model-based and model-free approaches through extensive simulations. Our focus is on a binary action space, with a tabular observation space and a continuous observation space. As discussed earlier, we focus on evaluating the following four targets: the average reward \(\eta^{\pi}\); the \(i\)th subject's average reward aggregated over time \(\eta_{i}^{\pi}\); the average reward in the population at time \(t\), denoted by \(\eta_{t}^{\pi}\); the \(i\)th subject's expected reward at time \(t\), denoted by \(\eta_{i,t}^{\pi}\).
_Baseline methods_. We compare our proposed approach against the following:
1. _Doubly homogeneous model-free OPE_. As commented earlier, the four targets become asymptotically the same in doubly homogeneous environments. The first baseline method ignores both types of inhomogeneities and employ the Q-learning algorithm developed in doubly homogeneous environments to evaluate the long-term average reward and use the same estimator to estimate \(\eta_{i}^{\pi}\), \(\eta_{t}^{\pi}\) and \(\eta_{i,t}^{\pi}\). To begin with, we introduce the following relative value function, \[\widetilde{Q}^{\pi}(o,a)=\mathbb{E}^{\pi}\left[\sum_{j=t}^{\infty}(R_{j}-\eta^ {\pi})|O_{t}=o,A_{t}=a\right],\] where \(\eta^{\pi}\) is the average reward by letting \(T\rightarrow\infty\). Notice that \(\widetilde{Q}^{\pi}\) differs from the proposed Q-function defined in Section 3 under double inhomogeneities. According to the Bellman equation, \[\widetilde{Q}^{\pi}(O_{t},A_{t})=\mathbb{E}\left[R_{t}-\eta^{\pi}+\sum_{a}\pi( a|O_{t+1})\widetilde{Q}_{t+1}^{\pi}(O_{t+1},a)|O_{t},A_{t}\right].\] We also use linear sieves \(\mathbf{\beta}_{A_{t}}^{\top}\mathbf{\Phi}_{L}(O_{t})\) to approximate the Q-function. According to the Bellman equation, we estimate the average reward \(\eta^{\pi}\), as well as the parameters \(\mathbf{\beta}_{0}\), \(\mathbf{\beta}_{1}\) by solving the following unbiased estimating function \[\left[\begin{array}{c}\sum_{i,t}\left[R_{i,t}-\eta^{\pi}+\sum_{a}\pi(a|O_{i,t+1})\mathbf{\beta}_{a}^{\top}\mathbf{\Phi}_{L}(O_{i,t+1})-\mathbf{\beta}_{A_{i,t}}^{\top }\mathbf{\Phi}_{L}(O_{i,t})\right]\\ \sum_{i,t}A_{i,t}\mathbf{\Phi}_{L}(O_{i,t})\left[R_{i,t}-\eta^{\pi}+\sum_{a}\pi(a|O _{i,t+1})\mathbf{\beta}_{a}^{\top}\mathbf{\Phi}_{L}(O_{i,t+1})-\mathbf{\beta}_{1}^{\top} \mathbf{\Phi}_{L}(O_{i,t})\right]\\ \sum_{i,t}(1-A_{i,t})\mathbf{\Phi}_{L}(O_{i,t})\left[R_{i,t}-\eta^{\pi}+\sum_{a} \pi(a|O_{i,t+1})\mathbf{\beta}_{a}^{\top}\mathbf{\Phi}_{L}(O_{i,t+1})-\mathbf{\beta}_{0}^{ \top}\mathbf{\Phi}_{L}(O_{i,t})\right]\end{array}\right]=\mathbf{0}.\]
2. _Temporal stationary model-free OPE_. The second baseline method ignores temporal non-stationarity. It applies the first baseline method to each individual data trajectory to estimate \(\eta_{i}^{\pi}\) for \(i=1,2,\cdots,N\). Denote the resulting estimator by \(\widehat{\eta}_{i}^{\pi}\). Under temporal stationarity, it sets \(\widehat{\eta}^{\pi}=\widehat{\eta}_{t}^{\pi}=\sum_{i=1}^{N}\widehat{\eta}_{i}^ {\pi}/N\), and \(\widehat{\eta}_{i}^{\pi}=\widehat{\eta}_{i,t}^{\pi}\) for any \(t\).
3. _Individual homogeneous model-free OPE_. The third baseline method ignores individual heterogeneity and adopts the classical backward induction algorithm developed in the DTR literature to evaluate \(\eta_{t}^{\pi}\). Denote the resulting estimator by \(\widehat{\eta}_{t}^{\pi}\). Under individual homogeneity, it sets \(\widehat{\eta}^{\pi}=\widehat{\eta}_{i}^{\pi}=\sum_{t=1}^{T}\widehat{\eta}_{t }^{\pi}/T\), and \(\widehat{\eta}_{t}^{\pi}=\widehat{\eta}_{i,t}^{\pi}\) for any \(i\).
4. _Doubly homogeneous model-based OPE_. The last baseline method ignores both types of inhomogeneities and adopts the standard model-based approach which estimates the transition and reward functions, and constructs the resulting value estimator based on Monte Carlo simulations as in Section 4.
In what follows, we refer to these baseline methods as B1, B2, B3, and B4. The mean squared error (MSE) and the mean absolute error (MAE) are used as the evaluation metrics; the true value function is calculated using 500 Monte Carlo experiments.
_Tabular setting_. We first consider a tabular setting in which both the observation and action spaces are binary, with \(N=T=80\). The data generation procedure is as follows. We set \(\pi_{0}=0.6\) and \(\pi_{u}=\pi_{v}=0.2\). The transition functions are given by \(p_{u_{i}}(o^{\prime}=1|u_{i})=|\sin(i)|\), \(p_{v_{t}}(o^{\prime}=1|v_{t})=|\cos(t)|\), \(p_{0}(o^{\prime}=1|o,a)=0.3\mathbb{I}(o+a\geq 1)+0.7\mathbb{I}(o+a<1)\), where \(\mathbb{I}(\cdot)\) is the indicator function. In addition, we set \(p_{u_{i}}(r|u_{i})=\phi(4\sin(i),1)\), \(p_{v_{t}}(r|v_{t})=\phi(3|\cos(2.4t)|,1)\) and \(p_{0}(r|a,o)=\phi(2o+3a,1)\). The behavior policy is \(b(a=1|o)=0.3\mathbb{I}(o=0)+0.7\mathbb{I}(o=1)\); the target policy we aim to evaluate is an observation-agnostic policy: \(\pi(a=1)=0.8\).
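The sketch below transcribes this data-generating process into Python (indices are treated as 1-based when evaluating the sine and cosine terms, and, for simplicity, the reward and next-observation mixture indicators are drawn independently, an assumption of this illustration rather than part of the exact joint specification).

```python
import numpy as np

def generate_tabular_data(N=80, T=80, pi0=0.6, piv=0.2, piu=0.2, seed=0):
    """Simulate the binary-observation environment described above."""
    rng = np.random.default_rng(seed)
    O = np.zeros((N, T), dtype=int)
    A = np.zeros((N, T), dtype=int)
    R = np.zeros((N, T))
    O[:, 0] = rng.integers(0, 2, size=N)                  # arbitrary initial observations
    for t in range(T):
        for i in range(N):
            o = O[i, t]
            a = rng.binomial(1, 0.3 if o == 0 else 0.7)   # behavior policy b(a=1|o)
            A[i, t] = a
            # reward: draw a mixture component (0 -> p_0, 1 -> p_v, 2 -> p_u), then sample
            z = rng.choice(3, p=[pi0, piv, piu])
            if z == 0:
                R[i, t] = rng.normal(2 * o + 3 * a, 1.0)
            elif z == 1:
                R[i, t] = rng.normal(3 * abs(np.cos(2.4 * (t + 1))), 1.0)
            else:
                R[i, t] = rng.normal(4 * np.sin(i + 1), 1.0)
            # next observation: independent mixture draw (a simplification)
            if t + 1 < T:
                z = rng.choice(3, p=[pi0, piv, piu])
                if z == 0:
                    p1 = 0.3 if (o + a) >= 1 else 0.7
                elif z == 1:
                    p1 = abs(np.cos(t + 1))
                else:
                    p1 = abs(np.sin(i + 1))
                O[i, t + 1] = rng.binomial(1, p1)
    return O, A, R
```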
Figure 1 summarizes the MAE and the MSE of the six different estimators of \(\eta^{\pi}\), \(\eta^{\pi}_{t}\), \(\eta^{\pi}_{i}\), and \(\eta^{\pi}_{i,t}\), respectively. We make the following observations: (i) our proposed model-free method outperforms other methods in all settings; (ii) our proposed model-based method achieves a smaller MSE than other competing methods when considering the last three estimands; (iii) the doubly homogeneous model-based method B4 performs the worst across all scenarios.
Figure 1: MAE and MSE of the estimated value (four targets) in the tabular setting using our proposed two-way doubly inhomogeneous decision process model (TWDIDP1 and TWDIDP2, model-free and model-based respectively), and four baseline methods B1, B2, B3, and B4.
_Continuous observation space_. We design a setting with a continuous observation space. Similar to the tabular setting, we set \(N=T=80\), \(\pi_{0}=0.6\) and \(\pi_{u}=\pi_{v}=0.2\). The transition function consists of \(p_{u_{i}}(o^{\prime}|u_{i})=\phi(\sin(3i),1)\), \(p_{v_{t}}(o^{\prime}|v_{t})=\phi(\cos(-1.8t),1)\), \(p_{0}(o^{\prime}|o,a)=\phi(-0.25o+a,1)\). Finally, we set \(p_{u_{i}}(r|u_{i})=\phi(4\sin(i),1)\), \(p_{v_{t}}(r|v_{t})=\phi(3|\cos(2.4t)|,1)\), and \(p_{0}(r|a,o)=\phi(2o+3a,1)\). The behavior policy is \(b(a=1)=0.5\); the target policy we aim to evaluate is also a randomized policy: \(\pi(a=1)=0.8\). A third-order polynomial two-way fixed effects model is used to estimate the Q-function.
Figure 2: MAE and MSE of the estimated value (four targets) in the continuous observation setting using our proposed two-way doubly inhomogeneous decision process model (TWDIDP1 and TWDIDP2, model-free and model-based respectively), and four baseline methods B1, B2, B3, and B4.
Figure 2 presents the MAE and the MSE of the six different estimators of \(\eta^{\pi}\), \(\eta^{\pi}_{t}\), \(\eta^{\pi}_{i}\), and \(\eta^{\pi}_{i,t}\), respectively. Similar to the tabular setting, our proposed model-free method outperforms other methods in all cases. In addition, the gap between our proposed approach and the baseline methods is more pronounced than in the tabular setting. Our proposed model-based method also performs better than the baseline methods in general.
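For concreteness, the cubic sieve basis underlying this two-way fixed effects fit can be built as in the short sketch below (our own illustration for a scalar observation), after which the closed-form estimator of Section 3.1 applies unchanged.

```python
import numpy as np

def cubic_basis(o):
    """Phi_L(o) = (1, o, o^2, o^3) for a scalar observation o (or an array of them)."""
    o = np.asarray(o, dtype=float)
    return np.stack([np.ones_like(o), o, o**2, o**3], axis=-1)

def action_interacted_features(o, a):
    """((1 - a) * Phi_L(o), a * Phi_L(o)), as in the model-free estimator."""
    phi = cubic_basis(o)
    a = np.asarray(a, dtype=float)[..., None]
    return np.concatenate([(1 - a) * phi, a * phi], axis=-1)
```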
## 7 Real Data Analysis
In this section, we apply our proposed method to a sepsis dataset from MIMIC-III (Johnson et al., 2016), a database that contains information on critical care patients from Beth Israel Deaconess Medical Center in Boston, Massachusetts. As mentioned earlier, the heterogeneity in patients' responses to treatment (Evans et al., 2021), along with a potentially non-stationary environment, makes it difficult to consistently assess the impact of conducting a given target policy on patient outcomes.
We focus on a subset of patients who received treatments sequentially over 20 stages. The primary outcome in this analysis is the sequential organ failure assessment (SOFA) score (Jones et al., 2009), which is used to monitor the progression of organ failure over time and measure the degree of organ dysfunction or failure in critically ill patients. A higher SOFA score indicates a higher risk of mortality. At any time point \(t\), we consider a binary treatment \(A_{t}\in\{0,1\}\) where \(A_{t}=1\) indicates that the patient received an intravenous fluid intervention with a dose greater than the median value for the group of patients being studied, and \(A_{t}=0\) otherwise. In previous studies, Raghu et al. (2017) and Zhou et al. (2022b) examined joint action spaces with both vasopressors and intravenous fluid interventions. We focus solely on intravenous fluid intervention due to the findings of Zhou
et al. (2022b), which suggest a limited impact of vasopressors.
The following five covariates are included in the observation: gender, age, the Elixhauser comorbidity index, weight, and the systemic inflammatory response syndrome score.
Three deterministic policies were evaluated using our proposed methods: (i) always administering a high dose (\(\mathbb{P}(A=1)=1\)), (ii) always administering a low dose (\(\mathbb{P}(A=1)=0\)), and (iii) administering a low dose when the SOFA score is less than 11, and a high dose otherwise. The third policy is tailored to the SOFA score, taking into account evidence
that a SOFA score of more than 11 is associated with a 100% mortality rate (Jones et al., 2009). To estimate the Q-function, we employed a third-order polynomial two-way fixed effects model at each time point. The average value estimators for the three policies are as follows: 7.26 (always high dose), 6.85 (always low dose), and 6.51 (tailored by SOFA score). These results indicate that the tailored policy is the most effective policy as it yields the lowest estimated SOFA score. Figure 3 summarizes the estimated \(\eta_{i}^{\pi}\)s and \(\eta_{t}^{\pi}\)s, clearly demonstrating that the tailored policy outperforms the other two policies, while the always high dose policy performs the poorest. Our conclusion is in line with existing results, which recommend the low dose policy over the high dose policy. It is also consistent with physicians' recommendations in the behavior data.
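For reference, the three target policies can be written out as simple functions of the current SOFA score (a minimal sketch; the function names are ours).

```python
def policy_high_dose(sofa):
    """Always administer the high dose: P(A = 1) = 1."""
    return 1.0

def policy_low_dose(sofa):
    """Always administer the low dose: P(A = 1) = 0."""
    return 0.0

def policy_sofa_tailored(sofa):
    """Low dose when the current SOFA score is below 11, high dose otherwise."""
    return 0.0 if sofa < 11 else 1.0
```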
## 8 Discussion
In this study, we introduce a novel class of latent factor models for the reward and observation transition functions. Our research also encompasses the development of a comprehensive OPE framework, combining both model-based and model-free approaches. To our knowledge, this is the first paper that develops statistically sound OPE methods in offline RL with double inhomogeneities. Our proposal relies on an additivity assumption. Without this assumption, one can employ model-based methods to simultaneously estimate the transition function and the random effects. Similar architectures have been considered in the literature on factor models and multi-relational learning. We leave it for future research.
|
2305.09157 | Weak gravitational lensing and shadow cast by rotating black holes in
axionic Chern-Simons theory | We investigate the impact of the axionic coupling parameter on the bending
angle of light and the shadow cast by slowly rotating black holes in
Chern-Simons modified gravity. We utilize the Ishihara et al. method to derive
the deflection angle of light for an observer and source located at finite
distances from a lens object in an asymptotically flat spacetime, using the
Gauss-Bonnet theorem. The deflection angle exhibits an increasing trend up to a
certain point, followed by a decrease as a function of the impact parameter,
with the presence of the axion matter field causing the observed increase.
Additionally, we calculate the Einstein ring radius as a direct application of
the weak deflection angle. We also investigate the effect of the axion matter
field on the time delay of light and analyze its impact on the shadow cast by
slowly rotating black holes. Our findings reveal a significant effect of the
axionic coupling parameter on the black hole's shadow. | Nashiba Parbin, Dhruba Jyoti Gogoi, Umananda Dev Goswami | 2023-05-16T04:17:51Z | http://arxiv.org/abs/2305.09157v2 | # Weak gravitational lensing and shadow cast by rotating black holes in axionic Chern-Simons theory
###### Abstract
We investigate the impact of the axionic coupling parameter on the bending angle of light and the shadow cast by slowly rotating black holes in Chern-Simons modified gravity. We utilize the Ishihara _et al._ method to derive the deflection angle of light for an observer and source located at finite distances from a lens object in an asymptotically flat spacetime, using the Gauss-Bonnet theorem. The deflection angle exhibits an increasing trend up to a certain point, followed by a decrease as a function of the impact parameter, with the presence of the axion matter field causing the observed increase. Additionally, we calculate the Einstein ring radius as a direct application of the weak deflection angle. We also investigate the effect of the axion matter field on the time delay of light and analyze its impact on the shadow cast by slowly rotating black holes. Our findings reveal a significant effect of the axionic coupling parameter on the black hole's shadow.
Dark matter; axionic Chern-Simons theory; Deflection angle; Black hole shadow
## I Introduction
The bending of light as it passes through the curved spacetime of the gravitational field persists as one of the most convenient observational tools to understand the spacetime geometry encompassing a strong gravitational source [1; 2; 3; 4]. Observed for the first time in \(1919\) during a solar eclipse [5], the gravitational deflection of light led to the first experimental verification of Einstein's theory of General Relativity (GR) [6]. A noteworthy implementation of gravitational bending is the study of weak lensing. The distribution of dark matter in galaxies and galaxy clusters, identification of extrasolar planets, etc. can be revealed by weak gravitational lensing studies. In recent times, weak lensing phenomena have become a central topic of research in modern astronomy and cosmology.
From the theoretical as well as the observational points of view, the study of null geodesics around a black hole plays a crucial role in determining gravitational field features of the black hole, such as its gravitational lensing and shadow. Gravitational lensing around black holes has been investigated in many scenarios such as the Schwarzschild-like black holes [7], AdS/dS black holes [8; 9], naked singularity and horizonless ultracompact objects [10; 11], etc. Gibbons and Werner altered the standard perspective by discovering a new geometrical method to derive the weak deflection angle using the Gauss-Bonnet theorem (GBT) [12; 13] for the static and asymptotically flat spacetimes [14]. In this technique, the integral of the theorem can be solved in an infinite region surrounded by the ray of light, and an exact form of the deflection angle can be derived. The Gibbons-Werner method was then applied in the geometry of a stationary black hole spacetime to obtain the deflection angle using a Finsler metric of Randers type [15; 16]. In \(2016\), Ishihara _et al._ extended the Gibbons-Werner method to finite distances [17]. Further extensions were carried out by Ono _et al._ to the axisymmetric spacetimes [18]. These generalizations have been employed by various authors for the stationary black holes as well as non-asymptotically flat spacetimes [19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30]. Crisnejo and Gallo used the GBT to study the deflection angle of massive particles as well as light rays for stationary spacetimes in a plasma medium [31]. A few articles have also reported the study of the effect of dark matter on the deflection angle using the GBT [32; 33; 34; 35].
Despite being the most successful theory, GR appears to be inadequate to interpret a few observational phenomena. The requirement of missing mass in the form of dark matter [36; 37; 38; 39; 40; 41] to describe the galactic rotation dynamics [42; 43; 44; 45; 46] cannot be explained by GR, neither can the accelerated expansion of the Universe [47; 48]. Hence, many theories have been introduced to modify GR [49; 50; 51; 52; 53; 54; 55; 56; 57; 58; 59]. One of the interesting modified theories of gravity is the Chern-Simons (CS) modified gravity theory [60; 61]. The CS gravity is a widely familiar four-dimensional scalar-tensor theory, proposed in Ref. [61]. This theory possesses an extra dynamical scalar field nonminimally coupled to the Pontryagin density. Several studies have been carried out to explore the physical consequences of this theory [62; 63; 64; 65; 66; 67; 68; 69; 70; 71; 72]. CS black hole spacetimes are unique solutions of this theory, and one of its fundamental characteristics is the role played by the scalar field. This dynamical scalar field covers the black hole with hair by the process of scalarization. In the black hole solution under consideration, the scalar field is the string-inspired axion matter field [73], and thus this black hole is dressed in axionic hair.
The shadow of black holes is another important optical property to understand their physical attributes. The studies on shadows of different black holes have been increased significantly in recent times [74; 75; 76; 77; 78; 79; 19; 10; 11; 12; 13; 14; 15; 16; 17]. This is due to the fact of the recent release
of images of black holes by the Event Horizon Telescope (EHT) collaboration [118]. The specific shape of the shadow of a black hole depends on the physical properties of the black hole being studied [117]. Therefore, the shadow can be used to extract information about the black hole's physical properties. Additionally, shadows can help to differentiate between various theories of gravity because they are unique to the physical properties of the related black holes [119, 120, 121, 117]. A significant amount of literature has also been devoted to studying the shadow cast by a black hole surrounded by dark matter [122, 123, 124, 125].
In this work, we intend to explore the impact of CS gravity on gravitational lensing of a slowly rotating black hole spacetime. CS gravity has already been investigated in various aspects, but for the case of gravitational lensing, we study for the first time the effects of this gravity on the deflection angle of light by a slowly rotating Kerr-type black hole using the GBT. We shall implement the Ishihara _et al._ method and explore the effect of the axion coupling parameter, emerging from the CS gravity, on the deflection angle. It is to be mentioned that the axion is one of the prime candidates of dark matter (DM) [126, 127, 128, 129, 130, 59], and hence, from this study we can understand the effect of DM on the deflection angle of black holes. We shall also evaluate the Einstein ring radius and study the effect of the coupling parameter on it. In addition to this, we shall calculate the time delay of light in the CS gravity theory and shall analyze the effect of the axion matter field on the time delay. We shall also investigate the impact this coupling parameter leaves on the shadow cast by the slowly rotating black hole in CS gravity.
The rest of our paper is organized as follows. In Sec. II, we briefly review the field equations related to the axionic CS modified gravity theory and mention the slowly rotating black hole solution for this theory. In Sec. III, we derive the deflection angle in the spacetime of the slowly rotating Kerr-type black hole using the Ishihara _et al._ method. We then analyze the effect of the axion coupling parameter on the deflection angle for three different black holes viz. SgrA\({}^{*}\), M87\({}^{*}\) and Cen A. In Sec. IV, we derive the Einstein ring radius and depict the effect of the coupling parameter. Next, we compute the time delay of light and the effect of the axion matter field on the time delay, in Sec. V. Furthermore, we study the shadow cast by the black hole in Sec. VI. Finally, in Sec. VII, we present the summary and conclusions of our work. Throughout our work, we use the sign convention (\(-,+,+,+\)) and the unit \(G=1\).
## II Chern-Simons gravitational theory and slowly rotating black hole solution
In the CS gravitational theory, two independent fields are taken into consideration: one is the gravitational field represented by the spacetime metric \(g_{\mu\nu}\), and the other is a scalar field of a specific nature. So, in particular, the action for the axionic CS gravitational theory is given by [60]
\[\begin{split} S&=\int d^{4}x\sqrt{-g}\left[\frac{R_ {\mu}^{\mu}}{2\kappa^{2}}-\frac{1}{2}(\partial_{\mu}\varphi)(\partial^{\mu} \varphi)-\eta\,\varphi\,\mathfrak{R}_{CS}\right]\\ &=\int d^{4}x\sqrt{-g}\left[\frac{R_{\mu}^{\mu}}{2\kappa^{2}}- \frac{1}{2}(\partial_{\mu}\varphi)(\partial^{\mu}\varphi)\right]-\int d^{4}x \,\eta\,\varphi\,\hat{\mathfrak{H}}_{CS},\end{split} \tag{1}\]
where \(\kappa\) is the inverse of the reduced Planck mass \(M_{Pl}\), \(g=\det(g_{\mu\nu})\), \(R_{\mu}^{\mu}=g^{\mu\nu}R_{\mu\nu}\) is the Ricci scalar corresponding to Ricci tensor \(R_{\mu\nu}\) and \(\varphi\) is a pseudoscalar representing the axion matter field. In this axionic CS gravitational theory the axion matter field is considered to be coupled to the Pontryagin density term \(\mathfrak{R}_{CS}\)[131, 61], which in fact is the gravitational CS topological term. This term is expressed as
\[\mathfrak{R}_{CS}=\frac{1}{2}R_{\nu\rho\sigma}^{\mu}\tilde{R}_{\mu}^{\nu}\,{}^ {\rho\sigma}, \tag{2}\]
where \(\tilde{R}_{\mu}^{\nu}\,{}^{\rho\sigma}\) is the dual of the Riemann tensor \(R_{\nu\rho\sigma}^{\mu}\) and is defined as \(\tilde{R}_{\mu}^{\nu}\,{}^{\rho\sigma}=\frac{1}{2}\varepsilon^{\rho\sigma \gamma\delta}R_{\mu\gamma\delta}^{\nu}\) with \(\varepsilon^{\rho\sigma\gamma\delta}=\tilde{\epsilon}^{\rho\sigma\gamma\delta} /\sqrt{-g}\), the contravariant four dimensional Levi-Civita tensor. Thus the CS term is formed by the contraction of the Riemann tensor with its dual tensor. \(\hat{\mathfrak{H}}_{CS}\) is the CS term with the flat Levi-Civita tensor \(\tilde{\epsilon}^{\rho\sigma\gamma\delta}\) and \(\eta\) is the axion coupling parameter to the CS term. This coupling parameter has the dimension of length and is expressed in terms of the string Regge slope [132]\(\alpha^{\prime}=M_{s}^{-2}\), where \(M_{s}\) is the string scale, as \(\eta=\sqrt{2/3}\,\alpha^{\prime}/48\kappa\), which is of the order \(\mathcal{O}(M_{Pl}/M_{s}^{2})\).
Varying the action (1) with respect to the metric \(g_{\mu\nu}\) and with respect to the axion matter field \(\varphi\), the equations of motion can be obtained for the theory as [60],
\[G_{\mu\nu} =\kappa^{2}\,T_{\mu\nu}^{\varphi}+4\,\kappa^{2}\eta\,C_{\mu\nu}, \tag{3}\] \[\square\varphi =\eta\,\mathfrak{R}_{CS}, \tag{4}\]
where \(G_{\mu\nu}\) is the usual Einstein tensor, \(C_{\mu\nu}\) is the Cotton tensor [61]. Here, the energy-momentum tensor \(T_{\mu\nu}^{\varphi}\) is expressed as
\[T_{\mu\nu}^{\varphi}=\nabla_{\mu}\varphi\nabla_{\nu}\varphi-\frac{1}{2}g_{\mu \nu}(\nabla\varphi)^{2} \tag{5}\]
The Cotton tensor is obtained by varying the term \(\varphi\,\mathfrak{R}_{CS}\) with respect to the metric \(g_{\mu\nu}\) and can be expressed in the following form [60; 61]:
\[C_{\mu\nu}=-\frac{1}{2}\nabla^{\alpha}\left[(\nabla^{\beta}\varphi)\tilde{R}_{ \alpha\mu\beta\nu}+(\nabla^{\beta}\varphi)\tilde{R}_{\alpha\nu\beta\mu}\right]. \tag{6}\]
If we take into consideration the static, spherically symmetric metric, then the CS gravitational theory reduces to GR, because the Pontryagin density term vanishes in this case, leading to the absence of the axion matter field. This axion field is a string-theory inspired theoretical particle field and is a perfect candidate for low-mass DM [55; 59]. Being a pseudoscalar, this field imposes axial symmetry on any kind of spacetime that we work on. Hence, this nature of the theory provides a convenient means to find solutions for rotating compact objects, and accordingly, here we consider a metric ansatz for slowly rotating Kerr-type black holes as given by [60]
\[ds^{2}=-A(r)\,dt^{2}+B(r)dr^{2}+r^{2}d\Omega^{2}-2\,r^{2}a\sin^{2}\!\theta\,W(r )\,dt\,d\phi, \tag{7}\]
where \(d\Omega^{2}=d\theta^{2}+\sin^{2}\!\theta\,d\phi^{2}\), \(a\) is the spin parameter of the rotating black hole spacetime and \(W(r)\) is the off-diagonal correction term representing the possible backreaction in the spacetime. Considering only the leading order of the spin parameter \(a\), since the black hole is assumed to be slowly rotating, the solutions of Eqs. (3) and (4) for the metric ansatz (7) provide the metric coefficients, similar to the vacuum solutions of Einstein's equations, as
\[A(r)=\left(1-\frac{2M}{r}\right)\ \ \text{and}\ \ B(r)=\left(1-\frac{2M}{r} \right)^{-1}\]
with the off-diagonal correction term as given by
\[W(r)=\frac{2M}{r^{3}}-\frac{\eta^{2}\kappa^{2}(189M^{2}+120Mr+70r^{2})}{14r^{ 8}}+\mathcal{O}(\eta^{2n}), \tag{8}\]
where \(n\) is a positive integer \(\geq 2\). It is to be noted that the slow rotation approximation on the black hole solutions implies that the black holes are of sufficiently large mass \(M\), and for such a case the higher order terms \(\mathcal{O}(\eta^{2n})\) contribute only as small perturbations to Eq. (8). However, it needs to be mentioned that although only the first or lowest order of the spin parameter \(a\) has been used in the black hole solutions above, in the rest of our work we retain orders of \(a\) higher than this, as required in the analysis of the features of black holes involving the variable that is the inverse of the radial distance from the center of the black holes [60].
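For quick numerical exploration of this solution, the metric functions can be coded directly; the sketch below is our own, in geometrized units with \(G=c=1\), with the product \(\eta\kappa\) supplied as a single parameter and the example values chosen arbitrarily for illustration.

```python
import numpy as np

def metric_functions(r, M=1.0, eta_kappa=0.0):
    """A(r), B(r) and the frame-dragging correction W(r) of Eq. (8).

    Geometrized units (G = c = 1); `eta_kappa` stands for the product eta*kappa."""
    A = 1.0 - 2.0 * M / r
    B = 1.0 / A
    W = (2.0 * M / r**3
         - eta_kappa**2 * (189.0 * M**2 + 120.0 * M * r + 70.0 * r**2) / (14.0 * r**8))
    return A, B, W

# Illustrative evaluation outside the horizon (parameter values arbitrary)
r = np.linspace(3.0, 20.0, 100)          # radii in units of M
A, B, W = metric_functions(r, M=1.0, eta_kappa=0.05)
```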
## III Deflection Angle
In this section, we shall obtain the deflection angle of light in the weak field limit of a slowly rotating Kerr-type black hole using the GBT approach, which was extended by Ishihara _et al._ [17] in \(2016\) as mentioned earlier. In this approach, the black hole is considered as a lens (\(L\)), which is at a finite distance from the source (\(S\)) and the receiver (\(R\)) as shown in Fig. 1. For the case of an equatorial plane (\(\theta=\pi/2\)), the deflection angle can be expressed as [17; 18]
\[\hat{\Theta}=\Psi_{R}-\Psi_{S}+\Phi_{RS}, \tag{9}\]
where \(\Psi_{R}\) and \(\Psi_{S}\) are the angles that are measured at the \(R\) and the \(S\) positions respectively. \(\Phi_{RS}=\Phi_{R}-\Phi_{S}\) is the separation angle between the receiver and the source. Here, \(\Phi_{R}\) and \(\Phi_{S}\) are the angular coordinates of the receiver and the source respectively. The quadrilateral \(\overset{\infty}{R}\overset{\infty}{\underset{S}{\square}}\) shown in Fig. 1 is embedded in a curved space \({}^{(3)}\mathcal{M}\), which consists of a spatial curve representing a light ray from the \(S\) to the \(R\), two outgoing radial lines each from the \(S\) and the \(R\), and a circular arc segment \(C_{r}\) having the coordinate radius \(r_{C}\) (\(r_{C}\rightarrow\infty\)). Using the GBT to this quadrilateral \(\overset{\infty}{R}\overset{\infty}{\underset{S}{\square}}\), the deflection angle (9) can also be rewritten as [14]
\[\hat{\Theta}=-\iint_{\overset{\infty}{R}\overset{\infty}{\underset{S}{ \square}}}\mathcal{K}\,dS+\int_{S}^{R}K_{g}\,dl, \tag{10}\]
where \(\mathcal{K}\) is the Gaussian curvature of the surface of propagation of light, \(K_{g}\) is the geodesic curvature of the light curves, \(dS\) is the infinitesimal area element of the surface and \(dl\) is the infinitesimal line element of the arc. It is to be mentioned that for the prograde motion of photons \(dl>0\) and for the retrograde motion \(dl<0\).
Thus to obtain the deflection angle for our considered black hole metric (7), first we have to study the Gaussian curvature \(\mathcal{K}\) of the propagating light and then calculate its quadrilateral surface integral. For this we rewrite the metric (7) for the null geodesics (\(ds^{2}=0\)) (see Sec. VI) to obtain in the form [17; 18]:
\[dt=\pm\,\sqrt{\zeta_{ij}dx^{i}dx^{j}}+\beta_{i}dx^{i}, \tag{11}\]
where \(\zeta_{ij}\) defines the optical metric and \(\beta_{i}\) represents the one-form, which are given by
\[\zeta_{ij}dx^{i}dx^{j} =\frac{dr^{2}}{\left(1-\frac{2M}{r}\right)^{2}}+\frac{r^{2}d\theta^ {2}}{\left(1-\frac{2M}{r}\right)}+\frac{r^{2}\sin^{2}\theta}{\left(1-\frac{2M} {r}\right)}\] \[\times\left\{1+\frac{r^{2}a^{2}\sin^{2}\theta}{\left(1-\frac{2M} {r}\right)}\left[\frac{2M}{r^{3}}-\frac{\eta^{2}\kappa^{2}(189M^{2}+120Mr+70r^ {2})}{14r^{8}}\right]^{2}\right\}d\phi^{2} \tag{12}\]
and
\[\beta_{i}dx^{i}=-\frac{r^{2}a\sin^{2}\theta}{\left(1-\frac{2M}{r} \right)}\Bigg{[}\frac{2M}{r^{3}}-\frac{\eta^{2}\kappa^{2}(189M^{2}+120Mr+70r^ {2})}{14r^{8}}\Bigg{]}d\phi. \tag{13}\]
With this optical metric the Gaussian curvature of propagating light is defined as [14]
\[\mathcal{K}=\frac{{}^{(3)}\!R_{r\phi r\phi}}{\zeta}=\frac{1}{\sqrt{\zeta}}\left[\frac{\partial}{\partial\phi}\left(\frac{\sqrt{\zeta}}{\zeta_{rr}}\ ^{(3)}\Gamma_{rr}^{\phi}\right)-\frac{\partial}{\partial r}\left(\frac{\sqrt{\zeta}}{\zeta_{rr}}\ ^{(3)}\Gamma_{r\phi}^{\phi}\right)\right], \tag{14}\]
where \(\zeta\equiv\det(\zeta_{ij})\). For the slowly rotating black hole in CS gravitational theory, \(\mathcal{K}\) is computed for propagation of light in the equatorial plane as
\[\mathcal{K}=-\frac{2M}{r^{3}}+\frac{3M^{2}}{r^{4}}-\frac{24a^{2}M^ {2}}{r^{6}}+\frac{16a^{2}M^{3}}{r^{7}}+\mathcal{O}\Big{(}\frac{1}{r^{8}},\frac {a^{2}}{r^{8}},a^{3}\Big{)}. \tag{15}\]
Here the terms \(\mathcal{O}\Big{(}\frac{1}{r^{8}},\frac{a^{2}}{r^{8}},a^{3}\Big{)}\) are found to be very small in comparison to the retained terms. Using this expression (15), the surface integral of the Gaussian curvature \(\mathcal{K}\) over the closed quadrilateral can be computed from the equation [18],
\[-\iint_{\widetilde{R}\square\widetilde{S}}\mathcal{K}\,dS=\int_{ \phi_{S}}^{\phi_{R}}\!\!\int_{\infty}^{r_{pq}}\mathcal{K}\sqrt{\zeta}\,dr\,d\phi, \tag{16}\]
where \(r_{ps}\) is the radius of the photon sphere (see Sec. VI). Further, we need to study the photon orbit equation in the equatorial plane for the metric (7) to derive the integrals in Eq. (16). For this, one should note that there are two constants of motion along the orbit in the equatorial plane, associated with the two Killing vectors \(\partial/\partial t\) and \(\partial/\partial\phi\), as given by [17; 18]
\[\mathcal{E} =A(r)\dot{t}+r^{2}aW(r)\dot{\phi}, \tag{17}\] \[L_{z} =r^{2}\dot{\phi}-r^{2}aW(r)\,\dot{t}, \tag{18}\]
where the dot over the variables denotes the derivative with respect to the affine parameter \(\tau\). From these two constants of motion, we obtain the impact parameter from the usual definition as
\[\Upsilon\equiv\frac{L_{z}}{\mathcal{E}}=\frac{r^{2}\dot{\phi}-r^{2}aW(r)\,\dot{t }}{A(r)\dot{t}+r^{2}aW(r)\dot{\phi}}. \tag{19}\]
The null geodesic condition, \(ds^{2}=0\) in Eq. (7) leads to the photon or light orbit equation in the equatorial plane, which is given by
\[\left(\frac{dr}{d\phi}\right)^{2}=\frac{A(r)r^{2}+\left(r^{2}aW(r) \right)^{2}}{B(r)}\left[\frac{r^{2}-2r^{2}aW(r)\Upsilon-A(r)\Upsilon^{2}}{ \left(r^{2}aW(r)+A(r)\Upsilon\right)^{2}}\right]. \tag{20}\]
To make the line integration limit of Eq. (16) finite, we change the variable \(r\) to \(u\) by \(r\equiv\frac{1}{u}\). Accordingly the above equation can be rewritten as
\[\left(\frac{du}{d\phi}\right)^{2}=F(u), \tag{21}\]
where
\[F(u)=\frac{u^{2}(1-2Mu)\left[(1-2Mu)u^{2}+a^{2}W^{2}\right]\left[1 -(1-2Mu)\Upsilon^{2}u^{2}-2aW\Upsilon\right]}{\left[aW+(1-2Mu)\Upsilon u^{2} \right]^{2}}. \tag{22}\]
In the weak field limit as well as in the slow rotation approximation, the iterative solution of Eq. (21) is obtained as [18]
\[u=\frac{\sin\phi}{\Upsilon}+\frac{M(1+\cos^{2}\phi)}{\Upsilon^{2 }}-\frac{2aM}{\Upsilon^{3}} \tag{23}\]
and hence, we can rewrite Eq. (16) as
\[-\iint_{\widetilde{R}\square\widetilde{S}}\mathcal{K}dS=\int_{ \phi_{s}}^{\phi_{R}}\!\!\int_{0}^{u}-\frac{\mathcal{K}\sqrt{\zeta}}{u^{2}}\, du\,d\phi. \tag{24}\]
Thus for the slowly rotating black hole metric (7) in axionic CS gravity, the above equation can be integrated out using Eqs. (12), (15) and (23) as
\[-\iint_{\widetilde{R}\square\widetilde{S}} \mathcal{K}\,dS=\bigg{(}\frac{2M}{\Upsilon}+\frac{21M^{3}}{4 \Upsilon^{3}}-\frac{6aM^{3}}{\Upsilon^{4}}+\frac{57a^{2}M^{3}}{2\Upsilon^{5} }+\frac{767M^{5}}{32\Upsilon^{5}}-\frac{195aM^{5}}{4\Upsilon^{6}}+\frac{10615a ^{2}M^{5}}{32\Upsilon^{7}}\] \[+\frac{3M^{5}}{64\Upsilon^{5}}+\frac{aM^{5}}{12\Upsilon^{6}}- \frac{1081a^{2}M^{5}}{96\Upsilon^{7}}-\frac{1317M^{7}}{512\Upsilon^{7}}+ \frac{475aM^{7}}{64\Upsilon^{8}}\bigg{)}\Big{[}\big{(}1-\Upsilon^{2}u_{R}^{2} \big{)}^{3/2}-\big{(}1-\Upsilon^{2}u_{S}^{2}\big{)}^{3/2}\Big{]}\] \[-\bigg{(}\frac{3a^{2}M^{3}}{20\Upsilon^{5}}+\frac{M^{5}}{64 \Upsilon^{5}}+\frac{93a^{2}M^{5}}{160\Upsilon^{7}}+\frac{143M^{7}}{512\Upsilon ^{7}}-\frac{99aM^{7}}{320\Upsilon^{8}}\bigg{)}\Big{[}\big{(}1-\Upsilon^{2}u_{R }^{2}\big{)}^{5/2}-\big{(}1-\Upsilon^{2}u_{S}^{2}\big{)}^{5/2}\Big{]}\] \[+\bigg{(}\frac{5a^{2}M^{5}}{1568\Upsilon^{7}}-\frac{11M^{7}}{3584 \Upsilon^{7}}\bigg{)}\Big{[}\big{(}1-\Upsilon^{2}u_{R}^{2}\big{)}^{7/2}-\big{(} 1-\Upsilon^{2}u_{S}^{2}\big{)}^{7/2}\Big{]}+\bigg{(}-\frac{M^{2}}{4\Upsilon^{ 2}}+\frac{3a^{2}M^{2}}{\Upsilon^{4}}\] \[+\frac{37M^{4}}{16\Upsilon^{4}}-\frac{3aM^{4}}{\Upsilon^{5}}+ \frac{1925a^{2}M^{4}}{32\Upsilon^{6}}+\frac{10293M^{6}}{512\Upsilon^{6}}- \frac{183aM^{6}}{4\Upsilon^{7}}\bigg{)}\bigg{(}\Upsilon u_{R}\sqrt{1-\Upsilon^{2 }u_{R}^{2}}\] \[+\Upsilon u_{S}\sqrt{1-\Upsilon^{2}u_{S}^{2}}\bigg{)}+\bigg{(} \frac{15M^{2}}{4\Upsilon^{2}}-\frac{4aM^{2}}{\Upsilon^{3}}+\frac{9a^{2}M^{2}} {4\Upsilon^{4}}+\frac{543M^{4}}{64\Upsilon^{4}}-\frac{15aM^{4}}{\Upsilon^{5}} +\frac{3317a^{2}M^{4}}{48\Upsilon^{6}}\] \[\frac{9591M^{6}}{256\Upsilon^{6}}-\frac{1521aM^{6}}{16\Upsilon^{ 7}}\bigg{)}\big{[}\pi-\arcsin(\Upsilon u_{R})-\arcsin(\Upsilon u_{S})\big{]}+ \mathcal{O}\Big{(}\frac{1}{\Upsilon^{8}},\frac{a^{2}}{\Upsilon^{8}},a^{3} \Big{)}. \tag{25}\]
In the above expression, we use \(u_{R}\) as the reciprocal of the distance of the receiver from the black hole and \(u_{S}\) as that of the source from the black hole. We also use \(\cos\phi_{R}=-\sqrt{1-\Upsilon^{2}u_{R}^{2}}\) and \(\cos\phi_{S}=\sqrt{1-\Upsilon^{2}u_{S}^{2}}\)[18]. In the far distance limit, \(u_{R}\to 0\) and \(u_{S}\to 0\), this Eq. (25) takes the form:
\[-\iint_{\widetilde{R}\square\widetilde{S}}\mathcal{K}\,dS \approx\frac{4M}{\Upsilon}+\frac{15\pi M^{2}}{4\Upsilon^{2}}-\frac{ 4\pi aM^{2}}{\Upsilon^{3}}+\frac{32M^{3}}{3\Upsilon^{3}}+\frac{9\pi a^{2}M^{2}} {4\Upsilon^{4}}\] \[-\frac{12aM^{3}}{\Upsilon^{4}}+\frac{256a^{2}M^{3}}{\Upsilon^{5}}+ \mathcal{O}\Big{(}\frac{1}{\Upsilon^{6}},\frac{a^{2}}{\Upsilon^{6}},a^{3}\Big{)}. \tag{26}\]
Next, we have to study the geodesic curvature of light and then calculate its path integral. In the equatorial plane (\(\theta=\pi/2\)), the geodesic curvature in the manifold \({}^{(3)}\mathcal{M}\) can be expressed as
\[K_{g}=-\frac{1}{\sqrt{\zeta\zeta^{\theta\theta}}}\,\beta_{\phi,r}, \tag{27}\]
which for the slowly rotating black hole metric (7) yields,
\[K_{g}=-\frac{2aM}{r^{3}}-\frac{2aM^{2}}{r^{4}}-\frac{3aM^{3}}{r^{5}}+\mathcal{ O}\Big{(}\frac{a}{r^{6}},a^{3}\Big{)}. \tag{28}\]
To derive the path integral of the geodesic curvature, let us consider that the coordinate system is centered at the lens position. For such a case, the light curve can be approximated by \(r=\Upsilon/\sin\vartheta\) and \(l=\Upsilon\tan\vartheta\) [18]. Using these two relations in Eq. (28), the path integral of the geodesic curvature can be written as
\[\int_{S}^{R}K_{g}dl=\int_{\phi_{S}}^{\phi_{R}}\bigg{[}-\frac{2aM}{\Upsilon^{2 }}\sin\!\vartheta-\frac{2aM^{2}}{\Upsilon^{3}}\sin^{2}\!\vartheta-\frac{3aM^ {3}}{\Upsilon^{4}}\sin^{3}\!\vartheta+\mathcal{O}\Big{(}\frac{a}{\Upsilon^{5 }},a^{3}\Big{)}\bigg{]}d\vartheta. \tag{29}\]
The evaluation of integration of this equation leads to its explicit form as
\[\int_{S}^{R}K_{g}dl= -\bigg{(}\frac{2aM}{\Upsilon^{2}}+\frac{9aM^{3}}{4\Upsilon^{4}} \bigg{)}\bigg{(}\sqrt{1-\Upsilon^{2}u_{R}^{2}}+\sqrt{1-\Upsilon^{2}u_{S}^{2}} \bigg{)}\] \[-\frac{aM^{2}}{\Upsilon^{3}}\bigg{(}\Upsilon u_{R}\sqrt{1- \Upsilon^{2}u_{R}^{2}}+\Upsilon u_{S}\sqrt{1-\Upsilon^{2}u_{S}^{2}}\bigg{)}\] \[-\frac{3aM^{3}}{4\Upsilon^{4}}\bigg{(}\Upsilon^{2}u_{R}^{2}\sqrt{ 1-\Upsilon^{2}u_{R}^{2}}+\Upsilon^{2}u_{S}^{2}\sqrt{1-\Upsilon^{2}u_{S}^{2}} \bigg{)}\] \[+\frac{aM^{3}}{4\Upsilon^{4}}\Big{[}\big{(}1-\Upsilon^{2}u_{R}^{2 }\big{)}^{3/2}-\big{(}1-\Upsilon^{2}u_{S}^{2}\big{)}^{3/2}\Big{]}\] \[-\frac{aM^{2}}{\Upsilon^{3}}\big{[}\pi-\arcsin(\Upsilon u_{R})- \arcsin(\Upsilon u_{S})\big{]}+\mathcal{O}\Big{(}\frac{a}{\Upsilon^{5}},a^{3 }\Big{)}, \tag{30}\]
where we use \(\cos\phi_{R}=-\sqrt{1-\Upsilon^{2}u_{R}^{2}}\) and \(\cos\phi_{S}=\sqrt{1-\Upsilon^{2}u_{S}^{2}}\)[18]. Moreover, in this derivation we consider prograde motion (\(dl>0\)), wherein the orbital motion of the photons is in the same direction as the spin of the black hole. In the far distance limit, \(u_{R}\to 0\) and \(u_{S}\to 0\), this equation becomes
\[\int_{S}^{R}K_{g}dl\approx-\frac{4aM}{\Upsilon^{2}}-\frac{\pi aM^{2}}{\Upsilon ^{3}}-\frac{9aM^{3}}{2\Upsilon^{4}}+\frac{aM^{3}}{2\Upsilon^{4}}+\mathcal{O} \Big{(}\frac{a}{\Upsilon^{5}},a^{3}\Big{)}. \tag{31}\]
Hence, we arrive at the expression for the deflection angle of light by a slowly rotating black hole in the axionic CS gravity theory by combining Eqs. (26) and (31) in the asymptotically far distance limit, \(u_{R}\to 0\) and \(u_{S}\to 0\), which is obtained as
\[\hat{\Theta} \approx\frac{4M}{\Upsilon}-\frac{4aM}{\Upsilon^{2}}+\frac{15\pi M ^{2}}{4\Upsilon^{2}}-\frac{5\pi aM^{2}}{\Upsilon^{3}}+\frac{32M^{3}}{3 \Upsilon^{3}}\] \[\quad+\frac{9a^{2}M^{2}\pi}{4\Upsilon^{4}}-\frac{41aM^{3}}{2 \Upsilon^{4}}+\frac{256a^{2}M^{3}}{5\Upsilon^{5}}. \tag{32}\]
It is to be noted that in the limiting case of \(a=0\) and \(\eta=0\), we arrive at the deflection angle for a Schwarzschild black hole as given by [7]
\[\hat{\Theta}\approx\frac{4M}{\Upsilon_{s}}+\frac{15\pi M^{2}}{4\Upsilon_{s}^{ 2}}+\frac{32M^{3}}{3\Upsilon_{s}^{3}}, \tag{33}\]
where \(\Upsilon_{s}=r^{2}\dot{\phi}/A(r)\dot{t}\) is the impact parameter for a Schwarzschild black hole.
Here, we explore the deflection angle for three different black holes, viz. SgrA\({}^{*}\), M87\({}^{*}\) and Centaurus A (Cen A), with the respective masses of \(4\times 10^{6}M_{\odot}\), \(6.5\times 10^{9}M_{\odot}\) and \(5.5\times 10^{7}M_{\odot}\), and study the effect of the axion coupling parameter on the deflection angle. The axion coupling parameter depends on the string scale \(M_{s}\), which is related to the Planck mass
as \(M_{Pl}\gtrsim M_{s}\gtrsim 10^{-3}M_{Pl}\)[133]. In Fig. 2, we depict the variation of the deflection angle \(\hat{\Theta}\) given by Eq. (32) as a function of the impact parameter for the three black holes and compare our results with the deflection angle (33) for the corresponding Schwarzschild black holes. For each black hole, we see a similar pattern in the behaviour of the deflection angle. For different values of the spin parameter \(a\), the deflection angle increases abruptly up to a certain point at small impact parameter values and then decreases with increasing impact parameter in a manner similar to the deflection angle for the Schwarzschild case. For all three black holes, as the value of the spin parameter \(a\) grows, the peak of the deflection angle decreases. However, for all spin parameter values, the deflection angle eventually overlaps with that for the Schwarzschild case. Our results suggest that the increase in the deflection angle at small impact parameter values can be attributed to the presence of the axion hair of the black hole in the CS modified gravity.
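As a quick numerical illustration, the following minimal Python sketch (not part of the original analysis) evaluates the deflection angle of Eq. (32) against the Schwarzschild result (33). Taking Sgr A\({}^{*}\) as the lens follows the text; interpreting the spin values \(a=0.2,0.4,0.6\) as fractions of \(M\) and the chosen range of impact parameters are assumptions made purely for illustration.

```python
# Minimal sketch: weak-field deflection angle of Eq. (32) vs. the
# Schwarzschild expression (33).  Geometrized units (G = c = 1): M and a
# are converted to metres, the deflection angle is in radians.
import numpy as np

G, c, M_sun = 6.674e-11, 2.998e8, 1.989e30

def deflection_cs(M, a, b):
    """Deflection angle of Eq. (32); M, a, b in metres."""
    return (4*M/b - 4*a*M/b**2 + 15*np.pi*M**2/(4*b**2) - 5*np.pi*a*M**2/b**3
            + 32*M**3/(3*b**3) + 9*np.pi*a**2*M**2/(4*b**4)
            - 41*a*M**3/(2*b**4) + 256*a**2*M**3/(5*b**5))

def deflection_schw(M, b):
    """Schwarzschild deflection angle of Eq. (33)."""
    return 4*M/b + 15*np.pi*M**2/(4*b**2) + 32*M**3/(3*b**3)

M = G * 4e6 * M_sun / c**2            # Sgr A* mass in geometrized units (metres)
b = np.logspace(1.2, 4.0, 200) * M    # impact parameters from ~16 M to 10^4 M
for a_hat in (0.2, 0.4, 0.6):         # spin values of Fig. 2, assumed in units of M
    theta = deflection_cs(M, a_hat * M, b)
    print(f"a = {a_hat} M: deflection ranges {theta.min():.2e} - {theta.max():.2e} rad")
theta_s = deflection_schw(M, b)
print(f"Schwarzschild: deflection ranges {theta_s.min():.2e} - {theta_s.max():.2e} rad")
```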
## IV Einstein ring
The Einstein ring is an interesting manifestation of the phenomenon of gravitational lensing (GL). The shape and behaviour of an Einstein ring depend on the position of the lens with respect to the source as well as the DM distribution around the lens. A complete Einstein ring is formed when the source is perfectly positioned behind the lens. Here, we intend to study the effect of the axion coupling parameter \(\eta\) on the Einstein ring. To begin with, let us define \(D_{RS}\) as the distance between the source and the receiver, \(D_{LS}\) as the distance between the lensing object and the source, and \(D_{RL}\) as the distance between the receiver and the lensing object. With these notations the lens equation can be written as [134]
\[D_{RS}\tan\beta=\frac{D_{RL}\sin\theta-D_{LS}\sin(\hat{\Theta}-\theta)}{\cos( \hat{\Theta}-\theta)}, \tag{34}\]
where \(\beta\) denotes the angular position of the source, and \(\theta\) denotes the angular position of the lensed image of the source as detected by an observer. Again, in weak GL, if the source and observer are at infinite distances from each other, the above lens equation reduces to [135],
\[\beta=\theta-\frac{D_{LS}}{D_{RS}}\hat{\Theta}. \tag{35}\]
The angular radius of the Einstein ring is derived by taking \(\beta=0\)[135]. Hence, applying this condition to Eq. (35), we obtain the Einstein ring radius as
\[\theta_{E}= \frac{D_{LS}}{D_{RS}}\bigg{[}\frac{4M}{\Upsilon}-\frac{4aM}{ \Upsilon^{2}}+\frac{15\pi M^{2}}{4\Upsilon^{2}}-\frac{5\pi aM^{2}}{\Upsilon^{ 3}}\] \[+\frac{32M^{3}}{3\Upsilon^{3}}+\frac{9\pi a^{2}M^{2}}{4\Upsilon^{ 4}}-\frac{41aM^{3}}{2\Upsilon^{4}}+\frac{256a^{2}M^{3}}{5\Upsilon^{5}}\bigg{]}. \tag{36}\]
Furthermore, in weak GL, since the Einstein ring is taken to be small, the impact parameter satisfies the relation,
Figure 2: Deflection angle as a function of the impact parameter for the three black holes, SgrA\({}^{*}\), M87\({}^{*}\) and CenA with \(a=0.2,0.4,0.6\).
\(\Upsilon=D_{RL}\sin\theta_{E}\approx D_{RL}\theta_{E}\). Hence, we arrive at the final expression of the angular radius of the Einstein ring as
\[\theta_{E}= \bigg{[}\frac{D_{LS}}{D_{RS}D_{RL}}\bigg{(}4M-\frac{4aM}{\Upsilon} +\frac{15\pi M^{2}}{4\Upsilon}-\frac{5\pi aM^{2}}{\Upsilon^{2}}\] \[+\frac{32M^{3}}{3\Upsilon^{2}}+\frac{9\pi a^{2}M^{2}}{4\Upsilon^ {3}}-\frac{41aM^{3}}{2\Upsilon^{3}}+\frac{256a^{2}M^{3}}{5\Upsilon^{4}}\bigg{)} \bigg{]}^{1/2} \tag{37}\]
As in the deflection angle case, we study the effect of the CS gravity on the Einstein ring by considering three supermassive black holes, SgrA\({}^{*}\), M87\({}^{*}\) and Cen A, as the lens. From Eq. (37) it is clear that the axionic hair of the black hole has some impact on the angular radius of the Einstein ring. In Fig. 3, the angular radius is depicted as a function of the impact parameter for three different values of the spin parameter, \(a=0.2,0.4,0.6\), for each black hole. The angular radius increases rapidly up to a certain value of the impact parameter and then becomes almost constant for higher impact parameter values. This behaviour of the angular radius is similar for all three black holes under consideration. Also, as the spin of the black hole increases, the value of the angular radius decreases. As an example, the Einstein ring radius of the M87\({}^{*}\) black hole is depicted for the Schwarzschild case in Fig. 4. The Einstein ring radius decreases with the impact parameter for a Schwarzschild black hole, while for a black hole spacetime in CS gravity it increases, thus making it more feasible to observe in weak lensing studies.
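A direct transcription of Eq. (37) can also be evaluated numerically. In the sketch below, \(D_{RL}=8.3\) kpc and \(D_{LS}=8.3\) kpc for Sgr A\({}^{*}\) follow values quoted in the text, while the collinear assumption \(D_{RS}=D_{RL}+D_{LS}\), the spin values in units of \(M\) and the impact-parameter range are illustrative assumptions.

```python
# Minimal sketch: Einstein-ring angular radius of Eq. (37) as a function of
# the impact parameter (SI units; result in radians).
import numpy as np

G, c, M_sun, kpc = 6.674e-11, 2.998e8, 1.989e30, 3.086e19

def einstein_ring(M, a, b, D_RL, D_LS):
    """Angular radius theta_E of Eq. (37); all lengths in metres."""
    D_RS = D_RL + D_LS            # assumed collinear source-lens-receiver geometry
    bracket = (4*M - 4*a*M/b + 15*np.pi*M**2/(4*b) - 5*np.pi*a*M**2/b**2
               + 32*M**3/(3*b**2) + 9*np.pi*a**2*M**2/(4*b**3)
               - 41*a*M**3/(2*b**3) + 256*a**2*M**3/(5*b**4))
    return np.sqrt(D_LS / (D_RS * D_RL) * bracket)

M = G * 4e6 * M_sun / c**2        # Sgr A* in geometrized units (metres)
D_RL = D_LS = 8.3 * kpc
b = np.logspace(1.5, 5.0, 300) * M
for a_hat in (0.2, 0.4, 0.6):
    th = einstein_ring(M, a_hat * M, b, D_RL, D_LS)
    print(f"a = {a_hat} M: theta_E between {th.min():.3e} and {th.max():.3e} rad")
```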
Moreover, in Fig. 5 we study the behaviour of the angular radius of the Einstein ring as a function of the impact parameter for three different distances of the source to the lens. In the first panel, we have taken the SgrA\({}^{*}\) black hole, which is located at a distance of \(8.3\) kpc from the Earth. We consider the source to be situated behind the lens at a distance of \(6.5,8.3\) and \(10.5\) kpc
Figure 4: Angular radius of the Einstein ring of M 87\({}^{*}\) black hole as a function of impact parameter for the Schwarzschild case.
Figure 3: Angular radius of the Einstein ring of three black holes, SgrA\({}^{*}\), M87\({}^{*}\) and Cen A as a function of impact parameter for three different values of spin parameter, \(a=0.2,0.4,0.6\).
from the black hole. It can be seen from the figure that as the source moves further away, the angular radius becomes larger. In the second panel, we consider the M\(87^{*}\) black hole, which is located at a distance of \(16.4\) Mpc from the Earth. Here, we see that for a very small impact parameter, the angular radius for different source distances is almost the same. This angular radius becomes higher for the source at far away distances as the impact parameter increases. Again, in the third panel, we consider the Cen A black hole as a lens, which is situated at a distance of \(3.8\) Mpc from the Earth. Here also, it can be seen that as the source moves far away from the lens, the angular radius becomes higher.
Further, we study the behaviour of the angular radius as a function of the spin parameter of the black hole in Fig. 6. For each black hole, we take three different values of the impact parameter \(\Upsilon=1,3,5\) in units of kpc. It can be seen that with respect to the spin parameter, the angular radius behaves in a similar manner for all three black holes. As the spin grows, the angular radius of the Einstein ring decreases. Also, as the value of the impact parameter increases, the value of the angular radius becomes higher in agreement with the results of Figs. 3 and 5.
## V Time delay
If light can take two different paths from a source to an observer, it also takes two different times to reach the observer; this difference in arrival times is called the time delay. In this section, we study the effect of the axion coupling parameter on the time delay of light as it travels from the source to an observer through the neighbourhood of a lensing black hole. For this purpose, we rewrite the line element (7) in the form:
\[ds^{2}=-\mathcal{A}(r)dt^{2}+B(r)dr^{2}+C(r)(d\theta^{2}+\sin^{2}\theta d \phi^{2}), \tag{38}\]
Figure 5: Angular radius of the Einstein ring of three black holes, SgrA\({}^{*}\), M\(87^{*}\) and Cen A as a function of the impact parameter for three different values of \(D_{LS}\) for each black hole.
Figure 6: Angular radius of the Einstein ring of three black holes, SgrA\({}^{*}\), M\(87^{*}\) and Cen A as a function of the spin parameter for three different values of impact parameter, \(\Upsilon=1,3,5\) in units of kpc.
where
\[\mathcal{A}(r) =\left(1-\frac{2M}{r}\right)+2r^{2}a\sin^{2}\theta W\frac{d\phi}{dt}, \tag{39}\] \[B(r) =\left(1-\frac{2M}{r}\right)^{-1}\!\!\!,\ \ \ C(r)=r^{2}. \tag{40}\]
From the expression of impact parameter \(\Upsilon\), given by Eq. (19), we get
\[\frac{d\phi}{dt}=\frac{\left(1-\frac{2M}{r}\right)\Upsilon+r^{2}aW}{r^{2}-r^{ 2}aW\Upsilon}. \tag{41}\]
Following Refs. [136; 137; 76], the time delay for the slowly rotating black hole in Chern-Simons gravity is obtained as
\[\Delta T =2\left(\sqrt{D_{RL}^{2}-r_{ps}^{2}}+\sqrt{D_{LS}^{2}-r_{ps}^{2}} \right)+4M\log\frac{\left(D_{RL}+\sqrt{D_{RL}^{2}-r_{ps}^{2}}\right)\left(D_{LS }+\sqrt{D_{LS}^{2}-r_{ps}^{2}}\right)}{r_{ps}^{2}}\] \[\quad-\frac{4a\Upsilon M}{r_{ps}^{2}}\left\{\left(3+2\frac{r_{ps} }{D_{RL}}\right)\sqrt{\frac{D_{RL}-r_{ps}}{D_{RL}+r_{ps}}}+\left(3+2\frac{r_{ ps}}{D_{LS}}\right)\sqrt{\frac{D_{LS}-r_{ps}}{D_{LS}+r_{ps}}}\right\}. \tag{42}\]
It is clear from Eq. (42) that the axionic hair of the black hole has some effect on the time delay. Fig. 7 shows the behaviour of the time delay of light with respect to the spin parameter \(a\) and the source distance \(D_{LS}\) for the Sgr A\({}^{*}\) black hole. It can be seen in the left panel that the time delay decreases as the spin of the black hole grows. On the other hand, the right panel shows that the time delay increases as the source is placed further away from the lens. A decrease of the time delay with the black hole spin has also been found in GR [138]. The estimated time delays for the three supermassive black holes are displayed in Table 1. Being the most massive and the farthest among the considered black holes, M87\({}^{*}\) produces the maximum time delay.
\begin{table}
\begin{tabular}{c c c c c} \hline \hline Galaxy & Distance & Mass & \(\Delta T\) & \(\Delta T\) (Schwarzschild case) \\ & (Mpc) & (\(M_{\odot}\)) & (hours) & (hours) \\ \hline SgrA\({}^{*}\) & \(0.0083\) & \(4\times 10^{6}\) & 0.27 & 0.19 \\ M87\({}^{*}\) & \(16.4\) & \(6.5\times 10^{9}\) & 546.67 & 274.05 \\ Cen A & \(3.8\) & \(5.5\times 10^{7}\) & 126.67 & 2.53 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Estimated time delay for supermassive black holes in Chern-Simons gravity.
Figure 7: Time delay \(\Delta T\) as a function of spin parameter \(a\) in the left panel and as a function of source distance \(D_{LS}\) in the right panel for the SgrA\({}^{*}\) black hole.
Optical behaviour of the black hole: shadow
The black hole shadow, a dark area that occupies the center of the bright accretion disk, is an impressive visual characteristic of a black hole. The shadow is a consequence of the black hole's intense gravitational field, which deflects and confines light, ultimately creating a photon sphere [120; 135; 139; 140; 121]. The size and shape of the shadow of a black hole depend on its mass, rotation and proximity to Earth, presenting an exceptional opportunity to scrutinize the features of black holes. The utilization of the Event Horizon Telescope to investigate the black hole shadow has furnished compelling evidence of black hole existence and tested the accuracy of GR in the powerful gravitational domain [118; 141]. The study of the black hole shadow as an optical attribute of black holes is a swiftly advancing domain that holds the potential to enhance our comprehension of gravity and the fundamental nature of spacetime.
### Null geodesics and photon sphere
To explore the shadow of the black hole defined by the metric (7) in CS gravity, first we start with the investigation of geodesics in the theory. To this end, it is most expedient to employ the Lagrangian framework. The Lagrangian associated with the theory is expressed as
\[L=\frac{1}{2}g_{\mu\nu}\dot{x}^{\mu}\dot{x}^{\nu}, \tag{43}\]
where \(\dot{x}^{\mu}\) denotes the derivative of the coordinate \(x^{\mu}\) with respect to the affine parameter \(\tau\). Utilizing this Lagrangian, we can define the conjugate momenta and the Hamiltonian of the system as
\[p_{\mu}=g_{\mu\nu}\dot{x}^{\nu}\,,\quad H=\frac{1}{2}g^{\mu\nu}p_{\mu}p_{\nu}\,. \tag{44}\]
The Hamilton-Jacobi equation is given by
\[\frac{\partial S}{\partial\tau}=\frac{1}{2}g^{\mu\nu}\frac{\partial S}{ \partial x^{\mu}}\frac{\partial S}{\partial x^{\nu}}\,. \tag{45}\]
One may note that the Hamiltonian is independent of the variables \(t\), \(\phi\) and \(\tau\) explicitly, which gives us the privilege to write the expression for the action as [139]
\[S=-\frac{1}{2}\xi^{2}\tau-\mathcal{E}t+L_{z}\phi+\tilde{S}(r,\theta)\,, \tag{46}\]
where the terms \(L_{z}\), \(\xi^{2}\) and \(\mathcal{E}\) are constants. Now, as earlier for a slowly rotating black hole, we assume that the spin parameter \(a\) of the black hole is comparatively small and we consider that \(\tilde{S}(r,\theta)=\tilde{S}_{r}(r)+\tilde{S}_{\theta}(\theta)\). Implementing these assumptions in the Hamilton-Jacobi equation, we get
\[\left(\frac{\partial S_{\theta}}{\partial\theta}\right)^{2}+\frac{L_{z}^{2}}{ \sin^{2}\theta}=-r(r-2M)\left(\frac{\partial S_{r}}{\partial r}\right)^{2}- \frac{\mathcal{E}^{2}r^{3}}{2M-r}-\frac{2ar^{3}W(r)\mathcal{E}L_{z}}{2M-r}-r^ {2}\xi^{2}\,. \tag{47}\]
In deriving this expression, we neglect higher-order terms in \(a\), since their contributions are negligible for small \(a\).
As the left-hand side of the above equation solely relies on \(\theta\) and the right-hand side only on \(r\), it can be inferred that both sides of the equation are equivalent to a constant value, say \(j^{2}\). Using this condition, all the derivatives of \(S\) can be expressed as
\[\frac{\partial S}{\partial r} = \pm\sqrt{\frac{2a\mathcal{E}r^{3}L_{z}W(r)+r^{2}\big{(}\mathcal{E }^{2}r+\xi^{2}(2M-r)\big{)}+j^{2}(2M-r)}{r(r-2M)^{2}}}\,, \tag{48}\] \[\frac{\partial S}{\partial\theta} = \pm\sqrt{j^{2}-\frac{L_{z}^{2}}{\sin^{2}\theta}}\,,\] (49) \[\frac{\partial S}{\partial\phi} = L_{z}\,,\] (50) \[\frac{\partial S}{\partial t} = -\mathcal{E}\,. \tag{51}\]
These derivatives of \(S\) can be further used to obtain the orbit equations associated with the black hole spacetime. We use \(\partial_{\mu}S\to p_{\mu}\), where four-momenta \(p_{\mu}\) has the following explicit expressions for the black hole considered in this study:
\[p_{r} = \frac{\dot{r}}{1-2M/r}\,, \tag{52}\] \[p_{\theta} = r^{2}\dot{\theta}\,,\] (53) \[p_{\phi} = ar^{2}W(r)\sin^{2}\theta\dot{t}+r^{2}\sin^{2}\theta\dot{\phi}\,,\] (54) \[p_{t} = -(1-2M/r)\dot{t}+ar^{2}W(r)\sin^{2}\theta\dot{\phi}. \tag{55}\]
Using these equations, we can further obtain a set of first-order differential equations in terms of the conserved quantities \(\mathcal{E},L_{z},\xi\) and \(j\) as
\[\dot{r}^{2} = -(1-2M/r)\left(\xi^{2}+\frac{j^{2}}{r^{2}}\right)+\mathcal{E}^{2 }+2a\mathcal{E}L_{z}W(r)\,, \tag{56}\] \[r^{2}\dot{\theta} = \pm\sqrt{j^{2}-\frac{L_{z}^{2}}{\sin^{2}\theta}}\,,\] (57) \[r^{2}\dot{\phi} = \frac{L_{z}}{\sin^{2}\theta}+\frac{a\mathcal{E}r^{3}W(r)}{2M-r}\,,\] (58) \[r^{2}\dot{t} = -\frac{r^{3}\left(aL_{z}W(r)+\mathcal{E}\right)}{2M-r}\,, \tag{59}\]
These equations are referred to as the equations of geodesics. It is evident that in the asymptotic limit, \(j^{2}\) corresponds to the overall angular momentum of the orbit, while \(L_{z}\) denotes the angular momentum component along the \(z\) axis.
The null geodesics refer to paths followed by massless particles (such as photons) in curved spacetime. To describe these paths, we typically use a set of coordinates and an affine parameter \(\tau\) that represents the _affine distance_ travelled along the path. The null geodesics satisfy a specific condition, \(p_{\mu}p^{\mu}\equiv\xi^{2}=0\). One can rescale the affine parameter \(\tau\) in such a way that the energy \(\mathcal{E}\) of the photon is equal to one. Using this information, we can write an equation for the radial coordinate of the photon's path, which takes the form [110, 120, 121, 97, 98, 99, 100]:
\[\dot{r}^{2}+V_{ph}(r)=0, \tag{60}\]
where \(V_{ph}(r)\) is a function of the radial coordinate \(r\). Specifically, for our present case this function takes the form:
\[V_{ph}(r)=-2aL_{z}W(r)+\frac{j^{2}(r-2M)}{r^{3}}-1. \tag{61}\]
The photon sphere is a region of space that is defined by constant-\(r\) photon orbits, i.e. photons following these paths have a fixed value of \(r\). To find the radius of the photon sphere, we need to find the value of \(r\) that satisfies two conditions: \(V_{ph}(r_{ps})=0\) and \(V^{\prime}_{ph}(r_{ps})=0\), where the prime denotes a derivative with respect to \(r\)[120, 121]. These two conditions give us the radius of the photon sphere \(r_{ps}\) as well as the value of \(j_{ps}^{2}\), which is a constant related to the angular momentum of the photon. In this context, it is to be mentioned that up to the linear order of \(a\), one can have the following relations:
\[r_{ps}=r_{ps}^{(0)}+ar_{ps}^{(1)}\,,\quad j_{ps}^{2}=(j_{ps}^{(0)})^{2}+a(j_{ ps}^{(1)})^{2}, \tag{62}\]
where \(r_{ps}^{(0)}\) is defined by the equation [139],
\[r_{ps}^{(0)}f^{\prime}(r_{ps}^{(0)})-2f(r_{ps}^{(0)})=0\,, \tag{63}\]
when \(a=0\). Using these relations, one can find the following explicit expressions of \(r_{ps}\) and \(j_{ps}^{2}\) for the black hole considered in this study:
\[r_{ps} = 3M+\frac{2aL_{z}\left(81M^{4}-31\eta\kappa^{2}\right)}{729M^{5}}\,, \tag{64}\] \[j_{ps}^{2} = 27M^{2}+aL_{z}\left(4-\frac{131\eta\kappa^{2}}{189M^{4}}\right). \tag{65}\]
For a static black hole, these two equations give the usual expressions for the photon sphere radius and the angular velocity of photons, respectively, as \(r_{ps}^{(0)}=3M\) and \(\omega_{ps}^{(0)}=1/(3\sqrt{3}M)\).
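This static limit is easy to check symbolically. The short sketch below (an addition for illustration, not part of the original derivation) applies the two photon-sphere conditions to the \(a=0\) form of the potential in Eq. (61) and recovers \(r_{ps}^{(0)}=3M\) and \(j_{ps}^{2}=27M^{2}\).

```python
# Symbolic check of the a = 0 limit of Eqs. (60)-(65) with sympy.
import sympy as sp

r, M, j2 = sp.symbols('r M j2', positive=True)
V = j2 * (r - 2*M) / r**3 - 1                       # Eq. (61) with a = 0
r_ps = sp.solve(sp.Eq(sp.diff(V, r), 0), r)         # photon sphere radius
j_ps2 = sp.solve(sp.Eq(V.subs(r, r_ps[0]), 0), j2)  # critical angular momentum
print(r_ps, j_ps2)                                  # expected: [3*M] [27*M**2]
```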
### Black hole shadow
Here for simplicity and convenience, we suppose there is an observer located at a distance \(r_{obs}\) from a black hole and at a polar angle \(\theta_{0}\) with \(\phi_{0}=0\). To determine the path of a photon moving towards the observer in the direction \(dr/dt>0\), we can use the angular momentum parameters \(j^{2}\) and \(L_{z}\). Our objective is to find the angle at which the photon hits the observer's plane perpendicular to the \(r\)-direction. To do so, we need the tangential vector at that specific point in space, which can be expressed as [139; 140]
\[\mathbf{U}=-\,\dot{r}\,\mathbf{e}_{r}+r_{obs}\dot{\theta}\,\mathbf{e}_{\theta }+r_{obs}\sin\theta_{0}\dot{\phi}\,\mathbf{e}_{\phi}\,, \tag{66}\]
where we introduce an orthonormal system for the observer. This system consists of three unit vectors:
\[\mathbf{e}_{r}=-\,\partial_{r}\,,\;\;\mathbf{e}_{\theta}=\frac{\partial_{ \theta}}{r_{obs}}\,,\;\;\mathbf{e}_{\phi}=\frac{\partial_{\phi}}{r_{obs}\sin \theta_{0}}\,.\]
Moreover, we assume that the observer is looking directly towards the black hole. Let's define the angle of incidence of the photon on the plane \(r=r_{obs}\) as \(\pi/2-\delta\) and the angle that the projected vector forms with the direction \(\mathbf{e}_{\phi}\) as \(\alpha\). In other words, we can express the tangent vector as
\[\mathbf{U}=-\,\dot{r}\,\mathbf{e}_{r}+\sin\delta\left(\mathbf{e}_{\theta}\sin \alpha+\mathbf{e}_{\phi}\cos\alpha\right).\]
To obtain the values of \(\sin\delta\) and \(\cos\alpha\), we introduce the following parametrization [139; 140]:
\[\sin\delta=r_{obs}\sqrt{\dot{\theta}^{2}+\sin^{2}\theta_{0}\dot{\phi}^{2}}\; \;\;\text{and}\;\;\;\cos\alpha=\frac{\sin\theta_{0}\dot{\phi}}{\sqrt{\dot{ \theta}^{2}+\sin^{2}\theta_{0}\dot{\phi}^{2}}}\,.\]
Further, we assume that \(r_{obs}\sqrt{\dot{\theta}^{2}+\sin^{2}\theta_{0}\dot{\phi}^{2}}\ll 1\) in these expressions because we are considering the limit where \(r_{obs}\) approaches infinity. By using the geodesic equations, we can relate these angles to the angular momentum, or alternatively, can express the angular momentum of the geodesic in terms of these angles, given by [139; 140]
\[j=r_{obs}\sin\delta\,,\quad L_{z}=r_{obs}\sin\theta_{0}\cos\alpha\sin\delta. \tag{67}\]
From the above results and using Eq. (65), one can obtain the relation,
\[r_{obs}^{2}\sin^{2}\delta=27M^{2}+a\sin\theta_{0}\left(4-\frac{131\eta\kappa^ {2}}{189M^{4}}\right)\cos\alpha\;r_{obs}\sin\delta\,. \tag{68}\]
This equation is responsible for determining the shape of the black hole shadow contour, denoted by \(\delta(\alpha)\). Upon performing a linear expansion in \(a\), the solution to this equation can be expressed as follows:
\[r_{obs}\sin\delta=3\sqrt{3}M+a\sin\theta_{0}\left(2-\frac{131\eta\kappa^{2}}{ 378M^{4}}\right)\cos\alpha\,. \tag{69}\]
This equation can be used to find the value of \(\delta\) as a function of \(\alpha\), which then characterizes the shape of the shadow. One may note that the curve \(\delta(\alpha)\) approximates a circle of radius \(R_{sh}=3\sqrt{3}M\) centered at \(\alpha=0\), under the assumption \(\delta\ll 1\).
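A small numerical sketch can make the deformation encoded in Eq. (69) explicit. Treating \(R(\alpha)=r_{obs}\sin\delta\) as a polar radius on the observer's sky is an interpretation assumed here for plotting purposes, and the parameter values simply mirror those quoted for Fig. 9 (\(M=1\), \(\kappa=1\), \(\theta_{0}=\pi/2\), \(a=0.7\)); the set of \(\eta\) values is illustrative.

```python
# Minimal sketch: shadow boundary implied by Eq. (69), with R(alpha)
# interpreted (an assumption of this sketch) as a polar radius on the sky.
import numpy as np

def shadow_radius(alpha, M=1.0, a=0.7, eta=0.1, kappa=1.0, theta0=np.pi/2):
    """Right-hand side of Eq. (69) as a function of the celestial angle alpha."""
    return (3*np.sqrt(3)*M
            + a*np.sin(theta0)*(2 - 131*eta*kappa**2/(378*M**4))*np.cos(alpha))

alpha = np.linspace(0.0, 2*np.pi, 400)
for eta in (0.0, 0.3, 0.6):
    R = shadow_radius(alpha, eta=eta)
    x = R * np.cos(alpha)                  # boundary points (x, y) = R (cos a, sin a)
    print(f"eta = {eta}: mean radius {R.mean():.3f} M, "
          f"horizontal shift of the contour {x.mean():.3f} M")
```

The printed shift decreases as \(\eta\) grows, in line with the statement below that the coupling \(\eta\) tends to counteract the spin-induced deformation of the shadow.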
We use Eq. (69) to obtain the stereographic projections of the black hole shadow in Figs. 8 and 9. On the left panel of Fig. 8, we show the black hole shadow for different values of black hole mass \(M\). It is clear from the figure that with an increase in the value of \(M\), the shadow radius also increases gradually in accordance with that of the standard static Schwarzschild black hole. On the right panel of Fig. 8, we can see that with an increase in the black hole spin parameter \(a\), the shadow of the black hole gets deformed as expected. Although in the figure, we consider comparatively higher values of the parameter \(a\) for the graphical representation purposes, in the case of a slowly rotating black hole, \(a\) is a small quantity and hence a minimal distortion in the shadow is expected. Thus, from the shadow behaviour, one can estimate the spin parameter. Finally, in Fig. 9, we show the impacts of the axionic coupling parameter \(\eta\) on the shadow of the black hole along with the variation in the shadow with respect to \(\theta\). One can see that \(\eta\) affects the shadow minimally, and with an increase in the value of \(\eta\), the deformation of the shadow decreases very slowly. It implies that the parameter \(\eta\) may counter the effects of the black hole spin \(a\). It is seen that \(\theta\) also has significant impacts on the appearance of the shadow as expected.
## VII Summary and outlook
One of the popular extensions of GR is the CS modified gravity. The scalar field, in this case the axion field, is a basic feature of this widely known modified theory of gravity. In this paper, we study the effect of this axion field on the weak gravitational lensing. For this purpose, we first consider an exact slowly rotating Kerr-type black hole spacetime with an axionic hair. We then obtain the weak lensing angle of light around the black hole. Next, we study the behaviour of the Einstein ring formed due to deflection of light by the Kerr-type black hole. Then we move forward to obtain the time delay of light due to the presence of the black hole in its path. Finally, we study the impact of the axion coupling parameter on the shadow cast by the black hole under consideration.
To obtain the deflection angle of light around the black hole, we implement the extension of the Gibbons-Werner method studied by Ishihara _et al._ in 2016. This method does not require asymptotic flatness of the spacetime. We study the behaviour of the deflection angle as a function of the impact parameter for a set of three black holes, viz. SgrA\({}^{*}\), M87\({}^{*}\) and Cen A. We compare the deflection angle for the Kerr-type black hole in CS gravity with that of a Schwarzschild black hole. Our results show a rapid increase in the deflection angle at small impact parameters up to a certain limit, followed by a gradual decrease. As the deflection angle decreases, it is found to mimic the behaviour of the deflection angle for the Schwarzschild black hole. At high impact parameters, the deflection angles for both cases are found to overlap. The increase in the deflection angle at low impact parameters can be attributed to the presence of axionic hair, which results from the axion coupling with the curvature term.
We obtain the Einstein ring radius and study its behaviour as a function of the impact parameter for varying spin parameter and varying source distance. We also depict the change of the angular radius of the Einstein ring with the spin parameter. We see a gradual increase in the angular radius up to a certain limit, after which it becomes constant for higher values of
Figure 8: Stereographic projection of the black hole shadow. On the left panel, we consider \(a=0.7,\theta=\pi/2\) and \(\eta=0.1\) for different values of the black hole mass \(M\) and on the right panel, \(M=1,\theta=\pi/2\) and \(\eta=0.1\) for different values of spin parameter \(a\). In both cases, we use \(\kappa=1\).
Figure 9: Stereographic projection of the black hole shadow. On the left panel, we consider \(a=0.7,\theta=\pi/2\) and \(M=1\) for different values of CS coupling parameter \(\eta\) and on the right panel, \(M=1,\eta=0.3\) and \(a=0.7\) for different values of \(\theta\). In both cases, we use \(\kappa=1\).
the impact parameter. Also, as the spin grows, the angular radius decreases. We can see that the Einstein ring radius is affected by the axionic hair of the slowly rotating black hole. The axionic hair also has some effect on the time delay of light. The time delay decreases with an increase in the spin of the black hole, and it increases as the source is placed further away from the black hole. This work opens up several directions for future study. The magnification of the Einstein ring images formed around the black hole in CS gravity can be studied. Also, this work can be extended to obtain the deflection angle of light in wormhole backgrounds in different modified theories of gravity.
In the subsequent stage of our investigation, we directed our attention towards the characteristics of the black hole shadow. We have derived the photon sphere radius and shadow expressions by utilising a perturbative scheme, presuming the spin parameter \(a\) to be sufficiently small. We have observed that the axionic coupling parameter \(\eta\) is capable of producing an influence opposing that of the black hole's spin, potentially obscuring the actual spin information in observational outcomes. The shadow of this black hole can be confronted with Event Horizon Telescope data [118; 141] to obtain observational constraints on the model parameters. We keep this as a future prospect of the study.
###### Acknowledgements.
UDG is thankful to the Inter-University Centre for Astronomy and Astrophysics (IUCAA), Pune for the Visiting Associateship of the institute.
|
2306.00345 | Fluctuation Theorems and Thermodynamic Inequalities for Nonequilibrium
Processes Stopped at Stochastic Times | We investigate thermodynamics of general nonequilibrium processes stopped at
stochastic times. We propose a systematic strategy for constructing
fluctuation-theorem-like martingales for each thermodynamic functional,
yielding a family of stopping-time fluctuation theorems. We derive
second-law-like thermodynamic inequalities for the mean thermodynamic
functional at stochastic stopping times, the bounds of which are stronger than
the thermodynamic inequalities resulting from the traditional fluctuation
theorems when the stopping time is reduced to a deterministic one. Numerical
verification is carried out for three well-known thermodynamic functionals,
namely, entropy production, free energy dissipation and dissipative work. These
universal equalities and inequalities are valid for arbitrary stopping
strategies, and thus provide a comprehensive framework with new insights into
the fundamental principles governing nonequilibrium systems. | Haoran Yang, Hao Ge | 2023-06-01T04:52:39Z | http://arxiv.org/abs/2306.00345v1 | # Fluctuation Theorems and Thermodynamic Inequalities for Nonequilibrium Processes
###### Abstract
We investigate thermodynamics of general nonequilibrium processes stopped at stochastic times. We propose a systematic strategy for constructing fluctuation-theorem-like martingales for each thermodynamic functional, yielding a family of stopping-time fluctuation theorems. We derive second-law-like thermodynamic inequalities for the mean thermodynamic functional at stochastic stopping times, the bounds of which are stronger than the thermodynamic inequalities resulting from the traditional fluctuation theorems when the stopping time is reduced to a deterministic one. Numerical verification is carried out for three well-known thermodynamic functionals, namely, entropy production, free energy dissipation and dissipative work. These universal equalities and inequalities are valid for arbitrary stopping strategies, and thus provide a comprehensive framework with new insights into the fundamental principles governing nonequilibrium systems.
Stochastic thermodynamics extends classical thermodynamics to individual trajectories of non-equilibrium processes, encompassing stationary or transient systems with or without external driving forces [1; 2; 3; 4]. A first-law-like energy balance equality and various second-law-like thermodynamic inequalities can be derived from fluctuating trajectories. Fluctuation theorems emerging from stochastic thermodynamics, as equality versions of the second law, impose constraints on probability distributions of thermodynamic functionals along single stochastic trajectories [5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19].
Recently, a gambling demon, which stops the processes at random times, has been proposed for non-stationary stochastic processes without external driving force and feedback of control under an arbitrary deterministic protocol [20; 21]. The demon employs martingales, a concept that has been proposed in probability theory for more than 70 years. The authors constructed a martingale for dissipative work, and obtained a stopping-time fluctuation theorem by applying the well-known optional stopping theorem (or Doob's optional sampling theorem), which states that the average of a martingale at a stopping time is equal to the average of its initial value [22].
On the other hand, we already know that there are three faces in stochastic thermodynamics [23; 24; 10; 25], namely, (total) entropy production, housekeeping heat (non-adiabatic entropy production) and free energy dissipation (adiabatic entropy production). In a system with no external driving force, the housekeeping heat vanishes and the entropy production is equal to the free energy dissipation. However, in general non-stationary stochastic processes with an external driving force as well as an time-dependent protocol, we are curious about whether different martingales can be constructed for entropy production and free energy dissipation separately, while the martingale for housekeeping heat is straightforward to construct without any compensated term [26]. Both entropy production and free energy dissipation belong to a class of functionals along a single stochastic trajectory, i.e. general backward thermodynamic functionals, which has been rigorously defined in [27]. Housekeeping heat belongs to another class, called forward thermodynamic functionals [27].
Therefore, in this paper, we propose a systematic strategy for constructing martingales applicable to general backward thermodynamic functionals, with a focus on entropy production, free energy dissipation, and dissipative work as illustrative examples. Notably, the construction of martingales for forward thermodynamic functionals has been previously established in [27]. By leveraging our constructed martingales, we derive stopping-time fluctuation theorems that hold for general backward thermodynamic functionals, followed by second-law-like thermodynamic inequalities for arbitrary stopping times. When the stochastic stopping time reduces to a deterministic one, we exploit the additional degree of freedom present in our constructed martingales, enabling us to obtain a sharper nonnegative bound for the mean thermodynamic functional. In particular, we obtain a stronger inequality for the dissipative work than that obtained through classic Jarzynski equality.
_Stopping-time fluctuation theorems and thermodynamic inequalities_ First, we will give an even more general definition of the backward thermodynamic functional than [27]. We consider a stochastic thermodynamic system with inverse temperature \(\beta=\frac{1}{k_{B}\mathbf{T}}\). We denote the state (discrete or continuous) of the system at time \(s\geqslant 0\) by \(X(s)\), whose stochastic dynamics is governed by a prescribed deterministic protocol \(\Lambda=\{\lambda(s)\colon s\geqslant 0\}\). For a given duration \([0,t]\), the trajectories are traced by the co
ordinates in phase space, denoted by \(x_{[0,t]}\equiv\{x(s)\}_{0\leqslant s\leqslant t}\). We further denote the probability of observing a given trajectory \(x_{[0,t]}\) by \(\mathcal{P}^{X}(x_{[0,t]})\), and the probability density of \(X(s)\) by \(\varrho^{X}(x,s)\) at any given time \(s\). The general backward thermodynamic functional in the duration \([0,t]\) is defined by \(\{X(s)\}_{0\leqslant s\leqslant t}\) and another stochastic process \(\{Y(s)\}\) with protocol \(\tilde{\Lambda}=\{\tilde{\lambda}(s)\colon s\geqslant 0\}\) (can be either the same as or different from \(\Lambda\)). The only condition is that the processes \(\{X(s)\}_{0\leqslant s\leqslant t}\) and \(\{Y(s)\}_{0\leqslant s\leqslant t}\) are absolutely continuous with each other, i.e. the probability \(\mathcal{P}^{X}(x_{[0,t]})>0\) if and only if \(\mathcal{P}^{Y}(x_{[0,t]})>0\) for any given trajectory \(x_{[0,t]}\). We define a third process \(\{Z^{t}(s)\}_{0\leqslant s\leqslant t}\) driven by the time-reversed protocol \(\tilde{\Lambda}^{r,t}=\{\tilde{\lambda}(t-s)\colon 0\leqslant s\leqslant t\}\) of \(\{Y(s)\}\) up to time \(t\). The probability density of \(Z^{t}(s)\) is denoted by \(\varrho^{Z^{t}}(x,s)\) for any given time \(s\leqslant t\). Note that there is also an additional degree of freedom, i.e. the arbitrary choice of the initial distribution \(\varrho^{Z^{t}}(x,0)\) of \(\{Z^{t}(s)\}_{0\leqslant s\leqslant t}\) for any \(t\), because for different \(t\), only the protocols inherited from \(\{Y(s)\}\) are closely related to each other, not the initial distributions.
The probability of observing a given trajectory \(x_{[0,t]}\) in \(\{Z^{t}(s)\}_{0\leqslant s\leqslant t}\) is denoted by \(\mathcal{P}^{Z^{t}}(x_{[0,t]})\). We define a general backward thermodynamic functional by
\[F_{t}(x_{[0,t]})\equiv\frac{1}{\beta}\ln\frac{\mathcal{P}^{X}(x_{[0,t]})}{ \mathcal{P}^{Z^{t}}(\tilde{x}_{[0,t]})},\]
where \(\tilde{x}_{[0,t]}\equiv\{x(t-s)\}_{0\leqslant s\leqslant t}\) denotes the time reversal of \(x_{[0,t]}\) in the duration \([0,t]\).
It is straightforward to derive the fluctuation theorem for \(F_{t}\):
\[\left\langle e^{-\beta F_{t}}\right\rangle=1.\]
However, \(F_{t}\) is generally not a martingale [27].
For any given time interval \([0,T]\), we would like to add a compensated term \(\delta_{t}\), a function of \(X(t)\) and \(t\), to \(F_{t}\), so that \(e^{-\beta(F_{t}+\delta_{t})}\) is a martingale, i.e.
\[\left\langle e^{-\beta(F_{T}+\delta_{T})}\middle|X_{[0,t]}\right\rangle=e^{- \beta(F_{t}+\delta_{t})},\]
for any \(t\in[0,T]\).
Then we propose
\[\delta_{t}(X(t))\equiv\frac{1}{\beta}\ln\frac{\varrho^{Z^{t}}(X(t),0)}{\tilde {\varrho}^{Z^{T}}(X(t),T-t)}, \tag{1}\]
in which \(\tilde{\varrho}^{Z^{T}}(X(t),T-t)\) is the density of the process \(Z^{T}\) at time \(T-t\), started from an arbitrary initial distribution \(\tilde{\varrho}^{Z^{T}}(\cdot,0)\), which is not necessarily the same as \(\varrho^{Z^{T}}(\cdot,0)\) and contributes another extra degree of freedom.
We apply the optional stopping theorem in martingale theory to derive the general stopping-time fluctuation theorem
\[\left\langle e^{-\beta(F_{\tau}+\delta_{\tau})}\right\rangle=1, \tag{2}\]
where \(\tau\) is a stopping time, defined by any stopping mechanism to decide whether to stop a process based on the current position and past events.
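The only probabilistic ingredient needed here is the optional stopping theorem itself. As a toy illustration (unrelated to any particular thermodynamic model), the sketch below builds the standard exponential martingale of a fair random walk, which starts at \(1\) just like \(e^{-\beta(F_{t}+\delta_{t})}\), and checks that its mean at a bounded stopping time remains \(1\); the walk, the threshold and the horizon are arbitrary choices.

```python
# Toy check of Doob's optional stopping theorem for a mean-one positive
# martingale M_n = exp(theta*S_n) / cosh(theta)**n of a fair random walk S_n,
# stopped when S_n first leaves [-5, 5] or at n = 200, whichever comes first.
import numpy as np

rng = np.random.default_rng(0)
theta, level, horizon, n_samples = 0.5, 5, 200, 50_000
vals = np.empty(n_samples)
for i in range(n_samples):
    s, n = 0, 0
    while n < horizon and abs(s) < level:
        s += 1 if rng.random() < 0.5 else -1
        n += 1
    vals[i] = np.exp(theta * s) / np.cosh(theta) ** n
print(f"<M_tau> = {vals.mean():.3f}  (optional stopping theorem: exactly 1)")
```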
By Jensen's inequality,
\[\left\langle F_{\tau}\right\rangle\geqslant-\left\langle\delta_{\tau}\right\rangle. \tag{3}\]
The left-hand-side is independent of \(\tilde{\varrho}^{Z^{T}}\). Hence we can improve the above inequality into
\[\left\langle F_{\tau}\right\rangle\geqslant\sup_{\tilde{\varrho}^{Z^{T}}}-\left\langle\delta_{\tau}\right\rangle. \tag{4}\]
A special situation is when \(\tau=t\) with probability 1, followed by
\[\left\langle e^{-\beta(F_{t}+\delta_{t})}\right\rangle=1,\]
and
\[\left\langle F_{t}\right\rangle\geqslant\sup_{\tilde{\varrho}^{Z^{T}}}- \left\langle\delta_{t}\right\rangle=\frac{1}{\beta}\left\langle\ln\frac{ \varrho^{X}(X(t),t)}{\varrho^{Z^{t}}(X(t),0)}\right\rangle\geqslant 0, \tag{5}\]
in which \(\left\langle\ln\frac{\varrho^{X}(X(t),t)}{\varrho^{Z^{t}}(X(t),0)}\right\rangle\) is the relative entropy of \(\varrho^{X}(\cdot,t)\) with respect to \(\varrho^{Z^{t}}(\cdot,0)\). The inequality (5) is stronger than the traditional inequality \(\left\langle F_{t}\right\rangle\geqslant 0\) derived from the well-known fluctuation theorem \(\left\langle e^{-\beta F_{t}}\right\rangle=1\), as long as \(\varrho^{X}(\cdot,t)\) is different from \(\varrho^{Z^{t}}(\cdot,0)\).
As a corollary, we can derive a bound for the infimum of \(F_{t}+\delta_{t}\) following the strategy in [28], which holds for both equilibrium processes and general nonequilibrium processes. According to Doob's maximal inequality, we find the following bound for the cumulative distribution of the supremum of \(e^{-\beta(F_{t}+\delta_{t})}\),
\[\Pr\left(\sup_{0\leqslant t\leqslant T}e^{-\beta(F_{t}+\delta_{t})}\geqslant \lambda\right)\leqslant\frac{1}{\lambda}\left\langle e^{-\beta(F_{t}+\delta_{ t})}\right\rangle=\frac{1}{\lambda},\]
for any \(\lambda\geqslant 0\). It is equivalent to a lower bound on the cumulative distribution of the infimum of \(\beta(F_{t}+\delta_{t})\) in the given duration \([0,T]\), i.e.
\[\Pr\left(\inf_{0\leqslant t\leqslant T}\{\beta(F_{t}+\delta_{t})\}\geqslant-s \right)\geqslant 1-e^{-s},\]
for \(s\geqslant 0\). It implies the random variable \(-\inf_{0\leqslant t\leqslant T}\{\beta(F_{t}+\delta_{t})\}\) dominates stochastically over an exponential random variable with the mean of 1. Thus, we find the following universal bound for the mean infimum of \(\beta(F_{t}+\delta_{t})\), i.e.
\[\left\langle\inf_{0\leqslant t\leqslant T}\{(F_{t}+\delta_{t})\}\right\rangle \geqslant-\frac{1}{\beta}=-k_{B}\mathbf{T}.\]
_Applications_ The thermodynamic functional \(F_{t}\) becomes the (total) entropy production \(S_{\text{tot}}(t)\) up to time \(t\) if the process \(\{Y(t)\}\) is driven by exactly the same protocol as \(\{X(t)\}\), and the initial distribution of \(Z^{t}\) is taken
to be the distribution of \(X(t)\)[29; 8], i.e. \(\varrho^{Z^{t}}(x,0)=\varrho^{X}(x,t)\). Then
\[\delta_{t}^{S_{\rm tot}}(X(t))\equiv\frac{1}{\beta}\ln\frac{\varrho^{X}(X(t),t)} {\tilde{\varrho}^{Z^{T}}(X(t),T-t)}, \tag{6}\]
and \(e^{-\beta(S_{\rm tot}(t)+\delta_{t}^{S_{\rm tot}})}\) is a martingale. It is followed by
\[\left\langle e^{-\beta\left(S_{\rm tot}(\tau)+\delta_{\tau}^{S_{\rm tot}} \right)}\right\rangle=1, \tag{7}\]
for any stopping time \(\tau\), and \(\left\langle S_{\rm tot}(\tau)\right\rangle\geqslant-\left\langle\delta_{\tau }^{S_{\rm tot}}\right\rangle\).
The thermodynamic functional \(F_{t}\) becomes the free energy dissipation (adiabatic entropy production) \(f_{d}(t)\) if the process \(\{Y(t)\}\) is taken to be the adjoint process of \(\{X(t)\}\), and also the initial distribution of \(Z^{t}\) is set as the distribution of \(X(t)\), i.e. \(\varrho^{Z^{t}}(x,0)=\varrho^{X}(x,t)\)[23; 24; 10; 25]. Then
\[\delta_{t}^{f_{d}}(X(t))\equiv\frac{1}{\beta}\ln\frac{\varrho^{X}(X(t),t)}{ \tilde{\varrho}^{Z^{T}}(X(t),T-t)}, \tag{8}\]
and \(e^{-\beta(f_{d}(t)+\delta_{t}^{f_{d}})}\) is a martingale. It is followed by
\[\left\langle e^{-\beta\left(f_{d}(\tau)+\delta_{\tau}^{f_{d}}\right)}\right\rangle =1, \tag{9}\]
for any stopping time \(\tau\), and \(\left\langle f_{d}(\tau)\right\rangle\geqslant-\left\langle\delta_{\tau}^{f_{ d}}\right\rangle\).
Let \(\pi^{X}(t)\) be the pseudo-stationary distribution of \(X(t)\) corresponding to the protocol \(\lambda(t)\), i.e. the stationary distribution of \(\{X(t)\}\) if the protocol is fixed at \(\lambda(t)\). The thermodynamic functional \(F_{t}\) becomes the dissipative work \(W_{d}(t)\) up to time \(t\), if the initial distribution of \(X(t)\) is \(\pi^{X}(0)\), the process \(\{Y(t)\}\) is taken to be the adjoint process of \(\{X(t)\}\), and the initial distribution of \(Z^{t}\) is taken as the pseudo-stationary distribution of \(X(t)\), i.e. \(\varrho^{Z^{t}}\left(x,0\right)=\pi^{X}(x,t)\)[20; 30]. Then
\[\delta_{t}^{W_{d}}(X(t))\equiv\frac{1}{\beta}\ln\frac{\pi^{X}(X(t),t)}{\tilde {\varrho}^{Z^{T}}(X(t),T-t)}, \tag{10}\]
and \(e^{-\beta\left(W_{d}(t)+\delta_{t}^{W_{d}}\right)}\) is a martingale. It is followed by
\[\left\langle e^{-\beta\left(W_{d}(\tau)+\delta_{\tau}^{W_{d}}\right)}\right\rangle =1, \tag{11}\]
for any stopping time \(\tau\), and \(\left\langle W_{d}(\tau)\right\rangle\geqslant-\left\langle\delta_{\tau}^{W_ {d}}\right\rangle\).
For the mean \(W_{d}\) up to any fixed time \(t\), we can obtain a stronger inequality than \(\left\langle W_{d}\right\rangle\geqslant 0\). Applying (5), we have
\[\left\langle W_{d}(t)\right\rangle\geqslant\frac{1}{\beta}\left\langle\ln \frac{\varrho^{X}(X(t),t)}{\pi^{X}(X(t),t)}\right\rangle\geqslant 0. \tag{12}\]
Actually, this inequality can be derived from the equality \(\frac{dH(t)}{dt}=-f_{d}(t)+W_{d}(t)\) with the inequality \(f_{d}(t)\geqslant 0\) from [23], in which \(H(t)\) is exactly \(\left\langle\ln\frac{\varrho^{X}(X(t),t)}{\pi^{X}(X(t),t)}\right\rangle\). In [20], the thermodynamic functional under investigation is \(W_{d}\) but the \(\delta_{t}\) they defined is the same as \(\delta_{t}^{S_{\rm tot}}\). The mathematical derivation here implies that we should use different \(\delta_{t}\) for different thermodynamic functionals.
_Numerical verifications_ Many mesoscopic biochemical processes, such as the kinetics of enzyme or motor molecules, can be modeled in terms of transition rates between discrete states. We apply our theory to a simple stochastic process with only three states. The time-dependent transition rates between different discrete states are set as follows
\[k_{12}(t) =t,\ k_{23}(t)=3t^{2},\ k_{31}(t)=1;\] \[k_{21}(t) =t^{2},\ k_{32}(t)=2,\ k_{13}(t)=2t,\]
in which the chemical driving energy
\[\Delta G(t)=k_{B}T\ln\frac{k_{12}(t)k_{23}(t)k_{31}(t)}{k_{21}(t)k_{32}(t)k_{13}(t)}=k_{B}T\ln\frac{3}{4}<0.\]
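Before describing the stopping strategy, a minimal Monte Carlo sketch of this three-state model may be helpful. It checks the ordinary fixed-time fluctuation theorem \(\langle e^{-S_{\rm tot}(T)}\rangle=1\) (with \(k_{B}T\) set to one) rather than the stopping-time relation (7), since the latter would additionally require the density of the auxiliary process \(Z^{T}\); the horizon \(T=1\), the uniform initial distribution, the time step and the sample size are assumptions made only to keep the sketch short.

```python
# Fixed-time fluctuation theorem check for the three-state model above.
import numpy as np

def rate_matrix(t):
    """K[i, j] = k_{(i+1)(j+1)}(t) for the rates quoted in the text."""
    return np.array([[0.0,  t,    2*t   ],   # k12 = t,   k13 = 2t
                     [t**2, 0.0,  3*t**2],   # k21 = t^2, k23 = 3t^2
                     [1.0,  2.0,  0.0   ]])  # k31 = 1,   k32 = 2

T, dt, n_traj = 1.0, 2e-3, 10_000
steps = int(T / dt)
K_list = [rate_matrix((n + 0.5) * dt) for n in range(steps)]   # midpoint rates
rng = np.random.default_rng(1)

# Marginal distribution p(x, t) of the discretised chain (boundary term of S_tot).
p = np.zeros((steps + 1, 3))
p[0] = np.ones(3) / 3
for n, K in enumerate(K_list):
    p[n + 1] = p[n] + dt * (p[n] @ K - p[n] * K.sum(axis=1))

vals = np.empty(n_traj)
for m in range(n_traj):
    x = rng.choice(3, p=p[0])
    S = np.log(p[0, x])                        # + ln p(x_0, 0)
    for n, K in enumerate(K_list):
        cum = np.cumsum(K[x]) * dt             # jump probabilities in this step
        j = int(np.searchsorted(cum, rng.random()))
        if j < 3 and j != x:                   # a jump x -> j occurred
            S += np.log(K[x, j] / K[j, x])     # entropy flow into the medium
            x = j
    S -= np.log(p[steps, x])                   # - ln p(x_T, T)
    vals[m] = np.exp(-S)

print(f"<exp(-S_tot)> = {vals.mean():.3f} +/- {vals.std() / np.sqrt(n_traj):.3f}")
```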
For the three thermodynamic functionals \(S_{\rm tot}\), \(f_{d}\) and \(W_{d}\), the stopping strategy for \(\tau\) is set as follows: the process is stopped at \(\tau<T\) only when the functional reaches a given threshold value before \(T\); while the process is stopped at the final time \(\tau=T\) if the threshold value is never reached during the duration \([0,T]\).
Fig. 1(a-c) shows the numerical results of \(\left\langle S_{\rm tot}(\tau)\right\rangle\) versus \(-\left\langle\delta_{\tau}^{S_{\rm tot}}\right\rangle\), \(\left\langle f_{d}(\tau)\right\rangle\) versus \(-\left\langle\delta_{\tau}^{f_{d}}\right\rangle\), and \(\left\langle W_{d}(\tau)\right\rangle\) versus \(-\left\langle\delta_{\tau}^{W_{d}}\right\rangle\), as functions of the threshold value. The initial distribution is set to be uniform among the three states in Fig. 1(a-b), and concentrated on the second state in Fig. 1(c). Fig. 1(d-f) test the stopping-time fluctuation relations (7), (9) and (11), with and without the compensated term \(\delta_{t}\).
In the special situation that \(\tau=t\) with probability 1, Fig. 1(g) shows that the inequality (12) for \(\left\langle W_{d}(t)\right\rangle\) is not only stronger than the inequality \(\left\langle W_{d}(t)\right\rangle\geqslant-\left\langle\delta_{t}^{W_{d}}\right\rangle\), but also the traditional Jarzynski inequality \(\left\langle W_{d}(t)\right\rangle\geqslant 0\).
Another example is the stochastic dynamics of a colloidal particle with diffusion coefficient \(D\) in a time-dependent potential \(V(t)\). The dynamics obeys the Langevin equation
\[\frac{{\rm d}X(t)}{{\rm d}t}=-\frac{\partial V}{\partial x}(X(t),t)+\xi(t),\]
where \(\xi\) is a Gaussian white noise with zero mean and autocorrelation \(\left\langle\xi(t)\xi(t^{\prime})\right\rangle=2D\delta(t-t^{\prime})\).
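The potential \(V(x,t)\) used for Fig. 2 is not specified at this point, so the following Euler-Maruyama sketch assumes a harmonic trap dragged at constant speed, \(V(x,t)=(x-vt)^{2}/2\), purely for illustration. With \(\langle\xi(t)\xi(t^{\prime})\rangle=2D\delta(t-t^{\prime})\) the equilibrium density is \(\propto e^{-V/D}\), so \(\beta=1/D\) here; dragging the trap leaves the free energy unchanged, hence \(W_{d}=W\) and the classic fixed-time relation \(\langle e^{-\beta W_{d}}\rangle=1\) can be checked directly. All numerical values are arbitrary choices.

```python
# Euler-Maruyama check of <exp(-beta W_d)> = 1 for a dragged harmonic trap
# (an assumed potential; the paper's choice of V(x, t) may differ).
import numpy as np

D, v, T, dt, n_traj = 1.0, 1.0, 1.0, 1e-3, 20_000
beta, steps = 1.0 / D, int(T / dt)
rng = np.random.default_rng(2)

x = rng.normal(0.0, np.sqrt(D), size=n_traj)   # equilibrium start in V(., 0)
W = np.zeros(n_traj)
for n in range(steps):
    t = n * dt
    W += -v * (x - v * t) * dt                 # dW = (dV/dt) dt
    x += -(x - v * t) * dt + np.sqrt(2 * D * dt) * rng.normal(size=n_traj)

print(f"<W_d> = {W.mean():.3f}   (second-law bound: >= 0)")
print(f"<exp(-beta W_d)> = {np.exp(-beta * W).mean():.3f}   (prediction: 1)")
```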
In such a stochastic system, the housekeeping heat equals zero and thus the entropy production \(S_{\rm tot}(t)\) coincides with the free energy dissipation \(f_{d}(t)\). We follow the same stopping strategy as in the discrete model of Fig. 1, and show the numerical results of \(\left\langle S_{\rm tot}(\tau)\right\rangle\) versus \(-\left\langle\delta_{\tau}^{S_{\rm tot}}\right\rangle\) in Fig. 2(a) and \(\left\langle W_{d}(\tau)\right\rangle\) versus \(-\left\langle\delta_{\tau}^{W_{d}}\right\rangle\) in Fig. 2(b) with \(T=3\). Fig. 2(c-d) test the stopping-time fluctuation relations (7) and (11), with and without the compensated term \(\delta_{t}\).
In the special situation that \(\tau=t\) with probability 1, Fig. 2(e) shows that the conclusion (12) for \(\left\langle W_{d}(t)\right\rangle\) is
stronger than the inequality \(\left\langle W_{d}(t)\right\rangle\geqslant-\langle\delta_{t}^{W_{d}}\rangle\) and the Jarzynski inequality \(\left\langle W_{d}(t)\right\rangle\geqslant 0\).
In Figs. 1 and 2, the averaged thermodynamic functionals may be negative under certain stopping strategies, but the general stopping-time fluctuation relations and the related thermodynamic inequalities always hold.
_Derivation_ First, we notice that
\[\left\{\frac{\mathcal{P}^{\tilde{Z}^{T,t}}(\widetilde{X}_{[0,t]})}{\mathcal{P} ^{X}(X_{[0,t]})}\right\}_{0\leqslant t\leqslant T}, \tag{13}\]
is a martingale, where \(\widetilde{X}_{[0,t]}\equiv\{X(t-s)\}_{0\leqslant s\leqslant t}\) denotes the time reversal of \(X_{[0,t]}\) in the duration \([0,t]\), and \(\mathcal{P}^{\tilde{Z}^{T,t}}(x_{[0,t]})\) denotes the probability of observing a given trajectory \(x_{[0,t]}\) in \(\{\tilde{Z}^{T,t}(s)=Z^{T}(s+T-t)\}_{0\leqslant s\leqslant t}\). The distribution of \(Z^{T}(0)\) is \(\tilde{\varrho}^{Z^{T}}(\cdot,0)\).
Since
\[\mathcal{P}^{X}(X_{[0,T]})=\mathcal{P}^{X}(X_{[0,T]}|X_{[0,t]})\mathcal{P}^{X }(X_{[0,t]}),\]
we have
\[\left\langle\frac{\mathcal{P}^{\tilde{Z}^{T,T}}(\widetilde{X}_{[0,T]} )}{\mathcal{P}^{X}(X_{[0,T]})}\bigg{|}X_{[0,t]}\right\rangle\] \[= \sum_{X_{[t,T]}}\frac{\mathcal{P}^{\tilde{Z}^{T,T}}(\widetilde{X}_ {[0,T]})}{\mathcal{P}^{X}(X_{[0,T]})}\mathcal{P}^{X}(X_{[0,T]}|X_{[0,t]})\] \[= \sum_{X_{[t,T]}}\frac{\mathcal{P}^{\tilde{Z}^{T,T}}(\widetilde{X}_ {[0,T]})}{\mathcal{P}^{X}(X_{[0,t]})}.\]
For \(0\leqslant u\leqslant s\leqslant T\), let \(\widetilde{X}_{[0,T]}(u,s)\) be the part of the trajectory \(\widetilde{X}_{[0,T]}\) in the duration \([u,s]\), then \(X_{[t,T]}\) and \(\widetilde{X}_{[0,T]}(0,T-t)\) are exactly the time reversal of each
other. Thus
\[\sum_{X_{[t,T]}}\mathcal{P}^{\tilde{Z}^{T,T}}(\widetilde{X}_{[0,T]})\] \[=\sum_{\widetilde{X}_{[0,T]}(0,T-t)}\mathcal{P}^{\tilde{Z}^{T,T}}(\widetilde{X}_{[0,T]})\] \[=\mathcal{P}^{\tilde{Z}^{T,T}}(\widetilde{X}_{[0,T]}(T-t,T))\] \[=\mathcal{P}^{\tilde{Z}^{T,t}}(\widetilde{X}_{[0,t]}),\]
in which the last equality comes from the definition of \(\mathcal{P}^{\tilde{Z}^{T,t}}(\widetilde{X}_{[0,t]})\). So
\[\left\langle\frac{\mathcal{P}^{\tilde{Z}^{T,t}}(\widetilde{X}_{[0,T]})}{ \mathcal{P}^{X}(X_{[0,T]})}\bigg{|}X_{[0,t]}\right\rangle=\frac{\mathcal{P}^{ \tilde{Z}^{T,t}}(\widetilde{X}_{[0,t]})}{\mathcal{P}^{X}(X_{[0,t]})},\]
which is exactly the definition of martingale for (13).
Second, we show that \(\{e^{-\beta(F_{t}+\delta_{t})}\}_{0\leqslant t\leqslant T}\) is exactly the martingale (13). By the definition of \(F_{t}\), we have
\[\frac{\mathcal{P}^{\tilde{Z}^{T,t}}(\widetilde{X}_{[0,t]})}{\mathcal{P}^{X}( X_{[0,t]})}=\frac{\mathcal{P}^{\tilde{Z}^{T,t}}(\widetilde{X}_{[0,t]})}{ \mathcal{P}^{Z^{t}}(\widetilde{X}_{[0,t]})}e^{-\beta F_{t}}. \tag{14}\]
Since
\[\mathcal{P}^{\tilde{Z}^{T,t}}(\widetilde{X}_{[0,t]})=\mathcal{P}^ {\tilde{Z}^{T,t}}(\widetilde{X}_{[0,t]}|\widetilde{X}(0))\tilde{\varrho}^{ \tilde{Z}^{T}}(X(t),T-t),\] \[\mathcal{P}^{\tilde{Z}^{t}}(\widetilde{X}_{[0,t]})=\mathcal{P}^ {\tilde{Z}^{t}}(\widetilde{X}_{[0,t]}|\widetilde{X}(0))\varrho^{\tilde{Z}^{t} }(X(t),0),\]
and notice that \(\{\tilde{Z}^{T,t}(s)\}_{0\leqslant s\leqslant t}\) and \(\{Z^{t}(s)\}_{0\leqslant s\leqslant t}\) are driven by the same protocol \(\{\tilde{\lambda}(t-s)\colon 0\leqslant s\leqslant t\}\), we have
\[\mathcal{P}^{\tilde{Z}^{T,t}}(\widetilde{X}_{[0,t]}|\widetilde{X}(0))= \mathcal{P}^{\tilde{Z}^{t}}(\widetilde{X}_{[0,t]}|\widetilde{X}(0)),\]
which implies
\[\frac{\mathcal{P}^{\tilde{Z}^{T,t}}(\widetilde{X}_{[0,t]})}{\mathcal{P}^{ \tilde{Z}^{t}}(\widetilde{X}_{[0,t]})}=\frac{\tilde{\varrho}^{\tilde{Z}^{T}}( X(t),T-t)}{\varrho^{\tilde{Z}^{t}}(X(t),0)}=e^{-\beta\delta_{t}}. \tag{15}\]
Combining (14) and (15) shows that \(\{e^{-\beta(F_{t}+\delta_{t})}\}_{0\leqslant t\leqslant T}\) is exactly the martingale (13), then the general stopping-time fluctuation theorem (2) follows from the optional stopping theorem.
When \(\tau=t\) with probability 1, we decompose
\[-\left\langle\delta_{t}\right\rangle\] \[=\frac{1}{\beta}\left\langle\ln\frac{\tilde{\varrho}^{\tilde{Z}^{ T}}(X(t),T-t)}{\varrho^{Z^{t}}(X(t),0)}\right\rangle\] \[=\frac{1}{\beta}\left\langle\ln\frac{\varrho^{X}(X(t),t)}{ \varrho^{Z^{t}}(X(t),0)}\right\rangle+\frac{1}{\beta}\left\langle\ln\frac{ \tilde{\varrho}^{\tilde{Z}^{T}}(X(t),T-t)}{\varrho^{X}(X(t),t)}\right\rangle.\]
By Jensen's inequality, we know
\[\left\langle\ln\frac{\tilde{\varrho}^{\tilde{Z}^{T}}(X(t),T-t)}{\varrho^{X}( X(t),t)}\right\rangle\leqslant\ln\left\langle\frac{\tilde{\varrho}^{\tilde{Z}^{T}}( X(t),T-t)}{\varrho^{X}(X(t),t)}\right\rangle=0.\]
Furthermore, for any given \(t\), we can choose \(\tilde{\varrho}^{\tilde{Z}^{T}}(\cdot,0)\) such that \(\tilde{\varrho}^{\tilde{Z}^{T}}(x,T-t)=\varrho^{X}(x,t)\), which leads to
\[\sup_{\tilde{\varrho}^{Z^{T}}}-\left\langle\delta_{t}\right\rangle=\frac{1}{\beta}\left\langle\ln\frac{\varrho^{X}(X(t),t)}{\varrho^{Z^{t}}(X(t),0)}\right\rangle\geqslant 0.\]
_Conclusion_ In summary, our study contributes a general framework for understanding martingales constructed upon thermodynamic functionals. We have successfully derived and proven the stopping-time fluctuation theorems, accompanied by second-law-like inequalities for mean thermodynamic functionals stopped at stochastic times. Our results generalize the recent gambling strategy and stopping-time fluctuation theorems [20] to a very general setting. Our framework encompasses the general definition of thermodynamic functionals, accommodates various types of stochastic dynamics, and allows for arbitrary stopping strategies. The validity and applicability of our framework are supported by numerical verifications conducted in stochastic dynamics with both discrete and continuous states.
Furthermore, we highlight the significance of the additional degree of freedom introduced through the compensated term \(\delta_{t}\), which leads to a strengthening of the inequality for dissipative work compared to the well-known Jarzynski inequality when the stopping time is reduced to a deterministic one. Overall, our results provide novel insights, new interpretations, and improved bounds for the fundamental principles underlying the Second Law of Thermodynamics in the context of stochastic processes.
H. Ge is supported by NSFC 11971037 and T2225001.
|
2303.13281 | Uncertain Short-Run Restrictions and Statistically Identified Structural
Vector Autoregressions | This study proposes a combination of a statistical identification approach
with potentially invalid short-run zero restrictions. The estimator shrinks
towards imposed restrictions and stops shrinkage when the data provide evidence
against a restriction. Simulation results demonstrate how incorporating valid
restrictions through the shrinkage approach enhances the accuracy of the
statistically identified estimator and how the impact of invalid restrictions
decreases with the sample size. The estimator is applied to analyze the
interaction between the stock and oil market. The results indicate that
incorporating stock market data into the analysis is crucial, as it enables the
identification of information shocks, which are shown to be important drivers
of the oil price. | Sascha A. Keweloh | 2023-03-23T14:02:54Z | http://arxiv.org/abs/2303.13281v2 | # Uncertain Prior Economic Knowledge and Statistically Identified Structural Vector Autoregressions
###### Abstract
This study proposes an estimator that combines statistical identification with economically motivated restrictions on the interactions. The estimator is identified by (mean) independent non-Gaussian shocks and allows for incorporation of uncertain prior economic knowledge through an adaptive ridge penalty. The estimator shrinks towards economically motivated restrictions when the data is consistent with them and stops shrinkage when the data provides evidence against the restriction. The estimator is applied to analyze the interaction between the stock and oil market. The results suggest that what is usually identified as oil-specific demand shocks can actually be attributed to information shocks extracted from the stock market, which explain about \(30-40\%\) of the oil price variation.
## 1 Introduction
Traditional identification methods of structural vector autoregressions (SVAR) rely on imposing prior economic knowledge on the interaction, such as short- and long-run restrictions, sign restrictions, or proxy variables. More recently, data-driven methods have been used to ensure identification by imposing structure on the stochastic properties of the shocks, such as time-varying volatility or non-Gaussian and independent shocks. Statistical identification approaches do not rely on prior economic knowledge on the interaction to ensure identification. However, prior economic knowledge is still required to attach a structural interpretation to the shocks, i.e., label the identified shocks. Therefore, prior economic knowledge remains necessary, even if it is not required for identification. Consequently, the question is not whether we have prior economic knowledge, but how we want to use it.
The question relates to the critique of the "all-or-nothing approach" w.r.t. prior economic knowledge raised by Baumeister and Hamilton (2019) and Baumeister and Hamilton (2022). That is, traditional methods usually impose prior knowledge as incontestable truth, e.g., by restricting a given response to zero and only zero without the ability to ever update the restriction with the data, while other approaches ignore prior knowledge entirely. In this line of thought, estimators based on statistical identification approaches ignoring any prior economic knowledge on the interaction represent the extreme case of the "nothing approach".
This study proposes to incorporate uncertain prior economic knowledge into the estimation of SVAR models identified by stochastic properties. The proposed Ridge SVAR-GMM estimator (RGMM) uses higher-order moment conditions to ensure identification by non-Gaussian and (mean) independent shocks and adds a ridge penalty to penalize deviations from restrictions implied by prior economic knowledge. The intuition of the shrinkage estimator is that before seeing the data, the researcher believes that the restrictions are correct and thus shrinks towards the restrictions. However, the restrictions representing prior knowledge are not required for identification and therefore, the estimator can update
the prior knowledge and shrinks less towards the restriction if the data provide evidence against a given restriction.
The main contribution of this study is to incorporate uncertain prior economic knowledge using a ridge penalty with adaptive weights (see, e.g., Zou (2006)). This approach allows to shrink towards economically motivated restrictions while accounting for uncertainty in the prior knowledge. The adaptive weights have an important feature: it is cheap to deviate from false restrictions and costly to deviate from correct restrictions. Therefore, the weights determine the importance of a given restriction in a data-driven way. The simulation results demonstrate that correctly imposed restrictions increase the accuracy of the estimation, highlighting the value of prior economic knowledge beyond identification. Additionally, the approach is able to detect false restrictions and reduce their impact, as shown in the simulation. Furthermore, the RGMM estimator does not require restrictions for identification. Therefore, researchers can include an arbitrary number of theoretically well-founded restrictions and are not forced to include additional controversial ones.
This approach is only possible if the restrictions are not required to ensure identification. In this study, identification follows from non-Gaussian and (mean) independent shocks. The assumption of independent shocks has been criticized in the literature with the common objection that shocks driven by the same volatility process are not independent, see, e.g. Montiel Olea et al. (2022). However, the RGMM estimator addresses this limitation by relaxing the independence assumption. Specifically, the estimator can identify the SVAR even if the shocks are driven by the same volatility process, as long as the shocks are sufficiently skewed. Only if all shocks are symmetric, identification requires independent shocks with at most one shock exhibiting zero excess kurtosis. To achieve this, the RGMM estimator uses a modified version of the fast SVAR-GMM estimator proposed by Keweloh (2021) which corresponds to a computationally cheap version of a generalized method of moments (GMM) estimator minimizing all covariance, coskewness, and cokurtosis moment conditions implied by mean independent structural shocks.
Bayesian approaches offer a natural way to incorporate economic knowledge using the prior
distribution of the parameters. However, traditional Bayesian SVAR models rely on exact prior information for identification, whereas recent advances by Baumeister and Hamilton (2019) and Baumeister and Hamilton (2022) propose using imperfect prior information for identification. One limitation of using prior information for identification is that it cannot be updated. Alternatively, Bayesian models identified by independence and non-Gaussianity in principle allow for imposing and updating economically motivated priors. However, these estimators are not widely used in the literature, with only a few exceptions such as Lanne and Luoto (2020), Anttonen et al. (2021), Braun (2021), and Keweloh et al. (2023). The estimator proposed in this study offers a frequentist alternative to Bayesian methods with the advantage that the proposed estimator does not require the oftentimes criticized assumption of mutually independent structural shocks.
The application analyzes the interaction of the oil and stock market. Kilian and Park (2009) propose recursive restrictions to identify and estimate the effects of different oil and stock market shocks. The proposed restrictions are widely used to analyze the impact of oil market shocks on the stock market, see, e.g., Apergis and Miller (2009), Abhyankar et al. (2013), Kang and Ratti (2013), Sim and Zhou (2015), Ahmadi et al. (2016), Lambertides et al. (2017), Mokni (2020), Arampatzidis et al. (2021), Kwon (2022). However, the impact and importance of stock market shocks for the oil price is usually not analyzed. The application in this study fills this gap. I present evidence that oil and stock prices cannot be ordered recursively. Allowing both variables to interact simultaneously, I find that information shocks derived from the stock market do contain important information on the oil price. The non-recursive inclusion of stock market information reveals that a significant portion of the shocks that were previously classified as oil-specific demand shocks can actually be attributed to information shocks. These findings highlight the crucial role of stock market information in explaining oil price fluctuations.
The remainder of the paper is organized as follows: Section 2 contains a brief overview on SVAR models. Section 3 introduces the RGMM estimator. Section 4 uses a Monte Carlo simulation to illustrate the ability of the estimator to exploit correctly and discard falsely
imposed restrictions. Section 5 applies the estimator to an oil and stock market SVAR. Section 6 concludes.
## 2 Overview: SVAR
Consider the SVAR with \(n\) variables
\[y_{t} =\nu+A_{1}y_{t-1}+...+A_{p}y_{t-p}+u_{t} \tag{1}\] \[u_{t} =B_{0}\varepsilon_{t}, \tag{2}\]
with parameter matrices \(A_{1},...,A_{p}\in\mathbb{R}^{n\times n}\) which satisfy \(det(I-A_{1}c-...-A_{p}c^{p})\neq 0\) for \(|c|\leq 1\), an intercept term \(\nu\in\mathbb{R}^{n}\), an invertible matrix \(B_{0}\in\mathbb{B}:=\{B\in\mathbb{R}^{n\times n}|det(B)\neq 0\}\), an \(n\)-dimensional vector of time series \(y_{t}=[y_{1t},...,y_{nt}]^{\prime}\), an \(n\)-dimensional vector of reduced form shocks \(u_{t}=[u_{1t},...,u_{nt}]^{\prime}\), and an \(n\)-dimensional vector of structural shocks \(\varepsilon_{t}=[\varepsilon_{1t},...,\varepsilon_{nt}]^{\prime}\) with mean zero and unit variance. The parameter matrices \(A_{1},...,A_{p}\) and the intercept term can consistently be estimated to obtain the reduced form shocks \(u_{t}=y_{t}-\nu-A_{1}y_{t-1}-...-A_{p}y_{t-p}\). To simplify, I treat the reduced form shocks as observable random variables and focus on the simultaneous interaction in Equation (2).
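For concreteness, this reduced-form step amounts to equation-by-equation OLS; the following minimal numpy sketch (illustrative code, not taken from the paper) recovers the residuals \(u_{t}\) that the rest of the analysis treats as observable.

```python
import numpy as np

def var_residuals(y, p):
    """OLS estimation of a VAR(p) with intercept; returns the reduced form shocks u_t.

    y : (T, n) array of observations, p : lag order.
    """
    T, n = y.shape
    # Regressor matrix [1, y_{t-1}', ..., y_{t-p}'] for t = p, ..., T-1.
    X = np.ones((T - p, 1 + n * p))
    for lag in range(1, p + 1):
        X[:, 1 + n * (lag - 1): 1 + n * lag] = y[p - lag: T - lag, :]
    Y = y[p:, :]
    coef, *_ = np.linalg.lstsq(X, Y, rcond=None)   # identical regressors in every equation
    u = Y - X @ coef                               # reduced form shocks
    return u, coef
```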
Define the innovations
\[e(B)_{t}:=B^{-1}u_{t}, \tag{3}\]
equal to the innovations obtained by unmixing the reduced form shocks with a matrix \(B\in\mathbb{B}\). For \(B=B_{0}\), the innovations are equal to the structural shocks, i.e., \(e(B_{0})_{t}=\varepsilon_{t}\). For estimates \(\hat{B}\) of \(B_{0}\), I refer to \(e(\hat{B})_{t}\) as the estimated structural shocks. In the remainder of this section I show how imposed structure on \(B_{0}\) and \(\epsilon_{t}\) can be used to identify the SVAR, i.e., ensure that \(B=B_{0}\) and \(e(B_{0})_{t}=\varepsilon_{t}\).
The imposed structure includes assumptions on the mutual (in)dependencies of the structural shocks. Therefore, I first state the definitions of uncorrelated, mean independent, and
independent structural shocks. Two shocks are uncorrelated if \(E\left[\varepsilon_{it}\varepsilon_{jt}\right]=E\left[\varepsilon_{it}\right]E \left[\varepsilon_{jt}\right]\) for \(i\neq j\) which has no implications on the higher-order dependencies of the shocks. The \(i\)-th structural shock is mean independent of the \(j\)-th shock if \(E\left[\varepsilon_{it}g(\varepsilon_{jt})\right]=E\left[\varepsilon_{it} \right]E\left[g(\varepsilon_{jt})\right]\) for \(i\neq j\) with a bounded, measurable function \(g(\cdot)\), meaning the \(j\)-th shock contains no information on the mean of the \(i\)-th shock. Two structural shocks are independent if \(E\left[h(\varepsilon_{it})g(\varepsilon_{jt})\right]=E\left[h(\varepsilon_{ it})\right]E\left[g(\varepsilon_{jt})\right]\) for \(i\neq j\) and any bounded, measurable functions \(g(\cdot)\) and \(h(\cdot)\), meaning a given shock contains no information on other shocks.
Almost all identification approaches assume uncorrelated structural shocks. Therefore, the matrix \(B\) should generate uncorrelated innovations with unit variance, which yields \((n+1)n/2\) moment conditions. However, the matrix \(B\) has \(n^{2}\) coefficients. Therefore, infinitely many matrices \(B\in\mathbb{B}\) generate uncorrelated innovations with unit variance, meaning the assumption of uncorrelated structural shocks is not sufficient to identify the SVAR. Traditional identification methods solve the identification problem by imposing structure on the interaction of the shocks (e.g. short-run restrictions in Sims (1980), long-run restrictions in Blanchard and Quah (1989), sign restrictions in Uhlig (2005), or proxy variables in Mertens and Ravn (2013)). Probably the most frequently imposed structure is short-run restrictions, meaning restrictions on coefficients of the \(B\) matrix which reduce the number of free coefficients to \((n+1)n/2\) such that the remaining unrestricted coefficients are identified by the \((n+1)n/2\) moment conditions implied by uncorrelated shocks with unit variance. Note that at least \((n-1)n/2\) restrictions are required to ensure identification. Moreover, incorrect restrictions lead to inconsistent estimates. Additionally, with \((n-1)n/2\) restrictions, the SVAR is just identified. Therefore, even when the sample size goes to infinity, we are not able to detect incorrect restrictions.
More recently, identification approaches based on additional structure imposed on the stochastic properties of the structural shocks have been put forward in the literature. These approaches use properties like time-varying volatility (see, e.g., Rigobon (2003), Lanne et al. (2010), Lutkepohl and Netsunajev (2017), Lewis (2021), or Bertsche and Braun (2022)) or the non-Gaussianity and independence of the shocks (see, e.g., Matteson and Tsay (2017),
Herwartz and Plodt (2016), Gourieroux et al. (2017), Lanne et al. (2017), Lanne and Luoto (2021), Keweloh (2021b), Guay (2021), or Lanne et al. (2022)) to ensure identification without any imposed structure on the interaction of the shocks. In this study, I use information in third and fourth moments of the shocks to identify a non-Gaussian SVAR.
The assumption of mutually independent shocks can be used to derive higher-order moment conditions which allow to identify the \(n^{2}\) coefficients of \(B_{0}\) up to sign and permutation without any restrictions on \(B_{0}\), see, e.g. Lanne and Luoto (2021), Keweloh (2021b), Guay (2021), or Lanne et al. (2022). For example, if the structural shocks are mutually independent, it holds that the coskewness \(E[\epsilon_{it}\epsilon_{jt}\epsilon_{kt}]\) is equal to zero for any combination of structural shocks \(i,j,k\in 1,...,n\) except for \(i=j=k\). Analogously to the traditional approach sketched above, the matrix \(B\) should generate innovations which satisfy the conditions \(E[e(B)_{it}e(B)_{jt}e(B)_{kt}]=0\). The same logic can be applied to derive cokurtosis conditions. Therefore, imposing more structure on the (in)dependencies of the structural shocks allows to derive additional moment conditions, such that the number of moment conditions exceeds the number of unknown parameters of \(B_{0}\). Lanne and Luoto (2021), Keweloh (2021b), Guay (2021), and Lanne et al. (2022) then derive explicit conditions, including conditions on the non-Gaussianity of the shocks, required to ensure that the higher-order moment conditions identify the SVAR.
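As a simple illustration of how such conditions can be taken to the data, the sketch below evaluates the sample coskewness conditions and the symmetric cokurtosis conditions for a candidate matrix \(B\); it is a simplified illustration and not the estimator developed below.

```python
import numpy as np
from itertools import combinations

def higher_moment_conditions(B, u):
    """Sample coskewness and symmetric cokurtosis conditions for the innovations e(B) = B^{-1} u."""
    e = u @ np.linalg.inv(B).T                      # rows are e(B)_t'
    n = e.shape[1]
    coskewness = [(e[:, i] * e[:, j] * e[:, k]).mean()
                  for i in range(n) for j in range(i, n) for k in range(j, n)
                  if not (i == j == k)]             # E[e_i e_j e_k] = 0 unless i = j = k
    cokurtosis = [(e[:, i] ** 2 * e[:, j] ** 2).mean() - 1.0
                  for i, j in combinations(range(n), 2)]   # E[e_i^2 e_j^2] = 1 for i != j
    return coskewness, cokurtosis
```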
## 3 Ridge SVAR-GMM estimator
This section proposes to incorporate short-run restrictions using a ridge penalty with adaptive weights into the data-driven SVAR estimation approach based on non-Gaussianity and (mean) independent shocks. The restrictions reflect prior economic knowledge and, in contrast to traditional SVAR estimators based on restrictions, the restrictions are not required to ensure identification. Therefore, the researcher is not forced to include a fixed number of restrictions and false restrictions can be detected and neglected.
Let \(\mathcal{R}\) be the set of all tuples \((i,j)\) with \(i,j\in\{1,...,n\}\) corresponding to restricted elements
\(B_{ij}\) and let \(R_{ij}\in\mathbb{R}\) be the restrictions corresponding to the element \(B_{ij}\). For example, imposing a recursive order implies that all elements in the upper-triangular of \(B_{0}\) are equal to zero, which corresponds to \(R_{ij}=0\) for \((i,j)\in\mathcal{R}=\{(i,j)\in\{1,...,n\}^{2}|j>i\}\). Importantly, various other identification approaches including long-run restrictions and proxy variables can be written as short-run restrictions and thus can be implemented using the ridge estimator proposed in this section. Moreover, the same approach can be applied to restrictions in an \(A\)-Type SVAR model referring to the system \(A_{0}u_{t}=\varepsilon_{t}\) instead of \(u_{t}=B_{0}\varepsilon_{t}\).
In general, a penalized data-driven SVAR estimator can be written as
\[\hat{B}=\operatorname*{arg\,min}_{B\in\mathbb{B}}\ Q(B,u)+\lambda\sum_{(i,j)\in\mathcal{R}}v_{ij}p(B_{ij},R_{ij}) \tag{4}\] \[\text{s.t. }J(B,u)=0,\]
where \(Q(B,u):\mathbb{B},\mathbb{R}^{T\times n}\rightarrow\mathbb{R}\) is a loss function, \(p(B_{ij},R_{ij}):\mathbb{R},\mathbb{R}\rightarrow\mathbb{R}^{\geq 0}\) is a penalty function which penalizes deviations of \(B_{ij}\) from \(R_{ij}\), \(v_{ij}\) are positive data-dependent weights for the penalty of the element \(B_{ij}\), \(\lambda\) is a non-negative tuning parameter, and \(J(B,u):\mathbb{B},\mathbb{R}^{T\times n}\rightarrow\mathbb{R}^{s}\) contains \(s\) further constraints, e.g. constraints on values of the \(B\) matrix or constraints on the combination of \(B\) and \(u\) like for instance narrative sign restrictions or the constraint that \(B\) unmixes \(u\) into innovations with unit variance. Importantly, the combination of the loss function \(Q(B,u)\) and constraints \(J(B,u)\) needs to ensure identification of the SVAR.
In this study, I use a loss function and constraints which correspond to a modified version of the fast SVAR-GMM estimator in Keweloh (2021b). The loss and constraints lead to a computationally cheap estimator which is identified under the assumption of skewed mutually mean independent shocks or independent shocks with non-zero excess kurtosis,
see Proposition 1 below. In particular, this study uses the loss
\[Q(B,u):= -\sum_{i=1}^{n}\left(1/T\sum_{t=1}^{T}e(B)_{it}^{3}\right)^{2}-\sum_{i=1}^{n}\left(1/T\sum_{t=1}^{T}e(B)_{it}^{4}-3\right)^{2} \tag{5}\] \[-6\sum_{i=1}^{n}\sum_{j=i+1}^{n}\left(1/T\sum_{t=1}^{T}e(B)_{it}^{2}e(B)_{jt}^{2}-1\right)^{2}\]
and the constraint
\[J(B,u):= 1/T\sum_{t=1}^{T}e(B)_{t}e(B)_{t}^{\prime}-I, \tag{6}\]
which ensure that the estimated structural shocks are uncorrelated with unit variance. For any \(B\) matrix satisfying the constraint \(J(B,u)\), the first term of the loss in Equation (5) is equal to a weighted sum of all squared coskewness and cokurtosis conditions implied by mutually independent shocks, see Keweloh (2021b). I propose to add the second term to the loss function in Equation (5), such that the loss \(Q(B,u)\) under the constraint \(J(B,u)\) is equal to a weighted sum of all squared coskewness and cokurtosis conditions implied by mutually mean independent shocks. In particular, the loss \(Q(B,u)\) under the constraint \(J(B,u)\) can be written as
\[Q(B,u)= \,\omega+3\sum_{i=1}^{n}\sum_{j=1;j\neq i}^{n}\left(1/T\sum_{t=1} ^{T}e(B)_{it}^{2}e(B)_{jt}\right)^{2}\] \[+6\sum_{i=1}^{n}\sum_{j=i+1}^{n}\sum_{k=j+1}^{n}\left(1/T\sum_{t= 1}^{T}e(B)_{it}e(B)_{jt}e(B)_{kt}\right)^{2}\] \[+4\sum_{i=1}^{n}\sum_{j=1;j\neq i}^{n}\left(1/T\sum_{t=1}^{T}e(B) _{it}^{3}e(B)_{jt}\right)^{2} \tag{7}\] \[+12\sum_{i=1}^{n}\sum_{j=1;j\neq i}^{n}\sum_{k=j+1;k\neq i}^{n} \left(1/T\sum_{t=1}^{T}e(B)_{it}^{2}e(B)_{jt}e(B)_{kt}\right)^{2}\] \[+24\sum_{i=1}^{n}\sum_{j=i+1}^{n}\sum_{k=j+1}^{n}\sum_{l=k+1}^{n} \left(1/T\sum_{t=1}^{T}e(B)_{it}e(B)_{jt}e(B)_{kt}e(B)_{lt}\right)^{2},\]
where \(\omega\) is a scalar invariant with respect to \(B\) satisfying \(J(B,u)\) in Equation (6), compare Keweloh (2021b). Importantly, the number of coskewness and cokurtosis conditions in Equation (7) increases quickly with the dimension of the SVAR and thus leads to an increase of the computational complexity. The advantage of the loss \(Q(B,u)\) in Equation (5) is that it remains computationally cheap to evaluate even when the dimension of the SVAR increases.
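A minimal numpy sketch of the loss in Equation (5) may help fix ideas (illustrative only; it presumes that the candidate \(B\) already satisfies the unit-variance constraint \(J(B,u)=0\)).

```python
import numpy as np

def fast_gmm_loss(B, u):
    """Loss Q(B, u) from Equation (5) evaluated at the innovations e(B) = B^{-1} u."""
    e = u @ np.linalg.inv(B).T
    n = e.shape[1]
    loss = -np.sum((e ** 3).mean(axis=0) ** 2)              # squared skewness terms
    loss -= np.sum(((e ** 4).mean(axis=0) - 3.0) ** 2)      # squared excess kurtosis terms
    for i in range(n):
        for j in range(i + 1, n):
            loss -= 6.0 * ((e[:, i] ** 2 * e[:, j] ** 2).mean() - 1.0) ** 2
    return loss
```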
The following proposition provides the conditions under which the loss in Equation (5) and the constraint in Equation (6) identify the SVAR.
**Proposition 1**.: _Consider the SVAR \(u_{t}=B_{0}\epsilon_{t}\) with \(B_{0}\in\mathbb{B}\) and structural shocks with zero mean, unit variance, and finite third and fourth moments._
1. _If the components of the structural shocks are mutually mean independent and at most one component of the structural shocks has zero skewness, global identification of_ \(B_{0}\) _up to sign and permutation follows from Bonhomme and Robin (_2009_)._
2. _If the components of the structural shocks are mutually independent and at most one component of the structural shocks has zero excess kurtosis, local identification of_ \(B_{0}\) _follows from Lanne and Luoto (_2021_)._
Proof.: See the appendix.
The proposition shows that if the shocks are sufficiently skewed, identification only requires mean independent shocks and thus allows that multiple shocks are driven by the same volatility process. Only if the shocks are not sufficiently skewed, identification relies on independent shocks with sufficient excess kurtosis.
To be precise, the proof of the first statement in Proposition 1 technically only requires that coskewness conditions implied by mutually independent shocks hold, which follows from mutually mean independent shocks. As for the second identification statement, it relies on the local identification result in Lanne and Luoto (2021), which is based on asymmetric cokurtosis conditions. Nonetheless, the proof of the proposition technically
assumes that all cokurtosis conditions (not just the asymmetric ones) implied by mutually independent shocks hold. Therefore, the statement does not necessarily require independent shocks, but it necessitates that all cokurtosis conditions resulting from independent shocks hold. However, the conditions do not follow from mean independent shocks and finding an economically plausible process other than independent shocks that would yield shocks satisfying all such conditions is not straightforward.
Note that the fast SVAR-GMM estimator is not necessarily asymptotically efficient, meaning an efficiently weighted SVAR-GMM estimator can have a smaller asymptotic variance. However, with higher-order moment conditions the asymptotically efficient weighting matrix is difficult to estimate in small samples and in this case would require finite moments up to order eight. Keweloh (2021a) shows that standard approaches to estimate the asymptotically efficient weighting matrix lead to estimators with poor small sample performance. Instead, the goal of this study is to use structure from economic theory included via the penalty function to increase the precision of estimation in small samples.
In particular, I use a \(L_{2}\) (ridge) penalty
\[p(B_{ij},R_{ij})=(B_{ij}-R_{ij})^{2} \tag{8}\]
and adaptive weights \(v_{ij}=1/(\hat{B}_{ij(0)}-R_{ij})^{2}\) where \(\hat{B}_{ij(0)}\) is an initial consistent estimator of the corresponding \(B_{0}\) element, i.e. obtained by a data-driven estimator without penalties. Due to the adaptive weights, it becomes more costly to deviate from the restriction for elements where the initial estimate is close to the restriction and less costly if the initial estimate is further away from the restriction, compare Zou (2006) for adaptive Lasso and Dai et al. (2018) for adaptive Ridge estimators.1
Footnote 1: The weights are implemented using the formula provided by Frommlet and Nuel (2016) to avoid numerical instabilities.
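A sketch of the adaptive ridge penalty is given below (illustrative; the small constant added to the denominator is an assumption standing in for the stabilization of Frommlet and Nuel (2016) mentioned in the footnote).

```python
def adaptive_ridge_penalty(B, restrictions, B_init, lam, eps=1e-8):
    """lam * sum of (B_ij - R_ij)^2 / (B_init_ij - R_ij)^2 over the restricted elements.

    restrictions : dict mapping (i, j) -> R_ij; B_init is an initial unpenalized estimate.
    eps guards against division by zero (an assumption, not the exact stabilization in the paper).
    """
    penalty = 0.0
    for (i, j), r in restrictions.items():
        weight = 1.0 / ((B_init[i, j] - r) ** 2 + eps)
        penalty += weight * (B[i, j] - r) ** 2
    return lam * penalty
```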
Proposition 1 only ensures identification up to sign and permutation. The indeterminacy implies that it is not possible to statistically discriminate between different orders of the structural shock, e.g. for any sign-permutation matrix \(P\) the models \(u_{t}=B_{0}\epsilon_{t}\) and
\(u_{t}=\tilde{B}_{0}\tilde{\epsilon}_{t}\) with \(\tilde{B}_{0}=B_{0}P^{-1}\) and \(\tilde{\epsilon}_{t}=P\epsilon_{t}\) have the same dependency structure and hence it is not possible to use the assumption of independent shocks to discriminate between different orderings of the shocks. In contrast to traditional restriction-based estimators where the restrictions imply the labels a priori to the estimation, the shocks estimated using a data-driven estimator are typically labeled a posteriori to the estimation. However, if the position of a structural shock of interest is unknown a priori, it is not possible to impose a priori restrictions or shrinkage on the impact of the structural shock. Therefore, similar to Keweloh et al. (2023), I use a constrained set of admissible mixing matrices containing unique sign-permutation representatives centered at an initial labeled estimator or guess, \(\bar{B}\) of \(B_{0}\), i.e.,
\[\bar{\mathbb{B}}_{\bar{B}}:=\{B\in\mathbb{B}||C_{kk}|>|C_{kl}|\text { for }l=k+1,...,n\text{ and }C:=\bar{B}^{-1}BD \tag{9}\] \[\text{ and a scaling matrix }D\text{ scaling each column of }C\text{ to Euclidean norm one}\}.\]
For \(\bar{B}=I\) the set \(\bar{\mathbb{B}}_{\bar{B}=I}\) is equal to the set of unique sign-permutation representatives proposed by Lanne et al. (2017). Without centering the set near \(B_{0}\), the set can contain estimators corresponding to different orders of the shocks. For example, consider the bivariate SVAR with
\[B_{0}=\begin{bmatrix}\sqrt{0.5}&-\sqrt{0.499}\\ \sqrt{0.5}&\sqrt{0.501}\end{bmatrix}\text{ \ \ \ and }u_{t}=B_{0}\begin{bmatrix} \epsilon_{1t}\\ \epsilon_{2t}\end{bmatrix}\]
with \(B_{0}\in\bar{\mathbb{B}}_{\bar{B}=I}\). Consider an estimator \(\hat{B}\) and its sign permutation \(\tilde{B}\) with
\[\hat{B}=\begin{bmatrix}\sqrt{0.5}&-\sqrt{0.501}\\ \sqrt{0.5}&\sqrt{0.499}\end{bmatrix}\text{ \ \ \ and \ \ \ }\tilde{B}= \begin{bmatrix}\sqrt{0.501}&\sqrt{0.5}\\ -\sqrt{0.499}&\sqrt{0.5}\end{bmatrix} \tag{10}\]
which both solve the optimization problem in Equation (11). Obviously, \(\hat{B}\) corresponds to the same order of the shocks while \(\tilde{B}\) corresponds to the reverse order compared to the
order in \(B_{0}\). However, even with \(B_{0}\in\bar{\mathbb{B}}_{\bar{B}=I}\) the estimator \(\hat{B}\) close to \(B_{0}\) is not contained in \(\bar{\mathbb{B}}_{\bar{B}=I}\) while the estimator \(\tilde{B}\) corresponding to the reverse order is contained in \(\bar{\mathbb{B}}_{\bar{B}=I}\). This problem of different orders contained in \(\bar{\mathbb{B}}_{\bar{B}}\) occurs if \(B_{0}\) is located close to the boundary of \(\bar{\mathbb{B}}_{\bar{B}}\). Centering the set close to \(B_{0}\) mitigates the problem, i.e., \(\hat{B}\) corresponding to the correct order is contained in \(\bar{\mathbb{B}}_{\bar{B}=B_{0}}\) and \(\tilde{B}\) corresponding to the reverse order is not. The initial labeled estimator or guess \(\bar{B}\) determines the order of the structural shocks a priori to the estimation of the penalized data-driven estimator which allows to apply the shrinkage penalty on the impact of structural shocks in the set \(\bar{\mathbb{B}}_{\bar{B}}\).
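The membership condition in Equation (9) is easy to check numerically; the sketch below reproduces the bivariate example above (illustrative code, not taken from the paper).

```python
import numpy as np

def in_representative_set(B, B_bar):
    """Check |C_kk| > |C_kl| for all l > k, with C = B_bar^{-1} B D and D scaling each column of C to norm one."""
    C = np.linalg.solve(B_bar, B)
    C = C / np.linalg.norm(C, axis=0)
    n = C.shape[0]
    return all(abs(C[k, k]) > abs(C[k, l]) for k in range(n) for l in range(k + 1, n))

B0      = np.array([[np.sqrt(0.5), -np.sqrt(0.499)], [ np.sqrt(0.5),   np.sqrt(0.501)]])
B_hat   = np.array([[np.sqrt(0.5), -np.sqrt(0.501)], [ np.sqrt(0.5),   np.sqrt(0.499)]])
B_tilde = np.array([[np.sqrt(0.501), np.sqrt(0.5)],  [-np.sqrt(0.499), np.sqrt(0.5)]])

print(in_representative_set(B_hat, np.eye(2)), in_representative_set(B_tilde, np.eye(2)))  # False True
print(in_representative_set(B_hat, B0),        in_representative_set(B_tilde, B0))         # True False
```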
Put together, the Ridge SVAR-GMM (RGMM) estimator is given by
\[\hat{B}=\operatorname*{arg\,min}_{B\in\bar{\mathbb{B}}_{\bar{B}}} -\sum_{i=1}^{n}\left(1/T\sum_{t=1}^{T}e(B)_{it}^{3}\right)^{2}-\sum_{i=1}^{n}\left(1/T\sum_{t=1}^{T}e(B)_{it}^{4}-3\right)^{2} \tag{11}\] \[-6\sum_{i=1}^{n}\sum_{j=i+1}^{n}\left(1/T\sum_{t=1}^{T}e(B)_{it}^{2}e(B)_{jt}^{2}-1\right)^{2}\] \[+\lambda\sum_{(i,j)\in\mathcal{R}}\frac{(B_{ij}-R_{ij})^{2}}{(\hat{B}_{ij(0)}-R_{ij})^{2}}\] \[\text{s.t. }1/T\sum_{t=1}^{T}e(B)_{t}e(B)_{t}^{\prime}-I=0.\]
The RGMM estimator has several appealing features. First, identification only requires that the coskewness and cokurtosis conditions implied by mutually mean independent shocks hold. In particular, all moment conditions remain valid if multiple shocks are driven by the same variance process. Second, the estimator allows to include restrictions motivated by prior knowledge from economic theory in a non-invasive manner, meaning restrictions which fit the data are costly to discard whereas restrictions which do not fit the data get less penalized and deviations from these restrictions are less costly. Third, due to the loss and the \(L_{2}\) penalty, the estimator is computationally cheap.
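A rough sketch of how Equation (11) could be handed to an off-the-shelf constrained optimizer is given below. It is an illustration under simplifying assumptions (a generic SLSQP solver, a small constant stabilizing the adaptive weights, and no explicit handling of the sign-permutation set or of near-singular candidate matrices), not the author's implementation.

```python
import numpy as np
from scipy.optimize import minimize

def rgmm_estimate(u, restrictions, B_init, lam, eps=1e-8):
    """Minimize the Equation (5) loss plus the adaptive ridge penalty subject to J(B, u) = 0."""
    T, n = u.shape

    def objective(x):
        B = x.reshape(n, n)
        e = u @ np.linalg.inv(B).T
        q = -np.sum((e ** 3).mean(axis=0) ** 2) - np.sum(((e ** 4).mean(axis=0) - 3.0) ** 2)
        for i in range(n):
            for j in range(i + 1, n):
                q -= 6.0 * ((e[:, i] ** 2 * e[:, j] ** 2).mean() - 1.0) ** 2
        pen = sum((B[i, j] - r) ** 2 / ((B_init[i, j] - r) ** 2 + eps)
                  for (i, j), r in restrictions.items())
        return q + lam * pen

    def unit_variance_constraint(x):
        B = x.reshape(n, n)
        e = u @ np.linalg.inv(B).T
        S = e.T @ e / T - np.eye(n)
        return S[np.triu_indices(n)]          # unique elements of J(B, u)

    result = minimize(objective, B_init.ravel(), method="SLSQP",
                      constraints=[{"type": "eq", "fun": unit_variance_constraint}],
                      options={"maxiter": 500})
    return result.x.reshape(n, n)
```

In practice one would re-solve the problem over a grid of \(\lambda\) values and select the tuning parameter by cross-validation, as described in the next section.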
With the \(L_{2}\) penalty, the RGMM estimator never shrinks \(\hat{B}_{ij}\) to \(R_{ij}\) and thus does not allow to select valid restrictions. Using a \(L_{1}\) (Lasso) penalty would allow to shrink \(\hat{B}_{ij}\) exactly to \(R_{ij}\) and thus allows to select valid restrictions. However, the loss function \(Q(B,u)\) is
non-linear and solving the optimization problem is computationally demanding with the \(L_{1}\) penalty. Dai et al. (2018) propose the broken adaptive ridge estimator which iteratively updates the adaptive weights of the \(L_{2}\) penalty and show that the estimator approximates an \(L_{0}\) penalty and allows to select variables. If selecting restrictions is desired, an analogous iterative procedure can be applied to the RGMM estimator.
## 4 Finite sample performance
The following Monte Carlo study illustrates the benefits of correctly imposed restrictions via the penalty term of the RGMM estimator and sheds light on its ability to distinguish between correct and incorrect restrictions. Restrictions that are correctly imposed via the penalty term reduce the bias and MSE of the estimates, and the impact of false restrictions decreases with increasing sample size.
I simulate an SVAR with four variables
\[\begin{bmatrix}u_{1t}\\ u_{2t}\\ u_{3t}\\ u_{4t}\end{bmatrix}=\begin{bmatrix}10&0&0&0\\ 5&10&0&0\\ 5&5&10&5\\ 5&5&5&10\end{bmatrix}\begin{bmatrix}\varepsilon_{1t}\\ \varepsilon_{2t}\\ \varepsilon_{3t}\\ \varepsilon_{4t}\end{bmatrix}, \tag{12}\]
where the structural shocks are independently and identically drawn from the two-component mixture \(\epsilon_{it}\sim 0.79\ \mathcal{N}(-0.2,0.7^{2})+0.21\ \mathcal{N}(0.75,1.5^{2})\), where \(\mathcal{N}(\mu,\sigma^{2})\) indicates a normal distribution with mean \(\mu\) and standard deviation \(\sigma\). The shocks have skewness \(0.9\) and excess kurtosis \(2.4\).
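The data-generating process of the simulation can be reproduced in a few lines (illustrative; the final standardization of the drawn shocks to exactly zero mean and unit variance is an assumption made here for convenience).

```python
import numpy as np
from scipy.stats import skew, kurtosis

rng = np.random.default_rng(0)

def draw_mixture_shocks(T, n):
    """i.i.d. draws from 0.79 N(-0.2, 0.7^2) + 0.21 N(0.75, 1.5^2), then standardized."""
    use_first = rng.random((T, n)) < 0.79
    eps = np.where(use_first, rng.normal(-0.2, 0.7, (T, n)), rng.normal(0.75, 1.5, (T, n)))
    return (eps - eps.mean(axis=0)) / eps.std(axis=0)

B0 = np.array([[10., 0., 0., 0.],
               [ 5., 10., 0., 0.],
               [ 5., 5., 10., 5.],
               [ 5., 5., 5., 10.]])

eps = draw_mixture_shocks(5000, 4)
u = eps @ B0.T                                   # reduced form shocks u_t = B0 eps_t
print(skew(eps[:, 0]), kurtosis(eps[:, 0]))      # roughly 0.9 skewness and 2.4 excess kurtosis
```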
The estimators use two different sets of restrictions for the penalty
\[R_{1}=\begin{bmatrix}.&0&0&0\\.&.&0&0\\.&.&.&.\\.&.&.&.\end{bmatrix}\text{ and }R_{2}=\begin{bmatrix}.&0&0&0\\.&.&0&0\\.&.&.&0\\.&.&.&.\end{bmatrix}. \tag{13}\]
The first set of restrictions \(R_{1}\) contains the correct zero restrictions. The second set \(R_{2}\) imposes a recursive structure and thus contains one incorrect restriction. In each simulation, I calculate three estimators: The first estimator, denoted by fGMM, is the modified fast SVAR GMM estimator and uses no restrictions. The second estimator, denoted by RGMM(\(R_{1}\)), is the RGMM estimator with a penalty based on the restrictions \(R_{1}\). The third estimator, denoted by RGMM(\(R_{2}\)), is the RGMM estimator with a penalty based on the restrictions \(R_{2}\). The adaptive weights of the RGMM estimators are calculated based on the unrestricted fGMM estimator. The tuning parameter \(\lambda\) of the RGMM estimators is chosen using a repeated cross-validation with two folds, 50 repetitions, and a sequence of 40 potential \(\lambda\) values.2 The tuning parameter is chosen as the maximum of the parameters minimizing the median, 40% and 60% quantiles of the loss in the left-out fold.
Footnote 2: The loss in the left-out fold in the cross-validation is calculated as the weighted GMM loss with the variance and covariance conditions from the constraint \(J(B,u)\) and the coskewness and cokurtosis conditions implied by mutually mean independent shocks, where each moment is weighted by the inverse of the variance of the corresponding moment under the assumption of independent and normally distributed shocks.
Table 1 shows the average and MSE of each estimated element for the three estimators. Imposing structure with the correct \(R_{1}\) penalty substantially reduces the average bias and MSE compared to the unpenalized fGMM estimator. The greatest performance improvements are observed in the penalized elements, where the MSE of the RGMM(\(R_{1}\)) estimator is approximately \(70-80\%\) smaller than that of the fGMM estimator. Importantly, the penalty also enhances the performance of the unpenalized elements, where the MSE of the RGMM(\(R_{1}\)) estimator is approximately \(50-70\%\) smaller than that of the fGMM estimator. In contrast to the \(R_{1}\) penalty, the \(R_{2}\) penalty contains a false restriction, which sets the element \(B_{34}\) to zero even though the true value of the parameter is five.
\begin{table}
Table 1: Mean and MSE of each estimated element for the fGMM, RGMM(\(R_{1}\)), and RGMM(\(R_{2}\)) estimators.
\end{table}
The simulation shows that the estimator is able to detect and dismiss false restrictions. Specifically, the RGMM(\(R_{2}\)) estimator deviates from the incorrect restriction and does not force the estimated \(\hat{B}_{34}\) element to zero. Nevertheless, the \(R_{2}\) penalty leads to an increase of the bias and MSE of the \(B_{34}\) element; however, both decrease with the sample size. At the same time, the correct restrictions of the \(R_{2}\) penalty lead to a performance increase of the correctly penalized and also the unpenalized estimated elements. Overall, the positive impact of the correct restrictions outweighs the impact of the incorrect restriction, and the overall performance of the RGMM(\(R_{2}\)) estimator is superior to that of the unrestricted fGMM estimator.
These results demonstrate that the RGMM estimator can effectively use correctly imposed restrictions to achieve more precise estimates. Moreover, the estimator can detect and ignore incorrect restrictions, especially as more data and information become available against a given restriction.
## 5 Application
This section analyzes the effects of different approaches to incorporate prior knowledge on the oil and stock market interaction. The results using the traditional recursive approach suggest that the stock market provides no additional information about the oil price. In contrast, when prior knowledge is incorporated through the proposed ridge estimator, it becomes apparent that information shocks originating from stock prices are significant drivers of oil price fluctuations. Specifically, including stock market information in a non-recursive manner reveals that a considerable portion of what is usually classified as oil-specific demand shocks can actually be attributed to information shocks. These findings highlight the importance of stock market information when examining oil price fluctuations, as it provides valuable insights into the underlying forces driving the oil market.
### Specification and estimators
The SVAR uses monthly data from February 1974 to September 2022 with
\[\begin{bmatrix}q_{t}\\ y_{t}\\ p_{t}\\ s_{t}\end{bmatrix}=\alpha+\sum_{i=1}^{12}A_{i}\begin{bmatrix}q_{t-i}\\ y_{t-i}\\ p_{t-i}\\ s_{t-i}\end{bmatrix}+\begin{bmatrix}b_{11}&b_{12}&b_{13}&b_{14}\\ b_{21}&b_{22}&b_{23}&b_{24}\\ b_{31}&b_{32}&b_{33}&b_{34}\\ b_{41}&b_{42}&b_{43}&b_{44}\end{bmatrix}\begin{bmatrix}\varepsilon_{S,t}\\ \varepsilon_{Y,t}\\ \varepsilon_{D,t}\\ \varepsilon_{SM,t}\end{bmatrix}, \tag{14}\]
where \(q_{t}\) is 100 times the log of world crude oil production, \(y_{t}\) is 100 times the log of global industrial production, \(p_{t}\) is 100 times the log of the real oil price, and \(s_{t}\) is 100 times the log of a monthly U.S. stock price index.3
Footnote 3: Global oil production is given by the global crude oil including lease condensate production obtained from the U.S. EIA. Global industrial production is given by the monthly industrial production index in the OECD and six major other countries obtained from Baumeister and Hamilton (2019). The real oil price is equal to the refiner’s acquisition cost of imported crude oil from the U.S. EIA deflated by the U.S. CPI. Real stock prices correspond to the aggregate U.S. stock index constructed by the OECD deflated by the U.S. CPI.
Kilian and Park (2009) estimate a similar four variable oil and stock market model and propose to identify four shocks using a recursive order.4 Specifically, in the recursive model economic activity shocks \(\varepsilon_{Y,t}\) cannot simultaneously affect oil supply, oil-specific demand shocks \(\varepsilon_{D,t}\) cannot simultaneously affect oil supply and economic activity, and stock market information shocks \(\varepsilon_{SM,t}\) cannot simultaneously affect oil supply, economic activity, and the oil price.
Footnote 4: The model analyzed in Kilian and Park (2009) uses a slightly different specification. Specifically, the authors use an economic activity index based on shipping costs. However, as noted by Baumeister et al. (2022), the shipping index may not always be a reliable indicator of changes in global economic activity. Therefore, I follow the approach taken by Baumeister and Hamilton (2019) and use a conventional measure of economic activity based on industrial production. Despite the potential limitations of the shipping index, Baumeister et al. (2022) note that it can serve as a forward-looking indicator. As such, I include an alternative specification in the appendix where the shipping index replaces the stock price. The results indicate that information shocks based on the shipping index have a similar impact on the oil price compared to information shocks based on stock prices analyzed in this section.
The recursive restrictions have two major limitations. Firstly, the reduced form oil supply shocks are, by construction, identified as structural oil supply shocks which implies that oil supply cannot respond simultaneously to demand shocks. Secondly, the reduced form oil price shocks that cannot be explained by supply and economic activity shocks are, by
construction, identified as oil-specific demand shocks. However, if the oil price responds immediately to information shocks extracted from stock prices, these information shocks would end up in the oil-specific demand shock of the recursive model. The former issue regarding the response of oil supply to non-supply shocks received a lot of attention in the literature, see, e.g. Kilian and Murphy (2012), Kilian and Murphy (2014), Baumeister and Hamilton (2019), Caldara et al. (2019), and Braun (2021), while the latter issue on the impact of stock market information shocks on the oil price received little attention.
In contrast to the recursive estimator, which relies on a fixed number of restrictions to ensure identification, the proposed RGMM estimator does not use restrictions to ensure identification. As a result, it does not need to impose the two assumptions in question. The simulations in the previous section show that the ridge estimator can handle false restrictions; however, including them can lead to a small-sample bias and a performance decrease. Therefore, the ridge estimator in this section does not impose the two restrictions in question. Instead, the estimator uses the set of restrictions:
\[R=\begin{bmatrix}.&0&.&0\\ 0&.&0&0\\.&.&.&.\\.&.&.&.\end{bmatrix}. \tag{15}\]
This allows oil supply to simultaneously respond to demand shocks \(\varepsilon_{D,t}\) and it allows the oil price to simultaneously respond to stock market information shocks \(\varepsilon_{SM,t}\). Furthermore, the restrictions in Equation (15) incorporate the assumption that economic activity behaves sluggishly and does not simultaneously respond to oil supply shocks, which is the same argument motivating the zero response of economic activity to oil-specific demand and stock market information shocks.
I use three different estimators to analyze the impact of incorporating prior economic knowledge on the interaction between the oil and stock market in the SVAR model:
1. Recursive: Recursive SVAR estimated using a Cholesky decomposition.
2. Ridge: RGMM estimator penalizing deviations from \(R\) in Equation (15).
3. Unrestricted: RGMM estimator without a penalty.
The recursive estimator represents the traditional approach to include prior economic knowledge using restrictions. The estimator requires a fixed number of restrictions, which are used to ensure identification and cannot be updated by the data. The ridge estimator represents the proposed approach to include prior knowledge. It shrinks towards the economically motivated restrictions in Equation (15), but does not rely on them to ensure identification. As a result, the estimator can deviate from the restrictions if they are not consistent with the data. The unrestricted estimator represents a data-driven estimation approach, which ignores any prior knowledge on the interaction.
The labeling of all estimators is determined by the solution to the recursive model. Specifically, I estimate the recursive model using the Cholesky decomposition, and use the resulting estimated simultaneous interaction to construct a set of unique sign-permutation representatives centered at the recursive solution, see Section 3. This set restricts the admissible \(B\) matrices in all estimations and determines the labeling: within the set and in line with the recursive labeling, the first shock represents an oil supply shock, the second shock is an economic activity shock, the third shock is an oil-specific demand shock, and the last shock is the stock market shock. In addition, the weights of the ridge estimators are constructed based on the unrestricted estimator. Furthermore, the tuning parameter \(\lambda\) required for the ridge estimator is determined in a similar fashion to the previous section, using a repeated cross-validation with two folds and 100 repetitions. The results are shown in the appendix. Lastly, the non-Gaussianity of the reduced form and estimated structural form shocks from all estimators, measured by skewness, excess kurtosis, and the Jarque-Bera test, is reported in the appendix. The results indicate that all shocks are left skewed with heavy tails.
### Empirical results
Figures 1 and 2 display the estimated response of the oil price to oil supply shocks and the estimated response of oil supply to oil-specific demand shocks, respectively. The recursive estimator suggests that a one standard deviation oil supply shock leads to an immediate oil price increase of around one percent, while the unrestricted and ridge estimator both find a larger response to supply shocks. The smaller oil price response in the recursive SVAR can be attributed to the response of oil supply to oil-specific demand shocks. Specifically, in the recursive model, oil supply cannot contemporaneously respond to oil-specific demand shocks. Therefore, the correlation of oil supply and the oil price is determined by the response of the oil price to oil-specific demand shocks. In contrast, the ridge estimator and the unrestricted estimator both find a positive response of oil supply to oil-specific demand shocks. Thus, the response of the oil price to a (negative) supply shock needs to increase in order to explain the correlation of oil supply and the oil price.
Figure 3 presents the estimated response of the oil price to stock market information shocks. All models suggest that the stock market information shock leads to an immediate increase in the stock price of around three percent and a medium-run increase in oil supply and economic activity, as documented in the appendix. This suggests that the stock market information shock contains news about future economic activity in all models. The ridge estimator and the unrestricted estimator allow to disentangle the simultaneous movement of
Figure 1: Oil price response to a one standard deviation oil supply shock.
residual oil and stock prices into oil-specific demand and stock market information shocks, which can affect both variables simultaneously. Both estimators find that the oil price immediately increases in response to the stock market information shock. In contrast, in the recursive model, the response of the oil price to stock market information shocks is restricted to zero on impact and the recursive model suggests that the information shock has no significant impact on the oil price in the medium and long run. In the recursive model, a shock that simultaneously affects the residual oil price unexplained by supply and economic activity shocks is by construction identified as an oil-specific demand shock. Therefore, information shocks which affect the oil price will be contained in the oil-specific demand shock of the recursive model, while the stock market information shock in the recursive model contains a mixture of demand and information shocks with no immediate impact on the oil price.
Table 2 shows the effect of the recursiveness restrictions on the forecast error variance decomposition of the oil price. According to the recursive model, oil-specific demand shocks are the primary driver, explaining more than 80% of the variation, while supply shocks account for less than 6% and stock market information shocks even less. In contrast, the ridge estimator suggests that oil supply and oil-specific demand both explain around \(20-30\%\) of the oil price variation each. Therefore, the importance of supply shocks for the oil price variation is in line with the results in Baumeister and Hamilton (2019). Moreover,
Figure 2: Oil supply response to a one standard deviation oil-specific demand shock.
the ridge estimator allows to disentangle oil-specific demand and stock market information shocks which simultaneously affect stock and oil prices. Allowing both shocks to affect the oil price simultaneously results in a larger importance of stock market information shocks, which now explain around \(30-40\%\) of the oil price variation.
Figure 4 presents the historical decomposition of the oil price and sheds light on the importance of supply, demand, and information shocks in different periods. In the recursive SVAR, the oil price is predominately driven by oil-specific demand shocks, as seen in the oil price movements during significant events such as the collapse of OPEC in 1985, the Persian Gulf War in 1990, the oil price run up in \(2007-2008\), the oil price decline and recovery following the collapse of Lehman Brothers in 2008, the oil price decline in \(2014-2016\), the oil price collapse and recovery at the beginning of the COVID-19 pandemic, and the recent oil price increase in 2022 following the invasion of Ukraine.
On the other hand, the ridge estimator provides a more nuanced picture. Firstly, it
\begin{table}
\begin{tabular}{c|c c c c}
\multicolumn{5}{c}{Recursive estimator} \\
horizon & \(\varepsilon_{S}\) & \(\varepsilon_{Y}\) & \(\varepsilon_{D}\) & \(\varepsilon_{SM}\) \\ \hline
4 & 0.04 & 0.04 & 0.91 & 0.01 \\
  & 0.02/0.08 & 0.02/0.07 & 0.86/0.93 & 0.0/0.02 \\
12 & 0.06 & 0.1 & 0.84 & 0.0 \\
  & 0.03/0.11 & 0.05/0.16 & 0.74/0.87 & 0.01/0.03 \\
24 & 0.04 & 0.09 & 0.86 & 0.01 \\
  & 0.03/0.1 & 0.05/0.15 & 0.73/0.87 & 0.01/0.06 \\
\end{tabular}
\begin{tabular}{c|c c c c}
\multicolumn{5}{c}{Ridge estimator} \\
horizon & \(\varepsilon_{S}\) & \(\varepsilon_{Y}\) & \(\varepsilon_{D}\) & \(\varepsilon_{SM}\) \\ \hline
4 & 0.25 & 0.06 & 0.27 & 0.42 \\
  & 0.17/0.54 & 0.03/0.09 & 0.1/0.34 & 0.18/0.54 \\
12 & 0.26 & 0.12 & 0.24 & 0.38 \\
  & 0.18/0.53 & 0.06/0.18 & 0.07/0.32 & 0.15/0.49 \\
24 & 0.23 & 0.12 & 0.32 & 0.34 \\
  & 0.14/0.51 & 0.05/0.18 & 0.1/0.4 & 0.13/0.46 \\
\end{tabular}
\end{table}
Table 2: Forecast error variance decomposition of the real price of oil.
Figure 3: Oil price response to a one standard deviation stock market news shock.
suggests that supply shocks played a larger role during the collapse of OPEC in 1985, the Persian Gulf War in 1990, and the oil price run up in \(2007-2008\). Secondly, the ridge estimator suggests that information shocks extracted from stock prices contributed largely to the oil price increase in \(2007-2008\), the oil price decrease following the collapse of Lehman Brothers in 2008, the oil price decrease in \(2014-2016\), and also to the oil price decrease and recovery at the beginning of the COVID-19 pandemic.
Overall, the analysis suggests that the oil price responds immediately to information shocks extracted from stock prices and that these information shocks contributed largely to the oil price variation. Furthermore, including stock market information non-recursively reveals that a considerable portion of what is typically classified as oil-specific demand shocks can
Figure 4: Historical decomposition of the real oil price.
actually be attributed to information shocks. This finding underscores the importance of considering stock market information when examining oil price fluctuations, as it provides valuable insights into the underlying factors driving the market.
## 6 Conclusion
This study demonstrates the value of prior economic knowledge in combination with statistical identification approaches for SVAR models. The estimator proposed in this study uses a data-driven approach which only requires mean independent shocks to ensure identification and adds a ridge penalty to include restrictions representing uncertain prior economic knowledge. The simulations show that incorporating uncertain prior economic knowledge can enhance the accuracy of estimates and that the estimator can detect and reduce the impact of incorrect restrictions as sample size increases. The application illustrates the usefulness of the proposed estimator and highlights the role of stock market information in explaining oil price fluctuations.
|
2305.17369 | Modularized Zero-shot VQA with Pre-trained Models | Large-scale pre-trained models (PTMs) show great zero-shot capabilities. In
this paper, we study how to leverage them for zero-shot visual question
answering (VQA). Our approach is motivated by a few observations. First, VQA
questions often require multiple steps of reasoning, which is still a
capability that most PTMs lack. Second, different steps in VQA reasoning chains
require different skills such as object detection and relational reasoning, but
a single PTM may not possess all these skills. Third, recent work on zero-shot
VQA does not explicitly consider multi-step reasoning chains, which makes them
less interpretable compared with a decomposition-based approach. We propose a
modularized zero-shot network that explicitly decomposes questions into sub
reasoning steps and is highly interpretable. We convert sub reasoning tasks to
acceptable objectives of PTMs and assign tasks to proper PTMs without any
adaptation. Our experiments on two VQA benchmarks under the zero-shot setting
demonstrate the effectiveness of our method and better interpretability
compared with several baselines. | Rui Cao, Jing Jiang | 2023-05-27T05:00:14Z | http://arxiv.org/abs/2305.17369v2 | # Modularized Zero-shot VQA with Pre-trained Models
###### Abstract
Large-scale pre-trained models (PTMs) show great zero-shot capabilities. In this paper, we study how to leverage them for zero-shot visual question answering (VQA). Our approach is motivated by a few observations. First, VQA questions often require multiple steps of reasoning, which is still a capability that most PTMs lack. Second, different steps in VQA reasoning chains require different skills such as object detection and relational reasoning, but a single PTM may not possess all these skills. Third, recent work on zero-shot VQA does not explicitly consider multi-step reasoning chains, which makes them less interpretable compared with a decomposition-based approach. We propose a modularized zero-shot network that explicitly decomposes questions into sub reasoning steps and is highly interpretable. We convert sub reasoning tasks to acceptable objectives of PTMs and assign tasks to proper PTMs without any adaptation. Our experiments on two VQA benchmarks under the zero-shot setting demonstrate the effectiveness of our method and better interpretability compared with several baselines.
## 1 Introduction
Visual Question Answering (VQA), the task of answering textual queries based on information contained in an image, is a multimodal task that requires comprehension and reasoning of both visual and textual content Agrawal et al. (2017); Hudson and Manning (2019). Most previous work on VQA either trains VQA models from scratch (e.g., Fukui et al. (2016); Anderson et al. (2018)) or fine-tunes pre-trained vision-language models for VQA (e.g., Li et al. (2019); Lu et al. (2019)). Thus, they rely heavily on labeled VQA data, which are expensive to obtain. VQA models based on supervised learning are also hard to generalize to new domains or new datasets Xu et al. (2020); Chao et al. (2018); Zhang et al. (2021).
Recently, large-scale pre-trained models (PTMs) have demonstrated strong transferability to different downstream tasks under zero-shot settings, i.e., without any training data for the downstream tasks Brown et al. (2020); Radford et al. (2021). With increased pre-training data size, these models show strong zero-shot performance on various downstream tasks, such as image classification and face detection with the CLIP model Radford et al. (2021) and sentiment analysis and commonsense question answering with the GPT-3 model Brown et al. (2020). However, few studies have focused on zero-shot VQA from pre-trained models.
Despite the power of these PTMs, it is not straightforward to directly apply them to VQA under zero-shot settings, because they are not pre-trained with the same objective as VQA. Some recent work converts images to tokens that pre-trained language models can understand so that VQA can be converted to text-based QA Yang et al. (2022); Tong et al. (2022); Tsimpoukelli et al. (2021); Jin et al. (2022); Dai et al. (2022). However, this approach requires either a strong pre-trained image captioning model that can capture sufficient visual details or auxiliary training to obtain such a captioning model. Some other work converts VQA into a multimodal matching problem so that pre-trained vision-language models (PT-VLMs) such as CLIP can be used Song et al. (2022); Shen et al. (2022). However, complex VQA questions such as those found in the GQA dataset Hudson and Manning (2019) often require spatial reasoning and/or multi-step reasoning, which PT-VLMs may not be strong at Subramanian et al. (2022); Thrush et al. (2022).
VQA questions can be complicated and often require different reasoning steps such as object detection and spatial reasoning, as the example question in Figure 1 illustrates. Previously, people proposed Neural Module Networks Andreas et al. (2016); Hu et al. (2017), which are modularized net
works where each pre-defined module performs a specific reasoning task. These pre-defined modules are trained end-to-end from labeled VQA data. Motivated by the idea of modularization, in this paper, we propose a modularized zero-shot network for VQA (**Mod-Zero-VQA**) by decomposing questions into sub-tasks and assigning appropriate sub-tasks to PTMs without any adaptation. Given a question, we first parse the question into basic reasoning steps explicitly. These reasoning steps will then be reconfigured and mapped to different PTMs based on a set of rules we define. Specifically, we consider the following PTMs: _OWL_(Minderer et al., 2022) as the object detector, _MDETR_(Kamath et al., 2021) for reference expression localization (including several skills such as relational and spatial reasoning) and _CLIP_(Radford et al., 2021) as the answer generator for open-ended questions. Considering the limited capabilities of current pre-trained vision-language models in spatial relation understanding (Subramanian et al., 2022), we also define simple and general heuristics to aid spatial reasoning. Note that only when we decompose questions and reasoning chains step by step can we insert human heuristics for spatial reasoning, because we have the intermediate outputs such as objects' bounding boxes from previous steps.
We evaluate the proposed method on the GQA dataset (Hudson and Manning, 2019) where questions are compositional and require multi-step reasoning. The experimental results show that the proposed model surpasses the baselines significantly on GQA, with nearly 13% relative improvement over the strongest baseline (from 41.9 to 47.3). The results confirm the benefit of modularization when using PTMs for zero-shot VQA. In addition, our method is interpretable because of the explicit reasoning steps generated.
The contributions of our work can be summarized as follows: (1) We propose a novel modularized zero-shot VQA method that utilizes different pre-trained models for different reasoning steps; (2) We design rules to map different VQA reasoning steps to suitable PTMs so that we can leverage these PTMs without any adaptation; (3) Experimental results show the effectiveness of the proposed method, especially when questions require multiple steps of reasoning.
## 2 Background
**Task Definition.** Given an image \(I\) and a question \(Q\), a VQA system is expected to return an answer \(a\). Traditional fully supervised VQA relies on a training set consisting of (image, question, answer) triplets. For zero-shot VQA, no such training data is given. However, in this paper we assume that we can use pre-trained models (PTMs) to help us with zero-shot VQA.
**Existing Zero-shot VQA Methods.** Work on zero-shot VQA is very limited. We can organize existing work into the following categories. One line of work leverages the question answering capability of pre-trained language models (LMs). Some of them adopt prefix language modeling with weakly-supervised data other than VQA data (i.e., image-text pairs) to convert visual information into discrete tokens (prefix) that LMs can understand. Frozen (Tsimpoukelli et al., 2021), VLKD (Dai et al., 2022) and FewVLM (Jin et al., 2022) fall under this category. Some directly convert VQA images into textual descriptions so that the task of VQA changes to text-based QA and LMs can be applied. Methods in this category include PICa (Yang et al., 2022) and PnP-VQA (Tiong et al., 2022).
Figure 1: An overview of our proposed method. Instead of training modules in NMN, we propose a modularized zero-shot VQA method leveraging pre-trained models to perform different reasoning tasks.
Recent work (Song et al., 2022; Shen et al., 2022) converts VQA to an image-text matching problem and prompts the CLIP model (Radford et al., 2021), a large-scale vision-language model pre-trained on the image-text matching task. The prompts can be either question-irrelevant, such as _Question:_ [Ques]; _Answer:_ [MASK] (QIP by Shen et al. (2022)), or question-related, by converting questions into a masked statement (TAC-P by Song et al. (2022)).
However, a limitation of these methods is that several of them still require training, although the training data is not in the form of VQA data. Besides, converting images to captions and leveraging text-based QA may lose important visual details during the caption generation step. The two methods above using CLIP do not address the issue that the CLIP model lacks compositional and spatial reasoning abilities, which has been observed in previous work (Subramanian et al., 2022; Thrush et al., 2022).
## 3 Modularized Zero-shot VQA
Our method is motivated by Neural Module Network (NMN) based VQA, which decomposes questions into reasoning steps, where each module in the NMN is pre-defined to perform a specific reasoning task. The idea allows us to select appropriate pre-trained models to handle different reasoning tasks in a question. Specifically, in NMN-based VQA, we first manually define a set of reasoning steps such as object detection and spatial reasoning, each represented by a _module_. A question is then explicitly decomposed and converted into a _layout_ of modules, which is an executable program showing the reasoning chain to reach the final answer. The top section of Figure 1 shows the layout corresponding to the sample question. To train an NMN-based VQA system, usually a layout generator is separately built first, which either uses hand-crafted rules over dependency parses of questions or is a trained seq2seq model. Then, the parameters of the various VQA modules are learned from VQA training data.
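To make the notion of a layout concrete, the snippet below shows one way such a decomposition could be represented in code. The question and the exact step format are illustrative assumptions (they are not taken from the paper or from Figure 1); they only mirror the Find/Filter/Exist style of modules described here.

```python
# A hypothetical question and an NMN-style layout written as an executable program:
# an ordered list of (module, arguments) steps that a layout generator might emit.
question = "Is there a red apple in the picture?"   # illustrative example question

layout = [
    ("Find",   {"object": "apple"}),    # locate candidate boxes -> attention map
    ("Filter", {"condition": "red"}),   # keep boxes matching the attribute
    ("Exist",  {}),                     # map the final attention map to yes / no
]

for module, args in layout:
    print(module, args)
```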
For our work, we do not want to use VQA data for training. But we observe that many modules in NMN-based VQA can be supported by pre-trained models that have already acquired the capabilities needed by these modules. The key component of our method is therefore to map a layout of modules produced by traditional NMN-based VQA to a simplified layout of zero-shot components that can be implemented directly using pre-trained models.
### Traditional VQA Modules
There is not any standard set of modules for VQA. We largely adopt the design of modules introduced by Hu et al. (2017) with some minor changes. We assume that the image has been pre-processed and \(N\) bounding boxes have been detected, each represented as an embedding vector, collectively denoted as \(\mathbf{V}=(\mathbf{v}_{1},\mathbf{v}_{2},\ldots,\mathbf{v}_{N})\). An attention map \(\mathbf{\alpha}\) is defined to be a distribution over the \(N\) bounding boxes.
Table 1 lists the most important traditional VQA modules that we will replace with pre-trained models. The full list of modules can be found in Table 7 in the appendices. It is worth explaining that besides taking in \(\mathbf{V}\) and \(\mathbf{\alpha}\) as either input or output, many modules also take in the word embeddings of some text description extracted from the question. These text embeddings are arguments to control the behaviors of the modules. For example, the Find module's objective is to locate an object among all the bounding boxes given. The textual input \(\mathbf{g}_{\text{OBJ}}\) is therefore the word embedding of the name of the object to be found. Similarly, \(\mathbf{g}_{\text{RELA}}\), \(\mathbf{g}_{\text{ATTR}}\) and \(\mathbf{g}_{\text{QUERY}}\) are word embeddings for the description of a relation (e.g., _to the left of_), an attribute (e.g., _red_) and the aspect to query (e.g., querying _name_).
Traditionally, the parameters of the modules in Table 1 need to be learned from VQA training data. In other words, these modules' underlying capabilities such as object recognition and relational reasoning need to be acquired from VQA data. However, we hypothesize that recently developed pre-trained models may already have some of these capabilities and can therefore directly equip these modules with such capabilities. For example, the Find module is mainly responsible for object recognition, and previously the parameters of Find had to be learned from scratch using VQA data.
| Module | Inputs |
| --- | --- |
| Find | \(\mathbf{V}\), \(\mathbf{g}_{\text{OBJ}}\) |
| Relocate | \(\mathbf{\alpha}\), \(\mathbf{V}\), \(\mathbf{g}_{\text{RELA}}\) |
| Filter | \(\mathbf{\alpha}\), \(\mathbf{V}\), \(\mathbf{g}_{\text{CONDI}}\) |
| Choose | \(\mathbf{\alpha}_{1}\), \(\mathbf{\alpha}_{2}\), \(\mathbf{V}\), \(\mathbf{g}_{\text{RELA}^{1}}\), \(\mathbf{g}_{\text{RELA}^{2}}\) |
| Query | \(\mathbf{\alpha}\), \(\mathbf{V}\), \(\mathbf{g}_{\text{QUERY}}\) |

Table 1: A subset of the modules in traditional NMN that we replace with pre-trained models. Modules in the first block (Find, Relocate, Filter) output an attention map and those in the second block (Choose, Query) generate an answer.
Now, with a powerful pre-trained model such as OWL (Minderer et al., 2022) that can recognize a wide range of objects, we can presumably directly use a model like OWL to replace the traditional Find module.
### Pre-trained Models
We utilize three pre-trained models that we believe are highly relevant to VQA.
**OWL.** The Vision Transformer for Open-World Localization (OWL) model (Minderer et al., 2022) is a model for open-vocabulary object detection. It is first pre-trained on large-scale image-text pairs and then fine-tuned with added detection heads on medium-sized detection data. Given the category name of an object and an image, the model is able to locate the bounding box(es) in the image containing the object, together with a confidence score for each box.
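As a rough illustration, the sketch below runs zero-shot object detection with the OWL-ViT checkpoints released through the Hugging Face transformers library; the checkpoint name, the image path and the 0.2 threshold are assumptions for this example, and the post-processing helper may be named differently in older library versions.

```python
import torch
from PIL import Image
from transformers import OwlViTProcessor, OwlViTForObjectDetection

processor = OwlViTProcessor.from_pretrained("google/owlvit-base-patch32")
model = OwlViTForObjectDetection.from_pretrained("google/owlvit-base-patch32")

image = Image.open("kitchen.jpg")       # hypothetical input image
queries = [["a photo of a cup"]]        # one list of text queries per image

inputs = processor(text=queries, images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Convert raw predictions to boxes in pixel coordinates and filter by confidence.
target_sizes = torch.tensor([image.size[::-1]])  # (height, width)
results = processor.post_process_object_detection(
    outputs, threshold=0.2, target_sizes=target_sizes
)[0]
for box, score in zip(results["boxes"], results["scores"]):
    print(box.tolist(), float(score))
```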
**MDETR.** The modulated DETR (DEtection TRansformer) model (Kamath et al., 2021) is an end-to-end detector that can detect an object in an image conditioned on a piece of textual description of the object such as its attributes and its relation with another object in the image. The model is pre-trained on image-text pairs with explicit alignment between phrases in the text and bounding boxes of objects in the image. Given an image and the description of an object, MDETR is able to locate the bounding box(es) in the image containing the object satisfying the description. Note that different from OWL, MDETR is able to understand textual descriptions that may contain attribute information and/or complex visual relations. For example, given the description _a man holding a yellow cup is talking_, MDETR will detect the bounding box containing the man holding a yellow cup in the given image, whereas OWL is not able to use the description and will only recognize all bounding boxes containing a man. Note that we use the version of MDETR pre-trained on general modulated detection **without fine-tuning** for any downstream tasks.
**CLIP.** CLIP is a well-known large-scale vision-language model by OpenAI. It is pre-trained with 400M image-caption pairs through contrastive learning. Given an (image, text) pair, CLIP uses its separate image encoder and text encoder to turn the image and the text each into a vector, and the cosine similarity between the two vectors directly measures the compatibility of the two. Recent work has shown that CLIP can be directly used for VQA in a zero-shot setting, if we can come up with a set of candidate answers and transform each (question, answer) pair into a statement (Song et al., 2022).
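A minimal sketch of CLIP-based answer matching is given below, assuming the Hugging Face CLIP implementation; the checkpoint, the candidate statements and the cropped image are placeholders rather than the paper's exact prompt templates.

```python
import torch
from PIL import Image
from transformers import CLIPProcessor, CLIPModel

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch16")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch16")

image = Image.open("cropped_region.jpg")   # hypothetical cropped or masked image
candidates = ["a photo of a dog", "a photo of a cat", "a photo of a horse"]

inputs = processor(text=candidates, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# logits_per_image holds the scaled cosine similarities between the image and each text.
scores = outputs.logits_per_image.softmax(dim=-1)[0]
print(candidates[int(scores.argmax())])
```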
### Zero-shot NMN using Pre-trained Models
Based on the descriptions of the traditional VQA modules in Section 3.1 and of the three PTMs we consider in Section 3.2, we can see that there are obvious connections between the capabilities desired by the traditional modules and the capabilities that these PTMs have already acquired.
However, the mapping between them is not trivial. First of all, there is no simple one-to-one mapping from traditional VQA modules to the PTMs. For example, the MDETR model can already perform multiple steps of reasoning to locate the desired object, so it can be used to cover a sequence of modules in an NMN layout. Second, there may be capabilities required when applying PTMs but not captured by modules defined in NMN-based VQA. In particular, the MDETR model always assumes that the object to be grounded exists in the given image, but for those questions asking for the existence of a specified object, we cannot directly use MDETR.
To address these challenges, we carefully design a mapping mechanism that can map an NMN-based module layout to a simplified layout consisting of a few zero-shot modules. Three of these zero-shot modules (OWL, MDETR and CLIP) correspond exactly to the three PTMs introduced earlier. The rest of the zero-shot modules are defined by simple heuristic rules. We list these zero-shot modules in Table 2.
We now give a high-level summary of the mapping mechanism below. We first look at the last module in the NMN layout. If the last module
| Module | Inputs | Output |
| --- | --- | --- |
| OWL | \(I\), OBJ | \(\mathcal{B}\), \(\mathbf{s}\) |
| MDETR | \(I\), SENT | \(\mathcal{B}\), \(\mathbf{s}\) |
| CLIP | \(\mathcal{B}\), \(I\), \(\mathcal{V}\) | Ans. |
| Count | \(\mathcal{B}\) | Num. |
| Exist | \(\mathcal{B}\), (ATTR/RELA) | _Yes/No_ |
| And | \(\mathtt{Exist}_{1},\mathtt{Exist}_{2}\) | _Yes/No_ |
| Or | \(\mathtt{Exist}_{1},\mathtt{Exist}_{2}\) | _Yes/No_ |

Table 2: Zero-shot modules with either pre-trained models or heuristics. \(I\) is the VQA image, \(\mathcal{V}\) is the answer vocabulary and \(\mathcal{B}\) is the set of bounding boxes.
is one of Choose, Compare and Query, we know that the input to this last module is either a single attention map or two attention maps, where each attention map essentially tries to capture an object matching some textual descriptions. By tracing the path in the layout leading to the attention map, we choose either the zero-shot OWL module (when the path has a length of 1) or the zero-shot MDETR module (when the path is longer than 1 hop). This is because when the path length equals one, it involves only object detection (corresponding to a single Find module in the NMN layout for generation of the attention map). When the path length is more than one, it indicates the generation of the attention map in the NMN layout involves other modules such as Filter and Relocate, which calls for abilities other than object detection, such as language understanding, attribute recognition and relational reasoning. Different from NMN modules which take in image features and object embeddings to generate an attention map, our zero-shot OWL and zero-shot MDETR take in the raw image and the raw text to locate (OBJ for OWL and SENT for MDETR) and generate a set of detected bounding boxes \(\mathcal{B}=\{\mathbf{b}_{n}\}_{n=1}^{N}\) together with their confidence scores \(\mathbf{s}\in\mathbb{R}^{N}\), where \(\mathbf{b}_{n}\in\mathbb{R}^{4}\) represents the relative position and size of the detected bounding box in the image. We keep only the bounding box from either OWL or MDETR with the highest confidence score and feed it to CLIP. We generate an answer by leveraging the capability of multimodal matching of CLIP. Specifically, given \(\mathcal{B}\), we generate an input image (which we refer to as \(I^{\text{in}}\)) by either masking regions not containing those detected boxes (\(|\mathcal{B}|=2\)) or cropping the image so that only the part containing the box remains (\(|\mathcal{B}|=1\)). If the final NMN module is Choose, we generate a masked template by question conversion as in (Song et al., 2022); otherwise the masked template will be a simple "[MASK]". Then we match the image \(I^{\text{in}}\) with the template where the [MASK] token is replaced by each of the answer candidates in \(\mathcal{V}\). We then select the answer that, when placed inside the template, best matches the image.
If the module is Exist, we trace back the path leading to Exist to determine whether the module is asking for the existence of an object, an attribute or a relation. For object existence (e.g., _is there a car_), we use the zero-shot OWL module. For attribute existence and relation existence, we first verify whether all mentioned nouns (objects) detected by a POS tagger in the question exist with the OWL module. Once we detect an object that does not exist, the predicted answer will be _no_. If all objects exist, then we generate corresponding bounding boxes leveraging either OWL or MDETR following the method described in the paragraph above. For attribute existence, we generate a pair of positive and negative descriptions: (ATTR, _not_ ATTR), e.g., (_red_, _not red_). We then find which description aligns better with the cropped image according to \(\mathbf{b}\). If the image aligns better with the positive statement, then the answer will be _yes_; otherwise, _no_. For relation existence, we generate the masked image \(I^{\text{in}}\) according to \(\mathbf{b}_{1}\) and \(\mathbf{b}_{2}\) (the bounding boxes of the two objects whose relation is to be checked) and a pair of opposite statements regarding the relation to be checked, following (Song et al., 2022). For example, if the question is to check whether \(A\) is holding \(B\), the two opposite statements will be \(A\) is holding \(B\) and \(A\) is not holding \(B\). For both attribute and relation existence, we use zero-shot CLIP for the alignment between the input image and the statements. More details and the workflows of existence-related questions are provided in Appendix C.
If the module is Count, we directly count the number of bounding boxes in \(\mathcal{B}\) returned either from OWL or MDETR. Finally, if the last module is a logical AND or logical OR, we further trace back to the inputs of this module, which should both be Exist modules. We then use the same mechanism described above for Exist to process each of them. Logical operations are then applied to the outputs of the Exist modules to determine the final answer. The deterministic logical operations can be found in Appendix B.
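The sketch below illustrates how the deterministic Exist, Count and logical modules could look; the `detector` wrapper, the 0.2 threshold and the yes/no convention are assumptions here, since the paper's exact logical operations are only given in its Appendix B.

```python
def exist_object(image, obj_name, detector, threshold=0.2):
    """Object existence: 'yes' iff the open-vocabulary detector finds at least one box.
    `detector` is a hypothetical wrapper around OWL returning (boxes, scores)."""
    boxes, scores = detector(image, obj_name)
    return "yes" if any(s >= threshold for s in scores) else "no"

def count_objects(boxes):
    """Count module: simply the number of detected bounding boxes."""
    return len(boxes)

# Deterministic logical operations over the outputs of two Exist modules.
def logical_and(exist1, exist2):
    return "yes" if exist1 == "yes" and exist2 == "yes" else "no"

def logical_or(exist1, exist2):
    return "yes" if exist1 == "yes" or exist2 == "yes" else "no"
```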
### Spatial Heuristics
As mentioned in (Subramanian et al., 2022), CLIP is less capable of spatial reasoning. Using CLIP for answer generation may not be enough when it involves spatial relation understanding. Following (Subramanian et al., 2022), we define simple and general heuristics to perform certain types of spatial reasoning. Note that only when we decompose questions explicitly can we insert the spatial heuristics into CLIP-based answer generation because we have the intermediate outputs from previous reasoning steps.
First of all, given the coordinates and the size of a bounding box, we use manual rules (named as **SpD**) to decide its position in the image as _left, right, bottom, top_. Besides, we define heuristics, denoted as **SpC**, to solve spatial relations between two bounding boxes (e.g., _to the left of_ and _to the right of_).
Details of the implementation of the spatial relation solvers can be found in Appendix D.
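Since the actual rules are only specified in Appendix D, the sketch below shows one plausible center-of-box implementation of SpD and SpC; the concrete comparisons and the box format (x, y, width, height in pixels) are assumptions, not the paper's exact heuristics.

```python
def spd(box, img_w, img_h):
    """Decide the absolute position of a box (x, y, w, h) in the image."""
    cx, cy = box[0] + box[2] / 2, box[1] + box[3] / 2
    horizontal = "left" if cx < img_w / 2 else "right"
    vertical = "top" if cy < img_h / 2 else "bottom"
    return horizontal, vertical

def spc(box_a, box_b, relation):
    """Check a spatial relation between two boxes by comparing their centers."""
    cx_a = box_a[0] + box_a[2] / 2
    cx_b = box_b[0] + box_b[2] / 2
    if relation == "to the left of":
        return cx_a < cx_b
    if relation == "to the right of":
        return cx_a > cx_b
    raise ValueError(f"unsupported relation: {relation}")
```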
## 4 Experiments
### Dataset
We evaluate the proposed modularized zero-shot VQA method on two benchmarks: GQA (Hudson and Manning, 2019) and VQAv2 (Goyal et al., 2017). The GQA dataset consists of questions requiring multi-step reasoning and various reasoning skills. Around \(94\%\) of the questions require multiple reasoning steps. We regard it as the main dataset to demonstrate the effectiveness of the proposed method compared with the baselines. Compared with GQA, questions on the VQAv2 dataset require fewer reasoning steps and are of diverse semantics. We use VQAv2 to show the validity of our method in real-world VQA. We report standard accuracy for the GQA dataset and soft accuracy (Goyal et al., 2017) for the VQAv2 dataset, as the latter has multiple ground-truth answers. We report the statistics of the datasets in Appendix E.
### Implementation Details
We conduct experiments on an NVIDIA Tesla V100 GPU. The thresholds for the OWL and the MDETR models to filter out detected bounding boxes with low confidence scores are set to \(0.2\) and \(0.7\), respectively. We follow (Song et al., 2022) for the generation of the answer vocabulary \(\mathcal{V}\) for open-ended questions. More details about answer vocabulary generation and about the experiment settings can be found in the appendices.
### Main Results
Zero-shot VQA performance of the baselines mentioned in Section 2 and our proposed method are summarized in Table 3.
Footnote 1: For the FewVLM and PNP-VQA models, we show their reported performances on GQA test-dev, which should have a similar distribution to the validation split of GQA.
First of all, we observe that the proposed Mod-Zero-VQA method is more effective on the GQA dataset, which contains many multi-step reasoning questions. Mod-Zero-VQA clearly surpasses all baselines on GQA. The results suggest that it is effective under zero-shot settings to decompose questions when questions are compositional and require several steps of reasoning to reach the answer. Such decomposition allows us to take advantage of the capabilities of different pre-trained models. We also test the validity of the proposed method on the real-world VQAv2 dataset, where questions require fewer reasoning steps and are of diverse semantics. We can see that our method still achieves the best performance among zero-shot methods that utilize CLIP. Although better performance is achieved by several methods that utilize large language models (as shown in the first block of Table 3), it is worth pointing out that these methods often require caption generation as a pre-processing step, and this step poses challenges. For example, PNP-VQA generates 100 captions per question, which is laborious. There may also be redundancy because many captions are irrelevant for question answering. Another advantage of our Mod-Zero-VQA method over the other zero-shot baselines is that our method offers high interpretability by showing the explicit multi-step reasoning chain, which has not been considered by any previous work. With question decomposition, we can design modularized networks and assign reasoning tasks to pre-trained models (PTMs) which are more capable of the tasks, and with more powerful pre-trained models coming out, our method can be easily extended to utilize newer and more effective PTMs. Meanwhile, it is easier to pinpoint the weakest link in a system and insert human heuristics to aid these modules.
| Method | GQA | VQA |
| --- | --- | --- |
| Frozen | - | 29.5 |
| VLKD\({}_{\text{ViT-L/14}}\) | - | 42.6 |
| FewVLM\({}_{\text{base}}\) | 27.0 | 43.4 |
| FewVLM\({}_{\text{large}}\) | 29.3 | 47.7 |
| PNP-VQA\({}_{\text{GM}}\) | 34.6 | 54.3 |
| PNP-VQA\({}_{\text{11B}}\) | **41.9** | **63.3** |
| QIP | 35.9 | 21.4 |
| TAP-C | 36.3 | 38.7 |
| Mod-Zero-VQA | **47.3** | **41.0** |

Table 3: Experimental results on the GQA and VQA datasets. The first block contains models using the text-based QA capability of LMs and the second block contains models incorporating CLIP.
### Ablation Study
In our Mod-Zero-VQA method, PTMs play an important role. In this section, we show the performance of Mod-Zero-VQA when we replace PTMs listed in Section 3.2 with alternative models.
**Replacing OWL:** We tried replacing OWL with other object detectors. First, we consider an object detector combining Faster-RCNN Ren et al. (2015) and CLIP (**CLIP-FR**). Specifically, Faster-RCNN is used to detect objects in an image and CLIP is applied to classify each detected object. Second, we use the ground-truth object annotations from Visual Genome Krishna et al. (2017) to replace object detection results (**GT**), which serves as an upper bound. Results of our zero-shot NMNs with different object detectors are provided in Table 4. We divide the questions into Yes/No (binary) questions and other questions. We observe that the quality of object detection is important to the performance of zero-shot NMNs. Our model with OWL surpasses the one with CLIP-FR, which has poorer detection performance than OWL. We also observe a more substantial performance drop on binary questions. We believe that this is because these questions are mostly about the existence of objects, so the object detection results affect the VQA performance more. Using Mod-Zero-VQA with the ground-truth object detection results would further improve the performance, as shown in the last row of Table 4. This suggests that when more accurate object detection models are developed, we can further improve the zero-shot VQA performance with our approach.
**Replacing CLIP:** We show the performance of replacing zero-shot CLIP (which is CLIP\({}_{\text{VIT-B/16}}\) by default in our experiments), with either CLIP\({}_{\text{Res50}\times 16}\) or ALBEF Li et al. (2021), in Table 5. Because QIP and TAC-P convert VQA to a multi-modal matching task and both use PT-VLMs as the answer generator, we also replace the original CLIP\({}_{\text{VIT-B/16}}\) in these two baselines with the other PTMs. We observe that Mod-Zero-VQA gives stable performance regardless of the vision-language model used, and it always outperforms the baselines substantially. This indicates that these PTMs can all be good substitutes for the zero-shot CLIP module. Compared with the two CLIP models (i.e., with either ViT Dosovitskiy et al. (2021) or ResNet He et al. (2016) as the visual backbone), we also notice that using ALBEF Li et al. (2021) as the answer generator can enhance the performance. To better understand the advantage of using ALBEF over CLIP, we provide more detailed performance in Table 9 in Appendix H. ALBEF mostly benefits the proposed method in the _Query_ type of questions, which usually ask about _objects_, _attributes_ and _relations_. Consistent with Zhao et al. (2022), end-to-end models (i.e., ALBEF in this case) perform better than dual-encoder models (i.e., CLIP in this case) in vision understanding tasks on average. A future direction may be to select the best pre-trained model per question.
### Out-of-Domain Generalization
Because our Mod-Zero-VQA method is not trained on any domain-specific VQA data but rather utilizes pre-trained models that are supposedly trained on data from a wide range of domains, we suspect that our Mod-Zero-VQA method is more robust across different domains compared with VQA models trained on specific domains and applied in cross-domain settings. We therefore also compare our Mod-Zero-VQA with fully-supervised models in the Out-of-Domain Generalization (OOD) setting. Specifically, we consider an OOD setting where test images are related to scenes not observed during training. We first identify a set of scene-related objects and restrict all training images to only those that do not contain these objects. For example, in the _Indoor_ OOD setting, none of the training images
| Detector | Yes/No Qns | Other Qns | Overall |
| --- | --- | --- | --- |
| CLIP-FR | 56.80 | 33.82 | 41.39 |
| OWL | 69.26 | 36.48 | 47.28 |
| GT | 76.48 | 38.06 | 50.72 |

Table 4: Performance of Mod-Zero-VQA with different object detectors on GQA.
| Method | PT-VLMs | Overall |
| --- | --- | --- |
| QIP | CLIP\({}_{\text{VIT-B/16}}\) | 35.93 |
| QIP | CLIP\({}_{\text{Res50}\times 16}\) | 35.11 |
| QIP | ALBEF | 34.75 |
| TAP-C | CLIP\({}_{\text{VIT-B/16}}\) | 36.32 |
| TAP-C | CLIP\({}_{\text{Res50}\times 16}\) | 38.16 |
| TAP-C | ALBEF | 38.36 |
| Mod-Zero-VQA | CLIP\({}_{\text{VIT-B/16}}\) | 47.28 |
| Mod-Zero-VQA | CLIP\({}_{\text{Res50}\times 16}\) | 46.49 |
| Mod-Zero-VQA | ALBEF | **48.68** |

Table 5: Performance of the Mod-Zero-VQA model with different PT-VLMs as the zero-shot CLIP for answer generation on GQA.
should contain _sofa_, _bed_ or any of the other objects that we have identified to be related to _Indoor_ scenes. To build fully-supervised VQA models for comparison, we consider (1) **BUTD** (Anderson et al., 2018), a classic two-stream VQA model, (2) traditional **NMNs** (Hu et al., 2017), and (3) fine-tuned pre-trained vision-language models, including **VilBert** (Lu et al., 2019), **VisualBert** (Li et al., 2019) and **ALBEF** (Li et al., 2021).
The results are shown in Table 6. We can see from the table that for those supervised VQA models, when they are trained on images with different scenes, their performance on the target domain is clearly lower than that of our Mod-Zero-VQA. Furthermore, our Mod-Zero-VQA method achieves steady performance across different scenes, whereas the supervised VQA models give fluctuating performance across different scenes. This demonstrates the robustness of our proposed method.
### Case Study
As a case study, we visualize the outputs of the reasoning steps from the proposed method and compare the prediction of the proposed method with those of QIP and TAC-P, which also leverage CLIP as the answer generator. We show two example questions and the outputs in Figure 2. Both questions require multiple reasoning steps.
We can see that our method gives the correct predictions while the two other methods answer wrongly. We can also see that by decomposing the questions, our method assigns each sub reasoning task to a pre-trained model capable of the task (i.e., MDETR for reference expression localization and OWL for object detection). With question decomposition, we can also better pinpoint the weaknesses of pre-trained models and insert human knowledge by defining simple but general heuristics (e.g., adding spatial heuristics to zero-shot CLIP and defining logical operations). More examples with visualization are provided in Appendix G.
## 5 Related Work
### Visual question answering
Although great progress has been made in the supervised VQA setting [13, 14, 15, 16], few studies have explored the zero-shot VQA setting. One line of work converts VQA to text-based QA so that language models (LMs) can be applied. Some of them require auxiliary training though not with VQA data [15, 16, 17, 18]. Some suffer from insufficient visual details [16] or laborious generation of irrelevant captions [15]. Others [16, 17] convert VQA to multimodal matching and leverage CLIP [19]. However, CLIP is limited when compositional reasoning and spatial reasoning are required [13, 17]. In this work, we propose to decompose questions and propose a modularized zero-shot VQA method by assigning reasoning tasks to proper pre-trained models without any adaptation.
### Zero-shot applications of pre-trained models
Models pre-trained on a large corpus have strong zero-shot transferability when performing downstream tasks whose objectives are similar to the pre-training objectives of these models. For instance, GPT-3 (Brown et al., 2020) is powerful for zero-shot QA by treating QA as a text generation problem.
| Method | Indoor | Food | Street |
| --- | --- | --- | --- |
| BUTD | 39.27 | 32.28 | 35.96 |
| NMNs | 39.45 | 32.47 | 36.05 |
| VilBert | 39.87 | 32.12 | 36.68 |
| VisualBert | 41.14 | 33.47 | 38.51 |
| ALBEF | 45.55 | 38.87 | 41.60 |
| Mod-Zero-VQA | 48.86 | 47.80 | 49.54 |

Table 6: Comparison between our Mod-Zero-VQA method and fully-supervised VQA models under the out-of-domain setting.
Figure 2: Visualization of intermediate outputs from reasoning steps of the Mod-Zero-VQA model.
CLIP Radford et al. (2021) demonstrates good zero-shot image recognition capability by treating the classification task as multimodal matching. For multimodal QA tasks, LMs can be applied once information from other modalities is translated into tokens that LMs understand Tiong et al. (2022); Yang et al. (2022). In our work, we decompose VQA questions into sub-reasoning tasks and assign sub-tasks to corresponding pre-trained models whose pre-training objectives match the sub-tasks.
## 6 Conclusion and Future Work
In this work, we propose a modularized zero-shot VQA method, motivated by the idea of Neural Module Network (NMN). Instead of training modules in NMN with VQA data, we decompose questions into reasoning tasks explicitly, leverage pre-trained models and assign proper reasoning tasks to them. Experiments show that our model is powerful on questions requiring multi-step reasoning and applicable for real-world VQA. Besides, the proposed model is highly interpretable, which helps to pinpoint weaknesses of a VQA system, making it easier to improve a system. Our model highlights a future direction of leveraging pre-trained models for other complicated tasks requiring multiple reasoning capabilities.
## Limitations
In this section, we discuss a few limitations of the proposed method and point out future directions to improve the model. First, our method needs to decompose questions into a symbolic representation, but such representations are hard for humans to comprehend, and therefore this decomposition mechanism is hard to train with human annotation. A promising direction is to leverage pre-trained language models such as ChatGPT to automate this decomposition step, leveraging ChatGPT's internal knowledge of decomposing a complex question into sub-questions. Second, the execution of the zero-shot NMNs is conducted in a deterministic manner, leading to high risks of error propagation if the reasoning chain gets longer. In the future, we can consider a softer way of reasoning over the image with pre-trained models.
Footnote 2: [https://openai.com/blog/chatgpt/](https://openai.com/blog/chatgpt/)
## Acknowledgement
This research was supported by the SMU-A*STAR Joint Lab in Social and Human-Centered Computing (Grant No. SAJL-2022-HAS002).
|
2306.16527 | OBELICS: An Open Web-Scale Filtered Dataset of Interleaved Image-Text
Documents | Large multimodal models trained on natural documents, which interleave images
and text, outperform models trained on image-text pairs on various multimodal
benchmarks. However, the datasets used to train these models have not been
released, and the collection process has not been fully specified. We introduce
the OBELICS dataset, an open web-scale filtered dataset of interleaved
image-text documents comprising 141 million web pages extracted from Common
Crawl, 353 million associated images, and 115 billion text tokens. We describe
the dataset creation process, present comprehensive filtering rules, and
provide an analysis of the dataset's content. To show the viability of OBELICS,
we train vision and language models of 9 and 80 billion parameters named
IDEFICS, and obtain competitive performance on different multimodal benchmarks.
We release our dataset, models and code. | Hugo Laurençon, Lucile Saulnier, Léo Tronchon, Stas Bekman, Amanpreet Singh, Anton Lozhkov, Thomas Wang, Siddharth Karamcheti, Alexander M. Rush, Douwe Kiela, Matthieu Cord, Victor Sanh | 2023-06-21T14:01:01Z | http://arxiv.org/abs/2306.16527v2 | # OBELICS: An Open Web-Scale Filtered Dataset of Interleaved Image-Text Documents
###### Abstract
Large multimodal models trained on natural documents, which interleave images and text, outperform models trained on image-text pairs on various multimodal benchmarks. However, the datasets used to train these models have not been released, and the collection process has not been fully specified. We introduce the OBELICS dataset, an open web-scale filtered dataset of interleaved image-text documents comprising 141 million web pages extracted from Common Crawl, 353 million associated images, and 115 billion text tokens. We describe the dataset creation process, present comprehensive filtering rules, and provide an analysis of the dataset's content. To show the viability of OBELICS, we train vision and language models of 9 and 80 billion parameters named IDEFICS, and obtain competitive performance on different multimodal benchmarks. We release our dataset, models and code.1.
Footnote 1: OBELICS reproduction code: [https://github.com/huggingface/OBELICS](https://github.com/huggingface/OBELICS)
IDEFICS models: [https://huggingface.co/HuggingFaceM4/idefics-80b](https://huggingface.co/HuggingFaceM4/idefics-80b)
## 1 Introduction
Recent systems demonstrate the effectiveness of training large multimodal models such as Flamingo on naturally occurring multimodal documents (Alayrac et al., 2022; Aghajanyan et al., 2022; Huang et al., 2023). A multimodal document is a succession of text paragraphs interleaved by images, such as web pages that contain images. Models trained on these web documents outperform vision and language models trained solely on image-text pairs on various benchmarks (Alayrac et al., 2022). They can also generate long and coherent text about a set of multiple images.
While these results are compelling, they have not been replicable. The datasets used in these works are not publicly available, and relatively little information is known about their creation process and composition. This state motivates the creation of large-scale collections of high-quality multimodal web documents to support the creation of the next generation of models.
We take inspiration from existing large open image-text datasets such as LAION (Schuhmann et al., 2022) and COYO (Byeon et al., 2022), comprised of hundreds of millions of image-text
pairs obtained through web crawling. These datasets have been critical to developing and replicating numerous recent multimodal models (Radford et al., 2021; Wang et al., 2022; Yu et al., 2022; Wang et al., 2022; Liu et al., 2023). While this approach allows for building extremely large and diverse training datasets, we note several limitations to using only image-text pairs. From a language perspective, these datasets rely primarily on alt-text, meaning the text given is brief, captures an approximate snapshot of the image's content, and often lacks grammatical correctness. From a document perspective, image-text pairs remove an image from its natural context on a page and its relationship with other documents.
In this work, we introduce \(\mathtt{OBELICS}\), an openly-accessible curated web-scale dataset consisting of 141 million multimodal English web documents, which contain 353 million associated images and 115 billion tokens. \(\mathtt{OBELICS}\) collects full multimodal documents interleaving text and images as shown in Figure 1. We describe the dataset creation process, outline the filtering and curation steps and shed light on the dataset's content and limitations. To demonstrate the viability of \(\mathtt{OBELICS}\), we train \(\mathtt{IDEFICS}\), an 80-billion-parameter multimodal model, and show competitive performance against large-scale multimodal models such as Flamingo (Alayrac et al., 2022).
Footnote 2: Open Bimodal Examples from Large fIltered Commoncrawl Snapshots
## 2 Related Works
**Image-text pairs datasets.** The largest multimodal datasets, such as LAION (Schuhmann et al., 2021, 2022), Conceptual Captions (Sharma et al., 2018; Changpinyo et al., 2021), ALIGN (Jia et al., 2021), COYO (Byeon et al., 2022), and DataComp (Gadre et al., 2023), contain billions of image-text pairs and are usually obtained through web-crawling and alt-text extraction. A variety of multimodal models have been trained on this type of dataset: multimodal encoder models which use a contrastive objective (Radford et al., 2021; Wang et al., 2022), image generation based on Transformers or diffusion processes (Nichol et al., 2022; Ramesh et al., 2022; Rombach et al., 2021; Saharia et al., 2022). While the scale of these datasets makes them attractive candidates for training, our work focuses on extracting images and the textual context in which they appear instead of extracting the associated alternative text.
**Web document datasets.** Insights from scaling language models (Kaplan et al., 2020; Hoffmann et al., 2022) emphasize the need for increasingly bigger datasets. For instance,
Figure 1: A comparison of extraction from the same web document. For image-text pairs, the alt-text of images is often short or non-grammatical. For \(\mathtt{OBELICS}\), the extracted multimodal web document interleaves long-form text with the images on the page.
LLaMA (Touvron et al., 2023) was trained on a dataset of 1.4T tokens created exclusively from openly accessible English web content. The authors noticed that an even bigger dataset would have benefited the model. To address that need, multiple web-scale datasets have been introduced and made available: c4 (Raffel et al., 2019), ROOTS (Laurencon et al., 2022), Pile (Gao et al., 2020), OSCAR (Ortiz Suarez et al., 2020). Although OBELICS falls in the same category of making accessible large collections of curated web documents, the additional extraction of images changes the nature of the resulting dataset. It allows training models with additional vision capabilities.
**Multimodal web document datasets.** The most performant recent vision and language models are trained on large sets of multimodal web documents. For instance, Flamingo (Alayrac et al., 2022), an 80 billion parameter multimodal model, was trained on a mix of 2.1 billion image-text pairs, 27 million video-text pairs, and 43 million multimodal web documents. The latter, called M3W, includes 185 million images. Similarly, KOSMOS-1 (Huang et al., 2023) was trained on a mixture containing 71 million multimodal web documents. However, in both cases, the dataset is not publicly available, and little information is accessible as to the dataset's content, the strategies employed to create that dataset (including filtering strategies), and the quality of the resulting web documents, which ultimately hinders further research.
Concurrently to our work, the Multimodal C4 (mmc4) dataset (Zhu et al., 2023) was recently made accessible. It consists of 103 million multimodal web documents that include 585 million images. Although there are similarities between our datasets, it is important to highlight particular distinctions. First, our dataset is based on more recent documents from February 2020 to February 2023, whereas mmc4 uses documents from April 2019. Additionally, our filtering heuristics appear to be more comprehensive: we leverage the HTML DOM trees to filter out undesirable texts and images, whereas mmc4 uses the HTML to find images in order to merge them with the original C4 dataset by solving a bipartite assignment problem based on similarities computed with a CLIP model. Last, we implement additional deduplication steps at the image, document, and paragraph levels.
## 3 Creation of the Multimodal Web Document Dataset
This section provides an overview of the critical choices of the creation and filtering process. Figure 2 gives a high-level summary of the main steps involved. Many details are omitted from this section, and we invite the reader to refer to the appendix.
Figure 2: Overview of the steps involved in creating OBELICS.
### Collecting a Large Number of HTML Files
First, we collect a vast amount of raw web documents by considering the 25 most recent Common Crawl dumps at the time of creation, spanning from February 2020 to January/February 2023. We extract the main text from the documents while discarding documents with text of insufficient quality. This process results in 41.2 billion documents.
Footnote 3: [https://commoncrawl.org/](https://commoncrawl.org/)
To filter out non-English content, we apply the FastText classifier (Joulin et al., 2017) to the extracted text, which removes 63.6% of the documents. We perform a MinHash (Broder, 1997) deduplication to remove duplicate content. Additionally, we filter out documents with significant proportions of repeated paragraphs and n-grams, following the methodology used in MassiveText (Rae et al., 2022). Previous studies (Lee et al., 2022; Abbas et al., 2023) have demonstrated the prevalence of duplication in crawled data and the benefits of training on deduplicated data.
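A minimal sketch of these two steps is shown below, assuming the off-the-shelf fastText language-identification model and the `datasketch` MinHash implementation; the shingling, thresholds and permutation count are illustrative choices, not the exact configuration used for OBELICS.

```python
import fasttext
from datasketch import MinHash, MinHashLSH

# Language identification with the released fastText model (lid.176.bin).
lid = fasttext.load_model("lid.176.bin")

def is_english(text, threshold=0.5):
    labels, probs = lid.predict(text.replace("\n", " "))
    return labels[0] == "__label__en" and probs[0] >= threshold

# Near-duplicate detection with MinHash over word-level shingles.
def minhash(text, num_perm=128):
    m = MinHash(num_perm=num_perm)
    for token in set(text.split()):
        m.update(token.encode("utf8"))
    return m

lsh = MinHashLSH(threshold=0.8, num_perm=128)

def is_duplicate(doc_id, text):
    m = minhash(text)
    if lsh.query(m):            # any previously indexed near-duplicate?
        return True
    lsh.insert(doc_id, m)
    return False
```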
Similar to Brown et al. (2020), we employ a logistic regression classifier with hashed token frequencies to ensure high-quality text. This classifier, trained using curated datasets like Wikipedia or OpenWebText (Gokaslan and Cohen, 2019) as positive examples and documents sampled from Common Crawl as negative ones, is fast and effective at detecting human-written text. After these steps, we are left with 1.1 billion documents and their HTML sources from the associated Common Crawl WARC files.
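The quality classifier could be sketched as follows with scikit-learn; the placeholder training texts and the hashing dimension are assumptions, not the actual setup.

```python
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import LogisticRegression

# Positive examples: curated text (e.g. Wikipedia); negatives: random crawled text.
positives = ["a curated encyclopedia paragraph about the history of astronomy"]
negatives = ["click here win now best deals free free free subscribe"]
texts = positives + negatives
labels = [1] * len(positives) + [0] * len(negatives)

# Hashed token frequencies keep the feature space bounded and the classifier fast.
vectorizer = HashingVectorizer(n_features=2**20, alternate_sign=False)
clf = LogisticRegression(max_iter=1000)
clf.fit(vectorizer.transform(texts), labels)

def quality_score(document_text):
    # Probability that the document resembles the curated (human-written) class.
    return clf.predict_proba(vectorizer.transform([document_text]))[0, 1]
```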
### Simplifying HTML Files
The original HTML content of a document contains a wealth of valuable information that proves highly beneficial in the process of filtering out undesirable text and images. Therefore, we prioritize pre-processing the raw HTML into simplified HTML, making the subsequent extraction of textual and visual elements more efficient.
To this aim, we devise multiple pre-processing strategies for an HTML DOM tree. By manually inspecting instances of all HTML nodes, we differentiate nodes likely to contain relevant texts or images from those that should be discarded, and we formulate specific rules for each type of node. After these pre-processing steps, the resulting simplified HTML files are more than ten times smaller and have been stripped of a large proportion of generic text (spam, ads, boilerplate template, etc.) and generic images, such as logos, while retaining the relevant content.
### Extracting Multimodal Web Documents
In this step, we transform the simplified HTML files previously obtained into a structured web multimodal web document format. This format consists of interleaved texts and images.
We meticulously preserve the original structure of the web pages from the simplified HTML files by extracting the texts and image links while maintaining their rendering defined by the DOM tree. Given that each HTML tag denotes a distinct separation between the preceding and subsequent nodes, we leverage that information to retain line breaks and line feeds on the original page, preserving the formatting and visual rendering of the content.
We obtain 3.6 billion image links and successfully download 55% of them (approximately 2 billion images).
### Filtering Multimodal Web Documents
The filtering process comprises two distinct steps operating at different granularity levels. In the first step, filtering occurs at the node level for images and the paragraph level for text. This step guarantees that only high-quality and relevant images and paragraphs are retained. Each paragraph or image is evaluated based on specific criteria and may undergo modifications or be eliminated if necessary. The second step, conducted at the document level, involves deciding whether to retain or discard the output documents obtained from the
first step. Most text filters used in both steps are primarily derived from Laurencon et al. (2022).
**Node-level image filtering.** We discard images that are too small, excessively large or have disproportionate dimensions. We observe that these images are often indicative of low-quality or irrelevant content. To eliminate some logos and generic images, we remove images whose URLs contain one of the banned sub-strings, like _logo_.
**Paragraph-level text filtering.** We apply multiple filters to text paragraphs to remove undesirable content. Specifically, paragraphs that contain an insufficient number of words are discarded. Additionally, we filter out paragraphs with high repetition ratios, excessive ratios of special characters, low ratios of stop words, low punctuation ratios, high proportions of flagged words associated with adult or inappropriate content, or excessively high perplexity scores (as measured by an n-gram language model trained on Wikipedia (Heafield, 2011)). To identify boilerplate sentences or invitations to share articles on social networks, we create a list of frequently used words associated with these paragraphs and remove paragraphs containing an excessive proportion of words from this list. To further identify machine-generated content, we extract words from web-crawled documents to form a list of common words and discard documents with a low ratio of common words.
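A simplified sketch of a paragraph filter combining a few of these criteria is given below; the cutoff values, the word lists and the omission of the repetition and perplexity checks are assumptions made for brevity.

```python
STOP_WORDS = {"the", "a", "an", "and", "of", "to", "in", "is", "it", "that"}
FLAGGED_WORDS = {"placeholder_flagged_word"}   # stand-in for the adult-content word list

def keep_paragraph(text,
                   min_words=4,
                   max_special_char_ratio=0.3,
                   min_stop_word_ratio=0.05,
                   max_flagged_ratio=0.01):
    words = text.split()
    if len(words) < min_words:
        return False
    # Ratio of characters that are neither alphanumeric nor whitespace.
    special = sum(1 for c in text if not (c.isalnum() or c.isspace()))
    if special / max(len(text), 1) > max_special_char_ratio:
        return False
    lowered = [w.lower() for w in words]
    if sum(w in STOP_WORDS for w in lowered) / len(words) < min_stop_word_ratio:
        return False
    if sum(w in FLAGGED_WORDS for w in lowered) / len(words) > max_flagged_ratio:
        return False
    return True
```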
**Document-level filtering.** At the document level, we remove all documents with no images or with an excessively high number of images. For text filters, the same filters used at the paragraph level are applied, with sometimes stricter cutoff values.
After these filtering steps, we are left with 365 million web documents and 1.4 billion images. At this step, images can be duplicated across documents.
### Responsible Filtering and Deduplication
We take measures to minimize the amount of inappropriate content in the dataset. In particular, based on manual inspections and tool availability, we implement filters to respect data consent and remove images with pornographic content. Additionally, we also heavily deduplicate content.
**Exclusion of opted-out images.** To respect the preferences of content creators, we remove all images for which creators explicitly opted out of AI model training. We used the Spawning API to verify that the images in the dataset respect the original copyright owners' choices.
Footnote 4: [https://api.spawning.ai/spawning-api](https://api.spawning.ai/spawning-api)
**Image deduplication based on URL.** Some images could be present across different documents. We observe that this is particularly true for browser-specific icons or common advertisements encountered during the crawling process. To address this issue, we remove all images that appear more than ten times across the entire dataset. We intentionally do not perform strict deduplication, as we notice that when an image is duplicated only a few times across different documents, the surrounding text and contextual information tend to be different. We also deduplicate images within the same document.
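In code, this rule could be approximated by a two-pass counter over image URLs, as in the sketch below; the document structure and field names are hypothetical.

```python
from collections import Counter

# Hypothetical corpus: each document carries the list of its image URLs.
documents = [
    {"id": 0, "image_urls": ["http://a.com/logo.png", "http://a.com/photo1.jpg"]},
    {"id": 1, "image_urls": ["http://a.com/logo.png", "http://b.com/photo2.jpg"]},
]

# First pass: count how often each image URL appears across the whole corpus.
url_counts = Counter()
for doc in documents:
    url_counts.update(set(doc["image_urls"]))

# Second pass: drop over-represented images and deduplicate within a document.
def filter_images(doc, max_occurrences=10):
    kept, seen = [], set()
    for url in doc["image_urls"]:
        if url_counts[url] > max_occurrences or url in seen:
            continue
        seen.add(url)
        kept.append(url)
    return kept
```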
**NSFW image filtering.** To reduce explicit adult content, we use an open-source NSFW classifier to remove entire documents containing pornographically classified images. We also filter out images with URLs containing banned sub-strings.
**Document deduplication based on URL and set of images.** We complete the initial deduplication step by forming clusters of documents with the same URLs and retaining the most recent document within each cluster. We repeat this operation by forming clusters of documents containing identical sets of images.
**Paragraph deduplication across documents of the same domain names.** To remove generic spam phrases commonly found at the end of documents, we perform paragraph-level
exact deduplication within documents sharing the same domain name, resulting in the elimination of approximately 15% of the text.
Following these filtering and deduplication steps, the final dataset contains 141 million documents and 353 million images, of which 298 million are unique. We observe that using stricter values for the filtering steps yields fewer multimodal documents, although not of higher quality. As such, we invite users who are interested in manipulating a smaller subset of OBELICS to start with a random subset.
## 4 Analysis of OBELICS
Figure 1 provides an example showcasing an original webpage alongside the resulting multimodal web document. Extracting and filtering the multimodal document is non-trivial as it requires carefully removing undesirable information on the left, top, and bottom of the page, such as menus and navigation bars. We provide other examples at [https://huggingface.co/spaces/HuggingFaceM4/obelics_visualization](https://huggingface.co/spaces/HuggingFaceM4/obelics_visualization) and in Figures 7, 8 and 9.
Given the scale of OBELICS, it would be prohibitive to describe its content exhaustively. Instead, we provide high-level statistics and analyses that shed light on the dataset's properties.
### General Statistics
Table 1 compares OBELICS against the largest existing alternatives. mmc4-ff is the mmc4 dataset with fewer faces. Our dataset has the highest number of unique documents and total tokens while containing a huge number of images.
It is worth mentioning that we have fewer images than mmc4 (Zhu et al., 2023). This discrepancy can be attributed to two reasons. First, our analysis reveals that mmc4 contains many duplicated images, with only 60.6% being unique compared to 84.3% for OBELICS. We found that images duplicated multiple times often indicate spam or unrelated generic content. Second, mmc4 does not limit the number of images within a document. As a result, the distribution of images across documents is highly uneven, with a substantial portion of them concentrated in documents with excessive image counts (see Figure 3). The images in these documents are often unrelated to each other and exhibit spam or advertisement content. Moreover, these documents often have little text, making them unsuitable for learning the alignment between text and images (see an example in Figure 10).
Figure 4 shows the joint distribution of the number of tokens and the number of images in OBELICS. Although we limit the number of images in a document to 30, we cut the plot at 6 images for clarity. The documents of OBELICS contain a median number of images of 1 and a median number of tokens of 677.
**Perplexity analysis.** To assess the quality of our text in comparison to reference datasets used for training large language models, we leverage an n-gram language model trained on Wikipedia (Heafield, 2011; Laurencon et al., 2022). This allows us to compute perplexity
| Dataset | Images | % unique images | Docs | Tokens | Open |
| --- | --- | --- | --- | --- | --- |
| KOSMOS-1 | - | - | 71M | - | ✗ |
| M3W | 185M | - | 43M | - | ✗ |
| mmc4-ff | 385M | 60.6% | 79M | 34B | ✓ |
| mmc4 | **585M** | - | 103M | 43B | ✓ |
| OBELICS | 353M | **84.3%** | **141M** | **115B** | ✓ |

Table 1: General statistics of OBELICS and the current largest alternatives.
Figure 3: Distribution of images.
scores for 100,000 documents from each dataset. Lower perplexity scores indicate a higher resemblance to Wikipedia documents. Figure 5 displays the distributions of these scores. Our results demonstrate that the texts in OBELICS have a significantly lower average perplexity compared to the texts in c4 (Raffel et al., 2019), mmc4 (Zhu et al., 2023), and OSCAR (Ortiz Suarez et al., 2020). Furthermore, our distribution aligns closely with the one from The Pile (Gao et al., 2020), which was thoughtfully curated from diverse, high-quality sources.
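Such scores can be computed with the KenLM Python bindings, as sketched below; the model file name is a placeholder for an n-gram model trained on Wikipedia.

```python
import kenlm

# n-gram language model trained on Wikipedia (the file path is an assumption).
model = kenlm.Model("wikipedia.5gram.binary")

def doc_perplexity(text):
    # KenLM scores are log10 probabilities; perplexity() handles sentence boundaries.
    return model.perplexity(text)

print(doc_perplexity("The quick brown fox jumps over the lazy dog ."))
```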
### Topic Modeling
Similar to Zhu et al. (2023), we employ a Latent Dirichlet Allocation (LDA) (Blei et al., 2003) to understand the diversity of the dataset. The LDA gives us insights into the distribution of topics in the dataset, along with estimated proportions and frequently associated words. Tables 5 and 6 present the results of the LDA with respectively 20 and 200 topics, offering both a high-level and a more granular analysis of the dataset's content. We observe that the dataset covers topics ranging from Politics to Health by way of Music. Additionally, we compute the most frequent domains and show that news sites are systematically the most represented (Table 4).
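A small-scale version of this analysis can be reproduced with scikit-learn, as in the sketch below; the sample documents, vocabulary size and number of topics shown here are illustrative.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Placeholder sample of documents standing in for a subset of OBELICS.
docs = [
    "the senate passed a new bill on health care funding",
    "the band released a new album and toured across europe",
]

vectorizer = CountVectorizer(max_features=50_000, stop_words="english")
counts = vectorizer.fit_transform(docs)

lda = LatentDirichletAllocation(n_components=20, random_state=0)
lda.fit(counts)

# Print the top words associated with each topic.
vocab = vectorizer.get_feature_names_out()
for k, topic in enumerate(lda.components_):
    top = [vocab[i] for i in topic.argsort()[-10:][::-1]]
    print(k, top)
```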
### Qualitative Assessment of Dataset Samples
We manually inspect 250 documents from OBELICS to verify the dataset's quality and assess the risks contained in the dataset. We focus on the images' content in relation to the text since it's the core addition compared to a language modeling dataset.
80% of documents have photo images, while 29% have graphic images (drawings, cartoons, etc.). 90% of the documents have all images clearly related to the text content. 30% of documents have images containing at least one written word, and 5% of documents have images that are structured text (slides, tables, scanned documents, etc.), which can help models learn OCR capabilities. 7% of documents have content (images or text) that hasn't been captured by cleaning filters (non-English text, spam or advertisement, etc.). 46% of documents contain images with faces (portraits or group photos). No obvious Personally Identifiable Information (PII) texts were found, except for public personalities and people mentioned in news articles. No NSFW images were found. Only 3% of documents contain images with watermarks, and 2% have images with logos.
## 5 Validating the Viability of OBELICS
To confirm the viability of our dataset, we first show that vision and language models trained on our multimodal web documents outperform the same models trained on image-text pairs on various multimodal benchmarks. Following that, we demonstrate the effectiveness of
OBELICS as an alternative to closed datasets by training models of different sizes on par with closed-source models.
**Model details.** We follow the Flamingo (Alayrac et al., 2022) architecture closely: we combine two frozen unimodal backbones - LLaMA (Touvron et al., 2023) for the language model, and OpenCLIP for the vision encoder - and add learnable cross-attention Transformer blocks to connect the language and vision blocks. For multimodal web documents, we feed the model sequences corresponding to the succession of text paragraphs and images. For image-text pairs, we form the training sequences by packing images with their captions. The images are encoded with the vision encoder and vision hidden states are pooled with Transformer Perceiver blocks and then fused into the text sequence through the cross-attention blocks. The training objective is the standard next token prediction. For more details, we refer to the original paper.
Footnote 5: [https://laion.ai/blog/large-openclip/](https://laion.ai/blog/large-openclip/)
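To make the gated cross-attention idea concrete, the sketch below shows a schematic Flamingo-style block in PyTorch; the dimensions, the gating scheme and the omitted pieces (Perceiver pooling, causal masking, image-position handling) are simplifications and do not correspond to the actual IDEFICS implementation.

```python
import torch
import torch.nn as nn

class GatedCrossAttentionBlock(nn.Module):
    """Schematic Flamingo-style block: frozen LM hidden states attend to pooled vision tokens.
    Dimensions and layer sizes are illustrative, not the IDEFICS configuration."""
    def __init__(self, dim=512, n_heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        self.ff = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
        # tanh gates initialised at zero so the pre-trained LM is unchanged at start of training.
        self.attn_gate = nn.Parameter(torch.zeros(1))
        self.ff_gate = nn.Parameter(torch.zeros(1))

    def forward(self, text_states, vision_tokens):
        # text_states: (batch, text_len, dim); vision_tokens: (batch, n_vision_tokens, dim)
        attn_out, _ = self.attn(text_states, vision_tokens, vision_tokens)
        x = text_states + torch.tanh(self.attn_gate) * attn_out
        x = x + torch.tanh(self.ff_gate) * self.ff(x)
        return x

# Example forward pass with random tensors.
block = GatedCrossAttentionBlock()
out = block(torch.randn(2, 16, 512), torch.randn(2, 64, 512))
print(out.shape)
```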
Following Alayrac et al. (2022), we evaluate our models on a series of multimodal benchmarks spanning visual question answering (VQAv2 (Antol et al., 2015), OKVQA (Marino et al., 2019), TextVQA (Singh et al., 2019), VizWiz (Gurari et al., 2018)), visual dialogs (VisDial (Das et al., 2017)), hateful speech detection (HatefulMeme (Kiela et al., 2020)), image captioning (COCO (Lin et al., 2014), Flickr30k (Young et al., 2014)), and OCR (IIIT5k (Mishra et al., 2012)).
Additional details about the architecture, the training, the compute and the evaluation are present in Appendix A.4.
**Training on different mixtures of data** Figure 6 shows the result of the first experiment, which consists of training 9B-parameter models on different mixtures of data. Training on multimodal web documents allows reaching the same performance using an order of magnitude fewer images than training on image-text pairs, even though the images from the two datasets come from Common Crawl. This underlines the benefit of having longer text contexts for training multimodal models. Moreover, the model trained on multimodal web documents performs better on average. This is particularly striking on visual question-answering benchmarks, on which the model trained on image-text pairs slowly degrades through the training. We note, however, that the model trained on image-text pairs has a slight advantage performance-wise in captioning, classification, and OCR tasks (see more details in Appendix A.4.5). We hypothesize that this is due to the nature of image-text pairs: captions can be seen as fuzzy class labels. Last, similarly to Alayrac et al. (2022), we observe that combining the two types of datasets leads to increased performance for a given number of images, tokens, or training compute.
Models trained on OBELICS achieve competitive performance at different scales. Following these insights, we show that OBELICS is a viable open alternative to other datasets.
Figure 6: Aggregated 4-shot performance through the training using LAION only, OBELICS only and a mixture of both. The training sequences from multimodal documents and the packed sequences obtained from image-text pairs have different numbers of images but the same number of tokens. Thus, we plot the performance over two log x-axes. The initial uptick of the model trained on image-text pairs is attributed to the fact that the performance on VQA tasks starts by increasing and then slowly degrades.
\begin{table}
\begin{tabular}{c c c c c c c c c c} & Shot & COCO & Flickr30k & VQAv2 & OKVQA & TextVQA & VizWiz & VisDial & HatefulMemes \\ \hline Flamingo-9B & & 79.4 & 61.5 & 51.8 & 44.7 & 31.8 & 22.8 & 48.0 & 57.0 \\ OpenFlamingo-9B & 0 & 79.5 & 59.5 & 52.7 & 37.8 & 24.2 & 27.5 & - & 51.6 \\ IDEFICS-9B & & 46.0 & 27.3 & 50.9 & 38.4 & 25.9 & 35.5 & 48.7 & 51.8 \\ \hline Flamingo-9B & & 93.1 & 72.6 & 56.3 & 49.3 & **33.6** & 34.9 & 50.4 & 62.7 \\ OpenFlamingo-9B & 4 & 89.0 & 65.8 & 54.8 & 40.1 & 28.2 & 34.1 & - & 54.0 \\ IDEFICS-9B & & 93.0 & 59.7 & 55.4 & 45.4 & 27.6 & 36.9 & 47.9 & 50.7 \\ \hline Flamingo-9B & & 99.0 & **73.4** & 58.0 & 50.0 & **33.6** & 39.4 & 51.2 & 63.9 \\ OpenFlamingo-9B & 8 & 96.3 & 62.9 & 54.8 & 41.1 & 29.1 & 38.5 & - & 54.7 \\ IDEFICS-9B & & 97.0 & 61.9 & 56.4 & 47.7 & 27.5 & 40.4 & 47.6 & 51.1 \\ \hline Flamingo-9B & & 102.2 & 72.7 & 59.4 & 50.8 & 33.5 & 43.0 & **51.3** & **64.5** \\ OpenFlamingo-9B & 16 & 98.8 & 62.8 & 54.3 & 42.7 & 27.3 & 42.5 & - & 53.9 \\ IDEFICS-9B & & 99.7 & 64.5 & 57.0 & 48.4 & 27.9 & 42.6 & - & 50.1 \\ \hline Flamingo-9B & & **106.3** & 72.8 & **60.4** & **51.0** & 32.6 & **44.0** & 50.4 & 63.5 \\ OpenFlamingo-9B & 32 & 99.5 & 61.3 & 53.3 & 42.4 & 23.8 & **44.0** & - & 53.8 \\ IDEFICS-9B & & 98.0 & 64.3 & 57.9 & 49.6 & 28.3 & 43.7 & - & 49.8 \\ \hline \hline Flamingo & 0 & 84.3 & 67.2 & 56.3 & 50.6 & 35.0 & 31.6 & 52.0 & 46.4 \\ IDEFICS & & 91.8 & 53.7 & 60.0 & 45.2 & 30.9 & 36.0 & 48.9 & 60.6 \\ \hline Flamingo & 4 & 103.2 & 75.1 & 63.1 & 57.4 & 36.5 & 39.6 & 55.6 & 68.6 \\ IDEFICS & & 110.3 & 73.7 & 63.6 & 52.4 & 34.4 & 40.4 & 48.4 & 57.8 \\ \hline Flamingo & 8 & 108.8 & 78.2 & 65.6 & 57.5 & 37.3 & 44.8 & 56.4 & **70.0** \\ IDEFICS & & 114.3 & 76.6 & 64.8 & 55.1 & 35.7 & 46.1 & 47.9 & 58.2 \\ \hline Flamingo & 16 & 110.5 & 78.9 & 66.8 & **57.8** & 37.6 & 48.4 & **56.8** & **70.0** \\ IDEFICS & & **116.6** & 80.1 & 65.4 & 56.8 & 36.3 & 48.3 & - & 57.8 \\ \hline Flamingo & 32 & 113.8 & 75.4 & **67.6** & **57.8** & **37.9** & 49.8 & 55.6 & **70.0** \\ IDEFICS & & **116.6** & **81.1** & 65.9 & **57.8** & 36.7 & **50.0** & - & 52.5 \\ \hline \end{tabular}
\end{table}
Table 2: Performance of IDEFICS against OpenFlamingo and Flamingo. The evaluations were done with random in-context examples, and in an open-ended setting for VQA tasks. (Task, Metric, Query split): (COCO, CIDEr, test), (Flickr30k, CIDEr, test (Karpathy)), (VQAv2, VQA acc., testdev), (OKVQA, VQA acc., val), (TextVQA, VQA acc., val), (VizWiz, VQA acc., testdev), (VisDial, NDCG, val), (HatefulMemes, ROC-AUC, test seen).
We train IDEFICS, an 80-billion-parameter Flamingo-like model, on a mixture of image-text pairs from LAION (Schuhmann et al., 2022), openly accessible captioning datasets (Singh et al., 2022), OBELICS and multimodal web documents obtained from Wikipedia using a similar extraction strategy. We also train a smaller version with 9 billion parameters, IDEFICS-9B. We compare these models against OpenFlamingo v2 (Awadalla et al., 2023) and Flamingo of the same sizes, trained on a similar mixture of multimodal web documents and image-text pairs. We report the results in Table 2.
IDEFICS is often on par with Flamingo on various multimodal benchmarks. Out of the 8 evaluation tasks, with 32 in-context examples, it either performs better than or obtains the same result as Flamingo on 4 of them. At the 9-billion-parameter scale, we are still behind Flamingo-9B. However, it is important to highlight that we outperform OpenFlamingo-9B, which was trained on mmc4, in terms of aggregated performance. We achieved a score of 56.5, compared to their score of 55.8, by selecting the best performance across all numbers of in-context examples for each task. This highlights the advantages of OBELICS as an open alternative to a multimodal web document dataset.
## 6 Conclusion
With the goal of supporting open-source large multimodal models, we introduce OBELICS, an open web-scale collection of filtered interleaved multimodal web documents based on Common Crawl snapshots. We document a collection and filtering process that balances the scale and removal of undesirable texts and images while addressing some of the well-documented ethical concerns of large-scale multimodal datasets, notably data consent and pornographic content. To demonstrate the usefulness of models trained on multimodal documents, we train IDEFICS on OBELICS and show that it is a viable alternative to closed datasets. Open datasets of multimodal documents with scale, quality, and diversity of sources can help support the ability to train competitive open models.
## Acknowledgments and Disclosure of Funding
The authors were granted access to the HPC resources of the Institut du developpement et des ressources en informatique scientifique (IDRIS) du Centre national de la recherche scientifique (CNRS) under the allocation 2022-A0121013450 made by Grand equipement national de calcul intensif (GENCI). The initial development of the dataset was done on Jean-Zay cluster of IDRIS, and we thank the IDRIS team for their responsive support throughout the project, in particular Remi Lacroix. We thank Guillaume Salou for setting up the virtual machines used to download the images of our dataset, and Sebastian Nagel for his valuable assistance in providing insights on Common Crawl. We thank Yacine Jernite and Daniel van Strien for conducting a bias analysis of the models trained on OBELICS.
|
2308.00831 | Spectral Density Classification For Environment Spectroscopy | Spectral densities encode the relevant information characterising the
system-environment interaction in an open-quantum system problem. Such
information is key to determining the system's dynamics. In this work, we
leverage the potential of machine learning techniques to reconstruct the
features of the environment. Specifically, we show that the time evolution of a
system observable can be used by an artificial neural network to infer the main
features of the spectral density. In particular, for relevant examples of
spin-boson models, we can classify with high accuracy the Ohmicity parameter of
the environment as either Ohmic, sub-Ohmic or super-Ohmic, thereby
distinguishing between different forms of dissipation. | Jessica Barr, Giorgio Zicari, Alessandro Ferraro, Mauro Paternostro | 2023-08-01T20:42:59Z | http://arxiv.org/abs/2308.00831v2 | # Spectral Density Classification For Environment Spectroscopy
###### Abstract
Spectral densities encode the relevant information characterising the system-environment interaction in an open-quantum system problem. Such information is key to determining the system's dynamics. In this work, we leverage the potential of machine learning techniques to reconstruct the features of the environment. Specifically, we show that the time evolution of a system observable can be used by an artificial neural network to infer the main features of the spectral density. In particular, for relevant examples of spin-boson models, we can classify with high accuracy the Ohmicity parameter of the environment as either Ohmic, sub-Ohmic or super-Ohmic, thereby distinguishing between different forms of dissipation.
## I Introduction
Recent progress in the field of quantum technologies has advanced our capabilities to control quantum systems and exploit their non-classical properties. Yet, this task presents significant challenges. Quantum systems are inherently open, as they inevitably interact with their surrounding environment [1; 2]. They are thus susceptible to gain and losses, as well as to the genuine quantum phenomenon of decoherence [3; 4; 5], which disrupts the phase coherence of superposition states, posing a major obstacle in preserving quantum states [6; 7]. If we are to effectively devise strategies for mitigating adverse environmental influences on a system, it is crucial to have a comprehensive understanding of the effects that need to be addressed when facing open quantum dynamics. This, in turn, requires a full characterisation of the mechanism governing the system-environment interaction.
To address such challenge, here we tackle the problem of characterising environmental effects on an open quantum system harnessing recent advances in the field of Machine Learning (ML). The latter has opened up new data-driven approaches, which have shown their effectiveness in various applications in the field of quantum technologies [8]. Among those, some are very close to the spirit of this work. ML-based methodologies have been applied to quantum tomography [9; 10; 11], quantum channel discrimination [12], simulation of open quantum systems [13; 14; 15; 16], as well as quantum control [17; 18; 19].
We focus on the typical open quantum system scenario, where we are able to effectively describe and control the reduced system, as opposed to the infinitely many uncontrollable environmental degrees of freedom which are responsible for dissipation and decoherence. In this setting, we focus on the interaction of a given system with an external environment in terms of the Spectral Density (SD), which, by encoding full information about the system-environment coupling, allows us to determine the two-time correlation function of the environment. Having full knowledge of this quantity allows us to predict the temporal behaviour of an open quantum system without a full microscopic description of the environment.
The SD for a given system-environment interaction, however, is rarely directly available and challenging to calculate from first principles. The form of a SD is at best phenomenologically inferred through empirical data gathered from experimental observations, and at worst _guessed_ using _ad hoc_ assumptions, which might result in significant discrepancies between the predicted and actual dynamics of the system [20]. In this work, we consider the case of a quantum system interacting with a bosonic thermal bath. Depending on the nature of the system-bath interaction, the system dynamics can manifest as either pure dephasing or amplitude damping [21; 22; 23]. The particular choice of the SD in this setting is responsible for possible memory effects. On one hand, we can encounter a scenario where the information is monotonically flowing from the system to the bath, i.e. the usual scenario characterising quantum Markovian processes [24; 25]. On the other hand, some functional forms of the SD are suitable to model a physical situation in which the system, dynamically interacting with its environment, can partially retrieve the information that was previously lost -- these processes are dubbed as non-Markovian instead [26; 27].
Prior works have studied the use of ML for noise characterisation in open quantum systems. Various methods have been explored, such as studying the noise in qubit systems using two-pulse echo decay curves [28], and random pulse sequences that are applied to the qubit [29]. Additionally, other studies have focused on constructing the power spectral density for ensembles of carbon impurities around a nitrogen vacancy centre in diamond [30], and inferring the environment affecting superconducting qubits [31].
In this work, we show that an artificial Neural Network (NN) can be used to _classify_ the SD characterising the dynamics of a system, based on its features. Previous research has examined the classification of _aspects_ of noise in open quantum systems. For instance, in Ref. [32] ML techniques were used to discern between Markovian and non-Markovian noise. More pertinent to the matter at hand, aspects of the problem of distinguishing between Ohmic, sub-Ohmic and super-Ohmic SDs have already been studied: in Ref. [33], a scenario where a probe qubit is used to access a second inaccessible one is proposed to infer the Ohmicity class by using NNs and leveraging the special features of quantum synchronisation. In Ref. [34], a different use of NNs was put forward as tomographic data at just two instants of time were used,
rather than a time-series approach. In contrast, this work takes a simpler approach by utilising the time evolution of a system observable for classification without the need for a probe system or tomographically complete information. We focus on the case of a general Spin-Boson (SB) model to show that, even when the environment cannot be exactly traced out to infer the reduced dynamics of a system, a NN can classify the SD with high accuracy. Furthermore, we discuss the limitations imposed by the fluctuation of the parameters in the SD and the number of sampled points in the time signals. Our study emphasises the potential of ML techniques to characterise environments with arbitrary SDs.
The remainder of this paper is structured as follows: in Section II we provide an introduction to the general setting under consideration, as well as the ML approach utilised. Specifically, we examine an arbitrary system that is interacting with a bosonic environment and we give some background on the ML model used, namely, NNs. Next, in Section III we detail the physical models that are considered. We investigate two SB models: in the first case, we are able to exactly derive the pure dephasing dynamics starting from the full system-bath unitary evolution; in the second case, we work in the weak coupling limit to approximately derive the reduced dynamics of the system. In both cases, the dynamics can feature non-Markovian effects, depending on the SD we select. In Section IV we discuss the architecture of the NNs, along with a detailed discussion of the results of training and testing for each model. Finally, we give our conclusive remarks and discuss our future outlook in Section V.
## II General setting and methods
Let us consider the general setting of an arbitrary system interacting with an environment which is comprised of infinitely many bosonic modes, as shown in Figure 1. This scenario reproduces the ubiquitous Caldeira-Leggett model, which describes the motion of a quantum particle undergoing a Brownian motion [35, 36]. The full (time-independent) Hamiltonian reads as
\[\hat{H}=\hat{H}_{S}+\hat{H}_{B}+\hat{H}_{I}, \tag{1}\]
where \(\hat{H}_{S}\) and \(\hat{H}_{B}\) are the Hamiltonian operators of the system and the environment, respectively. The system-environment interaction term \(\hat{H}_{I}\) is expressed in the form
\[\hat{H}_{I}=\hat{X}\otimes\hat{B}, \tag{2}\]
where \(\hat{X}\) is a generic system operator, while \(\hat{B}\) is an operator of the bath. We take the latter as
\[\hat{B}=\sum_{k}\left(g_{k}\hat{b}_{k}^{\dagger}+g_{k}^{*}\hat{b}_{k}\right), \tag{3}\]
where the coefficient \(g_{k}\) accounts for the interaction strength between the system and the \(k\)-th mode of frequency \(\omega_{k}\), while \(\hat{b}_{k}^{\dagger}\) and \(\hat{b}_{k}\) are the creation and annihilation operators associated with it. The coupling coefficients enter the formal definition of the SD, i.e. \(J(\omega)=\sum_{k}|g_{k}|^{2}\delta\left(\omega-\omega_{k}\right)\), the latter encoding all the information about the system-environment interaction. Since we are interested in the typical irreversible open system scenario, we will assume that the distribution of modes forms a continuum, so that the system dynamics does not display recurrences [1; 37; 38]. In this limit, the SD appears in the expression for the correlation function of a bosonic bath, defined as \(\alpha_{\beta}(t)\equiv\langle\tilde{B}(t)\hat{B}(0)\rangle_{B}\), where \(\tilde{B}(t)\) is the bath operator in the interaction picture with respect to the free Hamiltonian \(\hat{H}_{0}=\hat{H}_{S}+\hat{H}_{B}\). In Appendix A, we show that if the environment is in a thermal Gibbs state, the correlation function can also be expressed as
\[\alpha_{\beta}(t)\equiv\nu(t)+i\mu(t)\,, \tag{4}\]
where
\[\begin{bmatrix}\nu(t)\\ \mu(t)\end{bmatrix}=\int\limits_{0}^{\infty}J(\omega)\begin{bmatrix}\cos\left(\omega t\right)\coth\left(\frac{\beta\omega}{2}\right)\\ -\sin\left(\omega t\right)\end{bmatrix}\mathrm{d}\omega\,, \tag{5}\]
with \(\beta=1/T\). Note that hereafter we will work in units such that \(\hbar=1\) and \(k_{B}=1\). The two functions \(\nu(t)\) and \(\mu(t)\) are also referred to as noise and dissipation kernels, respectively: the latter is independent of the temperature of the environment. The effective dynamics of the system, governed by a master equation, crucially depends on the correlation function \(\alpha_{\beta}(t)\), which represents the fingerprint of the environment. The function \(\alpha_{\beta}(t)\) is ultimately determined by the shape of the SD, which essentially contains all of the information about the environment needed to solve the dynamics of the system, and, thus, obtain the time evolution of any of its observables. The expectation value of a generic system observable at time \(t\) is indeed given by
\[\langle\hat{O}(t)\rangle\equiv\mathrm{Tr}_{SB}\left(\hat{O}e^{-i\hat{H}t}\, \hat{\rho}_{SB}^{0}\,e^{i\hat{H}t}\right), \tag{6}\]
Figure 1: Sketch of a generic open quantum system \(S\) interacting with a bosonic environment composed of infinitely many harmonic oscillators labelled by the integer \(n\). Each oscillator has frequency \(\omega_{n}\) and is coupled to the system at a rate \(g_{n}\).
where \(\hat{H}\) is the system-environment Hamiltonian of Equation (1), while the global initial state is factorised as \(\hat{\rho}^{0}_{SB}=\hat{\rho}^{0}\otimes\hat{\rho}_{B}\), with \(\hat{\rho}^{0}\) and \(\hat{\rho}_{B}\) being the initial system and environmental states, respectively. We assume the environment to be given by a large bosonic thermal reservoir, i.e. \(\hat{\rho}_{B}=e^{-\beta\hat{H}_{B}}/\mathcal{Z}_{B}\), where \(\mathcal{Z}_{B}\equiv\mathrm{tr}_{B}(e^{-\beta\hat{H}_{B}})\) is the reservoir partition function. Under these hypotheses, it can be shown that the only environmental quantity entering in the expression of \(\langle\hat{O}(t)\rangle\) is the SD \(J(\omega)\).
Here we focus on special classes of SDs, which can be expressed as [36; 39]
\[J(\omega)=\eta\omega_{c}^{1-s}\omega^{s}f(\omega,\omega_{c})\,, \tag{7}\]
where \(s>0\) is known as Ohmicity parameter, and \(\eta>0\) is the coupling strength between the system and the environment. The constant \(\omega_{c}\) is the cut-off frequency, while \(f(\omega,\omega_{c})\) is the cut-off function, which ensures that \(J(\omega)\to 0\) in the limit of large frequencies, i.e. \(\omega\to\infty\). In what follows we consider the exponential cutoff, namely \(f(\omega,\omega_{c})=e^{-\omega/\omega_{c}}\). Depending on the value of \(s\), we model different system-environment couplings, corresponding to various physical scenarios [36; 39; 40]. SDs with \(s=1\) (i.e. linear in the frequency \(\omega\)) are called Ohmic, while those for which \(s>1\) (\(s<1\)) are known as super-Ohmic (sub-Ohmic).
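As a concrete illustration, the following sketch evaluates the SD family of Equation (7) with the exponential cut-off and the corresponding kernels of Equation (5) by direct numerical integration; the parameter values are illustrative, and the quadrature may emit accuracy warnings for sub-Ohmic SDs at finite temperature.

```python
import numpy as np
from scipy.integrate import quad

def spectral_density(w, eta, wc, s):
    """J(w) = eta * wc**(1 - s) * w**s * exp(-w / wc), cf. Eq. (7)."""
    return eta * wc ** (1 - s) * w ** s * np.exp(-w / wc)

def kernels(t, eta, wc, s, beta):
    """Noise and dissipation kernels nu(t), mu(t) of Eq. (5) by numerical integration."""
    nu = quad(lambda w: spectral_density(w, eta, wc, s)
              * np.cos(w * t) / np.tanh(beta * w / 2), 0, np.inf, limit=200)[0]
    mu = -quad(lambda w: spectral_density(w, eta, wc, s)
               * np.sin(w * t), 0, np.inf, limit=200)[0]
    return nu, mu

# Sub-Ohmic (s < 1), Ohmic (s = 1) and super-Ohmic (s > 1) examples at a fixed time.
for s in (0.5, 1.0, 3.0):
    print(s, kernels(t=1.0, eta=0.25, wc=0.5, s=s, beta=10.0))
```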
In this work, we will use the tools provided by ML to classify the SD characterising the system-environment interaction. Specifically, we use an artificial NN that comprises many artificial neurons - essentially elementary computational units - arranged in a series of layers, as in Fig. 2 [8; 41]. Given a set of inputs \(\{x_{i}\}\), each neuron computes the weighted sum
\[z=\sum_{i}w_{i}x_{i}+b\,, \tag{8}\]
with weights \(w_{i}\) and a bias term \(b\). A non-linear activation function \(f\) is then applied to the result \(z\), yielding the output of the neuron \(y=f(z)\). The activation function used in this work is the standard sigmoid function, i.e. \(f(z)=1/(1+e^{-z})\). The aforementioned weights and biases are free parameters to be optimised. In addition, the outputs from each layer are input to the next layer. In this way, the input data propagates through the network, so that outputs from later layers become increasingly complex functions of the data. The first layer receives the input data and passes it to the subsequent layer, without performing any computation, while the final layer computes the final output of the network. Accordingly, we refer to these layers as the _input layer_ and _the output layer_, respectively. The layers between the input and output layers are known as _hidden layers_. Note that we opt for the aforementioned architecture due to its success in accomplishing the intended objective, without necessitating the use of a more complex architecture, such as a recurrent neural network [41].
For the purpose of classifying the SD using ML techniques, let us suppose we have the time evolution of a family of system observables \(\langle\hat{O}_{j}(t)\rangle\) (for a set of indices \(j\)) as input. These time signals can be gathered as outcomes of an experiment carried out in a laboratory, or, as in our case, they can be generated by solving the system dynamics (either exactly or approximately).
As each signal is a time series, we Fourier-decompose the signal. To this end, we compute the Fourier coefficients as
\[X_{k}^{j}=\sum_{n=0}^{N-1}\langle\hat{O}_{j}(t_{n})\rangle e^{-2\pi ikn/N}\,, \tag{9}\]
where \(N\) is the total number of time-steps and \(\langle\hat{O}_{j}(t_{n})\rangle\) denotes the \(n\)-th sampled point. We can reconstruct the original signal by inverting Equation (9), where \(X_{k}^{j}\in\mathbb{C}\), the sum runs over all the sampled points in the time series, and \(k\in[0,N-1]\). We split each coefficient \(X_{k}^{j}\) into its real and imaginary parts and train the network using the Fourier coefficients rather than the time series \(\langle\hat{O}_{j}(t)\rangle\) directly. Using the resulting dataset, we address the ternary classification problem of distinguishing between three different families (i.e. _classes_) of SDs according to their value of the Ohmicity parameter
Figure 2: Schematic of the setup: given the time evolution of an observable, denoted as \(\langle\hat{O}_{i}(t)\rangle\), we compute the corresponding Fourier coefficients \(\{X_{k}\}\). Then, with the aim of determining which class of spectral density is most compatible with the observed dynamics, we input the real and imaginary parts of the coefficients to a Neural Network. The outputs of the three artificial neurons in the output layer are the probabilities that the input belongs to each of the classes.
\(s\) [cf. Equation (7)]. In our case, the output layer of the NN has three artificial neurons which compute weighted sums \(z_{j}\) and apply the softmax activation function [42], defined as
\[f(z_{j})=\frac{e^{z_{j}}}{\sum_{k=1}^{N_{\mathrm{c}}}e^{z_{k}}}\,, \tag{10}\]
where \(N_{\mathrm{c}}\) is the number of classes (in our case \(N_{\mathrm{c}}=3\)). It follows that the outputs of the network are the predicted probabilities that the input belongs to a particular class. As is common for classification problems, we use the categorical cross-entropy as a loss function. Given a dataset containing \(N_{l}\) trajectories, let \(y_{ij}\) represent the true probability that the \(i\)-th trajectory belongs to the \(j\)-th class and let \(\hat{y}_{ij}\) denote the predicted probability of the same. Then the categorical cross-entropy is defined as [43]
\[L(\hat{y},y)=-\frac{1}{N_{l}}\sum_{i=1}^{N_{l}}\sum_{j=1}^{N_{\mathrm{c}}}y_{ij}\log(\hat{y}_{ij})\,. \tag{11}\]
The task of training the network reduces to an optimisation problem where the aim is to find the set of parameters that minimises the loss function. A schematic view of the setup is shown in Figure 2.
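A minimal sketch of the preprocessing step described above is given below: a sampled signal is mapped to the \(2N\) real features (real and imaginary parts of the DFT coefficients of Equation (9)) fed to the network. The trajectory used here is a synthetic stand-in.

```python
import numpy as np

def fourier_features(signal):
    """Split the DFT coefficients of Eq. (9) into real and imaginary parts (2N features)."""
    coeffs = np.fft.fft(signal)            # X_k = sum_n signal[n] * exp(-2*pi*i*k*n/N)
    return np.concatenate([coeffs.real, coeffs.imag])

# Synthetic placeholder trajectory with N = 400 samples.
t = np.linspace(0.0, 40.0, 400)
sigma_x = np.exp(-0.1 * t)                 # stand-in for <sigma_x(t)> under pure dephasing
features = fourier_features(sigma_x)
print(features.shape)                      # (800,) = 2N inputs for the network
```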
## III Generation of the dataset: spin-boson models
Given the general framework outlined in Section II, we now identify the systems to scrutinise. We focus on the dynamics of a Spin-Boson (SB) model consisting of a two-level system interacting with a bosonic bath. Therefore, in Equation (1), we choose
\[\hat{H}_{\mathrm{S}}=\frac{\omega_{0}}{2}\hat{\sigma}_{z}\,,\,\hat{H}_{ \mathrm{B}}=\sum_{k}\omega_{k}\hat{b}_{k}^{\dagger}\hat{b}_{k} \tag{12}\]
with \(\hat{\sigma}_{z}\) being the \(z\) Pauli operator. The choice of the system-environment coupling Hamiltonian \(\hat{H}_{I}\) leads to different physical scenarios, in general requiring different techniques to solve the dynamics. In Section III.1, we introduce an exactly solvable SB model, where the full system-environment unitary dynamics can be accessed, and the system dynamics is obtained by tracing out the environmental degrees of freedom. In Section III.2 we then choose a different form of coupling, which requires further approximations to effectively trace out the environment.
In both cases, the reduced dynamics of the system is governed by a master equation of the form
\[\dot{\hat{\rho}}=\mathcal{L}_{t}\hat{\rho}\,, \tag{13}\]
where \(\mathcal{L}_{t}\) is the Liouvillian (super)-operator accounting for both the unitary and non-unitary dynamics, and \(\hat{\rho}\) is the reduced density operator. Given the initial state of the system \(\hat{\rho}(0)=\hat{\rho}^{0}\), Equation (13) can be formally solved yielding \(\hat{\rho}=\hat{\rho}(t)=e^{\mathcal{L}_{t}t}\hat{\rho}^{0}\) at any time \(t\). It is thus immediate to obtain the expectation value of a generic observable \(\hat{O}\), i.e. \(\langle\hat{O}(t)\rangle\equiv\mathrm{tr}_{\mathrm{S}}\left(\hat{O}\hat{\rho}(t)\right)\). Since we are considering a SB model, a natural choice of the observable would be given by the Pauli operators, i.e. \(\left(\hat{O}_{1},\hat{O}_{2},\hat{O}_{3}\right)=\left(\hat{\sigma}_{x},\hat{\sigma}_{y},\hat{\sigma}_{z}\right)\) or a combination thereof.
### Pure Dephasing
Let us consider the case in which \(\hat{X}=\hat{\sigma}_{z}\) in Equation (2). Owing to this choice, the interaction Hamiltonian commutes with the system Hamiltonian and the populations of the reduced density matrix are left invariant by the dynamics. In this case, we can access the full unitary evolution, and exactly trace out the environmental degrees of freedom, thus yielding an analytical solution for the reduced dynamics [44; 45; 1]. In Appendix B, we explicitly solve the dynamics under the standard assumption of an initially uncorrelated system-environment state, where we assume the environment to be in a thermal Gibbs state. Working in the interaction picture, the evolved reduced density matrix at time \(t\) can be written in the \(\hat{\sigma}_{z}\) basis \(\{\ket{0},\ket{1}\}\) as
\[\hat{\rho}(t)=\begin{pmatrix}\rho_{00}^{0}&\rho_{01}^{0}e^{-\Gamma(t)}\\ \rho_{01}^{0*}e^{-\Gamma(t)}&1-\rho_{00}^{0}\end{pmatrix}\,, \tag{14}\]
with the decoherence function
\[\Gamma(t)=4\int_{0}^{\infty}\mathrm{d}\omega J(\omega)\coth\left(\frac{\beta \omega}{2}\right)\frac{1-\cos(\omega t)}{\omega^{2}}\,. \tag{15}\]
From Equation (14) we can easily deduce that the interaction with the environment induces pure dephasing in the \(\hat{\sigma}_{z}\) basis, with no dissipation (as deduced by comparing Equation (15) with Equation (5)). Moreover, it is worth emphasising that there might be choices of the SD leading to intervals of time in which \(\Gamma(t)\) decreases, i.e. to a negative dephasing rate \(\dot{\Gamma}(t)\). In such intervals, the system re-coheres as a result of (non-Markovian) memory effects of the dynamics [46].
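For illustration, the decoherence function of Equation (15) can be evaluated numerically; the sketch below assumes the zero-temperature limit (\(\coth\to 1\)) used later for data generation, in which \(\langle\hat{\sigma}_{x}(t)\rangle=e^{-\Gamma(t)}\) for the initial state \(|+\rangle\langle+|\) in the interaction picture. The parameter values are illustrative.

```python
import numpy as np
from scipy.integrate import quad

def gamma(t, eta, wc, s):
    """Zero-temperature decoherence function, Eq. (15) with coth(beta*w/2) -> 1."""
    integrand = lambda w: 4.0 * eta * wc ** (1 - s) * w ** s * np.exp(-w / wc) \
                          * (1.0 - np.cos(w * t)) / w ** 2
    return quad(integrand, 0.0, np.inf, limit=200)[0]

# <sigma_x(t)> = exp(-Gamma(t)) for |+><+| in the interaction picture (Ohmic example).
times = np.linspace(0.0, 40.0, 400)
sx = np.array([np.exp(-gamma(t, eta=0.25, wc=0.5, s=1.0)) for t in times])
print(sx[:5])
```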
### Amplitude Damping
Alternatively, we can turn to a set-up beyond pure dephasing, just by choosing \(\hat{X}=-\hat{\sigma}_{x}/2\) in the interaction Hamiltonian of Equation (2). Unlike the case discussed in Section III.1, the Hamiltonian does not exhibit any explicit symmetry, therefore we are not able to provide an exact solution for the dynamics. We can nevertheless effectively solve the dynamics, provided that we rely on further assumptions. Starting from an initially uncorrelated state, we can derive a master equation in the weak coupling regime, where we are still able to obtain non-Markovian effects. As outlined in Appendix C, we can derive a second-order approximated master equation that is local in time [47; 1; 48] and can be written in terms of dynamical equations for the components of the Bloch vector \(\langle\vec{\sigma}(t)\rangle=\left(\langle\hat{\sigma}_{x}(t)\rangle,\langle\hat{\sigma}_{y}(t)\rangle,\langle\hat{\sigma}_{z}(t)\rangle\right)^{\mathrm{T}}\), with
\(\langle\hat{\sigma}_{l}\rangle=\text{tr}_{S}(\hat{\sigma}_{l}\hat{\rho})\). These equations can be cast in the form
\[\frac{\text{d}\langle\vec{\sigma}(t)\rangle}{\text{d}t}=A(t)\langle\vec{\sigma} (t)\rangle+\vec{b}(t) \tag{16}\]
with \(\vec{b}(t)=(0,\,0,\,b_{z}(t))^{T}\) and \(b_{z}(t)=\int_{0}^{t}\text{d}s\,\mu(s)\sin\left(\omega_{0}s\right)\). We have also introduced the matrix
\[A(t)=\begin{pmatrix}0&-\omega_{0}&0\\ \omega_{0}+a_{yx}(t)&a_{zz}(t)&0\\ 0&0&a_{zz}(t)\end{pmatrix} \tag{17}\]
with the time-dependent entries
\[a_{yx}(t) =\int_{0}^{t}\text{d}s\,\nu(s)\sin\left(\omega_{0}s\right), \tag{18}\] \[a_{zz}(t) =-\int_{0}^{t}\text{d}s\,\nu(s)\cos\left(\omega_{0}s\right). \tag{19}\]
The noise and dissipation kernels \(\nu(t)\) and \(\mu(t)\) are defined in Equation (5).
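A possible numerical integration of Equations (16)-(19) is sketched below: the kernels of Equation (5) are evaluated by quadrature, the time-dependent coefficients are pre-tabulated on a grid, and the Bloch equations are then solved with a standard ODE integrator. The parameter values and grid are illustrative.

```python
import numpy as np
from scipy.integrate import quad, cumulative_trapezoid, solve_ivp

eta, wc, s_exp, beta, w0 = 0.1, 1.0, 1.0, 0.1, 1.0   # illustrative parameters

def J(w):
    return eta * wc ** (1 - s_exp) * w ** s_exp * np.exp(-w / wc)

def nu(s):   # noise kernel of Eq. (5)
    return quad(lambda w: J(w) * np.cos(w * s) / np.tanh(beta * w / 2), 0, np.inf, limit=200)[0]

def mu(s):   # dissipation kernel of Eq. (5)
    return -quad(lambda w: J(w) * np.sin(w * s), 0, np.inf, limit=200)[0]

# Pre-tabulate the time-dependent coefficients a_yx, a_zz, b_z on a grid.
grid = np.linspace(0.0, 10.0, 401)
nu_g = np.array([nu(s) for s in grid])
mu_g = np.array([mu(s) for s in grid])
a_yx = cumulative_trapezoid(nu_g * np.sin(w0 * grid), grid, initial=0.0)
a_zz = -cumulative_trapezoid(nu_g * np.cos(w0 * grid), grid, initial=0.0)
b_z = cumulative_trapezoid(mu_g * np.sin(w0 * grid), grid, initial=0.0)

def bloch_rhs(t, v):
    ayx, azz, bz = (np.interp(t, grid, c) for c in (a_yx, a_zz, b_z))
    sx, sy, sz = v
    return [-w0 * sy, (w0 + ayx) * sx + azz * sy, azz * sz + bz]

# Initial state |+><+| corresponds to the Bloch vector (1, 0, 0).
sol = solve_ivp(bloch_rhs, (0.0, 10.0), [1.0, 0.0, 0.0], t_eval=grid, max_step=0.05)
sigma_x_t = sol.y[0]     # the signal used as NN input for this model
```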
## IV Analysis and Results
In this Section, we present the results of our numerical experiments. We consider a two-level system, whose open dynamics depends on the choice of the coupling between the system and the bosonic environment, as discussed in Section III. For a given initial state, we generate a set of curves reproducing the time evolution of the expectation value of a system observable, i.e. \(\langle\hat{O}(t)\rangle\). Each signal is sampled at N = 400 successive and equally spaced points over a certain time interval \([t_{\text{min}},t_{\text{max}}]\), to ensure a sufficient resolution of the dynamics. As discussed in Section II, instead of directly using the time series, we input the \(2N\) real and imaginary parts of the Fourier coefficients \(X_{k}\). For this reason, we build the input layer with \(2N\) input neurons. The NN for each model consists of the input layer followed by 2 hidden layers where the first hidden layer comprises 250 neurons, and the second comprises 80 neurons. The output layer, instead, is made of 3 neurons, which matches the number of classes (Ohmic, sub-Ohmic, super-Ohmic). The choice of network architecture was iteratively refined, adding layers and neurons until the network achieved a high accuracy without overfitting.
In order to evaluate the performance of the NN, we use the classification accuracy which is defined as the percentage of trajectories that are classified correctly. We generate a training dataset containing \(N_{\text{Train}}\) trajectories which is used to train the model, a validation dataset containing \(N_{\text{Valid}}\) trajectories which is used to assess the performance during training, and a test dataset containing \(N_{\text{Test}}\) trajectories which is used to assess the final accuracy of the network. We optimise the NNs using whole batch gradient descent and the Adam optimiser with a learning rate of \(1\times 10^{-4}\).
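A minimal PyTorch sketch of the architecture and training loop just described is shown below; the randomly generated feature vectors and labels are placeholders for the actual Fourier-feature datasets.

```python
import torch
import torch.nn as nn

# Architecture described above: 2N Fourier features -> 250 -> 80 -> 3 class scores.
N = 400
model = nn.Sequential(
    nn.Linear(2 * N, 250), nn.Sigmoid(),
    nn.Linear(250, 80), nn.Sigmoid(),
    nn.Linear(80, 3),            # the softmax is folded into the cross-entropy loss below
)
optimiser = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()  # categorical cross-entropy, cf. Eq. (11)

# Placeholder data: rows are 2N-dimensional Fourier-feature vectors, labels in {0, 1, 2}.
x_train = torch.randn(4800, 2 * N)
y_train = torch.randint(0, 3, (4800,))

for step in range(200):                      # whole-batch gradient descent
    optimiser.zero_grad()
    loss = loss_fn(model(x_train), y_train)
    loss.backward()
    optimiser.step()

accuracy = (model(x_train).argmax(dim=1) == y_train).float().mean()
```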
### Pure Dephasing
We consider the evolution of the pure dephasing model introduced in Section III.1. We solve the system dynamics choosing the initial state \(\hat{\rho}^{0}=|+\rangle\langle+|\), with \(|+\rangle=(|0\rangle+|1\rangle)/\sqrt{2}\), while -- without loss of generality -- we keep the thermal bath at zero temperature, i.e. \(\beta\rightarrow\infty\). With this choice, we obtain the expectation value \(\langle\hat{\sigma}_{x}(t)\rangle\) within the time interval \(t\in[0,40]\). It is worth noting that alternative choices for the initial state can be made; however, it should be recognised that the initial state can influence the determination of an appropriate observable to consider. For instance, in the context of the pure dephasing model, when the initial state is an eigenstate of \(\hat{\sigma}_{x}\), the time evolutions of \(\langle\hat{\sigma}_{y}(t)\rangle\) and \(\langle\hat{\sigma}_{z}(t)\rangle\) are trivial. Alternatively, if an eigenstate of \(\hat{\sigma}_{y}\) is chosen as the initial state, then the time signals \(\langle\hat{\sigma}_{x}(t)\rangle\) and \(\langle\hat{\sigma}_{z}(t)\rangle\) are trivial. As input to the NN we use the real and imaginary components of the Fourier coefficients obtained using Equation (9). We generate a training, validation, and test set of size \(N_{\text{Train}}/2=N_{\text{Valid}}=N_{\text{Test}}=2400\). The numbers of trajectories in the Ohmic, sub-Ohmic and super-Ohmic classes are equal in all datasets.
We assess the performance of the NN in two scenarios: the first being where \(\omega_{c}\) and \(\eta\) are fixed, and the second being where they vary. In the first scenario, we consider the case where \(\eta=0.25\), \(\omega_{c}=0.5\), while the only parameter that varies is \(s\). At first, we want to test how the model performs when the classes are easy to differentiate. To that end, we consider trajectories with \(s\in(0,0.5]\) if the SD is sub-Ohmic and \(s\in[1.5,4]\) if it is super-Ohmic. If the SD is Ohmic, then \(s=1\). In Figure 3(a), a subset of trajectories from the resulting training set is plotted, where the green curves correspond to sub-Ohmic dissipation, while the yellow and blue curves are trajectories characterised by Ohmic and super-Ohmic dissipation, respectively. Given the substantial separation in the permissible values for \(s\) across the different classes, we expect that the performance of the NN will be high. In Figure 3(a), it is evident that the classes are easily distinguishable due to distinct characteristics exhibited by each of them. Specifically, the majority of the super-Ohmic curves display non-monotonic behaviour, while the Ohmic, sub-Ohmic and a small minority of super-Ohmic curves are monotonically decreasing. Moreover, there are distinct disparities in the decay rates among the different classes. Confirming our expectation, the accuracy of the network evaluated on both the training and the test set reaches 100% after \(\approx 20\) training iterations.
We then make the task a bit more difficult for the network by allowing \(s\in(0,1)\) for the sub-Ohmic dissipation and \(s\in(1,4]\) for the super-Ohmic dissipation. We anticipate that the task will be more difficult in this scenario due to the reduced separation in the allowed values for \(s\) across the classes. This is reflected in the resulting training trajectories, a subset of which are plotted in Figure 3(b), where we observe that the decay rates among the classes can be comparable in some cases. The final training accuracy of the network in this case reaches 99.72% after around 4000 training iterations, while the final test accuracy reaches 99.54%.
Next, to challenge the NN further, we consider the second scenario where we let \(\eta\) and \(\omega_{c}\) vary: the idea is to assess the performance as we increase the upper bounds of the intervals from which they are sampled. We let \(s\in(0,1)\) for the sub-Ohmic spectral densities and \(s\in(1,4]\) for the super-Ohmic
spectral densities. Initially, we set both \(\eta\) and \(\omega_{c}\) equal to \(0.1\); then we let them vary within the interval \([0.1,0.3]\). We increase the upper bound in increments of \(0.2\) until the interval becomes \([0.1,1.9]\). Figure 3(c) shows some example trajectories from the training set for \(\eta=\omega_{c}=0.1\), while Figure 3(d) shows some for \(\eta,\omega_{c}\in[0.1,1.9]\). From Figure 3(c), we can observe that the scenario is similar to that depicted in Figure 3(b), since the majority of super-Ohmic curves are non-monotonic, while a small fraction are monotonically decreasing. In addition, the decay rates across the three classes are comparable in some instances. In Figure 3(d) we observe that there is considerable overlap in the decay rates among the three classes, making the classification task significantly more difficult. The classification results after \(10^{4}\) training iterations are shown in Figure 4, where the blue curve is the accuracy evaluated on the training set and the green curve is the accuracy evaluated on the test set. As expected, the accuracy decreases as we consider larger intervals for \(\eta\) and \(\omega_{c}\): taking larger intervals essentially increases the amount of noise in the dataset. It is worth noting that the accuracy may improve with larger datasets or more training iterations.
### Amplitude Damping
We shall now analyse the amplitude damping model detailed in Section III.2. We choose the initial state \(\hat{\rho}^{0}=|+\rangle\langle+|\) and the bare frequency of the two-level system \(\omega_{0}=1\), while we keep the environmental inverse temperature \(\beta=0.1\). We subsequently solve for the dynamics of the system and determine the expectation value \(\langle\hat{\sigma}_{x}(t)\rangle\) within the time interval \(t\in[0,10]\). As for the previous model, we use the real and
Figure 3: Pure dephasing model: some of the curves from the datasets used to train the NN. Each curve represents the time evolution of the observable \(\langle\hat{\sigma}_{x}(t)\rangle\) for the initial state \(\hat{\rho}^{0}=|+\rangle\langle+|\), with \(\beta\to\infty\). The curves shown in panels (a) and (b) are generated by choosing \(\eta=0.25\) and \(\omega_{c}=0.5\). In panel (a) we have taken \(s\in(0,0.5]\) (\(s\in[1.5,4]\)) if the spectral density is sub-Ohmic (super-Ohmic). The curves in panels (b), (c) and (d) are generated by choosing \(s\in(0,1)\) (\(s\in(1,4]\)) if the spectral density is sub-Ohmic (super-Ohmic). In panel (c) we have taken \(\eta=\omega_{c}=0.1\) while \(\eta,\omega_{c}\in[0.1,1.9]\) in panel (d). The green curves in each panel correspond to sub-Ohmic dissipation while the yellow and blue correspond to Ohmic and super-Ohmic dissipation, respectively.
imaginary components of the Fourier coefficients obtained through Equation (9) as input to the NN. We let \(\eta\in(0,0.2]\), \(\omega_{c}\in[0.1,2]\). In addition, we take \(s\in(1,2]\) [\(s\in[0.3,1)\)] if the SD is super-Ohmic [sub-Ohmic] and \(s=1\) if the SD is Ohmic. We generate a training, validation, and test set such that \(N_{\text{Train}}=1500\), and \(N_{\text{Valid}}=N_{\text{Test}}=300\). In all datasets, the Ohmic, sub-Ohmic and super-Ohmic classes have an equal number of trajectories. Figure 5 shows some of the curves from the resulting training set where, as before, the green curves represent sub-Ohmic dissipation while the yellow and blue curves correspond to trajectories characterised by Ohmic and super-Ohmic dissipation, respectively. The final training accuracy of the network in this case reaches \(97.93\%\) after \(10^{4}\) training iterations while the test accuracy is significantly lower and reaches \(93.00\%\).
Firstly, we would like to assess the number of time-points required to attain a high level of accuracy. It should be noted that it is generally advisable to avoid highly correlated features in a dataset, since their linear dependence implies that the value of one can be derived from that of the other [42]. Hence, mutually correlated features convey redundant information to the model, since each feature provides little or no additional information beyond what the other features already capture. Including all of the features will not improve the ability of the model to discriminate, but will increase the complexity of the algorithm, thus increasing the computational cost.
To this end, we introduce the Pearson correlation coefficient, which is a statistical measure of linear correlation between two variables [49, 50]. It ranges from a value of \(-1\), indicating perfect anti-correlations, to \(1\), when the variables are perfectly correlated. A value of \(0\) indicates that there is no linear relationship between the two variables. Let \(\langle\hat{\sigma}_{x}\rangle_{n}^{i}\) denote the \(n\)-th time-point of the \(i\)-th trajectory in a given dataset. Then the Pearson correlation coefficient between the \(n\)-th and \(m\)-th time-points, denoted as \(C_{nm}\), is given by the formula
\[C_{nm}\equiv\frac{\sum_{i=1}^{N}\Delta\langle\hat{\sigma}_{x}\rangle_{n}^{i}\,\Delta\langle\hat{\sigma}_{x}\rangle_{m}^{i}}{\sqrt{\sum_{i=1}^{N}\left(\Delta\langle\hat{\sigma}_{x}\rangle_{n}^{i}\right)^{2}}\sqrt{\sum_{i=1}^{N}\left(\Delta\langle\hat{\sigma}_{x}\rangle_{m}^{i}\right)^{2}}}\,, \tag{20}\]
where \(\Delta\langle\hat{\sigma}_{x}\rangle_{n}^{i}\equiv\langle\hat{\sigma}_{x}\rangle_{n}^{i}-\overline{\langle\hat{\sigma}_{x}\rangle}_{n}\), with \(\overline{\langle\hat{\sigma}_{x}\rangle}_{n}\) the average value of the \(n\)-th time step, and \(N\) the total number of trajectories in the dataset. We calculate the Pearson correlation coefficient between each pair of time steps in our training set and generate a correlation matrix, \(\mathbf{C}\), whose entries quantify the correlation between time-points. The resulting correlation heatmap, a graphical representation of the correlation matrix, is shown in Figure 6. From the heatmap, it can be observed that there is a high degree of correlation between adjacent and near-adjacent time points.
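The correlation matrix of Equation (20) can be obtained directly with standard numerical routines, as in the following sketch with placeholder data; `np.corrcoef` computes exactly these pairwise Pearson coefficients when the time points are treated as variables.

```python
import numpy as np

# trajectories: one row per trajectory, one column per time point (placeholder data).
rng = np.random.default_rng(0)
trajectories = rng.normal(size=(1500, 400))

# C[n, m] is the Pearson coefficient of Eq. (20) between time points n and m.
C = np.corrcoef(trajectories, rowvar=False)
print(C.shape)        # (400, 400); a heatmap of C reproduces the structure of Figure 6
```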
To address this issue, a common approach is to perform _feature selection_, identifying a subset of features that are the most informative and non-redundant. Retaining only one of two correlated features may expedite the learning process, without compromising the accuracy of the model. While we ideally want to avoid correlation between the features in a dataset, it is preferable to retain features which are correlated with the dependent variable [51]. Correlations make it possible to use the value of one variable to predict the value of another, meaning that features which are correlated with the output are predictive of the output.
Note that the Pearson correlation coefficient is only suitable for measuring the correlation between two continuous variables. As, in our case, the dependent variable consists of discrete labels, we can instead determine the degree of correlation between a feature and the dependent variable by examining whether the variance of the feature can be explained by the dependent variable. To do this, we group the feature into classes based on the discrete labels, compute the variance of each class, and calculate the difference between the mean of the resulting variances and the overall variance of the feature. If the mean of the class variances is significantly lower than the overall variance, this suggests that the feature and the dependent variable are correlated.
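A sketch of this variance-based relevance measure is given below; here we take the overall variance minus the mean within-class variance, so that larger values indicate a stronger association with the class label (the sign convention is our own choice).

```python
import numpy as np

def variance_explained(feature, labels):
    """Overall variance minus mean within-class variance; larger values suggest
    the feature is more strongly associated with the class label."""
    classes = np.unique(labels)
    within = np.mean([feature[labels == c].var() for c in classes])
    return feature.var() - within

# Placeholder example: a feature whose mean differs across three classes.
rng = np.random.default_rng(1)
labels = np.repeat([0, 1, 2], 100)
feature = rng.normal(loc=labels.astype(float), scale=0.3)
print(variance_explained(feature, labels))   # clearly positive here
```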
Figure 4: Pure dephasing model: the classification accuracy against the length of the interval from which \(\eta\) and \(\omega_{c}\) are sampled.
Figure 5: Amplitude damping model: some example curves from the training set. Each curve represents the time evolution of the observable \(\langle\hat{\sigma}_{x}(t)\rangle\) for the initial state \(\hat{\rho}^{0}=\ket{+}\bra{+}\), where \(\beta=0.1\), \(\eta\in(0,0.2]\), \(\omega_{c}\in[0.1,2]\). We choose \(s\in(1,2]\) if the spectral density is super-Ohmic, and \(s\in[0.3,1)\) if the spectral density is sub-Ohmic. The green curves correspond to sub-Ohmic dissipation while the yellow and blue correspond to Ohmic and super-Ohmic dissipation, respectively.
A possible strategy for performing feature selection and identifying the most salient features for learning is thus to sort the entries in the correlation matrix into descending order. Then, starting from the highest correlation, one can remove the contributing feature that exhibits the lowest correlation with the dependent variable. Using the above strategy, we can obtain a ranking of the features based on their importance and determine the order in which to remove features if we are to maintain a high classification accuracy. In this scenario, given that the time intervals between points may not be uniformly distributed, it becomes necessary to compute the Fourier coefficients using the non-uniform discrete Fourier transform [52]
\[\mathrm{X}_{k}=\sum_{n=0}^{N-1}\langle\hat{\sigma}_{x}(t_{n})\rangle e^{-2\pi ikp _{n}}\,, \tag{21}\]
where \(p_{n}\) are the non-uniform time points suitably scaled to fall between 0 and 1, while \(\langle\hat{\sigma}_{x}(t_{n})\rangle\) denotes the \(n\)-th sampled point in a given trajectory. As for the discrete Fourier transform, \(k\) is the frequency which is an integer number between 0 and \(N-1\). Note that if \(p_{n}=n/N\), then this equation reduces to the discrete Fourier transform shown in Equation (9).
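The following sketch implements Equation (21) directly for an arbitrary subset of retained time points; the selected indices and the test signal are placeholders.

```python
import numpy as np

def nudft(values, times):
    """Non-uniform DFT of Eq. (21): X_k = sum_n values[n] * exp(-2*pi*i*k*p_n),
    with p_n the sample times rescaled to lie between 0 and 1."""
    p = (times - times.min()) / (times.max() - times.min() + 1e-12)
    n = len(values)
    k = np.arange(n)
    return np.exp(-2j * np.pi * np.outer(k, p)) @ values

# Placeholder: keep an arbitrary subset of a 400-point signal and transform it.
t_full = np.linspace(0.0, 10.0, 400)
signal = np.exp(-0.3 * t_full) * np.cos(t_full)
keep = np.sort(np.random.default_rng(2).choice(400, size=60, replace=False))
coeffs = nudft(signal[keep], t_full[keep])
features = np.concatenate([coeffs.real, coeffs.imag])   # NN input, as before
```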
We compare the results obtained using the proposed feature selection algorithm with the results obtained by selecting time points uniformly, i.e. choosing time points that are evenly spaced throughout the datasets. For example, we might select the first in every 5 points or the first in every 100. Figure 7 shows a plot of the test accuracy against the number of selected time points for the two different selection methods. The blue curve shows the results obtained using uniform sampling, while the green curve shows the results obtained using the proposed feature selection algorithm. Firstly, the plot shows that the test accuracy remains consistently high until the number of time points is reduced to approximately 20. Beyond this point, a sharp decline in the accuracy is observed, as shown in the inset of Figure 7. Analysis of the plot indicates that the performance of the two time point selection methods is comparable across different ranges of selected time points. Specifically, when we take a number of time points between \(\approx\) 250 and \(\approx\) 400, there is little difference between the accuracy obtained using uniform sampling and the feature selection algorithm. However, in the range of approximately 40 to 250 time points, the feature selection algorithm shows slightly better results compared to uniform sampling. Lastly, taking fewer than \(\approx\) 40 points, the test accuracy fluctuates, but we can conclude that the performance of both methods is similar.
For the sake of completeness, we also explored various other methods for feature selection. For instance, after grouping each feature according to the discrete labels, we used one-way analysis of variance (ANOVA) to determine if there were statistically significant differences between the three groups [53]. We also considered the ratio of the mean of the variances of the groups and the overall variance, as opposed to the difference. Lastly, we attempted to assess the importance of each feature using principal component analysis. Specifically, we examined the degree to which each feature contributed to the principal components, as a large contribution to the principal components suggests that a feature is important in explaining the overall variability of the data [54]. We observed that none of the aforementioned methods outperformed the correlation-based feature selection algorithm employed in Figure 7.
## V Conclusions
We have shown that, in a standard open system scenario, a NN can perform SD-classification with high accuracy. First, we have considered an exactly solvable, pure-dephasing model, and assessed the performance of the NN as a classifier,
Figure 6: Amplitude damping model: The correlation heatmap for the entries \(C_{ij}\) of the correlation matrix \(\mathbf{C}\), where \(C_{ij}\) is the Pearson Correlation coefficient between the \(i\)-th and \(j\)-th time-point in the training set [cf. Equation (20)].
Figure 7: Amplitude damping model: the test accuracy of the NN against the number of time points when the time points are selected uniformly or using the proposed feature selection method described in the main text.
highlighting the limiting role played by the fluctuations of the SD parameters. We have then considered a SB model that, under a number of reasonable approximations, results in a master equation accounting for energy losses and decoherence. We observed that, despite the approximations being invoked, the NN can perform the SD-classification task with high accuracy. Furthermore, we thoroughly discussed the interplay between high accuracy in the classification task and the number of sampled points for the system observable.
The methodology introduced in this paper, as well as the case studies analysed therein, highlights the capability of ML techniques to characterise environments with arbitrary SDs, thus embodying a reliable tool for environment characterization and the provision of useful information for control and process diagnosis. This paves the way to, and leaves great hopes for, the full characterization of an unknown SD through, for instance, regression of the parameters rather than classification. We also stress that the method put forward here does not rely critically on how the information on the dynamics is specifically acquired. In this sense, we expect the method to maintain effectiveness even when considering classes of SDs leading to long-lived correlations that, in turn, would hinder the direct derivation of master equations in Lindblad-like form. In such cases, one should rely on more sophisticated simulation techniques - such as Hierarchical Equation Of Motion (HEOM) [55; 56], Time-Evolving Matrix Product Operators (TEMPO) [57], or Time-Evolving Density with Orthogonal Polynomials Algorithm (TEDOPA) [58; 59; 60], just to name a few. The combination of one of these methods with ML will help achieve the successful characterization and control of the environment affecting a given open system.
###### Acknowledgements.
We acknowledge insightful discussions with Ricardo Puebla during the early stages of this project. JB and MP thank the Leverhulme Trust Doctoral Scholarship grant LINAS. MP acknowledges support by the European Union's Horizon 2020 FET-Open project TEQ (766900), the Horizon Europe EIC Pathfinder project QuCoM (Grant Agreement No. 101046973), the Leverhulme Trust Research Project Grant UltraQuTe (grant RGP-2018-266), the Royal Society Wolfson Fellowship (RSWF/R3/183013), the UK EPSRC (EP/T028424/1), and the Department for the Economy Northern Ireland under the US-Ireland R&D Partnership Programme.
## Appendix A The Correlation Function of a Bosonic Bath
Here we explicitly derive the correlation function for an arbitrary quantum system that is interacting with an environment which is made up of infinitely many independent harmonic oscillators, i.e. Equations (4) and (5) of the main text. Given an interaction Hamiltonian in the form of Equation (2) and the bath operator \(\hat{B}\) given by Equation (3), we can compute the correlation function which is defined as
\[\alpha_{\beta}(t)=\langle\hat{B}(t)\,\hat{B}(0)\rangle_{B}=\mathrm{tr}_{B}\left(\hat{B}(t)\hat{B}(0)\hat{\rho}_{B}\right)\;. \tag{10}\]
We now move to the interaction picture via the relation \(\hat{B}(t)=e^{it\hat{H}_{B}}\hat{B}e^{-it\hat{H}_{B}}\), where \(\hat{H}_{B}=\sum_{k}\omega_{k}\hat{b}_{k}^{\dagger}\hat{b}_{k}\) is the Hamiltonian of a set of independent harmonic oscillators. Therefore, we have
\[\hat{B}(t) =\sum_{k}\left(g_{k}\hat{b}_{k}^{\dagger}e^{i\omega_{k}t}+g_{k}^ {*}\hat{b}_{k}e^{-i\omega_{k}t}\right)\;, \tag{11}\] \[\hat{B}(0) =\sum_{k}\left(g_{k}\hat{b}_{k}^{\dagger}+g_{k}^{*}\hat{b}_{k} \right)\;. \tag{12}\]
Thus, the expression for the correlation function reads
\[\langle\hat{B}(t)\hat{B}(0)\rangle_{B}=\sum_{k}|g_{k}|^{2}\left(\langle\hat{b} _{k}^{\dagger}\hat{b}_{k}\rangle_{B}\,e^{i\omega_{k}t}+\langle\hat{b}_{k}\hat {b}_{k}^{\dagger}\rangle_{B}\,e^{-i\omega_{k}t}\right)\;, \tag{13}\]
where we have utilised the fact that \(\langle\hat{b}_{k}\hat{b}_{l}\rangle_{B}=\langle\hat{b}_{k}^{\dagger}\hat{b}_{l}^{\dagger}\rangle_{B}=0\) and that \(\langle\hat{b}_{k}\hat{b}_{l}^{\dagger}\rangle_{B}\) and \(\langle\hat{b}_{k}^{\dagger}\hat{b}_{l}\rangle_{B}\) are non-zero if and only if \(k=l\). If we further assume that the environment is in thermal equilibrium at a temperature \(T\), then \(\hat{\rho}_{B}\) is represented by a thermal Gibbs state of the form
\[\hat{\rho}_{B}=\frac{e^{-\beta\hat{H}_{B}}}{\mathcal{Z}_{B}}\;, \tag{14}\]
where \(\mathcal{Z}_{B}\) is the reservoir partition function. As a result, we find that the quantity \(\langle\hat{b}_{k}^{\dagger}\hat{b}_{k}\rangle_{B}=N_{k}=(e^{\beta\omega_{k}}-1)^{-1}\) is the mean occupation number of the \(k\)-th mode of the environment. Finally, assuming that the bath modes form a continuum, we obtain the following expression for the correlation function:
\[\alpha_{\beta}(t)=\int_{0}^{\infty}\mathrm{d}\omega\,J(\omega)\,\left[\coth \left(\frac{\beta\omega}{2}\right)\cos\left(\omega t\right)-i\sin\left(\omega t \right)\right] \tag{15}\]
which can be recast in the form of Equations (4) and (5).
## Appendix B SB model (pure dephasing)
We now derive the equations governing the dynamics of the system described in section III.1. We work in the interaction picture and begin by deriving an expression for the unitary evolution operator \(\hat{U}(t)\) which acts on the composite system. Let us first notice that the two-time commutator of the interaction Hamiltonian is non-zero, i.e.
\[\left[\hat{H}_{I}(t),\hat{H}_{I}(t^{\prime})\right]=-2i\,\mathfrak{I}_{S}\otimes \sum_{k}|g_{k}|^{2}\sin(\omega_{k}(t-t^{\prime})), \tag{16}\]
where \(\mathfrak{I}_{S}\) is the identity operator acting on the system only. The latter is useful to evaluate the time evolution operators as
\[\hat{U}(t)=\mathcal{T}_{\leftarrow}\exp\left[-i\int_{0}^{t}\hat{H}_{I}(\tau) \,\mathrm{d}\tau\right]\,, \tag{17}\]
where \(\mathcal{T}_{\leftarrow}\) denotes the time ordering operator. Following the ideas in Ref. [61] (see also Ref. [62]), we can formally discretise the integral in the exponent of the unitary evolution
operator and denote \(\mathcal{H}_{n}=-i\hat{H}_{I}(n\mathrm{d}t)\), where \(\mathrm{d}t=t/N\). Taking the limit as \(N\to\infty\) we obtain
\[\hat{\mathcal{U}}(t)=\mathcal{T}_{\leftarrow}\lim_{\mathrm{d}t\to 0}\exp\left[\sum_{n=1}^{N }\mathcal{H}_{n}\,\mathrm{d}t\right]\,. \tag{30}\]
We use a generalisation of the Baker-Campbell-Hausdorff formula to calculate the exponential
\[e^{\sum_{n=1}^{N}\mathcal{H}_{n}}=\left(\prod_{n=1}^{N}e^{\mathcal{H}_{n}} \right)\left(\prod_{n<m}e^{-\frac{1}{2}[\mathcal{H}_{n},\mathcal{H}_{m}]} \right)\,, \tag{31}\]
which holds since the second order commutators vanish. The unitary evolution operator becomes
\[\hat{\mathcal{U}}(t)=\lim_{\mathrm{d}t\to 0}\prod_{n<m}e^{-\frac{1}{2}[ \mathcal{H}_{n},\mathcal{H}_{m}](\mathrm{d}t)^{2}}\prod_{n}e^{\mathcal{H}_{n}}\,, \tag{32}\]
where we have noticed that the commutator in the first exponent is just a complex number, so we may omit the time ordering operator. Recombining the exponentials of the operators we find
\[\begin{split}\hat{\mathcal{U}}(t)&=\lim_{\mathrm{d}t\to 0}e^{\frac{1}{2}\sum_{n<m}[\mathcal{H}_{n},\mathcal{H}_{m}](\mathrm{d}t)^{2}}e^{\sum_{n}\mathcal{H}_{n}\mathrm{d}t}\\ &=e^{\frac{1}{2}\int_{0}^{t}\mathrm{d}t_{1}\int_{0}^{t_{1}}\mathrm{d}t_{2}\left[\hat{H}_{I}(t_{2}),\hat{H}_{I}(t_{1})\right]}e^{-i\int_{0}^{t}\hat{H}_{I}(\tau)\mathrm{d}\tau}\,,\end{split} \tag{33}\]
where the first exponent - as a consequence of Equation (16) - only applies a global phase to the qubit. As a result, the dynamics of the system are solely governed by the operator
\[e^{-i\int_{0}^{t}\hat{H}_{I}(\tau)\mathrm{d}\tau}=e^{\hat{\sigma}_{z}\otimes \sum_{k}\left(\kappa_{k}(t)\hat{b}_{k}^{\dagger}-\kappa_{k}^{*}(t)\hat{b}_{k} \right)}\equiv e^{\hat{\sigma}_{z}\otimes\hat{A}(t)}\,, \tag{34}\]
with \(\kappa_{k}(t)=g_{k}\left(1-e^{i\omega_{k}t}\right)/\omega_{k}\). It is convenient to rewrite this operator in the form:
\[e^{\hat{\sigma}_{z}\otimes\hat{A}(t)} =I\otimes\sum_{n=0}^{\infty}\frac{\hat{A}(t)^{2n}}{(2n)!}+\hat{\sigma}_{z}\otimes\sum_{n=0}^{\infty}\frac{\hat{A}(t)^{2n+1}}{(2n+1)!} \tag{35}\] \[=I\otimes\cosh(\hat{A}(t))+\hat{\sigma}_{z}\otimes\sinh(\hat{A}(t))\,. \tag{36}\]
The matrix elements of the reduced density matrix are determined by explicitly tracing out the environmental degrees of freedom, i.e.
\[\hat{\rho}_{ij}(t)=\langle i|\operatorname{tr}_{B}\left\{\hat{\mathcal{U}}(t)\,\hat{\rho}^{0}\otimes\hat{\rho}_{B}\,\hat{\mathcal{U}}^{\dagger}(t)\right\}|j\rangle\,. \tag{37}\]
It follows that the coherences of the reduced density matrix evolve as
\[\hat{\rho}_{01}(t)=\hat{\rho}_{01}^{0}\left\langle e^{2\hat{A}(t)}\right\rangle\,, \tag{38}\]
with \(\hat{\rho}_{10}(t)=\hat{\rho}_{01}^{*}(t)\). Resorting to the identity \(\langle e^{\hat{A}}\rangle=e^{\langle\hat{A}^{2}\rangle/2}\), which holds for any operator \(\hat{A}\) that is a linear combination of creation and annihilation operators [63], we find that
\[\langle e^{2\hat{A}(t)}\rangle =e^{-2\sum_{k}|\kappa_{k}(t)|^{2}\langle b_{k}b_{k}^{\dagger}+b_ {k}^{\dagger}b_{k}\rangle} \tag{39}\] \[=e^{-2\sum_{k}|\kappa_{k}(t)|^{2}(2N_{k}+1)}\,.\]
Finally, substituting the expressions for \(\kappa_{k}\) and the mean occupation number of the \(k\)-th mode of the environment, \(N_{k}\), we obtain
\[\langle e^{2\hat{A}(t)}\rangle=e^{-\Gamma(t)}\,, \tag{40}\]
where we have assumed that the bath modes form a continuum. The function \(\Gamma(t)\) is the decoherence function given in Equation (15) of the main text.
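Using \(|1-e^{i\omega t}|^{2}=2(1-\cos\omega t)\) and the continuum limit, the exponent reduces to the familiar pure-dephasing form \(\Gamma(t)=\int_{0}^{\infty}\mathrm{d}\omega\,J(\omega)\coth(\beta\omega/2)\,4(1-\cos\omega t)/\omega^{2}\), up to the normalisation conventions fixed in the main text. The following is a minimal numerical sketch of the resulting coherence decay, again assuming an Ohmic spectral density with placeholder parameters.

```python
import numpy as np

def decoherence_function(t, beta, lam=0.1, omega_c=5.0, n_omega=4000, omega_max=100.0):
    """Pure-dephasing Gamma(t), assuming J(w) = lam * w * exp(-w / omega_c)."""
    omega = np.linspace(1e-6, omega_max, n_omega)
    J = lam * omega * np.exp(-omega / omega_c)
    integrand = J / np.tanh(beta * omega / 2.0) * 4.0 * (1.0 - np.cos(omega * t)) / omega**2
    return np.trapz(integrand, omega)

# Coherence decay |rho_01(t)| / |rho_01(0)| = exp(-Gamma(t))
for t in (0.0, 1.0, 5.0):
    print(f"t={t:.1f}  exp(-Gamma)={np.exp(-decoherence_function(t, beta=1.0)):.4f}")
```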
## Appendix C SB model (amplitude damping)
Here, we derive the equations governing the dynamics of the system described in Section III.2. The second-order generator of the TCL master equation leads to the following equation for the reduced density matrix in the interaction picture \(\tilde{\rho}\)[1; 2]:
\[\frac{d\tilde{\rho}}{dt}=-\int_{0}^{t}\mathrm{d}s\operatorname{tr}_{B}\left[\hat{H}_{I}(t),\left[\hat{H}_{I}(s),\tilde{\rho}\otimes\hat{\rho}_{B}\right]\right]\,, \tag{41}\]
where \(\tilde{\rho}\) and \(\hat{H}_{I}(t)=-\hat{\sigma}_{x}(t)\otimes\hat{B}(t)/2\) are expressed in the interaction picture with respect to the free Hamiltonian \(H_{S}\). The form of the bath operator \(\hat{B}(t)\) is given by Equation (26). By explicitly performing the calculations, changing the integration variable as \(s\to t-s\), and moving to the Schrödinger picture, we can rewrite this master equation as
\[\begin{split}\frac{d\hat{\rho}}{dt}=&-i\left[\hat{H}_{S},\hat{\rho}\right]-\frac{1}{4}\int_{0}^{t}\mathrm{d}s\,\left(\nu(s)\left[\hat{\sigma}_{x},\left[\hat{\sigma}_{x}(-s),\hat{\rho}\right]\right]\right.\\ &\left.+i\eta(s)\left[\hat{\sigma}_{x},\left\{\hat{\sigma}_{x}(-s),\hat{\rho}\right\}\right]\right),\end{split} \tag{42}\]
where \(\nu(s)\) and \(\eta(s)\) are respectively the real and imaginary parts of the correlation function given in Equation (5) of the main text. The corresponding dynamical equations for the components of the Bloch vector \(\langle\hat{\sigma}_{j}(t)\rangle=\operatorname{tr}_{S}\left[\hat{\sigma}_{j}\hat{\rho}(t)\right]\) read
\[\frac{d\langle\hat{\sigma}_{x}(t)\rangle}{dt} =-\omega_{0}\langle\hat{\sigma}_{y}(t)\rangle\,, \tag{43}\] \[\frac{d\langle\hat{\sigma}_{y}(t)\rangle}{dt} =\left(\omega_{0}+a_{yx}(t)\right)\langle\hat{\sigma}_{x}(t)\rangle+a _{yy}(t)\langle\hat{\sigma}_{y}(t)\rangle\,,\] (44) \[\frac{d\langle\hat{\sigma}_{z}(t)\rangle}{dt} =a_{zz}(t)\langle\hat{\sigma}_{z}(t)\rangle+b_{z}(t)\,, \tag{45}\]
where the time-dependent coefficients are defined in the main text [Cf. Equations (18) and (19)]. This set of coupled differential equations can be recast in the matrix form of Equations (16) and (17). |
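The coefficients \(a_{yx}\), \(a_{yy}\), \(a_{zz}\) and \(b_{z}\) are those of Equations (18) and (19) of the main text and are not reproduced here; the sketch below therefore takes them as user-supplied callables (the constant choices shown are placeholders only) and integrates Equations (43)-(45) numerically.

```python
from scipy.integrate import solve_ivp

def bloch_rhs(t, v, omega0, a_yx, a_yy, a_zz, b_z):
    """Right-hand side of Eqs. (43)-(45); the coefficient functions are supplied by the caller."""
    sx, sy, sz = v
    return [-omega0 * sy,
            (omega0 + a_yx(t)) * sx + a_yy(t) * sy,
            a_zz(t) * sz + b_z(t)]

omega0 = 1.0
coeffs = (lambda t: 0.0, lambda t: -0.1, lambda t: -0.1, lambda t: -0.1)   # a_yx, a_yy, a_zz, b_z (placeholders)
sol = solve_ivp(bloch_rhs, (0.0, 20.0), [1.0, 0.0, 0.0], args=(omega0, *coeffs), max_step=0.05)
print(sol.y[:, -1])   # Bloch vector at the final time
```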
2305.00615 | Streaming $k$-edit approximate pattern matching via string decomposition | In this paper we give an algorithm for streaming $k$-edit approximate pattern
matching which uses space $\widetilde{O}(k^2)$ and time $\widetilde{O}(k^2)$
per arriving symbol. This improves substantially on the recent algorithm of
Kociumaka, Porat and Starikovskaya (2022) which uses space $\widetilde{O}(k^5)$
and time $\widetilde{O}(k^8)$ per arriving symbol. In the $k$-edit approximate
pattern matching problem we get a pattern $P$ and text $T$ and we want to
identify all substrings of the text $T$ that are at edit distance at most $k$
from $P$. In the streaming version of this problem both the pattern and the
text arrive in a streaming fashion symbol by symbol and after each symbol of
the text we need to report whether there is a current suffix of the text with
edit distance at most $k$ from $P$. We measure the total space needed by the
algorithm and time needed per arriving symbol. | Sudatta Bhattacharya, Michal Koucký | 2023-05-01T00:51:59Z | http://arxiv.org/abs/2305.00615v1 | # Streaming \(k\)-edit approximate pattern matching via string decomposition
###### Abstract
In this paper we give an algorithm for streaming \(k\)-edit approximate pattern matching which uses space \(\widetilde{O}(k^{2})\) and time \(\widetilde{O}(k^{2})\) per arriving symbol. This improves substantially on the recent algorithm of Kociumaka, Porat and Starikovskaya [10] which uses space \(\widetilde{O}(k^{5})\) and time \(\widetilde{O}(k^{8})\) per arriving symbol. In the \(k\)-edit approximate pattern matching problem we get a pattern \(P\) and text \(T\) and we want to identify all substrings of the text \(T\) that are at edit distance at most \(k\) from \(P\). In the streaming version of this problem both the pattern and the text arrive in a streaming fashion symbol by symbol and after each symbol of the text we need to report whether there is a current suffix of the text with edit distance at most \(k\) from \(P\). We measure the total space needed by the algorithm and time needed per arriving symbol.
## 1 Introduction
Pattern matching is a classical problem of finding occurrences of a given pattern \(P\) in text \(T\). It can be solved in time linear in the size of the pattern and text [1, 2, 11]. The classical algorithms use space that is proportional to the pattern size. In a surprising work [20], Porat and Porat were the first to design a pattern matching algorithm that uses less space. They designed an _on-line_ algorithm that pre-processes the pattern \(P\) into a small data structure, and then it receives the text symbol by symbol. After receiving each symbol of the text, the algorithm is able to report whether the pattern matches the current suffix of the text. The algorithm uses poly-logarithmic amount of memory for storing the data structure and processing the text. This represents a considerable achievement in the design of pattern matching algorithms.
Porat and Porat also gave a small-space online algorithm that solves approximate pattern matching up-to Hamming distance \(k\), \(k\)_-mismatch approximate pattern matching_. In this problem we are given the pattern \(P\) and a parameter \(k\), and we should find all substrings of the text \(T\) that are at Hamming distance at most \(k\) from \(P\). Their algorithm uses \(\widetilde{O}(k^{3})\) space, and requires \(\widetilde{O}(k^{2})\) time per arriving symbol of the text.
Subsequently this was improved to space \(\widetilde{O}(k)\) and time \(\widetilde{O}(\sqrt{k})\)[16]. There has been a series of works [1, 1, 2
Kociumaka and Porat [14] on the stream of symbols coming from the decomposition as a black box. Bhattacharya and Koucky [1] also constructed a rolling sketch with limited update abilities, namely adding and deleting a symbol. We do not use that sketch here.
### Related work
Landau and Vishkin [13] gave the first algorithm for the \(k\)_-mismatch approximate pattern matching_ problem which runs in time \(O(k(m\log m+n))\) and takes \(O(k(m+n))\) amount of space. This was then improved to \(O(m\log m+kn)\) time and \(O(m)\) space by Galil and Giancarlo [1]. Later, Amir, Lewenstein and Porat [1] proposed two algorithms running in time \(O(n\sqrt{k\log k})\) and \(\widetilde{O}(n+k^{3}(n/m))\). The latter was improved by Clifford, Fontaine, Porat, Sach and Starikovskaya [14] who gave an \(\widetilde{O}(n+k^{2}(n/m))\) time algorithm. Charalampopoulos, Kociumaka and Wellnitz, in their FOCS'20 paper [14], also proposed an \(\widetilde{O}(n+k^{2}(n/m))\) time algorithm with slightly better \(polylog\) factors. An \(\widetilde{O}(n+kn/\sqrt{m})\) time algorithm was given by Gawrychowski and Uznanski [1], which showed a nice tradeoff between the \(O(n\sqrt{k\log k})\) and \(\widetilde{O}(n+k^{2}(n/m))\) running times. Not only that, they also showed that their algorithm is essentially optimal upto \(polylog\) factors, by proving a matching conditional lower bound. The \(polylog\) factors in the running time were then improved further by a randomized algorithm by Chan, Golan, Kociumaka, Kopelowitz and Porat [1], with running time \(O(n+kn(\sqrt{\log m/m}))\). This problem is thus quite well studied.
For the edit distance counterpart of the problem however, there is still a significant gap between the best upper bound and the known conditional lower bound. Landau and Vishkin [13] proposed an \(O(nk)\) time algorithm for the problem. This algorithm is still the state of the art for larger values of \(k\). Cole and Hariharan [1] gave an algorithm running in time \(O(n+m+k^{4}(n/m))\)(this runs faster if \(m\geq k^{3}\)). In their unified approach paper [14], Charalampopoulos, Kociumaka and Wellnitz also proposed an algorithm running in time \(O(n+m+k^{4}(n/m))\). The same authors in their FOCS'22 paper [14] gave an algorithm running in time \(O(n+k^{3.5}\sqrt{\log m\log kn/m})\), finally improving the bound after 20 years. For the lower bound, Backurs and Indyk [1] proved that a truly subquadratic time algorithm for computing edit distance would falsify SETH. This would imply that an algorithm for the \(k\)-edit approximate pattern matching which is significantly faster than \(O(n+k^{2}(n/m))\) is highly unlikely.
The online \(k\)_-mismatch approximate pattern matching_ problem was first solved by Benny Porat and Ely Porat in 2009 [2]. They gave an online algorithm which uses \(\widetilde{O}(k^{3})\) space and \(\widetilde{O}(k^{2})\) time per arriving symbol of the text. Clifford, Fontaine, Porat, Sach and Starikovskaya, in their SODA'16 paper [14], improved this to \(\widetilde{O}(k^{2})\) space and \(O(\sqrt{k}\log k+poly(\log n))\) time per arriving symbol of the text. Clifford, Kociumaka and Porat [14] proposed a randomized streaming algorithm which uses \(O(k\log{(m/k)})\) space and \(O(\log{(m/k)}(\sqrt{k\log k}+\log^{3}m))\) time per arriving symbol. The space upper bound is optimal up to logarithmic factors, matching the communication complexity lower bound. All these algorithms use some form of rolling sketch.
In the streaming model, Starikovskaya proposed a randomized algorithm [15] for the \(k\)-edit approximate pattern matching problem, which takes \(O(k^{8}\sqrt{m}\log^{6}m)\) space and \(O((k^{2}\sqrt{m}+k^{13})\log^{4}m)\) time per arriving symbol. Kociumaka, Porat and Starikovskaya [13] proposed an improved randomized streaming algorithm, which takes \(\widetilde{O}(k^{5})\) space and \(\widetilde{O}(k^{8})\) amortized time per arriving symbol of the text.
Notations and preliminaries
We use standard notation. For any string \(x=x_{1}x_{2}x_{3}\dots x_{n}\) and integers \(p,q\), \(x[p]\) denotes \(x_{p}\), \(x[p,q]\) denotes the substring \(x^{\prime}=x_{p}\dots x_{q}\) of \(x\), and \(x[p,q)=x[p,q-1]\). If \(q<p\), then \(x[p,q]\) is the empty string \(\varepsilon\). \(x[p,\dots]\) denotes \(x[p,|x|]\), where \(|x|\) is the length of \(x\). The "."-operator denotes concatenation, e.g., \(x\cdot y\) is the concatenation of the two strings \(x\) and \(y\). For strings \(x\) and \(y\), \(\operatorname{ED}(x,y)\) is the minimum number of modifications (_edit operations_) required to change \(x\) into \(y\), where a single modification can be adding a character, deleting a character or substituting a character in \(x\). All logarithms are base 2 unless stated otherwise. For integers \(p>q\), \(\sum_{i=p}^{q}a_{i}=0\) by definition regardless of the \(a_{i}\)'s.
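For reference, \(\operatorname{ED}(x,y)\) can be computed by the textbook dynamic program below; the streaming algorithm never materialises this quadratic-time table, the snippet only pins down the definition used throughout.

```python
def edit_distance(x: str, y: str) -> int:
    """Minimum number of insertions, deletions and substitutions turning x into y."""
    prev = list(range(len(y) + 1))
    for i, xi in enumerate(x, start=1):
        cur = [i]
        for j, yj in enumerate(y, start=1):
            cur.append(min(prev[j] + 1,                     # delete x[i]
                           cur[j - 1] + 1,                  # insert y[j]
                           prev[j - 1] + (xi != yj)))       # substitute, or keep if equal
        prev = cur
    return prev[-1]

assert edit_distance("Warsaw", "London") == 6
```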
### Grammars
We will use the following definitions from [1]. They are taken essentially verbatim. Let \(\Sigma\subseteq\Gamma\) be two alphabets and \(\#\not\in\Gamma\). A _grammar_ \(G\) is a set of _rules_ of the type \(c\to ab\) or \(c\to a^{r}\), where \(c\in(\Gamma\cup\{\#\})\setminus\Sigma\), \(a,b\in\Gamma\) and \(r\in\mathbb{N}\). \(c\) is the _left hand side_ of the rule, and \(ab\) or \(a^{r}\) is the _right hand side_ of the rule. \(\#\) is the starting symbol. The size \(|G|\) of the grammar is the number of rules in \(G\). We only consider grammars where each \(a\in\Gamma\cup\{\#\}\) appears on the left hand side of at most one rule of \(G\); we call such grammars _deterministic_. The evaluation \(\operatorname{eval}(G)\) is the string from \(\Sigma^{*}\) obtained from \(\#\) by iteratively rewriting the intermediate results using the rules of \(G\). If the rewriting process never stops or stops with a string not from \(\Sigma^{*}\), \(\operatorname{eval}(G)\) is undefined. We use \(\operatorname{eval}(G_{1},G_{2},\dots,G_{t})\) to denote the concatenation \(\operatorname{eval}(G_{1})\cdot\operatorname{eval}(G_{2})\cdots\operatorname{eval}(G_{t})\). Using a depth-first traversal of a deterministic grammar \(G\) we can calculate its _evaluation size_ \(|\operatorname{eval}(G)|\) in time \(O(|G|)\). Given a deterministic grammar \(G\) and an integer \(m\) at most its evaluation size, we can construct in time \(O(|G|)\) another grammar \(G^{\prime}\) of size \(O(|G|)\) such that \(\operatorname{eval}(G^{\prime})=\operatorname{eval}(G)[m,\dots]\). \(G^{\prime}\) will use some new auxiliary symbols.
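A small illustration of these definitions (our own sketch, not code from [1]): a deterministic grammar stored as a map from non-terminals to rules, with a memoised computation of the evaluation size that touches each rule once.

```python
from functools import lru_cache

# Rules: symbol -> ("pair", a, b) for c -> ab, or ("power", a, r) for c -> a^r.
# Symbols without a rule are terminals from Sigma.
rules = {
    "#": ("pair", "X", "Y"),
    "X": ("power", "a", 3),    # X -> aaa
    "Y": ("pair", "a", "b"),   # Y -> ab
}

def eval_size(symbol, rules):
    @lru_cache(maxsize=None)
    def size(s):
        if s not in rules:
            return 1
        kind, *args = rules[s]
        if kind == "pair":
            a, b = args
            return size(a) + size(b)
        a, r = args
        return r * size(a)
    return size(symbol)

print(eval_size("#", rules))   # 5, since eval(G) = "aaaab"
```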
We will use the following observation of Ganesh, Kociumaka, Lincoln and Saha [1]:
**Proposition 2.1** ([1]).: _There is an algorithm that on input of two grammars \(G_{x}\) and \(G_{y}\) of size at most \(m\) computes the edit distance \(k\) of \(\operatorname{eval}(G_{x})\) and \(\operatorname{eval}(G_{y})\) in time \(O((m+k^{2})\cdot\operatorname{poly}(\log(m+n)))\), where \(n=|\operatorname{eval}(G_{x})|+|\operatorname{eval}(G_{y})|\)._
We remark that the above algorithm can be made to output also full information about edit operations that transform \(\operatorname{eval}(G_{x})\) to \(\operatorname{eval}(G_{y})\). We will also use the following proposition which can be obtained from Landau-Vishkin algorithm [11] see e.g. a combination of Lemma 6.2 and Theorem 7.13 in [1]:
**Corollary 2.2**.: _For every pair of grammars \(G_{x}\) and \(G_{y}\) representing strings \(x\) and \(y\), respectively, and given a parameter \(k\) we can find in time \(O((m+k^{2})\cdot\operatorname{poly}(\log(m+n)))\), where \(n=|x|+|y|\) and \(m=|G_{x}|+|G_{y}|\), the length of a suffix of \(x\) with the minimum edit distance to \(y\) among all the suffixes of \(x\), provided that the edit distance of the suffix and \(y\) is at most \(k\). If the edit distance of all the suffixes of \(x\) to \(y\) is more than \(k\) then the algorithm stops in the given time and reports that no suffix was found._
## 3 Decomposition algorithm
Bhattacharya and Koucky [1] give a string decomposition algorithm (_BK-decomposition algorithm_) that splits its input string into blocks, each block represented by a small grammar. With high probability over the choice of randomness of the algorithm, two strings of length at most \(n\) and edit distance at most \(k\) are decomposed so that the number of blocks is the same and at most \(k\) corresponding pairs of blocks differ. The edit distance between the two strings corresponds to the sum of edit distances of differing pairs of blocks.
More specifically, the BK-decomposition algorithm gets two parameters \(n\) and \(k\), \(k\leq n\), and an input \(x\). It selects at random pair-wise independent functions \(C_{1},\ldots,C_{L}\) and \(S\)-wise independent functions \(H_{0},\ldots,H_{L}\) from certain hash families, and using those hash functions it decomposes \(x\) into blocks, and outputs a grammar for each of the block. We call the sequence of the produced grammars the _BK-decomposition of \(x\)_. Here, parameters \(L=\lceil\log_{3/2}n\rceil+3\) and \(S=O(k\log^{3}n\log^{*}n)\). As shown in [1], the algorithm satisfies the following property.
**Proposition 3.1** (Theorem 3.1 [1]).: _Let \(x\) be a string of length at most \(n\). The BK-decomposition algorithm outputs a sequence of grammars \(G_{1},\ldots,G_{s}\) such that for \(n\) large enough:_
1. _With probability at least_ \(1-2/n\)_,_ \(x=\operatorname{eval}(G_{1},\ldots,G_{s})\)_._
2. _With probability at least_ \(1-2/\sqrt{n}\)_, for all_ \(i\in\{1,\ldots,s\}\)_,_ \(|G_{i}|\leq S\)_._
_The randomness of the algorithm is over the random choice of functions \(C_{1},\ldots,C_{L}\) and \(H_{0},\ldots,H_{L}\)._
The functions \(C_{1},\ldots,C_{L}\) can be described using \(O(\log^{2}n)\) bits in total and the \(S\)-wise independent functions \(H_{0},\ldots,H_{L}\) can be described using \(O(S\log^{2}n)\) bits in total. We also need the following special case of Theorem 3.12 [1].
**Proposition 3.2** (Theorem 3.12 [1]).: _Let \(u,x,y\in\Gamma^{*}\) be strings such that \(|ux|,|y|\leq n\) and \(\operatorname{ED}(x,y)\leq k\). Let \(G_{1}^{x},\ldots,G_{s}^{x}\) and \(G_{1}^{y},\ldots,G_{s^{\prime}}^{y}\) be the sequence of grammars output by the BK-decomposition algorithm on input \(ux\) and \(y\) respectively, using the same choice of random functions \(C_{1},\ldots,C_{L}\) and \(H_{0},\ldots,H_{L}\). With probability at least \(1-1/5\) the following is true: There exist an integer \(r\geq 1\), such that_
\[x=\operatorname{eval}(G_{s-s^{\prime}+1}^{x})[r,\ldots]\cdot \operatorname{eval}(G_{s-s^{\prime}+2}^{x},\ldots,G_{s}^{x})\quad\&\quad y= \operatorname{eval}(G_{1}^{y},\ldots,G_{s^{\prime}}^{y}),\]
_and_
\[\operatorname{ED}(x,y)=\operatorname{ED}(\operatorname{eval}(G_{s-s^{\prime} +1}^{x})[r,\ldots],\operatorname{eval}(G_{1}^{y}))+\sum_{i=2}^{s^{\prime}} \operatorname{ED}(\operatorname{eval}(G_{s-s^{\prime}+i}^{x}),\operatorname{ eval}(G_{i}^{y})).\]
The grammars for \(x\) can be built incrementally. For a fixed choice of functions \(C_{i},H_{i}\), and a string \(x\) we say that grammars \(G_{1}^{x},\ldots,G_{t}^{x}\) are _definite_ in its BK-decomposition \(G_{1}^{x},\ldots,G_{s}^{x}\) if for any string \(z\) and the BK-decomposition \(G_{1}^{xz},\ldots,G_{s^{\prime}}^{xz}\) of \(xz\) obtained using the same functions \(C_{i},H_{i}\), \(G_{1}^{x}=G_{1}^{xz}\),..., \(G_{t}^{x}=G_{t}^{xz}\). It turns out that all but the last \(\widetilde{O}(1)\) grammars in the BK-decomposition of \(x\) are always definite. The following claim appears in [1]:
**Proposition 3.3** (Lemma 4.2 [1]).: _Let \(n\) and \(k\) be given and \(R=O(\log n\log^{*}n)\) be a suitably chosen parameter. Let \(x,z\in\Gamma^{*}\), \(|xz|\leq n\). Let \(H_{0},\ldots,H_{L},C_{1},\ldots,C_{L}\) be given. Let \(G_{1}^{x},G_{2}^{x},\ldots,G_{s}^{x}\) be the output of the BK-decomposition algorithm on input \(x\), and \(G_{1}^{xz},G_{2}^{xz},\ldots,G_{s^{\prime}}^{xz}\) be the output of the decomposition algorithm on input \(xz\) using the given hash functions._
1. \(G_{i}^{x}=G_{i}^{xz}\) _for all_ \(i=1\ldots,s-R\)_._
2. \(|x|\leq\sum_{i=1}^{\min(s+R,s^{\prime})}|\operatorname{eval}(G_{i}^{xz})|\)_._
The following claim bounds the resources needed to update BK-decomposition of \(x\) when we append a symbol \(a\) to it.
**Proposition 3.4** (Theorem 5.1 [11]).: _Let \(k\leq n\) be given and \(R=O(\log n\log^{*}n)\) be a suitably chosen parameter. Let functions \(C_{1},\ldots,C_{L}\) and \(H_{0},\ldots,H_{L}\) be given. Let \(a\in\Sigma\) and \(x\in\Sigma^{*}\) be of length at most \(n\), and let \(G_{1}^{x},\ldots,G_{s}^{x}\) be the grammars output by the BK-decomposition algorithm on input \(x\) using functions \(C_{1},\ldots,C_{L},H_{0},\ldots,H_{L}\). \(\operatorname{UpdateActiveGrammars}(G_{s-\min(s,R+1)+1}^{x},\ldots,G_{s}^{x},a)\) outputs a sequence of grammars \(G_{1}^{\prime},\ldots,G_{t^{\prime}}^{\prime}\) such that \(G_{1}^{x},\ldots,G_{s-\min(s,R+1)}^{x},G_{1}^{\prime},\ldots,G_{t^{\prime}}^{\prime}\) is the sequence that would be output by the BK-decomposition algorithm on \(x\cdot a\) using the same functions \(C_{1},\ldots,C_{L},H_{0},\ldots,H_{L}\). The update algorithm runs in time \(\widetilde{O}(k)\) and outputs \(t^{\prime}\leq 4RL\) grammars._
### Encoding a grammar
Let \(S\) and \(M=O(S\log n)=O(k\log^{4}n\log^{*}n)\) be parameters determined by the BK-decomposition algorithm. [11] shows that each grammar of size at most \(S\) can be encoded as a string of size \(M\) over some polynomial-size alphabet \(\{1,\ldots,2\alpha\}\), where the integer \(\alpha\) can be chosen so that \(2M/\alpha\leq 1/n\). The encoding \(\operatorname{Enc}\) satisfies that if two grammars differ, their encodings differ in every coordinate. The encoding is randomized, and one needs \(O(\log n)\) random bits to select the encoding function. The encoding can be calculated in time linear in \(M\), and given \(\operatorname{Enc}(G)\) we can decode \(G\) in time \(O(M)\). The encoding satisfies:
**Proposition 3.5**.: _Let \(G,G^{\prime}\) be two grammars of size at most \(S\) output by BK-decomposition algorithm. Let encoding \(\operatorname{Enc}\) be chosen at random._
1. \(\operatorname{Enc}(G)\in\{1,\ldots,2\alpha\}^{M}\)_._
2. _If_ \(G=G^{\prime}\) _then_ \(\operatorname{Enc}(G)=\operatorname{Enc}(G^{\prime})\)_._
3. _If_ \(G\neq G^{\prime}\) _then with probability at least_ \(1-(2M/\alpha)\)_,_ \(\operatorname{Ham}(\operatorname{Enc}(G),\operatorname{Enc}(G^{\prime}))=M\)_, that is they differ in every symbol._
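To illustrate how properties (1)-(3) can coexist with \(O(M)\)-time decoding, here is a toy construction of our own (the actual encoding of [1] differs in its details and in how the alphabet bound is achieved): each coordinate packs one symbol of the padded serialisation of \(G\) together with a random polynomial fingerprint of the whole grammar, so two grammars whose fingerprints differ disagree in every coordinate.

```python
import random

P = (1 << 31) - 1   # prime modulus; stands in for the alphabet parameter

def make_encoder(M, seed=0):
    x = random.Random(seed).randrange(1, P)        # random evaluation point of the fingerprint

    def fingerprint(s):
        h = 0
        for ch in s:
            h = (h * x + ord(ch) + 1) % P
        return h

    def enc(serialised_grammar):
        s = serialised_grammar.ljust(M, " ")       # pad the serialisation to length M
        f = fingerprint(serialised_grammar)
        return [(ord(c), f) for c in s]            # one "packed" symbol per coordinate

    def dec(code):
        return "".join(chr(c) for c, _ in code).rstrip(" ")

    return enc, dec

enc, dec = make_encoder(M=32)
a, b = enc("#->XY;X->a^3;Y->ab"), enc("#->XY;X->a^2;Y->ab")
print(dec(a))                                      # decoding reads the serialisation back
print(sum(u != v for u, v in zip(a, b)))           # distinct grammars differ in all 32 coordinates
```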
### \(k\)-mismatch approximate pattern matching
Clifford, Kociumaka and Porat [12] design a streaming algorithm for _\(k\)-mismatch approximate pattern matching_ with the following properties. The algorithm first reads a pattern \(P\) symbol by symbol, and then it reads a text \(T\) symbol by symbol. Upon reading each symbol of the text it reports whether the word formed by the last received \(|P|\) symbols of the text are within Hamming distance at most \(k\) from the pattern. If they are within Hamming distance at most \(k\) we can request the algorithm to report the mismatch information between the current suffix of the text and the pattern. The parameters \(k\) and \(n\) are given to the algorithm at the beginning, where \(n\) is an upper bound on the total length of the pattern and the text. By _mismatch information_ between two strings \(x\) and \(y\) of the same length we understand \(\operatorname{MIS}(x,y)=\{(i,x[i],y[i]);\)\(i\in\{1,\ldots,|x|\}\)_and_\(x[i]\neq y[i]\}\). So the Hamming distance of \(x\) and \(y\) is \(\operatorname{Ham}(x,y)=|\operatorname{MIS}(x,y)|\). Clifford, Kociumaka and Porat [12] give the following main theorem.
**Proposition 3.6** ([12]).: _There exists a streaming \(k\)-mismatch approximate pattern matching algorithm which uses \(O(k\log n\log(n/k))\) bits of space and takes \(O((\sqrt{k\log k}+\log^{3}n)\log(n/k))\) time per arriving symbol. The algorithm is randomised and its answers are correct with high probability, that is it errs with probability inverse polynomial in \(n\). For each reported occurrence, the mismatch information can be reported on demand in \(O(k)\) time._
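Computed offline, the mismatch information defined above is just the list of differing positions; the point of Proposition 3.6 is that the streaming algorithm recovers the same object without ever storing both strings.

```python
def mismatch_information(x: str, y: str):
    """MIS(x, y) for equal-length strings; Ham(x, y) is its size."""
    assert len(x) == len(y)
    return [(i + 1, a, b) for i, (a, b) in enumerate(zip(x, y)) if a != b]

mis = mismatch_information("Warsaw", "Warsau")
print(mis, len(mis))   # [(6, 'w', 'u')] 1
```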
Algorithm overview
Now we provide the high-level view of how we proceed. We will take the pattern \(P\) and apply on it the BK-decomposition algorithm. That will give us grammars \(G_{1}^{P},G_{2}^{P},\ldots,G_{r}^{P}\) encoding the pattern. This has to be done incrementally as the symbols of \(P\) arrive. Then we will incrementally apply the BK-decomposition algorithm on the text \(T\).
We will not store all the grammars in memory; instead, we will use the \(K\)_-mismatch approximate pattern matching algorithm_ of Clifford, Kociumaka and Porat [1] (_CKP-match algorithm_) on the grammars. Here \(K=k\cdot M\), where \(M\) is the encoding size of each grammar. For a suitable parameter \(R=\widetilde{O}(1)\), we will feed the grammars \(G_{1}^{P},\ldots,G_{r-R}^{P}\) to the CKP-match algorithm as a pattern. In particular, we will encode each grammar by the encoding function \(\mathrm{Enc}\) from Section 3.1, and we will feed the encoding into the CKP-match algorithm symbol by symbol.
Then, as the symbols of the text \(T\) arrive, we will incrementally build the grammars for \(T\) while maintaining only a small set of _active_ grammars. Grammars that become _definite_ will be fed into the CKP-match algorithm as its input text (again, each of the grammars encoded by \(\mathrm{Enc}\)). The CKP-match algorithm will report \(K\)_-mismatch_ occurrences of our pattern in the text. Each \(K\)_-mismatch_ occurrence corresponds to a match of the pattern grammars to the text grammars, with up to \(k\) differing pairs of grammars. We will recover the differing pairs of grammars and calculate their overall edit distance. We will combine this edit distance with the edit distance of the last \(R\) grammars of the pattern from the last \(R\) grammars of the text. (The last \(R\) grammars of the text contain the active grammars which have not yet been fed into the CKP-match algorithm.) If the total edit distance of the match does not exceed the threshold \(k\), we report it as a \(k\)-edit occurrence of \(P\) in \(T\). If required, we can also output the edit operations that transform the pattern into a suffix of \(T\). (Among the current suffixes of \(T\), we pick the one which gives the smallest edit distance from \(P\).)
The success probability of our scheme in reporting a particular occurrence of \(P\) in \(T\) is some constant \(\geq 1/2\). Thus, we run the processes in parallel \(O(\log n)\) times with independently chosen randomness to achieve small error-probability.
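For orientation, the input/output behaviour the algorithm has to realise can be pinned down by the following naive baseline, which keeps one column of the classical approximate-matching dynamic program per arriving symbol. It uses \(O(|P|)\) space and \(O(|P|)\) time per symbol, so it is emphatically not the small-space algorithm described below; it is shown only to make the task concrete.

```python
class NaiveStreamingKEdit:
    """After each text symbol, report whether some suffix of the text read so far
    is within edit distance k of the pattern (column-by-column dynamic program)."""

    def __init__(self, pattern: str, k: int):
        self.pattern, self.k = pattern, k
        # col[i] = min edit distance of P[1..i] to a suffix of the text read so far
        self.col = list(range(len(pattern) + 1))

    def feed(self, symbol: str) -> bool:
        prev, cur = self.col, [0]          # a candidate occurrence may start right here
        for i, p in enumerate(self.pattern, start=1):
            cur.append(min(prev[i] + 1,                      # text symbol left unmatched
                           cur[i - 1] + 1,                   # pattern symbol left unmatched
                           prev[i - 1] + (p != symbol)))     # match or substitution
        self.col = cur
        return self.col[-1] <= self.k

matcher = NaiveStreamingKEdit("warsaw", k=1)
print([matcher.feed(c) for c in "xxwarsawxx"])
```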
We describe our algorithm in more details next.
## 5 Description of the algorithm
Now we describe one run of our algorithm. The algorithm receives parameters \(n\) and \(k\), based on them it sets parameters \(L=O(\log n)\), \(R=O(\log n\log^{*}n)\), \(S=O(k\log^{3}n\log^{*}n)\), \(M=O(k\log^{4}n\log^{*}n)\), \(K=k\cdot M=O(k^{2}\log^{4}n\log^{*}n)\). Then it chooses at random pair-wise independent functions \(C_{1},\ldots,C_{L}\) and \(S\)-wise independent functions \(H_{0},\ldots,H_{L}\) needed by the BK-decomposition algorithm. It also selects the required randomness for the encoding function \(\mathrm{Enc}\). It initializes the CKP-match algorithm for \(K\)_-mismatch_ approximate pattern matching on strings of length at most \(n\cdot M\).
There are two phases of the algorithm. In the first phase the algorithm receives a pattern \(P\) symbol by symbol and incrementally builds a sequence of grammars \(G_{1}^{P},\ldots,G_{r}^{P}\) representing the pattern \(P\). All but the last \(R\) grammars are encoded using \(\mathrm{Enc}\) and sent to our instance of the CKP-match algorithm as its pattern (symbol by symbol of each encoding). In the second phase our algorithm receives an input text \(T\) symbol by symbol. It will incrementally build a sequence of grammars \(G_{1}^{T},G_{2}^{T},\ldots\) representing the received text. Whenever one of the grammars becomes _definite_ it is encoded by \(\mathrm{Enc}\) and sent to our instance of the CKP-match algorithm as the next part of its input text (symbol by symbol).
In the first phase, our algorithm uses the procedure given by Proposition 3.4 to construct the grammars
\(G_{1}^{P},\ldots,G_{r}^{P}\) incrementally by adding symbols of \(P\). The algorithm maintains a buffer of \(2R\)_active_ grammars which are updated by the addition of each symbol. Whenever the number of active grammars exceeds \(2R\) we encode the _oldest_ (left-most) grammars that are definite and pass them to our instance of CKP-match algorithm as the continuation of its pattern. The precise details of updating the grammars of the pattern are similar to that of updating them for text which we will elaborate on more. After the input pattern ends, we keep only \(R\) grammars \(G_{r-R+1}^{P},\ldots,G_{r}^{P}\), and we send all the other grammars to the CKP-match algorithm. Then we announce to the CKP-match algorithm the end of its input pattern. So the CKP-match algorithm received as its pattern encoding of grammars \(G_{1}^{P},\ldots,G_{r-R}^{P}\) in this order. (In the case we end up with fewer than \(R+1\) grammars representing \(P\) (\(r\leq R\)), we apply a _naive_ pattern matching algorithm without need for the CKP-match algorithm. We leave this simple case as an exercise to the reader.) For the rest of this description we assume that \(r>R\).
In the second phase, the algorithm will receive the input text \(T\) symbol by symbol. It will incrementally build a sequence of grammars representing the text using the algorithm from Proposition 3.4. We will keep at most \(R\)_active_ grammars \(G_{1}^{a},\ldots,G_{t}^{a}\) on which the algorithm from Proposition 3.4 will be applied. The active grammars represent a current suffix of \(T\). The prefix of \(T\) up-to that suffix is represented by grammars \(G_{1}^{T},\ldots,G_{s}^{T}\) which are definite. Out of those definite grammars we will explicitly store only the last \(R\) in a buffer, the other grammars will not be stored explicitly. (They will be used to calculate the current edit distance and to run the update algorithm from Proposition 3.4.) The encoding of all the definite grammars will be fed into the CKP-match algorithm as its input text whenever we detect that a grammar is definite.
As the algorithm proceeds over the text it calculates a sequence of integers \(m_{1},m_{2},\ldots,m_{s}\), where the algorithm stores only the last \(R\) of them in a buffer. Each value \(m_{i}\) is the minimal edit distance of \(\operatorname{eval}(G_{1}^{P},\ldots,G_{r-R}^{P})\) (a prefix of the pattern) to any suffix of \(\operatorname{eval}(G_{1}^{T},\ldots,G_{i}^{T})\) (a suffix of a prefix of the text) if the edit distance is less than \(k\). \(m_{i}\) is considered infinite otherwise. (Values \(m_{1},\ldots,m_{r-R-1}\) are all considered to be infinite.) The value \(m_{i}\) will be calculated after \(G_{i}^{T}\) becomes definite and we send the grammar to our CKP-match algorithm. (The CKP-match algorithm will facilitate its calculation.) Values \(m_{i}\) will be used to calculate the edit distance of the current suffix of the input text received by the algorithm. See Fig. 1 for an illustration.
We are ready to describe the basic procedures performed by the algorithm.
Figure 1: The alignment of text and pattern grammars after arrival of some text symbol. The pattern \(P\) is represented by grammars \(G_{1}^{P},\ldots,G_{r}^{P}\). Grammars \(G_{1}^{P},\ldots,G_{r-R}^{P}\) are encoded by \(\operatorname{Enc}\) and sent to the CKP-match algorithm as its pattern. The current text \(T\) is represented by the sequence of grammars \(G_{1}^{T},\ldots,G_{s}^{T},G_{1}^{a},\ldots,G_{t}^{a}\). Grammars \(G_{1}^{T},\ldots,G_{s}^{T}\) are encoded and committed to the CKP-match algorithm as its text. Grammars \(G_{1}^{a},\ldots,G_{t}^{a}\) are active grammars of the text, and might change as more symbols are added to the text.

_Symbol arrival._ Upon receiving the next symbol \(a\) of the input text, our algorithm invokes the algorithm from Proposition 3.4 on the \(R+1\) grammars \(G_{s-R+t}^{T},\ldots,G_{s}^{T},G_{1}^{a},\ldots,G_{t}^{a}\) to append the symbol \(a\). From the algorithm we receive back grammars \(G_{s-R+t}^{T},\ldots,G_{s}^{T},G_{1}^{\prime a},\ldots,G_{t^{\prime}}^{\prime a}\), where \(t^{\prime}<4RL\). (Here, \(\operatorname{eval}(G_{1}^{\prime a},\ldots,G_{t^{\prime}}^{\prime a})=\operatorname{eval}(G_{1}^{a},\ldots,G_{t}^{a})\cdot a\). The grammars \(G_{s-R+t}^{T},\ldots,G_{s}^{T}\) received from the algorithm are discarded as they are definite and should not change. The update algorithm needs them to have the proper context for compression.) If \(t^{\prime}>R\) then grammars \(G_{1}^{\prime a},\ldots,G_{t^{\prime}-R}^{\prime a}\) become definite and we will _commit_ each of them to the CKP-match algorithm as explained further. We will commit them in order \(G_{1}^{\prime a},\ldots,G_{t^{\prime}-R}^{\prime a}\). The remaining grammars \(G_{t^{\prime}-R+1}^{\prime a},\ldots,G_{t^{\prime}}^{\prime a}\) are relabelled as \(G_{1}^{a},\ldots,G_{t}^{a}\) and become the active grammars for the addition of the next symbol.
At this point our algorithm can output the minimal possible edit distance of the pattern to any suffix of the text received up-to this point. We explain below how such query is calculated.
_Committing a grammar._ When a grammar \(G\) becomes definite the algorithm commits the grammar as follows. Thus far, grammars \(G_{1},\ldots,G_{s}\) were committed and the sequence of values \(m_{1},\ldots,m_{s}\) was calculated. We set \(G_{s+1}=G\), calculate encoding \(\operatorname{Enc}(G_{s+1})\) and send the encoding symbol by symbol to our CKP-match algorithm. At this point we can calculate \(m_{s+1}\) using the mismatch information provided by our CKP-match algorithm. If \(s+1<r-R\) then we set \(m_{s+1}\) to \(\infty\) otherwise we continue as follows to calculate \(m_{s+1}\).
We query our CKP-match algorithm for the Hamming distance between the encoding of \(G_{1}^{P},\ldots,G_{r-R}^{P}\) (the pattern to the CKP-match algorithm) and the encoding of \(G_{s-r+R+2}^{T}\), \(G_{s-r+R+3}^{T},\ldots,G_{s+1}^{T}\) (the current suffix of the text of the CKP-match algorithm). If the Hamming distance is at most \(K=k\cdot M\), then we let the CKP-match algorithm recover the mismatch information. By the design of the encoding function, if two grammars differ then their encodings differ in all \(M\) positions (unless the encoding function \(\operatorname{Enc}\) fails, which happens only with negligible probability). Hence, the mismatch information consists of the encodings of up to \(k\) pairs of grammars, with their indexes relative to the pattern. Thus, from the mismatch information we recover pairs of grammars \((G_{1},G_{1}^{\prime}),\ldots,(G_{k^{\prime}},G_{k^{\prime}}^{\prime})\), for some \(k^{\prime}\leq k\) where \(G_{i}\) come from the text and \(G_{i}^{\prime}\) come from the pattern.
If \((G_{1},G_{1}^{\prime})\) is not the very first grammar pair \((G_{s-r+R+2}^{T},G_{1}^{P})\) (which we recognize by their index in the mismatch information) then we compute the edit distance for each pair of strings \(\operatorname{eval}(G_{i})\) and \(\operatorname{eval}(G_{i}^{\prime})\), \(i=1,\ldots,k^{\prime}\). We set \(m_{s+1}\) to be the sum of those distances.
If \((G_{1},G_{1}^{\prime})\) is the pair \((G_{s-r+R+2}^{T},G_{1}^{P})\) then we apply the algorithm from Corollary 2.2 to calculate the minimal edit distance between any suffix of \(\operatorname{eval}(G_{1})\) and the string \(\operatorname{eval}(G_{1}^{\prime})\). For \(i=2,\ldots,k^{\prime}\), we compute the edit distance of \(\operatorname{eval}(G_{i})\) and \(\operatorname{eval}(G_{i}^{\prime})\). We set \(m_{s+1}\) to be the sum of the \(k^{\prime}\) calculated values.
However, if the CKP-match algorithm declares that the Hamming distance of its pattern to its current suffix is more than \(K\), we set \(m_{s+1}=\infty\).
Finally, we discard \(G_{s-r+R}\) from the buffer of the last \(R\) committed grammars, and we discard \(m_{s-R+2}\) from the buffer of values \(m_{i}\). We set \(s\) to be \(s+1\). This finishes the process of committing a single grammar \(G\), and a next grammar might be committed.
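Because a differing pair of grammars makes its encodings disagree in all \(M\) coordinates, the at most \(k\) differing pairs can be read off the mismatch information by grouping the mismatching positions by grammar index, as the toy sketch below illustrates (the positions and symbols are made up for the example).

```python
def differing_grammar_indices(mismatch_info, M):
    """Group 1-based mismatch positions by grammar index (each differing pair of
    grammars contributes exactly M consecutive mismatching coordinates)."""
    groups = {}
    for pos, text_sym, pattern_sym in mismatch_info:
        groups.setdefault((pos - 1) // M, []).append((pos, text_sym, pattern_sym))
    return groups

mis = [(5, "a", "b"), (6, "c", "d"), (7, "e", "f"), (8, "g", "h")]   # toy data, M = 4
print(sorted(differing_grammar_indices(mis, M=4)))                  # [1]: the second pair differs
```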
_Pattern edit distance query._ After we process the arrival of a new symbol, update the active grammars as described above and commit grammars as necessary, the algorithm is ready to answer the edit distance query on the current suffix of the text \(T\) and the pattern \(P\). At this point grammars \(G_{1}^{T},\ldots,G_{s}^{T}\) were already committed to the CKP-match algorithm. There are current active grammars \(G_{1}^{a},\ldots,G_{t}^{a}\) which were not committed to the CKP-match algorithm, and there are \(R\) grammars \(G_{r-R+1}^{P},\ldots,G_{r}^{P}\) of the input pattern that were not committed to the CKP-match algorithm as part of its pattern. To answer the edit distance query we will compare the edit distance of those last \(R\) grammars of pattern \(P\) with the last grammars of the text, and we will combine this with a certain value \(m_{i}\), namely \(m_{s-R+t}\).
Let \(d=R-t\). If \(d>0\), for \(i=1,\ldots,d\) compute the edit distance of each pair \(\operatorname{eval}(G^{T}_{s-d+i})\) and \(\operatorname{eval}(G^{P}_{r-R+i})\). (Each grammar \(G^{T}_{s-d+i}\) is available in the buffer of the last \(R\) committed grammars.) For \(i=d+1,\ldots,R\), compute the edit distance of each pair \(\operatorname{eval}(G^{a}_{i-d})\) and \(\operatorname{eval}(G^{P}_{r-R+i})\). Sum those \(R\) values together with \(m_{s-d}\). If the sum is at most \(k\), output it; otherwise output \(\infty\).
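The final combination step is a plain sum and threshold; a minimal sketch with made-up distance values:

```python
def pattern_query(m_value, tail_distances, k):
    """Combine the stored value m_{s-d} with the R freshly computed edit distances."""
    total = m_value + sum(tail_distances)
    return total if total <= k else float("inf")

print(pattern_query(m_value=1, tail_distances=[0, 0, 2, 0], k=4))   # 3
print(pattern_query(m_value=1, tail_distances=[3, 2, 0, 0], k=4))   # inf
```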
Since we are running \(O(\log n)\) independent copies of our algorithm, each of the copies produces an estimate on the edit distance and we output the smallest estimate. That is the correct value with high probability.
## 6 Correctness of the algorithm
In this section we argue that the algorithm produces a correct output. First we analyze the probability of certain bad events happening when the algorithm fails and then we argue the correctness of the output assuming none of the bad events happens. There are several sources of failure in our algorithm.
1. The BK-decomposition algorithm might produce a decomposition of either the pattern or some suffix of the text with a grammar that is too big or with grammars that do not represent expected strings. (A failure of Proposition 3.1.)
2. The BK-decomposition algorithm produces a correct decomposition of the pattern and all suffixes of the text but grammars of some suffix of the text \(T\) and the pattern \(P\) do not align well. (A failure of Proposition 3.2.)
3. The encoding function \(\operatorname{Enc}\) fails for some pair of grammars produced by the BK-decomposition algorithm that the CKP-match algorithm is supposed to compare. (A failure of Proposition 3.5.)
4. BK-decomposition algorithm does not fail but the CKP-match algorithm fails to identify a \(K\)_-mismatch_ occurrence of its pattern or fails to produce correct mismatch information. (A failure of Proposition 3.6.)
The failure probability of events 1), 3) and 4) will be each bounded by inverse polynomial in \(n\), where \(n\) is the parameter sent to those algorithms as an upper bound on the length of the processed strings. Thus, if we expect our algorithm to process a text and a pattern of size at most \(N\), we can set the parameter \(n\) for the BK-decomposition algorithm to be \(N^{4}\) and for the CKP-algorithm to be \(N^{4}\cdot M=\widetilde{O}(N^{5})\), where \(M\) is calculated from \(n=N^{4}\) and \(k\) of the BK-decomposition algorithm. (Parameter \(k\) for the BK-decomposition algorithm is set to \(k\), and for the CKP-algorithm to \(K=k\cdot M=\widetilde{O}(k^{2})\).) We will run \(2\log N\) independent copies of our algorithm on the same text and pattern. Next we calculate the probability of failure in case 1), 3) and 4) in a particular copy of the algorithm.
**Event 1.** There is one pattern \(P\) of length at most \(N\); the probability of either of the two conditions in Proposition 3.1 failing on \(P\) is at most \(4/\sqrt{n}=4/N^{2}\). The probability of failure of Proposition 3.1 on any of the at most \(N\) prefixes of the text \(T\) is at most \(N\cdot 4/\sqrt{n}=4/N\). Thus the probability of the bad event 1) happening is at most \(4/N+4/N^{2}\).
**Event 3.** There are at most \(N\) grammars of the pattern encoded by \(\operatorname{Enc}\) and there are at most \(N\) grammars of the text encoded by \(\operatorname{Enc}\) and committed. Thus there are at most \(N^{2}\) pairs of grammars on which Proposition 3.5 could fail by encoding two distinct grammars by strings of Hamming distance less than \(M\) (failure in the third part of Proposition 3.5). Given our setting of parameters, the probability of the bad event 3) happening is at most \(N^{2}/n=1/N^{2}\).
**Event 4.** The probability that the CKP-match algorithm fails during its execution is at most \(1/n=1/N^{4}\).
Thus, the probability of a failure of 1), 3) or 4) is at most \(5/N\), for \(N\) large enough. We run \(2\log N\) copies of the algorithm so the probability that any of the copies fails because of events 1), 3), or 4) is at most \(10\log N/N\).
If none of the events 1), 3) and 4) occurs during the execution of the algorithm then the pattern and the text are correctly decomposed into grammars by the BK-decomposition, the grammars are properly encoded by \(\mathrm{Enc}\), and the CKP-match algorithm correctly identifies all the occurrences of the pattern grammars in the committed text grammars, and for each of the occurrences we correctly recover the differing pairs of pattern and text grammars. Assuming this happens, we want to argue that with a high probability our algorithm will correctly identify \(k\)-edit occurrences of the pattern \(P\) in the text \(T\).
After receiving a prefix of the text \(T[1,\ell]\), \(\ell\leq N\), we want to determine whether some suffix of \(T[1,\ell]\) has edit distance at most \(k\) from the pattern \(P\). Let \(a\) be such that \(T[a,\ell]\) has the minimal distance from \(P\). Clearly, if the edit distance between \(T[a,\ell]\) and \(P\) is at most \(k\) then \(a\in\{\ell-|P|-k+1,\ldots,\ell-|P|+k+1\}\). By Proposition 3.2 applied on \(u=T[1,a-1]\), \(x=T[a,\ell]\) and \(y=P\), each of the \(2\log N\) copies of our algorithm has probability at least \(4/5\) that the grammars of \(T\) are well aligned with grammars of \(P\). Being well aligned means that \(T[a,\ell]\) is a suffix of \(\mathrm{eval}(G^{T}_{s-r+t+1},\ldots,G^{T}_{s},G^{a}_{1},\ldots,G^{a}_{t})\) and
\[\mathrm{ED}(T[a,\ell],P) = \mathrm{ED}(\mathrm{eval}(G^{T}_{s-r+t+1})[b,\ldots],\mathrm{eval }(G^{P}_{1}))\] \[+ \sum_{i=2}^{r-t}\mathrm{ED}(\mathrm{eval}(G^{T}_{s-r+t+i}), \mathrm{eval}(G^{P}_{i}))\] \[+ \sum_{i=r-t+1}^{r}\mathrm{ED}(\mathrm{eval}(G^{a}_{i-r+t}), \mathrm{eval}(G^{P}_{i})),\]
for appropriate \(b\). Moreover, the minimality of \(a\) implies that
\[\mathrm{ED}(T[a,\ell],P) = \min_{b}\mathrm{ED}(\mathrm{eval}(G^{T}_{s-r+t+1})[b,\ldots], \mathrm{eval}(G^{P}_{1}))\] \[+ \sum_{i=2}^{r-t}\mathrm{ED}(\mathrm{eval}(G^{T}_{s-r+t+i}), \mathrm{eval}(G^{P}_{i}))\] \[+ \sum_{i=r-t+1}^{r}\mathrm{ED}(\mathrm{eval}(G^{a}_{i-r+t}), \mathrm{eval}(G^{P}_{i})).\]
Notice, regardless of whether Proposition 3.2 fails or not, the right-hand-side of the last equation is always at least \(\mathrm{ED}(T[a,\ell],P)\) since it is an upper-bound on the true edit distance of \(P\) to some suffix of \(T\). We will argue that each copy of the algorithm outputs the right-hand-side value of that equation if it has value at most \(k\), and \(\infty\) otherwise. Moreover, if at least one of the copies of our algorithm has \(T[a,\ell]\) and \(P\) well aligned, then the minimum among the values output by the different copies of our algorithm is \(\mathrm{ED}(T[a,\ell],P)\).
Since we have \(2\log N\) copies of the algorithm, the probability that none of the decompositions aligns \(T[a,\ell]\) and \(P\) well is at most \((1/5)^{2\log N}<1/N^{4}\). This upper-bounds the probability of error of outputting a wrong value of \(\min_{b}\mathrm{ED}(T[b,\ell],P)\) after receiving \(\ell\) symbols of the text. As there will be at most \(N\) distinct values of \(\ell\), the probability of outputting a wrong estimate of the edit distance of \(P\) to some suffix of \(T\) is at most \(N\cdot 1/N^{4}=1/N^{3}\), conditioned on none of the bad events 1), 3) or 4) happening. Overall, the probability of a failure of our algorithm is at most \(O(\log N/N)\leq 1/\sqrt{N}\), for \(N\) large enough, and it could be made an arbitrary small polynomial in \(N\) by choosing the parameters differently (\(n\) vs \(N\)).
It remains to argue that the copy of our algorithm which aligns \(T[a,\ell]\) and \(P\) well, outputs their edit distance. Consider the copy of the algorithm that aligns grammars of \(T[a,\ell]\) and \(P\) well. After arrival of the symbol \(T[\ell]\) and updating the grammars, there are active grammars \(G^{a}_{1},\ldots,G^{a}_{t}\), committed grammars \(G^{T}_{1},\ldots,G^{T}_{s}\) and the pattern grammars \(G^{P}_{1},\ldots,G^{P}_{r}\). If \(\mbox{ED}(T[a,\ell],P)\) is at most \(k\) then the number of grammars in which \(P\) differs from the last \(r\) grammars of \(T\) is at most \(k\). Thus the CKP-match algorithm can identify the differing grammars when computing the value \(m_{s-R+t}\) which is set to
\[m_{s-R+t} = \min_{b}\mbox{ED}(\mbox{eval}(G^{T}_{s-r+t+1})[b,\ldots],\mbox{ eval}(G^{P}_{1}))\] \[+ \sum_{i=2}^{r-R}\mbox{ED}(\mbox{eval}(G^{T}_{s-r+t+i}),\mbox{eval }(G^{P}_{i})).\]
Since, \(m_{s-R+t}\leq\mbox{ED}(T[a,\ell],P)\leq k\), we have the true value of \(m_{s-R+t}\). Thus,
\[\mbox{ED}(T[a,\ell],P) = m_{s-R+t}\] \[+ \sum_{i=r-R+1}^{r-t}\mbox{ED}(\mbox{eval}(G^{T}_{s-r+t+i}),\mbox{ eval}(G^{P}_{i}))\] \[+ \sum_{i=r-t+1}^{r}\mbox{ED}(\mbox{eval}(G^{a}_{i-r+t}),\mbox{eval }(G^{P}_{i})).\]
That is precisely how we evaluate the edit distance query of our algorithm.
If \(\mbox{ED}(T[a,\ell],P)>k\) then we will output a value \(>k\) as we output some upper bound on the edit distance. Any value \(>k\) is treated as the infinity.
## 7 Time complexity of the algorithm
In the first phase, we incrementally construct the grammars for the pattern \(P\), using the BK-decomposition algorithm from Proposition 3.4 on each symbol of \(P\) at a time. Updating the active grammars for each new symbol takes \(\widetilde{O}(k)\) time, committing each of the possible \(\widetilde{O}(1)\) definite grammars to the CKP-match algorithm takes \(\widetilde{O}(M\cdot\sqrt{K})=\widetilde{O}(k^{2})\). Thus the time needed per arriving symbol of the pattern is \(\widetilde{O}(k^{2})\).
For each symbol of the text that arrives during the second phase of the algorithm we need to update the active grammars of the text, update \(m_{s}\), and evaluate the edit distance of the pattern from the current suffix of text. This includes parts _Symbol arrival, Committing a grammar_ and _Pattern edit distance query_ of the algorithm.
_Symbol arrival._ Appending a symbol using the BK-decomposition algorithm from Proposition 3.4 takes \(\widetilde{O}(k)\) time.
_Committing a grammar._ Encoding the grammar takes \(O(M)\) time using the algorithm from Proposition 3.5, and committing it to the CKP-match algorithm takes time \(\widetilde{O}(k^{2})\), as in the pattern case.
Querying the CKP-match algorithm for Hamming distance \(K\) takes \(O(K)=\widetilde{O}(k^{2})\) time. This recovers at most \(k\) pairs of distinct grammars \((G_{i},G^{{}^{\prime}}_{i})\), \(1\leq i\leq k\). Computing the edit distance \(k_{i}\) of each pair of strings \(\mbox{eval}(G_{i})\) and \(\mbox{eval}(G^{{}^{\prime}}_{i})\) takes \(\widetilde{O}(S+k^{2}_{i})=\widetilde{O}(k+k^{2}_{i})\) time using Proposition 2.1. If \(\sum_{i}k_{i}\leq k\), the total time for the edit distance computation is bounded by \(\widetilde{O}(k^{2})\). If the computation runs for longer we can stop it, as we know \(m_{s}\) is larger than \(k\). Running the algorithm from Corollary 2.2 on the first pair of distinct grammars to compute the minimum edit distance between any suffix of \(\mbox{eval}(G_{1})\) and the string \(\mbox{eval}(G^{{}^{\prime}}_{1})\)
takes \(\widetilde{O}(S+k^{2})\) time. Thus committing a grammar takes time at most \(\widetilde{O}(k^{2})\), with the minimization algorithm on the first pair of grammars being the most expensive step.
_Pattern edit distance query._ This step requires the alignment of the last \(R\) grammars of the pattern with the appropriate grammars of the text and computing their edit distances. Using Proposition 2.1, computing edit distances of \(R\) pairs of grammars takes \(R\times\widetilde{O}(k^{2})=\widetilde{O}(k^{2})\) time.
As there are at most \(\widetilde{O}(1)\) committed grammars after processing each new symbol, the total time of this step is \(\widetilde{O}(k^{2})\) per arriving symbol.
## 8 Space complexity of the algorithm
During either phase of the algorithm, we store \(O(RL)=\widetilde{O}(1)\) active and updated grammars and buffer the last \(O(R)\) committed grammars. This requires space \(\widetilde{O}(k)\). Furthermore, the CKP-match algorithm requires \(\widetilde{O}(K)=\widetilde{O}(k^{2})\) space. The edit distance algorithm of Proposition 2.1 cannot use more space than its running time, so each invocation uses at most \(\widetilde{O}(k^{2})\) space. Similarly, Corollary 2.2 uses space \(\widetilde{O}(k^{2})\). Thus our algorithm uses space at most \(\widetilde{O}(k^{2})\) at any point during its computation.
## Acknowledgements
We thank Tomasz Kociumaka for pointing to us references for Corollary 2.2. We thank anonymous reviewers for helpful comments.
|
2310.15910 | Characterizing Mechanisms for Factual Recall in Language Models | Language Models (LMs) often must integrate facts they memorized in
pretraining with new information that appears in a given context. These two
sources can disagree, causing competition within the model, and it is unclear
how an LM will resolve the conflict. On a dataset that queries for knowledge of
world capitals, we investigate both distributional and mechanistic determinants
of LM behavior in such situations. Specifically, we measure the proportion of
the time an LM will use a counterfactual prefix (e.g., "The capital of Poland
is London") to overwrite what it learned in pretraining ("Warsaw"). On Pythia
and GPT2, the training frequency of both the query country ("Poland") and the
in-context city ("London") highly affect the models' likelihood of using the
counterfactual. We then use head attribution to identify individual attention
heads that either promote the memorized answer or the in-context answer in the
logits. By scaling up or down the value vector of these heads, we can control
the likelihood of using the in-context answer on new data. This method can
increase the rate of generating the in-context answer to 88\% of the time
simply by scaling a single head at runtime. Our work contributes to a body of
evidence showing that we can often localize model behaviors to specific
components and provides a proof of concept for how future methods might control
model behavior dynamically at runtime. | Qinan Yu, Jack Merullo, Ellie Pavlick | 2023-10-24T15:15:18Z | http://arxiv.org/abs/2310.15910v1 | # Characterizing Mechanisms for Factual Recall in Language Models
###### Abstract
Language Models (LMs) often must integrate facts they memorized in pretraining with new information that appears in a given context. These two sources can disagree, causing competition within the model, and it is unclear how an LM will resolve the conflict. On a dataset that queries for knowledge of world capitals, we investigate both distributional and mechanistic determinants of LM behavior in such situations. Specifically, we measure the proportion of the time an LM will use a counterfactual prefix (e.g., "The capital of Poland is London") to overwrite what it learned in pretraining ("Warsaw"). On Pythia and GPT2, the training frequency of both the query country ("Poland") and the in-context city ("London") highly affect the models' likelihood of using the counterfactual. We then use head attribution to identify individual attention heads that either promote the memorized answer or the in-context answer in the logits. By scaling up or down the value vector of these heads, we can control the likelihood of using the in-context answer on new data. This method can increase the rate of generating the in-context answer to 88% of the time simply by scaling a single head at runtime. Our work contributes to a body of evidence showing that we can often localize model behaviors to specific components and provides a proof of concept for how future methods might control model behavior dynamically at runtime.
## 1 Introduction
Large Transformer Language Models (Vaswani et al., 2017) (LMs) store information from pretraining which they can recall at inference time to generate text. This is paired with the exceptional ability of models to use provided context in order to produce coherent text that incorporates new facts. However, facts that are memorized in pretraining and facts that are provided in-context can often compete with each other; in some cases it might be desirable that the model ignores facts from pretraining (e.g., updating outdated information with a prompt), while in others we want the model to prefer what it learned in pretraining (e.g., ignoring false information in prompt injections). Currently, little is understood about the factors and mechanisms that control whether an LM will generate text respecting either the in context or memorized information.
Recent work in mechanistic interpretability aims to deeply explain the internal processes in LMs, allowing us to interpret the contributions of individual model components to the final predicted text (Olah et al., 2020; Wang et al., 2022; Nanda, 2022). We can use tools from these studies to shed light on the model components that are responsible for pushing the model more towards either memorized or contextual information. We study this relationship using a task that requires predicting a capital city in the face of conflicting information (see Figure 1). We measure how often a model will answer that the capital city of a country is the _in-context_ counterfactual (e.g., London) vs. the _memorized_ (ground truth) city (Warsaw) it learned in pretraining. Our study consists of two key sets of experiments.

Figure 1: We find that individual attention heads can play specific roles in using context information vs. recalling facts. By up or downweighting these heads, we can often control whether LMs use information from context which conflicts with its pretraining knowledge. For example, downweighting the memory attention head causes the model to prefer "London" above.
First, in Section 5, we investigate how distributional features of the pretraining data influence behavior. We find that the frequency of a fact in the pretraining corpus strongly correlates with model behavior. This analysis reveals several key findings: (1) The more frequently a country appears in pretraining, the more likely the LM is to generate the memorized capital city; (2) The more frequently the in-context city (i.e., the one with which we want to overwrite) appears, the less likely the model is to use it, regardless of the frequency of the country; (3) Larger models (up to the scale tested: 2.8b parameters) are less likely overall to use in-context information and prefer the memorized answer, even when the fact is less frequent.
Next, in Section 6, we use head attribution (Elhage et al., 2021; Nostalgebraist, 2020; Nanda, 2022) to show that we can localize promotion of the memorized or in-context answer to individual attention heads. By either upweighting or downweighting the head by a scalar value, we can control which answer the model prefers. In the most successful case, downweighting the memory head allows us to increase the rate of the in-context city to 88% while reducing the amount of memorized predictions to 4% on the world capitals task. In a qualitative analysis of the weights of this head (Section 6.4), we show that it specifically promotes geographic information. We find that forcing the opposite behavior, i.e., promoting the memorized answer, is more difficult, and that the mechanism these heads use doesn't necessarily generalize well (§6.5). Still, the method we discover is surgical, and only requires scaling a single head (0.00001% of Pythia-1.4b's parameters), suggesting that components within an LM may specialize for specific predictable functions, and providing a promising avenue for understanding the internal workings of LMs and further techniques for model editing.
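As a rough illustration of the kind of intervention studied in Section 6, the snippet below scales one attention head's contribution in GPT-2 by rescaling its slice of the input to the attention output projection, which has the same effect as scaling that head's value vectors. The layer and head indices are arbitrary placeholders rather than the specific heads identified in this work, and the snippet is only a sketch of one possible implementation.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

model = GPT2LMHeadModel.from_pretrained("gpt2").eval()
tok = GPT2Tokenizer.from_pretrained("gpt2")

LAYER, HEAD, SCALE = 9, 8, 0.0               # placeholder choices, not the heads found here
head_dim = model.config.n_embd // model.config.n_head

def scale_head(module, inputs):
    # Rescale one head's slice of the concatenated head outputs entering c_proj.
    hidden = inputs[0].clone()
    sl = slice(HEAD * head_dim, (HEAD + 1) * head_dim)
    hidden[..., sl] = SCALE * hidden[..., sl]
    return (hidden,)

handle = model.transformer.h[LAYER].attn.c_proj.register_forward_pre_hook(scale_head)

prompt = "The capital of Poland is London. Q: What is the capital of Poland? A:"
enc = tok(prompt, return_tensors="pt")
with torch.no_grad():
    out = model.generate(**enc, max_new_tokens=5, do_sample=False)
print(tok.decode(out[0, enc["input_ids"].shape[1]:]))

handle.remove()   # restore the unmodified model
```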
## 2 Related Work
The impressive, but sometimes unpredictable successes of LMs on performing tasks described in context (Brown et al., 2020) has spurred intense interest in the factors that allow models to solve tasks this way. Studies on pretraining datasets have found that higher pretraining term frequency is positively correlated with task performance on factual association tasks (Kandpal et al., 2022) and numerical reasoning (Razeghi et al., 2022), and relates to work on memorization vs. generalization in LMs (Hupkes et al., 2022). Haviv et al. (2023) analyze mechanisms used to recall memorized information by studying idiom generation. Model size is also shown to be a factor that affects tendency to use memorized vs. in context information (Wei et al., 2023). Previous work has also examined how deeply LMs interact with context during in-context learning (Min et al., 2022; Xie et al., 2022). Other work has focused on LMs' abilities to consider counterfactual or hypothetical contexts (Li et al., 2023; Qin et al., 2019), with mixed results in overwriting pretraining memory.
Our work is built heavily on previous work in mechanistic interpretability, which aims to reverse engineer model computations into human-understandable components (Nanda et al., 2023; Elhage et al., 2021; Wang et al., 2022). While knowledge from pretraining has been found to be stored in the feedforward (MLP) sublayers (Meng et al., 2023; Geva et al., 2021; Kobayashi et al., 2023; Dai et al., 2022), more recent work has also clarified the role of attention in this same process: Geva et al. (2023) find that attention heads extract facts from an earlier mentioned subject token (e.g., Poland) when required. This naturally sets up our study, which also considers attention heads as the source of the competing effect between copying the counterfactual from earlier in context vs. extracting the memorized fact from an earlier subject token. A core technique in these works is projecting activations from model components into the vocabulary space to make claims about their roles, which we generically refer to here as _logit attribution_ (Nostalgebraist, 2020; Wang et al., 2022; Merullo et al., 2023; Belrose et al., 2023; Dar et al., 2022; Millidge and Black, 2022). We leverage this technique to localize attention heads which tend to promote either context or memorized information (§6).
## 3 Task Design
We study the mechanism that language models use when given counterfactual information in context. For our analysis, we focus on a simple zero-shot task that requires producing the capital city for a given country, which serves as a representative example of the type of common facts that a language model could learn in pretraining. It consists of 248 country capital pairs with 6 additional aliases and
their respective capitals. To create counterfactuals in the dataset, we pair up every country with the rest of the 247 capitals using the following format:
The capital of {country} is {in-context city}. Q: What is the capital of {country}? A:
For example, we can fill in {country} with Poland and {in-context city} with London. The model has learned that the capital is Warsaw from pretraining, but with the in-context prompt, the task becomes ambiguous between whether it should output the known answer Warsaw or overwrite it with London. We query the model to generate a full sentence to determine which of the above task interpretations it preferred. We define Poland as the _country_ and London as the in-context answer. Correspondingly, we define Warsaw, the capital of Poland, as the memorized answer. If London is included in the sentence, then we consider the model to have produced the in-context answer. If the model generates Warsaw, then it is considered as the memorized answer. In total, we have 62,992 such pairs of country and capital.
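To make the counting procedure concrete, the sketch below shows one way the prompts and the scoring rule could be implemented; the three-entry capital dictionary, the commented-out generation call, and the simple substring matching are illustrative assumptions rather than the exact pipeline used for the reported numbers.

```python
TEMPLATE = ("The capital of {country} is {city}. "
            "Q: What is the capital of {country}? A:")

# toy stand-in for the full 248-entry country -> capital map used in the paper
capitals = {"Poland": "Warsaw", "France": "Paris", "China": "Beijing"}

def make_examples(capitals):
    """Pair every country with every other capital as the injected in-context answer."""
    examples = []
    for country, memorized in capitals.items():
        for in_context in capitals.values():
            if in_context == memorized:
                continue
            examples.append({
                "prompt": TEMPLATE.format(country=country, city=in_context),
                "memorized": memorized,
                "in_context": in_context,
            })
    return examples

def classify(generation, memorized, in_context):
    """Score a generated continuation: in-context answer takes priority if present."""
    if in_context in generation:
        return "in_context"
    if memorized in generation:
        return "memorized"
    return "other"

# generation with a real model would look roughly like (HuggingFace transformers):
#   tok = AutoTokenizer.from_pretrained("EleutherAI/pythia-1.4b")
#   model = AutoModelForCausalLM.from_pretrained("EleutherAI/pythia-1.4b")
#   ids = tok(example["prompt"], return_tensors="pt").input_ids
#   generation = tok.decode(model.generate(ids, max_new_tokens=20)[0])
print(classify("A: London is the capital.", "Warsaw", "London"))
```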
The World Capital dataset provides a clean analysis because each example has a unique memorized answer and a unique in-context answer. Language models perform well on this task, producing one of the two expected cities at least 80% of the time (varying depending on model size). The lack of noise in the responses makes the task a good choice for cleanly diagnosing model preference for memorized vs. in-context answers.
## 4 Models
We analyze the overwriting behavior primarily on the Pythia (Biderman et al., 2023) models as well as GPT2 series (Radford et al., 2019). The Pythia models are trained on the Pile (Gao et al., 2020) which we have full access to, allowing us to relate model behavior to frequency effects in the data. While we don't have access to the pretraining data of GPT2, we still report results, using the Pile frequencies as an approximation of what GPT2 might have seen.
## 5 Effect of Term Frequency on Model Tendency to Use Memorized Facts
### Experimental Setup
We hypothesize that the model will be less likely to overwrite information about more frequently appearing country/capital names. The number of in-context predictions will increase when the term frequency decreases. The number of memorized predictions will increase when the term frequency increases. To test this hypothesis, we search through the pretraining corpus of the Pythia model (i.e., the Pile (Gao et al., 2020), which contains 210 million text documents) in order to compute the term
Figure 2: Results from the 62,992 inputs formed by pairing every country with every counterfactual capital. We break down all the inputs into 10 percentile bins from the least to most frequent by four frequency criteria. Every percentile bin contains around 6,300 examples. The first two graphs reflect frequency based on the country (e.g., Poland). The upward trajectory of the red lines shows the positive correlation between the proportion of memorized answer predictions and frequency. The last two graphs reflect the frequency of the _in-context capital_ (e.g., London). The drop of the blue lines across all four graphs shows the negative correlation between the proportion of in-context answer predictions and term frequency.
frequency of country and city names.
We search both for the frequencies of the individual country and city names, as well as their co-occurrences in the dataset. Co-occurrence is measured by whether a country and a city appear together in the same document. We split the occurrences and co-occurrences into 10 percentile bins, with the 0th bin containing the least frequent 10%, and the 9th bin containing the top 10% most frequent terms. Every bin includes around 25 countries and capitals. We mix every country with all the other 248 capitals to form prompts. We have in total around 6200 instances per bin (given 63k prompts). To give some qualitative examples, capitals like Beijing are in the top percentile bin as measured by occurrence, while capitals like Akrotiri and Dhekelia are in the bottom. For the co-occurrences between country and capital, \(\langle\texttt{China},\texttt{Beijing}\rangle\) is in the top percentile and \(\langle\texttt{Guinea-bissau},\texttt{Bissau}\rangle\) is in the bottom percentile.
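A minimal sketch of the percentile binning, assuming per-entity document counts are already available; the counts below are invented placeholders rather than actual Pile statistics (co-occurrence counts would be binned the same way).

```python
import numpy as np

# invented placeholder counts; in the paper these come from scanning the Pile's
# ~210M documents for mentions (and, separately, for country-city co-occurrence
# within the same document)
counts = {"Beijing": 2_000_000, "Paris": 1_400_000, "Warsaw": 300_000,
          "Bissau": 4_000, "Palikir": 900, "Akrotiri and Dhekelia": 150}

names = list(counts)
freqs = np.array([counts[n] for n in names], dtype=float)

# 10 percentile bins: bin 0 holds the least frequent 10%, bin 9 the most frequent 10%
edges = np.percentile(freqs, np.linspace(0, 100, 11))
bins = np.clip(np.searchsorted(edges, freqs, side="right") - 1, 0, 9)

for name, b in sorted(zip(names, bins), key=lambda x: x[1]):
    print(f"bin {b}: {name}")
```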
We run the counterfactual world capital data through both the Pythia models as well as the GPT2 series of models. We generate a full sentence by decoding the output. We count the number of times the in-context and memorized answers appear in the decoded sentences and plot these counts as a function of the percentile bins described above.
### Results
As the frequency for the _country_ increases, there is more knowledge stored about the country during pretraining. Therefore, we intuitively expect to see that models are more inclined to predict the memorized answers as the frequency goes up. Figure 2 supports this intuition. We can see a clear upward trend in the pink line, reflecting the increasing proportion of the memorized answers as a function of the increase in term frequency. When the _country_ is more prevalent in the training data, the model has a greater tendency to predict memorized answers.
We also observe a relationship between the frequency of the in-context capital and the model's predictions. As the frequency for either the country or the in-context capital increases, the number of in-context answer predictions decreases. This is demonstrated by the drop of the blue lines in Figure 2. When the given in-context capital is more prevalent in the training data, for example Beijing, the model tends to predict the memorized answer. However, when the given in-context capital is less prevalent, such as Palikir, the model is more likely to predict the in-context answer. We run the same experiments across all the Pythia and GPT2 models of different sizes (see Appendix A) and see the same frequency effect, especially in larger models.
Figure 3 shows the increase in sensitivity to frequency with respect to model size. We find that as models increase in size, they become more likely overall to produce the memorized answers rather than in-context answers, and that this occurs with the most frequent countries. That is, as larger models become more likely to produce the memorized answer, the changes are not evenly distributed across frequency bins. Rather, a strong memorization bias is observed first for more frequent terms, and then, as models get larger, this extends to increasingly lower frequency terms. This can be observed in the transition from blue shading (more in-context answers) to red shading (more memorized answers). See Appendix B.1 for results showing
Figure 3: The proportion of in-context and memorized answers decomposed by the frequency of the _country_ (e.g., Poland) across all Pythia models of different sizes (cf. the \(2^{nd}\) graph in Figure 2). The upward trend of the red lines shows that as the model size increases, the model predicts more memorized answers. Blue and red shading indicates that the amount of in-context or memorized answers is higher, respectively. We find that as models get bigger, they first memorize more frequent capitals before the lower frequency ones.
this effect with respect to the frequency of cities and co-occurrences, where we observe the same trend.
## 6 Identifying and Manipulating Mechanisms for Recall
So far we have shown that (larger) models tend to have a preference to use the answer they have memorized. In this section we ask if there is a specific mechanism within the model that controls whether the memorized or in-context answer is generated, and whether that can be isolated from more general language generating abilities. Because the task boils down to whether the model copies information that was provided in context or not, we focus on analyzing the roles of specific attention heads. Prior work has demonstrated the importance of attention heads for performing copying tasks Wang et al. (2022); Elhage et al. (2021) as well as recall from memory Geva et al. (2023), which motivates our analysis of attention heads. We perform this analysis on only the largest models Pythia-1.4b, Pythia-2.8b, as well as GPT2-xl (see Appendix D).
### Head Attribution
The idea behind logit attribution techniques Nostalgebraist (2020); Wang et al. (2022); Nanda (2022) is to interpret activations or weights in a language model in terms of the vocabulary space. These methods work by using the unembedding matrix (i.e., language modeling head) in order to understand the role of a given component for a given task. This is built on the premise that the final hidden state of the model is the summation of the outputs of all of the components before it Elhage et al. (2021). That is, every layer of output can be traced back and decomposed as the contribution of each sublayer up to that point. We use head attribution to test whether individual heads tend to promote either the in-context capital or the memorized capital. Using this method, we are able to find a single head in each model that primarily controls the use of memorized information1.
Footnote 1: This is not to say that this is the _only_ job of this head in general, or that these are the only heads that play this role.
In Figure 4, we illustrate the method. The additive update made by the attention layer is composed of the individual updates of each attention head after it is passed through the \(W_{O}^{H}\) output matrix within the attention layer. We can project the \(i^{th}\) head into the space of the residual stream by multiplying with the \(i^{th}\) (\(d_{head},d_{model}\)) slice of this matrix (see Appendix C) and then multiplying with the unembedding matrix to get the logit values for the
Figure 4: The head attribution method showing the logit difference calculation for layer 15, head 7 in Pythia-1.4b on the example from Figure 1. Pythia-1.4b has 24 layers and 16 heads for each layer, totaling 384 heads to check. We obtain the memory head and in-context head in the following way: We divide the output weight matrix from an attention layer (\(W_{O}^{H}\)) into 16 components (one for each head) (Elhage et al., 2021). Then, we take the dot product between the output of each head \(i\) and the \(i^{\text{th}}\) component of the weight matrix. Afterward, we extract the corresponding vectors in the unembedding matrix for the memorized answer (e.g., Warsaw) and in-context answer (e.g., London). We dot product the projected head vector with the two vectors respectively, giving us a scalar value representing the logit for each of those words represented by the head. Subtracting these two scalars gives us the logit difference of the two answers from one specific head. Blue in the heatmap indicates that the head is promoting the in-context answer and red indicates the head is promoting the memorized answer.
memorized and in-context city tokens. We subtract these two scalar values to get the _logit difference_ (see Wang et al. (2022)).
Intuitively, this logit difference captures the effect the head has in promoting one word (relative to another) to be output as the final prediction. This provides us a practical way to calculate the role of each head, and to find heads that consistently push the model towards the memorized or in-context answer.
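The head attribution computation reduces to a few matrix products. The sketch below uses random tensors so that only the arithmetic is shown; the dimensions, token ids, and the per-head slicing of \(W_{O}^{H}\) are assumptions, and in a real run the activations and weights would be read out of the model at the final token position.

```python
import torch

torch.manual_seed(0)
n_heads, d_head, d_model, vocab = 16, 128, 2048, 50304   # Pythia-1.4b-like sizes (assumed)

# stand-ins for quantities read out of the model at the final token position
head_out = torch.randn(n_heads, d_head)          # output z_i of each attention head
W_O = torch.randn(n_heads, d_head, d_model)      # per-head (d_head, d_model) slices of W_O^H
W_U = torch.randn(d_model, vocab)                # unembedding / LM-head matrix

ids = {"memorized": 123, "in_context": 456}      # hypothetical token ids for Warsaw / London

logit_diff = torch.empty(n_heads)
for i in range(n_heads):
    resid_update = head_out[i] @ W_O[i]          # what head i adds to the residual stream
    logits = resid_update @ W_U                  # project that update into vocabulary space
    logit_diff[i] = logits[ids["memorized"]] - logits[ids["in_context"]]

# positive entries: head pushes toward the memorized city; negative: toward the in-context city
print(logit_diff.topk(3))
```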
Data:To identify specific heads, we randomly sample 10 examples from each percentile for which each model predicts the in-context answers and another 10 examples for which each model predicts the memorized answers. Thus, in total, we obtain 100 examples on which the original model predicts in-context cities and 100 examples on which it predicts memorized cities. We run these 200 examples through the model in batches of 5 and use head attribution to extract the logit difference between each head in every layer. We observe that there is a variation in the roles of every head throughout the batches, but we identify a series of heads that consistently push the model towards one answer or the other.
### Effect of Tuning Individual Attention Heads
Using head attribution, we identify two different types of heads: **memory heads** and **in-context heads**. The memory heads promote the prediction towards the memorized answer and the in-context heads promote the predictions towards the in-context answer. These heads are shown on the righthand side of Figure 4, which plots the relative effect of each head at each layer for promoting the in-context vs. memorized answers.
Since these heads heavily contribute to the logit increase for one of the two answers, we hypothesize that multiplying the value vectors by a scalar will enable us to increase or decrease the effect of each head. Let this multiplicative value be \(\alpha\). We hypothesize that tuning up the memory head will increase the number of answers that contain the ground truth answer, while tuning it down will increase the number that contain the in-context answer. The opposite should hold for the in-context head.
With this assumption, we apply the scaling intervention on the series of potential memory heads and in-context heads on the 200 sampled examples. From the series of potential heads, we pick the head that has the strongest effect in the intended direction. See Appendix F for results using an alternative memory head. For example, for the in-context head, this effect is measured by the proportion of times the head changes the original memorized answers into in-context answers at its optimal \(\alpha\). The analogous process is used to find and tune the memory head. Therefore, we identify one memory head and one in-context head (see Figure 5, Appendix C.3), each with their optimal \(\alpha\), as determined via tuning on the development set.
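The intervention itself is a one-line rescaling. Below is a minimal illustration, assuming the per-head attention outputs of a layer can be captured before the output projection (how to hook that tensor depends on the particular model implementation); all shapes and the chosen head index are placeholders.

```python
import torch

torch.manual_seed(0)
batch, seq, n_heads, d_head, d_model = 1, 12, 16, 128, 2048
alpha, head_idx = -0.7, 7        # scale and head chosen on the tuning set (cf. head 15.7)

# stand-in for the per-head attention outputs of one layer, captured before W_O;
# in a real model this tensor would be grabbed and modified inside a forward hook
head_out = torch.randn(batch, seq, n_heads, d_head)
W_O = torch.randn(n_heads * d_head, d_model)

head_out[:, :, head_idx, :] = alpha * head_out[:, :, head_idx, :]   # the intervention
attn_update = head_out.reshape(batch, seq, -1) @ W_O                # recombined residual update
print(attn_update.shape)
```

Because only a single head is rescaled, the edit touches a vanishing fraction of the parameters, which is what makes the intervention surgical.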
Figure 5 shows the effect of the \(\alpha\) parameter on the proportion of in-context vs. memorized answers for both the memory and in-context heads
Figure 5: With the chosen memory head (15.7) and in-context head (19.14), we apply a multiplicative factor (\(\alpha\)) to measure the effect on producing either memorized or in-context answers. This was performed on two 100-example tuning sets (§6.1). The first graph demonstrates the most successful case of intervention. By tuning the memory head (15.7) value by \(\alpha=-0.7\), we can flip 86% of the examples from originally predicting memorized answers to predicting in-context answers. The dotted line shows no intervention (\(\alpha=1\)). The gray dot shows the value of \(\alpha\) that produces the best results according to our criteria.
on Pythia-1.4b. Tuning the memory head down has a strong effect on the generated text, flipping more than 80% of the predictions to the given in-context answers, and preventing the model from ever producing the memorized answer. The other interventions show positive but weaker results. In general, the in-context head is less effective at flipping predictions, and promoting the memorized answer is more difficult than promoting the in-context answer.
### Results of Interventions on the World Capital Dataset
Figure 6 shows the intervention results on the full world capital dataset with the selected memory and in-context heads and their respective \(\alpha\). The result aligns with our expectations. Negatively tuning the memory head drastically increases the proportion of in-context answers. Specifically, whereas the model originally predicted in-context answers 26% of the time and memorized answers 43% of the time, after our intervention, the model predicts in-context answers 86.2% of the time and memorized answers only 4% of the time. Note that, on Pythia 1.4b, scaling a single head is analogous to modifying 0.00001% of model parameters. This suggests that this head plays a specific role in using the memorized answer in this task. Positively tuning the memory head also increases the memorized answer prediction to 50%. Positively tuning the in-context head pushes the model in the expected direction but has a more muted effect: increasing the amount of in-context answers by 12% but dropping the amount of memorized answers by about 20%. We observe that changing in-context predictions to memorized predictions is more difficult. In the fourth and fifth columns, when positively tuning the memory head and negatively tuning the in-context head, we hope to increase the proportion of memorized answers. While there is some increase, it is less pronounced, increasing by only 6%. Given the connection between facts learned in pretraining and the MLP layers (Geva et al., 2021; Meng et al., 2023; Merullo et al., 2023), it's possible that tuning attention alone is not enough to see higher performance in this setting.
We break down the intervention results from Figure 7 into term frequency percentile bins as in Section 5. We focus on the occurrence count of the country and the occurrence count of the in-context capital (London). We select two interventions, negatively tuning the memory head and positively tuning the in-context head in Pythia-1.4b, both of which should increase the in-context answers and decrease the memorized answers. We find that the intervention on the memory head overcomes the previously described frequency effects. Specifically, the dashed blue and pink lines are flat across percentiles. When positively tuning the in-context head, we observe that the frequency effects remain, and thus the intervention is not fully successful. In particular, even after intervention, the memorized answers are still positively correlated with term frequency, and the in-context answer is negatively correlated with frequency. Most prominently, tuning the in-context head does not substantially increase the number of in-context answers when the in-context city is high frequency (no mitigation of the city frequency effect), as shown by the blue lines in the 4th graph.
### Head Analysis
Tuning the memory head down inhibits the model's ability to promote the memorized capital city, as we have shown in the previous section. In this section, we explore why this is the case by analyzing the memory head weights. We find that the selected memory head (15.7) promotes geography-related tokens in the output space, suggesting that this head is responsible for this information as opposed to a more abstract 'truthfulness' direction.
Figure 6: Using the chosen memory head (15.7) and in-context head (19.14) with their respective scales chosen from Figure 5, we apply the scaling intervention on all of the 62,992 examples. Negatively tuning the memory head produced the most successful result.
Singular Value Decomposition:The product of the Value and Output weight matrices in an attention head forms the OV matrix Elhage et al. (2021), which controls how attending to some token affects the residual stream. Following Millidge and Black (2022); Belrose et al. (2023), we can decompose the \(OV\) matrix into \(\mathbf{SVD(OV)}=\mathbf{USV_{h}}\) where \(\mathbf{V_{h}}\) is the unitary matrix of the right singular vectors representing the subspaces the given head writes into the residual stream (as opposed to the Value weight matrix in \(\mathbf{OV}\)). See Millidge and Black (2022); Belrose et al. (2023) for more information.
The \(i\)th singular vector has the same size as the residual stream; if we decode this vector into the vocab space of the model with the unembedding matrix (the LM head) we can observe the semantic clusters that a given head most strongly promotes. Since the singular vectors are ordered, we know that the first singular vectors are the most important for the head. We qualitatively define the semantic clusters promoted by each head by looking at the top \(k\) tokens decoded from each singular vector.
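A sketch of this decomposition with placeholder weights; in practice \(W_{V}\), \(W_{O}\), and the unembedding matrix would be taken from the chosen head and the LM head, and the returned token ids would be mapped back to strings with the tokenizer.

```python
import torch

torch.manual_seed(0)
d_model, d_head, vocab, top_k = 2048, 128, 50304, 10

# placeholder weights; in practice these are one head's value/output projections
# and the model's unembedding matrix
W_V = torch.randn(d_model, d_head)
W_O = torch.randn(d_head, d_model)
W_U = torch.randn(d_model, vocab)

OV = W_V @ W_O                                          # the head's (low-rank) OV circuit
U, S, Vh = torch.linalg.svd(OV, full_matrices=False)    # rows of Vh: right singular vectors

for i in range(5):
    logits = Vh[i] @ W_U                                # decode into vocabulary space
    ids = torch.topk(logits, top_k).indices.tolist()
    print(f"singular vector {i}: top token ids {ids}")  # map ids -> strings with a tokenizer
```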
We compare the memory head 15.7 with the context head 19.4. If the memory head specifically promotes geographical information, we should see clear emphasis on this information that is not present in the context head. In Table 1, we decode the top 10 tokens from the first five singular vectors in each head and find that many of the memory head tokens are geographically focused. The trend becomes even more clear when comparing all singular vectors (see Appendix G). It should be noted that the alternate memory head studied in Appendix F is not as interpretable in the vocab space, despite giving similar intervention results. Understanding the contribution of such heads is an interesting direction for future work.
### Generalizing to New Data
We further explore the domain specificity of the memory (and context) head by applying the method to the COUNTERFACT dataset Meng et al. (2023), which queries factual knowledge from multiple domains. We apply the same scale intervention to the heads in the discovered mechanism. We focus on the paraphrase task of the dataset in the zero-shot setting. For example,
"Apple A5 is developed by Google. Apple A5 is created by"

The dataset replaces the memorized answer (Apple) with an in-context counterfactual (Google). We filter the dataset down to examples where the model predicts the ground-truth answer when the counterfactual prompts are not injected. We apply the same memory head, in-context head, and their respective intervention scales on the counterfactual dataset. That is, we do no additional dataset-specific analysis or tuning. We find that, despite the memory head's high impact on the world capital dataset (increasing the proportion of _in-context answers_ by 60%), it doesn't generalize to the COUNTERFACT dataset. For both interventions, the proportions of memorized answers and in-context answers both decrease. The model produces a higher proportion of invalid answers compared to the intervention on the world capital dataset. This
Figure 7: The frequency effect referred to in Section 5 disappears when we tune down the memory head, showing the success of this strategy. Positively tuning the in-context head shows decent success on lower frequency countries/capitals, but actually causes performance to fall apart in the higher frequency bins. The solid lines show the original predictions and the dotted lines show predictions after the intervention.
could be a result of COUNTERFACT requiring a much broader set of label types. The COUNTERFACT dataset includes label fields beyond geographical information, such as names and dates. The specific head we selected (15.7) was shown to encode memory in a specific domain, which could explain the poor performance on COUNTERFACT.
## 7 Discussion & Future Work
This paper investigates factors that influence a model's propensity to favor in-context vs. memorized factual associations, when the two compete with one another. Our results demonstrate that the frequency of information in the pretraining corpus can affect the model's tendency to use new, conflicting information provided in context. Building on this, we provide a proof of concept that this tendency can be controlled by a mechanism in the attention heads which allows us to manipulate LMs' tendency to prefer new in-context information without modifying any model parameters directly. By building off insights from mechanistic interpretability, we can localize single attention heads that contribute to this mechanism. This provides evidence that decomposing complex neural networks into understandable components is possible, even in models with billions of parameters. Still, we observe that the selected heads promote domain specific knowledge rather than a more abstract concept of truthfulness. This brittleness is characteristic of mechanistic analyses of larger models, and should be a priority for future work. Nonetheless, given the early stage of research on this level of analysis of large language models, findings of this type even in an isolated setting are exciting and can lay the groundwork for subsequently discovering more general mechanisms.
The exploratory methods described here suggest avenues via which future work might develop more sophisticated techniques for controlling and auditing deployed language models. Adapting LMs post-hoc for applications that require domain-specific information is a growing problem. For example, there are simultaneously reasons we might want to suppress the use of in-context information at run time (e.g., to combat prompt-injection attacks) as well as reasons we might want to encourage it (e.g., to enable users to provide new, personalized, or hypothetical information to the model). The intervention we describe in this work is intriguing in that it can be used without changing the model and can be turned on and off dynamically within the forward pass. It thus offers a promising direction for further work on model editing.
## 8 Conclusion
In the problem setting of predicting world capitals, our results show that the ability of language models (LMs) to overwrite information that they memorized in pretraining depends both on the frequency of the subject of the new fact (the country, e.g., Poland), as well as on the frequency of the overwriting information (the counterfactual city, e.g., London). We can intervene on attention heads that we find tend to push the prediction one way or another. By simply rescaling the value vectors of important heads, we can control which city the model predicts without updating any model parameters. We hope these results encourage future work in understanding the internal structure of neural networks in general.
Table 1: Top tokens decoded from the first five singular vectors of the memory head and the in-context head (§6.4).

**Memory Head (15.7)**
- 'LW', 'Wade', 'WT', 'liche', 'ienne', 'ell', 'owe', 'iale', 'uelle', 'e Té'
- 'Italian', 'Italian', 'Italy', 'Italian', 'Idaï', 'Idaï', 'Giovanni', 'pasta', 'Io', 'Giul', 'Naples'
- 'WA', 'WS', 'WA', 'owa', 'ws', 'wa', 'Ws', 'Wa', 'pora', 'WI'
- 'WM', 'WM', 'wm', 'mw', 'w', 'nw', 'MW', 'Minnesota', 'WN'
- 'Guatemala', 'Guatemala', 'Guatemala', 'usta', 'osta', 'Tampa', 'Brazil', 'ativa', 'Bah', 'Tamil', 'Brazil'

**In Context Head (19.4)**
- ',', ',', zero-width space, '*', '·', ... (mostly punctuation and whitespace tokens)
- 'ilogy', 'vex', ... (word fragments; the remaining entries are not recoverable from the extracted source)
### Limitations
Our work aims to show that individual components in LMs can play predictable roles in certain model behaviors; in this case, whether or not to overwrite memorized information about world capitals. Further work is required to understand how to control the use of context or memorized information in generated text for this to be successfully applied in the most general cases. The dataset we use is templated and applied to the limited domain of country-capital relationships, meaning that we cannot make general statements about the role of individual attention heads in arbitrary contexts. It is likely, given the flexibility of LMs, that many different components can play this role depending on the nature of the task. This work contributes to the growing body of evidence that individual components (e.g., attention heads) _can_ specialize for certain roles across contexts. We cannot yet show how to control this behavior in arbitrary settings, but we provide a promising avenue for how this might be done in the future.
## 9 Acknowledgments
We thank Catherine Chen, William Rudman, Charlie Lovering, Apoorv Khandelwal, Michael Lepori, Louis Castricato, Samuel Musker, Aaron Traylor, Tian Yun for discussion and comments on this work. We thank Daniel Wong for providing hardware support during writing.
## Ethics Statement
Our work provides early indicators of future tools that could aid in making models safer and more trustworthy. Insights like those described could potentially lead to methods for better predicting how language models might behave in certain settings, preventing models from generating personal information learned in pretraining (preventing access to some memorized information), or the opposite, preventing prompt injections from affecting model behavior (preventing access to certain context information). Although we do not observe the quality of generated text changing substantially in our limited setting, future work is needed to better understand how manipulating the 'intensity' of model components, especially those which affect the recall of pretraining information, can alter model behavior or make it easier to extract memorized text containing personal information.
|
2307.11161 | Weak universality, quantum many-body scars and anomalous
infinite-temperature autocorrelations in a one-dimensional spin model with
duality | We study a one-dimensional spin-$1/2$ model with three-spin interactions and
a transverse magnetic field $h$. The model has a $Z_2 \times Z_2$ symmetry, and
a duality between $h$ and $1/h$. The self-dual point at $h=1$ is a quantum
critical point with a continuous phase transition. We compute the critical
exponents $z$, $\beta$, $\gamma$ and $\nu$, and the central charge $c$
numerically using exact diagonalization (ED) for systems with periodic boundary
conditions. We find that both $z$ and $c$ are equal to $1$, implying that the
critical point is governed by a conformal field theory. The values obtained for
$\beta/\nu$, $\gamma/\nu$, and $\nu$ from ED suggest that the model exhibits
Ashkin-Teller criticality with an effective coupling that is intermediate
between the four-state Potts model and two decoupled transverse field Ising
models. An analysis on larger systems but with open boundaries using
density-matrix renormalization group calculations, however, shows that the
self-dual point may be in the same universality class as the four-state Potts
model. An energy level spacing analysis shows that the model is not integrable.
For a system with periodic boundary conditions, there are an exponentially
large number of exact mid-spectrum zero-energy eigenstates. A subset of these
eigenstates have wave functions which are independent of $h$ and have unusual
entanglement structure, suggesting that they are quantum many-body scars. The
number of such states scales at least linearly with system size. Finally, we
study the infinite-temperature autocorrelation functions close to one end of an
open system. We find that some of the autocorrelators relax anomalously in
time, with pronounced oscillations and very small decay rates if $h \gg 1$ or
$h \ll 1$. If $h$ is close to the critical point, the autocorrelators decay
quickly to zero except for an autocorrelator at the end site. | Adithi Udupa, Samudra Sur, Sourav Nandy, Arnab Sen, Diptiman Sen | 2023-07-20T18:00:05Z | http://arxiv.org/abs/2307.11161v4 | Weak universality, quantum many-body scars and anomalous infinite-temperature autocorrelations in a one-dimensional spin model with duality
###### Abstract
We study a one-dimensional spin-1/2 model with three-spin interactions and a transverse magnetic field \(h\). The model is known to have a \(Z_{2}\times Z_{2}\) symmetry, and a duality between \(h\) and \(1/h\). The self-dual point of \(h=1\) is a quantum critical point with a continuous phase transition. We compute the critical exponents \(z\), \(\beta\), \(\gamma\) and \(\nu\), and the central charge \(c\) numerically using exact diagonalization. We find that both \(z\) and \(c\) are equal to \(1\), implying that the critical point is governed by a conformal field theory with a marginal operator. The three-spin model exhibits Ashkin-Teller criticality with an effective coupling that is intermediate between four-state Potts model and two decoupled transverse field Ising models. An energy level spacing analysis shows that the model is not integrable. For a system with an even number of sites and periodic boundary conditions, there are exact mid-spectrum zero-energy eigenstates whose number grows exponentially with the system size. A subset of these eigenstates have wave functions which are independent of the value of \(h\) and have unusual entanglement structure; hence these can be considered to be quantum many-body scars. The number of such quantum scars scales at least linearly with system size. Finally, we study the infinite-temperature autocorrelation functions at sites close to one end of an open system. We find that some of the autocorrelators relax anomalously in time, with pronounced oscillations and very small decay rates if \(h\gg 1\) or \(h\ll 1\). If \(h\) is close to the critical point, the autocorrelators decay quickly to zero except for an autocorrelator at the end site.
## I Introduction
The well-known transverse field Ising model (TFIM) in one dimension has been studied extensively over many years [1; 2; 3]. The Hamiltonian of the model consists of two-spin interactions (with strength set equal to 1) and a transverse magnetic field with strength \(h\),
\[H_{2}\ =\ -\ \sum_{j=1}^{L}\ [\sigma_{j}^{z}\sigma_{j+1}^{z}\ +\ h\ \sigma_{j}^{x}], \tag{1}\]
where \(\sigma_{j}^{a}\) denote the Pauli matrices at site \(j\) corresponding to a spin-1/2 degree of freedom, and we are considering a system with \(L\) sites and periodic boundary conditions (PBC). The model has a \(Z_{2}\) symmetry since an operator \(D=\prod_{j=1}^{L}\sigma_{j}^{x}\) commutes with the Hamiltonian. The model is known to have a quantum phase transition at a critical point given by \(h=1\). It has an ordered phase for \(h<1\) with a finite magnetization (the \(Z_{2}\) symmetry is spontaneously broken in this phase), and a disordered phase for \(h>1\) with zero magnetization. It also exhibits duality [4; 5] and the self-dual point \(h=1\) is the quantum critical point. The critical point is known to be described by a conformal field theory with \(c=1/2\) and certain critical exponents which are known analytically [6].
Generalizations of the TFIM with \(p\)-spin interactions with duality have been studied using mean-field theories and perturbative calculations [7; 8; 9; 10; 11; 12], with the TFIM corresponding to the case \(p=2\). It is of particular interest to take a close look at what happens in the next simplest case, \(p=3\), where the order of the phase transition has been debated in the literature. We study this case numerically using exact diagonalization (ED) and look at the quantum criticality in this system at the self-dual point, which is again given by \(h=1\). Another motivation for studying the case of \(p=3\) is that a Hamiltonian of this form may be engineered using optical lattices either with two atomic species [13] or with polar molecules driven by microwave fields [14]. We note that for the model with \(p=4\), it is not clear whether the transition at the self-dual point is first-order or continuous, while models with \(p\geq 5\) are believed to have a first-order transition at the self-dual point [8; 9; 10].
The three-spin (\(p=3\)) model is a candidate for interesting high-energy behavior as well. For an even number of spins and periodic boundary conditions (PBC), this model satisfies an index theorem [15] that results in the presence of an exponentially large number (in system size) of exact mid-spectrum zero energy eigenstates. Since these states are degenerate in energy, any linear combination of these is also an eigenstate of the system. Recent works [16; 17; 18; 19; 20] have shown that this freedom allows for the possibility of creating mid-spectrum eigenstates which violate the eigenstate thermalization hypothesis (ETH) by possessing very low entanglement entropy compared to the expected thermal entropy. These eigenstates can be classified as quantum many-body scars [21; 22; 23; 24; 25; 26; 27; 28]. It would be interesting to see if the three-spin model hosts such scar states in the middle of the energy spectrum. Finally, we would like to examine if infinite-temperature autocorrelation functions in open chains show anomalous behaviors as a function of time for this model. A motivation to do so is provided by the observation of infinite (long) coherence times for boundary spins for the TFIM without (with) integrability-breaking perturbations due to the presence of a strong zero mode (an almost strong zero mode) that commutes (almost commutes) with the Hamiltonian [29; 30; 31; 32; 33]. While the TFIM can be mapped to free fermions by the standard Jordan-Wigner transformations, the perturbed TFIM has additional four-fermion interactions. It is not known if the three-spin model has analogous (almost) strong zero modes. A study of the autocorrelators near the ends of a long system may possibly shed light on this.
The plan of this paper is as follows. In Sec. II we present
the Hamiltonian of the model with three-spin interactions and its symmetries. We find that the model has a \(Z_{2}\times Z_{2}\) symmetry which leads to some degeneracies in the energy spectrum of a system with PBC. In Sec. III we discuss the duality of the model. While the duality is easy to show for an infinite-sized system, we discover that the existence of a duality is a subtle issue for finite-sized systems with PBC. In Sec. IV we make a detailed study of the criticality properties of the model at the self-dual point given by \(h=1\). Finite-size scaling is used to first confirm that there is a critical point at \(h=1\) and then to compute the dynamical critical exponent \(z\), the order parameter exponent \(\beta\), the magnetic susceptibility exponent \(\gamma\), and the correlation length exponent \(\nu\). We find that \(z=1\) suggesting that the low-energy sector of the model at \(h=1\) has conformal invariance. We then determine the central charge \(c\) and find that it is close to 1. Next, we observe that although the values of \(\beta\), \(\gamma\) and \(\nu\) for the two-spin and three-spin models are different from each other, the ratios \(\beta/\nu\) and \(\gamma/\nu\) are the same in the two models. This suggests that there is a weak universality [34] and the three-spin model lies on the Ashkin-Teller (AT) line, just like two copies of the TFIM and the four-state Potts model. Using the numerically computed value of \(\nu\) for the three-spin model, we estimate the location of this model on the AT line of critical points.
In Sec. V, we study the energy level spacing statistics to determine if the three-spin model is integrable. We find that the level spacing statistics has the form of the Gaussian orthogonal ensemble, and hence the model is non-integrable. Next, we find that the model has an exponentially large number of mid-spectrum zero-energy eigenstates. Further, we find that the zero-energy eigenstates are of two types which we call Type-I and Type-II. The Type-I states are simultaneous zero-energy eigenstates of the two parts of the Hamiltonian (the three-spin interaction and the transverse field) and consequently stay unchanged as a function of \(h\), thus violating the ETH. Hence they qualify as quantum many-body scars. We give exact expressions for a subset of these Type-I states in terms of _emergent_ singlets and triplets which shows that their number increases at least linearly with system size. In Sec. VI, we study the infinite-temperature autocorrelation function at sites close to one end of a large system and in the bulk with open boundary conditions; the purpose of this study is to understand if there are any states which can be interpreted as the end modes of a semi-infinite system. We find that far from the critical point, at either \(h\ll 1\) or \(h\gg 1\), some of the autocorrelators show an anomalous behavior in that they oscillate and also decay very slowly with time. We provide a qualitative understanding of the oscillatory behavior using perturbation theory. For values of \(h\) close to the critical point, the infinite-temperature autocorrelators decay quickly to zero except for a particular autocorrelator at the end site. In Sec. VII we summarize our main results and point out some directions for future research.
We would like to mention here that several other one-dimensional models with multispin interactions have been studied over the years, and they show a wide variety of unusual features [35; 36; 37]. Our work makes a contribution to this interesting area of research.
## II The model and its symmetries
The Hamiltonian of the three-spin model is given by [7; 8; 9; 10; 11; 12]
\[H_{3} = - \sum_{j=1}^{L}\ [\sigma_{j}^{z}\sigma_{j+1}^{z}\sigma_{j+2}^{z}\ +\ h\ \sigma_{j}^{x}], \tag{2}\]
where \(\sigma_{j}^{a}\) (where \(a=x,y,z\)) denotes the Pauli matrices at site \(j\), and we assume PBC so that \(\sigma_{L+1}^{a}=\sigma_{1}^{a}\) and \(\sigma_{L+2}^{a}=\sigma_{2}^{a}\).
Unlike the case of the TFIM, we do not have a \(Z_{2}\) symmetry in this model. However, we have three operators \(D_{1},D_{2}\) and \(D_{3}\) which commute with the Hamiltonian \(H_{3}\) in Eq. (2). If the system size \(L\) is a multiple of 3, we can divide the lattice into three sublattices \(A\), \(B\) and \(C\) as shown in Fig. 1. The three operators for this system are then defined as
\[D_{1} = \Pi_{j=1}^{L/3}\sigma_{x}^{A_{j}}\sigma_{x}^{B_{j}},\] \[D_{2} = \Pi_{j=1}^{L/3}\sigma_{x}^{B_{j}}\sigma_{x}^{C_{j}},\] \[D_{3} = \Pi_{j=1}^{L/3}\sigma_{x}^{C_{j}}\sigma_{x}^{A_{j}}. \tag{3}\]
These satisfy the constraint \(D_{1}D_{2}D_{3}=I\). Thus we have four decoupled sectors corresponding to the different allowed values of these operators: \((D_{1},D_{2},D_{3})=(1,1,1),(1,-1,-1),(-1,1,-1)\) and \((-1,-1,1)\). Hence this model has a \(Z_{2}\times Z_{2}\) symmetry. All four sectors have an equal number of states. We also notice that the operator \(C=\Pi_{j=1}^{L}\sigma_{j}^{y}\) anticommutes with the Hamiltonian. Hence for every state \(\ket{\psi}\) with energy \(E\), there is a state \(C\ket{\psi}\) with energy \(-E\), due to which the spectrum of this model has an \(E\rightarrow-E\) symmetry.
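These statements are easy to verify numerically for a small chain. The following dense exact-diagonalization sketch (an illustrative snippet, not the code used for the results in this paper) builds \(H_{3}\) and the operators \(D_{1}\), \(D_{2}\), \(D_{3}\) and \(C\) for \(L=6\), and checks the constraint, the commutators, and the anticommutator.

```python
import numpy as np
from functools import reduce

sx = np.array([[0., 1.], [1., 0.]])
sy = np.array([[0., -1j], [1j, 0.]])
sz = np.array([[1., 0.], [0., -1.]])
I2 = np.eye(2)

L, h = 6, 0.7                     # L must be a multiple of 3 for the D_i to be defined

def op(sites):
    """Tensor product with the given single-site operators, identity on the other sites."""
    return reduce(np.kron, [sites.get(j, I2) for j in range(L)])

H = sum(-op({j: sz, (j + 1) % L: sz, (j + 2) % L: sz}) - h * op({j: sx}) for j in range(L))

# sublattice A: sites 0,3,..., B: 1,4,..., C: 2,5,... (0-indexed)
D1 = reduce(np.matmul, [op({j: sx}) for j in range(L) if j % 3 in (0, 1)])
D2 = reduce(np.matmul, [op({j: sx}) for j in range(L) if j % 3 in (1, 2)])
D3 = reduce(np.matmul, [op({j: sx}) for j in range(L) if j % 3 in (2, 0)])
C = reduce(np.matmul, [op({j: sy}) for j in range(L)])

assert np.allclose(D1 @ D2 @ D3, np.eye(2 ** L))
assert all(np.allclose(D @ H, H @ D) for D in (D1, D2, D3))   # [D_i, H] = 0
assert np.allclose(C @ H, -H @ C)                             # {C, H} = 0  ->  E -> -E symmetry
print("symmetry checks passed")
```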
With PBC the system also has translation symmetry. If the translation operator is given by \(U\), then we can see from Eq. (3) that
\[UD_{1}U^{-1} = D_{2},\] \[UD_{2}U^{-1} = D_{3},\] \[UD_{3}U^{-1} = D_{1}. \tag{4}\]
We can further see that a combination of these three operators \(D^{\prime}=D_{1}+\omega D_{2}+\omega^{2}D_{3}\) where \(\omega\) is the cube root of unity, transforms into \(e^{-i2\pi/3}(D_{1}+\omega D_{2}+\omega^{2}D_{3})\) upon translation by one site. This is because \(U(D_{1}+\omega D_{2}+\omega^{2}D_{3})U^{-1}=\omega^{2}(D_{1}+\omega D_{2}+ \omega^{2}D_{3})\). This means that for a state \(\ket{\psi_{k}}\) with momentum \(k\), that is, \(U\ket{\psi_{k}}=e^{ik}\ket{\psi_{k}}\), we have a state \((D_{1}+\omega D_{2}+\omega^{2}D_{3})\ket{\psi_{k}}=e^{-i2\pi/3}e^{ik}\ket{ \psi_{k}}=\ket{\psi_{k-2\pi/3}}\) with momentum \(k-2\pi/3\). Similarly, we have a state \((D_{1}+\omega^{-1}D_{2}+\omega^{-2}D_{3})\ket{\psi_{k}}\) for which the momentum
Figure 1: The lattice for the Hamiltonian \(H_{3}\) with the three sublattices A, B, and C is shown here. The symmetry operators are defined with respect to these sublattices in Eq. (3).
is \(k+2\pi/3\). Since the \(D\) operators commute with the Hamiltonian, the states \(|\psi_{k}\rangle,|\psi_{k-2\pi/3}\rangle\) and \(|\psi_{k+2\pi/3}\rangle\) are degenerate. However in the sector \((D_{1},D_{2},D_{3})=(1,1,1)\), the operators \(D_{1}+\omega D_{2}+\omega^{2}D_{3}\) and \(D_{1}+\omega^{2}D_{2}+\omega D_{3}\) give zero when they act on a state \(|\psi_{k}\rangle\). Therefore the states belonging to this sector do not have a degenerate partner. Thus in the entire spectrum, three-fourths of the states have an exact three-fold degeneracy whereas the other one-fourth belonging to the sector \((1,1,1)\) has no degeneracy. We also have a parity symmetry in this system. For an even system size, we can define parity as a mirror reflection about the middle bond. The parity operator then takes the operator \(D_{1}\to D_{2}\) and \(D_{2}\to D_{1}\) and keeps \(D_{3}\) unchanged. Thus, for a system with open boundary conditions which breaks translation symmetry, we can still have degeneracies coming from parity symmetry. These come from the states in sectors \((D_{1},D_{2},D_{3})=(1,-1,-1)\) and \((-1,1,-1)\) as they go to a different sector under parity.
## III Duality of the model
Just like the TFIM, the three-spin model also exhibits duality on an _infinitely large_ system. We show this by starting from the original lattice with sites labeled by an integer \(j\) which goes from \(-\infty\) to \(+\infty\). Then the sites of the dual lattice also lie at \(j\). (This is in contrast to the TFIM where the sites of the dual lattice lie at \(j+1/2\)). The transformation of the Pauli matrices going from the original lattice \(\sigma^{a}_{j}\) to the dual lattice \(\tilde{\sigma}^{a}_{j}\) is given by
\[\tilde{\sigma}^{x}_{j+1}=\sigma^{z}_{j}\sigma^{z}_{j+1}\sigma^{z} _{j+2},\] \[\tilde{\sigma}^{z}_{j-1}\tilde{\sigma}^{z}_{j}\tilde{\sigma}^{z}_ {j+1}=\sigma^{x}_{j}. \tag{5}\]
The Hamiltonian on the dual lattice then takes the form
\[\tilde{H}_{3}=-\sum_{j=-\infty}^{\infty}\ [\tilde{\sigma}^{x}_{j+1}\ +\ h\ \tilde{\sigma}^{z}_{j-1}\tilde{\sigma}^{z}_{j}\tilde{\sigma}^{z}_{j+1}]. \tag{6}\]
Thus going from \(H_{3}\) to \(\tilde{H}_{3}\), the transverse field \(h\) gets mapped to \(1/h\). The self-dual point lies at \(h=1/h\). Hence, if \(H_{3}\) (or \(\tilde{H}_{3}\)) has a phase transition it must occur at \(|h|=1\).
We will now examine if duality also holds for a _finite_ system with PBC as described in Eq. (2). Clearly, we would like both the original and dual lattices to have the same number of sites, \(L\), and the number of states should be \(2^{L}\) in both cases. The latter can only happen if the Pauli operators are independent operators on different sites on both the lattices. The first equation in Eq. (II) and the fact that \((\sigma^{z}_{j})^{2}=1\) for all \(j\) imply that
\[\tilde{\sigma}^{x}_{1}\ \tilde{\sigma}^{x}_{2}\ \tilde{\sigma}^{x}_{4}\ \tilde{\sigma}^{x}_{5}\ \cdots\ \tilde{\sigma}^{x}_{L-2}\ \tilde{\sigma}^{x}_{L-1} = I,\] \[{\rm and}\quad\tilde{\sigma}^{x}_{2}\ \tilde{\sigma}^{x}_{3}\ \tilde{\sigma}^{x}_{5}\ \tilde{\sigma}^{x}_{6}\ \cdots\ \tilde{\sigma}^{x}_{L-1}\ \tilde{\sigma}^{x}_{L} = I \tag{7}\]
if \(L\) is a multiple of 3. Hence there are two constraints on the \(\tilde{\sigma}^{x}_{j}\) operators, implying that the eigenvalues of the operators cannot take all possible values independently of each other. To put it differently, the two constraints mean that the number of states in the dual system is \(2^{L-2}\) rather than \(2^{L}\). We reach a similar conclusion for the original system by using the second equation in Eq. (5). We therefore conclude that duality does not hold for a finite system with PBC if \(L\) is a multiple of 3. It turns out that duality does hold if \(L\) is _not_ a multiple of 3 as the Pauli operators do not satisfy any constraints on either the original lattice or the dual lattice in that case. (Note, however, that the operators \(D_{j}\) defined in Sec. II do not exist if \(L\) is not a multiple of 3). Next, duality implies that there must be a unitary operator \(U_{D}\) which relates the states of the original and dual lattices. Let us write the Hamiltonian in Eq. (2) in the form
\[H_{3} = -\ Z\ -\ h\ X,\] \[{\rm where}\ \ Z = \sum_{j=1}^{L}\ \sigma^{z}_{j}\sigma^{z}_{j+1}\sigma^{z}_{j+2},\] \[X = \sum_{j=1}^{L}\ \sigma^{x}_{j}, \tag{8}\]
and similarly
\[\tilde{H}_{3}\ =\ -\ \tilde{X}\ -\ h\ \tilde{Z}. \tag{9}\]
Then there must be a unitary operator \(U_{D}\) such that \(U_{D}XU_{D}^{-1}=\tilde{Z}\) and \(U_{D}ZU_{D}^{-1}=\tilde{X}\). This means that at the self-dual point \(h=1\), if \(|\psi_{n}\rangle\) is an eigenstate of \(H_{3}\) with eigenvalue \(E_{n}\), and \(|\tilde{\psi}_{n}\rangle=U_{D}|\psi\rangle\) is an eigenstate of \(\tilde{H}_{3}\) with the same eigenvalue, we must have
\[\langle\psi_{n}|X|\psi_{n}\rangle\ =\ \langle\tilde{\psi}_{n}|\tilde{Z}|\tilde{ \psi}_{n}\rangle\ =\ \langle\psi_{n}|Z|\psi_{n}\rangle, \tag{10}\]
where the equality \(\langle\tilde{\psi}_{n}|\tilde{Z}|\tilde{\psi}_{n}\rangle=\langle\psi_{n}|Z| \psi_{n}\rangle\) is a consequence of self-duality. Since \(\langle\psi_{n}|(-X-Z)|\psi_{n}\rangle=E_{n}\), Eq. (10) implies that
\[\langle\psi_{n}|X|\psi_{n}\rangle\ =\ -\ \frac{E_{n}}{2}. \tag{11}\]
at \(h=1\). A test of this relation will be discussed in Appendix A.
Before ending this section, we note that it is not useful to perform a Jordan-Wigner transformation from spin-1/2's to spinless fermions for this model because there are three-spin terms in the Hamiltonian. The Jordan-Wigner transformation maps \(\sigma^{x}_{j}\) to the occupation number \(c^{\dagger}_{j}c_{j}\) of fermions at site \(j\), and \(\sigma^{z}_{j}\) to \(c_{j}+c^{\dagger}_{j}\) times a string of \(\sigma^{x}_{n}\) operators running from \(n=-\infty\) to \(j-1\) (for an infinitely large system). The presence of the three-spin term \(\sigma^{z}_{j}\sigma^{z}_{j+1}\sigma^{z}_{j+2}\) in the Hamiltonian implies there will be an infinitely long string of \(\sigma^{x}_{n}\) operators left over which does not cancel with anything. Thus this model cannot be solved by fermionizing since the fermionic Hamiltonian will have highly non-local terms. We will henceforth analyze the model numerically. In the next section, we will carry out ED calculations to confirm the location of the critical point of the quantum phase transition and to extract the critical exponents.
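Before moving on, Eq. (11) can be probed numerically along the lines of the Appendix A test mentioned above. The sketch below (dense ED for a small \(L\) that is deliberately not a multiple of 3, so that the finite-size duality argument applies) computes \(\langle\psi_{n}|X|\psi_{n}\rangle+E_{n}/2\) at \(h=1\); degenerate multiplets are skipped, since an arbitrary numerical basis inside a multiplet need not satisfy the relation state by state. This is an illustrative check, not the calculation reported in Appendix A.

```python
import numpy as np
from functools import reduce

sx = np.array([[0., 1.], [1., 0.]])
sz = np.array([[1., 0.], [0., -1.]])
I2 = np.eye(2)

def op(L, sites):
    return reduce(np.kron, [sites.get(j, I2) for j in range(L)])

L, h = 8, 1.0                      # L not a multiple of 3, PBC, self-dual point
Z = sum(op(L, {j: sz, (j + 1) % L: sz, (j + 2) % L: sz}) for j in range(L))
X = sum(op(L, {j: sx}) for j in range(L))
H = -Z - h * X

E, V = np.linalg.eigh(H)
expX = np.einsum("in,ij,jn->n", V.conj(), X, V).real      # <psi_n| X |psi_n>

# keep only levels that are well separated from all other levels
isolated = np.array([np.abs(np.delete(E, n) - E[n]).min() > 1e-8 for n in range(E.size)])
print("max deviation from -E_n/2:", np.abs(expX[isolated] + E[isolated] / 2).max())
```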
## IV Quantum criticality of the model
We will now study the three-spin model numerically to understand the nature of the phase transition at \(h=1\) and the critical properties. We will use ED to obtain the ground state and low-lying excitations and then compute various thermodynamic quantities like the magnetization and magnetic susceptibility to study the criticality.
### Energy levels
We use ED to compute the first few energy levels for the Hamiltonian in Eq. (2). The first three excited energy levels with respect to the ground state energy are plotted in Fig. 2. We first notice that the phase transition happens close to \(|h|=1\). In the region \(|h|>1\), the system is gapped, with a finite energy difference between the ground state and the first excited state. The first three excited states are exactly degenerate due to the symmetries \(D_{1},D_{2},D_{3}\) of the model (see Sec. II) with eigenvalues \((D_{1},D_{2},D_{3})=(1,-1,-1),(-1,1,-1)\) and \((-1,-1,1)\). The ground state is unique and belongs to the sector \((D_{1},D_{2},D_{3})=(1,1,1)\). In the region \(|h|<1\), the ground state becomes degenerate with the three-fold degenerate states as the system size approaches infinity. For finite-sized systems, there is a small gap in the region \(|h|<1\). The gap varies with \(h\) and falls off exponentially with the system size; for \(h=0.4\) and \(L=15\), the gap is of the order of \(10^{-4}\).
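A minimal sparse-ED sketch of this computation (a smaller \(L\) than used for the figures, purely illustrative): it builds \(H_{3}\) with PBC and prints the lowest few excitation gaps for representative values of \(h\), so that the degeneracy pattern discussed above can be inspected.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh
from functools import reduce

sx = sp.csr_matrix([[0., 1.], [1., 0.]])
sz = sp.csr_matrix([[1., 0.], [0., -1.]])
I2 = sp.identity(2, format="csr")

def op(L, sites):
    """Sparse tensor product with the given single-site operators, identity elsewhere."""
    return reduce(sp.kron, [sites.get(j, I2) for j in range(L)]).tocsr()

def H3(L, h):
    return sum(-op(L, {j: sz, (j + 1) % L: sz, (j + 2) % L: sz}) - h * op(L, {j: sx})
               for j in range(L))

L = 12
for h in (0.4, 1.0, 2.0):
    E = np.sort(eigsh(H3(L, h), k=4, which="SA", return_eigenvectors=False))
    print(h, E - E[0])          # gaps of the lowest excited states above the ground state
```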
### Finite-size scaling
To understand the nature of the phase transition in this model in comparison to the TFIM which has two-spin interactions, we look at the behaviors of different quantities close to the critical point. Close to the critical point, any singular quantity, \(\Theta\), will have an asymptotic behavior of the form [38]
\[\Theta\sim|h-h_{c}|^{-\theta}, \tag{12}\]
where \(\theta\) is the critical exponent of the quantity \(\Theta\). In addition, continuous phase transitions have a correlation length scale \(\xi\) which diverges close to the critical point as \(\xi\sim|h-h_{c}|^{-\nu}\), where \(\nu\) is the critical exponent corresponding to the correlation length. This implies that \(\Theta\sim\xi^{\theta/\nu}\). At the critical point, the correlation length diverges. However, for finite system sizes, we are limited by the system size \(L\). Hence, when the correlation length exceeds the system size, the quantity will vary with \(L\) depending on the ratio \(L/\xi\), and the above relation gets modified to
\[\Theta\sim\xi^{\theta/\nu}\Theta_{0}(L/\xi), \tag{13}\]
where \(\Theta_{0}(L/\xi)\) is a scaling function with
\[\Theta_{0}(L/\xi)=\begin{cases}\text{constant}&\text{for}\quad L\gg\xi\\ (L/\xi)^{\theta/\nu}&\text{for}\quad L\ll\xi.\end{cases}\]
Thus at the critical point when \(\xi\gg L\), we find that \(\Theta\) scales as [38]
\[\Theta|_{h_{c}}\sim L^{\theta/\nu}. \tag{14}\]
By evaluating \(\Theta\) for different system sizes we can calculate the critical exponent \(\theta/\nu\) once we know the exact location of the critical point \(h_{c}\).
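In practice this amounts to a straight-line fit on a log-log plot. A tiny illustration with synthetic numbers (stand-ins for quantities that would be measured at \(h_{c}\) from the ED runs):

```python
import numpy as np

# synthetic stand-in data: replace with values of Theta measured at h = h_c via ED
L = np.array([12., 15., 18., 21., 24., 27.])
Theta = 3.1 * L ** (-1.0)                      # e.g., a gap closing as L^{-z} with z = 1

slope = np.polyfit(np.log(L), np.log(Theta), 1)[0]
print("theta/nu =", slope)                     # cf. Eq. (14): slope = theta/nu
```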
### Numerical determination of critical point
The ground state fidelity is one of the preliminary ways to detect a quantum phase transition. The fidelity is defined as \(\mathcal{F}(h,\delta h)=|\langle\psi_{0}(h-\delta h/2)\ |\ \psi_{0}(h+\delta h/2)\rangle|\), where \(\psi_{0}(h\pm\delta h/2)\) is the ground state of the Hamiltonian with parameter \(h\pm\delta h/2\), and \(\delta h\) is a small but fixed number. The fidelity is expected to show a pronounced deviation from unity in the neighborhood of a phase transition.
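A minimal dense-ED sketch of this diagnostic for a small chain (much smaller than the system sizes used for the figures; illustrative only):

```python
import numpy as np
from functools import reduce

sx = np.array([[0., 1.], [1., 0.]]); sz = np.array([[1., 0.], [0., -1.]]); I2 = np.eye(2)

def op(L, sites):
    return reduce(np.kron, [sites.get(j, I2) for j in range(L)])

def ground_state(L, h):
    H = sum(-op(L, {j: sz, (j + 1) % L: sz, (j + 2) % L: sz}) - h * op(L, {j: sx})
            for j in range(L))
    return np.linalg.eigh(H)[1][:, 0]

def fidelity(L, h, dh=0.005):
    return abs(np.vdot(ground_state(L, h - dh / 2), ground_state(L, h + dh / 2)))

for h in (0.8, 1.0, 1.2):
    print(h, fidelity(9, h))       # the dip near h = 1 signals the transition
```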
In Fig. 3 we show the variation of the fidelity \(\mathcal{F}(h,\delta h)\) as a function of the transverse field \(h\) for different system sizes \(L=9,12,15,18,21,24\) for a fixed \(\delta h=0.005\). We see a dip close to \(h=1\) for all system sizes confirming that a phase transition occurs at this point. As the system size increases, the magnitude of the dip increases and the location of the dip approaches the predicted value \(h=1\). For the largest system size here \(L=24\), we find that the minimum occurs at \(h_{c}=0.9960\). We also note that the location of the minimum, \(h_{c}(L)\), obtained from the fidelity scales as \(h_{c}(L)=1+aL^{-2/\nu}\), while the value of the fidelity susceptibility \(\chi_{F}(h_{c}=1)=-\partial^{2}\mathcal{F}(h_{c},\delta h)/\partial^{2}( \delta h)|_{\delta h\to 0}\) at \(h_{c}=1\) scales as \(bL^{2/\nu}\), where \(a,b\) are constants and \(\nu\) is the correlation length exponent yielding \(\nu\approx 0.71\) (see Sec. IV.1 for further discussion).
Figure 2: First three energy levels as measured from the ground state energy plotted as a function of the transverse field \(h\) for \(L=15\). We see some degeneracies which arise from the \(Z_{2}\times Z_{2}\) symmetry.
### Dynamical Critical Exponent \(z\)
The smallest energy gap in the system at finite sizes (Fig. 2) can be used to estimate the dynamical critical exponent \(z\). As we approach the critical point, the energy difference between the first excited state and the ground state, \(\Delta\), behaves as
\[\Delta\ \sim\ |h-h_{c}|^{z\nu}. \tag{15}\]
Given the exponent \(z\nu\), Eqs. (12) and (14) imply that
\[\Delta|_{h_{c}}\ \sim\ L^{-z}. \tag{16}\]
We evaluate \(\Delta\) by performing ED for various system sizes \(L=12,15,21,24\) and \(27\) in the neighborhood of the critical point. Fig. 4 (a) shows the variation of \(\Delta\) with \(h\) for different system sizes. At \(h=h_{c}\) we plot a log-log graph of \(\Delta|_{h_{c}}\) versus \(L\) (inset of Fig. 4 (a)), and fit it linearly to obtain the slope. We find that \(z=1.0267\pm 0.0014\) indicating that \(z=1\) at criticality.
### Calculation of central charge \(c\)
Since the critical exponent \(z=1\) for this model, the low-lying excitations at the critical point have a linear dispersion making the system Lorentz invariant with some velocity \(v\) which will be discussed below. Thus the model can be described by a \(1+1\)-dimensional conformal field theory characterized by a central charge \(c\) [6]. In such a theory, the von Neumann entanglement entropy of the system can be used to extract the central charge \(c\). If the system is divided into two subsystems A and B, the von Neumann entanglement entropy between the two systems is given by
\[S_{A}\ =\ -\text{Tr}_{A}(\rho_{A}\text{log}\rho_{A}), \tag{17}\]
where \(\rho_{A}\) is the reduced density matrix of the subsystem A obtained by tracing out the states in B from the density matrix of the ground state: \(\rho_{A}=Tr_{B}\ket{\psi_{GS}}\bra{\psi_{GS}}\). For a finite system size \(L\) with PBC, if we divide the system into two subsystems with sizes \(l\) and \(L-l\), the von Neumann entanglement entropy for the subsystem \(l\) is found to be [39]
\[S(l)\ =\ \frac{c}{3}\ \log[g(l)]\ +\ c^{\prime}, \tag{18}\]
where \(g(l)=(L/\pi)\sin(\pi l/L)\), and \(c^{\prime}\) is a constant. For our model, we take \(L=27\) and calculate \(S(l)\) for different subsystems \(l\), plot \(S(l)\) (Fig. 4 (b)) as a function of \(\log[g(l)]\), and fit it linearly. The central charge \(c\) is three times the slope obtained from this fit which gives \(c=1.0644\pm 0.0072\).
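For concreteness, a small sketch (ours) of how \(S(l)\) can be evaluated from the ED ground state via an SVD and fitted to Eq. (18); the dense diagonalization at \(L=12\) used here is slow but straightforward, and the chosen size is an assumption made only for illustration.

```python
# Sketch: subsystem entanglement entropy S(l) of the ground state at h_c = 1,
# fitted to S(l) = (c/3) log[(L/pi) sin(pi l / L)] + c'.
import numpy as np
from functools import reduce

sx = np.array([[0., 1.], [1., 0.]])
sz = np.array([[1., 0.], [0., -1.]])
I2 = np.eye(2)

def site_op(ops, L):
    return reduce(np.kron, [ops.get(j, I2) for j in range(L)])

def h3(L, h):
    H = np.zeros((2**L, 2**L))
    for j in range(L):
        H -= site_op({j: sz, (j + 1) % L: sz, (j + 2) % L: sz}, L)
        H -= h * site_op({j: sx}, L)
    return H

def entropy(psi, l, L):
    # von Neumann entropy of the first l sites, via SVD of the reshaped state
    m = psi.reshape(2**l, 2**(L - l))
    p = np.linalg.svd(m, compute_uv=False)**2
    p = p[p > 1e-12]
    return -np.sum(p * np.log(p))

L = 12                                  # dense ED; takes a little while
vals, vecs = np.linalg.eigh(h3(L, 1.0))
psi0 = vecs[:, 0]
ls = np.arange(1, L)
S = np.array([entropy(psi0, l, L) for l in ls])
x = np.log((L / np.pi) * np.sin(np.pi * ls / L))
slope, const = np.polyfit(x, S, 1)
print("central charge estimate c =", 3 * slope)
```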
We can use another method to calculate \(c\). The ground state energy of a finite-sized system is found to show the following dependence on the system size \(L\)[39],
\[E_{GS}=\alpha L-\frac{\pi vc}{6L}, \tag{19}\]
where \(\alpha\) is a non-universal constant equal to the ground state energy per site in the thermodynamic limit [40], \(v\) is the velocity of the gapless excitations at the critical point which can be obtained from the dispersion, and \(c\) is the central charge. We first calculate the velocity by plotting the dispersion for \(L=27\) as shown in the inset of Fig. 4 (c). As discussed earlier, the dispersion varies periodically with the momentum with a period equal to \(2\pi/3\). Fitting the inset in Fig. 4 (c) with a function of the form \(E=a\sin(bk)+d\) gives \(a=2.2893\pm 0.0134\) and \(b=1.5012\pm 0.0015\) (the value of \(b\) is consistent with a period of \(2\pi/3\)). Thus the velocity in the linear region near \(k=0\) is \(v=ab=3.4367\pm 0.0236\). The plot of \(E_{GS}/L\) versus \(1/L^{2}\) shown in Fig. 4 (c) has a slope equal to \(-\pi vc/6\). Putting all this together, we get the value of \(c\) for this model to be \(c=0.9585\pm 0.0015\). Thus both the methods give an estimate of \(c\) which is close to \(1\). A value of \(c=1\) suggests the possibility of a marginal operator at the critical point [41] of the three-spin model, and hence weak universality. To investigate this further, we proceed to compute the other critical exponents of this system: \(\beta\) related to the order parameter, \(\gamma\) to the magnetic susceptibility, and \(\nu\) to the correlation length.
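A sketch of this second estimate of \(c\), under the assumption that the velocity \(v\) is supplied externally (here we simply plug in a value close to the one quoted above); the sizes and the fixed value of \(v\) are illustrative assumptions, not the paper's procedure.

```python
# Sketch: estimate c from E_GS(L) = alpha*L - pi*v*c/(6L) at h_c = 1, i.e. the
# slope of E_GS/L versus 1/L^2 equals -pi*v*c/6.
import numpy as np
from functools import reduce

sx = np.array([[0., 1.], [1., 0.]])
sz = np.array([[1., 0.], [0., -1.]])
I2 = np.eye(2)

def site_op(ops, L):
    return reduce(np.kron, [ops.get(j, I2) for j in range(L)])

def h3(L, h):
    H = np.zeros((2**L, 2**L))
    for j in range(L):
        H -= site_op({j: sz, (j + 1) % L: sz, (j + 2) % L: sz}, L)
        H -= h * site_op({j: sx}, L)
    return H

v = 3.44                                  # assumed input: velocity from the dispersion fit
sizes = np.array([6, 9, 12])
e_per_site = np.array([np.linalg.eigvalsh(h3(L, 1.0))[0] / L for L in sizes])
slope, alpha = np.polyfit(1.0 / sizes**2, e_per_site, 1)
print("central charge estimate c =", -6 * slope / (np.pi * v))
```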
### Order parameter exponent \(\beta\)
We now study the order parameter in this model. Given the three-spin form of the interaction, we define a symmetric order parameter as follows. As described earlier, the lattice
Figure 3: Fidelity as a function of the transverse field \(h\) (for a fixed \(\delta h=0.005\)) is plotted for different system sizes \(L=9,12,15,18,21\) and \(24\). There is a dip in the fidelity close to the expected critical point \(h=1\).
has three sublattices \(A,B\) and \(C\). We define three quantities
\[m_{A} =\frac{3}{L}\ \sum_{n=1}^{L/3}\ \sigma_{3n-2}^{z},\] \[m_{B} =\frac{3}{L}\ \sum_{n=1}^{L/3}\ \sigma_{3n-1}^{z},\] \[m_{C} =\frac{3}{L}\ \sum_{n=1}^{L/3}\ \sigma_{3n}^{z}, \tag{20}\]
and an order parameter
\[m=\sqrt{\langle m_{A}^{2}\rangle+\langle m_{B}^{2}\rangle+\langle m_{C}^{2} \rangle}. \tag{21}\]
For numerical clarity, it would be worthwhile to note here that for finite-size systems, the ground state expectation values \(\langle m_{a}\rangle\) are equal to zero for \(a=A,B,C\) even for \(h<h_{c}\). This is due to the fact that the ground state is four-fold degenerate (in the infinite size limit), and the ground state obtained from ED is a linear combination of these four states making the expectation values exactly equal to zero. To bypass this problem we have first evaluated \(\langle m_{a}^{2}\rangle\) and then taken the square root of their sum, as in Eq. (21). The behavior of \(m\) for our model as a function of the transverse field \(h\) is shown in Fig. 5 (a). It begins to drop to zero as we approach \(h_{c}\). Close to the critical point, we have
\[m\sim|h-h_{c}|^{\beta}. \tag{22}\]
From the finite-size scaling of magnetization, we have
\[\mathcal{M}_{z}\sim L^{\beta/\nu}, \tag{23}\]
where \(\mathcal{M}_{z}=m|_{h_{c}}\). The log-log graph for \(\mathcal{M}_{z}\) versus \(L\) is shown in the inset of Fig. 5 (a); from this we find that \(\beta/\nu=0.1291\pm 0.0018\). This ratio is close to the value of \(\beta/\nu=1/8\) found for the TFIM (two-spin model) where it is analytically known that \(\beta=1/8=0.125\) and \(\nu=1\).
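A minimal sketch (ours) of how the sublattice order parameter and its finite-size scaling at \(h_{c}\) can be evaluated with dense ED; the function names and the small sizes are assumptions made for illustration only.

```python
# Sketch: order parameter m = sqrt(<m_A^2> + <m_B^2> + <m_C^2>) in the ground
# state (squares are taken first, as described in the text), and the log-log
# slope of m|_{h_c} versus L, whose magnitude gives beta/nu.
import numpy as np
from functools import reduce

sx = np.array([[0., 1.], [1., 0.]])
sz = np.array([[1., 0.], [0., -1.]])
I2 = np.eye(2)

def site_op(ops, L):
    return reduce(np.kron, [ops.get(j, I2) for j in range(L)])

def h3(L, h):
    H = np.zeros((2**L, 2**L))
    for j in range(L):
        H -= site_op({j: sz, (j + 1) % L: sz, (j + 2) % L: sz}, L)
        H -= h * site_op({j: sx}, L)
    return H

def order_parameter(L, h):
    vals, vecs = np.linalg.eigh(h3(L, h))
    psi = vecs[:, 0]
    m2 = 0.0
    for a in range(3):                                # sublattices A, B, C
        Ma = (3.0 / L) * sum(site_op({j: sz}, L) for j in range(a, L, 3))
        m2 += psi @ (Ma @ (Ma @ psi))
    return np.sqrt(m2)

sizes = [6, 9]
mz = [order_parameter(L, 1.0) for L in sizes]
slope, _ = np.polyfit(np.log(sizes), np.log(mz), 1)
print("beta/nu estimate =", -slope)      # m decreases with L at h_c
```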
### Magnetic susceptibility exponent \(\gamma\)
We now compute the magnetic susceptibility \(\chi\). For this calculation, we add a longitudinal field to the system so that the Hamiltonian becomes
\[H = -\ \sum_{j=1}^{L}\ [\sigma_{j}^{z}\sigma_{j+1}^{z}\sigma_{j+2}^{z}\ +\ h\ \sigma_{j}^{x}\ +\ h_{z}\ \sigma_{j}^{z}], \tag{24}\]
where \(h_{z}\) is the longitudinal field in the system.
The magnetic susceptibility is defined as[42]
\[\chi=\frac{\partial\langle M_{h_{c}}\rangle}{\partial h_{z}}|_{h_{z}\to 0}, \tag{25}\]
where \(\langle M_{h_{c}}\rangle\) is computed as follows. We first define \(M=\frac{1}{L}\sum_{i=1}^{L}\sigma_{i}^{z}\) and evaluate its expectation value in the ground state as a function of the transverse and longitudinal fields \(h\) and \(h_{z}\). It will be non-zero due to the presence of the longitudinal field. At the critical point \(h_{c}=1\) we take the derivative of \(M_{h_{c}}\) with respect to \(h_{z}\) and find its value in the limit \(h_{z}\to 0\). The magnetic susceptibility as a function of the transverse field \(h\) is shown in Fig. 5 (b).
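A sketch (ours) of this finite-difference evaluation of \(\chi\); the step size \(h_{z}=10^{-3}\) and the small system sizes are illustrative assumptions.

```python
# Sketch: chi = d<M>/dh_z at h_z -> 0 via a symmetric finite difference, after
# adding the longitudinal-field term -h_z sum_j sz_j to the Hamiltonian.
import numpy as np
from functools import reduce

sx = np.array([[0., 1.], [1., 0.]])
sz = np.array([[1., 0.], [0., -1.]])
I2 = np.eye(2)

def site_op(ops, L):
    return reduce(np.kron, [ops.get(j, I2) for j in range(L)])

def h3_long(L, h, hz):
    # H = -sum_j (sz_j sz_{j+1} sz_{j+2} + h sx_j + hz sz_j), PBC
    H = np.zeros((2**L, 2**L))
    for j in range(L):
        H -= site_op({j: sz, (j + 1) % L: sz, (j + 2) % L: sz}, L)
        H -= h * site_op({j: sx}, L)
        H -= hz * site_op({j: sz}, L)
    return H

def magnetization(L, h, hz):
    vals, vecs = np.linalg.eigh(h3_long(L, h, hz))
    psi = vecs[:, 0]
    M = sum(site_op({j: sz}, L) for j in range(L)) / L
    return psi @ (M @ psi)

def chi(L, h, hz=1e-3):
    return (magnetization(L, h, hz) - magnetization(L, h, -hz)) / (2 * hz)

for L in [6, 9]:
    print(L, chi(L, h=1.0))
```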
For different system sizes at the critical point we have the quantity \(\chi_{0}=\chi|_{h_{c}}\) which, from finite-size scaling, behaves as
\[\chi_{0}\sim L^{\gamma/\nu}, \tag{26}\]
Figure 4: (a) Plot of the smallest gap, \(\Delta\), as a function of \(h\) for different system sizes. The inset shows a log-log plot of \(\Delta|_{h_{c}}\) versus \(L\); fitting it to a straight line gives the dynamical exponent \(z=1.0267\pm 0.0014\). (b) Plot of the ground state entanglement entropy versus the logarithm of \(g(l)=(L/\pi)\sin(l\pi/L)\), where \(l\) is the size of one of the subsystems, and \(L=27\) at the critical coupling \(h_{c}=1\). The slope of the graph is \(c/3\) which gives \(c=1.0644\). (c) From Eq. (19), the variation of the ground state energy with the system size \(L\) at \(h_{c}=1\) gives an estimate for \(c\): the plot of \(E_{GS}/L\) versus \(1/L^{2}\) has a slope equal to \(-\pi vc/6\). We find that \(c=0.9585\). The inset shows the velocity estimate of the gapless excitations which is calculated by fitting the function \(E(k)=a\sin(bk)+d\). For \(L=27\), we find \(a=2.2893\) and \(b=1.5012\), giving the velocity \(v=ab=3.4367\).
where \(\gamma\) is the exponent corresponding to susceptibility. This is estimated by plotting a log-log graph of \(\chi_{0}\) versus \(L\) as shown in the inset of Fig. 5 (b). The ratio of the exponents \(\gamma/\nu\) comes out to be \(1.7876\pm 0.0034\) for this model, which is again close to the value of \(\gamma/\nu=7/4\) known for the TFIM where \(\gamma=7/4=1.75\) and \(\nu=1\).
### Correlation length exponent \(\nu\)
To evaluate the correlation length exponent, we return to Eq. (13). Reorganising the equations using the relation, \(\tilde{\Theta}(y)=y^{\theta}\Theta_{0}(y^{\nu})\), we get
\[\Theta\sim L^{\theta/\nu}\tilde{\Theta}(L^{1/\nu}|h-h_{c}|). \tag{27}\]
For a given singular quantity \(\Theta\), by plotting \(\Theta(h,L)L^{-\theta/\nu}\) versus \(L^{1/\nu}(h-h_{c})\) for a range of values of the exponents \(\theta\) and \(\nu\), we find that the data will collapse onto a single curve \(\tilde{\Theta}\) when the exponents are closest to their correct values. This finite-size data collapse method can thus be used to determine the critical exponents. In our case, in order to determine \(\nu\), we choose three thermodynamic quantities, namely, the energy gap \(\Delta\), the order parameter \(m\) and the fidelity susceptibility \(\chi_{F}\). The given thermodynamic quantity \(\Theta\) times \(L^{-\theta/\nu}\) is Taylor expanded up to second order as a function of \(L^{\tau/\nu}(h-h_{c})\) giving the relation
\[\Theta(h,L)L^{-\theta/\nu}=a_{0}+a_{1}(L^{\tau/\nu}(h-h_{c}))+a_{2}(L^{\tau/\nu}(h-h_{c}))^{2}, \tag{28}\]
where the value of \(\tau\) is 1 for \(\Delta\) and \(m\), and is 2 for \(\chi_{F}\)[43]. The choice of \(\tau=2\) for \(\chi_{F}\) requires some explanation. From the numerical results for \(\chi_{F}\) for finite \(L\) for the TFIM, we see that while \(\chi_{F}(h_{c}=1)\) scales as \(L^{2/\nu}\) for the TFIM with \(\nu=1\) as expected, the location of maximum of \(\chi_{F}\) does not scale as \(h_{c}(L)-1\propto L^{-1/\nu}\) but instead as \(h_{c}(L)-1\propto L^{-2/\nu}\). This latter scaling is also consistent with exact results for the fidelity susceptibility for the TFIM [44]. The numerical results for \(\chi_{F}\) for the three-spin model indicates exactly the same trend where both the peak height and the deviation of the peak location of \(\chi_{F}\) from \(h_{c}=1\) scales as \(L^{2/\nu}\) and \(L^{-2/\nu}\) respectively with \(\nu\approx 0.71\). We believe that this unusual scaling of the peak location, which forces us to use \(\tau=2\) instead of \(1\) in Eq. (28) for the scaling collapse of \(\chi_{F}\), is related to the self-dual nature of the critical points in both the models.
We then fit our data to the function in Eq. (28) and extract the values of \(a_{0},a_{1},a_{2}\) and \(\nu\) by minimizing \(\chi^{2}\) for the fit. Furthermore, to reduce the number of fitting parameters we take \(h_{c}=1\) and fix the rescaling exponent to \(z=1\) for \(\Delta\), to \(\beta/\nu=1/8\) for \(m\), and to \(2/\nu\) for \(\chi_{F}\). As shown in Fig. 6, we find the estimated values of \(\nu\) to be 0.82, 0.826 and 0.709 from the data collapse of \(\Delta\), \(m\) and \(\chi_{F}\) respectively. Among the three panels, we see that the data collapse is best for the energy gap \(\Delta\) (Fig. 6 (a)), where the data for all the available system sizes from ED fall very nearly on top of each other. However, the scaling collapse is not as good for the order parameter and the fidelity susceptibility, and the systematic deviations in Figs. 6 (b) and (c) indicate much stronger finite-size corrections in these quantities compared to \(\Delta\). Thus we choose the value \(\nu\approx 0.82\) based on the data collapse of the energy gap \(\Delta\) for the three-spin model. From this analysis, we see that the critical point of the three-spin model is different from that of the TFIM (where \(\nu=1\)) even though the values of \(\beta/\nu\) and \(\gamma/\nu\) seem identical.
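A sketch (ours) of one simple way to perform such a collapse for the gap, scanning \(\nu\) on a grid and picking the value with the smallest fit residual; the grid, the \(h\)-window and the small system sizes are illustrative assumptions, and \(h_{c}\) and \(z\) are held fixed as in the text.

```python
# Sketch of a finite-size data-collapse fit of Eq. (28) for the gap Delta:
# rescale the data to x = L^{1/nu}(h - h_c), y = Delta * L^z (z = 1 fixed),
# fit a quadratic in x, and pick the nu with the smallest residual.
import numpy as np
from functools import reduce

sx = np.array([[0., 1.], [1., 0.]])
sz = np.array([[1., 0.], [0., -1.]])
I2 = np.eye(2)

def site_op(ops, L):
    return reduce(np.kron, [ops.get(j, I2) for j in range(L)])

def h3(L, h):
    H = np.zeros((2**L, 2**L))
    for j in range(L):
        H -= site_op({j: sz, (j + 1) % L: sz, (j + 2) % L: sz}, L)
        H -= h * site_op({j: sx}, L)
    return H

def gap(L, h):
    vals = np.linalg.eigvalsh(h3(L, h))
    return vals[1] - vals[0]

hc, z = 1.0, 1.0
sizes = [6, 9, 12]                      # small sizes for illustration (slow at L = 12)
hs = np.linspace(0.9, 1.1, 9)
data = {L: np.array([gap(L, h) for h in hs]) for L in sizes}

best = None
for nu in np.linspace(0.5, 1.2, 71):
    x = np.concatenate([(L**(1 / nu)) * (hs - hc) for L in sizes])
    y = np.concatenate([data[L] * L**z for L in sizes])
    coeffs = np.polyfit(x, y, 2)
    res = np.sum((np.polyval(coeffs, x) - y)**2)
    if best is None or res < best[1]:
        best = (nu, res)
print("nu from gap collapse ~", best[0])
```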
### Binder cumulant
Another quantity that shows that the critical behaviour of the three-spin model is different to that of the TFIM is the Binder cumulant which is defined as [45; 46; 47]
\[U_{2}=C+D\frac{\langle m^{4}\rangle}{\langle m^{2}\rangle^{2}}, \tag{29}\]
Figure 5: (a) Plot of the order parameter defined in Eq. (21) versus \(h\) for different system sizes. We see that it has a finite value for \(h<1\) and falls off as \(h>1\). The log-log plot of this quantity at the critical point with maximum system size \(L=27\) gives a slope of \(\beta/\nu\) close to \(0.129\). (b) Plot of the magnetic susceptibility for this model as a function of \(h\). At the critical point, it scales with the system size with an exponent \(\gamma\). From the log-log plot shown in the inset, we find that \(\gamma/\nu=1.7876\).
where the order parameter \(m\) and the normalization constants \(C,D\) are defined appropriately for a given model so that \(U_{2}\) has the values 0 and 1 in the thermodynamic limit in the disordered and the ordered phase respectively. For the two-spin TFIM we have \(m^{2}=(\frac{1}{L}\sum_{i}^{L}\sigma_{i}^{z})^{2}\) with \(C=3/2\) and \(D=-1/2\). For our three spin model the order parameter is defined as in the Eq. (21) with \(C=5/2\) and \(D=-3/2\)[48]. Furthermore, \(\langle O\rangle\) in Eq. (29) denotes \(\langle\psi_{0}|O|\psi_{0}\rangle\) where \(|\psi_{0}\rangle\) equals the ground state at a finite size \(L\) and \(O\) equals either \(m^{2}\) or \(m^{4}\). We plot the Binder cumulant for the ground state of \(H_{3}\) in Fig. 7 as a function of the transverse field \(h\). As expected they cross close to the critical point for different system sizes. More interestingly, for our model, there is a negative dip in \(U_{2}\) close to the critical point for \(L\geq 18\). The dip increases in magnitude as we go to larger system sizes, however it does not increase faster than \(L\); thus the phase transition close to \(h_{c}\) is still continuous in nature [49; 50; 51]. However this is starkly different from the monotonic behavior of \(U_{2}\) for the two-spin case as can be seen in the inset of Fig. 7.
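A minimal sketch (ours) of the Binder cumulant for the three-spin model with the normalization constants quoted above; the moments are taken with respect to the operator \(m^{2}=m_{A}^{2}+m_{B}^{2}+m_{C}^{2}\) in the finite-size ground state, and the chosen size is an illustrative assumption.

```python
# Sketch: Binder cumulant U_2 = C + D <m^4>/<m^2>^2 with (C, D) = (5/2, -3/2),
# where m^2 = m_A^2 + m_B^2 + m_C^2 is built from the sublattice magnetizations.
import numpy as np
from functools import reduce

sx = np.array([[0., 1.], [1., 0.]])
sz = np.array([[1., 0.], [0., -1.]])
I2 = np.eye(2)

def site_op(ops, L):
    return reduce(np.kron, [ops.get(j, I2) for j in range(L)])

def h3(L, h):
    H = np.zeros((2**L, 2**L))
    for j in range(L):
        H -= site_op({j: sz, (j + 1) % L: sz, (j + 2) % L: sz}, L)
        H -= h * site_op({j: sx}, L)
    return H

def binder(L, h, C=2.5, D=-1.5):
    vals, vecs = np.linalg.eigh(h3(L, h))
    psi = vecs[:, 0]
    m2op = np.zeros((2**L, 2**L))
    for a in range(3):                                 # sublattices A, B, C
        Ma = (3.0 / L) * sum(site_op({j: sz}, L) for j in range(a, L, 3))
        m2op += Ma @ Ma
    m2 = psi @ (m2op @ psi)
    m4 = psi @ (m2op @ (m2op @ psi))
    return C + D * m4 / m2**2

for h in [0.5, 1.0, 1.5]:
    print(h, binder(9, h))
```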
### Comparison with transverse field Ising model, hyperscaling, and quantum Ashkin-Teller model
We have repeated the numerical analysis for the TFIM (two-spin Ising model) using ED for system sizes \(L=8,10,12,14,16,18,20\) and \(22\). In that case our calculations give \(z=1.0026\), \(c\approx 0.50\), \(\beta/\nu=0.1337\) and \(\gamma/\nu=1.7936\). It is useful to note that the fidelity susceptibility for the TFIM does not yield a good scaling collapse while the smallest energy gap, \(\Delta\), leads to the best scaling collapse near the critical point, just like the three-spin model. For the three-spin model we found above that \(z=1.02\), \(\beta/\nu=0.129\) and \(\gamma/\nu=1.789\) with the data from system sizes \(L=9,12,15,18,21,24,27\). We can see that the values of the ratios of critical exponents \(\beta/\nu\) and \(\gamma/\nu\) are very close to each other for the two models. However the correlation length critical exponent \(\nu\) is \(1\) for the two-spin model (TFIM) and close to \(0.8\) for the three-spin model. Since all the exponents and the central charge value conform with the theoretical values from analytical and numerical calculations for the two-spin Ising model [52; 38], we expect that the exponents obtained by the same methods for the three-spin model are also reliable. The estimated values for the two models are tabulated and compared in Table 1. Furthermore, we check for the validity of the hyper-scaling relation for our model. The hyperscaling relation is given by [38]
\[2\beta+\gamma\ =\ \nu(d+z), \tag{30}\]
where \(d\) is the space dimensionality of the system (\(d=1\) in our case). Since \(d\), \(z\), \(\beta/\nu\) and \(\gamma/\nu\) are the same for the two-spin and three-spin models, our model also satisfies the hyperscaling relation.
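As a quick numerical check using only the measured ratios (dividing Eq. (30) by \(\nu\) gives \(2\beta/\nu+\gamma/\nu=d+z\), which is insensitive to the different values of \(\nu\) in the two models), the numbers quoted above for the three-spin model give
\[2\,\frac{\beta}{\nu}\ +\ \frac{\gamma}{\nu}\ \approx\ 2(0.129)+1.788\ =\ 2.046\ \approx\ d+z\ =\ 1+1.027\ =\ 2.027,\]
so the relation is satisfied to within about one percent.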
Figure 7: Plot of Binder cumulant \(U_{2}\) defined in Eq. (29) as a function of the field \(h\) for the three-spin model. The plots for different system sizes cross each other close to \(h_{c}\). We observe a negative dip close to \(h_{c}\), the magnitude of which increases with the system size. This is in contrast to the TFIM where the Binder cumulant is a monotonic function as shown in the inset.
Figure 6: (a) The scaling collapse of the data from the energy gap with \(\Delta L^{z}\) versus \(L^{1/\nu}(h-h_{c})\) is shown for different system sizes. We see that this quantity gives a good data collapse, with the plots of different system sizes falling on top of each other for the estimated value of \(\nu=0.82\) obtained by fitting the function in Eq. (28). (b) The scaling collapse using the data of the order parameter for different system sizes is shown here. We plot \(mL^{-\beta/\nu}\) versus \(L^{1/\nu}(h-h_{c})\) and obtain the best fit for the polynomial function given with \(\nu=0.826\). (c) The scaling collapse for the fidelity susceptibility has the scaling relation \(\chi_{F}L^{-2/\nu}=f(L^{2/\nu}(h-h_{c}))\), and the data gives the best estimate of \(\nu\) to be 0.709.
Since the central charge \(c=1\) for the three-spin model and the ratios of the critical exponents \(\beta/\nu\) and \(\gamma/\nu\) are essentially identical to those of the TFIM, this strongly suggests that the critical behavior of the three-spin model belongs to the class of \(1+1\)-dimensional models with \(z=1\) and \(c=1\) described by the AT model [53]. The AT model constructed on a lattice has two spin-\(1/2\) degrees of freedom on each site, denoted by \(\sigma\) and \(\tau\). These operators are coupled by a parameter \(\lambda\). The Hamiltonian for the quantum AT model is given by [54]
\[H_{AT} = -\,h\,\sum_{j=1}^{L}\,\,(\sigma_{j}^{x}+\tau_{j}^{x}+\lambda\sigma_{j}^{x}\tau_{j}^{x})\ -\ \sum_{j=1}^{L}\,\,(\sigma_{j}^{z}\sigma_{j+1}^{z}+\tau_{j}^{z}\tau_{j+1}^{z}+\lambda\sigma_{j}^{z}\sigma_{j+1}^{z}\tau_{j}^{z}\tau_{j+1}^{z}). \tag{31}\]
This model is known to exhibit weak universality, namely, the ratios of the exponents \(\beta/\nu=1/8\) and \(\gamma/\nu=7/4\) are independent of \(\lambda\) but the values of the exponents individually depend on \(\lambda\). The limit \(\lambda=0\) reduces the AT model to two decoupled TFIMs, thus giving \(c=1\). For this case we know that \(\nu=1\). In the other limit of \(\lambda=1\), we get the four-state Potts model [55], with the critical exponent \(\nu=2/3\). We thus see that our three-spin model \(H_{3}\) also shows this weak universality since \(c=1\), and \(\beta/\nu\) and \(\gamma/\nu\) are very close to \(1/8\) and \(7/4\). However, \(\nu\) is different from that of the TFIM. Since the value of \(\nu=0.82\) for the three-spin model, it must lie somewhere in between two copies of the TFIM and the four-state Potts model. To find the value of \(\lambda\) for which the three-spin model would get mapped to the AT model, we would have to study the AT model as a function of \(\lambda\). However, since the number of degrees of freedom is doubled, we can go only up to system sizes \(L=13\) using ED, and thus cannot rely on those numerical results. An analytical study using the real-space renormalization group gives a relation between the critical exponent \(\nu\) and \(\lambda\) as [56]
\[\nu\,\,=\,\,\frac{1}{2\,-\,(\frac{\pi}{2})\,[\arccos(-\lambda)]^{-1}}. \tag{32}\]
Substituting \(\nu=0.82\) gives an effective value of \(\lambda=0.43\) for the critical point of the three-spin model. We would have to perform numerical calculations using the density-matrix renormalization group or quantum Monte Carlo methods to numerically establish the value of \(\lambda\) more precisely. This would be an interesting problem for future studies.
## V Presence of quantum many-body scars
### Non-integrability of the model
We now show that the three-spin model is different from the TFIM in that while the latter model is well-known to be integrable, the former seems to be non-integrable. A common diagnostic to test integrability is to study the energy level spacing statistics. In this section, we will study the level spacing statistics for \(H_{3}\) and show that the model is non-integrable. If the spectrum of energies is sorted in increasing order so that \(E_{n}\) is the \(n\)-th energy level, then we define the level spacing as [57; 58]
\[s_{n}=E_{n+1}-E_{n}. \tag{33}\]
The distribution of \(s\), called \(P(s)\), gives a way of testing the integrability of the system. The system is integrable if \(P(s)\) is Poisson-like and is non-integrable if \(P(s)\) has a Wigner-Dyson distribution. However, for many-body systems with a non-constant density of states, a new quantity proposed by Oganesyan and Huse [59] is more useful and reliable.
\begin{table}
\begin{tabular}{||c|c|c|c||} \hline Exponent & Method used & Three-spin & Two-spin \\ \hline \(z\) & \(\Delta\) scaling with \(L\) at \(h_{c}\) & 1.0267 (14) & 1.0026 (3) \\ \hline \(\beta\) & \(m\) scaling with \(L\) at \(h_{c}\) & 0.1018 (23) & 0.1337 (64) \\ \hline \(\gamma\) & \(\chi\) scaling with \(L\) at \(h_{c}\) & 1.4102 (43) & 1.7936 (20) \\ \hline \(\nu\) & Data Collapse (\(m\)) & 0.8261 (189) & 1.0019 (62) \\ & Data Collapse (\(\Delta\)) & 0.8202 (55) & 1.0609 (14) \\ & Data Collapse (\(\chi_{F}\)) & 0.7092 (4) & 0.9604 (6) \\ \hline \(c\) & EE Scaling at \(h_{c}\) & 1.0644 (72) & 0.5096 (13) \\ & Energy scaling at \(h_{c}\) & 0.9585 (15) & 0.5034 (68) \\ \hline \end{tabular}
\end{table}
Table 1: Numerical estimates of the critical exponents and the central charge for the three-spin and two-spin Ising models in a transverse field. Here EE stands for entanglement entropy. The error bars shown are obtained from the fitting procedures as discussed in the text.
Figure 8: The distribution of \(\tilde{r}\) defined in Eq. (34) is plotted for system size \(L=18\) with open boundary conditions in the sector \((D_{1},D_{2},D_{3})=(1,-1,-1)\) that contains \(65536\) eigenstates. Further the expected distribution derived for a GOE \(P(\tilde{r})\) is shown in red. We see that they agree quite well. The average of \(\tilde{r}\) also turns out to be close to \(0.53\) as expected for GOE.
This quantity \(\tilde{r}\) is defined as follows
\[\tilde{r}=\frac{\min(s_{n},s_{n-1})}{\max(s_{n},s_{n-1})}. \tag{34}\]
Since \(\tilde{r}\) involves the ratio of energy spacings, the advantage of evaluating \(\tilde{r}\) is that it is independent of the local density of states. The definition in Eq. (34) implies that it is restricted to lie in the range \(0\) to \(1\). The average value of \(\tilde{r}\) turns out to be \(0.34\) for integrable models but close to \(0.53\) for non-integrable models governed by the Wigner-Dyson Gaussian orthogonal ensemble (GOE). For our model, we evaluate \(\tilde{r}\) in a particular sector \((D_{1},D_{2},D_{3})=(1,-1,-1)\) and with open boundary conditions to eliminate degeneracies due to any residual global symmetries. For \(L=18\), we obtain the value of the average of \(\tilde{r}\) to be \(0.533\). We further see that the numerical data fits very well to the Wigner-Dyson distribution of \(P(\tilde{r})\) given by [58]
\[P(\tilde{r})=\frac{27}{4}\frac{r+r^{2}}{(1+r+r^{2})^{5/2}}\Theta(1-r), \tag{35}\]
where \(r_{n}=s_{n}/s_{n-1}\) and \(\Theta(x)\) is the usual theta function. In Fig. 8, we see that the distribution given in Eq. (35) matches very well with the numerical data. This establishes that the three-spin model is non-integrable. Given that it is non-integrable, we find some of the energy eigenstates with zero energy have an interesting feature as will be discussed in the section below.
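For reference, a small sketch (ours) of the \(\tilde{r}\) statistic applied to a list of eigenvalues; in practice the spectrum must first be restricted to a single symmetry sector, as described above, before this is meaningful. The random-matrix sanity check at the end is purely illustrative.

```python
# Sketch of the r-tilde statistic of Eq. (34): from sorted eigenvalues of one
# symmetry sector, compute consecutive spacings and min/max ratios.
import numpy as np

def r_tilde_average(energies):
    e = np.sort(np.asarray(energies))
    s = np.diff(e)
    s = s[s > 1e-12]                    # drop exact degeneracies, if any remain
    r = np.minimum(s[1:], s[:-1]) / np.maximum(s[1:], s[:-1])
    return r.mean()

# Sanity check: a GOE random matrix should give an average close to 0.53,
# while an uncorrelated (Poisson-like) spectrum gives a noticeably smaller value.
rng = np.random.default_rng(0)
A = rng.normal(size=(2000, 2000))
print("GOE-like:", r_tilde_average(np.linalg.eigvalsh((A + A.T) / 2)))
print("Poisson-like:", r_tilde_average(np.cumsum(rng.exponential(size=2000))))
```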
### Zero-energy states
An interesting property of the three-spin model is that for even system sizes with PBC, we find a large number of states with \(E=0\). These are mid-spectrum states since we have an \(E\rightarrow-E\) symmetry of the energy levels. We find that the number of zero-energy states increases with system size at least as fast as \(2^{L/2}\). We can prove this using an index theorem [15]. Writing the Hamiltonian \(H_{3}\) in the \(\sigma^{y}\) basis, we find that it can be made to have only off-diagonal blocks when the states are divided into two sectors as follows. The spin states for a system of size \(L\) are divided into (i) states in which the number of sites with \(\sigma^{y}=+1\) is even, labeled as \(N_{\uparrow,\text{even}}\), and (ii) states in which the number of sites with \(\sigma^{y}=+1\) is odd, labeled as \(N_{\uparrow,\text{odd}}\). This is because for states in \(N_{\uparrow,\text{even}}\), the first term in \(H_{3}\) will flip spins on three sites in the state and the second term will flip one spin, both giving a state with an odd number of up spins, thus connecting to the sector \(N_{\uparrow,\text{odd}}\). The index theorem states that the number of zero-energy states in the system is equal to or greater than the absolute value of the difference in the number of states in each sector, thus giving a lower bound on the number of zero-energy states. In this case, however, we find that \(|N_{\uparrow,\text{even}}-N_{\uparrow,\text{odd}}|=0\). However, we see that the parity operator can be used to further divide these two sectors into states with \(P=\pm 1\). Since \(L\) is even, we define parity as reflection about the middle of the \((\frac{L}{2})\)-th and \((\frac{L}{2}+1)\)-th sites and find the number of states with parity \(P=\pm 1\) in the two sectors \(N_{\uparrow,\text{even}}\) and \(N_{\uparrow,\text{odd}}\). Let \(n_{1}\) be the number of states with \((P=1,N_{\uparrow,\text{even}})\), \(n_{2}\) with \((P=1,N_{\uparrow,\text{odd}})\), \(n_{3}\) with \((P=-1,N_{\uparrow,\text{even}})\), and \(n_{4}\) with \((P=-1,N_{\uparrow,\text{odd}})\). We know the following relations between \(n_{1}\), \(n_{2}\), \(n_{3}\) and \(n_{4}\).
\[n_{1}+n_{2}+n_{3}+n_{4} =2^{L},\] \[n_{1}+n_{3}=n_{2}+n_{4} =2^{L-1}. \tag{36}\]
Next, given the spin configuration in one of the states, we can see that there are two possibilities. For a system with \(L\) sites, the configuration on sites \(1\) to \(L/2\) can be either (i) different from or (ii) the same as the configuration on sites \(L\) down to \((L/2)+1\). Examples of this
Figure 9: (a) Plot of logarithm of total number of zero-energy states \(N_{E=0}\) versus the system size \(L\). For all \(L\), we see that it is greater than \(2^{L/2}\) which is a bound given by an index theorem. (b) Plot of the total number \(N_{1}\) of Type-I states versus \(L\). This number also generally increases with system size although not monotonically.
for \(L=6\) are as follows. For the first type, an example of such a configuration is a state like \(\ket{\psi}=\ket{\uparrow\uparrow\uparrow\downarrow\uparrow\downarrow}\), for which we see that the spins on sites \(1\) to \(3\) are not the same as those on sites \(6\) down to \(4\). For such states we can take the superpositions \(\ket{\psi}+P\ket{\psi}\) and \(\ket{\psi}-P\ket{\psi}\), which are eigenstates of \(P\) with eigenvalues \(+1\) and \(-1\) respectively. These two come in equal numbers for all such \(\ket{\psi}\). An example of the second type of configuration is \(\ket{\uparrow\uparrow\downarrow\downarrow\uparrow\uparrow}\), where the reflection about the midpoint has the same configuration on either side. Such states are therefore eigenvectors of the parity operator with eigenvalue \(+1\), i.e., \(P\ket{\psi}=\ket{\psi}\). We also note that such states have to belong to the sector \(N_{\uparrow,\text{even}}\) since the total number of up-pointing spins is always twice the number in the first half of the lattice. From this, we can conclude that the total number of such states of the second type is equal to the difference between the number of \(N_{\uparrow,\text{even}}\) states with \(P=1\) and \(P=-1\). It is also equal to the number of ways of selecting the configuration on sites \(1\) to \(L/2\), since the other half is then fixed by mirror reflection. This gives \(2^{L/2}\), which leads to the relation
\[n_{1}-n_{3}=2^{L/2}. \tag{37}\]
Turning to the sector \(N_{\uparrow,\text{odd}}\), we see that no state can have \(P\ket{\psi}=\pm\ket{\psi}\). The combination \(\ket{\psi}+P\ket{\psi}\) and \(\ket{\psi}-P\ket{\psi}\) again gives equal number of states with eigenvalues \(\pm 1\). This further implies that
\[n_{2}=n_{4}. \tag{38}\]
From Eqs. (36), (37) and (38), we have the following expressions for the numbers of states in the four sectors,
\[n_{1} =\frac{1}{2}(2^{L-1}+2^{L/2}),\] \[n_{2} =\frac{1}{2}2^{L-1},\] \[n_{3} =\frac{1}{2}(2^{L-1}-2^{L/2}),\] \[n_{4} =\frac{1}{2}2^{L-1}. \tag{39}\]
Thus considering the parity sector \(P=+1\), we see that a lower bound for the number of zero-energy states is given by \(|n_{1}-n_{2}|=\frac{1}{2}2^{L/2}\), and similarly for \(P=-1\), we have \(|n_{3}-n_{4}|=\frac{1}{2}2^{L/2}\). Adding these up we see that the total number of zero-energy states for this system must satisfy \(N_{E=0}\geq 2^{L/2}\). We plot the total number of zero-energy states as a function of \(L\) in Fig. 9 (a). We indeed see that the number is greater than the lower bound of \(2^{L/2}\) for all values of \(L\).
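The counting in Eq. (39) and the resulting bound can be verified by brute force for small even \(L\); the following sketch (ours) enumerates the \(\sigma^{y}\)-basis product configurations and classifies them exactly as in the argument above.

```python
# Brute-force check of the sector counting used in the index-theorem argument:
# classify the 2^L basis configurations by reflection symmetry about the chain
# midpoint and by the parity of the number of up spins (even L, sigma^y basis).
from itertools import product

def sector_counts(L):
    n_sym = 0            # mirror-symmetric configurations (necessarily even # up for even L)
    n_even = n_odd = 0   # configurations split by parity of the number of up spins
    for cfg in product([0, 1], repeat=L):
        if sum(cfg) % 2 == 0:
            n_even += 1
            if cfg == cfg[::-1]:
                n_sym += 1
        else:
            n_odd += 1
    # non-symmetric configurations pair into P = +1 and P = -1 combinations,
    # symmetric ones are P = +1 eigenstates by themselves
    n1 = (n_even - n_sym) // 2 + n_sym   # (P = +1, even)
    n3 = (n_even - n_sym) // 2           # (P = -1, even)
    n2 = n4 = n_odd // 2                 # (P = +/-1, odd)
    return n1, n2, n3, n4

for L in [4, 6, 8, 10]:
    n1, n2, n3, n4 = sector_counts(L)
    bound = abs(n1 - n2) + abs(n3 - n4)
    print(L, (n1, n2, n3, n4), "zero-mode bound:", bound, "vs 2^(L/2) =", 2**(L // 2))
```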
### Type-I and Type-II zero modes
We now notice something more interesting about the zero-energy states described in Sec. V.2. We again consider the Hamiltonian written in the form given in Eq. (8). It then turns out that the zero-energy states come in two types, Type-I and Type-II. A given zero-energy state \(\ket{\psi}\) is said to be Type-II if \(H_{3}\ket{\psi}=0\) but the two terms separately do not give zero, i.e., \(Z\ket{\psi}\neq 0\) and \(X\ket{\psi}\neq 0\). However for a few of the zero-energy states, it turns out that the terms individually also give zero eigenvalues, that is, \(Z\ket{\psi}=0\) and \(X\ket{\psi}=0\). This means that the wave functions of these states are independent of the transverse field \(h\). These Type-I zero modes violate the ETH since they remain unchanged as the coupling \(h\) is varied in spite of the energy level spacing in their neighborhood being exponentially small in \(L\) [18], and can therefore be classified as quantum many-body scars [23]. The number of these Type-I zero-energy states \(N_{1}\) also increases with system size as shown in Fig. 9 (b). We do not know precisely how fast \(N_{1}\) grows with the system size \(L\), but we will show below that the growth is at least linear. The special nature of the Type-I states becomes clearer when we look at a plot of the half-chain entanglement entropy versus the energy spectrum of this model. We find that most of the states lie close to the thermal entropy of the system except for some states which stand out at \(E=0\). These are the Type-I zero-energy states which turn out to typically have very low entanglement entropy compared to a generic state close to \(E=0\) showing a violation of the ETH [60; 61]. For a given system size, we can further perform a minimization of the entanglement entropy within the subspace of these scar states [20] using the algorithm outlined in Ref. [62]. We show these plots with the full spectrum along with the entanglement-entropy minimized scar states in Fig. 10 (a) and (b), for system sizes \(L=12\) and \(L=18\). We see that there is a dramatic drop in the entropy for most of these scar states confirming that they indeed violate the ETH.
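One convenient numerical way (ours, not necessarily the authors' procedure) to count Type-I states is to note that a state annihilated by both \(Z\) and \(X\) is exactly a zero mode of the positive semidefinite operator \(Z^{2}+X^{2}\); the dimension of its kernel therefore counts the linearly independent Type-I states and can be compared with Table 2. The small sizes below are illustrative.

```python
# Sketch: count zero-energy states of H_3 = Z + X, and count Type-I states as
# the kernel dimension of Z^2 + X^2 (states annihilated by Z and X separately).
import numpy as np
from functools import reduce

sxm = np.array([[0., 1.], [1., 0.]])
szm = np.array([[1., 0.], [0., -1.]])
I2 = np.eye(2)

def site_op(ops, L):
    return reduce(np.kron, [ops.get(j, I2) for j in range(L)])

def z_and_x(L):
    Z = np.zeros((2**L, 2**L))
    X = np.zeros((2**L, 2**L))
    for j in range(L):
        Z -= site_op({j: szm, (j + 1) % L: szm, (j + 2) % L: szm}, L)
        X -= site_op({j: sxm}, L)          # overall factor of h does not affect the kernel
    return Z, X

def count_states(L, tol=1e-8):
    Z, X = z_and_x(L)
    n_zero = np.sum(np.abs(np.linalg.eigvalsh(Z + X)) < tol)
    n_type1 = np.sum(np.linalg.eigvalsh(Z @ Z + X @ X) < tol)
    return n_zero, n_type1

for L in [4, 6, 8]:
    print(L, count_states(L))     # compare with Table 2
```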
The total number of zero-energy states and the number of Type-I zero-energy states for various system sizes \(L\) are shown in Table 2. We see that the total number of zero-energy states increases rapidly with \(L\) while the number of Type-I states changes non-monotonically but on the average increases with \(L\).
We can further appreciate the difference between Type-II and Type-I states by studying their distribution over the Fock space.
\begin{table}
\begin{tabular}{||c|c|c||} \hline System size & Total number of & Number of \\ \(L\) & zero-energy states & Type-I states \\ \hline \(4\) & \(6\) & \(2\) \\ \hline \(6\) & \(20\) & \(10\) \\ \hline \(8\) & \(30\) & \(9\) \\ \hline \(10\) & \(56\) & \(16\) \\ \hline \(12\) & \(202\) & \(34\) \\ \hline \(14\) & \(236\) & \(19\) \\ \hline \(16\) & \(492\) & \(21\) \\ \hline \(18\) & \(970\) & \(50\) \\ \hline \end{tabular}
\end{table}
Table 2: Total number of zero-energy states and number of Type-I zero-energy states for various system sizes.
A state can be written as a superposition of the basis states of the entire Fock space. A particular scar state, after it has been minimized for entanglement entropy, can be written as \(|\psi_{S}\rangle=\sum_{n}^{2^{L}}c_{n}\left|\psi_{n}\right\rangle\), where \(|\psi_{n}\rangle\) are the basis states in the Fock space and \(c_{n}\) is the corresponding amplitude for the scar state \(|\psi_{S}\rangle\). In Fig. 11, we plot the probability \(|c_{n}|^{2}\) for a generic zero-energy state and for the scar states. We see that a Type-II state (Fig. 11 (a)) has non-zero coefficients over a large number of basis states, and the distribution looks random. However, Type-I states as shown in Figs. 11 (b) and (c) can be easily distinguished as they have a large weight over only a few basis states with equal probabilities.
### Some exact Type-I states
We will now present some Type-I states (scars) which we have found analytically [63]. To this end, let us define two states involving sites \(j\) and \(k\) given by
\[S_{j,k}\ =\ \frac{1}{\sqrt{2}}\;(|\uparrow_{j}\downarrow_{k}\rangle\;-\;|\downarrow_{j}\uparrow_{k}\rangle),\] \[T_{j,k}\ =\ \frac{1}{\sqrt{2}}\;(|\uparrow_{j}\downarrow_{k}\rangle\;+\;|\downarrow_{j}\uparrow_{k}\rangle), \tag{40}\]
Figure 11: (a) Probabilities \(|c_{n}|^{2}\) of a generic Type-II \(E=0\) state in the entire Fock space for all the basis states are plotted for \(L=18\). We see that the distribution is random. (b) and (c) show the same plot for different Type-I scar states. The distribution is more sparse and also has equal probabilities for many basis states.
Figure 10: Plots of the half-chain entanglement entropy spectrum for all the energy levels of the system for \(L=12\) and \(L=18\) are shown in (a) and (b) respectively. The points in red correspond to the Type-I scar states which have entanglement entropy much lower than the neighbouring states, clearly violating the ETH.
where \(\uparrow\) and \(\downarrow\) denote spin-up and spin-down in the \(\sigma^{x}\) basis. These are, respectively, spin-singlet and spin-triplet states with total \(S^{x}=(\sigma_{j}^{x}+\sigma_{k}^{x})/2=0\). Note that these states are antisymmetric and symmetric respectively under the exchange of sites \(j\) and \(k\). We find that they satisfy the identities
\[\sigma_{j}^{z}\ S_{j,k} = -\;\sigma_{k}^{z}\ S_{j,k},\quad\sigma_{j}^{z}\sigma_{k}^{z}\ S_{ j,k}\ =\ -\ S_{j,k},\] \[\sigma_{j}^{z}\ T_{j,k} = \sigma_{k}^{z}\ T_{j,k},\quad\quad\sigma_{j}^{z}\sigma_{k}^{z}\ T_ {j,k}\ =\ T_{j,k}. \tag{41}\]
We now consider a system with \(L\) sites with PBC and a state which is a product of singlets with the form
\[|\psi_{1}\rangle\ =\ S_{L,1}\ S_{L-1,2}\ S_{L-2,3}\ \cdots S_{(L/2)+1,L/2}. \tag{42}\]
Clearly \(X|\psi_{1}\rangle=0\), where the operator \(X\) is given in Eq. (8). (A picture of \(|\psi_{1}\rangle\) for \(L=8\) is shown in Fig. 12 (a); each line connecting a pair of sites denotes a spin-singlet state.) Eqs. (41) then imply that
\[(\sigma_{L-1}^{z}\sigma_{L}^{z}\sigma_{1}^{z}+\sigma_{L}^{z}\sigma_{1}^{z}\sigma_{2}^{z})\ |\psi_{1}\rangle = 0,\] \[(\sigma_{L-2}^{z}\sigma_{L-1}^{z}\sigma_{L}^{z}+\sigma_{1}^{z}\sigma_{2}^{z}\sigma_{3}^{z})\ |\psi_{1}\rangle = 0,\] \[(\sigma_{L-3}^{z}\sigma_{L-2}^{z}\sigma_{L-1}^{z}+\sigma_{2}^{z}\sigma_{3}^{z}\sigma_{4}^{z})\ |\psi_{1}\rangle = 0,\] \[\cdots\] \[(\sigma_{L/2+1}^{z}\sigma_{L/2+2}^{z}\sigma_{L/2+3}^{z}+\sigma_{L/2-2}^{z}\sigma_{L/2-1}^{z}\sigma_{L/2}^{z})\ |\psi_{1}\rangle = 0,\] \[(\sigma_{L/2}^{z}\sigma_{L/2+1}^{z}\sigma_{L/2+2}^{z}+\sigma_{L/2-1}^{z}\sigma_{L/2}^{z}\sigma_{L/2+1}^{z})\ |\psi_{1}\rangle = 0. \tag{43}\]
Hence the state \(|\psi_{1}\rangle\) satisfies \(Z|\psi_{1}\rangle=0\), where the operator \(Z\) is given in Eq. (8). Since both \(X\) and \(Z\) annihilate \(|\psi_{1}\rangle\), we conclude that this is a Type-I state.
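This annihilation can be checked numerically for a small chain; the sketch below (ours) builds \(|\psi_{1}\rangle\) for \(L=8\) and evaluates \(\|Z|\psi_{1}\rangle\|\) and \(\|X|\psi_{1}\rangle\|\). Since the two-site singlet takes the same form, up to an overall sign, in the \(\sigma^{z}\) and \(\sigma^{x}\) bases, it can be built directly in the computational basis; the helper names are ours.

```python
# Sketch: construct the singlet-product state |psi_1> of Eq. (42) for L = 8 and
# verify numerically that it is annihilated by both Z and X.
import numpy as np
from functools import reduce
from itertools import chain

sxm = np.array([[0., 1.], [1., 0.]])
szm = np.array([[1., 0.], [0., -1.]])
I2 = np.eye(2)
singlet = np.array([[0., 1.], [-1., 0.]]) / np.sqrt(2)   # (|01> - |10>)/sqrt(2)

def site_op(ops, L):
    return reduce(np.kron, [ops.get(j, I2) for j in range(L)])

def paired_state(pairs, L):
    # product of two-site singlets; pairs is a list of (j, k) 0-based site indices
    psi = reduce(np.multiply.outer, [singlet] * len(pairs))
    axis_sites = list(chain.from_iterable(pairs))         # site carried by each tensor axis
    perm = [axis_sites.index(s) for s in range(L)]         # reorder axes into site order
    return np.transpose(psi, perm).reshape(2**L)

L = 8
# pairs (L,1), (L-1,2), ..., ((L/2)+1, L/2) of Eq. (42), converted to 0-based sites
pairs = [(L - n, n - 1) for n in range(1, L // 2 + 1)]
psi1 = paired_state(pairs, L)

Z = -sum(site_op({j: szm, (j + 1) % L: szm, (j + 2) % L: szm}, L) for j in range(L))
X = -sum(site_op({j: sxm}, L) for j in range(L))          # factor of h omitted
print("||Z psi_1|| =", np.linalg.norm(Z @ psi1), " ||X psi_1|| =", np.linalg.norm(X @ psi1))
```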
Next, we can take the state \(|\psi_{1}\rangle\) and rotate all the sites clockwise by 1 site on the circle. This gives the state
\[|\psi_{2}\rangle\ =\ S_{1,2}\ S_{L,3}\ S_{L-1,4}\ \cdots\ S_{(L/2)+2,(L/2)+1}, \tag{44}\]
and following similar arguments we can show that \(|\psi_{2}\rangle\) is also a Type-I state. Continuing in this way, we find \(L/2\) distinct states, denoted \(|\psi_{n}\rangle\), \(n=1,2,\cdots,L/2\), which are all Type-I states.
Now we observe that if the system is cut into two equal parts by a line, and we consider the state \(|\psi_{1}\rangle\), the line may cut no singlets, one singlet, two singlets, and so on all the way up to \(L/2\) singlets, depending on the orientation of the line (see the two dashed lines in Fig. 12 (a)). As a result, the half-chain entanglement entropy can take all possible values from zero up to \((L/2)\ln 2\). Even the largest of these values is only half of the thermal entropy given by \(L\ln 2\). This again confirms that these are all scar states.
It turns out that there are two other singlet states, denoted \(|\phi_{1}\rangle\) and \(|\phi_{2}\rangle\), which are also Type-I states. These have the form
\[|\phi_{1}\rangle =\ S_{1,2}\ S_{3,4}\ S_{5,6}\ \cdots\ S_{L-1,L},\] \[|\phi_{2}\rangle =\ S_{2,3}\ S_{4,5}\ S_{6,7}\ \cdots\ S_{L,1}. \tag{45}\]
(A picture of \(|\phi_{1}\rangle\) is shown in Fig. 12 (b).) Using Eqs. (41) we can show that these states are also annihilated by the operator \(Z\). (As before, we can find pairs of three-spin terms \(\sigma_{i}^{z}\sigma_{j}^{z}\sigma_{k}^{z}\) and \(\sigma_{l}^{z}\sigma_{m}^{z}\sigma_{n}^{z}\) such that the sum of the two terms annihilates the states \(|\phi_{n}\rangle\).) Further, the half-chain entanglement entropy for these two states ranges from zero to \(2\ln 2\) depending on the orientation of the line which cuts the system into two halves.
For \(L=4\), the states \(\psi_{n}\) and \(\phi_{n}\) are identical, and we therefore have only two exact type-I states; according to Table 2, these form the complete set of Type-I states. For \(L\geq 6\), the states \(\psi_{n}\) and \(\phi_{n}\) are distinct, and we therefore have \((L/2)+2\) Type-I states.
The states \(|\psi_{n}\rangle\) and \(|\phi_{n}\rangle\) discussed above are examples of resonating valence bond (RVB) states for a \(L\)-site system. If the \(L\) sites are arranged around a circle, the RVB states correspond to joining pairs of sites by lines in such a way that no two lines cross each other. According to the Rumer-Pauling rules [64], there are \(L!/(L/2)!((L/2)+1)!\) such states which are linearly independent, although not orthogonal to each other. We see that \((L/2)+2\) of the RVB states are Type-I states for our model. We conclude that the number of Type-I states increases at least linearly with \(L\).
We can construct one more Type-I state using singlet states as follows. For a system with \(L\) sites and PBC, consider the following state which is a product of singlets connecting diametrically opposite sites,
\[|\psi_{d}\rangle\ =\ S_{1,(L/2)+1}\ S_{2,(L/2)+2}\ S_{3,(L/2)+3}\ \cdots\ S_{L/2,L}. \tag{46}\]
We find that this state is annihilated by terms of the form \(\sigma_{n}^{z}\sigma_{n+1}^{z}\sigma_{n+2}^{z}+\sigma_{(L/2)+n}^{z}\sigma_{(L/2 )+n+1}^{z}\sigma_{(L/2)+n+2}^{z}\), where \(n=1,2,\cdots,L/2\). Hence \(|\psi_{d}\rangle\) is annihilated by the operator \(Z\) given in Eq. (8). A picture of \(|\psi_{d}\rangle\) for \(L=8\) is shown in Fig. 13 (a). However, \(|\psi_{d}\rangle\) is not an RVB state since the different singlet lines cross each other; in fact, any two singlet lines cross each other. But we can write \(|\psi_{d}\rangle\) as a linear combination of RVB states by using the identity
\[S_{i,j}\ S_{k,l}\ -\ S_{i,k}\ S_{j,l}\ +\ S_{i,l}\ S_{j,k}\ =\ 0 \tag{47}\]
several times. Depending on how four sites labeled \(i,\ j,\ k,\ l\)
Figure 12: (a) Picture of the state \(|\psi_{1}\rangle\) given in Eq. (42) for \(L=8\). The lines joining pairs of sites denote spin singlets. Two straight lines dividing the system into two equal parts are shown by dashed lines. The vertical dashed line cuts \(L/2\) singlets, while the horizontal dashed line does not cut any singlet; thereby producing half-chain entanglement entropies equal to \((L/2)\ln 2\) and zero respectively. (b) Picture of the state \(|\phi_{1}\rangle\) given in Eq. (45).
are arranged around a circle, one of the terms in Eq. (47) will correspond to a state with one crossing while the other two terms will correspond to non-crossing states. Hence, by repeatedly using Eq. (47), we can successively decrease the number of crossings to eventually reduce \(|\psi_{d}\rangle\) to a superposition of RVB states. For \(L=8\), we find that the superposition contains all the states shown in Figs. 12 (a) and (b) as well as a _specific_ linear combination of 8 other RVB states which are of the form
\[|\phi_{d}\rangle\ =\ S_{1,8}\ S_{2,7}\ S_{3,4}\ S_{5,6}, \tag{48}\]
shown in Fig. 13 (b), and 7 other states obtained from Eq. (48) by rotating all the sites clockwise by \(1,\ 2,\ \cdots,\ 7\) sites.
The different kinds of exact Type-I states discussed above do not exhaust all the Type-I states. For instance, Table 2 shows that there are 9 Type-I states for \(L=8\), but the arguments above only account for \((L/2)+2+1=7\) of them.
Finally, we note that if \(L\) is a multiple of 6, we can find exact Type-I states involving both singlets and triplets. Two such states are shown in Figs. 14 for a system with 6 sites. The state in Fig. 14 (a) has the form
\[|\psi^{\prime}_{1}\rangle\ =\ T_{6,1}\ S_{5,2}\ T_{4,3}, \tag{49}\]
while the state in Fig. 14 (b) has the form
\[|\phi^{\prime}_{1}\rangle\ =\ T_{1,2}\ S_{3,4}\ T_{5,6}. \tag{50}\]
These can be shown to be Type-I states by similar arguments as above and using the identities in Eqs. (41). Then one can repeatedly rotate all the sites by 1 site, starting from \(|\psi^{\prime}_{1}\rangle\) and \(|\phi^{\prime}_{1}\rangle\), to obtain states of the form \(|\psi^{\prime}_{n}\rangle\), \(n=1,2,3\), and \(|\phi^{\prime}_{n}\rangle\), \(n=1,2,\cdots,6\), respectively. We thus obtain 9 states each of which involves one singlet and two triplets. However, one can show that only 5 of these states are linearly independent; one can choose these to be of the form \(|\phi^{\prime}_{n}\rangle\), where \(n=1,2,\cdots,5\). For \(L=6\), therefore, we get \((L/2)+2=5\) states involving only singlet states and 5 states involving both singlets and triplets. This gives a total of \(10\) Type-I states for \(L=6\) which is in agreement with Table 2.
A similar construction of Type-I states involving singlets and triplets exists whenever \(L\) is a multiple of 6. There are two kinds of such states. The first kind of states resembles the one shown in Fig. 14 (a) and is given by
\[|\psi^{\prime}_{1}\rangle\ =\ T_{L,1}\ S_{L-1,2}\ T_{L-2,3}\ T_{L-3,4}\ S_{L-4,5}\ \cdots\]
Counting all the Type-I states exactly seems to be a difficult problem. We note that all the exact Type-I states discussed in this section have been found by demanding that they be annihilated by the sum of two three-spin terms of the form \(\sigma_{i}^{z}\sigma_{j}^{z}\sigma_{k}^{z}+\sigma_{l}^{z}\sigma_{m}^{z}\sigma_{n}^{z}\), and these sums combine to give the total operator \(Z\) in Eq. (8). However, there may be more complicated Type-I states which are only annihilated by the sum of three or more three-spin terms.
It is intriguing that singlets and triplets (with zero magnetization) play such an important role in the construction of Type-I states even though the Hamiltonian \(H_{3}\) is not invariant under \(SU(2)\) or any other continuous symmetry.
## VI Anomalous relaxation of autocorrelators at different sites
The ordered phase of the TFIM on a semi-infinite system is characterized by a doubly degenerate spectrum and the presence of a strong edge mode operator that connects pairs of degenerate states with opposite parity [29; 30]. Numerically, this can be observed by studying the infinite-temperature autocorrelator of the \(\sigma^{z}\) operator at different sites near the edge of the system [31; 32; 33].
\[A_{l}^{zz}(t)=\frac{1}{2^{L}}\text{Tr}[\sigma_{l}^{z}(t)\sigma_{l}^{z}]. \tag{53}\]
Since the strong mode operator has a large overlap with the \(\sigma_{l}^{z}\) operator at the boundary site, the autocorrelator shows a long plateau near the value of unity with a time scale that increases exponentially with the system size before relaxing to zero. However the autocorrelator of \(\sigma^{z}\) at any other site falls off to zero very quickly in a time scale \(t\lesssim 10\).
This motivates us to ask a similar question for the non-integrable model \(H_{3}\) with open boundary conditions. As discussed earlier, this model has an exact degeneracy in three-fourths of its eigenstates due to the presence of the \(D_{1},~{}D_{2},~{}D_{3}\) operators for PBC, and also a two-fold degeneracy in half of its eigenstates due to parity symmetry for open boundary conditions. These degeneracies are present for any value of the transverse field \(h\). We will study how the spin autocorrelators relax in time at sites near the boundary for various values of \(h\) and see if the degeneracies play any role in the relaxation. The infinite-temperature autocorrelators can be calculated as traces over all the energy eigenstates of the Hamiltonian. We will be interested in the \(zz\)- and \(xx\)-autocorrelators given by
\[A_{l}^{zz}(t)=\frac{1}{2^{L}}\sum_{n,m}e^{\ i(E_{n}-E_{m})t}|\langle n|\sigma_ {l}^{z}|m\rangle|^{2}, \tag{54}\]
and
\[A_{l}^{xx}(t)=\frac{1}{2^{L}}\sum_{n,m}e^{\ i(E_{n}-E_{m})t}|\langle n|\sigma_ {l}^{x}|m\rangle|^{2}, \tag{55}\]
respectively. The autocorrelators defined in this way are expected to reveal the nature of the phase transition and the energy spectra on the two sides of the transition.
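A minimal sketch (ours) of how such an infinite-temperature autocorrelator can be evaluated from a full diagonalization; the open-boundary Hamiltonian, the small size \(L=8\) and the time grid are illustrative assumptions (the paper uses \(L=14\)).

```python
# Sketch: infinite-temperature autocorrelator A_l^{zz}(t) of Eq. (54) from full
# diagonalization of the open chain.
import numpy as np
from functools import reduce

sxm = np.array([[0., 1.], [1., 0.]])
szm = np.array([[1., 0.], [0., -1.]])
I2 = np.eye(2)

def site_op(ops, L):
    return reduce(np.kron, [ops.get(j, I2) for j in range(L)])

def h3_open(L, h):
    H = np.zeros((2**L, 2**L))
    for j in range(L - 2):                       # open boundary conditions
        H -= site_op({j: szm, j + 1: szm, j + 2: szm}, L)
    for j in range(L):
        H -= h * site_op({j: sxm}, L)
    return H

def autocorrelator(L, h, site, times):
    E, V = np.linalg.eigh(h3_open(L, h))
    O = V.T @ site_op({site: szm}, L) @ V        # sigma^z_l in the energy eigenbasis
    W = np.abs(O)**2
    out = []
    for t in times:
        p = np.exp(1j * E * t)
        out.append(np.real(p @ W @ p.conj()) / 2**L)
    return np.array(out)                         # equals 1 at t = 0 by construction

ts = np.linspace(0, 50, 200)
A1 = autocorrelator(L=8, h=0.2, site=0, times=ts)   # site=0 is the boundary site l = 1
print(A1[:5])
```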
We present the results for \(A_{l}^{zz}(t)\) versus \(t\) on a log scale for different lattice sites \(l=1,2,\ldots 6\) (with \(l=1\) being the boundary site) and three values of the transverse field, \(h=0.2,1,\) and \(5.0\), in Figs. 15 (a), (b) and (c) respectively. The relaxation of the autocorrelators shows very interesting behaviors depending on whether \(h\ll 1\), \(h\gg 1\) or \(h=1\). For \(h=0.2\) (see Fig. 15 (a)), we observe qualitatively that \(A_{1}^{zz}\) and \(A_{4}^{zz}\) have a similar structure, with a small plateau for a time interval of \(t\lesssim 10^{4}\), where the autocorrelator remains near 1 before falling off to zero at large times. We believe that this is due to the presence of an operator which has an appreciable overlap with \(\sigma^{z}\) at sites \(1\) and \(4\) and also has a small commutator with the Hamiltonian itself. The autocorrelator at site \(l=2\) has the most striking behavior, showing oscillations with an approximate period of \(15.5\). We also plot the same autocorrelator in real time instead of the logarithmic scale in Fig. 16 (a), where the oscillations can be seen clearly.
The small frequency oscillations at the site \(l=2\) can be explained by considering the Hamiltonian in the small \(h\) limit and doing a perturbative calculation. First, by putting \(h=0\), we have the Hamiltonian given by \(H_{3}\Big{|}_{h=0}=Z=-\sum_{j=1}^{L-2}\sigma_{j}^{z}\sigma_{j+1}^{z}\sigma_{j+2}^{z}\). The eigenstates of this are given by product states in which each site \(j\) has a definite value of \(\sigma_{j}^{z}=\pm 1\). Therefore, all the eigenvalues of \(Z\) are integer valued and so are the energy differences. Now, an introduction of a small value of transverse field \(h\) gives eigenstates with energy differences of order \(h\). To see this, we look at the couplings in the \(Z\) term and the effects of \(\sigma_{l}^{z}\) in the autocorrelator more carefully. The couplings in \(Z\) containing a particular \(\sigma_{l}^{z}\) can be considered for three separate cases, (a) \(\sigma_{l}^{z}(\sigma_{2}^{z}\sigma_{3}^{z})\), for \(l=1\), (b) \(\sigma_{l}^{z}(\sigma_{1}^{z}\sigma_{3}^{z}+\sigma_{3}^{z}\sigma_{4}^{z})\), for \(l=2\), and (c) \(\sigma_{l}^{z}(\sigma_{l-2}^{z}\sigma_{l-1}^{z}+\sigma_{l-1}^{z}\sigma_{l+1}^{z}+\sigma_{l+1}^{z}\sigma_{l+2}^{z})\), for \(l\geq 3\). Since each \(\sigma_{j}^{z}\) can take values \(\pm 1\), the products of two spin operators will also take values \(\pm 1\). Therefore, in cases (a) and (c), we have a sum of an odd number of such products which necessarily has a non-zero value. However, in case (b), we have an even number of such terms and hence, for \(l=2\), we can have a case where \(\sigma_{2}^{z}\) is multiplied by zero. More precisely, this happens if \(\sigma_{3}^{z}(\sigma_{1}^{z}+\sigma_{4}^{z})=0\), i.e., if \((\sigma_{1}^{z}+\sigma_{4}^{z})=0\). Thus, the two sets of eigenstates of \(Z\) corresponding to the value of \(\sigma_{2}^{z}=\pm 1\) (we label them as \(|I\rangle\) and \(|II\rangle\) respectively) will be degenerate for any values of \(\sigma_{1}^{z},\sigma_{3}^{z},\sigma_{4}^{z},\sigma_{5}^{z},\sigma_{6}^{z},\ldots\), with the condition that \((\sigma_{1}^{z}+\sigma_{4}^{z})=0\). This condition is satisfied for half of the states when \(\sigma_{1}^{z}\) and \(\sigma_{4}^{z}\) are opposite to each other, and then we have a pairwise degeneracy between the states of types \(|I\rangle\) and \(|II\rangle\). With a small \(h\) present, the term \(-h\sigma_{2}^{x}\) will break the degeneracy, since \(\sigma_{2}^{x}\,|I\rangle=|II\rangle\) and vice-versa. Therefore we end up having a new set of eigenstates \(|\pm\rangle=1/\sqrt{2}(|I\rangle\pm|II\rangle)\) with an energy splitting of \(2h\). Now, since \(\langle-|\,\sigma_{2}^{z}\,|+\rangle=1\), we see from Eq. (54) that for \(l=2\), half the states of the spectrum in the autocorrelator will contribute to an oscillatory term \(e^{\pm i2ht}\). This exactly explains the oscillations seen in Fig. 16 (a). Eventually, for later times the oscillations decay as terms of order \(h^{2}\) and higher in the energy differences become important.
We also note that since \(|\pm\rangle\) are eigenstates of \(\sigma_{2}^{x}\), with eigenvalues \(\pm 1\), these states will contribute to the diagonal
terms (i.e., terms with \(m=n\) and therefore \(E_{m}=E_{n}\)) in the \(xx\)-autocorrelator at \(l=2\) in Eq. (55). Since the diagonal terms are time-independent (as \(E_{m}=E_{n}\)), we expect that the \(xx\)-autocorrelator at \(l=2\) will have a non-zero constant term. This agrees with what we see in Fig. 18 (a) for \(h=0.2\).
For large values of \(h\), we see in Fig. 16 (b) that at several sites near one end of the system, the \(zz\)-autocorrelators show pronounced oscillations before eventually decaying to zero. All the oscillations have the same frequency which is found to be close to \(2h\). We can understand this as follows. For \(h\gg 1\), we see from Eq. (2) that the eigenstates of \(H\) are given, to lowest order, by products of eigenstates of \(\sigma_{j}^{x}\) for all \(j\). An operator \(\sigma_{j}^{z}\) connects two states which have \(\sigma_{j}^{x}=\pm 1\) and therefore unperturbed energies equal to \(\mp h\). The energy difference of these two states is \(2h\), hence Eq. (54) implies that the contribution of these two states to the \(zz\)-autocorrelator at site \(j\) will oscillate as \(e^{\pm i2ht}\); this explains Fig. 16 (b). Next, we can extend this argument to first order in perturbation theory. Consider the \(zz\)-autocorrelator at the first site given by \(j=1\) where the oscillations are most pronounced. To first order in the perturbation \(V=-\sigma_{1}^{z}\sigma_{2}^{z}\sigma_{3}^{z}\), the two states given by \(|I\rangle=|\sigma_{1}^{x}=+1,\sigma_{2}^{x}=a,\sigma_{3}^{x}=b\rangle\) and \(|II\rangle=|\sigma_{1}^{x}=-1,\sigma_{2}^{x}=-a,\sigma_{3}^{x}=-b\rangle\) will mix (here \(a,\,b\) can take values \(\pm 1\)). The unperturbed energies of these states are \(E_{I}=-h(1+a+b)\) and \(E_{II}=h(1+a+b)\) respectively. Hence, to first order in perturbation theory, the energy of the state lying close to \(|I\rangle\) will shift from \(E_{I}=-h(1+a+b)\) to \(E_{I}^{\prime}=-h(1+a+b)+1/(E_{I}-E_{II})=-h(1+a+b)-1/(2h(1+a+b))\). Similarly, the perturbation \(V\) mixes the two states \(|III\rangle=|\sigma_{1}^{x}=-1,\sigma_{2}^{x}=a,\sigma_{3}^{x}=b\rangle\) and \(|IV\rangle=|\sigma_{1}^{x}=1,\sigma_{2}^{x}=-a,\sigma_{3}^{x}=-b\rangle\), and shifts the energy of the state lying close to \(|III\rangle\) from \(E_{III}=h(1-a-b)\) to \(E_{III}^{\prime}=h(1-a-b)+1/(2h(1-a-b))\). The operator \(\sigma_{1}^{z}\) connects the states lying close to \(|I\rangle\) and \(|III\rangle\), and we see from the expressions above that the energy difference between these two states is
\[|E_{I}^{\prime}-E_{III}^{\prime}| = 2h\ +\ \frac{1}{2h}\ \left(\frac{1}{1+a+b}\ +\ \frac{1}{1-a-b}\right) \tag{56}\] \[= 2h\ +\ \frac{1}{h}\ \left(\frac{1}{1-(a+b)^{2}}\right).\]
According to Eq. (54), therefore, the oscillations will have the frequency given in Eq. (56). Now, since \(a,\ b\) can independently take the values \(\pm 1\), giving rise to four possibilities, the expression in Eq. (56) can take two possible values given by \(2h+(1/h)\) (when \(a=-b\)) and \(2h-(1/3h)\) (when \(a=b\)). Hence we expect the oscillations to have a frequency \(\omega\), where \(\omega/(2h)=1+1/(2h^{2})\) and \(1-1/(6h^{2})\). Since these two cases appear an equal number of times, the average value is given
Figure 16: (a) \(A_{l}^{zz}(t)\) at site \(l=2\) for \(h=0.2\), showing long-time oscillations. This can be understood using first-order degenerate perturbation theory. (b) \(A_{l}^{zz}(t)\) showing oscillations at different sites for \(h=5\). This can be understood using effective two-level systems. Both the figures are for system size \(L=14\).
Figure 15: Autocorrelation function \(A_{l}^{zz}(t)\) for \(L=14\) plotted versus time on a log scale for different values of the transverse field. Deep inside the ordered phase, (a) \(h=0.2\), or the disordered phase, (c) \(h=5\), the autocorrelators at several sites near the boundary show oscillations for a long time before decaying to zero. (b) At the critical point, \(h=1\), the autocorrelators decay quickly to zero at all sites except at the boundary site.
by \(\omega/(2h)=1+1/(6h^{2})\). This is in reasonable agreement with the numerical result shown in Fig. 17 for large values of \(h\). We note that since the frequency \(\omega\) used in that figure is obtained by calculating the position of the peak of the Fourier transform of the oscillations in Fig. 16 (b), the decay of the oscillations leads to a small width around the peak. This width also turns out to be of the order of \(1/h\), and we therefore do not see two separate peaks at \(\omega=2h+(1/h)\) and \(2h-(1/3h)\). Remarkably, these early and intermediate time oscillations in \(A_{l=1}^{zz}(t)\) persist all the way to \(h=1\) (Fig. 17) for the boundary site when the critical point is approached from \(h>1\), while the other autocorrelators show a reasonably rapid decay in the neighborhood of the critical point (see Appendix B for the extraction of the oscillation frequency \(\omega\) in Fig. 17).
## VII Discussion
A summary of our main results is as follows. Motivated by the one-dimensional TFIM which is one of the best studied integrable models with duality and a quantum critical point, we have made a detailed study of a generalization in which there are Ising interactions between three successive spins (instead of two successive spins as in the TFIM). We find that the model has a \(Z_{2}\times Z_{2}\) symmetry for a system with PBC provided that the system size is a multiple of 3. This symmetry implies that the system consists of four sectors which are decoupled from each other, and this leads to three-fold degeneracies in the energy spectrum which involves states from three of the four sectors. Next we have discussed the duality of the model between \(h\) and \(1/h\). While the duality is straightforward to show for an infinite-sized system, the existence of a duality turns out to be a subtle issue for finite-sized systems with PBC. We find that exact duality holds only if the system size is _not_ a multiple of 3. Next, we make a detailed study of the criticality properties of the model at the self-dual point given by \(h=1\). Using ED and system sizes up to \(L=27\), we use finite-size scaling to first confirm that there is indeed a critical point at \(h=1\), and then to compute the dynamical critical exponent \(z\), the order parameter exponent \(\beta\), the magnetic susceptibility exponent \(\gamma\), and the correlation length exponent \(\nu\). We find that \(z=1\) suggesting that the low-energy sector of the model at \(h=1\) has conformal invariance. We then determine the central charge \(c\) in two different ways (from the length-dependences of the entanglement entropy between two parts of the system and of the ground state energy). We find that \(c\) is close to 1. We then observe that although the values of \(\beta\), \(\gamma\) and \(\nu\) for the two-spin and three-spin models are different from each other, the ratios \(\beta/\nu\) and \(\gamma/\nu\) are the same in the two models. This suggests that there is a weak universality and the three-spin model lies on the AT line, just like two copies of the TFIM and the four-state Potts model. All models on this line are known to have \(z=1\), \(c=1\), and the same values of \(\beta/\nu=1/8\) and \(\gamma/\nu=7/4\). There is a quantum AT model which has a parameter \(\lambda\) such that two copies of the TFIM and the four-state Potts model correspond to \(\lambda=0\) and 1 respectively. Given our numerically obtained value of \(\nu\) for the three-spin model, we estimate this model corresponds approximately to \(\lambda\approx 0.43\). A useful direction for future studies would be to determine all these quantities (\(z,\;\beta,\;\gamma,\;\nu\), \(c\) and \(\lambda\)) more accurately for the three-spin model using density-matrix renormalization group or quantum Monte Carlo methods which can be used for much larger system sizes.
We then studied the energy level spacing statistics in a particular symmetry sector of a system with open boundary conditions to determine if the three-spin model is integrable. We find that the level spacing statistics has the form of the Gaussian orthogonal ensemble, and hence the model is non-integrable. Next, we find that the model has an exponentially large number of mid-spectrum zero-energy states which is consistent with an index theorem; the number of states grows at least as fast as \(2^{L/2}\). Further, we find that the zero-energy states are of two types which we call Type-I and Type-II. The Type-I states are special because they are simultaneous zero-energy eigenstates of the two parts of the Hamiltonian (the three-spin interaction and the transverse field); hence their wave functions do not change with \(h\) in spite of the energy level spacing in their neighborhood being exponentially small in system size. These states thus violate the ETH and qualify as quantum many-body scars. We have presented the analytical forms of some of the Type-I states which show that their number grows at least linearly with the system size. However, we do not know the form of the growth more precisely (linear, exponential, or some other dependence). Finally, we have studied the infinite-temperature autocorrelation functions for both \(\sigma^{x}\) and \(\sigma^{z}\) at sites close to one end of a large system with open boundary conditions. We find that far from the critical point, at either \(h\ll 1\) or \(h\gg 1\), some of the autocorrelators show an anomalous behavior in that they show pronounced oscillations and decay very slowly with time. The time scale of decay is much larger than the inverse of the energy scales in the Hamiltonian; this is unexpected since the model is non-integrable. We provide a qualitative understanding of the oscillations using perturbation theory. However, the reason for a large decay time is not yet understood analytically. Furthermore, the autocorrelator for \(\sigma^{z}\) at the end site shows persistent oscillations at short and intermediate timescales even when \(h\) is close to the critical point while the other autocorrelators decay quickly to zero. An analytic understanding of this feature is lacking as of now.
Figure 17: Variation of the frequency of oscillations of \(A_{I=1}^{zz}(t)\) at the end site with the transverse field \(h\) for \(L=14\). For large \(h\) the dependence is consistent with the perturbative result \(\omega/(2h)=1+1/(6h^{2})\).
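For readers unfamiliar with the level-spacing diagnostic invoked in the discussion above, a minimal sketch of the spacing-ratio statistic applied to a random real symmetric (GOE-like) matrix is given below; the matrix is a generic stand-in, not the three-spin Hamiltonian, and the size and seed are arbitrary choices:
```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000
a = rng.normal(size=(n, n))
H = (a + a.T) / np.sqrt(2)        # random real symmetric (GOE-like) matrix

levels = np.sort(np.linalg.eigvalsh(H))
s = np.diff(levels)               # consecutive level spacings
r = np.minimum(s[:-1], s[1:]) / np.maximum(s[:-1], s[1:])

# Mean spacing ratio: roughly 0.53 for GOE statistics, roughly 0.39 for Poisson
print(f"<r> = {r.mean():.3f}")
```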
###### Acknowledgements.
A.S. thanks Hosho Katsura for illuminating discussions. D.S. thanks Chethan Krishnan for stimulating discussions in the early stages of this work. A.S. and D.S. acknowledge useful discussions with the participants in the ICTS program "Periodically and quasi-periodically driven complex systems" (code: ICTS/pdcs2023/6). S.S. thanks MHRD, India for financial support through the PMRF. D.S. acknowledges funding from SERB, India through project JBR/2020/000043.
|
2304.08160 | When is a DAO Decentralized? | While previously a nascent theoretical construct, decentralized autonomous
organizations have grown rapidly in recent years. DAOs typically emerge around
the management of decentralized financial applications and thus benefit from
the rapid growth of innovation in this sector. In response, global regulators
increasingly voice the intent to regulate these activities. This may impose an
excessive compliance burden on DAOs, unless they are deemed sufficiently
decentralized to be regulated. Yet, decentralization is an abstract concept
with scarce legal precedence. We investigate dimensions of decentralization
through thematic analysis, combining extant literature with a series of expert
interviews. We propose a definition of 'sufficient decentralization' and
present a general framework for the assessment of decentralization. We derive
five dimensions for the assessment of decentralization in DAOs: Token-weighted
voting, Infrastructure, Governance, Escalation and Reputation. We present a
discretionary sample application of the framework and five propositions on the
future regulation and supervision of DAOs. We contribute new practical insights
on the topic of compliance and decentralized organizations to the growing
discourse on the application of blockchain technology in information systems
and management disciplines | Henrik Axelsen, Johannes Rude Jensen, Omri Ross | 2023-04-17T11:12:54Z | http://arxiv.org/abs/2304.08160v1 | # When is a DAO Decentralized?
###### Abstract
While previously a nascent theoretical construct, decentralized autonomous organizations (DAO) have grown rapidly in recent years. DAOs typically emerge around the management of decentralized financial applications (DeFi) and thus benefit from the rapid growth of innovation in this sector. In response, global regulators increasingly voice the intent to regulate these activities. This may impose an excessive compliance burden on DAOs, unless they are deemed sufficiently decentralized to be regulated. Yet, decentralization is an abstract concept with scarce legal precedence. We investigate dimensions of decentralization through thematic analysis, combining extant literature with a series of expert interviews. We propose a definition of "sufficient decentralization" and present a general framework for the assessment of decentralization. We derive five dimensions for the assessment of decentralization in DAOs: Token-weighted voting, Infrastructure, Governance, Escalation and Reputation (TIGER). We present a discretionary sample application of the framework and five propositions on the future regulation and supervision of DAOs. We contribute new practical insights on the topic of compliance and decentralized organizations to the growing discourse on the application of blockchain technology in information systems (IS) and management disciplines.
DAO, Sufficient Decentralization, Regulation, DLT, Blockchain, Compliance.
## 1 Introduction
In financial markets, regulatory objectives traditionally focus on (1) proper functioning and integrity of markets, (2) financial stability, (3) protecting the collective interests of consumers and investor protection, while also (4) aiming to reduce criminal activity and (5) preserving monetary sovereignty.
The crypto economy has experienced rapid growth in recent years, amounting to USD 3 Trillion in late 2021 [1]. Due to its open-source nature, the sector is subject to high competition and enables
decentralized finance (DeFi). DeFi replicates traditional financial services; hence the industry is becoming increasingly important to regulators [2], [3].
The crypto economy operates on permissionless blockchain technology. Regulators see this technology as imperative to innovation, growth, and global competitiveness. While crypto remains primarily unregulated, regulators across the globe are proposing and implementing crypto regulation to meet the challenge of ensuring consumer protection without stifling innovation and growth [4], [5].
In recent years, scholars from a wide variety of disciplines have found a shared interest in examining the implications of the technical properties of blockchain technology in their fields. Concepts such as the self-enforcement and formalization of rules, automatization, decentralization of authority, transparent execution of business processes, and codification of trust appear to be conducive to wide-ranging theoretical and industrial innovation.
While there are multiple working definitions of the concept of decentralized autonomous organization (DAO) in industry, most take the form of fluid organizations or loosely organized communities, self-directed and governed through smart contracts without the presence of central authority or a managerial hierarchy [6], [7].
DAOs tend to operate through bottom-up interaction and coordination among a set of independent and distributed rational agents. This has increased interest in how DAOs can mitigate principal-agent problems and reduce misconduct through shifting power dynamics [8]. Some observers compare DAOs to nation-states rather than traditional organizations [9]. In this analogy, the formal (on-chain) smart contracts are comparable to a "computational constitution." At the same time, cultures are nurtured through communication emerging around the design, development, and maintenance of the products governed by the DAO.
While Ethereum remains the dominating network, DAOs are now proliferating across blockchains, facilitated by innovation in the underlying infrastructure. There are currently some 5 000 individual DAOs, counting more than 1.7m token holders, and some 700 000 active voting members [10].
In traditional finance, implementing regulatory objectives imposes a high compliance burden on industry participants [14]. For European actors, the total cost of compliance ranges between 2 and 25% of total operating expenses, depending on the size and complexity of the institution [11], [12]. Being subjected to traditional financial institutions' comparatively strict compliance requirements may prove challenging, if not impossible, for DAOs as they are designed today. Regulatory compliance imposes capital and liquidity requirements, strong centralized controls and separation of functions, management hierarchies, and complicated reporting.
Hence, if existing regulation is applied without scrutiny, the novel and poorly defined concept of a DAO may give rise to both conventional and emerging regulatory risks. A key driver among these risks is the prevailing ideological assumption that for regulation to have an effect, a subject in the form of a legal or physical person is required to be held accountable for obligations arising from DAO activities, including those related to regulated financial activities.
Recently, global regulators indicated that the issuance of crypto assets, which may otherwise be subject to compliance requirements, may be exempt if distributed by an entity predominantly or exclusively operating as a "decentralized entity" [5], [13].
Yet, none of the proposals published to date offer a working definition of what might constitute "sufficient decentralization."
It follows that designing a decentralized crypto-based business model based on "smart contracts" is complicated: In addition to the usual challenges in finding product-market fit, product leadership, sales, recruitment, development, and scaling, founders must seek to operate their projected business in a decentralized manner or risk negative regulatory implications [14].
While founders may opt for the "Nakamoto model" [15] and operate in full anonymity, secondary service providers required to fund and execute a project are also subject to regulation. Consequently, fully anonymous (anon) stakeholders may find themselves operating in a vacuum, with limited access to ancillary services.
This article asks the following research question: _"When is a DAO (sufficiently) decentralized?"_ We present an artifact designed to assess the level of decentralization in any given DAO across several dimensions. We seek to contribute new practical and actionable insights on the topic of decentralized organizations to the growing distributed ledger technology (DLT) discourse in the information systems and management disciplines. Further, we contribute to the growing regulatory discourse in crypto assets and decentralized finance by providing a pragmatic assessment tool for regulatory compliance assessment.
## 2 Background
### Blockchain Technology and "Decentralized Autonomous Organizations"
Blockchain is a subset of DLT where transactions are recorded through immutable cryptographic signatures. A blockchain's primary function is maintaining an append-only ledger in a peer-to-peer network [16], using a consensus mechanism to validate transactions. Permissionless blockchains are decentralized computer networks that maintain a single global version of a shared database and a shared account ledger that is visible to all stakeholders [17]. Permissionless blockchains are open, so anyone can join, leave, read, and write as they please. No central party authorizes access, and its cryptographic primitives ensure collusion resistance [18]. Bitcoin [15] and Ethereum [19] are important instances of permissionless blockchains.
DeFi apps are financial solutions built with "smart contracts" operating through permissionless blockchain technology.
Smart contracts are scripts that automatically carry out specific business logic. Financial services or products created as smart contracts work autonomously without the need for monitoring or intervention from the software developers who originally designed the application due to the deterministic characteristics of the underlying blockchain.
This means that, as long as the blockchain is active, a smart contract will execute business logic unconditionally and irreversibly [20]. Typically, a smart contract will carry out a set of instructions that allow participants to lend or swap an underlying base asset or other financial assets that have been "tokenized" [21]. DAOs utilize these properties to create rules-based organizations in which decisions are instituted in code. A DAO will typically consist of multiple interacting smart contracts responsible for different parts of the DAO, including treasury management, the tallying of votes, and the token itself. All these smart contracts are deployed on the blockchain and maintained as stateful applications. Both users and smart contracts are represented by addresses, and transactions contain instructions on how to change the state of the database. Transactions emitted to the network are then sequenced in blocks and circulated within the network, at which point a global state-change is enacted.
To illustrate the above, in Figure 1 we present a layered taxonomy in which the _protocol layer_ represents the consensus model determining the logic by which blocks are generated and distributed; the _application layer_ represents the virtual machine in which smart contracts are deployed, and the _interface_ and _user_ layers represent the web-based interface through which users can create and sign transactions.
When a user participates in DAO voting, this process is carried out through one or more transactions in which the user (1) maintains a balance of governance tokens on an address to which they control the private keys and (2) connects their wallet to sign a message or a transaction, enabling them to signal their approval or dismissal of a governance proposal.
While there are multiple ways to implement this logic, the leading solutions rely either on the collection of off-chain signatures through a voting interface (User A) or the direct collection of votes and implementation of pre-deployed code changes by the DAO contract (User B).
In response to voter apathy, DAOs may implement the option for vote-delegation. This is typically carried out directly in the token contract and implemented as a feature in which a token
holder can assign the voting power associated with their balance to a third-party address without losing custody of the tokens.
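A minimal sketch of the token-weighted voting and delegation logic just described, written in plain Python rather than as an on-chain contract; the addresses, balances, quorum, and simple-majority rule are illustrative assumptions, not the implementation of any particular DAO:
```python
from collections import defaultdict

# Illustrative governance-token balances (address -> balance)
balances = {"0xA": 400_000, "0xB": 250_000, "0xC": 150_000, "0xD": 200_000}

# Optional vote delegation: delegator -> delegate (custody of tokens is retained)
delegation = {"0xC": "0xA"}

def voting_power(balances, delegation):
    """Aggregate each address's own balance plus any balances delegated to it."""
    power = defaultdict(int)
    for addr, bal in balances.items():
        power[delegation.get(addr, addr)] += bal
    return dict(power)

def tally(votes, power, quorum):
    """Binary token-weighted vote: passes if quorum is met and 'for' outweighs 'against'."""
    cast = {a: v for a, v in votes.items() if a in power}
    for_w = sum(power[a] for a, v in cast.items() if v == "for")
    against_w = sum(power[a] for a, v in cast.items() if v == "against")
    return (for_w + against_w) >= quorum and for_w > against_w

power = voting_power(balances, delegation)
print(power)                                                            # {'0xA': 550000, '0xB': 250000, '0xD': 200000}
print(tally({"0xA": "for", "0xB": "against"}, power, quorum=400_000))   # True
```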
### The Problem of Defining Decentralization within a Regulatory Context
DAOs are mostly designed and instantiated by a small group of individuals who distribute power and control governance, with a promise to decentralize the governance process at some defined later stage [22].
Without legal recognition, most jurisdictions today may simply treat unregistered DAOs as unincorporated general partnerships, resulting in community members having personal, joint, and several liability for debts or legal actions arising from operating the DAO.
Increasingly, therefore, DAOs establish themselves with "legal wrappers" to protect DAO participants from unlimited liability, optimize tax treatment or engage in contractual "off-chain" transactions, even if not focused on regulatory compliance expectations and "sufficient decentralization" [23].
Figure 1: Blockchain, application layer, and users
Because the common instantiation method is centralized from a design perspective, such a "wrapper" constitutes incorporation. It relates only to the autonomy and legal capacity of the organization, which technically does not prevent the concept of decentralization. Yet, DAOs that operate using a governance token, issued with a "_reasonable expectation of profits to be derived from the entrepreneurial efforts of others,"_ are likely to be considered to undertake regulated financial activity [13].
Some scholars propose that a DAO, like autonomy classification for land and maritime environments [24], be considered autonomous to the extent that it can legally accept liability [22]. In practice, the level of autonomy and anonymity can vary, but a DAO is normally self-directed through voting on- and off-chain; it can be financial or non-financial in purpose, but the traditional legal system seems secondary to its existence and purpose [25].
In 2018, a US Securities and Exchange Commission (SEC) representative suggested that contractual and technical ways exist to structure digital assets, so they function more like consumer items or community enablers and less as regulated securities. At the same time, it was suggested that a security could become "sufficiently decentralized" over time so that it no longer is a security token under the so-called Howey test [13]. Since then, likely accelerated by the increasing success of DeFi, regulators across the globe have increasingly looked to regulate DeFi and DAOs, and uncertainty has prevailed.
Efforts to regulate DAOs as limited liability companies have emerged [26], [27]. More recently, progressive senators in the US are working on regional regulation of DAOs, yet this is still an early draft, subject to extensive political negotiation [4].
As the first major region attempting to regulate crypto assets at the supranational level, the EU bloc emerged in 2020 with a digital finance package. The EU draft regulation included DAOs in the negotiation phase [5], with legal identity and limited liability for the community members. However, this provision was omitted in the final version of the regulation, called the Markets in Crypto-Assets (MiCA) regulation, approved on June 30, 2022.
Much remains to be clarified about how DAOs will eventually become regulated, likely through a global policy setter, given the nature of DLT and the world-wide-web. At the time of writing, the final MiCA text is not published. Still, based on the EU Council's negotiation mandate, the regulation appears to treat decentralized activity in a manner similar to the US: "_This regulation applies to natural and legal persons and the activities and services performed, provided or controlled in any manner, directly or indirectly, by them, including when part of such activity or services is performed in a decentralized way...Where crypto assets have no offeror and are not traded in a trading platform which is considered to be operated by a service provider, the provisions of (this regulation, ed.) do not apply_" [28] (recital 12a).
This EU regulation appears to align with the global trend that certain crypto assets may become exempt from specific compliance requirements, even if constituting an activity that might otherwise be a regulated financial activity. But the question of the extent of decentralization required remains unsolved. As no definition of "sufficiently decentralized" has been proposed, nor is there, as in the US, any proposal to allow a grace period for DAOs to mature to any given level of "sufficient decentralization" [29], such a definition will likely have to evolve through regulatory technical standards set by the EU financial regulators. Combined, the typology suggests overlapping assumptions open for problematization [30].
This is further exacerbated by DAOs frequently operating across multiple jurisdictions with different views on decentralization, resulting in the matter becoming a topic of strategic importance as the uncertainty blocks investments, which impacts the competing growth and innovation objectives mentioned earlier.
### Arriving at a Working Definition for Decentralization
The notion of "decentralization" has its origins in political science and, in the present time, generally refers to the dispersion or distribution of functions and powers. Without an
understanding of the powers of different stakeholders, where and how they exercise their powers, and to whom and how they are accountable, it is difficult to understand whether decentralization is taking place [31].
The concept of decentralization has been applied mainly within the government of nation-states and political science [32], administration [33], fiscal area [34], and environment [35], but also across a diverse range of disciplines, such as complex systems engineering [36], space safety engineering [37], cybernetics [38], management science [39], economics around principal agents theory [40], finance [15], law and technology [41], crypto-economic systems [9] and more.
Within the nascent literature on crypto, the most applied definition of decentralization was proposed by Ethereum co-founder Vitalik Buterin with the introduction of the term "DAO" in 2013 [25].
Here, decentralization is presented as a response to the latent issues of centralized systems, to which decentralized systems can introduce fault tolerance and deter attacks or collusion. In a later publication [42], Buterin suggested that decentralization be viewed across several dimensions: (1) An architectural dimension as in how many computers the system is made up of; (2) a political dimension as in how many controls those computers; and (3) a logical dimension as in how the interface and data structures add up.
Some scholars and practitioners suggest that decentralization is a misleading term, as it has a slightly negative connotation, and no large-scale social, economic or political institution can be fully decentralized and automated without human intervention. Decentralization is then considered more specific to an activity, not to an organization design dimension; instead, we might consider using collaborative models [43].
It follows that measuring decentralization is complicated; "_A true assessment of the degree of decentralization in (a country) can be made only if a comprehensive approach is adopted, and rather than trying to simplify the syndrome of characteristics into the single dimension of autonomy, interrelationships of various dimensions of decentralization are taken into account_" [44], [45].
We propose that "sufficient decentralization" be defined as a verifiable state where (1) the design of the DAO is collusion resistant and based on a long-term equilibrium, and (2) access to its governance processes is unrestricted and transparent.
## 3 Methodology
This article follows an inductive approach to framework development [46]. We chose thematic analysis as a method to reflect and unravel the surface of the "reality" of DAO decentralization [47] through interviews and a literature review. We analyzed the data in six phases: (1) familiarization with the data, (2) generation of initial codes, (3) searching for themes, (4) reviewing themes, (5) defining and naming themes, and (6) producing the report.
We chose an explorative, qualitative research approach to identify the relevant dimensions of decentralization in a DAO. We conducted semi-structured, open-ended expert interviews to identify possible themes to supplement literature review findings.
Potential interviewees were approached through contacts from ongoing token engineering projects. We conducted eight interviews with experienced DAO experts and stakeholders (Table 1), each lasting 45-60 minutes.
At the beginning of each interview, we ensured proper consent and confidentiality. We used an interview guide [48] with 10 open questions probing the interviewees' perspectives on aspects of the structural elements of a DAO (decentralized, autonomous, organization) and additional dimensions for assessing decentralization specifically. Interviews were recorded and transcribed, amounting to 82 pages of transcripts and notes.
Although mainly conducted through one-to-one interviews in search of the "decentralization surface" of DAOs and with unclear requirements from the outset, our search process matches elements of a design science research (DSR) method [54], where the artifact design process informed an iterative process with stakeholders, leading to the final result. Our approach is summarized in Figure 2:
After (1) reviewing transcripts and notes from interviews, we (2) extracted dimensions of decentralization and aligned them to the literature on DAOs and DeFi manually. The unit of analysis was the practices conducted by DAO communities, the subsystems used to perform these, and the technical infrastructure supporting them. All three authors were involved in the data analysis. As two authors were involved in the data collection, the third author maintained distance and acted as a devil's advocate to ensure the analysis remained objective and independent of our preconceptions and the interviewees' views [49].
As each expert had their own practical experience from working with DAOs, we first conducted a within-case analysis to gain familiarity with the data and generate a preliminary theory; then, we examined the data for cross-case patterns [50]. The coding procedure comprised several rounds of analysis and refinements of the codes. The topic of decentralization is multi-dimensional and complicated, requiring a choice of the primary angle of analysis: by business subsystem, policy, or technical architectural dimension. During this procedure, we gradually moved from an inductive to an abductive approach [49], using labels to categorize the interviewee-specific language and grouping similar ones.
Our data sampling strategy remained open to new theoretical insights on what constitutes decentralization [51]. In (3) the search for themes, we clustered an initial 52 first-order concepts across 7 DAO subsystems, 4 policy dimensions, and 4 technical architectural layers, further (4)(5) synthesizing these into 15 second-order themes across 5 aggregate dimensions. As we analyzed
| DAO Expert | Expert role | DAO experience |
| --- | --- | --- |
| E1 | Complex Systems Architect and Designer | 6 years |
| E2 | Cryptoeconomic, token engineer, ecosystem designer | 4 years |
| E3 | Engineer, Data Scientist, DAO advisor | 5 years |
| E4 | Founder, DAO ecosystem tooling | 4 years |
| E5 | Serial entrepreneur, Co-founder misc DAOs | 8 years |
| E6 | Lawyer, Specialist in DLT/Blockchain projects | 5 years |
| E7 | Lawyer, Crypto Asset Specialist / DeFi legal expert | 5 years |
| E8 | Lawyer, DeFi specialist, National regulatory body | 5 years |

Table 1: Overview of Interviewees
Figure 2: Our search process outline
the data and generated theoretical concepts, we cross-referenced our findings with the extant literature in an iterative process to align our findings.
Our literature review followed a "light approach" [48], where we developed the research protocol, defined - and refined - the research question, and added criteria for DAO research while focusing mainly on decentralization and acknowledging related characteristics to autonomy and organization. The DAO subsystems were identified using a DAO reference model [52]. Still, as the framework should satisfy regulatory and supervisory expectations of a risk-based approach, we also investigated a technical reference model proposed by regulators [53].
Once we had derived the first-order concepts, second-order themes, and aggregate dimensions, we built the data structure as shown in Figures 3a and 3b below.
Figure 3a: Coding of data to themes (1 of 2)
Figure 3b: Coding of data to themes (2 of 2)
The artifact was evaluated ex-ante by a representative from a regulator to ensure a level of alignment to regulatory expectations of the framework artifact.
## 4 Introducing "TIGER" Assessment Framework
The proposed artifact comprises a generalized DAO score-card evaluation framework. The framework facilitates a directional analysis of critical DAO components from a systems perspective, where compromising one subsystem may compromise the entire system [9], [43].
In the output component, we leverage traditional supervisory methods [55] and aim to score and consolidate each characteristic to generate an assessment score for each critical dimension that, if compromised, may affect the level of decentralization of the entire DAO. The central assessment approach is to determine to what extent, on each dimension and its characteristics, we observe evidence of independent groups of agents operating under mandates without any centralized element of control.
The assessment is designed for point-in-time. Thus, no "safe harbor" assessment component is included, which could be relevant depending on the specifics of the DAO in question. We have, however, aimed to integrate strategic intent to allow a "grace period" to impact the scores. The actual application of scores requires some calibration and further consultations across DAOs and jurisdictions to evolve into a regulatory technical standard.
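To make the consolidation step concrete, a minimal score-card sketch is shown below; the passing threshold and the rule that a single failing dimension compromises the whole assessment are assumptions made for illustration, not calibrated regulatory standards:
```python
from statistics import mean

PASSING = 3  # assumed minimum per-dimension score (scale 1-5) for a dimension not to fail

def assess(scores: dict[str, int]) -> dict:
    """Consolidate per-dimension scores; one failing dimension compromises the whole system."""
    failed = [d for d, s in scores.items() if s < PASSING]
    return {
        "overall_score": round(mean(scores.values()), 1),
        "failed_dimensions": failed,
        "sufficiently_decentralized": not failed,
    }

# Illustrative input mirroring the per-dimension scores assigned in Section 5
tiger = {"Token-weighted voting": 3, "Infrastructure": 5,
         "Governance": 3, "Escalation": 5, "Reputation": 3}
print(assess(tiger))  # overall score 3.8, no failed dimensions
```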
### A Taxonomy of Agents in a DAO
Permissionless blockchains are essentially a vast network of databases maintaining a shared space. Transactions are batched and circulated with the network in the form of blocks which, once accepted by the network, amend the database with the most recent balance assigned to the known addresses. Maintaining a distributed database of transactions in this fashion introduces a high level of integrity. Still, it necessitates the encryption of user identities, as anyone with access to the database would otherwise be able to view the accounts balances of the individuals using the network.
Permissionless blockchains solve this issue with _public-key infrastructure_ (PKI), in which a private/public key pair is used to generate any number of addresses. Traditional PKI is pseudonymous, as the user's identity is encrypted, but still predisposed to simple heuristic address clustering of transaction patterns [52]. As such, blockchain technology presents a fascinating paradox: Pseudonymous identities are essential in protecting user privacy but, at the same time, offer a design challenge for DAOs. Yet, the replicated nature of the database means that pseudonymous transaction data is available perpetually, enabling stakeholders to access the full transaction history for an address. Different agent definitions are shown in Table 2.
| Agent type | Description | Sample of Evidence |
| --- | --- | --- |
| Verifiably Independent Agent (VIA) | A publicly identifiable token holder (maybe with a sizeable reputational interest in maintaining the integrity of their address) with a long and repeated history of participation in governance and communities. | Proof of (real or pseudonymous) identification measures across multiple governance discussions and social media sites, a discernible asset trail, and/or identification standard tokens (Ethereum naming service) |
| Presumably Independent Agent (PIA) | A token holder with a presumed vested interest in a sound governance process and | An address with a transaction history indicating repeated and non-automated use on a near daily basis, coupled with interactions in other DAOs and a discernible transaction pattern. |
| Unidentifiable Agent (UIA) | All addresses not operated by a PIA or a VIA. | Addresses with indications of automation and repetitive transaction patterns or clusters. |

Table 2: Agent definitions
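As an illustration only, the taxonomy of Table 2 can be operationalized as a simple rule-based classifier; the feature set and thresholds below are hypothetical assumptions, not part of the framework itself:
```python
from dataclasses import dataclass

@dataclass
class AddressProfile:
    """Observable (pseudonymous) features of an address; the fields are illustrative."""
    verified_identity: bool      # e.g. ENS name or cross-referenced governance/social identity
    governance_history: int      # number of distinct governance interactions observed
    daily_activity: bool         # repeated, non-automated, near-daily use
    automation_suspected: bool   # repetitive or clustered transaction patterns

def classify(profile: AddressProfile) -> str:
    """Map an address profile onto the VIA / PIA / UIA taxonomy of Table 2."""
    if profile.verified_identity and profile.governance_history >= 3:
        return "VIA"   # verifiably independent agent
    if profile.daily_activity and not profile.automation_suspected:
        return "PIA"   # presumably independent agent
    return "UIA"       # unidentifiable agent

print(classify(AddressProfile(True, 12, True, False)))   # VIA
print(classify(AddressProfile(False, 1, True, False)))   # PIA
print(classify(AddressProfile(False, 0, False, True)))   # UIA
```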
### The TIGER Assessment Questionnaire
After several iterations and pattern analysis, the conceptual artifact was optimized and consolidated to contain 15 characteristics with suggested questions and quantifiers for assessment as shown in Table 3. We summarize the requirements [56] in five general categories of DAO subsystems (items with grey background in column 1 of Table 3) based on expert input and literature [52]: Token Weighted Voting; Infrastructure; Governance, Escalation, and Reputation ("TIGER").
\begin{table}
\begin{tabular}{|p{113.8pt}|p{113.8pt}|p{113.8pt}|} \hline \multicolumn{2}{|c|}{TIGER Analysis} & \multicolumn{2}{|c|}{Veritable} \\ \hline Category & Question & Quantifier \\ \hline Token Weighted & & & \\ \hline Kohn-Neighbor & & & \\ \hline Token distribution at launch & Did the team conduct a “fair” token launch designed to balance incentives for further decentralization with requirements for long-term funding and investor returns? & Percentage of units allocated to addresses associated with insiders, including core-team members, advisors, investors, early collaborators, and service providers. \\ \hline Promoting a non-collusive oligopoly & Does the DAO algorithmically incentivize multilateral participation by rewarding non-colluding groups of agents for strategic participation? & Percentage of units allocated to clearly differentiated stakeholder groups indicated by a misalignment in assumed preferences \\ \hline Concentration of voting power & How distributed are governance tokens amongst active/passive stakeholders? & Number of VIAs required to mount \(>\)51\% of voting power in majority voting schemes? \\ \hline Infestructure & & & \\ \hline Token locking, freezing, and thawing. & Does the token contract code include the ability for any set of stakeholders to lock, move, freeze, andthaw token balances on some or all addresses? & Number of VIAs required to freeze token balances in all or some addresses. \\ \hline Code upgrades & Is there evidence of the possibility of enforcing unilateral decision-making in the code that may compromise decentralization? While most code upgrades will preserve address, state, and balance, any ability to change smart contract code will impose significant security risks to the DAO and its stakeholders. \\ \hline Access & To what extent is access to decision-making through voting or other means accessible to external parties or contributors in a meaningful and unrestricted way? & Mixed assessment relating to quorum and timing: (1) How many verifiably independent agents does it take to produce a positive voting outcome for a “general” Improvement Proposal (Nakamoto co-efficient for governance), and (2) Does the voting process allow proper time and access for token holders to vote on any topic? \\ \hline \end{tabular}
\end{table}
Table 3: TIGER Assessment Questionnaire
#### 4.2.1 Token-weighted Voting and Incentives
The assessment of this dimension includes:
* Analysis of whether the tokens are fairly distributed among the community, founders, and collaborators while also locking token liquidity for the future funding of the DAO's activities. Fair launch considerations include considerations over the pricing of the token across the issuance period(s). Essentially the assessment is a determination of whether the DAO's
\begin{table}
\begin{tabular}{|p{113.8pt}|p{113.8pt}|p{113.8pt}|} \hline \multicolumn{2}{|c|}{Target Analysis} & \multicolumn{1}{c|}{Vertebrates} \\ \hline \hline Category & Question & Quantifier \\ \hline (Convenumes & & \\ \hline Voting delegation & Is any voting delegation fair and unconditional so there is no risk of manipulating reported delegation? & How many VIAs with clearly distinctive preference profiles are presently available for delegation \\ \hline Voting participation & Is there evidence of broad voter activity? & Percentage of token float with active participation in governance \\ \hline Bootstrapping & Is there any centralized activity that goes beyond bootstrapping the journey toward full decentralization of the DAO? & _Qualitative assessment_: Is there evidence of centralized control measures that are not required for the long-term health of a decentralized DAO? \\ \hline Resolution & & \\ \hline \hline Crisis management & Does the constitution or policies include crisis management and dispute resolution mechanisms? & Percentage of tokens required to enact crisis management decision-making \\ \hline Inflation & What is the distribution between token inflation accruing to user A. External (oligopolistic) incentives for non-colluding VIAs (LPs, open-source developers, etc.) and user B. Insider VIAs such as investors, founders, early stakeholders, etc.? & The percentage split user A/ user B. \\ \hline Voting access & Are there any restrictions on availability and access to the DAO’s decision-making process? & Mixed assessment relating to quorum and timing: (1) How many VIAs do it take to produce a positive voting outcome for a “general” Improvement Proposal, and (2) if the voting process allows proper time and access for token holders to vote on any topic. \\ \hline \hline Navigation & & \\ \hline \hline Soft power & Is there evidence of co-optation or informal manipulation? & _Qualitative assessment_: Past evidence or forward-looking assessment of how many known high-profile agents can theoretically swing a vote \\ \hline Responsibility alignment & Does the DAO code or applicable norms introduce the notion of accountability for decision-makers in a fashion that appears symmetrical to the power and responsibility vested in decision-makers? & _Qualitative assessment_: No evidence of asymmetry between responsibility and accountability, for instance, unjust overruling or veto. \\ \hline Accountability & Are measures for conflict and reputation management implemented? & _Qualitative assessment_: Evidence of dispute resolution measures to mitigate centralized attack vectors around reputation \\ \hline \end{tabular}
\end{table}
Table 3: Continued
monetary policy is fair and whether anyone, including the core team, is benefiting unfairly compared to the DAO community long term.
* When assessing whether the DAO incentivizes multilateral participation by allocating tokens to clearly differentiated stakeholder groups, it is important to note that some collaboration and common focus are to be expected. In addition to quantifying units allocated to independent groups, the assessor could also look for signals: Is there any tangible evidence of cartels? Is it reasonable to assume that token holders are colluding unfairly? Are big investors talking to the founders and asking them what to vote for, or the other way around?
* The concentration of voting power would include a Nakamoto-coefficient analysis of on-chain and off-chain voting history. The Nakamoto coefficient is a simple, quantitative measure of a system's decentralization [57], [58]. The coefficient is based on the Gini coefficient and calculated based on the number of critical subsystems in a system and how many entities one would need to compromise to control each subsystem.
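The Nakamoto-coefficient computation referenced in the last bullet can be sketched as follows; the holdings and the 51% threshold are illustrative assumptions:
```python
def nakamoto_coefficient(holdings: dict[str, float], threshold: float = 0.51) -> int:
    """Smallest number of entities whose combined share exceeds `threshold` of the total.

    `holdings` maps an entity (e.g. a cluster of addresses controlled by one agent)
    to its token balance or voting weight.
    """
    total = sum(holdings.values())
    running, count = 0.0, 0
    for balance in sorted(holdings.values(), reverse=True):
        running += balance
        count += 1
        if running > threshold * total:
            return count
    return count

# Hypothetical distribution: one whale, a few large holders, and a long retail tail
holdings = {"whale": 30.0, "fund_a": 12.0, "fund_b": 10.0, "dao_treasury": 8.0,
            **{f"retail_{i}": 1.0 for i in range(40)}}
print(nakamoto_coefficient(holdings))  # 3 entities suffice to control >51% here
```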
#### 4.2.2 Infrastructure
The assessment includes:
* Analysis of how the DAO limits large token holders (so-called whales) from having outsized influence. Some DAOs introduce the notion of time-locked voting. This allows token holders to increase the weight of their vote by locking their shares for a certain amount of time after voting has ended, trading the opportunity cost for increased voting power. Freeze and thaw measures may also be applied to the benefit of late-joiners and/or to reduce whale influence.
* Analysis of centralization of control that is not automated in a sufficiently decentralized manner, which includes an assessment of the degree of autonomy in software vs. human centrality but also a view of any single point(s) of failure or single point(s) of control concerns.
* Access is assessed both to quorum and timing, assessing how many VIAs it takes to produce a positive voting outcome for a "general" Improvement Proposal, which we could label as the Nakamoto co-efficient for governance, and second, whether the voting process allows proper time and access for token holders to vote on any topic or if (unfair) restrictions apply.
#### 4.2.3 Governance
Assessment of governance processes is critical to determine whether there are possible centralized attack vectors in a DAO:
* Voting delegation, sometimes referred to as liquid democracy, shares the core principles of political democracy. In this case, a DAO assigns specialists to participate in an electorate with the power to make decisions on behalf of DAO members. This increases centralization; on the other hand, it may improve the quality of decision-making, as in the traditional world's representative democracies. In some cases, voting delegation may constitute manipulative and/or regulatory arbitrage through conditional delegation, so the assessment should review delegation mandates to ensure the delegated mandate is not an attempt to arbitrage. The analysis can range from a simple count of the number of individual components in the DAO network and the relative size of these to more advanced network analysis and statistical tests, where a DAO uses more advanced voting delegation.
* From a narrow perspective, the assessment of voting participation analyses voter turnout in collective decision-making, which is a dynamic metric that may affect the security of any plutocratic governance system. Simple token-weighted voting may risk the undue influence of "whales" (large token holders). Balanced techniques adopted by DAOs include sociocracy, where decisions are made by consent, not by consensus. Quadratic voting (sketched after this list) and other alternative voting mechanisms, such as holographic consensus or multi-signature wallets (multi-Sig), are also gaining traction across DAOs. The assessment may also include a fairness assessment of the voting process, where DAOs sometimes use timing mechanisms to
reduce the risk of minority abuse. This process tackles the risk of majority voters gaining an advantage over minority voters; the downside is that the voting process becomes exceptionally long. Another method to ensure a fair voting process is "conviction voting," which is based on the community's aggregated preference and uses time as a utility to strengthen "conviction" to one's vote. A third example includes express voting that may encapsulate intensity or broader community support and thereby reduce the costs of democratic coordination.
* Sometimes, DAOs establish a foundation to own rights that can not easily be decentralized. Although this implies a centrally controlled activity, it should be viewed in context and be considered acceptable if the purpose of the centralized effort is only to bootstrap the journey towards decentralization. Outsourcing also includes software deployment strategy and hosting policy, where, according to statista.com [59], more than 64% of the world's cloud market is currently controlled by three dominant vendors (AWS, Google, and MSFT), who therefore likely host most of the blockchain/Web3 infrastructure that exists, including full nodes, validator nodes, and middleware. This is potentially a significant attack vector for censorship and centralized control.
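The quadratic voting mentioned in the bullet above can be illustrated with a minimal sketch; the square-root rule is the standard quadratic-voting formula, while the ballots themselves are hypothetical:
```python
import math

def quadratic_tally(ballots: dict[str, dict[str, float]]) -> dict[str, float]:
    """Quadratic voting: a voter's effective votes for an option are the square root
    of the voice credits spent on it, damping the influence of large holders."""
    totals: dict[str, float] = {}
    for voter, allocation in ballots.items():
        for option, credits in allocation.items():
            totals[option] = totals.get(option, 0.0) + math.sqrt(credits)
    return totals

# Hypothetical ballot: one whale spends 10 000 credits, fifteen small voters spend 100 each
ballots = {"whale": {"proposal_A": 10_000.0}}
ballots.update({f"user_{i}": {"proposal_B": 100.0} for i in range(15)})
print(quadratic_tally(ballots))  # proposal_A: 100 votes, proposal_B: 150 votes
```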
#### 4.2.4 Escalation
Consideration of the following issues helps in assessing escalation:
* A DAO is only as decentralized as its crisis mode allows. Hence, the assessment should investigate how control measures can be centralized in any crisis. A crisis should be defined through stress testing of the DAO business system and financial and technical resilience. Crisis mitigation and contingency measures should preferably be specified in the DAO constitution or policies for events that can impact the long-term sustainability of the DAO. Some centralization is expected to deal effectively with crisis containment, where fluid democracy may not always be the most efficient. Still, the assessment should determine the extent to which such centralization is subject to democratic control.
* An inflationary token model adds new tokens to the market over time, often through a schedule or as mining rewards or for specific contributions. For the determination of decentralization, the critical assessment point is that any value associated with inflation or deflation benefits all token holders fairly, not for the benefit of non-collaborative agents for any strategic or other participation.
* Availability and access should be equal to all, so any restrictions in access to the DAO, including its decision-making process, may suggest a level of centralized control. The assessment would include a Nakamoto coefficient analysis for both on- and off-chain activities around voter activity and token holdings and a review of voting policies.
#### 4.2.5 Reputation
For assessment of reputation, the following considerations are suggested:
* Soft power through co-optation or informal manipulation is an everyday phenomenon in politics. In DAO communities that allow actors to engage pseudonymously or anonymously, it is critical to assess that these features are not used manipulatively. Again, the analysis may potentially involve sophisticated network and statistical analysis.
* DAOs cannot act outside their rules, but because their smart contracts may contain errors or unforeseen events may occur, rule change mechanisms are necessary for resilience purposes. On the other hand, fully decentralized DAOs must also acknowledge their delegated mandates, with accountability following delegated responsibility.
* Increasingly, DAOs implement dispute resolution mechanisms or use dispute resolution services from emerging online third-party decentralized dispute resolution service providers. Other measures, such as implementing tools like Sourcecred [60] to create trust in the
community, or slashing to penalize unwanted behavior or dishonest validation, are similar mechanisms of democratic control designed to incentivize network participation.
## 5 Evaluation
The artifact evaluation was conducted two-fold; First, we field-tested the general concept with a DeFi expert from an EU-based supervisory authority. Second, we applied the TIGER framework to a prominent DAO using publicly available sources.
The field-test evaluation emphasized a pragmatic approach favoring comprehensive coverage of topics of regulatory concern rather than the collection of quantitative data. The principle that a partially compromised dimension has a full impact on the overall assessment result was deemed justifiable but raised several questions, including (1) how to deal with the lack of a grace period in the current implementation of the recently released MiCA package and (2) how to create a level playing field for "institutional DeFi" (where traditional, currently regulated financial institutions offer decentralized financial products operated by DAOs).
In the remainder of this section, we present a sample evaluation of a DAO as a reference guide to how regulators or industry participants may approach the discretionary application of the TIGER framework.
We use the Compound protocol and its associated governance processes for the sample evaluation. It is important to note that the sample application provided here serves only as a reference guide due to the lack of access and transparency for internal data. While DAO governance primarily happens in public fora, a regulatory authority would arguably have access to a wealth of quantitative and qualitative data provided and collected by the counterparty and its partners.
While this level of access is not attainable in the academic context due to privacy regulations, the level of public governance data available is sufficient in providing a cursory reference application of the framework. Further, if a DAO is already decentralized before enforceable regulation is agreed upon, a regulator/supervisor will need to rely on the same publicly available information we access here. The Compound protocol offers an interesting entry point to the evaluation of the TIGER framework, as the protocol team was amongst the first to issue a governance token (COMP) and the adjacent infrastructure, which led to the present generation of DAO governance.
While stablecoin issuer MakerDAO had already issued their governance token (MKR) years prior, the Compound team was amongst the first to explicitly link the issuance of the token with the usage of the protocol in a bid to incentivize liquidity provisioning. This sparked a period of rapid escalation, commonly referred to by industry observers as "DeFi Summer," in the 3rd Quarter of 2020 as the major decentralized exchange Uniswap (UNI) immediately followed suit in a bid to defend market share against aggressive attempts at siphoning liquidity by the rapidly emerging competitor "SushiSwap" (SUSHI). The ensuing period saw waves of governance tokens enter the market, mimicking the previous ICO frenzy [61].
### Introducing the Compound DAO
Compound [62] is an on-chain market for peer-to-peer lending, enabling users to collateralize and borrow against a selection of 18 assets. At the time of writing, the protocol manages \(\sim\)63.7bn in collateral assets deposited by \(\sim\)300 000 depositors, of which \(\sim\)9000 users have taken out an aggregate of \(\sim\)6895m in outstanding debt against their deposits.
Protocol decision-making is governed by token-holders utilizing the token (COMP) within the governance contract. The Compound Governance process involves submitting pre-deployed code changes to risk management and asset modules above, which stakeholders can then inspect and vote for or against implementing in binary voting sessions. Proposals are generally used to
implement system parameter modifications, but proposals for adding new markets or entirely new features are occasionally implemented as well.
Further in this section, we present a cursory application of the TIGER framework, utilizing a score-card methodology in which we assign a score between 1-5 for each dimension. While there are clearly identifiable areas of improvement, we assess that the Compound DAO is _sufficiently_ decentralized when we factor in the protocol age. Over time, we expect a gradually increasing decentralization as the protocol matures and increasingly larger private and institutional stakeholders join the DAO.
The overall score of our assessment is 3.8 on a scale of 5, split on each aggregate dimension as appears in Figure 5, with no critical dimension failing. A detailed assessment follows below.
### COMP Token Weighted Voting Distribution
The COMP token has a max supply of 10m units, of which 7.15m is in circulation at the time of writing. The COMP supply has an inflation rate, currently set at 1 139 COMP daily, distributed across market participants (Table 4), alongside a 4-year vesting period for insider shareholders ending in June 2024.
As evident, the COMP tokens allocated to shareholders in Compound Labs, Inc. Founders and team members (present and future team members) comprise a narrow minority share of 49.95% of the total token supply, assuming that the recipients retain all tokens after vesting.
While the narrow minority does not technically produce a concentration of voting power in the hands of stakeholders with presumed shared interests, it should be noted that in the theoretical event of a highly contentious issue between insiders and (external) community members, challengers would need to mount 50.05% of the token float to push through a decision, which is deemed unlikely.

| Stakeholder Groups | COMP Allocation | Percentage of Total Supply |
| --- | --- | --- |
| Shareholders of Compound Labs, Inc. | 2 396 307 | 23.96% |
| Founders & team | 2 226 037 | 22.26% |
| Future team members | 372 707 | 3.73% |
| Users | 4 229 949 | 42.30% |
| Community Allocation | 775 000 | 7.75% |

Table 4: COMP allocation to stakeholder groups

Figure 4: Compound decentralization radar
Yet, the distribution of tokens amongst smart contracts and agent types [63] is such that, at present, only a few VIAs retain an adequate amount to mount a hostile proposal process. On this basis, we assign a passing score of 3 out of 5, informed by the relative concentration of votes.
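The minority-share arithmetic above can be reproduced directly from the Table 4 figures; grouping the first three categories as insiders mirrors the assumption of presumed shared interests made in the text:
```python
# COMP allocation figures from Table 4 (units of COMP)
allocation = {
    "shareholders": 2_396_307,
    "founders_team": 2_226_037,
    "future_team": 372_707,
    "users": 4_229_949,
    "community": 775_000,
}
total = sum(allocation.values())                      # 10m max supply
insiders = ("shareholders", "founders_team", "future_team")
insider_share = sum(allocation[k] for k in insiders) / total

print(f"insider share of total supply: {insider_share:.2%}")        # 49.95%
print(f"float needed to outvote insiders: {1 - insider_share:.2%}") # 50.05%
```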
### COMP Infrastructure Assessment
The Compound team has implemented a well-reasoned and simple user interface for the governance process, enabling non-technical users to participate in the governance process.
The Compound Governor and Timelock methods require the deployment of code with the proposal submission. From proposal submission through voting and the mandatory two-day delay following a successful vote, the governance process implements a full week period for any decision made by DAO stakeholders.
In contrast to the frequently used option of using the popular tool Snapshot [64] to collect votes through signatures, this methodology mitigates the need for a single or multi-signer solution which can be required to implement the results of the vote when using Snapshot. Instead, approved proposals are immediately implemented by the contract once they pass. While this methodology has previously imposed costs on voters due to the high execution fees on the Ethereum blockchain, the team has implemented the casting and delegation of votes by offline signatures [65], mitigating voter apathy and improving accessibility of governance participation. Delegation functionality is implemented in the COMP token contract and delegates the voting power for the tokens from one address to another. Users interested in delegating voting power to multiple delegates can split tokens over multiple accounts and delegate to multiple delegates. The COMP token smart contract does not allow freezing addresses, manipulating balances, or upgrading the contract code through upgradeable "proxy contracts."
On this basis, we assess that the Compound governance model and the associated smart contract infrastructure are sufficiently decentralized, yielding a 5/5 score.
### COMP Governance Dynamics
The Compound governance model utilizes delegation strategies, through which token holders can delegate voting power to active participants. To create a proposal, an address must hold in excess of 25 000 COMP (\(\ell\)1.5m) or lock 100 COMP (\(\ell\)6000) to create an "autonomous proposal," which can become ratified if delegated an excess of 25 000 COMP.
Governance proposals are time locked in review for three days, after which voting is initiated for an ensuing three-day period. Proposals gathering a majority of votes with a lower threshold of 400 000 COMP votes are queued for implementation for two days.
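A plain restatement of these thresholds as code may help the reader follow the turnout figures that follow; the sketch models only the decision rule quoted above, not the on-chain Governor contract itself:
```python
# Compound governance parameters as described in the text (illustrative restatement)
PROPOSAL_THRESHOLD = 25_000    # COMP an address must hold (or have delegated) to propose
QUORUM = 400_000               # minimum 'for' votes for a proposal to succeed
REVIEW_DAYS, VOTING_DAYS, TIMELOCK_DAYS = 3, 3, 2

def can_propose(delegated_votes: float) -> bool:
    return delegated_votes > PROPOSAL_THRESHOLD

def outcome(for_votes: float, against_votes: float) -> str:
    if for_votes <= against_votes or for_votes < QUORUM:
        return "defeated"
    return f"queued for execution after {TIMELOCK_DAYS} days"

print(can_propose(30_000))                  # True
print(outcome(612_000, 95_000))             # queued for execution after 2 days
print(f"minimum proposal lifecycle: {REVIEW_DAYS + VOTING_DAYS + TIMELOCK_DAYS} days")
```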
The governance of Compound is primarily in the custody of the delegate VIAs, retaining an aggregate of 92.6% of voting power with 2 377 404 COMP tokens in delegation. Of the top 60 delegates, accounting for 99.9% of the total voting weight, there is no additional delegation, so it is fair to assume the said VIAs also control these tokens.
The VIA delegates wield decisive authority over the Compound protocol: approximately 70% of the 36 proposals decided upon in 2022 (including failed and canceled votes) were decided by fewer than ten delegates holding a clear majority. So far, in 2022, on average, ~600 000 COMP was active in each proposal, again mainly controlled by VIAs.
Through the lifetime of the DAO, 113 proposals have been voted upon, averaging 2.3 per month. The average voter turnout has increased slightly over time to 66 participating addresses per proposal in 2022, up from 56 addresses per proposal in 2020, the first year of operation [66].
Based on this assessment, it appears evident that while Compound governance is managed by a relatively small subset of VIAs with, in most cases, presumed identical preferences, said
stakeholders would be unlikely to mount a hostile proposal against users, given the token distribution.
On this basis, we assign a passing score of 3 out of 5, informed by the relative concentration of votes.
### COMP Escalation and Crisis Management
The Compound governance system uses timelock to introduce sufficient time for careful review of the proposal code before implementation. The community implemented an automated "Proposal Threshold Alert" as an early indicator of potential governance attacks. The alert informs the community if a wallet has accrued sufficient COMP to meet governance thresholds. Further, the Compound Comptroller contract includes elements of a crisis management mechanism with a pause guardian. Compound Labs previously controlled this, but since 2021 transferred it to a community multi-Sig wallet created by community members, where a small group of 4-6 stakeholders, chosen by the community, can pause Mint, Borrow, Transfer, and Liquidate functions. In our understanding, this does not constitute a complete "emergency shutdown" mechanism, so we assess that the multi-Sig does not provide full crisis management capability.
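As an aside, the m-of-n logic of such a pause-guardian multi-Sig can be sketched as follows; the guardian set and threshold are hypothetical, since the exact configuration is not assessed here:
```python
def multisig_approved(signed: set[str], authorized: set[str], threshold: int) -> bool:
    """m-of-n multi-Sig check: enough distinct authorized signers must approve an action."""
    return len(signed & authorized) >= threshold

guardians = {"g1", "g2", "g3", "g4", "g5"}   # hypothetical five-member pause guardian
print(multisig_approved({"g1", "g3", "g5"}, guardians, threshold=3))   # True
print(multisig_approved({"g1", "attacker"}, guardians, threshold=3))   # False
```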
The lack of any special escalatory privileges awarded to early stakeholders became evident early in the life of the protocol when a bug in a proposal placed 280 000 COMP tokens at risk of emission to liquidity providers. While the Compound team removed the ability for users to claim these tokens through the interface, this did not stop users from simply interacting directly with the smart contracts.
In what appears to be a somewhat misguided attempt to return the tokens to the protocol, the founder of Compound Labs, Robert Leshner, threatened to collect information on non-cooperative stakeholders and report it to the US tax authorities [67]. While these attempts were ridiculed by community members, the case resembles the user B situation in Figure 1 above. It provides an example of how no stakeholder, regardless of their seniority in the community, can influence decisions governed through smart contracts.
Based on the lack of discriminatory privileges awarded to key stakeholders, outside of the ability to amend the contract web interface, we assess that the Compound DAO is sufficiently decentralized on this dimension, yielding a score of 5/5.
### COMP Reputation and the Impact of Soft Power on Decision-Making Processes
Compound governance primarily occurs in designated online fora, where governance participants pitch and discuss proposals before developing and deploying proposal code. Discussions are generally cross-posted on social media [68], with parallel discussions occasionally held on chat servers [69]. On average, new posts are submitted between daily and bi-weekly, indicating a moderate to high activity level.
By cross-referencing with data from LinkedIn [70], we note that the official organization appears to employ 19 people with titles indicating a commercial relationship with Compound Labs Inc. We did not find evidence of any inordinate influence in proposal submissions by these employees. However, the picture is different when we assess the influence of large vs. small token holders in what we presume is the primary governance forum [71] for pre-proposal discussions: out of a total of 113 proposals to date, 97 are included in the pre-proposal discussion. Of these, at least 53 posts have been authored by individuals in founding roles or with clear connections to the founding team or major token holders. Of these 53 posts, 32 were authored by the service provider Gauntlet [72], a firm specializing in financial modeling, which previously completed a market risk assessment report on Compound [73]. Gauntlet is identified as the controller of the fourth biggest delegate address, holding 118 494 COMP at the time of writing this article. While Gauntlet is a frequent and active participant in Compound governance, its primary emphasis is on topics clearly
related to risk management or the addition of new assets to the platform, and its participation does not appear manipulative.
There appears to be no dispute resolution mechanism. This has been debated in the Compound chat forum on Discord, with some community members objecting to any dispute resolution mechanism and others firmly in support. The topic has not been subject to a formal vote. On this basis, we assign a score of 3 out of 5 on this dimension.
## 6 Discussion
In this article, we propose an information system (IS) focused conceptual artifact based on a review of the literature, combined with expert insights from a group of industry stakeholders and experts. The artifact demonstrates the feasibility of structured assessment methods of the level of DAO decentralization both on-chain and off-chain, mapped to generalized, critical processes of DAOs. We address the research question: "When is a DAO (sufficiently) decentralized?"
In analyzing whether a DAO is sufficiently decentralized, we might expect some quantified evidence of chaos, swarm behavior, and/or a self-organized, distributed, decentralized community, as opposed to the ordered, strong organization with centralized command and control that characterizes the traditional organization.
Hence, the critical focus of analysis is whether the DAO stakeholders or "actors" are empowered with delegated authority and whether they operate sufficiently independently of each other and in their own self-interest in an uncoordinated and voluntary manner.
We propose that "sufficient decentralization" is defined as a verifiable state, where the design of the DAO (1) is collusion resistant and based on long-term equilibrium, and (2) its governance processes have unrestricted and transparent access.
From a regulatory perspective, an alternative approach could simply be to analyze (1) whether the DAO is conducting a regulated activity and, if so, (2) whether there is an accountable legal or physical person upon whom regulation can be enforced; if not, the DAO would have to be acknowledged as sufficiently decentralized. In our view, such an approach is too simplistic and does not accept the fundamental premise that DLT/Blockchain is a transformative technology that will foster innovation and growth.
In terms of the conciseness and robustness attributes of the assessment framework, the challenge lies in the complexity of decentralization as a concept. We avoid an extensive classification scheme that could lead to cognitive overload when assessing a given level of decentralization at a point in time, while also defining enough dimensions and characteristics to clearly differentiate the objects of interest [55].
From a practical and theoretical perspective, it seems evident that no DAO can start decentralized, as any project must be initiated by a small core team, bootstrapping development until the project matures and attracts open-source contributors. However, as discussed, the European regulators did not place any particular emphasis on this critical point when agreeing on the final text of the MiCA regulation. Some US regulatory proposals suggest a safe harbor rule [25], proposing a grace period to allow a DAO to become sufficiently decentralized, thus introducing the concept of "gradual decentralization." In our proposed assessment framework, we acknowledge this by suggesting that the assessment includes a perspective on the mature DAO design, not just the point-in-time view.
We extrapolate our contributions into the following generalized propositions:
**P1**: The concept of technology-neutral regulation is challenged by DLT/Blockchain. DAOs exist and realize benefits through increasing degrees of decentralization. DAO legal design should therefore support the internal decentralization accomplished by the DAO so that a balance is achieved between external and internal decentralization [11], not the other way around. When regulators in the coming years design technical requirements for the supervision of DAOs, they need to acknowledge this underlying premise and embrace that DLT/blockchain is a transformative technology that requires unique regulatory approaches.
**P2:** Regulators need to embrace the concept of a "grace period" for a DAO to achieve sufficient decentralization. The MiCA regulation did not include this, but it seems challenging to embrace DeFi and the concept of sufficient decentralization without it. We suggest an assessment approach where not only the point-in-time assessment is material to the decision of decentralization but also the design intent, thereby introducing a grace period from a risk-based perspective, allowing the EU to practically align crypto regulatory compliance to the safe harbor proposals from the US [25] and common sense.
**P3**: In the short term, for "Institutional DeFi," a level playing field needs to be developed by financial regulators and supervisors, including a "cut-off" strategy, with clear boundaries for acceptable centralized activity, to allow DLT/Blockchain-based businesses to develop properly, respecting the new technological feature regime. From a regulatory perspective, and in the words of MiCA, complete decentralization seems to require full automation. Still, when elements of human governance are introduced, it is difficult to think of complete decentralization as outlined in MiCA. Some automated features also become centralized through the front-end website hosting or other elements. Regulators must accept that a new playing field for DAOs will develop over the coming years.
**P4:** Regulatory practices around DAO decentralization will evolve across blockchains and business models, each with its own strengths and weaknesses regarding centralized attack vectors and regulatory importance. A risk-based approach to DAO supervision, where required, will therefore need to be developed with a holistic view of decentralization across political, technological, social, and economic dimensions, as well as across underlying technology infrastructures that behave very differently from a risk perspective. We foresee regulators will designate some blockchains to have more systemic risk than others.
**P5:** DLT/Blockchain will transform how regulators supervise and enforce regulation. The number of DAOs grew eightfold in the past year [74]. With increasing certainty around the regulation of crypto, the number of DAOs will likely continue to grow, as will the token economy and innovation in blockchain-based business models. Some sample DAO business models [76, 77] are listed in Appendix 1.
These developments pressure regulators to keep pace in two dimensions: (1) supervisors with a traditional finance focus will be challenged as their supervisory toolkits and skillsets become disconnected and obsolete; regulators and supervisors must embrace the available and emerging investigative techniques to analyze DAO structures and processes in real time, on- and off-chain; (2) a focus on automated and embedded supervision should be prioritized [75].
Our work contributes to practice by identifying criteria for DAOs, regulators, and supervisors to consider when assessing whether a DAO is "sufficiently decentralized," complementing the understanding beyond technical difficulties by taking a holistic view of DAOs as complex socio-technical systems.
Our findings contribute actionable insights to the information system literature by emphasizing how DLT and blockchain technologies may be assessed from a socio-technical perspective. We contribute to DAO communities and regulators with a pragmatic tool to understand to what extent an otherwise regulated activity may be considered sufficiently decentralized and thereby avoid significant and costly compliance requirements.
## 7 Conclusion
We investigate the topic of decentralization as it relates to DAOs, using a thematic analysis method to identify relevant patterns for assessing whether sufficient decentralization is present. Through the framework's design, we demonstrate the feasibility of implementing a structured method for the assessment.
We propose a definition of "sufficient decentralization" and incorporate the notion of a representative democracy via delegated mandate in the assessment framework. Still, it remains to
be concluded what level of delegation and decentralization is acceptable under different regulatory regimes. Some regulators seem to suggest complete decentralization as the only acceptable level. However, complete decentralization in DAOs is challenging to grasp, as they are socio-technical constructs.
We design a generalized assessment framework with suggested quantifiers. Still, the application of all characteristics and levels of quantified assessment will likely vary, depending on the need for regulatory monitoring by jurisdiction. Hence, the framework design is flexible to accommodate change as regulatory practices evolve and regulatory technical standards become defined. We demonstrate the practical application of the framework artifact by assessing the level of decentralization of Compound, an algorithmic money market DAO operating on the Ethereum blockchain.
Our findings suggest that decentralization in DAOs is not a myth. Still, due to the technical features of blockchains, it can be complicated to investigate and assess the true level of DAO decentralization. Our contribution is a pragmatic framework that can guide aspiring DAOs, regulators, and supervisors to advance the decentralization agenda as the crypto and traditional economies increasingly overlap and integrate. We extrapolate the findings into five general propositions on the implications of decentralization on the supervision of regulated financial activity in crypto.
## Acknowledgments
The authors wish to thank the anonymous reviewers as well as Danny Dehghani, Jon Isaksen, Michael Zargham, Griff Green, Angela Kreitenweis, Nina Siedler, Marina Markezic, Kris Paruch, and Matthew Barlin for their valuable insights and feedback.
|
2304.07253 | Fundamental limits to near-field optical response | Near-field optics is an exciting frontier of photonics and plasmonics. The
tandem of strongly localized fields and enhanced emission rates offers
significant opportunities for wide-ranging applications, while also creating
basic questions: How large can such enhancements be? To what extent do material
losses inhibit optimal response? Over what bandwidths can these effects be
sustained? This chapter surveys theoretical techniques for answering these
questions. We start with physical intuition and mathematical definitions of the
response functions of interest (LDOS, CDOS, SERS, NFRHT, etc.), after which we
describe the general theoretical techniques for bounding such functions.
Finally, we apply those techniques specifically to near-field optics, for which
we describe known bounds, optimal designs, and open questions. | Owen D. Miller | 2023-04-14T17:04:20Z | http://arxiv.org/abs/2304.07253v1 | # Fundamental limits to near-field optical response
###### Abstract
Near-field optics is an exciting frontier of photonics and plasmonics. The near field is the region of space within much less than one electromagnetic wavelength of a source, and "near-field optics" refers to the phenomena that arise when optical-frequency sources interact with material structures in their near field. Free-space waves exhibit negligible variations over such small length scales, which might lead one to think this regime simply reduces to classical electrostatics and circuit theory. A new twist in the optical near field is the emergence of **polaritons**, modes that arise near the interfaces between negative- and positive-permittivity materials [1]. Polaritons emerge from an interplay of geometry and material susceptibility, instead of geometry and wave interference, to confine optical waves. Freedom from wave-interference requirements leads to a striking possibility: resonant fields whose size (spatial confinement) is _decoupled_ from their wavelength. Highly confined polaritons enable two reciprocal effects: incoming free-space waves can be concentrated to spatial regions much smaller than the electromagnetic wavelength (well below the diffraction limit), and, conversely, patterned materials close to a dipolar emitter can significantly amplify outgoing radiation.
The tandem of strongly localized fields and enhanced emission rates offers significant opportunities for applications including spectroscopy [2; 3], nanolasers [4], coherent plasmon generation [5], and broadband single-photon sources [6]. It also generates fundamental questions: How large can such enhancements be? Are there limits to field localization? All known polaritonic
materials have significant or at least non-trivial amounts of material loss; to what extent does the loss affect these quantities? Over what bandwidths can these effects be sustained?
This chapter surveys theoretical techniques for answering these questions. The same features that make the near field appealing also make it theoretically challenging: there are not fixed photon flows, modal descriptions require exquisite care, and analytical descriptions are not possible except in the simplest high-symmetry scenarios. Over the past decade, thankfully, there has been a surge of interest in identifying what is possible in these systems. One key to the success of these approaches is to not attempt to develop models that apply to every possible instance of a given scattering scenario, but instead to develop techniques that identify _bounds_ to the _extreme possibilities_ of each scattering scenario. In this chapter, we describe these techniques in detail. We start with physical intuition and mathematical definitions of the response functions of interest (Sec. 2), after which we describe the general theoretical techniques for bounding such functions (Sec. 3). Finally, we apply those techniques specifically to near-field optics, for which we describe known bounds, optimal designs, and open questions (Sec. 4).
## 2 Near-field optical response functions
In this section we summarize the background intuition and mathematical equations describing six key near-field optical response functions: local density of states (Sec. 2.1), which is proportional to the radiation of a single dipolar current, free-electron radiation (Sec. 2.2), which is the collective radiation of a line of current created by an electron beam, the cross density of states (Sec. 2.3), which measures modal or emission correlations across different spatial locations, surface-enhanced Raman scattering (Sec. 2.4), which is the simultaneous enhancement of incident radiation and outgoing luminescence, typically for imaging or sensing applications, near-field radiative heat transfer (Sec. 2.5), which is the transfer of radiative energy from a hot body to a cold one, at near-field separations, and mode volume (Sec. 2.6), which refers to the spatial confinement of a resonant mode. Many of these response functions are depicted in Fig. 1.
### 2.1 LDOS
The first and arguably most important near-field response quantity is the _local density of states_ (LDOS). The central role of LDOS is a result of the extent to which it underpins many connected ideas in near-field optics [8].
The first connection is to the power radiated by a dipole. In general, the work per time done by a field \(\mathbf{E}\) on a current \(\mathbf{J}\) in a volume \(V\) is given by \((1/2)\operatorname{Re}\int_{V}\mathbf{J}^{*}\cdot\mathbf{E}\). This is a generalized version of Watt's Law in circuit theory, and it encodes the work done by the electric field mediating the electric force on the charges in the current, across a distance traveled by the charges given by the product of their speed and the time interval of interest. By Newton's third law, the work per time done by a current \(\mathbf{J}\) on a field \(\mathbf{E}\) is the negative of the expression above, \(-(1/2)\operatorname{Re}\int_{V}\mathbf{J}^{*}\cdot\mathbf{E}\). We can convert the current density \(\mathbf{J}\) to a dipole density \(\mathbf{P}\) by the relation \(\mathbf{J}=\partial\mathbf{P}/\partial t=-i\omega\mathbf{P}\) for harmonic frequency \(\omega\) (\(e^{-i\omega t}\) convention). Then the power radiated by a dipole at \(\mathbf{x}_{0}\) with dipole moment \(\mathbf{p}\) (and therefore dipole density \(\mathbf{P}=\mathbf{p}\delta(\mathbf{x}-\mathbf{x}_{0})\)) is
\[P_{\mathrm{rad}} =-\frac{1}{2}\operatorname{Re}\int_{V}\mathbf{J}^{*}\cdot \mathbf{E}\,\mathrm{d}\mathbf{x}\] \[=\frac{\omega}{2}\operatorname{Im}\int_{V}\mathbf{P}^{*}\cdot \mathbf{E}\,\mathrm{d}\mathbf{x}\] \[=\frac{\omega}{2}\operatorname{Im}\left[\mathbf{p}^{*}\cdot \mathbf{E}(\mathbf{x}_{0})\right].\]
The electric field at \(\mathbf{x}_{0}\), \(\mathbf{E}(\mathbf{x}_{0})\), is the field produced by a delta-function dipole source, which exactly coincides with the dyadic Green's function (GF) \(\mathbb{G}\), evaluated at \(\mathbf{x}_{0}\) from a source at \(\mathbf{x}_{0}\), multiplied by the dipole moment \(\mathbf{p}\), giving:
\[P_{\mathrm{rad}}=\frac{\omega}{2}\operatorname{Im}\left[\mathbf{p}^{*}\cdot \mathbb{G}\left(\mathbf{x}_{0},\mathbf{x}_{0}\right)\mathbf{p}\right].\]
The imaginary part of a complex number of the form \(z^{\dagger}Az\) is \(\operatorname{Im}(z^{\dagger}Az)=z^{\dagger}(\operatorname{Im}A)z\) by symmetry, where \(\operatorname{Im}A\) refers to the anti-Hermitian part of \(A\) (\(\operatorname{Im}A=(A-A^{\dagger})/2i\)). So we have
\[P_{\mathrm{rad}}=\frac{\omega}{2}\mathbf{p}^{\dagger}\left[ \operatorname{Im}\mathbb{G}\left(\mathbf{x}_{0},\mathbf{x}_{0}\right)\right] \mathbf{p}. \tag{1}\]
Figure 1: An array of near-field optical response functions of broad interest. (Adapted from Ref. [7].)
This result gives us the first key near-field response function, the imaginary part of the Green's function evaluated at the source position,
\[\operatorname{Im}\mathbb{G}(\mathbf{x}_{0},\mathbf{x}_{0}), \tag{2}\]
which is proportional to the radiation rate of an electric dipole into any environment.
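As a quick consistency check (hedged on the normalization of \(\mathbb{G}\): the expression below assumes the common SI convention \(\operatorname{Im}\mathbb{G}_{0}(\mathbf{x}_{0},\mathbf{x}_{0})=\frac{k^{3}}{6\pi\varepsilon_{0}}\mathcal{I}\) in vacuum, with \(k=\omega/c\); other conventions shift factors of \(\varepsilon_{0}\) and \(\omega\), as noted below), Eq. (1) reproduces the textbook free-space dipole radiation rate,

\[P_{\mathrm{rad}}^{\mathrm{vac}}=\frac{\omega}{2}\,\mathbf{p}^{\dagger}\left[\frac{k^{3}}{6\pi\varepsilon_{0}}\mathcal{I}\right]\mathbf{p}=\frac{\omega^{4}|\mathbf{p}|^{2}}{12\pi\varepsilon_{0}c^{3}},\]

and the environment-induced modification of dipole radiation is then simply the ratio of \(\operatorname{Im}\mathbb{G}(\mathbf{x}_{0},\mathbf{x}_{0})\) to this vacuum value.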
**Spontaneous emission** typically occurs via electric-dipole transitions in atomic or molecular systems, so the rate of spontaneous emission is governed by the imaginary part of the GF. It has been recognized for many decades that this rate is not an immutable constant, but a function of the environment. Just as specifying the amplitude of a current or voltage source in a circuit does not dictate the power delivered by the source, which depends on the impedance of the load, specifying the amplitude of a dipole moment does not dictate the power it delivers to its electromagnetic environment. This fact inspired the concept of a photonic bandgap [9] and photonic crystals [10; 11], with the goal of _inhibiting_ spontaneous emission, originally to avoid laser power loss. It has conversely inspired significant effort towards _amplifying_ spontaneous emission, for applications such as single-molecule imaging [2; 3]. An early recognition of this fact came from Purcell, who noted that an emitter radiating into a single-photonic-mode environment would have an altered spontaneous emission rate [12]. Purcell recognized that for a single-mode resonator with quality factor \(Q\) and mode volume \(V\), the density of states (per unit volume and per unit frequency) becomes \((Q/\omega)/V\). The relative change of the spontaneous-emission rate is the **Purcell factor**, which is proportional to \(\lambda^{3}Q/V\).
Purcell derived this expression in the context of enhancing magnetic-dipole transitions in spin systems, but exactly the same argument applies to electric-dipole transitions, where it is most used today. This expression drives many modern investigations of high-quality-factor and/or small-mode-volume cavity design [13; 14; 15; 16; 17; 18; 19], to reach the largest Purcell enhancement possible. It can be generalized to multi-mode, high-\(Q\) systems: if each mode has mode field \(\mathbf{E}_{i}\), center frequency \(\omega_{i}\), and linewidth (half-width at half-maximum) \(\gamma_{i}\), the power radiated by a dipole with moment \(\mathbf{p}\) located at position \(\mathbf{x}_{0}\) is [20]
\[P_{\text{rad}}\approx\frac{\omega^{2}}{4}\sum_{i}\frac{\gamma_{i}|\mathbf{E}_ {i}^{\dagger}(\mathbf{x}_{0})\mathbf{p}|^{2}}{(\omega-\omega_{i})^{2}+\gamma_ {i}^{2}} \tag{3}\]
In the limit of infinite \(Q\), the Lorentzian lineshapes become delta functions, and the summation simplifies to delta functions multiplied by the overlap of modal fields with the dipole moment. The overlap of each mode with the dipole is a measure of the relative modal energy concentration at that particular point in space. Hence the overall summation can be understood as a **local density of states**, or LDOS (with appropriate prefactors). The
power radiated by a dipole into an electromagnetic environment, then, is directly proportional to the local density of electromagnetic modes; inserting the correct prefactors leads to an LDOS expression in terms of \(\operatorname{Im}\mathbb{G}\)[8; 21; 22; 23]:
\[\operatorname{LDOS}(\omega,\mathbf{x})=\frac{1}{\pi\omega}\operatorname{Tr} \operatorname{Im}\mathbb{G}(\mathbf{x}_{0},\mathbf{x}_{0}), \tag{4}\]
where the trace encodes a summation over all independent polarizations. (Note that e.g. Ref. [8] defines the Green's function with an extra \(1/\omega^{2}\) factor, which leads to \(\omega\) in the numerator of their analog to Eq. (4).) In free space, the LDOS coincides with the density of states (as there are no spatial variations), and is given by \(\operatorname{LDOS}(\omega)=\omega^{2}/2\pi^{2}c^{3}\). Technically, the expression of Eq. (4) is the _electric_ LDOS; one can similarly define a magnetic LDOS through a summation over the relative magnetic-field strengths, or more generally by the power radiated by a magnetic dipole. For a magnetic Green's function \(\mathbb{G}^{(HM)}\), denoting the magnetic field from a magnetic-dipole source, the magnetic LDOS is [8]
\[\operatorname{LDOS}^{(m)}(\omega,\mathbf{x})=\frac{1}{\pi\omega} \operatorname{Tr}\operatorname{Im}\mathbb{G}^{(HM)}(\mathbf{x}_{0},\mathbf{x} _{0}). \tag{5}\]
The sum of Eq. (4) and Eq. (5) is referred to as the _total_ LDOS, representing the totality of electric- and magnetic-field energy localized to a point \(\mathbf{x}_{0}\), at frequency \(\omega\), over all modes. (Significant alterations to the modal-decomposition expressions are needed, for example, in plasmonic (and polaritonic) systems [24; 25].) Such descriptions are mathematically accurate only in the high-quality-factor limit, but the dipole-radiation interpretation generalizes to any linear scattering scenario.
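For numerical orientation, the short Python sketch below evaluates the free-space LDOS \(\omega^{2}/2\pi^{2}c^{3}\) and an order-of-magnitude single-mode (Purcell-type) enhancement \(\tfrac{3}{4\pi^{2}}\lambda^{3}Q/V\); the wavelength, quality factor, and mode volume are arbitrary placeholder values, not quantities taken from the text.

```python
import numpy as np

c = 299792458.0  # speed of light (m/s)

def ldos_free_space(omega):
    """Free-space electric LDOS, omega^2 / (2 pi^2 c^3)."""
    return omega**2 / (2.0 * np.pi**2 * c**3)

def purcell_enhancement(lam, Q, V):
    """Standard single-mode Purcell factor, (3 / 4 pi^2) * lambda^3 * Q / V."""
    return 3.0 / (4.0 * np.pi**2) * lam**3 * Q / V

lam = 1.55e-6                 # placeholder wavelength (m)
omega = 2.0 * np.pi * c / lam
Q, V = 1.0e4, (lam / 2.0)**3  # placeholder quality factor and mode volume
print(f"free-space LDOS: {ldos_free_space(omega):.3e} (per m^3 per rad/s)")
print(f"Purcell enhancement: {purcell_enhancement(lam, Q, V):.1f}")
```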
To summarize, the imaginary part of the Green's function, \(\operatorname{Im}\mathbb{G}(\mathbf{x}_{0},\mathbf{x}_{0})\), is a measure of the power radiated by electric and/or magnetic dipoles in an arbitrary environment, which is proportional to the spontaneous-emission rate of a dipolar emitter, and it encapsulates the Purcell factor, particularly the ratio \(Q/V\), of high-quality-factor modes that concentrate energy at that point. We have extensively described LDOS due to its versatility and cross-cutting nature. The following quantities have more focused and niche applications, and can be described more concisely.
### 2.2 Free-electron radiation
Radiation by a free-electron beam is closely related to LDOS, with the key distinction being that the current distribution is now a line source. An electron (charge \(-e\)) propagating through free space at constant velocity \(v\hat{\mathbf{x}}\) comprises a free current density \(\mathbf{J}(\mathbf{r},t)=-\hat{\mathbf{x}}ev\delta(y)\delta(z)\delta(x-vt)\), which generates
a frequency-dependent incident field [26]
\[\mathbf{E}_{\mathrm{inc}}=\frac{e\kappa_{\rho}e^{ik_{v}x}}{2\pi\omega\varepsilon_{0}}\left[\hat{\mathbf{x}}\,i\kappa_{\rho}K_{0}(\kappa_{\rho}\rho)-\hat{\mathbf{\rho}}\,k_{v}K_{1}(\kappa_{\rho}\rho)\right], \tag{6}\]
written in cylindrical coordinates \((x,\rho,\theta)\), where \(K_{n}\) is the modified Bessel function of the second kind, \(k_{v}=\omega/v\), and \(\kappa_{\rho}=\sqrt{k_{v}^{2}-k^{2}}=k/(\beta\gamma)\) (\(k=\omega/c\), free-space wavevector; \(\beta=v/c\); \(\gamma=1/\sqrt{1-\beta^{2}}\), Lorentz factor). Then photon emission and energy loss of free electrons interacting with nearby scatterers can be treated as a typical scattering problem, with Eq. (6) as the incident field.
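As a minimal numerical sketch (not part of the original text; the electron speed, wavelength, and impact parameters are arbitrary placeholders), the following Python snippet evaluates the magnitudes of the two components of Eq. (6) and makes the rapid radial fall-off explicit:

```python
import numpy as np
from scipy.special import kv          # modified Bessel function of the second kind
from scipy.constants import c, e, epsilon_0

def electron_incident_field(omega, beta, rho):
    """Component magnitudes |E_x|, |E_rho| of Eq. (6), SI units."""
    gamma = 1.0 / np.sqrt(1.0 - beta**2)
    k = omega / c                     # free-space wavevector
    k_v = omega / (beta * c)          # k_v = omega / v
    kappa = k / (beta * gamma)        # kappa_rho = sqrt(k_v^2 - k^2)
    prefactor = e * kappa / (2.0 * np.pi * omega * epsilon_0)
    E_x = prefactor * kappa * kv(0, kappa * rho)
    E_rho = prefactor * k_v * kv(1, kappa * rho)
    return E_x, E_rho

omega = 2.0 * np.pi * c / 500e-9      # placeholder: 500 nm free-space wavelength
for rho in (10e-9, 50e-9, 250e-9):    # placeholder impact parameters
    Ex, Er = electron_incident_field(omega, beta=0.7, rho=rho)
    print(f"rho = {rho*1e9:5.0f} nm: |E_x| = {Ex:.3e} V/m, |E_rho| = {Er:.3e} V/m")
```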
An important feature of Eq. (6) is that the incident field is entirely evanescent (the asymptotic decay of the special function \(K_{n}(x)\) scales as \(e^{-x}/\sqrt{x}\) at large argument). This is expected on physical grounds, as an electron moving at constant velocity cannot radiate. Once a scattering body is brought close to the electron beam, however, the situation changes: the evanescent incident field can excite modes in the scatterer that couple to far-field radiation. (Physically, the electromagnetic-field-mediated interaction of the electron beam with the scatterer can lead to deceleration and therefore radiation.) The radiated power can be computed by an LDOS-like expression, \(\frac{1}{2}\operatorname{Re}\int\mathbf{J}^{*}\cdot\mathbf{E}\), where \(\mathbf{J}\) is the free-electron current density, but the bound techniques developed below for scattering bodies are most easily applied to the polarization fields \(\mathbf{P}\) within the scatterer, so we prefer an equivalent expression in terms of \(\mathbf{P}\). One option would be a linear combination of a direct-radiation term with a scatterer-interaction-radiation term, but the evanescent-only nature of the incident field implies that the direct-radiation term is zero. Instead, the only power lost by the electron beam is that which is extinguished by the scatterer, into absorption losses or far-field radiation. As we discuss more thoroughly in Sec. 3.1, the extinction of a scattering body \(V\) is given by
\[P_{\mathrm{ext}}=\frac{\omega}{2}\operatorname{Im}\int_{V}\mathbf{E}_{ \mathrm{inc}}^{*}(\mathbf{x})\cdot\mathbf{P}(\mathbf{x})\,\mathrm{d}\mathbf{ x}, \tag{7}\]
which we will use to analyze the free-electron loss, as \(P_{\mathrm{loss}}=P_{\mathrm{ext}}\).
When the beam passes by the scatterer without intersecting it, the resulting radiation is referred to as **Smith-Purcell radiation**. When the beam passes through the scatterer, causing radiation, it is referred to as **transition radiation**. And when the beam radiates while propagating _inside_ a refractive medium (within which the modified speed of light can be smaller than the electron speed), it is referred to as **Cherenkov radiation**. The Smith-Purcell process resides squarely in the realm of near-field electromagnetism.
### 2.3 CDOS
In Sec. 2.1, we showed that the power radiated by a single dipole at position \(\mathbf{x}\) is proportional to the LDOS at that point, which itself is proportional to \(\operatorname{Im}\mathbb{G}(\mathbf{x},\mathbf{x})\). Consider now the power radiated by _two_ dipoles, \(\mathbf{p}_{1}\) and \(\mathbf{p}_{2}\), at positions \(\mathbf{x}_{1}\) and \(\mathbf{x}_{2}\), for a total dipole density of \(\mathbf{P}(\mathbf{x})=\mathbf{p}_{1}\delta(\mathbf{x}-\mathbf{x}_{1})+ \mathbf{p}_{2}\delta(\mathbf{x}-\mathbf{x}_{2})\). The power they jointly radiate is given by
\[P_{\mathrm{rad}} =\frac{\omega}{2}\int_{V}\int_{V}\mathbf{P}^{\dagger}(\mathbf{x})\operatorname{Im}\mathbb{G}(\mathbf{x},\mathbf{x}^{\prime})\mathbf{P}(\mathbf{x}^{\prime})\,\mathrm{d}\mathbf{x}\,\mathrm{d}\mathbf{x}^{\prime} \tag{8}\] \[=\frac{\omega}{2}\left\{\mathbf{p}_{1}^{\dagger}\left[\operatorname{Im}\mathbb{G}(\mathbf{x}_{1},\mathbf{x}_{1})\right]\mathbf{p}_{1}+\mathbf{p}_{2}^{\dagger}\left[\operatorname{Im}\mathbb{G}(\mathbf{x}_{2},\mathbf{x}_{2})\right]\mathbf{p}_{2}\right.\] \[\left.+\mathbf{p}_{1}^{\dagger}\left[\operatorname{Im}\mathbb{G}(\mathbf{x}_{1},\mathbf{x}_{2})\right]\mathbf{p}_{2}+\mathbf{p}_{2}^{\dagger}\left[\operatorname{Im}\mathbb{G}(\mathbf{x}_{2},\mathbf{x}_{1})\right]\mathbf{p}_{1}\right\}.\]
The first two terms are the powers radiated by the two dipoles in isolation (or when incoherently excited); the second pair of terms is the positive or negative contribution that arises for constructive or destructive (coherent) interference between the two dipoles. For reciprocal media (of arbitrary patterning), the third and fourth terms are complex-conjugates of each other, such that we can just consider one of them (say, the third term) in determining the two-dipole interference. By analogy with Eq. (4), we can define a **cross density of states (CDOS)** by the expression:
\[\operatorname{CDOS}_{ij}(\omega,\mathbf{x}_{1},\mathbf{x}_{2})=\frac{1}{\pi \omega}\operatorname{Im}\mathbb{G}_{ij}(\mathbf{x}_{1},\mathbf{x}_{2}), \tag{9}\]
which differs from Ref. [27] only by the absence of a 2 in the prefactor. The sign of the CDOS indicates the sign of the interference term, while its magnitude is a field-correlation strength between the two points of interest in a given electromagnetic environment. The amplification of emission that can occur when the sign is positive is an example of superradiance, while the reduction of emission when the sign is negative is an example of subradiance, in each case mediated by the local CDOS [28]. Because the CDOS is the off-diagonal part of a positive-definite matrix, it is straightforward to show that its magnitude is bounded above by the square root of the product of the diagonal terms in the matrix, i.e., the local densities of states of the two dipoles in isolation [29].
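Written out explicitly (this is just the standard off-diagonal bound for a positive-semidefinite matrix, restating the statement above), the constraint reads

\[\left|\operatorname{Im}\mathbb{G}_{ij}(\mathbf{x}_{1},\mathbf{x}_{2})\right|\leq\sqrt{\left[\operatorname{Im}\mathbb{G}(\mathbf{x}_{1},\mathbf{x}_{1})\right]_{ii}\left[\operatorname{Im}\mathbb{G}(\mathbf{x}_{2},\mathbf{x}_{2})\right]_{jj}},\]

so the CDOS between two points can never exceed the geometric mean of the corresponding polarization-resolved LDOS values.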
In systems that are closed, or approximately closed, there is another interesting interpretation of the CDOS [27, 29]. Just as the LDOS can be interpreted as a local modal density, the CDOS can be interpreted as a local modal _connectivity_: it is a measure of spatial coherence between two points. In Ref. [27], it was shown that one can compute local coherence lengths from spatial integrals of the CDOS. From these local coherence lengths, it was unambiguously demonstrated that "spatial squeezing" of eigenmodes occurs in systems of disordered plasmonic nanoparticles. This plausibly explains surprising experimental results when probing the local response of such disordered films [30], showing the value of CDOS as an independent concept from LDOS.
There are two other areas in which CDOS emerges as a key metric: Förster energy transfer [31; 32; 33] and quantum entanglement and super-radiative coupling between qubits [34; 35; 36; 37; 38]. The general idea in each case is a dipole \(\mathbf{p}_{1}\) transferring energy to a second dipole \(\mathbf{p}_{2}\). In this scenario, \(\mathbf{p}_{1}\) and \(\mathbf{p}_{2}\) are considered _fixed_. By Poynting's theorem, the energy flux into a small bounding surface of \(\mathbf{p}_{2}\), for a field \(\mathbf{E}_{1}\) generated by \(\mathbf{p}_{1}\), is
\[\frac{\omega}{2}\operatorname{Im}\left[\mathbf{p}_{2}^{\dagger}\mathbf{E}_{1} (\mathbf{x}_{2})\right]=\frac{\omega}{2}\operatorname{Im}\left[\mathbf{p}_{2} ^{\dagger}\mathbb{G}(\mathbf{x}_{2},\mathbf{x}_{1})\mathbf{p}_{1}\right], \tag{10}\]
which is a form of the CDOS. The fixed nature of the second dipole, \(\mathbf{p}_{2}\), is crucial for the CDOS metric to be the correct one. If the second dipole is _induced by the field emanating from the first dipole_, then \(\mathbf{p}_{2}=\alpha_{2}\mathbf{E}_{1}(\mathbf{x}_{2})\), and the correct energy-transfer expression would be the imaginary part of the polarizability multiplied by the squared absolute value of the Green's function.
### 2.4 Surface-enhanced Raman scattering (SERS)
Surface-enhanced Raman scattering is a technique whereby molecules are excited by a pump field, subsequently emitting Stokes- (or anti-Stokes-) shifted radiation that can be used for imaging or identification [39; 40; 41; 42]. The small cross-sections of most chemical molecules result in very low pump and emission efficiencies in conventional Raman spectroscopy [43], but one can engineer the near-field environment to enhance both the concentration of the pump field and the emission rate. Efficiency improvements of up to 12 orders of magnitude have been demonstrated, enabling single-molecule detection and a variety of applications.
SERS is a nonlinear process, in which a single dipolar molecule sees both a pump enhancement and a spontaneous-emission enhancement. A key insight for understanding SERS is that the weakness of the nonlinearities of the individual molecules means that the nonlinear process can be treated as the _composition_ of linear processes, in which the pump first enhances the excited-population densities (or, classically, the dipole amplitudes), and then the spontaneous-emission enhancements can be treated as a second step, essentially independent of the first.
We can write the key metric of SERS by considering these two steps in sequence, following a procedure outlined in Ref. [44]. First, an illumination field at frequency \(\omega_{0}\) impinges upon the molecule and its environment; in tandem, a total field of \(\mathbf{E}_{\omega_{0}}(\mathbf{x}_{0})\) is generated at the molecule. The Raman process generates a dipole moment at frequency \(\omega_{1}\) given by
\[\mathbf{p}_{\omega_{1}}=\boldsymbol{\alpha}_{\mathrm{Raman}}\mathbf{E}_{\omega_{0}}( \mathbf{x}_{0}) \tag{11}\]
where \(\boldsymbol{\alpha}_{\mathrm{Raman}}\) is the molecular polarizability. Next, the power radiated at \(\omega_{1}\) by this dipole is given, per Eq. (1), by
\[P_{\mathrm{rad},\omega_{1}}=\frac{\omega_{1}}{2}\,\mathbf{p}_{\omega_{1}}^{\dagger}\left[\mathrm{Im}\,\mathbb{G}_{\omega_{1}}(\mathbf{x}_{0},\mathbf{x}_{0})\right]\mathbf{p}_{\omega_{1}}. \tag{12}\]
Hence we see that there are two opportunities for amplification of SERS: concentrating the incoming field \(\mathbf{E}_{\omega_{0}}\) that determines the dipole amplitude, and enhancing the outgoing radiation by maximizing the LDOS, proportional to \(\mathrm{Im}\,\mathbb{G}_{\omega_{1}}(\mathbf{x}_{0},\mathbf{x}_{0})\), at the location of the dipole. To separate the two contributions, we can write the dipole moment as \(\mathbf{p}=\|\boldsymbol{\alpha}\mathbf{E}\|\left(\boldsymbol{\alpha} \mathbf{E}/\|\boldsymbol{\alpha}\mathbf{E}\|\right)\), i.e., an amplitude multiplied by a unit vector. If we denote the unit vector as \(\hat{\mathbf{p}}_{\omega_{1}}\), then we can write
\[P_{\mathrm{rad},\omega_{1}}=\frac{\omega_{1}}{2}\,\|\boldsymbol{\alpha}_{\mathrm{Raman}}\mathbf{E}_{\omega_{0}}\|^{2}\,\hat{\mathbf{p}}_{\omega_{1}}^{\dagger}\left[\mathrm{Im}\,\mathbb{G}_{\omega_{1}}(\mathbf{x}_{0},\mathbf{x}_{0})\right]\hat{\mathbf{p}}_{\omega_{1}}, \tag{13}\]
where now the first term encapsulates \(\omega_{0}\)-frequency concentration, and the second term encapsulates \(\omega_{1}\)-frequency LDOS-enhancement. Straightforward arguments lead to a net SERS enhancement, relative to a base rate \(P_{0}\) without any nearby surface, given by
\[\frac{P_{\mathrm{rad},\omega_{1}}}{P_{0}}=\left(\frac{\|\boldsymbol{\alpha}_{ \mathrm{Raman}}\mathbf{E}_{\omega_{0}}\|^{2}}{\|\boldsymbol{\alpha}_{ \mathrm{Raman}}\|^{2}\|\mathbf{E}_{\mathrm{inc},\omega_{0}}\|^{2}}\right) \left(\frac{\rho_{\hat{\mathbf{p}},\omega_{1}}}{\rho_{0,\omega_{1}}}\right), \tag{14}\]
where \(\|\boldsymbol{\alpha}\|\) refers to the induced matrix norm of \(\boldsymbol{\alpha}\), \(\rho_{\hat{\mathbf{p}},\omega_{1}}\) is the \(\omega_{1}\)-frequency LDOS for a \(\hat{\mathbf{p}}\)-polarized dipole, and \(\rho_{0,\omega_{1}}\) in this expression is the background \(\omega_{1}\)-frequency LDOS of a \(\hat{\mathbf{p}}\)-polarized dipole (not the typical summation over all polarizations). The two parenthetical terms in Eq. (14) must both be bounded to identify fundamental limits to SERS enhancements.
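As a minimal sketch of Eq. (14) (the numerical values are placeholders, not data from the text), the net SERS enhancement is just the product of the pump-concentration factor and the emission-frequency LDOS enhancement:

```python
def sers_enhancement(E_pump_local, E_pump_incident, ldos_emission, ldos_background):
    """Net SERS enhancement of Eq. (14) for a scalar Raman polarizability.

    E_pump_local    : total pump-frequency field amplitude at the molecule
    E_pump_incident : incident pump-frequency field amplitude (no structure present)
    ldos_emission   : emission-frequency LDOS at the molecule, for its dipole orientation
    ldos_background : background emission-frequency LDOS for the same orientation
    """
    pump_factor = abs(E_pump_local)**2 / abs(E_pump_incident)**2   # field concentration
    emission_factor = ldos_emission / ldos_background              # radiative-rate (LDOS) boost
    return pump_factor * emission_factor

# placeholder: 100x local-field amplitude and 10^3 LDOS enhancement -> 10^7 net enhancement
print(sers_enhancement(100.0, 1.0, 1.0e3, 1.0))
```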
### 2.5 Near-field radiative heat transfer
The warming of the cold earth by the hot sun is mediated by radiative transfer, i.e., photons radiated from the sun to the earth. The maximum rate at which such a process could occur is of course given by the **blackbody** rate, which is determined only by the solid angle subtended by the earth from the sun (or vice versa). Determination of this blackbody rate requires no knowledge of multiple-scattering processes between the two bodies. In the far field, the only "channels" (carriers of power into and out of a scattering region) are propagating-wave channels; by Kirchhoff's Law [45], one need only know the absorption or emission rates of the two bodies in isolation to know their maximum radiative-exchange rate. A more general viewpoint of far-field radiation, via the idea of communication channels, is discussed in Sec. 3.2.
It has been known for 75 years [46; 47] that two bodies separated by less than a thermal wavelength can exchange radiative heat at significantly larger rates than their far-field counterparts. Once in the near field, the bodies can exchange photons not only through radiative channels but also through evanescent channels; moreover, as the separation distance \(d\) is reduced, the number of evanescent channels that can be accessed increases dramatically, scaling as \(1/d^{2}\). These channels can be accessed via any mechanism that produces strong near fields. Polaritonic surface waves, via either plasmons or phonon-polariton materials, are a natural choice, and hyperbolic metamaterials (whose strongest effect is not surface waves but instead high-wavenumber bulk modes with nonzero evanescent tails) can provide similar performance [48; 49]. Photonic crystals can also support surface waves, but the confinement of those waves is typically related to the size of their bandgap [11], thereby scaling with frequency, yielding surface waves with significantly less confinement than their metallic counterparts.
The complexity of near-field radiative heat transfer (NFRHT) is daunting, both experimentally and theoretically. The first experimental demonstrations of enhancements in NFRHT via near-field coupling were not achieved until the 2000s [50; 51; 52], many decades after the original predictions [46; 47], and measurements in the extreme near field were not achieved until 2015 [53]. There are a number of technical hurdles to experimental measurements, especially maintaining consistent, nanometer-scale gap separations over large-scale device diameters, while simultaneously measuring minuscule heat currents [53].
The theoretical challenge has been no less severe. NFRHT involves rapidly decaying near fields (requiring high resolution), typically over large-area surfaces (requiring a large simulation region), for spatially incoherent and broadband thermal sources (such that the equivalent of very many simulations is needed). The computational complexity of this endeavor has limited the analysis of NFRHT almost exclusively to high-symmetry structures (planar/spherical bodies, metamaterials, etc.) [54; 55; 56; 57; 58; 59], small resonators [56; 60], two-dimensional systems [61], and the like. We review the planar-body interaction, which is informative, while emphasizing the need (and opportunity) for new theoretical tools to understand what is possible when exchanging radiative heat in the near field.
Consider two near-field bodies with temperatures \(T_{1}\) and \(T_{2}\), respectively. By the fluctuation-dissipation theorem, the incoherent currents in body 1, \(\mathbf{J}_{1}\), have ensemble averages (denoted \(\langle\rangle\)) given by [56]
\[\langle\mathbf{J}_{1}(\mathbf{x},\omega)\mathbf{J}_{1}^{\dagger}(\mathbf{x}^{ \prime},\omega)\rangle=\frac{4\varepsilon_{0}\omega}{\pi}\operatorname{Im} \left[\chi_{1}(\mathbf{x},\omega)\right]\Theta(\omega,T_{1})\delta(\mathbf{x }-\mathbf{x}^{\prime})\mathcal{I}, \tag{15}\]
where \(\chi_{1}(\mathbf{x},\omega)\) is the material susceptibility of body 1, \(\mathcal{I}\) is the 3\(\times\)3 identity matrix, and \(\Theta(\omega,T)\) is the Planck distribution,
\[\Theta(\omega,T)=\frac{\hbar\omega}{e^{\hbar\omega/kT}-1}. \tag{16}\]
These currents radiate to body 2, at each frequency \(\omega\), at a rate that we denote \(\Phi_{21}(\omega)\). The rate \(\Phi_{21}(\omega)\) is given by the ensemble average of the flux into body 2, i.e. \(\langle-\frac{1}{2}\operatorname{Re}\int_{S_{2}}\mathbf{E}\times\mathbf{H}^{* }\cdot\hat{\mathbf{n}}\rangle\), where \(S_{2}\) is a bounding surface of \(V_{2}\), \(\hat{\mathbf{n}}\) is the outward normal, and the field sources are given by Eq. (15), except without the Planck function. The Planck function is separated so that \(\Phi_{21}(\omega)\) is independent of temperature and depends only on the electromagnetic environment. Then the radiative heat transfer rate into 2 from currents in 1, denoted \(H_{21}\), is given by
\[H_{21}=\int\Phi_{21}(\omega)\Theta(\omega,T_{1})\,\mathrm{d}\omega. \tag{17}\]
Similarly, the rate of transfer from body 2 to body 1, \(H_{12}\), is given by
\[H_{12}=\int\Phi_{12}(\omega)\Theta(\omega,T_{2})\,\mathrm{d}\omega, \tag{18}\]
and the net transfer rate is the difference between the two. For reciprocal bodies, the rates \(\Phi_{12}(\omega)\) and \(\Phi_{21}(\omega)\) are always equal (by exchanging the source and "measurement" locations), but this is also true more generally: for _two_ bodies exchanging radiative heat in the near field, \(\Phi_{12}(\omega)\) and \(\Phi_{21}(\omega)\) must be equal, or else one could have net energy exchange between two bodies at equal temperature, in violation of the second law of thermodynamics. Note that if three bodies are present, or either body radiates significant amounts of energy into the far field, this relation need not hold in nonreciprocal systems, and indeed "persistent currents" have been predicted in three-body systems in the near field [62]. Throughout this chapter we will focus on the prototypical two-body case, so we can take
\[\Phi_{12}(\omega)=\Phi_{21}(\omega)=\Phi(\omega), \tag{19}\]
without assuming reciprocity. Hence the _net_ NFRHT rate between the two bodies is given by
\[H_{2\gets 1}=\int\Phi(\omega)\left[\Theta(\omega,T_{1})-\Theta(\omega,T_ {2})\right]\,\mathrm{d}\omega. \tag{20}\]
Often, it is illuminating to reduce the problem to a single temperature \(T\) and study the differential heat transfer for a temperature differential \(\varDelta T\). The net heat exchange divided by this temperature differential is the **heat transfer coefficient**, or HTC, which is given by Eq. (20), except the temperature difference is replaced by a single derivative of \(\Theta(\omega,T)\) with respect to temperature:
\[\mathrm{HTC}=\int\Phi(\omega)\frac{\partial\Theta(\omega,T)}{\partial T}\, \mathrm{d}\omega. \tag{21}\]
Hence, the quantity \(\Phi(\omega)\) is the designable quantity in NFRHT, and is the focus of the NFRHT bounds appearing across Sec. 4.
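For orientation, the sketch below evaluates Eq. (21) by direct quadrature, using \(\partial\Theta/\partial T=\frac{(\hbar\omega)^{2}}{kT^{2}}\frac{e^{\hbar\omega/kT}}{(e^{\hbar\omega/kT}-1)^{2}}\); the flux spectrum \(\Phi(\omega)\) here is an entirely arbitrary single-Lorentzian placeholder (its normalization, and hence the units of the result, are not meant to model any particular structure).

```python
import numpy as np
from scipy.constants import hbar, k as k_B

def dTheta_dT(omega, T):
    """Temperature derivative of the Planck distribution, Eq. (16)."""
    x = hbar * omega / (k_B * T)
    return (hbar * omega)**2 / (k_B * T**2) * np.exp(x) / np.expm1(x)**2

def heat_transfer_coefficient(Phi, T, omega_min=1e12, omega_max=1e16, n=200001):
    """Eq. (21): trapezoidal quadrature of Phi(omega) * dTheta/dT over frequency."""
    w = np.linspace(omega_min, omega_max, n)
    f = Phi(w) * dTheta_dT(w, T)
    return np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(w))

# placeholder spectrum: a single Lorentzian "surface-mode" line near 1.8e14 rad/s
w0, gamma = 1.8e14, 5e12
Phi = lambda w: gamma**2 / ((w - w0)**2 + gamma**2)

print(f"HTC (placeholder Phi) ~ {heat_transfer_coefficient(Phi, T=300.0):.3e}")
```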
### 2.6 Mode volume
Finally, we turn to a unique near-field quantity: mode volume. Intuitively, **mode volume** encapsulates an "amount of space" occupied by an electromagnetic mode. Obviously, defining the volume of a continuous density is necessarily subjective. But we can develop an intuitive approach to the common volume definition. The energy density of a mode \(m\) at any point \(\mathbf{x}\) is proportional to \(\varepsilon(\mathbf{x})|\mathbf{E}_{m}(\mathbf{x})|^{2}\). If the maximum energy density occurs at a point \(\mathbf{x}_{0}\), we can define the volume of the mode as follows: let us redistribute the energy into a binary pattern in which at every point in space it can only take the values \(0\) or \(\varepsilon(\mathbf{x}_{0})|\mathbf{E}_{m}(\mathbf{x}_{0})|^{2}\). Let us also require that the total energy of the mode not change in this binarization, i.e., \(\int\varepsilon(\mathbf{x})|\mathbf{E}(\mathbf{x})|^{2}\) remains fixed. Then the corresponding redistributed field will occupy the volume:
\[V_{m}=\frac{\int\varepsilon(\mathbf{x})|\mathbf{E}_{m}(\mathbf{x})|^{2}}{ \varepsilon(\mathbf{x}_{0})|\mathbf{E}_{m}(\mathbf{x}_{0})|^{2}}. \tag{22}\]
Typical modes of interest, which have strong field concentration and Gaussian- or Lorentzian-like energy decay, are well-suited to such an interpretation.
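A minimal discretized sketch of Eq. (22) is given below (the Gaussian trial profile is a stand-in for an actual computed mode, and the uniform permittivity is a placeholder):

```python
import numpy as np

def mode_volume(eps, E2, dV):
    """Eq. (22): total eps*|E|^2 energy divided by its peak density.

    eps : relative permittivity sampled on a grid
    E2  : |E|^2 sampled on the same grid
    dV  : volume of one grid cell
    """
    u = eps * E2                      # (unnormalized) electric energy density
    return u.sum() * dV / u.max()

# toy example: Gaussian intensity profile in a uniform dielectric, lengths in wavelengths
x = np.linspace(-1.0, 1.0, 101)
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")
sigma = 0.1
E2 = np.exp(-(X**2 + Y**2 + Z**2) / sigma**2)
eps = np.full_like(E2, 12.0)          # silicon-like permittivity (placeholder)
dV = (x[1] - x[0])**3
print(f"V_m ~ {mode_volume(eps, E2, dV):.3e} cubic wavelengths")
```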
More rigorously, per Eq. (3), the modal field intensity is the quantity that determines the interaction of a dipole with a specific mode, and the contribution of that mode to the spontaneous emission of the dipole. Then an alternative interpretation of the quantity in Eq. (22) is that the numerator can be taken to be \(1\), for a normalized mode, and the denominator is the relevant coupling term in the Hamiltonian that is to be maximized. This alternative approach explains why a common mathematical objective is to minimize the expression in Eq. (22), without reference to any physical concept of volume.
A critical question around mode volume is whether such a concept is even valid. For closed (or periodic) systems with nondispersive, real-valued permittivities, the Maxwell operator is Hermitian, and there is an orthogonal basis of modal fields that can be orthonormalized. Dispersion in the material systems makes the eigenproblem nonlinear, but for Drude-Lorentz-like dispersions, one can introduce auxiliary variables, and in this higher-dimensional space there is again a linear, Hermitian eigenproblem [63]. But once losses are introduced, either through open boundary conditions or material dissipation, the operator is no longer Hermitian, and the modes cannot be orthonormalized with an energy-related inner product [25]. Instead, one must work with _quasinormal modes_ (QNMs), for which two issues arise.
If material losses are the dominant loss mechanism, as is typical in plasmonics, then the key new subtlety often is the modification of orthogonality: the modes are orthogonal in an _unconjugated_ "inner product" (e.g. \(\int\varepsilon\mathbf{E}_{1}\cdot\mathbf{E}_{2}\) instead of \(\int\varepsilon\mathbf{E}_{1}^{*}\cdot\mathbf{E}_{2}\)), which then replaces the standard conjugated inner product in modal expansions such as Eq. (3). While this is mathematically convenient, it can stymie our typical intuition. A beautiful example is demonstrated in Ref. [24]. There, it is shown that the spontaneous emission near a two-resonator antenna can be dominated by two QNMs, as expected. However, if one tries to attribute individual contributions from each QNM, one of the QNMs appears to contribute _negative_ spontaneous emission. This is attributable to the modified inner product: modes that are orthogonal in the unconjugated inner product are not orthogonal in an energy inner product, and their contributions to a positive energy flow (such as spontaneous emission) are invariably linked; one can no longer separate a power quantity such as LDOS into individual contributions from constituent modes. Ultimately, one can define mode volume as a complex-valued quantity [24], in which case it no longer becomes an independent quantity of interest to minimize or maximize, but rather an ingredient for other scattering quantities of interest.
If radiation losses are the dominant loss mechanism, one faces a hurdle even before orthogonality: just _normalizing_ the modal fields becomes tricky. If the modal fields eventually radiate in free space, they will asymptotically scale as \(e^{ik_{m}r}/r\), where \(k_{m}=\omega_{m}/c\) is the wavenumber of the mode and \(r\) is a distance from the scatterer. But the losses to radiation transform the resonant eigenvalues to poles in the lower half of the complex-frequency plane, i.e., \(\omega_{m}\rightarrow\omega_{m}^{(r)}-i\omega_{m}^{(i)}\), where \(\omega_{m}^{(i)}>0\). Hence the modal fields grow exponentially, \(\sim e^{\omega_{m}^{(i)}r/c}\), such that any integrals of the form \(\int\mathbf{E}^{2}\) or \(\int|\mathbf{E}|^{2}\) diverge. There are a few resolutions to this issue [25]. Perhaps the simplest is to use computational perfectly matched layers (PMLs) to confine the fields to a finite region. Then, for any accurate discretization of the Maxwell operator, one is simply left with a finite-sized, non-Hermitian matrix, whose eigenvectors will generically be orthonormalizable under the unconjugated inner product. (Exceptions to this occur at aptly named _exceptional points_, where modes coalesce, and one needs Jordan vectors to complete the basis [64, 65].) The orthonormalization of these modes faces the same interpretation issues discussed above in the plasmonic case, and there is one further difficulty: sometimes important contributions to energy expressions can come from fields that _primarily reside in the PML region_. It is difficult to attribute physical intuition or meaning to such contributions.
In Sec. 4.4, where we develop bounds for mode volume, we will only deal with cases of lossless dielectric materials, and we assume the quality factors are sufficiently high that the system is approximately closed. This is the limit in which the mode volume as defined by Eq. (22) is exactly the quantity that enters the LDOS expression of Eq. (3), which is typically the underlying goal of minimizing mode volume in the first place. In scenarios where one must use quasinormal modes, it is probably better to eschew them altogether (if
one wants a bound), and to instead work directly with the scattering quantity (e.g. LDOS) of interest.
## 3 Analytical and computational bound approaches
Across many areas of science and technology, "fundamental limits" or "bounds" play an important role in technological selection, theoretical understanding, and optimal design. Examples abound:
* The **Shockley-Queisser limits** for solar-cell energy conversion efficiency. Originally developed for single-cell, all-angle solar absorption and energy conversion [66], the basic framework identifies two required loss mechanisms in any solar cell: radiation back to the sun (at the open circuit condition [67]), and thermalization losses in the establishment of quasi-Fermi levels in each band. Almost any proposed solar-energy-conversion technique must be put through a Shockley-Queisser analysis to earn serious consideration as a technology.
* The **Yablonovitch \(4n^{2}\) limit**, for the maximum broadband, all-angle absorption enhancement in any optically thick material [68]. The factor \(4n^{2}\), for a refractive index \(n\), arises from the density-of-states enhancement in a high-index material, a 2X enhancement from mirrors on the rear surface, and a 2X enhancement from the reorientation of mostly-vertical rays into random angles.
* The **Wheeler-Chu limit** to antenna quality factor, \(Q\)[69, 70]. It is difficult for a subwavelength antenna (such as a cell-phone antenna) to operate over a wide bandwidth, and the Wheeler-Chu (sometimes Harrington is also given credit [71]) limit imposes a bound on the maximum operational bandwidth. Most state-of-the-art antenna designs operate very close to the Wheeler-Chu limit [72].
* The **Bergman-Milton bounds** on the effective properties of a composite material [73, 74, 75, 76, 77, 78].
* The **Abbe diffraction limit** on the maximum focusing of an optical beam. This limit can be circumvented in the near field [79, 80], or even in the far field if one is willing to tolerate side lobes [81, 82, 83, 84, 85, 86, 87].
* The **Shannon bounds**[88], a foundational idea in information theory [89].
Many of these examples involve electromagnetism, but typically only for non-interacting waves and simplified physical regimes. The Yablonovitch \(4n^{2}\) limit applies in geometric (ray) optics, the Wheeler-Chu limit only arises in highly subwavelength structures, and the diffraction limit applies only to free space (or homogeneous-medium) propagation. Is it possible to create an analogous theoretical framework for the full Maxwell equations, identifying fundamental spectral response bounds while accounting for the exceptional points [90, 91], speckle patterns [92], bound states in the continuum [93], and other exotic
phenomena permitted by the wave equation? A flurry of work over the past decade suggests that in many scenarios, the answer should be "yes." In the following subsections we outline the key new ideas that have been developed.
### 3.1 Global conservation laws
One approach particularly well-suited to formulating bounds is to replace the complexity of the full Maxwell-equation design constraints with a single constraint that encodes some type of conservation law. The Yablonovitch limit, discussed in the previous section, offers a powerful example: to identify maximum absorption enhancement in a geometric-optics setting, one can replace the complexity of ray-tracing dynamics with a single density-of-states constraint. Unfortunately, one cannot extend such density-of-states arguments to full-Maxwell and near-field settings, but other types of "conservation laws" can be identified. A **global conservation law** that has been particularly fruitful for nanophotonics is the **optical theorem**. The optical theorem [94, 95, 96] is a statement of global power conservation: the total power extinguished from an incident beam by a scattering body (or bodies) equals the sum of the powers scattered and absorbed by that body. Writing the extinguished, scattered, and absorbed powers as \(P_{\rm ext}\), \(P_{\rm scat}\), and \(P_{\rm abs}\), respectively, the optical theorem can be expressed as
\[P_{\rm ext}=P_{\rm scat}+P_{\rm abs}. \tag{23}\]
Conventionally, the optical theorem is specified in terms of the far-field scattering amplitudes of a scattering body [94], in which case the extinction is shown to be directly proportional to the imaginary part of the forward-scattering amplitude. This expression can be interpreted as a mathematical statement of the physical intuition that the total power taken from an incident beam can be detected in the phase and amplitude of its shadow. The analysis does not have to be done in the far field; another common version is to relate the extinguished-, scattered-, and absorbed-power fluxes via surface integrals of the relevant Poynting fluxes [95]. Still one more version of the optical theorem, and the one that turns out to be most useful for wide-ranging bound applications, is to use the divergence theorem to relate the surface fluxes to the fields within the volume of the scatterer, and write all powers in terms of the polarization currents and fields induced in those scatterers [96]. As we briefly alluded to in the discussion of free-electron radiation in Sec. 2.2, the work done by a field \({\bf E}\) on a polarization field \({\bf P}\) in a volume \(V\) is given by \(\left(\frac{\omega}{2}\right){\rm Im}\int_{V}{\bf E}^{*}\cdot{\bf P}=\left(\frac{\omega}{2}\right)\int_{V}{\bf P}^{*}\left[{\rm Im}\,\chi/|\chi|^{2}\right]{\bf P}\), where \(\chi\) is the material susceptibility. (We assume throughout scalar, electric material susceptibilities \(\chi\). The generalizations to magnetic, anisotropic, and bianisotropic materials are straightforward in every case.) Extinction is the work done by the incident
field on the induced polarization field, scattered power is the work done by that polarization field on the scattered fields \(\mathbf{E}_{\mathrm{scat}}\), and absorbed power is the work done by the total field on the polarization field. Hence the optical theorem reads:
\[\mathrm{Im}\int_{V}\mathbf{E}_{\mathrm{inc}}^{*}(\mathbf{x})\cdot \mathbf{P}(\mathbf{x})\,\mathrm{d}\mathbf{x}=\mathrm{Im} \int_{V}\int_{V}\mathbf{P}^{*}(\mathbf{x})\cdot\mathbb{G}_{0}( \mathbf{x},\mathbf{x}^{\prime})\mathbf{P}(\mathbf{x}^{\prime})\,\mathrm{d} \mathbf{x}\,\mathrm{d}\mathbf{x}^{\prime}\] \[+\int_{V}\mathbf{P}^{*}(\mathbf{x})\cdot\frac{\mathrm{Im}\,\chi( \mathbf{x})}{|\chi(\mathbf{x})|^{2}}\mathbf{P}(\mathbf{x})\,\mathrm{d}\mathbf{ x}, \tag{24}\]
where we have substituted \(\mathbf{E}_{\mathrm{scat}}(\mathbf{x})=\int_{V}\mathbb{G}_{0}(\mathbf{x}, \mathbf{x}^{\prime})\mathbf{P}(\mathbf{x}^{\prime})\,\mathrm{d}\mathbf{x}^{\prime}\) for the scattered field and dropped the constant factor (\(\omega/2\)) preceding every integral. Equation (24) relates extinction on the left-hand side to the sum of scattered and absorbed powers on the right-hand side. For intuition and compactness, it is helpful to rewrite equations like Eq. (24) in a matrix/vector form. We can assume any arbitrarily high-resolution discretization in which \(\mathbf{P}(\mathbf{x})\) becomes a vector \(\mathbf{p}\), the integral operator \(\int_{V}\mathbb{G}(\mathbf{x},\mathbf{x}^{\prime})\,\mathrm{d}\mathbf{x}^{\prime}\) becomes a matrix \(\mathbb{G}_{0}\), and integrals of the conjugate of a field \(\mathbf{a}(\mathbf{x})\) with another \(\mathbf{b}(\mathbf{x})\) are replaced with vector inner products \(\mathbf{a}^{\dagger}\mathbf{b}\). It is also helpful to define a material parameter \(\xi(\mathbf{x})=-1/\chi(\mathbf{x})\), and a corresponding (diagonal) matrix \(\xi=-\chi^{-1}\). With these notational changes, Eq. (24) can be re-written
\[\mathrm{Im}\left(\mathbf{e}_{\mathrm{inc}}^{\dagger}\mathbf{p}\right)= \mathbf{p}^{\dagger}\left[\mathrm{Im}\,\mathbb{G}_{0}+\mathrm{Im}\,\xi\right] \mathbf{p}. \tag{25}\]
This is the vectorized version of the optical theorem, and it illuminates some of the mathematical structure embedded in this particular version of power conservation. The left-hand side is a linear function of the polarization field \(\mathbf{p}\), while the right-hand side is a quadratic function. Moreover, in passive systems the absorbed and scattered powers are nonnegative quantities. This nonnegativity is embedded in the matrices (operators) \(\mathrm{Im}\,\mathbb{G}_{0}\) and \(\mathrm{Im}\,\xi\), both of which are positive semidefinite (denoted by "\(\geq 0\)") in passive systems:
\[\mathrm{Im}\,\mathbb{G}_{0} \geq 0, \tag{26}\] \[\mathrm{Im}\,\xi \geq 0. \tag{27}\]
The positive semidefinite nature of these matrices implies that the right-hand side of Eq. (25) is a convex quadratic functional of \(\mathbf{p}\). Hence Eq. (25) can be interpreted as an ellipsoid (as opposed to a hyperboloid) in the high-dimensional space occupied by \(\mathbf{p}\).
A key feature of Eq. (25), and the conservation laws to follow, is that it is "**domain oblivious**" [97]. Suppose we enforce that constraint on a high-symmetry domain, such as a sphere or half-space, where the operator \(\mathbb{G}_{0}\) might be easy to construct. Of course, enforcing Eq. (25) will enforce power conservation on the sphere itself. But it _also enforces power conservation on all sub-domains of the sphere_. This is not obvious: the operator \(\mathbb{G}_{0}\) is different
for every choice of domain and range, and once we have chosen a sphere for both, it seems that we are stuck with only the sphere domain. The key, however, is the appearance of \(\mathbf{p}\) in each term of Eq. (25), and twice on the right-hand side. To enforce Eq. (25) on a smaller sub-domain, instead of changing the domain and range of the operator, we can enforce the polarization \(\mathbf{p}\) to be zero at each point outside the sub-domain but inside the enclosing domain. On the right-hand side, this effectively changes both the domain and range of \(\mathbb{G}_{0}\), while on the left-hand side, it nulls any extinction contribution from outside the sub-domain. Hence, the conservation law of Eq. (25), and all of the volume-integral-based conservation laws to follow, is domain oblivious.
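To make the domain-oblivious property concrete, here is a minimal numerical sketch. Random complex-symmetric matrices with positive-semidefinite imaginary parts stand in for a real discretized Green's function, and the matrix size and sub-domain choice are illustrative assumptions. The sketch solves the volume-integral equation on an arbitrary sub-domain, embeds the solution in the larger domain with zeros elsewhere, and confirms that Eq. (25), written with the large-domain operators, is satisfied.

```python
import numpy as np

# Minimal sketch of "domain obliviousness": a polarization field that is nonzero
# only on a sub-domain, and solves the scattering problem there, automatically
# satisfies the optical theorem of Eq. (25) written with large-domain operators.
# The toy matrices below are assumptions standing in for a real Green's function.
rng = np.random.default_rng(3)
n = 20

B, C = rng.standard_normal((n, n)), rng.standard_normal((n, n))
G0 = 0.5 * (B + B.T) + 1j * (C @ C.T) / n          # complex-symmetric, Im G0 >= 0
xi = np.diag(-2.0 + 0.3j + 0.1 * rng.random(n))    # toy xi = -1/chi, with Im xi > 0
e_inc = rng.standard_normal(n) + 1j * rng.standard_normal(n)

sub = np.arange(0, n, 3)                           # an arbitrary sub-domain ("pattern")
p = np.zeros(n, dtype=complex)
p[sub] = np.linalg.solve(G0[np.ix_(sub, sub)] + xi[np.ix_(sub, sub)], -e_inc[sub])

lhs = np.imag(np.vdot(e_inc, p))                   # extinction term, Im(e_inc^dag p)
rhs = np.real(np.vdot(p, (G0.imag + xi.imag) @ p)) # scattered + absorbed, p^dag[Im G0 + Im xi]p
print(f"Eq. (25) on the big domain: {lhs:.6f} = {rhs:.6f}")
```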
Power conservation via the optical theorem has led to a surprisingly wide array of bounds and fundamental limits in electromagnetic systems. The key idea is to drop the full Maxwell-equation constraint that is implicit in any design problem, and replace it with only the power-conservation expression of Eq. (25). Even with just this single constraint, surprisingly good bounds can be attained. As an example, consider systems where absorptive losses are more important than radiation/scattering losses. In such systems, we can drop the \(\operatorname{Im}\mathbb{G}_{0}\) term in the optical theorem of Eq. (25), and use its positivity to write a constraint that absorbed power be less than or equal to extinction:
\[\mathbf{p}^{\dagger}\left(\operatorname{Im}\xi\right)\mathbf{p}\leq \operatorname{Im}\left(\mathbf{e}_{\text{inc}}^{\dagger}\mathbf{p}\right). \tag{28}\]
This constraint implies a bound on the strength of the polarization field, because the left-hand-side term is quadratic (and positive-definite) in \(\mathbf{p}\), while the right-hand side is linear in \(\mathbf{p}\). A few steps of variational calculus [98] can identify the largest polarization-field strength that can be induced in a scatterer:
\[\|\mathbf{p}\|^{2}=\mathbf{p}^{\dagger}\mathbf{p}=\int_{V}\left| \mathbf{P}(\mathbf{x})\right|^{2}\,\mathrm{d}\mathbf{x}\leq\frac{\|\mathbf{e }_{\text{inc}}\|^{2}}{\operatorname{Im}\xi}=\frac{|\chi|^{2}}{\operatorname{ Im}\chi}\int_{V}\left|\mathbf{E}_{\text{inc}}(\mathbf{x})\right|^{2}\,\mathrm{d} \mathbf{x}. \tag{29}\]
We have a first bound: in a lossy material, wherein \(\operatorname{Im}\chi>0\), there is a bound on the largest polarization currents that can be induced in a scatterer, based only on the material properties and the energy of the _incident_ wave in the scattering region. Polarization currents beyond this strength would have absorbed powers larger than their extinction, implying an unphysical negative scattered power.
Beyond the strength of the polarization field itself, one can use similar variational-calculus arguments to identify bounds on wide-ranging quantities: extinction, absorption, and scattering, in bulk materials [98], 2D materials [99], and lossy environments [100, 101]; high-radiative-efficiency scatterers [102]; and even near-field quantities such as local density of states [98, 44], near-field radiative heat transfer [103, 99], and Smith-Purcell radiation [104]. As a canonical example, let us consider the extinction, absorption, and
scattering cross-sections of a scattering body with volume \(V\), susceptibility \(\chi\), and a plane-wave incident field. Cross sections \(\sigma_{\rm ext,abs,scat}\) are the relevant powers divided by the intensity of the incident wave; the corresponding bounds are
\[\frac{\sigma_{\rm abs,scat,ext}}{V}\leq\frac{\beta\omega}{c}\,\frac{|\chi|^{2}}{ \operatorname{Im}\chi}\qquad\beta_{\rm abs,ext}=1,\beta_{\rm scat}=\frac{1}{4}. \tag{30}\]
Per-volume cross-sections are bounded above by the frequency of the incoming waves and the material susceptibilities. Plasmonic nanoparticles can approach these bounds [98, 99, 105].
One subtlety that arises in the near field (whose bounds are discussed in depth in Sec. 4) is which conservation laws to use. The absorption- and extinction-based constraint of Eq. (28) may not be ideal for local density of states, for example, as the power radiated by a dipole is not exactly the same as the power extinguished by a nearby scatterer. (There is a separate pathway for the dipole to radiate directly to the far field, and this radiation can destructively/constructively interfere with waves scattered by the scatterer.) The optical theorem of Eq. (25) arises from equating fluxes through a surface surrounding the scatterer. Instead, in the near field, one can draw a surface around the dipolar source itself. Then one can identify new conservation laws, which now relate the total power radiated by the dipole (the LDOS) to the sum of power absorbed in the scatterer and power radiated to the far field.
In some systems, radiation losses are the limiting factor rather than absorption losses. Prominent examples include metals at low frequencies, and low-loss dielectrics. In these systems, the key component of the optical theorem of Eq. (25) is the radiation-loss term with \(\operatorname{Im}\mathbb{G}_{0}\), not the absorption-loss term. Of course, absorption must be positive, so we can drop it and replace the optical theorem with a second inequality version:
\[\mathbf{p}^{\dagger}\left(\operatorname{Im}\mathbb{G}_{0}\right)\mathbf{p} \leq\operatorname{Im}\left(\mathbf{e}_{\rm inc}^{\dagger}\mathbf{p}\right). \tag{31}\]
Although the \(\operatorname{Im}\mathbb{G}_{0}\) matrix may appear daunting, we typically use high-symmetry volumes for our designable domains, and we can use analytical or semi-analytical forms of \(\operatorname{Im}\mathbb{G}_{0}\) in those domains. (Such usage does not restrict the validity of the bound to only the high-symmetry domain; as discussed above, this expression is domain oblivious.) One common high-symmetry domain is a sphere, in which case \(\operatorname{Im}\mathbb{G}_{0}\) can be written in a basis of vector spherical waves [106, 107, 108]. Application of this approach to the question of maximum cross-sections yields different bounds from the ones of Eq. (30). One must limit the number of spherical waves that can contribute to the scattering process; allowing only the first \(N\) electric multipoles leads to maximum cross-sections proportional to the square of the wavelength, \(\lambda\):
\[\sigma_{\rm abs,scat,ext}\leq\frac{\beta\lambda^{2}}{\pi}\left(N^{2}+2N \right)\qquad\beta_{\rm scat,ext}=1,\beta_{\rm abs}=\frac{1}{4}, \tag{32}\]
with double the value if the magnetic vector spherical waves can be equally excited. Note the different values of \(\beta\) for absorption and scattering in the absorption-limited case of Eq. (30) versus the radiation-limited case of Eq. (32). The different coefficients arise because of the different conditions under which maximum extinction occurs. In an absorption-dominated system, arbitrarily small scattering is possible (in principle), such that the maxima for extinction and absorption coincide, while the scattered-power maximum requires a reduction in absorption relative to extinction and a \(1/4\) coefficient to account for the matching that must occur. The opposite occurs in scattering-limited systems, where absorption can be arbitrarily small (in principle), the maxima for extinction and scattering coincide, and an extra factor of \(1/4\) is introduced when absorption is to be maximized. The bound of Eq. (32) was originally derived for antenna applications or spherically symmetric scatterers via long and/or restrictive arguments [109, 110, 111, 112, 113]; the single conservation law of Eq. (31) is sufficient to derive Eq. (32) in quite general settings [114, 115]. (An interesting precursor to the global-conservation-law approach is Ref. [116], which identifies metrics that intrinsically have bounded optima over polarization currents, even _without_ any constraints.)
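As a quick numerical illustration of how the two families of bounds trade off, the sketch below evaluates the absorption-limited extinction bound of Eq. (30) and the electric-dipole-only (\(N=1\)) radiation-limited bound of Eq. (32) for an assumed silver-like Drude susceptibility and an assumed 40-nm-radius spherical volume; the material parameters, size, and wavelength are illustrative choices, not values from the cited references. The operative bound is whichever of the two is smaller.

```python
import numpy as np

# Absorption-limited bound, Eq. (30), versus the N = 1 radiation-limited bound,
# Eq. (32), for extinction. All parameter values below are illustrative assumptions.
hbar_omega_p, hbar_gamma = 9.0, 0.02          # assumed Drude parameters [eV]
lam = 600e-9                                  # wavelength [m]
hbar_omega = 1239.84e-9 / lam                 # photon energy [eV]
chi = -hbar_omega_p**2 / (hbar_omega**2 + 1j * hbar_gamma * hbar_omega)

radius = 40e-9                                # assumed bounding-sphere radius [m]
V = 4 / 3 * np.pi * radius**3
k = 2 * np.pi / lam

sigma_abs_limited = V * k * abs(chi)**2 / chi.imag    # Eq. (30), beta_ext = 1
sigma_rad_limited = 3 * lam**2 / np.pi                # Eq. (32), N = 1, beta_ext = 1
print(f"absorption-limited bound: {sigma_abs_limited * 1e12:.2f} um^2")
print(f"radiation-limited bound : {sigma_rad_limited * 1e12:.2f} um^2")
```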
Of course, in some settings both absorption and radiation losses will be important to capture what is possible, and the bounds of Eqs. (30,32) may not be sufficient. It is possible to capture both loss mechanisms in a single bound by using the entirety of the optical theorem, Eq. (25), without dropping either term. This was first recognized in Refs. [117, 108, 118]. Ref. [108] used this approach to derive bounds on the thinnest possible perfect absorber. (Or, conversely, the maximum absorption of an arbitrarily patterned thin film with a given maximum thickness.) Cross-section bounds given in Ref. [117, 108, 118] are generalizations of the two bounds listed above, Eqs. (30,32), containing each as separate asymptotic limits. At normal incidence, one can derive a simple transcendental equation for the minimum thickness, \(h_{\rm min}\), of a perfect absorber with material parameter \(\xi=-1/\chi\):
\[h_{\rm min}=\left(\frac{2\lambda}{\pi}\right)\frac{\operatorname{Im}\xi( \omega)}{1-\operatorname{sinc}^{2}\left(\omega h_{\rm min}/c\right)}. \tag{33}\]
This approach has been successfully applied to the identification of the minimum thickness of a metasurface reflector [119].
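Equation (33) is transcendental but straightforward to solve numerically. The following sketch does so with a bracketing root-finder, for an assumed susceptibility \(\chi=5+0.5i\) at the design wavelength (an illustrative value, not a material from the references).

```python
import numpy as np
from scipy.optimize import brentq

# Solve the transcendental Eq. (33) for the minimum perfect-absorber thickness.
lam = 1.0                        # work in units of the design wavelength
chi = 5 + 0.5j                   # assumed material susceptibility at that wavelength
im_xi = np.imag(-1.0 / chi)      # Im xi = Im chi / |chi|^2

def sinc(x):
    # unnormalized sinc, sin(x)/x; note np.sinc(x) is sin(pi x)/(pi x)
    return np.sinc(x / np.pi)

def residual(h):
    # h - (2 lam / pi) * Im xi / (1 - sinc^2(omega h / c)), with omega/c = 2 pi / lam
    return h - (2 * lam / np.pi) * im_xi / (1 - sinc(2 * np.pi * h / lam) ** 2)

h_min = brentq(residual, 1e-6 * lam, 0.5 * lam)
print(f"minimum absorber thickness: {h_min:.3f} wavelengths")   # ~0.10 for this material
```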
Finally, at the global-conservation level, one can go one step further, as first recognized in Refs. [117, 118]. The optical theorem of Eq. (25) represents the conservation of _real_ power across the volume of a scatterer, which can be understood as the conservation of the real part of the Poynting vector through any bounding surface. Additionally, there is an analogous conservation statement for the imaginary part of the Poynting vector, corresponding to what is known as _reactive_ power [95]. The complex-valued version of the optical theorem is essentially the same as Eq. (25) but without the imaginary part in any of the terms; a careful analysis leads to the generalized optical theorem:
\[-\mathbf{p}^{\dagger}\mathbf{e}_{\mathrm{inc}}=\mathbf{p}^{\dagger}\left[\mathbb{G}_ {0}+\xi\right]\mathbf{p}. \tag{34}\]
The real and imaginary parts of Eq. (34) now offer _two_ global conservation laws that must be satisfied in any scatterer. The real-power conservation law accounts for absorption- and radiation-loss pathways, while the reactive-power conservation law accounts for resonance conditions in real materials. The latter has been shown to be beneficial for tightening bounds in plasmonic materials that are relatively large (wavelength-scale sizes are quite large for plasmonic resonators) or which have very large negative real susceptibilities and/or very small imaginary susceptibilities [118]. This approach has been applied to bounds in cloaks [120] and focusing efficiency [121]. Equation (34) can be derived in one step from the volume-integral equation [122] (or Lippmann-Schwinger equation), which in this notation reads \(\left[\mathbb{G}_{0}+\xi\right]\mathbf{p}=-\mathbf{e}_{\mathrm{inc}}\), simply by taking the inner product of that equation with \(\mathbf{p}\).
In this section we have seen that the optical theorem, written over the volume polarization fields induced in a scatterer, offers a single (or two) global conservation laws that can be used to identify bounds in wide-ranging applications. In Sec. 3.3 below we show that it is also a starting point for generating an infinite number of "local" conservation laws. First, however, we will explore an approach that is closely related to global conservation laws: so-called "channel" bounds.
### Channel bounds
In this section, we explore another technique for identifying bounds to what is possible: decomposing power transfer into a set of independent or orthogonal power-carrying "channels." Then the upper limits distill to the maximum power (or alternative objective) per channel multiplied by the number of possible channels.
A particularly elegant formulation of channels was proposed by D. A. B. Miller and colleagues in the early 2000's [123, 124, 125, 126]. Consider a transmitter region that wants to communicate (i.e. send information/energy) to a receiver region, and a vacuum (or background) Green's-function operator \(\mathbb{G}_{0}\) comprising the fields in the receiver from sources in the transmitter. How many communication channels are possible? There is a simple, rigorous mathematical answer to this question: if one decomposes the \(\mathbb{G}_{0}\) operator via a singular value decomposition (SVD) [127],
\[\mathbb{G}_{0}=\mathbb{U}\mathbb{S}\mathbb{V}^{\dagger}, \tag{35}\]
then each pair of singular vectors forms an independent channel. The singular-value decomposition encodes orthogonality and normalization. For example, the first right singular vector, which we can call \(\mathbf{v}_{1}\), radiates _only_ to the first
left singular vector \(\mathbf{u}_{1}\) in the receiver region, and the strength of this connection is given exactly by the first singular value, which we can call \(s_{1}\). This triplet \((\mathbf{v}_{1},\mathbf{u}_{1},s_{1})\) mathematically defines a **communication channel**, as do all such triplets in the SVD. There cannot be an infinite number of such channels with arbitrarily large strengths, as the channel strengths obey a simple sum rule related to the integral of the Green's function over the transmitter and receiver volumes:
\[\sum_{i}|s_{i}|^{2}=\mathrm{Tr}\left(\mathbb{S}^{\dagger}\mathbb{S}\right)= \mathrm{Tr}\left(\mathbb{G}_{0}^{\dagger}\mathbb{G}_{0}\right)=\int_{V_{T}} \int_{V_{R}}\|\mathbb{G}_{0}(\mathbf{x}_{T},\mathbf{x}_{R})\|^{2}\;\mathrm{d} \mathbf{x}_{T}\,\mathrm{d}\mathbf{x}_{R}. \tag{36}\]
One can define more granular bounds as well: for any transmitter/receiver regions enclosed within high-symmetry bounding domains, one can identify upper limits for each individual singular value [128]. The singular values must decay exponentially in two-dimensional systems, whereas in three dimensions their decay can be sub-exponential. This SVD-based decomposition of Eq. (35) implicitly uses a field-energy normalization; one can alternatively use power-transfer normalizations and arrive at related bounds for the communication strength between two volumes [129; 130; 131]. Each of these is a powerful approach for free-space communication systems such as MIMO [132; 133]. More generally, they capture a general truth about free-space propagation: it can always be decomposed into orthogonal, power-carrying channels.
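The sketch below illustrates the channel construction of Eqs. (35,36) with a scalar-wave stand-in for the dyadic Green's function (an assumption made purely to keep the example short; the grid sizes and separation are likewise arbitrary choices). It discretizes two separated cubic volumes, computes the singular values of the transmitter-to-receiver Green's-function matrix, and checks the sum rule of Eq. (36), which for the discretized matrix is simply its squared Frobenius norm. For well-separated volumes the singular values fall off quickly with channel index, which is the quantitative sense in which only a handful of channels meaningfully contribute.

```python
import numpy as np

# Communication channels between two volumes via SVD of a (scalar, toy) Green's function.
k = 2 * np.pi                  # wavenumber, in units where the wavelength is 1
n, h, sep = 6, 0.1, 2.0        # points per side, grid spacing, center-to-center separation

pts = np.array([(i, j, m) for i in range(n) for j in range(n) for m in range(n)]) * h
xT = pts                                      # transmitter volume
xR = pts + np.array([sep, 0.0, 0.0])          # receiver volume, displaced along x

R = np.linalg.norm(xR[:, None, :] - xT[None, :, :], axis=-1)
G0 = np.exp(1j * k * R) / (4 * np.pi * R)     # scalar free-space Green's function

s = np.linalg.svd(G0, compute_uv=False)       # channel strengths, Eq. (35)
print("strongest channel strengths:", s[:4])
print("sum of |s_i|^2:        ", np.sum(s**2))            # Eq. (36), left-hand side
print("Frobenius norm^2 of G0:", np.sum(np.abs(G0)**2))   # Eq. (36), right-hand side
```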
In the near field, however, evanescent waves do _not_ offer an equivalent set of power-carrying channels. Evanescent waves obey different mathematical orthonormalization rules, which are consistent with the following fact: evanescent waves decaying (or growing) in one direction _cannot_ carry power; power can be transmitted only in the presence of oppositely directed evanescent waves [134]. A prototypical example: a single interface can only exhibit total internal reflection alongside evanescent-wave excitation, whereas the introduction of a second interface, and counter-propagating evanescent waves, can lead to the tunneling of power through a "barrier."
In lieu of the general SVD approach, in high-symmetry scenarios it is often possible to decompose power transfer in a high-symmetry basis. For example, a spherically symmetric scatterer preserves the quantum numbers of incoming vector spherical waves and cannot scatter into waves of different quantum numbers, which implies that each vector spherical wave comprises a "channel" for incoming and outgoing radiation. Similarly, in planar systems, the in-plane (parallel) wavevector \(\mathbf{k}\) is a conserved quantity, in which case one can isolate the scattering process into each \(\mathbf{k}\)-dependent propagating and evanescent plane wave. One cannot define free-space evanescent-wave channels, per the orthonormalization discussion above, but a more complete analysis can lead to \(\mathbf{k}\)-dependent transfer coefficients that are readily interpretable as a channel-based power decomposition. We discuss the successful application of these ideas to near-field radiative heat transfer in Sec. 4.1.4. A word of caution is important, however: the assumption of a high-symmetry structure
dramatically limits the set of structures to which such bounds apply, and in many scenarios it has been found that the symmetry-independent approaches of global conservation laws (previous section) and local conservation laws (next section) yield both tighter _and_ more general bounds.
### Local conservation laws
In the global-conservation-law section of Sec. 3.1, we discussed that one or two conservation-of-power constraints are already sufficient for bounds in many scenarios of interest. Of course, one or two constraints cannot capture every objective of interest: if, for example, one wanted to know the largest average response over multiple incident fields, certainly more constraints are needed. Thankfully, it turns out that there is a systematic way to generate a large number of conservation-law constraints for any nanophotonic design problem of interest.
The key is to identify _local_ **conservation laws** that apply at every point within the scatterer [97, 107]. These conservation laws can be "built" from a volume-integral formulation of the underlying governing dynamics, but we will use a more intuitive approach to develop them. The "generalized optical theorem" is written in Eq. (34) in vector/matrix notation; the equivalent integral expression is
\[\int_{V}\int_{V}\mathbf{P}^{*}(\mathbf{x})\mathbb{G}_{0}(\mathbf{x},\mathbf{x }^{\prime})\mathbf{P}(\mathbf{x}^{\prime})\,\mathrm{d}\mathbf{x}\,\mathrm{d} \mathbf{x}^{\prime}+\int_{V}\mathbf{P}^{*}(\mathbf{x})\xi(\mathbf{x})\mathbf{P }(\mathbf{x})\,\mathrm{d}\mathbf{x}=-\int_{V}\mathbf{P}^{*}(\mathbf{x}) \mathbf{E}_{\mathrm{inc}}(\mathbf{x})\,\mathrm{d}\mathbf{x}. \tag{37}\]
To formulate local conservation laws, we simply recognize the following: for the first integral over the entire scatterer \(V\) that appears in every term, we can replace \(V\) with \(V_{\mathbf{x}}\), where \(V_{\mathbf{x}}\) is an infinitesimal volume centered around any point \(\mathbf{x}\) within the scatterer. With this replacement, the dependence on \(\mathbf{x}\) of each integrand becomes approximately constant (exactly constant in the zero-volume limit), and the integral simplifies to just multiplication by the volume \(V_{\mathbf{x}}\), which appears in every term and can be cancelled, leaving:
\[\int_{V}\mathbf{P}^{*}(\mathbf{x})\mathbb{G}_{0}(\mathbf{x},\mathbf{x}^{ \prime})\mathbf{P}(\mathbf{x}^{\prime})\,\mathrm{d}\mathbf{x}^{\prime}+ \mathbf{P}^{*}(\mathbf{x})\xi(\mathbf{x})\mathbf{P}(\mathbf{x})=-\mathbf{P}^ {*}(\mathbf{x})\mathbf{E}_{\mathrm{inc}}(\mathbf{x}). \tag{38}\]
More rigorous justifications are given in Refs. [97, 107], and can proceed either from the volume-integral formulation or, with equal validity, by converting the volume integrals around \(V_{\mathbf{x}}\) into surface integrals (via the divergence theorem), in which case Eq. (38) is interpreted simply as flux conservation through the surface of \(V_{\mathbf{x}}\). To convert Eq. (38) to the more compact vector notation, we denote new matrices \(\mathbb{D}_{i}\) as diagonal matrices of all zeros except a single 1 at diagonal entry \(i\), in which case Eq. (38) can be written
\[\mathbf{p}^{\dagger}\mathbb{D}_{i}\left(\mathbb{G}_{0}+\xi\right)\mathbf{p}=- \mathbf{p}^{\dagger}\mathbb{D}_{i}\mathbf{e}_{\mathrm{inc}}, \tag{39}\]
which must hold for all spatial locations indexed by \(i\). Equation (39) offers an infinite set of local conservation laws that must be satisfied for any (linear) scattering body. Moreover, just as for the global conservation laws, Eq. (39) is domain oblivious. Hence if the constraints of Eq. (39) lead to a bound, then that bound will apply to all sub-domains (or "patterns") contained therein.
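Before using the constraints of Eq. (39) in any optimization, it is worth seeing that they are automatic consequences of the volume-integral equation. The sketch below checks them with random complex-symmetric toy matrices standing in for a real discretized Green's function (an assumption made for brevity); the identity is algebraic and holds regardless of the specific matrices, and summing over \(i\) recovers the global law of Eq. (34).

```python
import numpy as np

# Check that every local conservation law of Eq. (39) follows from the
# Lippmann-Schwinger equation [G0 + xi] p = -e_inc (toy matrices, assumed for brevity).
rng = np.random.default_rng(0)
n = 10

A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
G0 = 0.5 * (A + A.T)                               # reciprocity: complex-symmetric G0
xi = np.diag(-1.0 + 0.05j + rng.random(n))         # toy material parameter xi = -1/chi
e_inc = rng.standard_normal(n) + 1j * rng.standard_normal(n)

p = np.linalg.solve(G0 + xi, -e_inc)               # "solve Maxwell" for the polarization

lhs = np.conj(p) * ((G0 + xi) @ p)                 # i-th entry: p^dag D_i (G0 + xi) p
rhs = -np.conj(p) * e_inc                          # i-th entry: -p^dag D_i e_inc
print("max violation of Eq. (39):", np.max(np.abs(lhs - rhs)))   # ~ machine precision
print("global law, Eq. (34):", np.sum(lhs), "=", np.sum(rhs))
```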
There is a systematic procedure that one can follow for identifying fundamental limits using the constraints of Eq. (39). If one discards the Maxwell differential (or integral) equations, and only imposes the constraints of Eq. (39), the resulting optimization problem has the form of a **quadratically constrained quadratic program**, or **QCQP**. QCQPs arise across many areas of science and engineering [135, 136, 137, 138, 139, 140], and there are many mathematical approaches for solving them. One in particular is useful for identifying bounds: one can relax a QCQP to a **semidefinite program (SDP)** in a higher-dimensional space [141, 137], which can be solved for its global optimum by standard algorithms in polynomial time [142, 143]. The solution of the SDP is guaranteed to be a bound, or fundamental limit, on the solution of the problem of interest. (The semidefinite program can also be regarded as the "dual" [143] of the dual of the QCQP [144], which is another way to see that it leads to bounds.)
Thus local conservation laws lead to a systematic procedure for identifying bounds, or fundamental limits, to electromagnetic quantities of interest. One replaces the governing Maxwell equations with the domain-oblivious conservation-law constraints of Eq. (39), forms a semidefinite program from the objective and constraints, and solves the SDP to find a bound. To avoid the computational complexity of using all of the constraints, one can iteratively select only the "maximally violated" constraints, for rapid convergence to the bound of interest [97]. A mathematically oriented review of bounds related to Eq. (39) is given in Ref. [145]. Extensions of various types are given in Ref. [146] (multi-functionality), Ref. [147] (quantum optimal control), Ref. [148] (efficiency metrics), and Ref. [149] (other physical equations).
### Sum rules
Whereas the three previous sections primarily emphasized fundamental limits across spatial degrees of freedom, at a single frequency, **sum rules** center around _spectral_ degrees of freedom and constraints related to bandwidth. Sum rules are a prime example of applied complex analysis. Most often they are taught and discussed in the context of material susceptibilities, so we will start there, before focusing on our key interest, scattering problems. In the Appendix Sec. 6 we provide a short review of key results from complex analysis, and the intuition behind their derivations, culminating in the Cauchy
residue theorem that is used for all sum rules. Cauchy's residue theorem, for our purposes, can be distilled to the following statement. Consider a function \(f(z)\) that is **analytic** (has no poles) in some domain \(D\) in the complex \(z\) plane. (Below, the analytic variable \(z\) will be the frequency \(\omega\).) Then the function \(f(z)/(z-z_{0})\) has a simple pole at \(z_{0}\), for \(z_{0}\) in \(D\), and any integral of this function along a closed contour in \(D\) containing \(z_{0}\) simplifies to the value of the function at the pole:
\[\oint_{\gamma}\frac{f(z)}{z-z_{0}}\,\mathrm{d}z=2\pi if(z_{0}), \tag{40}\]
where \(f(z_{0})\) is the "residue" of the function \(f(z)/(z-z_{0})\). Now let us put Cauchy's residue theorem to use.
Consider a material susceptibility \(\chi\) that relates an electric field \({\bf E}\) to an induced polarization field \({\bf P}\). Typically we might directly consider the frequency-domain relationship of these variables,
\[{\bf P}(\omega)=\chi(\omega){\bf E}(\omega), \tag{41}\]
where we are suppressing spatial dependencies in these expressions for simplicity. (All of the position dependencies are straightforward.) This multiplicative frequency-domain relation arises from a _convolutional_ time-domain relationship: the polarization field at a given time is related to the electric field at all other times convolved with the susceptibility function (as a function of time):
\[{\bf P}(t)=\int\chi(t-t^{\prime}){\bf E}(t^{\prime})\,{\rm d}t^{\prime}. \tag{42}\]
(We do not use different variables for the time- and frequency-domain definitions; the domain should be clear in each context.) **Causality** is the formal specification that _cause precedes effect_. Material susceptibilities are causal: the polarization field cannot arise before the electric field has arrived, which means that for some origin of time, the susceptibility function is identically zero at all preceding times:
\[\chi(t-t^{\prime})=0\qquad\mbox{ for }t<t^{\prime}. \tag{43}\]
In the usual Fourier-transform relation between the time- and frequency-domain susceptibility functions, then, one can set the lower limit of the time-domain integral to be 0:
\[\chi(\omega)=\frac{1}{2\pi}\int_{-\infty}^{\infty}\chi(t)e^{i\omega t}\,{\rm d }t=\frac{1}{2\pi}\int_{0}^{\infty}\chi(t)e^{i\omega t}\,{\rm d}t. \tag{44}\]
Setting the lower limit of the integral to 0 has an important ramification. Let us assume the susceptibility takes a finite value for all real frequencies.
(Metals are an exception, with divergent susceptibilities at zero frequencies, but known modifications to the rules below can be developed to account for this singularity [150, 151].) This implies that the integral of Eq. (44) converges to the correct finite value at each frequency. Now let us consider a **complex-valued** frequency \(\omega=\omega_{0}+i\varDelta\omega\). If we insert this frequency into Eq. (44), we find:
\[\chi(\omega_{0}+i\varDelta\omega)=\frac{1}{2\pi}\int_{0}^{\infty}\chi(t)e^{i \omega_{0}t}e^{-\varDelta\omega t}\,\mathrm{d}t, \tag{45}\]
which is equivalent to the integral of Eq. (44), except now there is the additional exponential decay term \(e^{-\varDelta\omega t}\) in the integrand. This exponential decay term can only aid in convergence, and under appropriate technical assumptions (e.g. Titchmarsh's theorem [152]), one can prove the intuitive idea that Eq. (45) cannot diverge for any \(\varDelta\omega>0\). This implies that the material susceptibility \(\chi(\omega)\) is analytic in the upper-half of the complex-frequency plane. (Conversely, frequencies in the lower half would have the exponentially diverging term \(e^{\varDelta\omega t}\) in their integrands, which would lead to divergences at certain frequencies, which is where the system _resonances_ are located.) Hence we can use the Cauchy integral theorem of Eq. (40) with \(\chi(\omega)\) as the analytic function in the numerator of the integrand. The typical usage of the integral theorem is to select a pole on the real axis (or, technically, in the limit of approaching the real axis from above), and to use a contour \(C\) that follows the real line, includes a semi-circular deformation around the real-axis pole, and then closes along a semicircle approaching infinity in the upper-half plane. This contour actually does not enclose _any_ poles, instead "side-stepping" the real-axis pole, at a frequency we denote by \(\omega\). Hence we have
\[\oint_{C}\frac{\chi(\omega^{\prime})}{\omega^{\prime}-\omega}\,\mathrm{d} \omega^{\prime}=0. \tag{46}\]
The integral over \(C\) can be broken into three components: the principal-valued integral along the real axis from negative infinity to infinity (skipping \(\omega\)), the semicircular arc going into the upper-half plane, and the semicircular arc rotating clockwise around \(\omega\). The second of these terms is zero (for sufficient decay of \(\chi(\omega)\)), while the third term is simply \(-i\pi\chi(\omega)\) (half of the typical Cauchy residue term since it is half of a circle, with a negative sign for the clockwise rotation). Equating the negative of the third term to the first, we have:
\[i\pi\chi(\omega)=\int_{-\infty}^{\infty}\frac{\chi(\omega^{\prime})}{\omega^{ \prime}-\omega}\,\mathrm{d}\omega^{\prime}. \tag{47}\]
We can take the imaginary part of both sides, and use the symmetry of \(\chi\) around the origin, \(\chi(-\omega)=\chi^{*}(\omega)\), to arrive at one of the **Kramers-Kronig** (KK) relations for a material susceptibility:
\[\mathrm{Re}\,\chi(\omega)=\frac{2}{\pi}\int_{0}^{\infty}\frac{\omega^{\prime}\, \mathrm{Im}\,\chi(\omega^{\prime})}{(\omega^{\prime})^{2}-\omega^{2}}\,\mathrm{d }\omega^{\prime}. \tag{48}\]
The counterpart KK relation relates the imaginary part of \(\chi(\omega)\) to an integral involving the real part. These KK relations are the foundations of sum rules. There are two special pole frequencies \(\omega\) at which we may have additional information about the material response: infinite frequency and zero frequency (statics). In the limit of infinitely large frequencies, all materials become transparent, with a susceptibility that must scale as
\[\chi(\omega)\to-\frac{\omega_{p}^{2}}{\omega^{2}}\qquad\text{as }\omega\to\infty, \tag{49}\]
where \(\omega_{p}\) is a constant proportional to the total electron density of the material [150, 151]. Inserting this asymptotic limit into the KK relation of Eq. (48), we find our first example of a sum rule:
\[\int_{0}^{\infty}\omega\,\mathrm{Im}\,\chi(\omega)\,\mathrm{d}\omega=\frac{ \pi\omega_{p}^{2}}{2}. \tag{50}\]
Equation (50) is known as either the **TRK sum rule** or the \(f\)-**sum rule** [150, 151]. It relates the weighted integral of the imaginary part of the susceptibility to simple constants multiplied by the electron density of the material of interest. The quantity \(\omega\,\mathrm{Im}\,\chi(\omega)\) is proportional to the _oscillator strengths_ in single-electron susceptibility models [153]. Alternatively, in the low-frequency limit, one may know the static refractive index \(n_{0}\) of a given material; inserting \(\omega=0\) in the KK relation of Eq. (48) gives the low-frequency sum rule:
\[\int_{0}^{\infty}\frac{\mathrm{Im}\,\chi(\omega)}{\omega}\,\mathrm{d}\omega= \frac{\pi}{2}\left(n_{0}^{2}-1\right). \tag{51}\]
The two sum rules of Eqs. (50,51) are well-known material sum rules that are useful for spectroscopy [150, 151] as well as for bounds on material properties [154, 155, 156]. We have repeated their well-known derivations to familiarize the reader with the machinery of KK relations and sum rules, which we apply next to scattering properties.
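Both sum rules are easy to verify numerically for any causal susceptibility model. The sketch below checks Eqs. (50) and (51) for a single Lorentz oscillator, \(\chi(\omega)=\omega_{p}^{2}/(\omega_{0}^{2}-\omega^{2}-i\gamma\omega)\), with assumed (arbitrary) oscillator parameters, integrating on a dense logarithmic frequency grid.

```python
import numpy as np

# Numerical check of the f-sum rule, Eq. (50), and the low-frequency sum rule,
# Eq. (51), for an assumed single-Lorentz-oscillator susceptibility.
wp, w0, gamma = 1.0, 1.0, 0.1

def chi(w):
    return wp**2 / (w0**2 - w**2 - 1j * gamma * w)

def integrate(y, x):   # simple trapezoidal rule
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

w = np.logspace(-4, 4, 200001)   # dense log-spaced grid; the truncated tails are negligible
f_sum = integrate(w * chi(w).imag, w)
low_sum = integrate(chi(w).imag / w, w)

print("f-sum rule:        ", f_sum, " vs  pi*wp^2/2        =", np.pi * wp**2 / 2)
print("low-frequency rule:", low_sum, " vs  (pi/2)(n0^2 - 1) =", np.pi / 2 * chi(0.0).real)
```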
Just as the origin for material sum rules was recognition of material susceptibility as a causal (linear) response function, for scattering sum rules we want to start by recognizing that the electromagnetic field \(\mathbf{E}\) generated by a source (presumably current) is also a causal linear response function: \(\mathbf{E}\) cannot be nonzero before the current \(\mathbf{J}\) is nonzero. Hence the electric field at all times before an origin must be zero, which again leads to analyticity in the upper-half of the complex-frequency plane. Yet we do not want KK relations for the electric field at specific points in space; we want KK relations (and sum rules) for relevant power quantities. Typical expressions of interest might be the field intensity, \(|\mathbf{E}(\mathbf{x},\omega)|^{2}\), or the Poynting flux
\((1/2)\,\mathrm{Re}\left[\mathbf{E}^{*}(\mathbf{x},\omega)\times\mathbf{H}(\mathbf{x},\omega)\right]\), at a point \(\mathbf{x}\), but _neither_ of these quantities is analytic in the upper-half plane. The problematic term in each case is \(\mathbf{E}^{*}(\omega)\). Analyticity is not preserved under complex conjugation, and indeed by symmetry we know that \(\mathbf{E}^{*}(\omega)=\mathbf{E}(-\omega)\) on the real line; if we try to continue \(\omega\) into the upper-half plane, the \(-\omega\) argument moves into the lower-half plane, where the resonances reside. Hence \(\mathbf{E}^{*}(\omega)\) can have poles, and the corresponding power terms do not have simple KK relations or sum rules.
We are rescued, again, by the optical theorem. Whereas absorbed and scattered powers always involve conjugated total fields, _extinction_, by virtue of the optical theorem, takes a different form (Eq. (7)), which is proportional to the overlap integral of the conjugate of the incident field with the induced polarization field, \(\int_{V}\mathbf{E}^{*}_{\mathrm{inc}}\cdot\mathbf{P}\). Many common incident fields, such as plane waves of the form \(e^{i\omega x/c}\), are analytic _everywhere_ in the complex plane, and their conjugates _can_ be analytically continued. The polarization field is the product of the analytic material susceptibility with the analytic electric field, and thus is itself analytic. Hence extinction expressions contain a term that will obey KK relations and sum rules, which we denote \(s(\omega)\):
\[P_{\mathrm{ext}}(\omega)=\frac{\omega}{2}\,\mathrm{Im}\underbrace{\int_{V} \mathbf{E}^{*}_{\mathrm{inc}}(\mathbf{x},\omega)\cdot\mathbf{P}(\mathbf{x}, \omega)\,\mathrm{d}\mathbf{x}}_{s(\omega)}. \tag{52}\]
By the arguments laid out above, the quantity \(s(\omega)\) is analytic in the upper-half plane. It satisfies the other required assumptions as well (e.g. sufficient decay at infinity) for incident fields such as plane waves; we can immediately write a KK relation for it:
\[\mathrm{Re}\,s(\omega)=\frac{2}{\pi}\int_{0}^{\infty}\frac{\omega^{\prime}\, \mathrm{Im}\,s(\omega^{\prime})}{(\omega^{\prime})^{2}-\omega^{2}}\,\mathrm{ d}\omega^{\prime}. \tag{53}\]
Notice that the term in the numerator of the integrand is exactly proportional to extinction; hence sum rules for the imaginary part of \(s(\omega)\) (by analogy with the sum rules for \(\mathrm{Im}\,\chi\)) will necessarily be sum rules for extinction. Again paralleling the susceptibility analysis, we can take the limit as \(\omega\to\infty\), in which case
\[s(\omega) =\int\mathbf{E}^{*}_{\mathrm{inc}}(\mathbf{x},\omega)\cdot \mathbf{P}(\mathbf{x},\omega)\,\mathrm{d}\mathbf{x}\] \[\to-\frac{\omega_{p}^{2}}{\omega^{2}}\int_{V}\left|\mathbf{E}_{ \mathrm{inc}}(\mathbf{x},\omega)\right|^{2}\,\mathrm{d}\mathbf{x}\] \[=-\frac{\omega_{p}^{2}}{\omega^{2}}\left|\mathbf{E}_{0}\right|^{2 }V, \tag{54}\]
where \(\mathbf{E}_{0}\) is the (constant) vector amplitude of the plane wave, and \(V\) is the volume of the scatterer. Evaluating the KK relation for \(s(\omega)\), Eq. (53), in the high-frequency limit gives a sum rule for the imaginary part of \(s(\omega)\):
\[\int_{0}^{\infty}\omega\,\mathrm{Im}\,s(\omega)\,\mathrm{d}\omega=\frac{\pi\omega_{p }^{2}}{2}\left|\mathbf{E}_{0}\right|^{2}V, \tag{55}\]
which in turn implies a sum rule for extinction (via Eq. (52)):
\[\int_{0}^{\infty}P_{\mathrm{ext}}(\omega)\,\mathrm{d}\omega=\frac{\pi\omega_{p }^{2}}{4}\left|\mathbf{E}_{0}\right|^{2}V. \tag{56}\]
Equation (56) dictates that the total integrated extinction of any scattering body is fixed by the amplitude of the incident plane wave and the total number of electrons in the scatterer (from the product of \(\omega_{p}^{2}\) with \(V\)), and is otherwise independent of the shape, resonance profile, and any other characteristics of the scattering body.
Just as for a material susceptibility, one can also derive a sum rule for \(P_{\mathrm{ext}}\) by setting \(\omega=0\) in the KK relation for \(s(\omega)\), Eq. (53). The key low-frequency information we can utilize is that the net induced dipole moment of the scatterer is related to the incident field via a polarizability tensor \(\mathbf{\alpha}\). Following a few algebraic steps [157] paralleling the low-frequency material sum rule, one similarly finds a sum rule for the integral of \(P_{\mathrm{ext}}(\omega)/\omega^{2}\). The term \((1/\omega^{2})\,\mathrm{d}\omega\) is exactly proportional to \(\mathrm{d}\lambda\), where \(\lambda=2\pi c/\omega\) is the wavelength, so this sum rule is often written as a sum rule over wavelength:
\[\int_{0}^{\infty}P_{\mathrm{ext}}(\omega)\,\mathrm{d}\lambda=\pi^{2}\mathbf{E }_{0}\cdot\mathbf{\alpha}\mathbf{E}_{0}. \tag{57}\]
There is an additional magnetic polarizability term in materials with a nonzero magnetostatic response [157]. Interestingly, Eq. (57) has different dependencies than Eq. (56): the polarizability has a weak dependence on material, but a strong dependence on shape. The low-frequency sum rule implies that scattering bodies with the same size and shape, but made of different materials, can have nearly identical wavelength-integrated extinctions. Moreover, electrostatic polarizabilities obey "domain monotonicity" bounds that dictate that the quantity \(\mathbf{E}_{0}\cdot\mathbf{\alpha}\mathbf{E}_{0}\) must increase as the scatterer domain increases in size, such that one can bound integrated extinction via high-symmetry enclosures for which the right-hand side of Eq. (57) often takes a simplified analytical form. Taken together, the high- and low-frequency sum rules of Eqs. (56,57) comprise strong constraints on the possible scattering lineshapes of arbitrary scatterers.
Eqs. (56,57) are classical sum rules with a long history. The high-frequency sum rule, Eq. (56), was known at least as early as 1963 [158], when the connection to material-susceptibility sum rules was first made. A specialized version of the low-frequency sum rule, Eq. (57), was first proposed by Purcell in 1969 [159], in order to bound the minimum volume occupied by interstellar dust. It was generalized to arbitrary scattering bodies in Ref. [157], where the monotonicity bounds (originally developed by Jones [160]) were connected to the low-frequency sum rules. For many years, it seemed that plane-wave
extinction might be the _only_ scattering quantity for which sum rules can be derived. In recent years, however, it has been recognized that near-field local density of states has a similar form--it is the real or imaginary part of an amplitude, instead of the squared magnitude of an amplitude--for which sum rules can also be derived. We describe this sum rule and its implications in Sec. 4.2.
## 4 Fundamental limits in the near field
We have set the stage: we have introduced near-field optics, defined many of the response functions of interest, and described tools formulated for electromagnetic-response bounds. In this section we describe how these ingredients come together for bounds and fundamental limits to near-field response. We identify different bounds--and the different techniques required to derive them--based on the frequency range of interest: a single frequency (Sec. 4.1), all frequencies (Sec. 4.2), and finite, nonzero bandwidths (Sec. 4.3). We leave bounds for mode volume, which seemingly requires very different techniques, to the final section of the chapter (Sec. 4.4).
### Single-frequency bounds
In Sec. 3, we described two techniques that can be used to identify single-frequency bounds to any linear-electromagnetic response function of interest: conservation laws and channel decompositions. In this subsection we summarize how one can adapt, specialize, and/or combine those approaches in the near field, for spontaneous-emission and CDOS engineering, Smith-Purcell radiation enhancements, and spectral NFRHT response.
#### 4.1.1 Spontaneous emission
The canonical near-field quantity is LDOS, which as discussed in Sec. 2.1 is proportional to the spontaneous emission rate of an electric dipole at a given location. In a closed system, the LDOS is a sum of delta functions over the modes of the system, in which case the LDOS diverges at the modal frequencies. In an open system, however, the modal intuition no longer applies, leading to the more general Green's-function expression of Eq. (4). This scattering quantity lends itself well to the conservation-law-based scattering-response bounds described in Sec. 3.3.
We can repeat here the Green's function expression for LDOS, which we will denote in this section by \(\rho(\mathbf{x},\omega)\):
\[\rho({\bf x},\omega)=\frac{1}{\pi\omega}\operatorname{Tr}\operatorname{Im}\mathbb{G }({\bf x},{\bf x},\omega). \tag{58}\]
The trace of the Green's function can be computed with a summation over three orthogonal unit vectors \({\bf s}_{j}\), for \(j=1,2,3\), in which case the trace can be interpreted as the incoherent summation of the fields from three dipoles with amplitudes \(\varepsilon_{0}{\bf s}_{j}\). There is an initial impediment to applying the conservation-law framework to this expression: it is not written explicitly as a function of the polarization fields, whose constraints are critical to meaningful bounds. This impediment is easily hurdled: one can decompose the Green's function into its incident and scattered components. The scattered fields are the convolutions of the free-space Green's-function matrix \(\mathbb{G}_{0}\) from the scattering domain to the dipole point; by reciprocity, the overlap of \({\bf s}_{j}\) with \(\mathbb{G}_{0}\) is the field incident upon the scattering body \(V\). By this line of reasoning, for a scalar isotropic medium (the general bianisotropic case is derived in Ref. [98]), one can rewrite LDOS as
\[\rho({\bf x},\omega)=\rho_{0}(\omega)+\frac{1}{\pi\omega}\operatorname{Im} \sum_{j}\int_{V}{\bf E}_{\operatorname{inc},{\bf s}_{j}}\cdot{\bf P}_{s_{j}} \operatorname{d}V, \tag{59}\]
where \(\rho_{0}(\omega)\) is the free-space LDOS (which is position-independent, and given below Eq. (4)), and the \({\bf s}_{j}\) subscript encodes the three dipole orientations. Using the same discretized vector/matrix notation as we initiated with Eq. (25), this expression can equivalently be written
\[\rho({\bf x},\omega)=\rho_{0}(\omega)+\frac{1}{\pi\omega}\operatorname{Im} \sum_{j}{\bf e}_{\operatorname{inc},{\bf s}_{j}}^{T}{\bf p}_{s_{j}}. \tag{60}\]
Now we see that LDOS is a linear function of the polarization fields induced in the scattering body. We want to know the largest possible value of LDOS, of Eq. (60), subject to Maxwell's equations, but of course the latter constraint contains all of the complexity of the design problem. Instead, we drop the Maxwell-equation constraint, and impose only one of the conservation laws of Sec. 3. To start, we can impose the conservation law that absorbed power be smaller than extinguished power, of Eq. (28), which leads to the optimization problem:
\[\begin{split}\max_{{\bf p}_{s_{j}}}&\frac{1}{\pi \omega}\operatorname{Im}\sum_{j}{\bf e}_{\operatorname{inc},{\bf s}_{j}}^{T}{ \bf p}_{{\bf s}_{j}}\\ \text{s.t.}&\left(\operatorname{Im}\xi\right){\bf p }_{{\bf s}_{j}}^{\dagger}{\bf p}_{{\bf s}_{j}}\leq\operatorname{Im}\left({\bf e }_{\operatorname{inc},{\bf s}_{j}}^{\dagger}{\bf p}_{{\bf s}_{j}}\right). \end{split} \tag{61}\]
Treating each dipole orientation \({\bf s}_{j}\) independently, one can find from a Lagrangian analysis that the optimal \({\bf p}_{{\bf s}_{j}}\) comprises a linear combination of \({\bf e}_{\operatorname{inc},{\bf s}_{j}}\) and \({\bf e}_{\operatorname{inc},{\bf s}_{j}}^{*}\); in the near field, where the incident field and its conjugate are nearly identical, and the LDOS is dominated by its scattered-field contribution, we ultimately find the following bound [98]:
\[\rho(\mathbf{x},\omega)\leq\frac{1}{\pi\omega}\frac{|\chi(\omega)|^{2}}{ \operatorname{Im}\chi(\omega)}\sum_{\mathbf{s}_{j}}\left\|\mathbf{e}_{\text{inc },\mathbf{s}_{j}}\right\|^{2}=\frac{1}{\pi\omega}\frac{|\chi(\omega)|^{2}}{ \operatorname{Im}\chi(\omega)}\sum_{\mathbf{s}_{j}}\int_{V}\left|\mathbf{E}_{ \text{inc},\mathbf{s}_{j}}\right|^{2}\,\text{d}\mathbf{x}. \tag{62}\]
Normalizing by the free-space electric LDOS \(\rho_{0}(\omega)\), and performing the integral over an enclosing half-space (and keeping only the term that decreases most rapidly with separation distance \(d\)), one finds [98]:
\[\frac{\rho(\mathbf{x},\omega)}{\rho_{0}(\omega)}\leq\frac{1}{8(kd)^{3}}\frac{| \chi(\omega)|^{2}}{\operatorname{Im}\chi(\omega)}, \tag{63}\]
where \(k=\omega/c\) is the free-space wavenumber. Equation (63) represents our first near-field bound. This bound only depends on two parameters of the system: the separation distance \(d\), relative to the wavenumber, and the **material enhancement factor**,
\[\frac{|\chi(\omega)|^{2}}{\operatorname{Im}\chi}. \tag{64}\]
The material enhancement factor encodes a key tradeoff: a large susceptibility magnitude implies large possible polarization currents, while a large imaginary part of the susceptibility implies losses that necessarily restrict resonant enhancement. In Drude metals with \(\chi=-\omega_{p}^{2}/(\omega^{2}+i\gamma\omega)\), the material enhancement factor is given by \(\omega_{p}^{2}/\gamma\omega\), showing that the largest possible single-frequency response is achievable in materials with large electron densities and small losses. The material enhancement factor is described in further detail in Refs. [98, 161].
The second key parameter is the distance \(d\); the factor \(1/d^{3}\) encodes the dramatic enhancements that are possible in the near field. These enhancements are typically achieved with plasmonic modes, and the factor \(1/d^{3}\) arises from the most rapidly decaying component of the free-space Green's function, \(\sim 1/r^{3}\); squaring this term and integrating over a three-dimensional volume leads to the inverse-cubic dependence. The last point also suggests an important caveat: systems with a different dimensionality must have different scaling laws as a function of separation distance. Designing for 2D materials, for example, leads to integrals over 2D (or very thin) domains, leading to a \(1/d^{4}\) near-field enhancement factor. There are also more slowly increasing terms that arise from the mid-field and far-field contributions to the free-space Green's function.
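For a sense of scale, the sketch below evaluates the bound of Eq. (63) for assumed silver-like Drude parameters (\(\hbar\omega_{p}\approx 9\) eV, \(\hbar\gamma\approx 0.02\) eV) at a 600 nm wavelength and a 10 nm gap; all values are illustrative assumptions rather than fits to tabulated data.

```python
import numpy as np

# Evaluate the near-field LDOS bound of Eq. (63) for assumed Drude parameters.
hbar_omega_p, hbar_gamma = 9.0, 0.02       # assumed Drude parameters [eV]
lam, d = 600e-9, 10e-9                     # wavelength and emitter-surface gap [m]

hbar_omega = 1239.84e-9 / lam              # photon energy [eV]
chi = -hbar_omega_p**2 / (hbar_omega**2 + 1j * hbar_gamma * hbar_omega)
material_factor = abs(chi)**2 / chi.imag   # ~ omega_p^2 / (gamma * omega) for a Drude metal

k = 2 * np.pi / lam
bound = material_factor / (8 * (k * d)**3) # Eq. (63): maximum rho / rho_0
print(f"material enhancement factor: {material_factor:.0f}")
print(f"LDOS enhancement bound:      {bound:.2e}")
```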
Finally, it should be noted that certain constraints of interest can be seamlessly integrated into the optimization problem of Eq. (61). Of particular importance in plasmonics applications is **radiative efficiency**. When one finds a bound on extinction or LDOS, the bound may suggest very large enhancements, but all of that enhancement could be going into material absorption rather than far-field radiation or scattering. Suppose a given ap
plication requires a certain radiative efficiency, such as some fraction \(\eta\) of the total emission going into the far field. This can be written mathematically as the constraint that absorption be smaller than \((1-\eta)\) multiplied by the extinction, or \(P_{\rm abs}\leq(1-\eta)P_{\rm ext}\). Absorption is quadratic in the polarization field, while extinction is linear in the polarization field, such that this expression represents an additional constraint that can be seamlessly incorporated into Eq. (61). Often the bound of interest, with this constraint, is analytically solvable. Ref. [102] identifies precisely such bounds on high-radiative-efficiency plasmonics, prescribing a tradeoff between large response and radiative efficiency. In Ref. [102] it is not only shown that high-radiative-efficiency bounds can be derived; it is also shown that hybrid dielectric-metal designs can approach the bounds, _and_ that they surpass the same fundamental limits evaluated for metal-only structures. This example showcases the power of using bounds to understand the broader landscape of a photonics application area of interest.
#### 4.1.2 CDOS
Bounds to CDOS can be found along very similar lines to the LDOS bounds of above. We can define the trace of the CDOS via Eq. (9), taking
\[\rho({\bf x}_{1},{\bf x}_{2},\omega)=\frac{1}{\pi\omega}\operatorname{Tr} \operatorname{Im}\mathbb{G}({\bf x}_{1},{\bf x}_{2},\omega). \tag{65}\]
Then, we can separate out a scattered contribution coming from the polarization fields induced in the scatterer, just as for LDOS, and when this term dominates (i.e. the geometry primarily mediates the CDOS), we have:
\[\rho({\bf x}_{1},{\bf x}_{2},\omega)=\frac{1}{\pi\omega}\operatorname{Im}\sum_{j}{\bf e}_{\rm inc, {\bf s}_{j},{\bf x}_{1}}^{T}{\bf p}_{{\bf s}_{j},{\bf x}_{2}}, \tag{66}\]
where the position subscripts on \({\bf e}_{\rm inc}\) and \({\bf p}\) denote the source positions of the \({\bf s}_{j}\)-polarized dipoles. Hence in CDOS the field incident from one position is overlapped with the polarization field induced by a source from a second position. The bound for CDOS will be identical to that of Eq. (62), but with \(\|{\bf e}_{\rm inc,{\bf s}_{j}}\|^{2}\) replaced by \(\|{\bf e}_{\rm inc,{\bf s}_{j},{\bf x}_{1}}\|\|{\bf e}_{\rm inc,{\bf s}_{j},{ \bf x}_{2}}\|\). Finally, normalizing by free-space LDOS and dropping all except the most rapidly varying terms as a function of separation distances \(d_{1}\), \(d_{2}\), one arrives at the bound [7]
\[\frac{\rho({\bf x}_{1},{\bf x}_{2},\omega)}{\rho_{0}(\omega)}\leq\frac{1}{4k^ {3}\sqrt{d_{1}^{3}d_{2}^{3}}}\frac{|\chi(\omega)|^{2}}{\operatorname{Im}\chi( \omega)}. \tag{67}\]
The discussion of the terms that appeared in the LDOS bound of Eq. (63) can be translated almost seamlessly here: the same material dependence shows up, corresponding to the same possibilities for plasmonic enhancement, and the same distance dependencies due to the same enhancements of the near
fields of the two dipoles. There are likely two further enhancements that can be made to Eq. (67). First, Eq. (67) is a factor of 2 larger than Eq. (63), when the former is evaluated in the limit as \(\mathbf{x}_{1}\to\mathbf{x}_{2}\). This is almost certainly because the bound of Eq. (67) in Ref. [7] came from evaluating bounds for each diagonal element, simplifying, _then_ taking the trace. Taking the trace and then simplifying the bound would likely remove this factor of 2. Second, the bound of Eq. (67) does _not_ depend on the distance between the two dipoles, \(d_{12}\). This may be physical in certain limits, e.g. when a plasmon can maintain its amplitude in propagating from one dipole to the other, but may not be physical when such propagation is not possible, and one would expect improved bounds to capture this. It is likely true that applying the many-conservation-law approach of Sec. 3.3 would incorporate such effects. Nevertheless, Eq. (67) is a good starting point to understand the upper limits to engineering CDOS in photonic environments.
#### 4.1.3 Smith-Purcell radiation
Another exciting application area for the single-frequency bound approach is to Smith-Purcell radiation, which is the radiation that occurs when a free electron passes near a structured material. A constant-velocity free electron produces only a near field, with no far-field component, but when the evanescent wave interacts with grating-like structures, the gratings can couple the near fields to propagating far fields, leading to a release of energy from the electron in the form of electromagnetic radiation. The natural question, then, is how large this energy release can be?
Mathematically, this question is identical to the question of the work done by a dipole (i.e., LDOS), except that the incident field is different in this case, and is given by Eq. (6). Maximizing the overlap of _this_ incident field with the induced polarization field, subject to the same constraint of Eq. (61), leads to a bound on the Smith-Purcell emission spectral probability given by [104]
\[\Gamma(\omega)\leq\frac{\alpha}{2\pi c}\frac{|\chi|^{2}}{\operatorname{Im} \chi}\frac{L\theta}{\beta}\left[(\kappa_{\rho}d)K_{0}(\kappa_{\rho}d)K_{1}( \kappa_{\rho}d)\right], \tag{68}\]
where \(\Gamma=P/\hbar\omega\) for emission power \(P\), \(\alpha\) is the fine-structure constant, \(\beta=v/c\) is the normalized electron velocity, \(L\) and \(\theta\) are the height and opening azimuthal angle of the cylindrical sector containing the patterned material, \(\kappa_{\rho}=k/\beta\gamma\) is the wavenumber divided by \(\beta\) and the Lorentz factor \(\gamma\), \(d\) is the distance of the beam from the surface, and \(K_{n}\) is the modified Bessel function of the second kind. Although the exact expression is somewhat complex, we see that Smith-Purcell radiation also directly benefits from the material enhancement factor \(|\chi|^{2}/\operatorname{Im}\chi\). A seemingly surprising conclusion also emerged from Eq. (68): _slow_ electrons, at small enough separations, can lead to _greater_ radiation enhancements than _fast_ (i.e. high-energy) electrons.
Constant-velocity electrons whose speed is smaller than the speed of light in the background medium do not radiate; they produce only near fields. But high-speed electrons are closer to surpassing the Cherenkov threshold, and hence the fields they generate decay more slowly, out to larger distances. By contrast, low-speed electrons have very strong but very tightly confined near fields. But if one brings a patterned surface close enough, the strong very near fields of slow electrons have greater potential for radiation enhancements than the more moderate near fields of fast electrons.
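The velocity tradeoff can be read off directly from the beam-dependent factor of Eq. (68). The sketch below evaluates \((1/\beta)(\kappa_{\rho}d)K_{0}(\kappa_{\rho}d)K_{1}(\kappa_{\rho}d)\) for a few beam velocities at two assumed beam-surface gaps (the wavelength and gaps are illustrative values): at the larger gap the fastest electrons win, while at the smaller gap the slowest ones do.

```python
import numpy as np
from scipy.special import kv   # modified Bessel functions of the second kind

# Velocity/distance factor of the Smith-Purcell bound, Eq. (68).
lam = 800e-9                   # assumed emission wavelength [m]
k = 2 * np.pi / lam

for d in (100e-9, 5e-9):       # assumed beam-surface gaps [m]
    print(f"gap d = {d * 1e9:.0f} nm")
    for beta in (0.1, 0.3, 0.7, 0.95):
        gamma_L = 1.0 / np.sqrt(1.0 - beta**2)   # Lorentz factor
        kr = k / (beta * gamma_L)                # kappa_rho in Eq. (68)
        factor = (kr * d) * kv(0, kr * d) * kv(1, kr * d) / beta
        print(f"  beta = {beta:.2f}: factor = {factor:.2e}")
```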
Some of the general trends, and absolute numerical values, of the bound of Eq. (68) were validated theoretically and experimentally in Ref. [104]. In particular, Fig. 2 shows an experimental setup for measuring the Smith-Purcell radiation for electron beams with varying energies, as well as designed gold-on-silicon gratings whose parameters were optimized for maximum response. The key result is shown in panel (d), where the grey region indicates the fundamental bounds, as a function of photon wavelength, with some width to account for experimental uncertainties. The colored data points are quantitatively measured probabilities (with no fitting parameters), showing both that the quantitative values of the bounds are nearly approachable, and that the complex wavelength dependence (emerging from an interplay between the material enhancement factor and the optical near fields) correctly captures the response of high-performance designs.
Figure 2: The bounds of Eq. (68) dictate upper limits to Smith–Purcell emission rates. (a–d) The experiments of Ref. [104] quantitatively confirm that designed metallic gratings can approach the fundamental performance limits. (Adapted from Ref. [104].)
#### Spectral NFRHT
Near-field radiative heat transfer, NFRHT, introduced in Sec. 2.5, offers an extraordinary challenge for fundamental limits. It comprises rapidly decaying, large-area, broadband thermal sources for which little has been understood about upper bounds for quite some time. While we tackle the question of broadband enhancements in Sec. 4.3, in this section we describe the recent progress in understanding maximum NFRHT at a single frequency. There are three key results that we can highlight: channel bounds for planar bodies [55; 58], material-loss bounds [103], and an amalgamation of the two [162; 163].
Channel bounds to NFRHT are described as "Landauer bounds," due to their similarities with Landauer transport. For planar bodies with in-plane translational (and therefore rotational) symmetries, the in-plane wavenumber is a conserved quantity, and the energy flux from one body to another can be decomposed into propagating and evanescent plane-wave channels with no cross-channel scattering. One can decompose the fields emanating from the emitting body into normalized plane-wave modes and insert them into the fluctuation-averaged flux, i.e., the average of the integral \(\frac{1}{2}\int_{A}\mathbf{E}\times\mathbf{H}^{*}\cdot\hat{\mathbf{n}}\) over a separating plane \(A\) with normal vector \(\hat{\mathbf{n}}\). This results in an expression for the flux rate \(\Phi(\omega)\), of Eq. (20) and Eq. (21), given by
\[\Phi(\omega)=\sum_{j=s,p}\frac{1}{2\pi}\int\frac{\mathrm{d}^{2}\mathbf{\kappa}}{4 \pi^{2}}T_{j}^{12}(\omega,\kappa,d), \tag{69}\]
where \(\mathbf{\kappa}\) is the in-plane wave propagation constant (and \(\kappa\) its magnitude), \(j\) is a polarization index, \(k_{0}\) is the free-space wavenumber (separating propagating channels, \(\kappa<k_{0}\), from evanescent ones, \(\kappa>k_{0}\)), and the \(T_{j}\) are "transmission coefficients," which depend on the specific Fresnel reflection coefficients of the two interfaces [58]. This expression has an elegant interpretation: NFRHT is the composition of plane-wave fluxes, each contributing with a weight \(T_{j}\). Moreover, the coefficients \(T_{j}\) are bounded above by \(1\), for both the propagating and evanescent waves [55; 58; 164]. Then, if there is a limit to the largest wavenumber across which a nonzero transmission can be achieved, one will have a bound on the maximum spectral RHT.
Hence it is possible to identify a maximal rate of NFRHT: the power transferred with "Landauer" transmission unity over all possible plane waves [55; 164]. While intuitive, however, this bound has two serious drawbacks. The first is that if one literally computes the integral of Eq. (69) over all possible waves, the result is infinite, as there are an infinite number of plane-wave channels. Of course one cannot reasonably expect to achieve unity transmission over channels with infinitely large in-plane wavenumbers (as they decay exponentially fast), implying there must be a maximal channel at which the sum should be terminated. But how to choose this value? One proposal, from Ref. [55], was that the maximal accessible channel should be proportional to \(1/a\), where \(a\) is the lattice spacing of the material; the reasoning being that beyond this limit the use of a continuum model of the materials
would not be valid. Another proposal, from Ref. [164], is that the maximal accessible channel wavenumber is given by \(k_{\rm max}=1/d\), where \(d\) is the separation between the two bodies; the reasoning being that the exponential decay of the evanescent waves makes it difficult to achieve large transmission beyond \(1/d\). Each of the resulting bounds (one from \(k_{\rm max}=1/a\), and the other from \(k_{\rm max}=1/d\)) has shortcomings: the lattice-spacing-defined bound is extraordinarily high for any reasonable lattice constant, well beyond all other bounds discussed below. And the separation-defined bound is in fact not a true bound: it can be superseded with reasonable material parameters [103], which in fact do show non-trivial transmission beyond \(1/d\). Hence the two known versions of the bound are either far too large, or surpassable.
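A quick numerical comparison of the two cutoff proposals illustrates the scale of the problem. The sketch below evaluates Eq. (69) with unity transmission for every channel below \(k_{\rm max}\), so the numerical prefactor simply follows the normalization as written in Eq. (69); the gap and lattice constant are assumed, illustrative values.

```python
# Sketch: the two proposed channel cutoffs for the "Landauer" NFRHT bound.
# With unity transmission for both polarizations and all |kappa| < k_max,
# Eq. (69) integrates to Phi_max = k_max^2 / (4*pi^2) (using the normalization
# as written in Eq. (69)).  The gap d and lattice constant a are assumed values.
import numpy as np

def phi_max(k_max):
    return k_max**2 / (4 * np.pi**2)

d = 10e-9        # vacuum gap (assumed)
a = 0.5e-9       # lattice constant (assumed)

phi_d, phi_a = phi_max(1.0 / d), phi_max(1.0 / a)
print(f"k_max = 1/d : Phi_max = {phi_d:.3e}")
print(f"k_max = 1/a : Phi_max = {phi_a:.3e}")
print(f"ratio       : {phi_a/phi_d:.0f}x  (= (d/a)^2)")
# The lattice-spacing cutoff exceeds the separation cutoff by (d/a)^2 -- here a
# factor of 400 -- which is why the former is "extraordinarily high" in practice.
```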
The second serious drawback of using Eq. (69) is that it only applies to planar bodies with translational symmetry in all in-plane directions. The use of conservation laws for bounds, discussed next, leads to bounds that can apply to planar bodies with any patterning, while also being tighter than the channel bounds resulting from Eq. (69).
The first use of conservation laws for spectral NFRHT bounds appeared in Ref. [103]. The mathematical procedure is sufficiently complex that we will not go through it in detail here, but the intuition can be explained. The idea is to use the global conservation law requiring \(P_{\rm abs}\leq P_{\rm ext}\) in the spectral NFRHT problem. The difficulty is that the sources are embedded _within_ one of the scattering bodies, which leads to divergences if one blindly applies the constraint \(P_{\rm abs}\leq P_{\rm ext}\). However, the radiative exchange of heat can be decomposed into two subsequent scattering problems, both of which have sources separated from scatterers. In the first step, the incident field is given by the field emanating from body 1 _in the presence of body 1_, with only the second body serving as the scatterer. The absorption in this second body is bounded by the extinction by this second body, which leaves a bound in terms of the second material and the "incident field" emanating from body 1. Of course, we do not know exactly what this field is for any pattern. At this point, however, we can use reciprocity to rewrite the field emanating from body 1 in terms of fields emanating from the free space of body 2's domain, being absorbed by body 1. The constraint \(P_{\rm abs}\leq P_{\rm ext}\) can be applied to this scattering process again, ultimately yielding a single-frequency, flux-per-area \(A\) bound given by [103]
\[\frac{\Phi(\omega)}{A}\leq\frac{1}{16\pi^{2}d^{2}}\frac{|\chi_{1}|^{2}}{\operatorname{Im}\chi_{1}}\frac{|\chi_{2}|^{2}}{\operatorname{Im}\chi_{2}}, \tag{70}\]
where \(d\) is the separation distance between the two bodies, and \(\chi_{1}\) and \(\chi_{2}\) are their optical susceptibilities, respectively. This bound includes two key dependencies: the material enhancement factor \(|\chi|^{2}/\,{\rm Im\,}\chi\), and a \(1/d^{2}\) dependence arising from the rapidly decaying near fields in the electromagnetic Green's function. The bound of Eq. (70) is promising, as it suggests significant possible enhancements of spectral NFRHT, and it is plausible: the actual NFRHT
of two planar bodies with equal susceptibilities, on resonance, is given by \(\Phi(\omega)/A=1/(4\pi^{2}d^{2})\ln\left[|\chi|^{4}/(4(\operatorname{Im}\chi)^{2})\right]\), with nearly identical dependencies as Eq. (70), except for the logarithmic dependence on the material enhancement. Can this be overcome, with instead linear enhancements in \(|\chi|^{2}/\operatorname{Im}\chi\)? For some materials, the answer is "yes," as shown with computational inverse design in Ref. [165]. More generally, however, such linear enhancements are not generic, and one can further tighten the bound of Eq. (70).
Refs. [162, 163] showed that one can tighten the bound of Eq. (70) by combining the use of a global conservation law with that of a channel decomposition. If one decomposes the _general_ (not specific to translation-symmetric) scattering response into plane waves, and further imposes conservation laws for absorption and extinction (of the bodies in tandem as well as in isolation), then a long mathematical process leads to a tighter bound. If we define \(\mathbb{G}_{0,AB}\) to be the free-space Green's function matrix for sources in body \(A\) to measurement points in body \(B\), and \(g_{i}\) the singular values of \(\mathbb{G}_{0,AB}\), then the resulting bound is given by [162]:
\[\Phi(\omega)\leq\sum_{i}\left[\frac{1}{2\pi}\Theta(\zeta_{A}\zeta_{B}g_{i}^{2} -1)+\frac{2}{\pi}\frac{\zeta_{A}\zeta_{B}g_{i}^{2}}{(1+\zeta_{A}\zeta_{B}g_{i }^{2})^{2}}\Theta(1-\zeta_{A}\zeta_{B}g_{i}^{2})\right], \tag{71}\]
where \(\zeta_{A,B}=|\chi_{A,B}|^{2}/\operatorname{Im}\chi_{A,B}\). One can see that the expression of Eq. (71) has components of both material response (in \(\zeta_{A,B}\)) and channels (in the \(g_{i}\) factors) in it. Strikingly, in the near-field limit, the expression of Eq. (71) reduces to [163]
\[\begin{split}\Phi(\omega)\frac{d^{2}}{A}\leq&\frac{ 1}{4\pi^{2}}\ln\left(1+\frac{\zeta_{A}\zeta_{B}}{4}\right)\\ &+\frac{\Theta(\zeta_{A}\zeta_{B}-4)}{8\pi^{2}}\left\{\ln(\zeta_ {A}\zeta_{B})+\frac{1}{4}\left[\ln\left(\frac{\zeta_{A}\zeta_{B}}{4}\right) \right]^{2}-2\ln\left(1+\frac{\zeta_{A}\zeta_{B}}{4}\right)\right\},\end{split} \tag{72}\]
which correctly captures the _logarithmic_ material dependence that is seen in planar bodies. This significantly tightens the bound of Eq. (70) for plasmonic materials such as silver or gold which have large material enhancement factors \(|\chi|^{2}/\operatorname{Im}\chi\). The genesis and utility of the bounds of Eqs. (70)-(72) are illustrated in Fig. 3, which contains the derivation of the conservation-law bounds of Eq. (70) in Fig. 3(a), the design of structures showing the material dependence of Eq. (70) in Fig. 3(b), and the more general combination of conservation law and channel-decomposition approach of Eq. (72) in Fig. 3(c).
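The following sketch tabulates the three expressions discussed above: the material-loss bound of Eq. (70), the on-resonance planar result quoted earlier, and the near-field limit of the tightened bound, Eq. (72), all for two bodies of equal susceptibility. The chosen values of \(\zeta=|\chi|^{2}/\operatorname{Im}\chi\) are arbitrary illustrations.

```python
# Sketch: spectral NFRHT per area, normalized as Phi(omega)*d^2/A, for equal
# susceptibilities: the loss-only bound of Eq. (70), the on-resonance planar
# expression quoted in the text, and the near-field limit of Eq. (72).
# The zeta values are arbitrary illustrations.
import numpy as np

def bound_eq70(zeta):
    return zeta**2 / (16 * np.pi**2)

def planar_on_resonance(zeta):
    return np.log(zeta**2 / 4) / (4 * np.pi**2)

def bound_eq72(zeta):
    zz = zeta * zeta                                   # zeta_A * zeta_B
    val = np.log(1 + zz / 4) / (4 * np.pi**2)
    if zz > 4:
        val += (np.log(zz) + 0.25 * np.log(zz / 4)**2
                - 2 * np.log(1 + zz / 4)) / (8 * np.pi**2)
    return val

print(f"{'zeta':>8} {'Eq.(70)':>12} {'planar':>12} {'Eq.(72)':>12}")
for zeta in [10.0, 1e2, 1e3, 1e4]:
    print(f"{zeta:8.0f} {bound_eq70(zeta):12.3e} "
          f"{planar_on_resonance(zeta):12.3e} {bound_eq72(zeta):12.3e}")
# Eq. (70) grows quadratically with zeta, while the planar expression and the
# tightened bound of Eq. (72) grow only (poly)logarithmically.
```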
Generically, it is not possible to find "tighter" single-frequency dependencies than those that arise in Eq. (72), as both the distance and material-enhancement dependencies are achievable in realistic-material planar designs. The only possible improvements are the coefficient prefactors, as well as the
correct material dependence away from the surface-plasmon frequency, suggesting that Eq. (72) indeed captures the key tradeoffs in single-frequency NFRHT. A key remaining question, then, is: what is possible over a broad bandwidth? This question is resolved in Sec. 4.3.
### 4.2 All-frequency sum rules
In Sec. 3.4, we developed the key elements needed for sum rules: a causal linear response function, an objective that does not involve the conjugate of that function, and certain technical conditions (e.g. sufficient decay). Optical extinction is the prototype example, as the optical theorem prescribes that extinction be proportional to the imaginary part of the overlap of the incident field with the induced polarization field, a quantity that is analytic (for plane-wave incident fields) in the upper-half plane. Within the past few years [166; 7], it has been realized that there is a near-field analog of extinction: the local density of states, or LDOS. As derived in Sec. 2.1, (electric) LDOS is given by the trace of the imaginary part of the (electric) Green's
Figure 3: A collection of bounds on single-frequency near-field radiative heat transfer. (a) The approach of Ref. [103] using material loss as the only constraint, exploiting reciprocity to bound the response given that the sources are embedded _within_ one of the arbitrarily patterned scattering bodies. (Adapted from Ref. [103].) (b) Bounds and designs from Ref. [165] showing the feasibility, in specific regimes, of achieving enhancements proportional to the square of the material enhancement factor \(|\chi|^{2}/\operatorname{Im}\chi\). (Adapted from Ref. [165].) (c) Tightened bounds from Refs. [162; 163], precluding the possibility of extraordinary response at frequencies away from the surface-polariton frequency of a material of interest. (Adapted from Ref. [163].)
function, evaluated at the source location:
\[\text{LDOS}(\mathbf{x},\omega)=\text{Im}\,\text{Tr}\left[\frac{1}{\pi\omega} \mathbb{G}(\mathbf{x},\mathbf{x},\omega)\right]. \tag{73}\]
The key similarity with extinction is that LDOS is the imaginary part of an amplitude, rather than a squared norm (which depends on the complex conjugate of that amplitude). At first blush, then, it would appear that one can port exactly the derivation used for extinction to derive sum rules for LDOS. However, there are three obstacles that must be overcome.
First, LDOS diverges at high frequencies. Ignoring the effects of a scatterer (which are effectively infinitely far away at infinitely large frequencies), and as seen below Eq. (4), the free-space photon density of states scales as \(\omega^{2}\) as frequency goes to infinity. A diverging LDOS violates the asymptotic-decay requirement of KK relations, prohibiting a sum rule. The resolution, however, is straightforward: one should subtract the free-space LDOS \(\rho_{0}(\omega)\) from the total LDOS, leaving only the scatterer-based contribution \(\rho_{s}(\omega)\):
\[\begin{split}\rho_{s}(\mathbf{x},\omega)=\rho(\mathbf{x},\omega)-\rho_{0}(\omega)&=\text{Im}\,\text{Tr}\left[\frac{1}{\pi\omega}\left(\mathbb{G}(\mathbf{x},\mathbf{x},\omega)-\mathbb{G}_{0}(\mathbf{x},\mathbf{x},\omega)\right)\right]\\ &=\text{Im}\,\text{Tr}\left[\frac{1}{\pi\omega}\mathbb{G}_{s}(\mathbf{x},\mathbf{x},\omega)\right],\end{split} \tag{74}\]
where we define \(\mathbb{G}_{s}\) as the scattered-field part of the Green's function. After isolating the scatterer's contribution to the LDOS, one can verify that the "scattered LDOS" indeed decays sufficiently quickly at high frequencies [7]. Hence this approach of subtracting the free-space LDOS, an approach generalized in "dispersion relations with one subtraction" [152], resolves the first issue of diverging LDOS.
The second issue is that one is not free to arbitrarily choose the pole frequency for a KK relation involving the scattered LDOS. The Green's function itself is finite and generically nonzero at every real frequency, but by definition the LDOS includes a factor of \(1/\omega\), as in Eq. (73). (This does not correspond to a divergent LDOS at zero frequency, as the _imaginary_ part of the Green's function goes to zero at zero frequency, but the real part does not generically go to 0.) This function, then, already has a pole at the origin. One could try to move the pole to infinite frequency, for example by multiplying by \(\omega/(\omega-\omega_{0})\) and taking the limit as \(\omega_{0}\to\infty\), but the high-frequency asymptotic behavior of LDOS is quite complicated. Hence, there is likely only a single meaningful sum rule for near-field LDOS, which arises from the intrinsic pole at zero frequency.
The third issue is that the real part of the Green's function diverges, since the source and measurement locations coincide; sum rules relate the integral of the imaginary part to the real part (or vice versa), which leads to the impermissible evaluation of an infinite quantity. (Such an integral _should
diverge; the free-space LDOS increases with frequency, meaning that any integral over all frequencies will of course diverge.) One resolution to this issue was proposed in Ref. [167]: to remove the longitudinal contribution to the Green's function, which removes the singularity and suggests that over all frequencies there can be no net change in spontaneous-emission enhancements. But this removal thereby precludes the possibility for near-to-far-field coupling that is crucial for spontaneous-emission engineering, which is why a conventional refractive-index sum rule is recovered. Instead, it was recognized in Refs. [166, 7] that there is an alternative mechanism for overcoming this obstacle: to subtract out the free-space LDOS term from the total term. The free-space term is the one responsible for the diverging real part, yet the free-space LDOS is exactly known and hence there is no need for a KK relation for that part anyhow. Hence this obstacle is resolved by the same procedure as the first one, and we can proceed to deriving a scattered-LDOS sum rule.
The hemispherical contour (with hemispherical bump at the origin), in tandem with the same Cauchy-residue arguments for far-field sum rules in Sec. 3.4, leads to a sum rule for \(\rho-\rho_{0}\) analogous to the far-field case [7]:
\[\int_{0}^{\infty}\rho_{s}(\omega,\mathbf{x})\,\mathrm{d}\omega=\frac{1}{2}\, \mathrm{Re}\,\mathrm{Tr}\,\mathbb{G}_{s}(\mathbf{x},\mathbf{x})\big{|}_{\omega =0}=\alpha_{\mathrm{LDOS}}. \tag{75}\]
Now we have connected the all-frequency scattered-field component of electric LDOS to its electrostatic Green's function. Is that informative? It turns out to be quite informative, because there are near-field "domain monotonicity" theorems [7] that ensure that this shape-dependent Green's-function term is bounded above by its form in any enclosure, and we can choose high-symmetry enclosures where it has a simple analytical form. For example, for
Figure 4: (a) Sum rules, derived using the techniques of Sec. 3.4 and the contour on the lower left, impose strong constraints on LDOS lineshapes. (b) Electric LDOS of various material half-spaces and 2D sheets, with different resonance peaks and bandwidths. The inset, however, shows that the integral converges to identical values for each scenario. (c) Similarly with magnetic LDOS, whose sum rule is now zero. The sum rules are for the scattered-field contributions to the LDOS, which can be negative at frequencies where spontaneous emission is suppressed by the presence of a scatterer. (Adapted from Ref. [7].)
a planar half-space, the near-field electrostatic constant is simply
\[\alpha_{\mathrm{LDOS,plane}}=\frac{1}{16\pi d^{3}}\left[\frac{\varepsilon(0)-1}{ \varepsilon(0)+1}\right], \tag{76}\]
where \(\varepsilon(0)\) is the zero-frequency (electrostatic) permittivity. For conductive materials whose permittivity diverges at zero frequency, the corresponding fraction in Eq. (76) is simply 1, which can also be used as a general bound for any material. Notably, for the _magnetic_ LDOS above an electric material, the right-hand side of the counterpart to Eq. (76) is _zero_: the scattering contribution to the magnetic LDOS must average out to zero (i.e., it provides suppression and enhancement of the free-space LDOS in equal amounts).
An example of the utility of the LDOS sum rule is given in Fig. 4. The electric LDOS is shown for three typical metals: gold (Au), silver (Ag), and aluminum (Al), as well as for a single graphene sheet (with Fermi level 0.6 eV). These four systems show LDOS peaks at quite different frequencies, from below 1 eV to beyond 10 eV, with very different quality factors leading to quite different "spreads" in their spectral response. Yet as is made clear by the inset of Fig. 4, the integrated response is exactly equal for each of these systems, as must be true from Eq. (76) (the zero-frequency material factor \([\varepsilon(0)-1]/[\varepsilon(0)+1]\) is exactly 1 for each of these conductive systems). Sum rules illuminate unifying principles that must apply across seemingly disparate systems.
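The collapse of the integrated curves in Fig. 4 can be reproduced with a few lines of numerics. In the quasistatic limit implied by Eq. (76), the scattered LDOS above a half-space is proportional to \(\operatorname{Im}[(\varepsilon-1)/(\varepsilon+1)]/\omega\), so the sum rule reduces to the statement that \(\int_{0}^{\infty}\operatorname{Im}[(\varepsilon(\omega)-1)/(\varepsilon(\omega)+1)]\,\mathrm{d}\omega/\omega=\pi/2\) for any conductor, independent of its dispersion. The Drude parameters below are arbitrary illustrations, not fits to Au, Ag, or Al.

```python
# Sketch: material independence of the LDOS sum rule, Eqs. (75)-(76).
# In the quasistatic limit the scattered LDOS above a half-space is
# proportional to Im[(eps-1)/(eps+1)]/omega, so the sum rule amounts to
#   int_0^inf Im[(eps(w)-1)/(eps(w)+1)] dw/w = (pi/2)*[eps(0)-1]/[eps(0)+1],
# which equals pi/2 for any conductor.  The Drude parameters are arbitrary.
import numpy as np
from scipy.integrate import quad

def eps_drude(w, wp, gamma):
    return 1.0 - wp**2 / (w**2 + 1j * gamma * w)

def integrand(w, wp, gamma):
    e = eps_drude(w, wp, gamma)
    return ((e - 1) / (e + 1)).imag / w

for wp, gamma in [(9.0, 0.1), (3.8, 0.05), (15.0, 0.6)]:     # eV-scale, assumed
    w_sp = wp / np.sqrt(2)                                    # surface-plasmon peak
    val, _ = quad(integrand, 1e-6, 50 * wp, args=(wp, gamma),
                  points=[w_sp], limit=500)
    print(f"wp = {wp:5.1f}, gamma = {gamma:5.2f}:  integral = {val:.4f}"
          f"   (pi/2 = {np.pi/2:.4f})")
# Very different resonance frequencies and linewidths all integrate to the same
# value, mirroring the collapse of the curves in the inset of Fig. 4.
```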
### 4.3 Finite, nonzero bandwidth
The techniques of the previous two sections apply to single-frequency and all-frequency scenarios. In this section, we probe an intermediate regime: finite, nonzero bandwidth. Techniques that work for any arbitrary bandwidth would be tantalizingly powerful, as they would incorporate the single- and all-frequency results as asymptotic limits of a more general theory. Yet the techniques of the previous section would seem incapable of extension to nonzero, finite bandwidths: there is no single scattering problem for which power-conservation laws can be imposed, nor can the contour integrals of the sum-rule approaches be easily modified to a finite bandwidth. In this section, we describe two recently developed approaches to tackle finite-bandwidth bounds: first, transforming bandwidth-averaged response to a complex frequency (largely following Ref. [7]), and second, identifying an oscillator-based representation of any scattering matrix (largely following Ref. [168]).
#### 4.3.1 Complex-frequency bounds
Ref. [7] recognized an intermediate route that utilized both techniques in one fell swoop. The idea can be summarized succinctly: finite-bandwidth average response can be transformed to a scattering problem at a single, _complex_-valued frequency, where quadratic constraints analogous to power conservation can be imposed. The complex frequency accounts for bandwidth, while the power-conservation analog imposes a finite bound. We now develop this intuition mathematically.
To compute the bandwidth average of a response function such as LDOS, one must define a "window function" that encodes the center frequency, the bandwidth, and the nature of the averaging. A common choice is a linear combination of step functions, but this choice turns out to be mathematically treacherous. A simple (and mathematically serendipitous) choice is a Lorentzian function. Uses of tailored window functions for bandwidth averaging were first proposed in Refs. [169, 17]; in the first, bandwidth-averaged extinction was analyzed for scaling laws for optical cloaking, while in the second, they were used to regularize the computational inverse design of maximum LDOS. Our quantity of interest, the frequency-averaged LDOS, \(\langle\rho\rangle\), can be written [7]
\[\langle\rho\rangle=\int_{-\infty}^{\infty}\rho(\omega)H_{\omega_{0},\Delta \omega}(\omega)\,\mathrm{d}\omega, \tag{77}\]
where \(H_{\omega_{0},\Delta\omega}(\omega)\) is the Lorentzian window function,
\[H_{\omega_{0},\Delta\omega}(\omega)=\frac{\Delta\omega/\pi}{(\omega-\omega_{0} )^{2}+(\Delta\omega)^{2}}, \tag{78}\]
where \(\omega_{0}\) is the center frequency and \(\Delta\omega\) is the bandwidth of interest. In Eq. (77) we define the frequency integral from \(-\infty\) instead of \(0\) for smoothness; typically, the window function will be narrow enough to render this difference negligible; conversely, in the all-frequency limit, the symmetry of the LDOS around zero frequency ensures we are working with the correct quantity. We are interested only in the near-field enhancements of \(\rho\), so we will drop the free-space LDOS, as was useful in the sum-rule section to avoid spatial and spectral divergences. Then, consider the integral of Eq. (77): it already covers the entire real line, and we can imagine adding to it the hemispherical contour in the UHP, which will contribute only infinitesimally. Then the integral is a closed contour, and we can use complex-analytic techniques based on the analyticity of the integrand and the locations of the poles of the integrand. The integrand is not analytic, but the LDOS can be written as \(\rho(\omega)=\operatorname{Im}s(\omega)\), where \(s(\omega)\), proportional to the trace of the scattered component of the Green's function, _is_ analytic. Taking the imaginary part outside the integral, the remainder of the integrand of Eq. (77) has two poles away from the lower-half plane: one at zero, thanks to the \(1/\omega\)
term in the LDOS, and a second at \(\omega_{0}+i\Delta\omega\). Then, a few lines of algebra gives the frequency average of \(\rho(\omega)\) as [7]
\[\langle\rho\rangle=\operatorname{Im}s(\omega_{0}+i\Delta\omega)+2H_{\omega_{0}, \Delta\omega}(0)\alpha_{\mathrm{LDOS}}. \tag{79}\]
The second term comes from the contribution of the sum rule at a given frequency, and ensures that the ultimate expression will give the sum rule in the asymptotic limit \(\Delta\omega\to\infty\). Here, for simplicity and pedagogy, we will assume a sufficiently narrow bandwidth that the second term can be ignored. (It can always be reintroduced in the final expression.) The first term is the imaginary part of the LDOS scattering amplitude, evaluated at the _complex_ frequency \(\tilde{\omega}=\omega_{0}+i\Delta\omega\). What is the largest this term can be?
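The contour-integration step leading to Eq. (79) is easy to verify numerically for a toy causal amplitude. The sketch below uses an arbitrary two-pole test function \(s(\omega)\), with all poles in the lower-half plane and no pole at the origin (so the sum-rule term of Eq. (79) is absent), and checks that the Lorentzian-weighted average of \(\operatorname{Im}s\) equals \(\operatorname{Im}s(\omega_{0}+i\Delta\omega)\).

```python
# Sketch: the Lorentzian-weighted average of Im s(omega) equals Im s evaluated
# at the complex frequency w0 + i*dw, for any amplitude s analytic in the UHP
# that decays at large |omega|.  The two-pole test amplitude below is arbitrary
# and has no pole at omega = 0, so the sum-rule term of Eq. (79) is absent.
import numpy as np
from scipy.integrate import quad

def s(w):
    """Causal test amplitude: both poles lie in the lower half plane."""
    return 1.0 / (2.0**2 - w**2 - 0.1j * w) + 0.5 / (5.0**2 - w**2 - 0.4j * w)

def lorentzian(w, w0, dw):
    return (dw / np.pi) / ((w - w0)**2 + dw**2)

w0, dw = 2.2, 0.3                                   # assumed center and bandwidth
avg, _ = quad(lambda w: s(w).imag * lorentzian(w, w0, dw),
              -80, 80, points=[-5.0, -2.0, 2.0, 5.0], limit=800)
print(f"Lorentzian-averaged Im s : {avg:.6f}")
print(f"Im s(w0 + i*dw)          : {s(w0 + 1j*dw).imag:.6f}")
# The agreement (up to the truncated integration window) shows how bandwidth
# averaging collapses to a single complex-frequency evaluation.
```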
To bound the complex-frequency term, we can develop a generalization of the real-frequency conservation-law approach. In Ref. [7] we developed such a generalization via a somewhat complicated line of differential-equation reasoning; here, we develop a simpler (but no less general) integral-equation form. The starting point is the complex-valued integral equation,
\[\left[\mathbb{G}_{0}(\tilde{\omega})+\xi(\tilde{\omega})\right] \mathbf{p}(\tilde{\omega})=-\mathbf{e}_{\mathrm{inc}}(\tilde{\omega}), \tag{80}\]
where we have momentarily included all frequency arguments to emphasize that Eq. (80) is evaluated at the complex frequency \(\tilde{\omega}\). Next, we will multiply on the left by \(\mathbf{p}^{\dagger}/\tilde{\omega}\), and take the imaginary part of the entire equation, to arrive at
\[\mathbf{p}^{\dagger}\left\{\operatorname{Im}\left[\frac{\mathbb{G}_{0}}{ \tilde{\omega}}+\frac{\xi}{\tilde{\omega}}\right]\right\}\mathbf{p}= \operatorname{Im}\left[\left(\frac{\mathbf{e}_{\mathrm{inc}}}{\tilde{\omega}} \right)^{\dagger}\mathbf{p}\right], \tag{81}\]
This equation can be regarded as a complex-valued extension of the real-valued, global conservation law of Eq. (25). In particular, the two terms on the left are both positive-semidefinite, as can be proven by causality (cf. Sec. IX of the SM of Ref. [97]). To remove the shape dependence and focus on the material dependence, then, we can drop the first term on the left-hand side of Eq. (81), and rewrite this equation as an inequality:
\[\mathbf{p}^{\dagger}\left[\operatorname{Im}\left(\frac{\xi}{\tilde{\omega}} \right)\right]\mathbf{p}\leq\operatorname{Im}\left[\left(\frac{\mathbf{e}_{ \mathrm{inc}}}{\tilde{\omega}}\right)^{\dagger}\mathbf{p}\right], \tag{82}\]
Equation (82) imposes a constraint on the strength of the complex-frequency polarization field that enters the near-field scattering amplitude \(s(\tilde{\omega})\). The exact expression for the scattering amplitude is \(s(\tilde{\omega})=\frac{1}{\pi\tilde{\omega}}\operatorname{Tr}\mathbb{G}_{s}(\mathbf{x},\mathbf{x},\tilde{\omega})\). One can maximize the imaginary part of this amplitude subject to the constraint of Eq. (82) by exactly the procedure outlined in Sec. IX of the SM of Ref. [7]; doing so, one arrives at a simple result (remembering that we have dropped the sum-rule term):
\[\langle\rho\rangle\leq\frac{1}{\pi}\frac{|\chi(\tilde{\omega})|^{2}}{\text{Im}[ \tilde{\omega}\chi(\tilde{\omega})]}\mathbf{e}_{\text{inc}}^{\dagger}\mathbf{e }_{\text{inc}}. \tag{83}\]
As a reminder, the inner product of the incident field with itself is a volume integral of the square of the incident fields. The deep near field is dominated by the most rapidly decaying term in the incident fields; integrating only this contribution at the complex frequency gives \(\mathbf{e}_{\text{inc}}^{\dagger}\mathbf{e}_{\text{inc}}=\frac{1}{16\pi d^{3}}\), where we have taken the arbitrary scattering body to fit in a halfspace enclosure separated from the source by a distance \(d\). Inserting this expression into the inequality, and normalizing by the free-space LDOS evaluated at \(|\tilde{\omega}|\), we finally have a bandwidth-averaged bound [7]:
\[\frac{\langle\rho\rangle}{\rho_{0}(|\tilde{\omega}|)}\leq\frac{1}{8|k|^{3}d^{3 }}f(\omega), \tag{84}\]
where \(f(\omega)\) is the bandwidth-averaged generalization of the material-enhancement factor (discussed at real frequencies in Sec. 4.1.1),
\[f(\omega)=\frac{|\tilde{\omega}\chi|^{2}}{|\tilde{\omega}|\,\text{Im}\,( \tilde{\omega}\chi)}. \tag{85}\]
The material enhancement factor of Eq. (85) is slightly simpler than that of Ref. [7], thanks to our use of the simpler integral-equation constraint of Eq. (81).
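To see how Eq. (85) interpolates between material classes, the sketch below evaluates \(f\) for a Drude metal and for a lossless, dispersionless dielectric over a range of bandwidths. All parameter values (plasma frequency, loss rate, susceptibility, center frequency) are assumptions for illustration only.

```python
# Sketch: the bandwidth-averaged material enhancement factor of Eq. (85),
#   f = |wt*chi|^2 / (|wt| * Im(wt*chi)),  wt = w0 + i*dw,
# for a Drude metal and a lossless, dispersionless dielectric.  All parameter
# values are illustrative assumptions (frequencies in eV-like units).
import numpy as np

def f_factor(chi_fn, w0, dw):
    wt = w0 + 1j * dw
    wchi = wt * chi_fn(wt)
    return abs(wchi)**2 / (abs(wt) * wchi.imag)

drude = lambda w, wp=9.0, g=0.02: -wp**2 / (w**2 + 1j * g * w)  # metal-like
dielectric = lambda w, chi=11.0: chi + 0j                       # lossless

w0 = 1.5
print(f"{'dw (eV)':>10} {'f, metal':>12} {'f, dielectric':>15}")
for dw in [1e-4, 1e-3, 1e-2, 1e-1, 1.0]:
    print(f"{dw:10.0e} {f_factor(drude, w0, dw):12.3e} "
          f"{f_factor(dielectric, w0, dw):15.3e}")
# The metallic factor saturates at its single-frequency value once dw << gamma,
# while the lossless dielectric's factor is finite only because dw > 0 and keeps
# growing as the bandwidth narrows; comparing the two as functions of (w0, dw)
# is what enables the metal-versus-dielectric predictions described below.
```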
The bound of Eq. (84) is the key result: the bandwidth-averaged LDOS has an upper bound that is similar to that of the single-frequency LDOS, but reduced by the presence of a complex frequency. This reduction is significant for low-loss materials, for which \(\text{Im}\,\chi\) might be quite small, in which case \(\text{Im}(\tilde{\omega}\chi)\approx(\Delta\omega)\chi\), wherein the bandwidth effectively provides the relevant loss. There is also an additional broadening due to dispersion, as \(\chi\) is evaluated at the complex frequency \(\tilde{\omega}\), at which \(\text{Im}\,\chi\) will generally be larger. (There is _another_ additional term in the more general version of the bound of Eq. (84) that exponentially decays with bandwidth, that we excluded for simplicity.) Hence the bound of Eq. (84) has three properties that are quite theoretically pleasing. First, in the single-frequency limit, it asymptotically approaches the previously derived single-frequency bound. Second, in the all-frequency limit, it asymptotically approaches the previously derived sum rule. And, finally, in the nonzero- and finite-bandwidth regime, it intermediates between the two, with a smaller average response than the single-frequency bound, and a smaller total integrated response than the sum rule. This approach was extended to CDOS and NFRHT as well in Ref. [7], with similar features emerging. One interesting comparison point is to Ref. [170], which examined optimal materials for planar NFRHT designs. Unlike the power-bandwidth bounds, which increase with electron density and decrease with material loss, Ref. [170] found that the key material parameters in planar systems are simply the (ideally small) frequency at which surface polaritons
are strongest, and the bandwidth over which they are strong. This finding has been experimentally corroborated [171], and it emerges theoretically in the more general NFRHT bounds of the next subsection.
Ref. [7] probed the feasibility of approaching the upper bounds in certain prototypical systems. Four key results were identified. First, for center frequencies close to the surface-plasmon frequencies of metals, planar systems supporting such plasmons are able to closely approach the bounds across a wide range of bandwidths. Second, double-cone (bowtie-antenna-like) antennas show a performance that can closely approach (nearly within 2X) their bounds across a wide range of bandwidths, for center frequencies coincident with their resonant frequencies. Third, these bounds were the first to enable systematic comparison of dielectric- and metal-based systems. Unlike the single-frequency case, the complex-frequency material enhancement factor does not diverge for lossless dielectrics (at nonzero bandwidth), which enables predictions of the center frequencies and bandwidths at which metals can be categorically superior to dielectrics, and vice versa. Finally, these bounds also enabled predictions of when 2D materials can be superior to bulk materials, and vice versa. The results highlight the power of fundamental limits more generally: they enable a high-level understanding of the landscape of a given physical design problem, identifying the material and architectural properties that really matter.
The "power-bandwidth" approach of Ref. [7] was recently generalized in Ref. [172] to incorporate the concept of local conservation laws into the picture. Notice that the constraint of Eq. (81) is a global conservation law; at the time that Ref. [7] was published, the local-conservation-law approach had not yet been invented. Ref. [172] remedies this gap, and shows that for dielectric scatterers, the use of additional conservation laws can significantly improve the resulting bounds. There is an interesting interplay between the quality factor of the sources and the bandwidth of interest, and there are useful semi-analytical bounds that can be derived from the global conservation laws applied to large-scale devices. Moreover, inverse-design structures are shown to come quite close to the improved complex-frequency, local-conservation-law bounds.
#### 4.3.2 Oscillator-representation bounds
An alternative to the complex-frequency approach to bandwidth averaging was very recently proposed in Ref. [168]. We will briefly summarize the (detailed) mathematical apparatus developed, and highlight the key result for our purposes: a new, nearly tight bound for bandwidth-averaged NFRHT.
Before delving into scattering bodies, consider the bulk optical susceptibility of a material. It is known that the response of an isotropic passive material can be written as a linear combination of Drude-Lorentz oscillators,
\[\chi(\omega)=\sum_{i}\frac{\omega_{p}^{2}}{\omega_{i}^{2}-\omega^{2}-i\gamma \omega}c_{i}, \tag{86}\]
where \(\omega_{p}\) is the "plasma frequency" of the material (related to its electron density [173, 153]), \(\omega_{i}\) are the oscillator frequencies, \(\gamma\) are infinitesimal oscillator loss rates, and the \(c_{i}\) are "oscillator strengths" that sum to unity thanks to the sum rule of Eq. (50) discussed in Sec. 3.4. Often this representation is derived in single-electron quantum-material frameworks [153], but it applies more generally as a consequence of causality and passivity. (The technically rigorous mathematical statement uses the theory of Herglotz functions [174].) Any linear material's susceptibility must conform to the Drude-Lorentz linear combination of Eq. (86); perhaps not with a small number of oscillators (it is well known that effects such as inhomogeneous broadening lead to other lineshapes, such as the "Voigt" lineshape [175]), but with sufficiently many oscillators. It may seem counter-intuitive to work with a representation that may need 1,000, or even 100,000 oscillators, instead of a different model with fewer parameters. From an optimization perspective, however, this intuition is misleading. In the Drude-Lorentz representation of Eq. (86), the only degrees of freedom are the \(c_{i}\) coefficients, and the susceptibility is _linear_ in these degrees of freedom. In many scenarios, large linear optimization problems are significantly easier to solve (sometimes even analytically) than large, nonlinear (and nonconvex) optimization problems.
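A toy version of such a linear optimization shows why it can often be solved by inspection: maximizing a weighted sum of oscillator strengths subject to the sum-rule and positivity constraints puts all of the weight on the single most favorable oscillator. The coefficients below are placeholders; the same structure reappears in the \(\mathbb{T}\)-matrix bound discussed later in this section.

```python
# Toy linear program with the oscillator-strength structure: maximize
# sum_i w_i*c_i subject to sum_i c_i = 1 and c_i >= 0.  The analytical optimum
# puts all weight on the largest coefficient; a generic LP solver agrees.
# The coefficients w are placeholders.
import numpy as np
from scipy.optimize import linprog

w = np.array([0.12, 0.47, 0.31, 0.08, 0.55, 0.21])

res = linprog(c=-w,                              # linprog minimizes, so negate
              A_eq=np.ones((1, w.size)), b_eq=[1.0],
              bounds=[(0, None)] * w.size)
print("LP optimum         :", -res.fun)
print("analytical optimum :", w.max())
print("optimal weights    :", np.round(res.x, 3))
# All of the "oscillator strength" is concentrated on the single best entry.
```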
Causality and passivity create three key ingredients that together lead to the Drude-Lorentz representation of Eq. (86): a Kramers-Kronig relation, a sum rule, and positivity of the imaginary part of the susceptibility. The exact sequence of transforming those ingredients to the Drude-Lorentz representation is detailed in Ref. [156]. One intuitive description is that the imaginary part of the susceptibility is a positive quantity, and can be discretized into coefficients at many discrete frequencies along the real axis. Passivity implies that these coefficients are real, while the sum rule implies that their sum is constrained. Finally, the Kramers-Kronig relation guarantees that the imaginary parts of the susceptibilities are the _only_ degrees of freedom; the real parts are entirely determined by the imaginary parts. Compiling the mathematical details of these steps leads to Eq. (86), which is a relation that many find intuitive thanks largely to the fact that it can be derived in single-electron quantum mechanics.
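The sum-rule ingredient can also be checked directly: any nonnegative set of oscillator strengths \(c_{i}\) summing to one in Eq. (86) yields the same integrated absorption, \(\int_{0}^{\infty}\omega\operatorname{Im}\chi(\omega)\,\mathrm{d}\omega=(\pi/2)\omega_{p}^{2}\). The oscillator parameters below are randomly chosen illustrations.

```python
# Sketch: the f-sum rule for the Drude-Lorentz representation of Eq. (86).
# Any nonnegative weights c_i with sum_i c_i = 1 give the same integrated
# absorption, int_0^inf w*Im chi(w) dw = (pi/2)*wp^2, regardless of where the
# oscillators sit.  Oscillator frequencies, losses, and weights are random.
import numpy as np
from scipy.integrate import quad

wp = 1.0

def chi(w, freqs, gammas, weights):
    return sum(wp**2 * c / (wi**2 - w**2 - 1j * g * w)
               for wi, g, c in zip(freqs, gammas, weights))

rng = np.random.default_rng(0)
for trial in range(3):
    freqs = rng.uniform(0.5, 5.0, 4)
    gammas = rng.uniform(0.05, 0.3, 4)
    weights = rng.random(4)
    weights /= weights.sum()                      # enforce sum_i c_i = 1
    val, _ = quad(lambda w: w * chi(w, freqs, gammas, weights).imag,
                  0.0, 2e4, points=list(freqs), limit=800)
    print(f"trial {trial}: integral = {val:.4f}   (pi/2 * wp^2 = {np.pi/2:.4f})")
# Each randomly weighted combination carries the same total oscillator strength,
# which is why the c_i are the only remaining (linear) degrees of freedom.
```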
The key idea of Ref. [168] is that there is a _wave-scattering operator_ that exhibits nearly identical mathematical properties to material susceptibilities. This operator is the "\(\mathbb{T}\)" matrix. The \(\mathbb{T}\) matrix is a scattering matrix that relates the polarization field induced in any scattering body to the incident fields impinging upon it [176]:
\[\mathbf{P}(\mathbf{x},\omega)=\int_{V}\mathbb{T}(\mathbf{x},\mathbf{x}^{ \prime},\omega)\mathbf{E}_{\mathrm{inc}}(\mathbf{x}^{\prime},\omega)\, \mathrm{d}\mathbf{x}^{\prime}, \tag{87}\]
or, in vector notation:
\[\mathbf{p}=\mathbb{T}\mathbf{e}_{\text{inc}}. \tag{88}\]
The \(\mathbb{T}\) matrix is a _causal linear response function_, as the polarization field at \(\mathbf{x}\) cannot be excited before the incident field exciting it reaches \(\mathbf{x}^{\prime}\). Just as causality implies a Kramers-Kronig relation for material susceptibilities, it was recognized in Ref. [168] that causality implies a Kramers-Kronig relation for \(\mathbb{T}\) matrices. Sum rules come from the low- and high-frequency asymptotic behavior of Kramers-Kronig relations, and the \(\mathbb{T}\) matrix satisfies a matrix-valued analog of the \(f\)-sum rule for material oscillator strengths. Finally, just as passivity implies that the imaginary parts of susceptibilities are positive, it similarly implies that the anti-Hermitian part of the \(\mathbb{T}\) matrix is positive semidefinite. Together, these three ingredients imply a matrix-valued analog of Eq. (86) for any \(\mathbb{T}\) matrix:
\[\mathbb{T}(\omega)=\sum_{i}\frac{\omega_{p}^{2}}{\omega_{i}^{2}-\omega^{2}-i \gamma\omega}\mathbb{T}_{i}, \tag{89}\]
where the Drude-Lorentz parameters are exactly the same as in Eq. (86), and the \(\mathbb{T}_{i}\) are now matrix-valued coefficient degrees of freedom. The exact expression of Eq. (89) is for the case of reciprocal materials; for nonreciprocal materials there is an extra term that makes the calculations more tedious but has no effect on most applications of interest. Analogous to the constraints on material oscillator strengths, passivity and the \(\mathbb{T}\)-matrix sum rule lead to constraints on the \(\mathbb{T}_{i}\):
\[\sum_{i}\mathbb{T}_{i}=\mathbb{I},\qquad\mathbb{T}_{i}\geq 0, \tag{90}\]
where \(\mathbb{I}\) is the identity matrix. Equation (89), and its nonreciprocal analog, must hold for _any_ linear electromagnetic scattering process. Even in scattering processes with complex interference phenomena, Fano resonances, etc., \(\mathbb{T}(\omega)\) must exhibit lineshapes consistent with Eq. (89), which is shown in Ref. [168] to reveal surprising structure even in typical scattering problems.
Our interest in this chapter, however, is in fundamental limits, so we will focus on the utility of Eq. (89) to identify upper bounds in the application considered in Ref. [168], which is NFRHT. The approach in the paper requires a dozen or so mathematical steps explained in Sec. IX of the SM of Ref. [168]; the key is to transform the problem from one of thermal sources inside the hot body radiating power to the cold one to one of incoherent sources _between_ the bodies radiating back to the emitter body. There are various other key steps, such as an appropriate renormalization of the point sources between the bodies. Ultimately, the culmination is the following: NFRHT is rewritten in terms of the total \(\mathbb{T}\) matrix of the collective bodies, at which point the representation of Eq. (89) is inserted. Then, the entire frequency dependence of
the problem is given by the collective products of the Drude-Lorentz oscillators and the Planck function, whose integrals can be determined analytically. Then one is left with a linear summation of given coefficients multiplying the unknown \(\mathbb{T}_{i}\) degrees of freedom. The optimization over all possible \(\mathbb{T}_{i}\), subject to the constraints of Eq. (90), has many unknowns, but can be done _analytically_, leading to a simple yet completely general bound on thermal HTC:
\[\text{HTC}\leq\beta\frac{T}{d^{2}}, \tag{91}\]
where \(T\) is the temperature, \(d\) is the separation, and \(\beta\approx 0.11k_{B}^{2}/\hbar\) is a numerical constant. Equation (91) is an unsurpassable limit that captures the key constraints imposed on every scattering \(\mathbb{T}\) matrix. Strikingly, despite the relative simplicity of the approach, it offers the tightest bounds on NFRHT to date, only a factor of 5 larger than the best theoretical designs [170]. Previous approaches suggested strong material dependencies, with bounds that increased with electron density, whereas planar designs show the reverse trend. In this bound, use of a low-frequency sum rule in the \(\mathbb{T}\)-matrix representation leads to an electron-density-independent bound. Moreover, the optimization over \(\mathbb{T}_{i}\) predicts precisely the same optimal peak transfer frequency as the best designs [168].
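For a sense of scale, the bound of Eq. (91) can be evaluated directly; the temperature and gap below are assumed values, and the far-field blackbody heat-transfer coefficient \(4\sigma T^{3}\) is included only for comparison.

```python
# Order-of-magnitude evaluation of Eq. (91), HTC <= beta*T/d^2 with
# beta ~ 0.11*k_B^2/hbar, for an assumed room-temperature 10 nm gap; the
# far-field blackbody coefficient 4*sigma*T^3 is shown only for scale.
from scipy.constants import k as k_B, hbar, Stefan_Boltzmann as sigma

beta = 0.11 * k_B**2 / hbar
T, d = 300.0, 10e-9                       # temperature (K) and gap (m), assumed
htc_bound = beta * T / d**2
htc_blackbody = 4 * sigma * T**3
print(f"NFRHT bound (Eq. 91)    : {htc_bound:.2e} W/(m^2 K)")
print(f"far-field blackbody HTC : {htc_blackbody:.2e} W/(m^2 K)")
print(f"ratio                   : {htc_bound/htc_blackbody:.0f}x")
```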
There are two sets of relaxations used to arrive at the bound of Eq. (91): first, beyond the representation theorem, no other Maxwell-equation constraints are imposed. Hence the optimal \(\mathbb{T}_{i}\) may not actually be physically realizable. Potentially one could impose such constraints exactly by the local-conservation-law approach discussed above. Second, the heat transfer process is relaxed to the emission of the sources between the bodies into both the emitter and receiver bodies, whereas the exact expression is the _difference_ between the radiation into the emitter and receiver bodies. The latter relaxation leads to a linear dependence on \(\mathbb{T}(\omega)\), as opposed to the quadratic dependence in the exact expression. It may be possible to optimize over the exact quadratic expression using manifold optimization techniques [177; 178; 168]. Tightening these relaxations may lead to a further tightening of the bound. Conversely, they may lead to the same bound, and improved design techniques [179] may identify structures that can achieve them.
### 4.4 Mode volume
In this final section, we turn to the question of bounds on mode volume. Mode volume is a very different response function than any of those previously considered, as it is a property of an eigenfunction rather than a scattering quantity. There is no incident field in the definition of a mode volume, and hence the power-conservation and causality-based approaches of the previous
sections are not immediately useful. In this section, we describe a method for bounding minimum mode volumes based on the optimization-theoretic notion of duality.
In optimization theory, the **dual** of an optimization problem is a second optimization problem, related to but distinct from the original, "primal" optimization problem [143]. The dual problem is formed by incorporating all constraints into the Lagrangian of the original optimization problem, introducing Lagrange multipliers as coefficients of the constraints, and optimizing out the primal variables, leaving only the Lagrange multipliers as degrees of freedom. An equivalent interpretation is that if one interprets a generic minimization optimization problem as the _minimax_ of a Lagrangian, the dual problem is the _maximin_ of the same Lagrangian. The dual program has two properties that can be quite useful for optimization and bounds: it is always a concave maximization problem (equivalent to a convex minimization problem, and therefore efficiently solvable by standard convex-optimization techniques), and its maximum is guaranteed to be a lower bound for the original, primal, minimization problem.
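A one-variable toy problem (unrelated to electromagnetism) illustrates the mechanism: the dual function is concave, every dual value lower-bounds the primal optimum, and maximizing the dual gives the best such certificate.

```python
# Toy duality example.  Primal: minimize x^2 subject to x >= 1 (optimum = 1).
# Lagrangian: L(x, lam) = x^2 - lam*(x - 1), lam >= 0; minimizing over x gives
# the concave dual function g(lam) = lam - lam^2/4.  Every dual value is a
# lower bound on the primal optimum (weak duality); here the dual maximum
# attains it.
import numpy as np

primal_opt = 1.0
lam = np.linspace(0.0, 5.0, 501)
g = lam - lam**2 / 4                       # dual function
print("max of dual g(lam)             :", g.max())
print("all dual values <= primal opt. :", bool(np.all(g <= primal_opt + 1e-12)))
# The mode-volume bounds below arise from the same mechanism: a semi-analytical
# dual of the structural-design problem certifies a lower bound on mode volume.
```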
For many optimization problems, the dual cannot be expressed in a simple form; even amongst those problems for which it has a simple expression, it often has the trivial solution \(-\infty\) as its maximum, giving a trivial lower bound. Ref. [180] showed that a very special class of electromagnetic design problems have a nontrivial, semi-analytical dual problem. In particular, for design problems in which the objective function to be minimized is the norm of a difference between the electric field \(\mathbf{E}\) and some target field \(\mathbf{E}_{\mathrm{target}}\),
\[\mathcal{F}=\|\mathbf{E}-\mathbf{E}_{\mathrm{target}}\|^{2}, \tag{92}\]
then one can impose the full Maxwell-equation constraints and identify a non-trivial, semi-analytical dual problem. One might suspect that objectives of the form of Eq. (92) might be quite common: after all, a focusing metalens could have a target field that matches an Airy beam along a focal plane, a surface-pattern design intended to maximize spontaneous-emission enhancements could target the field at the location of the dipole, and so forth. But these cases do not work for the expression of Eq. (92): for a non-trivial dual problem, the field \(\mathbf{E}_{\mathrm{target}}\) must be specified _at every spatial point of the entire domain_. This includes, for example, the points within the scatterer, the points within any PML regions, etc. Knowing a target field at a single point, or on a focal plane, is not sufficient. And it is hard to think of any application in which we know the target field across the entire domain.
It turns out, however, that mode-volume minimization can be reformulated to target an objective specified over the entire domain. Mode volume, as specified in Eq. (22), is given by the integral of the field energy over all space divided by the field energy at a single point. Typically the integral is treated as a normalization constant (taken to equal 1), and maximization of the field energy at a single point is the key objective. In Ref. [181], it was recognized
that this convention could be reversed: the field energy at the point of interest can be fixed as a normalization constant, equal to 1, while minimizing the integral of the field energy can be the objective. Such an objective is exactly of the form of Eq. (92), with a target field of 0 everywhere! Physically, this makes intuitive sense: a minimum mode volume tries to minimize the field energy at every point, except for the "origin" of interest; everywhere else, it wants to drive the field as close to a target of 0 as possible.
Given this transformation, and a few others described in Ref. [181], one can use the formulation of Ref. [180] to specify a dual program for the mode-volume minimization problem. The solutions of this dual program can be formulated with the modeling language CVX [182] and solved with Gurobi [183], and those solutions represent fundamental lower bounds on the mode volume, given only a designable region and a refractive index of the material to be patterned. The resulting bounds differ qualitatively between the 2D TE and 2D TM polarizations.
First, the 2D TE case encapsulates scalar-wave physics: without vector fields, there are none of the field discontinuities across boundaries that can be responsible for large field amplitudes in "slot-mode" configurations [16, 18, 19]. There also is no near field for scalar waves, in the sense of large nonpropagating fields that culminate in a singularity at the location of a point source. In this case, the argument for a trivially small mode volume near a perfectly sharp tip _fails_: the lack of a singularity means that one cannot drive the field at the location of the source arbitrarily high. If there is to be no sharp-tip enhancement (as we will see), then dimensional arguments would require mode volume to scale with the square of the wavelength (in 2D), restoring the notion of a "diffraction-limited" mode volume. The only question, then, is the value of the coefficient of the squared wavelength. The duality-computed bounds confirm indeed that below some separation distance \(d\), the mode-volume bound asymptotically flattens out to a small fraction of the square wavelength. This bound depends only on the available refractive index of the designable region, and has been closely approached by inverse-designed structures [17, 181].
The 2D TM case is fundamentally different: sharp field discontinuities occur across material boundaries, and singularities in the near field of point sources imply the possibility for zero mode volume unless fabrication constraints, or similarly a nonzero source-scatterer separation distance, are enforced. In this case, the duality-based approach finds quite different scaling: the 2D TM mode-volume bounds scale as \(d^{2}\), where \(d\) is the relevant source-scatterer distance (or sharp-tip radius of curvature), with no dependence on the wavelength. Intriguingly, this scaling is faster than that of the typical structure used for mode-volume minimization: a "bowtie antenna" [18, 19], whose optimal mode volume appears to scale only linearly with \(d\) (and hence linearly with wavelength, \(\lambda\), as well). In Ref. [181], it is shown that inverse-designed structures appear to exhibit mode volumes that scale roughly as \(d^{1.4}\), faster than the linear scaling of bowtie antennas but not quite as fast as the duality-based bound. At smaller length scales, these differences can be dramatic. For
minimum feature sizes \(d\approx 0.01\lambda\), the inverse-design curve falls about 5X below the bowtie-antenna curve, which itself is 40X above the mode-volume bound. Resolving this gap, either through identifying better designs or by identifying tighter bounds, could lead to significant reductions in mode volume through near-field engineering.
## 5 Summary and looking forward
Near-field optical response can require significant mathematical machinery, and the techniques to bound it even more so. We were careful above to give correct and sometimes nearly complete mathematical descriptions. Here, we can give a high-level summary of three of the prototypical response functions and application areas covered:
* LDOS, arguably the most important near-field response function, has single-frequency bounds that scale as \(1/d^{2}\) and \(|\chi(\omega)|^{2}/\operatorname{Im}\chi(\omega)\)[98]. This bound can be achieved at the surface-plasmon frequency of a given material; away from that frequency, inverse designs have shown good performance that can be relatively close to the bound, but generally it is also true that tighter bounds can be computed by using additional constraints. A sum rule is known for all-frequency LDOS [166; 7], which depends on the separation but _not_ on the material; over finite bandwidth, bounds similar to the single-frequency expression can be found, albeit evaluated at the complex frequency. Again, these bounds are nearly achievable when the frequency range is centered around the surface-plasmon frequency of a material, but can be tightened in other scenarios (e.g. dielectric materials) [107]. The key open questions around LDOS are two-fold: first, is there an analytical or semi-analytical bound that can be derived that is nearly achievable across all frequencies? And can one identify achievable bounds for only the _radiative_ part of the LDOS, i.e., that fraction of power that is emitted to the far field?
* Near-field radiative heat transfer is one of the most technically challenging areas of near-field optics, both experimentally and theoretically, but an abundance of work makes it perhaps the area where we have the best understanding of what is possible. For planar bodies, there are simple and powerful transmission expressions for NFRHT [55; 58], as well as an understanding of the optimal materials that lead to the largest response [170; 184; 185]. At a single frequency, semi-analytical bounds have been derived [163] that scale as \(1/d^{2}\) with separation distance and _logarithmically_ with \(|\chi(\omega)|^{2}/\operatorname{Im}\chi\), both dependencies of which are exhibited by planar structures. Finally, when averaging against the Planck function to account for the thermal nature of the radiation, the recently developed oscillator theory of \(\mathbb{T}\) matrices [168] enables a bound proportional only to \(1/d^{2}\) and \(k_{B}^{2}T/\hbar\), with no material dependence. This bound can be ap
proached within a factor of five by the best theoretical designs, showing a comprehensive understanding of what is possible in NFRHT, and the materials and structures needed to achieve that performance. One interesting open question is how this bound changes when one of the bodies must have a bandgap, as is required, for example, in thermophotovoltaics.
* Finally, mode volume is quite different from the other response functions considered above. It is a property of an eigenmode, instead of a scattered field, and hence some of the techniques based on power conservation do not lead to useful bounds in this case. The only approach we know of that leads to useful bounds relies on the _duality_ technique of optimization theory. The most important question surrounding mode volume is how it scales with minimum feature size \(d\). Ideally, it would scale as \(d^{n}\), where \(n\) is the dimensionality of the system (either 2D or 3D), with no dependence on wavelength; this scaling would lead to the largest enhancements at highly subwavelength feature sizes. Certainly such scaling is possible with plasmonic structures, but plasmonic structures are too lossy, and the concept of mode volume itself must be modified for plasmonic mode volume [25]. The question, then, is the optimal scaling for dielectric materials. Interestingly, the duality-based bounds of Ref. [181] suggest exactly \(d^{n}\) scaling. However, bowtie-antenna structures show \(d^{n-1}\) scaling, while inverse designs appear to show a scaling between these two. Hence progress has been made on this crucial question, but it is still not fully resolved: what is the best possible scaling of mode volume with minimum feature size?
The theory of fundamental limits to near-field optical response is now sufficiently rich to be summarized in a book chapter, as we have done here. But the story is not complete: as we have seen in numerous examples, including the three above, there are still many response functions, material regimes, and frequency ranges at which there are gaps between the best known device structures and the best known bounds. Many of the bound techniques described herein have only been discovered in the past few years, and there are likely still significant strides to be made. The optical near field continues to offer a fertile playground for theoretical discovery, experimental demonstration, and new devices and technological applications.
## 6 Appendix: Complex analysis for sum rules
Here we provide a brief summary of the basic rules of complex analysis, and how they are derived, emphasizing the key results relevant to sum rules. More expansive discussions of these ideas can be found in any good complex-analysis textbook.
First, we start with the definition of **complex differentiable**: a function \(f\) is complex differentiable if the limit
\[f^{\prime}(z)=\lim_{h\to 0}\frac{f(z+h)-f(z)}{h} \tag{93}\]
exists for \(h\) along _any path_ in the complex plane. The equality along any path is a very strong constraint, and leads to the Cauchy-Riemann conditions on the derivatives of the real and imaginary parts of \(f\). A function that is complex differentiable at every point on some domain \(\Omega\) is **holomorphic** on \(\Omega\). A major theorem of complex analysis is that all such functions are also **complex analytic** (which means they have a convergent power series in a neighborhood of every point in \(\Omega\)). From complex differentiability, it is a straight path to **Cauchy's integral theorem**: for \(f\) holomorphic on \(\Omega\), and a closed contour \(\gamma\) in \(\Omega\),
\[\oint_{\gamma}f(z)\,\mathrm{d}z=0, \tag{94}\]
which can be proven by setting \(f=u+iv\), \(\mathrm{d}z=\mathrm{d}x+i\mathrm{d}y\), applying Green's / Stokes theorem, and using the Cauchy-Riemann conditions.
An important technique for integrals over open contours is **contour shifting**: if \(\gamma\) and \(\tilde{\gamma}\) are contours with the same endpoints, then
\[\int_{\gamma}f(z)\,\mathrm{d}z=\int_{\tilde{\gamma}}f(z)\,\mathrm{d}z. \tag{95}\]
This follows directly from reversing the second contour, combining it with the first to make a closed contour, and applying Cauchy's integral theorem. Contour shifting is common in Casimir physics, for example, where the standard transformation is a "Wick rotation" from the positive real axis to the positive imaginary axis [186].
One can use contour-shifting to prove an important integral formula. Consider the closed-contour integral \(\oint_{\gamma}\frac{f(z)}{z-z_{0}}\,\mathrm{d}z\), where \(f\) is holomorphic on and inside \(\gamma\), but there is now a singularity in the integrand. For any arbitrary closed contour \(\gamma\), one can follow the prescription of Fig. 5: first make a tiny perforation in the contour, then use that perforation to shift to a modified contour that comprises two straight lines (whose integrals cancel by directionality) and a
Figure 5: Equivalent contours—the latter two by contour shifting—simplify the integration of _any_ closed contour around a singularity (left) to that of a circle arbitrarily close to the singularity (right).
tiny circle around \(z_{0}\). On the tiny circle, we can write \(f(z)\approx f(z_{0})\). On the circle, \(z=z_{0}+\varepsilon e^{i2\pi t}\), for \(t\) from \(0\) to \(1\), where \(\varepsilon\) is the radius of the circle on \(\tilde{\gamma}\), such that
\[\begin{split}\oint_{\tilde{\gamma}}\frac{f(z)}{z-z_{0}}\,\mathrm{d}z&\approx f(z_{0})\oint_{\tilde{\gamma}}\frac{1}{z-z_{0}}\,\mathrm{d}z\\ &=f(z_{0})\frac{1}{\varepsilon}\oint e^{-i2\pi t}\,\mathrm{d}\left(\varepsilon e^{i2\pi t}\right)\\ &=2\pi if(z_{0}).\end{split} \tag{96}\]
Equation (96) is **Cauchy's integral formula**.
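Equation (96) is straightforward to confirm numerically by discretizing any contour that encloses \(z_{0}\); the test function and contour below are arbitrary choices.

```python
# Numerical check of Cauchy's integral formula, Eq. (96): integrating
# f(z)/(z - z0) around a closed contour enclosing z0 gives 2*pi*i*f(z0).
# The test function and contour are arbitrary.
import numpy as np

f = lambda z: np.exp(z) + z**2              # entire, so holomorphic everywhere
z0 = 0.3 + 0.2j

t = np.linspace(0.0, 1.0, 4000, endpoint=False)
z = 0.1 + np.exp(2j * np.pi * t)            # unit circle about 0.1 (encloses z0)
dzdt = 2j * np.pi * np.exp(2j * np.pi * t)  # dz/dt along the contour
integral = np.mean(f(z) / (z - z0) * dzdt)  # uniform-sample quadrature, one period

print("contour integral :", integral)
print("2*pi*i*f(z0)     :", 2j * np.pi * f(z0))
```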
One can take derivatives of Eq. (96) with respect to \(z_{0}\) to yield an expression for the first derivative:
\[f^{\prime}(z_{0})=\frac{1}{2\pi i}\oint_{\gamma}\frac{f(z)}{(z-z_{0})^{2}}\, \mathrm{d}z, \tag{97}\]
and more generally **Cauchy's differentiation formula**:
\[f^{(n-1)}(z_{0})=\frac{(n-1)!}{2\pi i}\oint_{\gamma}\frac{f(z)}{(z-z_{0})^{n}}\,\mathrm{d}z. \tag{98}\]
It is then one final step to get from Cauchy's differentiation formula to the residue theorem. Set the integrand in Eq. (98) to a function \(g(z)\), which has a pole of order \(n\) at \(z_{0}\). By a Laurent expansion, one can write _any_ function with a pole of order \(n\) at \(z_{0}\) in this form. Then we have the **residue theorem**:
\[\oint_{\gamma}g(z)\,\mathrm{d}z=2\pi i\sum_{z_{0}}\mathrm{Res}(g;z_{0}), \tag{99}\]
where the **residue** of \(f\) at \(z_{0}\) is defined as
\[\mathrm{Res}(g;z_{0})=\frac{1}{(n-1)!}\lim_{z\to z_{0}}\frac{d^{n-1}}{dz^{n-1 }}\left[(z-z_{0})^{n}g(z)\right]. \tag{100}\]
For \(n=1\), a simple pole, the residue is given by
\[\lim_{z\to z_{0}}\left[(z-z_{0})g(z)\right]. \tag{101}\]
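As a quick numerical illustration of Eq. (96) (not part of the derivation above), the sketch below approximates the contour integral on a small circle around \(z_{0}\) by a Riemann sum over the parametrization \(z=z_{0}+\varepsilon e^{i2\pi t}\). The choice of function, the point \(z_{0}\), and all names in the snippet are purely illustrative.

```python
# Numerical sanity check of Cauchy's integral formula, Eq. (96):
# integrate f(z)/(z - z0) around a small circle and compare with 2*pi*i*f(z0).
import numpy as np

def contour_integral(g, z0, radius=0.1, n=20000):
    """Riemann-sum approximation of the closed-contour integral of g on a circle about z0."""
    t = np.linspace(0.0, 1.0, n, endpoint=False)
    z = z0 + radius * np.exp(2j * np.pi * t)               # points on the circle
    dz = 2j * np.pi * radius * np.exp(2j * np.pi * t) / n  # z'(t) dt
    return np.sum(g(z) * dz)

f = np.exp                       # any function holomorphic near z0 works here
z0 = 0.3 + 0.4j
lhs = contour_integral(lambda z: f(z) / (z - z0), z0)
rhs = 2j * np.pi * f(z0)
print(abs(lhs - rhs))            # agreement to roughly machine precision
```

Because the integrand is smooth and periodic in \(t\), the plain Riemann sum converges very quickly; the analogous check against Eqs. (97) and (98) works the same way.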
|
2306.03894 | Fractals from Regular Behaviours | We forge connections between the theory of fractal sets obtained as
attractors of iterated function systems and process calculi. To this end, we
reinterpret Milner's expressions for processes as contraction operators on a
complete metric space. When the space is, for example, the plane, the
denotations of fixed point terms correspond to familiar fractal sets. We give a
sound and complete axiomatization of fractal equivalence, the congruence on
terms consisting of pairs that construct identical self-similar sets in all
interpretations. We further make connections to labelled Markov chains and to
invariant measures. In all of this work, we use important results from process
calculi. For example, we use Rabinovich's completeness theorem for trace
equivalence in our own completeness theorem. In addition to our results, we
also raise many questions related to both fractals and process calculi. | Todd Schmid, Victoria Noquez, Lawrence S. Moss | 2023-06-06T17:55:12Z | http://arxiv.org/abs/2306.03894v3 | # Fractals from Regular Behaviours
###### Abstract
We are interested in connections between the theory of fractal sets obtained as attractors of iterated function systems and process calculi. To this end, we reinterpret Milner's expressions for processes as contraction operators on a complete metric space. When the space is, for example, the plane, the denotations of fixed point terms correspond to familiar fractal sets. We give a sound and complete axiomatization of fractal equivalence, the congruence on terms consisting of pairs that construct identical self-similar sets in all interpretations. We further make connections to labelled Markov chains and to invariant measures. In all of this work, we use important results from process calculi. For example, we use Rabinovich's completeness theorem for trace equivalence in our own completeness theorem. In addition to our results, we also raise many questions related to both fractals and process calculi.
fixed-point terms, labelled transition system, fractal, final coalgebra, equational logic, completeness
## 1 Introduction
Hutchinson noticed in [14] that many familiar examples of fractals can be captured as the set-wise fixed-point of a finite family of contraction (i.e., distance shrinking) operators on a metric space. He called these spaces _(strictly) self-similar_, since the intuition behind the contraction operators is that they are witnesses for the appearance of the fractal in a proper (smaller) subset of itself. For example, the famous Sierpinski gasket is the unique nonempty compact subset of the plane left fixed by the union of the three operators \(\sigma_{a},\sigma_{b},\sigma_{c}:\mathbb{R}^{2}\to\mathbb{R}^{2}\) in Figure 1. The Sierpinski gasket is a scaled-up version of each of its thirds.
The self-similarity of Hutchinson's fractals hints at an algorithm for constructing them: Each point in a self-similar set is the limit of a sequence of points obtained by applying the contraction operators one after the other to an initial point. In the Sierpinski gasket, the
point \((1/4,\sqrt{3}/4)\) is the limit of the sequence
\[p,\ \sigma_{b}(p),\ \sigma_{b}\sigma_{a}(p),\ \sigma_{b}\sigma_{a}\sigma_{a}(p), \ \sigma_{b}\sigma_{a}\sigma_{a}\sigma_{a}(p),\ \ldots \tag{1}\]
where the initial point \(p\) is an arbitrary element of \(\mathbb{R}^{2}\) (note that \(\sigma_{b}\) is applied last). Hutchinson showed in [14] that the self-similar set corresponding to a given family of contraction operators is precisely the collection of points obtained in the manner just described. The limit of the sequence in (1) does not depend on the initial point \(p\) because \(\sigma_{a},\sigma_{b},\sigma_{c}\) are contractions. Much like digit expansions of real numbers, every stream of \(a\)'s, \(b\)'s, and \(c\)'s corresponds to a unique point in the Sierpinski gasket. The point \((1/4,\sqrt{3}/4)\), for example, corresponds to the stream \((b,a,a,a,\dots)\) ending in an infinite sequence of \(a\)'s. Conversely, every point in the Sierpinski gasket comes from (in general more than one) corresponding stream.
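To make the stream-to-point correspondence concrete, here is a small sketch of the limit computation. Since Figure 1 is not reproduced here, the vertex placement \((0,0)\), \((1,0)\), \((1/2,\sqrt{3}/2)\) and the convention that each map halves the distance to one vertex are assumptions made only for illustration.

```python
# Sketch: the point of the Sierpinski gasket determined by a (truncated) stream.
# The vertex coordinates below are an assumed placement, not taken from the text.
import numpy as np

VERTS = {"a": np.array([0.0, 0.0]),
         "b": np.array([0.5, np.sqrt(3) / 2]),
         "c": np.array([1.0, 0.0])}

def sigma(label, p):
    """Contraction with coefficient 1/2 toward the chosen vertex."""
    return (p + VERTS[label]) / 2.0

def point_of_prefix(prefix, p=np.zeros(2)):
    """Apply sigma_{a_1} ... sigma_{a_n} to p, innermost map (a_n) first."""
    for label in reversed(prefix):
        p = sigma(label, p)
    return p

# The stream (b, a, a, a, ...) truncated to 40 letters:
print(np.round(point_of_prefix("b" + "a" * 39), 6))   # ~ (1/4, sqrt(3)/4)
```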
From a computer science perspective, the languages of streams considered by Hutchinson are the _traces_ observed by one-state labelled transition systems, like the one in Figure 1. We investigate whether one could achieve a similar effect with languages of streams obtained from labelled transition systems having more than one state. Observe, for example, Figure 2. This twisted version of the Sierpinski gasket is constructed from a two-state labelled transition system. Each point in the twisted Sierpinski gasket corresponds to a stream of \(a\)'s, \(b\)'s, and \(c\)'s, but not every stream corresponds to a point in the set: The limit corresponding to \((c,a,b,c,c,c,\dots)\) is \((3/4,\sqrt{3}/8)\), for example.
A labelled transition system paired with an interpretation of its labels as contractions on a complete metric space is the same data as a _directed-graph iterated function system_ (GIFS), a generalization of iterated function systems introduced by Mauldin and Williams [19]. GIFSs generate their own kind of self-similar set, and much work has been done to understand the geometric properties of fractal sets generated by GIFSs [7, 8, 9, 10, 19]. We take this work in a slightly different direction by presenting a coalgebraic perspective on GIFSs, seeing each labelled transition system as a "recipe" for constructing fractal sets.
In analogy with the theory of regular languages, we call the fractals generated by finite labelled transition systems _regular subfractals_, and give a logic for deciding if two labelled transition systems represent the same recipe under all interpretations of the labels. By identifying points in the fractal set generated by a labelled transition system with traces observed by the labelled transition system, it is reasonable to suspect that two labelled transition systems represent equivalent fractal recipes--i.e., they represent the same fractal under every interpretation--if and only if they are trace equivalent. This is the content of Theorem 4.6, which allows us to connect the theory of fractal sets to mainstream topics in computer science.
Figure 2: A twisted Sierpiński gasket, depicted in red. In the construction of this set, \(\sigma_{b}\) and \(\sigma_{c}\) are applied twice to a single copy of \(\sigma_{a}\) applied to the set. This has the effect of systematically removing the “top” part of the Sierpiński gasket from its bottom thirds.
Labelled transition systems are a staple of theoretical computer science, especially in the area of process algebra [1], where a vast array of different notions of equivalence and axiomatization problems have been studied. We specifically use a syntax introduced by Milner in [23] to express labelled transition systems as terms in an expression language with recursion. This leads us to a fragment of Milner's calculus consisting of just the terms that constitute recipes for fractal constructions. Using a logic of Rabinovich [26] for deciding trace equivalence in Milner's calculus, we obtain a complete axiomatization of fractal recipe equivalence.
In his study of self-similar sets, Hutchinson also makes use of probability measures supported on self-similar sets, called _invariant measures_. Each invariant measure is specified by a probability distribution on the set of contractions generating its support. In the last technical section of the paper, we adapt the construction of invariant measures to a probabilistic version of labelled transition systems called _labelled Markov chains_, which allows us to give a measure-theoretic semantics to terms in a probabilistic version of Milner's specification language, the calculus introduced by Stark and Smolka [28]. Our measure-theoretic semantics of probabilistic process terms can be seen as a generalization of the trace measure semantics of Kerstan and Konig [15]. We offer a sound axiomatization of equivalence under this semantics and pose completeness as an open problem.
In sum, the contributions of this paper are as follows.
* In Section 3, we give a fractal recipe semantics to process terms using a generalization of iterated function systems.
* In Section 4, we show that two process terms agree on all fractal interpretations if and only if they are trace equivalent. This implies that fractal recipe equivalence is decidable for process terms, and it allows us to derive a complete axiomatization of fractal recipe equivalence from Rabinovich's axiomatization [26] of trace equivalence of process terms.
* Finally, we adapt the fractal semantics of process terms to the probabilistic setting in Section 5 and propose an axiomatization of probabilistic fractal recipe equivalence.
We start with a brief overview of trace semantics in process algebra and Rabinovich's Theorem (Theorem 2.7) in Section 2.
## 2 Labelled Transition Systems and Trace Semantics
Labelled transition systems are a widely used model of nondeterminism. Given a fixed finite set \(A\) of _action labels_, a _labelled transition system_ (LTS) is a pair \((X,\alpha)\) consisting of a set \(X\) of _states_ and a _transition function_\(\alpha:X\to\mathcal{P}(A\times X)\). We generally write \(x\xrightarrow{a}_{\alpha}y\) if \((a,y)\in\alpha(x)\), or simply \(x\xrightarrow{a}y\) if \(\alpha\) is clear from context, and say that \(x\)_emits_\(a\)_and transitions to_\(y\).
Given a state \(x\) of an LTS \((X,\alpha)\), we write \(\langle x\rangle_{\alpha}\) for the LTS obtained by restricting the relations \(\xrightarrow{a}\) to the set of states _reachable_ from \(x\), meaning there exists a _path_ of the form \(x\xrightarrow{a_{1}}x_{1}\xrightarrow{}\cdots\xrightarrow{}x_{n-1} \xrightarrow{a_{n}}x_{n}\). We refer to \(\langle x\rangle_{\alpha}\) as either the LTS _generated by_\(x\), or as the _process starting at_\(x\). An LTS \((X,\alpha)\) is _locally finite_ if \(\langle x\rangle_{\alpha}\) is finite for all states \(x\).
### Traces
In the context of the current work, nondeterminism occurs when a process branches into multiple threads that execute in parallel. Under this interpretation, to an outside observer (without direct access to the implementation details of an LTS), two processes that emit the same set of sequences of action labels are indistinguishable.
\[ae\xrightarrow{a}e\qquad\qquad\frac{e_{1}\xrightarrow{a}f}{e_{1}+e_{2}\xrightarrow{a} f}\qquad\qquad\frac{e_{2}\xrightarrow{a}f}{e_{1}+e_{2}\xrightarrow{a}f}\qquad\qquad \frac{e[\mu v\ e/v]\xrightarrow{a}f}{\mu v\ e\xrightarrow{a}f}\]
Formally, let \(A^{*}\) be the set of words formed from the alphabet \(A\). Given a state \(x\) of an LTS \((X,\alpha)\), the set \(\operatorname{tr}_{\alpha}(x)\) of _traces emitted by \(x\)_ is the set of words \(a_{1}\dots a_{n}\in A^{*}\) such that there is a path of the form \(x\xrightarrow{a_{1}}x_{1}\xrightarrow{}\cdots\xrightarrow{}x_{n-1} \xrightarrow{a_{n}}x_{n}\) through \((X,\alpha)\). Two states \(x\) and \(y\) are called _trace equivalent_ if \(\operatorname{tr}(x)=\operatorname{tr}(y)\). Each trace language \(\operatorname{tr}(x)\) is _prefix-closed_, which for a language \(L\) means that \(w\in L\) whenever \(wa\in L\).
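A hypothetical encoding of a finite LTS as a Python dictionary makes the trace sets easy to enumerate up to a given length; the representation and helper names below are ours, not the paper's.

```python
# Sketch: traces of bounded length emitted by a state of a finite LTS.
# An LTS is encoded as a dict mapping each state to a set of (label, successor) pairs.
def traces_up_to(lts, x, n):
    words = {""}
    frontier = {("", x)}
    for _ in range(n):
        frontier = {(w + a, y) for (w, s) in frontier for (a, y) in lts[s]}
        words |= {w for (w, _) in frontier}
    return words

# The one-state LTS of Figure 1: x --a,b,c--> x.
gasket_lts = {"x": {("a", "x"), ("b", "x"), ("c", "x")}}
print(sorted(traces_up_to(gasket_lts, "x", 2)))
# every word over {a, b, c} of length at most 2 (prefix-closed, as expected)
```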
Trace equivalence is a well-documented notion of equivalence for processes [12, 3], and we shall see it in our work on fractals as well.
A _stream_ is an infinite sequence \((a_{1},a_{2},\dots)\) of letters from \(A\). A state \(x\) in an LTS \((X,\alpha)\) emits a stream \((a_{1},\dots)\) if for any \(n>0\), \(a_{1}\cdots a_{n}\in\operatorname{tr}(x)\). We write \(\operatorname{str}(x)\) for the set of streams emitted by \(x\).
In our construction of fractals from LTSs, points are represented only by (infinite) streams. We therefore focus primarily on LTSs with the property that for all states \(x\), \(\operatorname{tr}(x)\) is precisely the set of finite prefixes of streams emitted by \(x\). We refer to an LTS \((X,\alpha)\) satisfying this condition as _productive_. Productivity is equivalent to the absence of _deadlock_ states, states with no outgoing transitions.
Let \((X,\alpha)\) be an LTS. Then the following are equivalent: (i) for any \(x,y\in X\), \(\operatorname{str}(x)=\operatorname{str}(y)\) if and only if \(\operatorname{tr}(x)=\operatorname{tr}(y)\), and (ii) for any \(x\in X\), \(\alpha(x)\neq\emptyset\).
### Specification
We use the following language for specifying processes: Starting with a fixed countably infinite set \(\{v_{1},v_{2},\dots\}\) of _variables_, the set of _terms_ is given by the grammar
\[v\mid ae\mid e_{1}+e_{2}\mid\mu v\ e\]
where \(v\) is \(v_{i}\) for some \(i\in\mathbb{N}\), \(a\in A\), and \(e,e_{1},e_{2}\) are terms.
Intuitively, the process \(ae\) emits \(a\) and then turns into \(e\), and \(e_{1}+e_{2}\) is the process that nondeterministically branches into \(e_{1}\) and \(e_{2}\). The process \(\mu v\ e\) is like \(e\), but with instances of \(v\) that appear free in \(e\) acting like goto expressions that return the process to \(\mu v\ e\).
A _(process) term_ is a term \(e\) in which every occurrence of a variable \(v\) appears both within the scope of a \(\mu v\,(-)\) (\(e\) is _closed_) and within the scope of an \(a(-)\) (\(e\) is _guarded_). The set of process terms is written \(\mathsf{Term}\). The process terms themselves form the LTS \((\mathsf{Term},\gamma)\) defined in Figure 3.
In Figure 3, we use the notation \(e[g/v]\) to denote the expression obtained by replacing each _free_ occurrence of \(v\) in \(e\) (one which does not appear within the scope of a \(\mu v\) (\(-\)) operator) with the expression \(g\). Given \(e\in\texttt{Term}\), the process _specified by \(e\)_ is the LTS \(\langle e\rangle_{\gamma}\). The set of process terms, as we have named them, is the fragment of Milner's fixed-point calculus from [23] consisting of only the terms that specify productive LTSs.
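As a sketch of how the rules of Figure 3 compute, the snippet below encodes terms as nested tuples and implements one-step transitions, including the unfolding rule for \(\mu\). The tuple encoding and helper names are our own, and only closed, guarded terms should be passed in.

```python
# Sketch of the small-step rules of Figure 3. Terms are nested tuples:
# ("var", v), ("act", a, e), ("plus", e1, e2), ("mu", v, e).
def subst(e, v, g):
    """Replace free occurrences of variable v in e by the (closed) term g."""
    tag = e[0]
    if tag == "var":
        return g if e[1] == v else e
    if tag == "act":
        return ("act", e[1], subst(e[2], v, g))
    if tag == "plus":
        return ("plus", subst(e[1], v, g), subst(e[2], v, g))
    if tag == "mu":
        return e if e[1] == v else ("mu", e[1], subst(e[2], v, g))

def steps(e):
    """The set of pairs (a, f) with e --a--> f."""
    tag = e[0]
    if tag == "act":
        return {(e[1], e[2])}
    if tag == "plus":
        return steps(e[1]) | steps(e[2])
    if tag == "mu":                      # unfold the fixed point once
        return steps(subst(e[2], e[1], e))
    return set()

# mu v (a v + b v): the one-state system over {a, b}.
t = ("mu", "v", ("plus", ("act", "a", ("var", "v")), ("act", "b", ("var", "v"))))
print({a for (a, _) in steps(t)})        # {'a', 'b'}
```

Iterating `steps` from a process term enumerates the finitely many terms reachable from it, in line with Lemma 2.5.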
Labelled transition systems specified by process terms are finite and productive, and conversely, every finite productive process is trace-equivalent to some process term.
**Lemma 2.5** ([23, Proposition 5.1]).: _For any \(e\in\mathsf{Term}\), the set of terms reachable from \(e\) in \((\mathsf{Term},\gamma)\) is finite. Conversely, if \(x\) is a state in a finite productive LTS \((X,\alpha)\), then there is a process term \(e\) such that \(\operatorname{tr}(e)=\operatorname{tr}_{\alpha}(x)\)._
### Axiomatization of trace equivalence
Given an interpretation of process terms as states in an LTS, and given the notion of trace equivalence, one might ask if there is an algebraic or proof-theoretic account of trace equivalence of process terms. Rabinovich showed in [26] that a complete inference system for trace equivalence can be obtained by adapting earlier work of Milner [23]. The axioms of the complete inference system include equations like \(e_{1}+e_{2}=e_{2}+e_{1}\) and \(a(e_{1}+e_{2})=ae_{1}+ae_{2}\), which are intuitively true for trace equivalence.
To be more precise, given any function with domain \(\mathsf{Term}\), say \(\sigma:\mathsf{Term}\to Z\), call an equivalence relation \(\sim\)_sound with respect to \(\sigma\)_ if \(e\sim f\) implies \(\sigma(e)=\sigma(f)\), and _complete with respect to \(\sigma\)_ if \(\sigma(e)=\sigma(f)\) implies \(e\sim f\). Then the smallest equivalence relation \(\equiv\) on \(\mathsf{Term}\) containing all the pairs derivable from the axioms and inference rules appearing in Figure 4 is sound and complete with respect to \(\operatorname{tr}=\operatorname{tr}_{\gamma}\colon\mathsf{Term}\to\mathcal{P} (A^{*})\).
**Definition 2.6**.: Given \(e_{1},e_{2}\in\mathsf{Term}\), we say that \(e_{1}\) and \(e_{2}\) are _provably equivalent_ if \(e_{1}\equiv e_{2}\), and call \(\equiv\)_provable equivalence_.
**Theorem 2.7** (Rabinovich [26]).: _Let \(e_{1},e_{2}\in\mathsf{Term}\). Then \(e_{1}\equiv e_{2}\) iff \(\operatorname{tr}(e_{1})=\operatorname{tr}(e_{2})\)._
**Example 2.8**.: Consider the processes specified by \(e_{1}=\mu w\ \mu v\ (a_{1}a_{2}v+a_{1}a_{3}w)\) and \(e_{2}=\mu v\ (a_{1}(a_{2}v+a_{3}v))\). The traces emitted by both \(e_{1}\) and \(e_{2}\) are those that alternate between \(a_{1}\) and either \(a_{2}\) or \(a_{3}\). We can show these expressions are trace equivalent via the formal deduction in Figure 5.
Rabinovich's theorem tells us that, up to provable equivalence, our specification language consisting of process terms is really a specification language for languages of traces. In what follows, we are going to give an alternative semantics to process terms by using LTSs to generate fractal subsets of metric spaces. The main result of our paper is that these two semantics coincide: Two process terms are trace equivalent if and only if they generate the same fractals. This is the content of Sections 3 and 4 below.
Figure 4: The axioms and rules of the provable equivalence relation in addition to those of equational logic (not shown). Here, \(e,e_{i},f,f_{i},g\in\mathsf{Term}\) for all \(i\). In \((\mathsf{CN})\), \(g\) has precisely the free variables \(v_{1},\ldots,v_{n}\), and no variable that appears free in \(f_{i}\) is bound in \(g\) for any \(i\). In \((\mathsf{AE})\), \(v\) does not appear free in \(e\).
## 3 Fractals from Labelled Transition Systems
In the Sierpinski gasket \(\mathbf{S}\) from Figure 1, every point of \(\mathbf{S}\) corresponds to a stream of letters from the alphabet \(\{a,b,c\}\), and every stream corresponds to a unique point. To obtain the point corresponding to a particular stream \((a_{1},a_{2},a_{3},\dots)\) with each \(a_{i}\in\{a,b,c\}\), start with any \(p\in\mathbb{R}^{2}\) and compute the limit \(\lim_{n\in\mathbb{N}}\sigma_{a_{1}}\cdots\sigma_{a_{n}}(p)\). The point in the fractal corresponding to \((a_{1},a_{2},a_{3},\dots)\) does not depend on \(p\) because \(\sigma_{a},\sigma_{b},\sigma_{c}\) in Figure 1 are _contraction operators_.
Given a metric space \((M,d)\), a _contraction operator_ on \((M,d)\) is a function \(h:M\to M\) such that for some \(r\in[0,1)\), \(d(h(x),h(y))\leq r\ d(x,y)\) for any \(x,y\in M\). The number \(r\) is called a _contraction coefficient_ of \(h\). The set of contraction operators on \((M,d)\) is written \(\operatorname{Con}(M,d)\).
For example, with the Sierpinski gasket (Figure 1) associated to the contractions \(\sigma_{a}\), \(\sigma_{b}\), and \(\sigma_{c}\), \(r=1/2\) is a contraction coefficient for all three maps. Now, given \(p,q\in\mathbb{R}^{2}\),
\[d(\sigma_{a_{1}}\cdots\sigma_{a_{n}}(p),\sigma_{a_{1}}\cdots\sigma_{a_{n}}(q) )\leq\frac{1}{2^{n}}\ d(p,q)\]
for all \(n\), so it follows that \(\lim_{n\in\mathbb{N}}\sigma_{a_{1}}\cdots\sigma_{a_{n}}(p)=\lim_{n\in\mathbb{N }}\sigma_{a_{1}}\cdots\sigma_{a_{n}}(q)\). For any finite set of contraction operators \(\{\sigma_{a_{1}},\dots,\sigma_{a_{n}}\}\) indexed by \(A\) and acting on a complete metric space \((M,d)\), every stream from \(A\) corresponds to a unique point in \(M\).
A _contraction operator interpretation_ is a function \(\sigma:A\to\operatorname{Con}(M,d)\). We usually write \(\sigma_{a}=\sigma(a)\). Given \(\sigma\colon A\to\operatorname{Con}(M,d)\) and a stream \((a_{1},\dots)\) from \(A\), define
\[\sigma_{\omega}\colon A^{\omega}\to M\qquad\sigma_{\omega}(a_{1},\dots)=\lim_ {n\in\mathbb{N}}\sigma_{a_{1}}\cdots\sigma_{a_{n}}(x) \tag{2}\]
where \(x\in M\) is arbitrary. The _self-similar set_ corresponding to a contraction operator interpretation \(\sigma\) is the set \(\mathbf{S}_{\sigma}=\{\sigma_{\omega}(a_{1},\dots)\ |\ (a_{1},\dots)\) is a stream from \(A\}\).
Note that in (2), the contraction operators corresponding to the initial trace \((a_{1},\dots,a_{n})\) are applied in _reverse_ order. That is, \(\sigma_{a_{n}}\) is applied before \(\sigma_{a_{n-1}}\), \(\sigma_{a_{n-1}}\) is applied before \(\sigma_{a_{n-2}}\), and so on.
### Regular Subfractals
Generalizing the fractals of Mandelbrot [18], Hutchinson introduced self-similar sets in [14] and gave a comprehensive account of their theory. In op. cit., Hutchinson defines a self-similar
set to be the invariant set of an _iterated function system_. In our terminology, an iterated function system is equivalent to a contraction operator interpretation of a finite set \(A\) of actions, and the invariant set is the total set of points obtained from streams from \(A\). The fractals constructed from an LTS paired with a contraction operator interpretation generalize Hutchinson's self-similar sets to nonempty compact sets of points obtained from certain subsets of the streams, namely the subsets emitted by the LTS.
Write \(\mathbf{K}(M,d)\) for the set of nonempty compact subsets of \((M,d)\). Given a state \(x\) of a productive LTS \((X,\alpha)\) and a contraction operator interpretation \(\sigma:A\to\mathrm{Con}(M,d)\), we define \(\llbracket-\rrbracket_{\alpha,\sigma}:X\to\mathbf{K}(M,d)\) by
\[\llbracket x\rrbracket_{\alpha,\sigma}=\{\sigma_{\omega}(a_{1},\dots)\mid(a_ {1},\dots)\text{ emitted by }x\} \tag{3}\]
and call this the set _generated by the state \(x\)_ or the _\(x\)-component of the solution_. As we will see, \(\llbracket x\rrbracket_{\alpha,\sigma}\) is always nonempty and compact.
Given a process term \(e\in\mathsf{Term}\) and a contraction operator interpretation \(\sigma:A\to\mathrm{Con}(M,d)\), the _regular subfractal semantics of \(e\)_ corresponding to \(\sigma\) is \(\llbracket e\rrbracket_{\sigma}=\llbracket e\rrbracket_{\gamma,\sigma}\).
For example, the set of points depicted in Figure 2 is the regular subfractal semantics of \(\mu v\ (av+b(bv+cv)+c(bv+cv))\) corresponding to the interpretation \(\sigma\) given in that figure. The regular subfractal semantics of \(e\) is a proper subset of the Sierpinski Gasket, and in particular does not contain the point corresponding to \((c,a,b,c,c,c,\dots)\).
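The regular subfractal of Figure 2 can be approximated by sampling streams emitted by the two-state LTS and pushing their truncations through \(\sigma_{\omega}\). The sketch below does this with the same assumed vertex placement as before; the random sampling only produces a picture-quality point cloud, not the exact set.

```python
# Sketch: sample points of the twisted Sierpinski gasket (Figure 2).
# State x: a -> x, b -> y, c -> y;  state y: b -> x, c -> x.
# Vertex placement is an assumption, as in the earlier sketch.
import numpy as np

rng = np.random.default_rng(0)
VERTS = {"a": np.array([0.0, 0.0]),
         "b": np.array([0.5, np.sqrt(3) / 2]),
         "c": np.array([1.0, 0.0])}
LTS = {"x": [("a", "x"), ("b", "y"), ("c", "y")],
       "y": [("b", "x"), ("c", "x")]}

def sample_point(state="x", depth=30):
    """Pick a random stream emitted by `state`, truncate it, apply sigma_omega."""
    word, s = [], state
    for _ in range(depth):
        a, s = LTS[s][rng.integers(len(LTS[s]))]
        word.append(a)
    p = np.zeros(2)
    for a in reversed(word):                 # innermost contraction first
        p = (p + VERTS[a]) / 2.0
    return p

cloud = np.array([sample_point() for _ in range(5000)])
print(cloud.shape)           # 5000 approximate points of the x-component
```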
### Systems and Solutions
Self-similar sets are often characterized as the unique nonempty compact sets that solve systems of equations of the form
\[K=\sigma_{1}(K)\cup\dots\cup\sigma_{n}(K)\]
with each \(\sigma_{i}\) a contraction operator on a complete metric space. For example, the Sierpinski gasket is the unique nonempty compact solution to \(K=\sigma_{a}(K)\cup\sigma_{b}(K)\cup\sigma_{c}(K)\). In this section, we are going to provide a similar characterization for regular subfractals that will play an important role in the completeness proof in Section 4.
One way to think of an \(n\)-state LTS \((X,\alpha)\) is as a system of formal equations
\[x_{i}=a_{k_{1}}x_{j_{1}}+\dots+a_{k_{m}}x_{j_{m}}\]
indexed by \(X=\{x_{1},\dots,x_{n}\}\), where \(x_{i}\xrightarrow{a_{k_{t}}}_{\alpha}x_{j_{t}}\) for each \(t\leq m\), with \(k_{1},\dots,k_{m},j_{1},\dots,j_{m}\leq n\).
Given a contraction operator interpretation \(\sigma:A\to\mathrm{Con}(M,d)\), and an LTS \((X,\alpha)\), we call a function \(\varphi:X\to\mathbf{K}(M,d)\) a _(\(\sigma\)-)solution_ to \((X,\alpha)\) if for any \(x\in X\),
\[\varphi(x)=\bigcup_{x\xrightarrow{a}y}\sigma_{a}(\varphi(y))\]
Let \(\mathbf{S}\) be the Sierpinski gasket as a subset of \(\mathbb{R}^{2}\). Let \((X,\alpha)\) be the LTS in Figure 1. Then we have a single state, \(x\), with \(x\xrightarrow{a,b,c}x\). The function \(\varphi\colon X\to\mathbf{K}(\mathbb{R}^{2},d)\) given by \(\varphi(x)=\mathbf{S}\) is a solution to \((X,\alpha)\), because \(\mathbf{S}=\sigma_{a}(\mathbf{S})\cup\sigma_{b}(\mathbf{S})\cup\sigma_{c}( \mathbf{S})\).
Finite productive LTSs have unique solutions.
Let \((M,d)\) be a complete metric space, \(\sigma:A\to\mathrm{Con}(M,d)\), and \((X,\alpha)\) be a finite productive LTS. Then \((X,\alpha)\) has a unique solution \(\varphi_{\alpha}\).
The proof of Lemma 3.7 makes use of the _Hausdorff metric_ on \(\mathbf{K}(M,d)\), defined
\[d(K_{1},K_{2})=\max\left\{\sup_{u\in K_{1}}\inf_{v\in K_{2}}d(u,v),\sup_{v\in K_{ 2}}\inf_{u\in K_{1}}d(u,v)\right\} \tag{4}\]
This equips \(\mathbf{K}(M,d)\) with the structure of a metric space. If \(M\) is complete, so is \(\mathbf{K}(M,d)\). Incidentally, we need to restrict to _nonempty_ sets in (4). This is the primary motivation for the guardedness condition which we imposed on our terms. We also recall the _Banach fixed-point theorem_, which allows for the computation of fixed-points by iteration.
[Banach [2]] Let \((M,d)\) be a complete nonempty metric space and \(f\colon M\to M\) a contraction map. Then for any \(q\in M\), \(\lim_{n\in\mathbb{N}}f^{n}(q)\) exists and is the unique fixed-point of \(f\).
Fix a complete nonempty metric space \((M,d)\), a productive finite LTS \((X,\alpha)\), and a contraction operator interpretation \(\sigma:A\to\operatorname{Con}(M,d)\). To compute the solution to \((X,\alpha)\), we iteratively apply a matrix-like operator to the set \(\mathbf{K}(M,d)^{X}\) of vectors \([K_{x_{1}},\dots,K_{x_{n}}]\) with entries in \(\mathbf{K}(M,d)\) indexed by \(X\). Formally, we define
\[\left[\alpha\right]_{\sigma}\colon\,\mathbf{K}(M,d)^{X}\to\mathbf{K}(M,d)^{X} \qquad\left(\left[\alpha\right]_{\sigma}\vec{K}\right)_{x}=\bigcup_{x \xrightarrow{a}y}\sigma_{a}(K_{y})\]
for each \(x\in X\). Intuitively, \(\left[\alpha\right]_{\sigma}\) acts like an \(X\times X\)-matrix of unions of contractions.
Proof of Lemma 3.7.: Every fixed-point of \(\left[\alpha\right]_{\sigma}\) corresponds to a solution of \((X,\alpha)\). Given a fixed-point \(\vec{F}\), i.e., \(\left[\alpha\right]_{\sigma}\vec{F}=\vec{F}\), and defining \(\varphi\colon X\to\mathbf{K}(M,d)\) by \(\varphi(x)=F_{x}\), we see that
\[\varphi(x)=F_{x}=(\left[\alpha\right]_{\sigma}\vec{F})_{x}=\bigcup_{x \xrightarrow{a}y}\sigma_{a}(F_{y})=\bigcup_{x\xrightarrow{a}y}\sigma_{a}( \varphi(y))\]
Conversely, if \(\varphi:X\to\mathbf{K}(M,d)\) is a solution to \((X,\alpha)\), then defining \(F_{x}=\varphi(x)\) we have
\[F_{x}=\varphi(x)=\bigcup_{x\xrightarrow{a}y}\sigma_{a}(\varphi(y))=\bigcup_{x \xrightarrow{a}y}\sigma_{a}(F_{y})=(\left[\alpha\right]_{\sigma}\vec{F})_{x}\]
for each \(x\in X\). Thus, it suffices to show that \(\left[\alpha\right]_{\sigma}\) has a unique fixed-point. By the Banach Fixed-Point Theorem, we just need to show that \(\left[\alpha\right]_{\sigma}\) is a contraction operator. That is, \(\left[\alpha\right]_{\sigma}\in\operatorname{Con}(\mathbf{K}(M),d)\), where \(d\) is the Hausdorff metric. This point is standard in the fractals literature; cf. [14].
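A finite-approximation version of this fixed-point iteration is easy to run: represent each component by a finite point set, apply \([\alpha]_{\sigma}\), and repeat. The sketch below is our own construction (with the same assumed contractions as before) for the two-state system of Figure 2; by the contraction estimate, the Hausdorff distance to the true components shrinks by a factor of \(1/2\) per iteration.

```python
# Sketch: Banach iteration of [alpha]_sigma on finite point-cloud
# approximations of compact sets.
import numpy as np

VERTS = {"a": np.array([0.0, 0.0]),
         "b": np.array([0.5, np.sqrt(3) / 2]),
         "c": np.array([1.0, 0.0])}
LTS = {"x": [("a", "x"), ("b", "y"), ("c", "y")],
       "y": [("b", "x"), ("c", "x")]}

def apply_operator(clouds):
    """One application of [alpha]_sigma to an X-indexed family of point sets."""
    return {s: np.vstack([(clouds[t] + VERTS[a]) / 2.0 for (a, t) in LTS[s]])
            for s in LTS}

clouds = {s: np.zeros((1, 2)) for s in LTS}     # any nonempty sets will do
for _ in range(10):
    clouds = apply_operator(clouds)
print({s: len(clouds[s]) for s in LTS})         # sizes of the two approximations
```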
### Fractal Semantics and Solutions
Recall that the fractal semantics of a process term \(e\) with respect to a contraction operator interpretation \(\sigma\colon A\to\operatorname{Con}(M,d)\) is the set \(\llbracket e\rrbracket_{\sigma}\) of limits of streams applied to points in the complete metric space \((M,d)\).
Let \((X,\alpha)\) be a finite productive LTS and let \(x\in X\). Given \(e\in\mathsf{Term}\), a complete metric space \((M,d)\), and \(\sigma\colon A\to\operatorname{Con}(M,d)\),
1. \(\llbracket x\rrbracket_{\alpha,\sigma}\in\textbf{K}(M,d)\), i.e., \(\llbracket x\rrbracket_{\alpha,\sigma}\) is nonempty and compact.
2. \(\llbracket-\rrbracket_{\alpha,\sigma}:X\to\textbf{K}(M,d)\) is the unique solution to \((X,\alpha)\).
Proof.: To see _1._, let \(A^{\omega}\) be the set of streams from \(A\) and define the metric space \((A^{\omega},d)\) by
\[d((a_{1},\dots),(b_{1},\dots))=\prod_{i=1}^{n}c_{a_{i}},\]
where \(n+1\) is the first index at which \(a_{i}\neq b_{i}\) and \(c_{a_{i}}\) is a given nonzero contraction coefficient of \(\sigma_{a_{i}}\). Since \(A\) is finite, the space \((A^{\omega},d)\) is a compact metric space1, within which \(\operatorname{str}(x)\) is compact. The function \(\sigma_{\omega}\colon A^{\omega}\to M\) is continuous (in fact, \(D\)-Lipschitz where \(D\) is the diameter of \(\mathbf{S}_{\sigma}\)), so \(\llbracket x\rrbracket_{\alpha,\sigma}\) is the continuous image of a compact set and is therefore compact.
Footnote 1: It is the Cantor set on \(A\)-many symbols.
For _2._, it suffices to see that \(\llbracket-\rrbracket_{\alpha,\sigma}\) is a solution to \((X,\alpha)\), since \((X,\alpha)\) is finite and thus has a unique solution by Lemma 3.7. Given \(x\in X\), let \(p\in\llbracket x\rrbracket_{\alpha,\sigma}\) and suppose \(p=\sigma_{\omega}(a_{1},a_{2},\dots)\) for some stream \((a_{1},a_{2},\dots)\) emitted by \(x\). Then for some \(y\in X\), \(x\xrightarrow{a_{1}}y\) and \(y\) emits \((a_{2},\dots)\). If \(q=\sigma_{\omega}(a_{2},\dots)\), then \(q\in\llbracket y\rrbracket_{\alpha,\sigma}\) and by continuity of \(\sigma_{a_{1}}\), \(p=\sigma_{a_{1}}(q)\). It follows that \(p\in\sigma_{a_{1}}(\llbracket y\rrbracket_{\alpha,\sigma})\). Conversely, if \(x\xrightarrow{a}y\) and \(q\in\llbracket y\rrbracket_{\alpha,\sigma}\), let \(q=\sigma_{\omega}(a_{1},\dots)\) for some stream \((a_{1},\dots)\in\operatorname{str}(y)\). Then \((a,a_{1},\dots)\in\operatorname{str}(x)\), so \(\sigma_{a}(q)\in\llbracket x\rrbracket_{\alpha,\sigma}\). Since \(a\), \(y\), and \(q\) were arbitrary,
\[\llbracket x\rrbracket_{\alpha,\sigma}=\bigcup_{x\xrightarrow{a}y}\sigma_{a}( \llbracket y\rrbracket_{\alpha,\sigma})\]
In particular, \((\mathsf{Term},\gamma)\) is locally finite, and so by Lemma 3.7 has a unique solution. Theorem 3.9 therefore implies that this solution is \(\llbracket-\rrbracket_{\sigma}\).
We obtain the following, which can be seen as an analogue of Kleene's theorem for regular expressions [16], as a direct consequence of Theorem 3.9.
A subset of a self-similar set is a regular subfractal if and only if it is a component of a solution to a finite productive LTS.
## 4 Fractal Equivalence is Traced
We have seen that finite productive LTSs (LTSs that only emit infinite streams) can be specified by process terms. We also introduced a family of fractal sets called regular subfractals, those subsets of self-similar sets obtained from the streams emitted by a finite productive LTS. An LTS itself is representative of a certain system of equations, and set-wise the system of equations is solved by the regular subfractals corresponding to it. Going from process terms to LTSs to regular subfractals, we see that a process term is representative of a sort of _uninterpreted fractal recipe_, which tells us how to obtain a regular subfractal from an interpretation of action symbols as contractions on a complete metric space.
Given \(e,f\in\mathsf{Term}\), we write \(e\approx f\) if for every complete metric space \((M,d)\) and every contraction operator interpretation \(\sigma:A\to\operatorname{Con}(M,d)\), \(\llbracket e\rrbracket_{\sigma}=\llbracket f\rrbracket_{\sigma}\). We say that \(e\) and \(f\) are _fractal equivalent_ or that they are _equivalent fractal recipes_.
Let \(e,f\in\mathsf{Term}\). Then \(e\approx f\) if and only if \(\operatorname{str}(e)=\operatorname{str}(f)\).
In essence, this is a soundness/completeness theorem for our version of Rabinovich's logic with respect to the fractal semantics presented above. Our proof relies on the logical characterization of trace equivalence that we saw in Theorem 2.7.
[Soundness] For any \(e,f\in\mathsf{Term}\), if \(e\equiv f\), then \(e\approx f\).
Our proof essentially consists of two parts. First, we observe that LTS homomorphisms (coalgebra homomorphisms) preserve fractal equivalence.
**Lemma 4.4**.: _Let \((M,d)\) be a complete metric space and \(\sigma:A\to\operatorname{Con}(M,d)\) be a contraction operator interpretation. Let \((X,\alpha)\) and \((Y,\beta)\) be productive LTSs, let their solutions be \(\varphi_{\alpha}\) and \(\varphi_{\beta}\), and let \(f\colon(X,\alpha)\to(Y,\beta)\) be a morphism. Then \(\varphi_{\alpha}=\varphi_{\beta}\circ f\)._
Proof.: Recall the notation \(\operatorname{str}(x)\) and \(\operatorname{tr}x\) and Lemma 2.2. In the notation of this result, it is easy to see that \(\operatorname{tr}(x)=\operatorname{tr}(f(x))\) for all \(x\in X\). Thus, \(\operatorname{str}(x)=\operatorname{str}(f(x))\). Now this preservation result follows from the definition of \(\llbracket x\rrbracket_{\alpha,\sigma}\) in (3) and Theorem 3.9.
This is a step towards soundness because it connects to known process theory literature: Two states of an LTS are said to be _bisimilar_ if they are identified by an LTS homomorphism, and _bisimilarity_ is the equivalence relation consisting of all pairs of bisimilar states [3]. It was shown by Milner in [23] that all but (DS) from our axiom set in Figure 4 are sound equivalences with respect to bisimilarity. This concludes the first step of soundness.
The second step is a near-direct consequence of a basic fact about function images: Under a function, the image of a union is the union of its images.
**Lemma 4.5**.: _Let \(e,f\in\mathsf{Term}\) and \(a\in A\). For any contraction operator interpretation \(\sigma:A\to\operatorname{Con}(M,d)\), and any \(a\in A\),_
\[\llbracket e+f\rrbracket_{\sigma} =\llbracket e\rrbracket_{\sigma}\cup\llbracket f \rrbracket_{\sigma} \tag{5}\] \[\llbracket ae\rrbracket_{\sigma} =\sigma_{a}(\llbracket e\rrbracket_{\sigma}) \tag{6}\]
Proof.: Applying Lemma 2.2 to the LTS Term, we see that for all process terms \(e\) and \(f\), \(\operatorname{str}(e)=\operatorname{str}(f)\) if and only if \(\operatorname{tr}(e)=\operatorname{tr}(f)\). Now the structure of Term as an LTS implies that \(\operatorname{str}(e+f)=\operatorname{str}(e)\cup\operatorname{str}(f)\). Equation (5) follows easily from this. We argue similarly for (6): \(\operatorname{str}(ae)\) is exactly the set \(\operatorname{str}(e)\) with all streams in it prefixed by \(a\). To see that \(\llbracket ae\rrbracket_{\sigma}\subseteq\sigma_{a}(\llbracket e\rrbracket_{ \sigma})\), let \(p\in\llbracket ae\rrbracket_{\sigma}\). There is a stream \((a,a_{1},\dots)\in\operatorname{str}(e)\) such that \(p=\lim_{n\in\mathbb{N}}\sigma_{a}\sigma_{a_{1}}\dots\sigma_{a_{n}}(q)\) (for arbitrary \(q\in M\)). Since \(\sigma_{a}\) is continuous, \(p=\sigma_{a}(\lim_{n\in\mathbb{N}}\sigma_{a_{1}}\dots\sigma_{a_{n}}(q))\), so \(p\in\sigma_{a}(\llbracket e\rrbracket_{\sigma})\). The converse is similar.
Proof of Lemma 4.3.: By induction on derivations using the axioms in Figure 4 together with the rules of equational logic. Milner's work in [23] covers all of the axioms except for the distributivity axiom (DS), so it suffices to check that (DS) is sound with respect to fractal equivalence and that fractal equivalence is a congruence.
Fix an interpretation \(\sigma:A\to\operatorname{Con}(M,d)\). Using Lemma 4.5, we have
\[\llbracket a(e+f)\rrbracket=\sigma_{a}(\llbracket e+f\rrbracket)=\sigma_{a}( \llbracket e\rrbracket\cup\llbracket f\rrbracket)=\sigma_{a}(\llbracket e \rrbracket)\cup\sigma_{a}(\llbracket f\rrbracket)=\llbracket ae\rrbracket\cup \llbracket af\rrbracket=\llbracket ae+af\rrbracket\]
This establishes the soundness of (DS). Lastly, we need to check that fractal equivalence is a congruence, meaning that it is preserved by the algebraic operations. More precisely, if \(e_{i}\approx f_{i}\) for \(i\leq n\) and \(g\) is a term with free variables \(v_{1},\dots,v_{n}\), then
\[g[\vec{e}/\vec{v}]\approx g[\vec{f}/\vec{v}] \tag{7}\]
We proceed by induction on \(g\).
1. If \(g=v_{1}\in V\), then \(g[e_{1}/v_{1}]=e_{1}\approx f_{1}=g[f_{1}/v_{1}]\).
2. If \(g=ag^{\prime}\) and (7) is true of \(g^{\prime}\), then by Lemma 4.5, \[\llbracket g[\vec{e}/\vec{v}]\rrbracket=\llbracket ag^{\prime}[\vec{e}/\vec{v} ]\rrbracket=\sigma_{a}\,\llbracket g^{\prime}[\vec{e}/\vec{v}]\rrbracket= \sigma_{a}\,\llbracket g^{\prime}[\vec{f}/\vec{v}]\rrbracket=\llbracket g[ \vec{f}/\vec{v}]\rrbracket\]
3. If \(g=g_{1}+g_{2}\) and (7) holds for \(g_{1},g_{2}\), then again use Lemma 4.5.
4. If \(g=\mu w\ g^{\prime}\), it suffices to consider the case where \(w\neq v_{i}\) for any \(i\leq n\). Assume (7) for \(g^{\prime}\). Then by soundness of (FP), \[\llbracket g[\vec{e}/\vec{v}]\rrbracket =\llbracket\mu w\ g^{\prime}[\vec{e}/\vec{v}]\rrbracket\] \[=\llbracket g^{\prime}[\vec{e}/\vec{v},\mu w\ g^{\prime}[\vec{e}/ \vec{v}]/w]\rrbracket\] \[=\llbracket g^{\prime}[\vec{f}/\vec{v},\mu w\ g^{\prime}[\vec{e}/ \vec{v}]/w]\rrbracket\] \[=\llbracket\mu w\ g^{\prime}[\vec{f}/\vec{v}]\rrbracket\]
The last equation holds by soundness of (UA).
This completes the proof.
[Completeness] For any \(e,f\in\mathsf{Term}\), if \(e\approx f\), then \(e\equiv f\).
Proof.: Consider the space \((A^{\omega},d)\) of streams from \(A\) with the metric below:
\[d((a_{1},\dots),(b_{1},\dots))=\inf\left\{2^{-n}\ \big{|}\ (\forall i\leq n)\ a_{i }=b_{i}\right\}\]
This space is the _Cantor set on \(A\) symbols_, a compact metric space. For any productive LTS \((X,\alpha)\) and \(x\in X\), \(\operatorname{str}(x)\) is a nonempty closed subset of \((A^{\omega},d)\), for the following reason: Given a Cauchy sequence \(\{(a_{1}^{(i)},\dots)\}_{i\in\mathbb{N}}\) in \(\operatorname{str}(x)\), let \((a_{1},\dots)\) be its limit in \((A^{\omega},d)\). Then \(x\) emits every finite initial segment of \((a_{1},\dots)\), because for any \(N\in\mathbb{N}\) there is an \(m\in\mathbb{N}\) such that \((a_{1},\dots,a_{m},a_{m+1}^{(m)},\dots)\in\operatorname{str}(x)\) for \(m>N\). By compactness of \((A^{\omega},d)\), we therefore have \(\operatorname{str}(x)\in\mathbf{K}(A^{\omega},d)\), so \(\operatorname{str}:X\to\mathbf{K}(A^{\omega},d)\).
For each \(a\in A\), let \(\sigma_{a}:A^{\omega}\to A^{\omega}\) be the map \(\sigma_{a}(a_{1},\dots)=(a,a_{1},\dots)\). Then \(\sigma:A\to\operatorname{Con}(A^{\omega},d)\). By construction, \(\operatorname{str}(x)=\bigcup_{x\xrightarrow{a}y}\sigma_{a}(\operatorname{ str}(y))\) for any \(x\in X\). By the uniqueness of solutions we saw in Lemma 3.7, we therefore have \(\operatorname{str}(x)=\llbracket x\rrbracket_{\alpha,\sigma}\).
To finish the proof, consider \((\mathsf{Term},\gamma)\). If \(e,f\in\mathsf{Term}\) and \(e\approx f\), then in particular, \(\operatorname{str}(e)=\operatorname{str}(f)\), because \(\operatorname{str}=\llbracket-\rrbracket_{\gamma,\sigma}\) with \(\sigma\colon A\to\operatorname{Con}(A^{\omega},d)\) as above. Since \((\mathsf{Term},\gamma)\) is productive, \(\operatorname{str}(e)=\operatorname{str}(f)\) implies \(\operatorname{tr}(e)=\operatorname{tr}(f)\), so in particular, \(e\) and \(f\) are trace equivalent. By Rabinovich's Theorem, Theorem 2.7, \(e\equiv f\), as desired.
## 5 A Calculus of Subfractal Measures
Aside from showing the existence of self-similar sets and their correspondence with contraction operator interpretations (in Hutchinson's terminology, iterated function systems), Hutchinson also shows that every probability distribution on the contractions corresponds to a unique measure, called the _invariant measure_, that satisfies a certain recursive equation and whose support is the self-similar set. In this section, we replay the story up to this point, but with Hutchinson's invariant measure construction instead of the invariant (self-similar) set construction. We make use of a probabilistic version of LTSs called _labelled Markov chains_, as well as a probabilistic version of Milner's specification language introduced by Stark and Smolka [28] to specify fractal measures. Similar to how fractal equivalence coincides with trace equivalence, _fractal measure equivalence_ is equivalent to a probabilistic version of trace equivalence due to Kerstan and Konig [15].
### Invariant measures
Recall that a _Borel probability measure_ on a metric space \((M,d)\) is a \([0,\infty]\)-valued function \(\rho\) defined on the Borel subsets of \(M\) (the smallest \(\sigma\)-algebra containing the open balls of \((M,d)\)) that is countably additive and assigns \(\rho(\emptyset)=0\) and \(\rho(M)=1\).
Hutchinson shows in [14] that, given \(\sigma\colon A\to\operatorname{Con}(M,d)\), each probability distribution \(\rho\colon A\to[0,1]\) on \(A\) gives rise to a unique Borel probability measure \(\hat{\rho}\), called the _invariant measure_, satisfying the equation below and supported by the self-similar set \(\mathbf{S}_{\sigma}\):
\[\hat{\rho}(B)=\sum_{a\in A}\rho(a)\ \sigma_{a}^{\#}\hat{\rho}(B)\]
Here and elsewhere, the _pushforward measure_\(f^{\#}\hat{\rho}\) with respect to a continuous map \(f\) is defined by \(f^{\#}\hat{\rho}(B)=\hat{\rho}(f^{-1}(B))\) for any Borel subset \(B\) of \((M,d)\).
We can view the specification \(\rho\) of the invariant measure \(\hat{\rho}\) as a one-state Markov process with a loop labelled with each letter from \(A\), similar to how self-similar sets are specified with a one-state productive LTS. We can adapt this construction to multiple states by moving from probability distributions on \(A\) to _labelled Markov chains_, where again, the labels are interpreted as contraction maps.
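A hedged numerical sketch of Hutchinson's construction: draw long i.i.d. words with letter distribution \(\rho\), push them through the (assumed) gasket contractions, and use the empirical distribution to estimate \(\hat{\rho}\) on a Borel set. The probabilities and the test set below are arbitrary illustrative choices.

```python
# Sketch: Monte Carlo estimate of the invariant measure hat{rho} for the gasket
# contractions (assumed vertex placement) and a letter distribution rho on {a,b,c}.
import numpy as np

rng = np.random.default_rng(0)
VERTS = np.array([[0.0, 0.0], [0.5, np.sqrt(3) / 2], [1.0, 0.0]])   # a, b, c
rho = np.array([0.2, 0.3, 0.5])                                     # rho(a), rho(b), rho(c)

def sample_points(n_points, depth=40):
    """Apply `depth` random contractions to many starting points at once.
    (For i.i.d. letters the application order does not affect the distribution.)"""
    pts = np.zeros((n_points, 2))
    for _ in range(depth):
        labels = rng.choice(3, size=n_points, p=rho)
        pts = (pts + VERTS[labels]) / 2.0
    return pts

pts = sample_points(100_000)
# Estimated hat{rho}-mass of the Borel set {(u, v) : u < 1/2}:
print(np.mean(pts[:, 0] < 0.5))
```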
### Labelled Markov Chains
Let \(\mathcal{D}\) denote the finitely supported probability distribution functor on the category of sets.
A _labelled Markov chain_ (LMC) is a pair \((X,\beta)\) consisting of a set \(X\) of states and a function \(\beta\colon X\to\mathcal{D}(A\times X)\). A _homomorphism_ of LMCs \(h\colon(X,\beta_{X})\to(Y,\beta_{Y})\) is a function \(h:X\to Y\) such that \(\mathcal{D}(h)\circ\beta_{X}=\beta_{Y}\circ h\). We write \(x\xrightarrow{r|a}_{\beta}y\) if \(\beta(x)(a,y)=r\), often dropping the symbol \(\beta\) if it is clear from context.
As we have already seen, given a contraction operator interpretation \(\sigma\colon A\to\operatorname{Con}(M,d)\), every state \(x\) of a productive LTS \((X,\alpha)\) with labels in \(A\) corresponds to a regular subfractal \(\llbracket x\rrbracket_{\alpha,\sigma}\) of \(\mathbf{S}_{\sigma}\). This regular subfractal is defined to be the continuous image of the set \(\operatorname{str}(x)\) under the map \(\sigma_{\omega}\colon(A^{\omega},d_{\sigma})\to(M,d)\), where \(d_{\sigma}\) is determined by the contraction coefficients of the \(\sigma_{a}\)'s as follows: Given a nonzero contraction coefficient \(c_{a}\) of \(\sigma_{a}\) for each \(a\in A\), define \(d_{\sigma}((a_{1},\dots),(b_{1},\dots))=\prod_{i=1}^{n}c_{a_{i}}\), where \(n\) is the least index such that \(a_{n+1}\neq b_{n+1}\). The family \(\llbracket x\rrbracket_{\alpha,\sigma}\) is characterized by its satisfaction of the equations representing the LTS \((X,\alpha)\).
Every LMC \((X,\beta)\) has an _underlying_ LTS \((X,\bar{\beta})\), where \(\bar{\beta}(x)=\{(a,y)\mid\beta(x)(a,y)>0\}\). For each \(x\in X\), we are going to define a probability measure \(\hat{\beta}_{\sigma}(x)\) on \(\mathbf{S}_{\sigma}\) whose support is \(\llbracket x\rrbracket_{\beta,\sigma}\), and that satisfies a recursive system of equations represented by the LMC \((X,\beta)\). Roughly, \(\hat{\beta}_{\sigma}(x)\) is the pushforward of a certain Borel probability measure \(\hat{\beta}(x)\) on \(A^{\omega}\) that does not depend on the contraction operator interpretation \(\sigma\).
We begin by topologizing \(A^{\omega}\), using as a basis the sets of the form
\[B_{a_{1}\cdots a_{n}}=\{(a_{1},\dots,a_{n},b_{1},\dots)\mid(b_{1},\dots)\in A ^{\omega}\}\]
Given a state \(x\) of a LMC \((X,\beta)\) and a word \(w=a_{1}\cdots a_{n}\), we follow Kerstan and Konig [15] and define the _trace measure_ of the basic open set \(B_{w}\) by
\[\hat{\beta}(x)(B_{w})=\sum\{r_{1}\cdots r_{n}\mid x\xrightarrow{r_{1}|a_{1}}x _{1}\to\cdots\xrightarrow{r_{n}|a_{n}}x_{n}\} \tag{8}\]
where \(\hat{\beta}(x)(B_{\epsilon})=\hat{\beta}(x)(A^{\omega})=1\). This defines a unique Borel probability measure on \((A^{\omega},d)\).
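The cylinder-set values in Eq. (8) are straightforward to compute by recursion on the word; the dictionary encoding of the LMC below is an illustrative assumption of ours.

```python
# Sketch of the trace measure of Eq. (8): the mass of the cylinder B_w is the
# sum, over all paths labelled by w, of the products of transition probabilities.
def trace_measure(lmc, x, w):
    """hat{beta}(x)(B_w), with lmc[x] a list of (prob, label, successor) triples."""
    if not w:
        return 1.0                         # B_epsilon is the whole space A^omega
    return sum(r * trace_measure(lmc, y, w[1:])
               for (r, a, y) in lmc[x] if a == w[0])

# A two-state LMC: x emits a and stays with prob 1/2, emits b and moves with 1/2;
# y emits b forever.
lmc = {"x": [(0.5, "a", "x"), (0.5, "b", "y")],
       "y": [(1.0, "b", "y")]}
print(trace_measure(lmc, "x", "aab"))      # 0.5 * 0.5 * 0.5 = 0.125
```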
Let \(j\colon A^{*}\to[0,1]\) satisfy \(j(w)=\sum_{a\in A}j(wa)\) for any \(w\in A^{*}\) and \(j(\epsilon)=1\), where \(\epsilon\) is the empty word. Then there is a unique Borel probability measure \(\rho\) on \((A^{\omega},d)\) such that for any \(w\in A^{*}\), \(\rho(B_{w})=j(w)\).
Proof.: This is an easy consequence of the Identity and Extension Theorems for \(\sigma\)-finite premeasures. See Propositions 2.3 to 2.5 of [15].
In particular, given any LMC \((X,\beta)\), \(\hat{\beta}(x)(B_{w})=\sum_{a\in A}\hat{\beta}(x)(B_{wa})\), so there is a unique Borel probability measure \(\hat{\beta}(x)\) on \(A^{\omega}\) such that (8) holds for any basic open set \(B_{w}\).
Let \((X,\beta)\) be a LMC, and \(\sigma\colon A\to\operatorname{Con}(M,d)\) be a contraction operator interpretation in a complete metric space. For each \(x\in X\), we define the _regular subfractal measure_ corresponding to \(x\) to be \(\hat{\beta}_{\sigma}(x)=\sigma_{\omega}^{\#}\hat{\beta}(x)\).
Intuitively, the regular subfractal measure of a state in a LMC under a contraction operator interpretation computes the probability that, if run stochastically according to the probabilities labelling the edges, the sequence of points of \(M\) observed in the run eventually lands within a given Borel subset of \((M,d)\).
#### Systems of Probabilistic Equations
Given a complete metric space \((M,d)\), let \(\mathbf{P}(M,d)\) be the set of Borel probability measures on \((M,d)\). In previous sections, we made use of the fact that, when \(\sigma\colon A\to\operatorname{Con}(M,d)\), we can see \(\mathbf{K}(M,d)\) as a _semilattice with operators_, i.e., union acts as a binary operation \(\cup\colon\mathbf{K}(M,d)^{2}\to\mathbf{K}(M,d)\) and each \(\sigma_{a}\colon\mathbf{K}(M,d)\to\mathbf{K}(M,d)\) distributes over \(\cup\). Analogously, equipped with \(\sigma\colon A\to\operatorname{Con}(M,d)\), \(\mathbf{P}(M,d)\) is a _convex algebra with operators_. Formally, for any \(r\in[0,1]\), there is a binary operation \(\oplus_{r}\colon\mathbf{P}(M,d)^{2}\to\mathbf{P}(M,d)\) defined \((\rho_{1}\oplus_{r}\rho_{2})(B)=r\rho_{1}(B)+(1-r)\rho_{2}(B)\), over which each \(\sigma_{a}^{\#}\) distributes, i.e.,
\[\sigma_{a}^{\#}(\rho_{1}\oplus_{r}\rho_{2})=\sigma_{a}^{\#}\rho_{1}\oplus_{r} \sigma_{a}^{\#}\rho_{2}\]
We also make use of a summation notation defined by
\[r_{1}\cdot\rho_{1}\oplus\cdots\oplus r_{n}\cdot\rho_{n}=\rho_{n}\oplus_{r_{n}} \left(\frac{r_{1}}{1-r_{n}}\cdot\rho_{1}\oplus\cdots\oplus\frac{r_{n-1}}{1-r_ {n}}\cdot\rho_{n-1}\right)\]
for any \(r_{1},\ldots,r_{n}\in[0,1)\).
Given a contraction operator interpretation, an LMC \((X,\beta)\) can be thought of as a system of equations with one side a polynomial term in a convex algebra with operators,
\[x_{i}=r_{i1}\cdot a_{i1}x_{k_{1}}\oplus r_{i2}\cdot a_{i2}x_{k_{2}}\oplus \cdots\oplus r_{im}\cdot a_{im}x_{k_{m}}\]
where \(X=\{x_{1},\ldots,x_{n}\}\) and \(x_{i}\xrightarrow{r_{ij}|a_{ij}}x_{k_{j}}\) for each \(j\leq m\).
Let \((X,\beta)\) be a LMC, and let \(\sigma\colon A\to\operatorname{Con}(M,d)\). A _solution_ to \((X,\beta)\) is a function \(\varphi\colon X\to\mathbf{P}(M,d)\) such that for any \(x\in X\) and any Borel set \(B\),
\[\varphi(x)(B)=\sum_{x\xrightarrow{r|a}y}r\;\sigma_{a}^{\#}(\varphi(y))(B)\]
Every finite LMC admits a unique solution, and moreover, the unique solution is the regular subfractal measure from Definition 5.
Let \((X,\beta)\) be a LMC, \(x\in X\), and \(\sigma\colon A\to\operatorname{Con}(M,d)\). Then the map \(\hat{\beta}_{\sigma}\colon X\to\mathbf{P}(M,d)\) is the unique solution to \((X,\beta)\).
Proof.: We begin by showing that \((X,\beta)\) has a unique solution, and end by showing that \(\hat{\beta}_{\sigma}\) is a solution.
To see that \((X,\beta)\) has a unique solution, we show that every solution is the fixed point of a contraction operator on the \(X\)-fold product of the complete metric space \((\mathbf{P}(M,d),L)\), where \(L\) is the _Kantorovich metric_[11]
\[L(\rho,\theta)=\sup\left\{\int_{M}\varphi\ d\rho-\int_{M}\varphi\ d\theta\ \bigg{|}\ \varphi\colon M\to\mathbb{R}\ \text{nonexpanding}\right\}\]
Hutchinson shows that \((\mathbf{P}(M,d),L)\) is a complete metric space in [14], and that \(\sigma_{a}^{\#}\colon\mathbf{P}(M,d)\to\mathbf{P}(M,d)\) is a contraction with the same coefficient as \(\sigma_{a}\).
Now define the operator \([\beta]_{\sigma}\colon\mathbf{P}(M,d)^{X}\to\mathbf{P}(M,d)^{X}\) by setting
\[[\beta]_{\sigma}(\vec{\rho})_{x}(B)=\sum_{x\xrightarrow{r|a}y}r\ \sigma_{a}^{\#}\rho_{y}(B)\]
We need to show that \([\beta]_{\sigma}\) is a contraction in the product metric. To this end, let \(\vec{\rho},\vec{\theta}\in\mathbf{P}(M,d)^{X}\), and suppose that for any \(x\in X\), \(L(\rho_{x},\theta_{x})\leq\delta\) for some \(\delta>0\). Let \(c\) be the maximum contraction coefficient of the \(\sigma_{a}\)'s. Then, given \(x\in X\),
\[L([\beta]_{\sigma}(\vec{\rho})_{x},[\beta]_{\sigma}(\vec{\theta})_{x})\] \[=\sup\left\{\int_{M}\varphi\ d[\beta]_{\sigma}(\vec{\rho})_{x}-\int_{M} \varphi\ d[\beta]_{\sigma}(\vec{\theta})_{x}\ \bigg{|}\ \varphi\colon M\to\mathbb{R}\ \text{nonexp.}\right\}\] \[=\sup\left\{\sum_{x\xrightarrow{r|a}y}r\left(\int_{M}\varphi\ d\sigma_{a}^{\#}\rho_{y}-\int_{M}\varphi\ d\sigma_{a}^{\#}\theta_{y}\right)\ \Bigg{|}\ \varphi\colon M\to\mathbb{R}\ \text{nonexp.}\right\}\] \[\leq\sum_{x\xrightarrow{r|a}y}rc\sup\left\{\int_{M}\varphi\ d\rho_{y}-\int_{M}\varphi\ d\theta_{y}\ \bigg{|}\ \varphi\colon M\to\mathbb{R}\ \text{nonexp.}\right\}\] \[=\sum_{x\xrightarrow{r|a}y}rcL(\rho_{y},\theta_{y})\] \[\leq c\delta\]
It follows that \([\beta]_{\sigma}\) is a contraction in the product metric, and therefore has a unique fixed-point. Equivalently, \((X,\beta)\) has a unique solution.
Now we check that \(\hat{\beta}_{\sigma}\) is a solution to \((X,\beta)\). Let \(B\) be any Borel subset of \(A^{\omega}\). We begin with an observation. Let \(C\subseteq A^{*}\) be a prefix-closed language, and set \(C_{n}=\{w\in C\ |\ |w|=n\}\) for each \(n\in\mathbb{N}\). Suppose \(C\) satisfies the following property: (*) for any \(n\in\mathbb{N}\) and \(u\in C_{n}\), \(B\subseteq\bigcup_{w\in C_{n}}B_{w}\) but \(B\not\subseteq\bigcup_{w\in C_{n}\setminus\{u\}}B_{w}\). Then \(B=\bigcap_{n\in\mathbb{N}}\bigcup_{w\in C_{n}}B_{w}\), and in particular, for any Borel measure \(\rho\) on \(A^{\omega}\), \(\rho(B)=\lim_{n\in\mathbb{N}}\sum_{w\in C_{n}}\rho(B_{w})\).
Now let \(U\) be a Borel subset of \((M,d)\) and take \(B=\sigma_{\omega}^{-1}(U)\). Let \(x\in X\), and recall that \(\hat{\beta}_{\sigma}(x)(U)=\sigma_{\omega}^{\#}\hat{\beta}(x)(U)=\hat{\beta}(x)( B)\). Let \(C=\{w\ |\ (\exists\vec{a}\in\operatorname{str}(x))\ w\vec{a}\in B\}\), and observe
that \(C\) satisfies (*) above. We have
\[\hat{\beta}_{\sigma}(x)(U) =\lim_{n\in\mathbb{N}}\sum_{w\in C_{n}}\hat{\beta}(x)(B_{w})\] \[=\lim_{n\in\mathbb{N}}\sum_{w\in C_{n}}\sum_{x\xrightarrow{r|a}y}r\,\hat{\beta}(y)(a^{-1}B_{w})\] \[=\sum_{x\xrightarrow{r|a}y}r\lim_{n\in\mathbb{N}}\sum_{w\in C_ {n}}\hat{\beta}(y)(a^{-1}B_{w})\] \[=\sum_{x\xrightarrow{r|a}y}r\,\sigma_{a}^{\#}\hat{\beta}_{\sigma}(y)(U)\]
as desired.
Since the support of \(\hat{\beta}(x)\) is precisely \(\operatorname{str}(x)\), the support of \(\hat{\beta}_{\sigma}(x)\) is precisely \(\sigma_{\omega}(\operatorname{str}(x))\), which we have already seen is the regular subfractal determined by the state \(x\) of the underlying LTS of \((X,\beta)\).
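For a rough numerical feel for \(\hat{\beta}_{\sigma}\), one can sample finite runs of an LMC, push the sampled words through the (assumed) contractions in the order required by \(\sigma_{\omega}\), and average; the chain, the probabilities, and the test set below are all illustrative choices of ours.

```python
# Sketch: Monte Carlo estimate of the regular subfractal measure hat{beta}_sigma(x).
import numpy as np

rng = np.random.default_rng(1)
VERTS = {"a": np.array([0.0, 0.0]),
         "b": np.array([0.5, np.sqrt(3) / 2]),
         "c": np.array([1.0, 0.0])}
lmc = {"x": [(0.4, "a", "x"), (0.3, "b", "y"), (0.3, "c", "y")],
       "y": [(0.5, "b", "x"), (0.5, "c", "x")]}

def sample_point(state, depth=40):
    word = []
    for _ in range(depth):                     # sample a finite run of the chain
        probs, moves = zip(*[(r, (a, y)) for (r, a, y) in lmc[state]])
        a, state = moves[rng.choice(len(moves), p=probs)]
        word.append(a)
    p = np.zeros(2)
    for a in reversed(word):                   # apply sigma_{a_1} ... sigma_{a_n}(p)
        p = (p + VERTS[a]) / 2.0
    return p

pts = np.array([sample_point("x") for _ in range(5_000)])
print(np.mean(pts[:, 1] < 0.25))               # estimated mass of {(u, v) : v < 1/4}
```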
### Probabilistic Process Algebra
Finally, we introduce a syntax for specifying LMCs. Our specification language is essentially the _productive_ fragment of Stark and Smolka's process calculus [28], meaning that the expressions do not involve deadlock and all variables are guarded.
The set of probabilistic terms is given by the grammar
\[v\mid ae\mid e_{1}\oplus_{r}e_{2}\mid\mu v\ e\]
Here \(r\in[0,1]\), and otherwise we make the same stipulations as in the definition of terms in Section 2. The set of probabilistic process terms \(\mathsf{PTerm}\) consists of the closed and guarded probabilistic terms.
Instead of _languages_ of streams, the analog of trace semantics appropriate for probabilistic process terms is a measure-theoretic semantics consisting of trace measures introduced earlier in this section (Equation (8)).
We define the LMC \((\mathsf{PTerm},\delta)\) in Figure 6 and call it the _syntactic LMC_. The _trace measure semantics_ \(\operatorname{trm}(e)\) of a probabilistic process term \(e\) is defined to be \(\operatorname{trm}(e)=\hat{\delta}(e)\). Given \(\sigma\colon A\to\operatorname{Con}(M,d)\), the _subfractal semantics of \(e\in\mathsf{PTerm}\) corresponding to \(\sigma\)_ is \(\hat{\delta}_{\sigma}(e)\).
Intuitively, the trace measure semantics of a process term \(e\) assigns a Borel set of streams \(B\) the probability that \(e\) eventually emits a word in \(B\). Trace measure semantics can be computed inductively as follows.
**Lemma 5.8**.: _For any \(w\in A^{*}\), \(a\in A\), \(e,e_{i}\in\mathsf{PTerm}\), and \(r\in[0,1]\), \(\operatorname{trm}(e)(A^{\omega})=1\) and_
\[\operatorname{trm}(ae)(B_{w}) =\begin{cases}\operatorname{trm}(e)(B_{u})&w=au\\ 0&\text{otherwise}\end{cases}\] \[\operatorname{trm}(e_{1}\oplus_{r}e_{2})(B_{w}) =r\operatorname{trm}(e_{1})(B_{w})+(1-r)\operatorname{trm}(e_{2}) (B_{w})\] \[\operatorname{trm}(\mu v\ e)(B_{w}) =\operatorname{trm}(e[\mu v\ e/v])(B_{w})\]
Similar to the situation with trace semantics and regular subfractals, trace measure semantics and subfractal measure semantics identify the same probabilistic process terms.
**Theorem 5.9**.: _Let \(e,f\in\mathsf{PTerm}\). Then \(\operatorname{trm}(e)=\operatorname{trm}(f)\) if and only if for any contraction operator interpretation \(\sigma\colon A\to\operatorname{Con}(M,d)\), \(\hat{\delta}_{\sigma}(e)=\hat{\delta}_{\sigma}(f)\)._
### Axiomatization
Figure 7 outlines an inference system for determining when the subfractal measures corresponding to two expressions coincide.
**Definition 5.10**.: _Given \(e,f\in\mathsf{PTerm}\), write \(e\equiv f\) and say that \(e\) and \(f\) are provably equivalent if the equation \(e=f\) can be derived from inference rules in Figure 7._
**Theorem 5.11** (Soundness).: _For any \(e,f\in\mathsf{PTerm}\), if \(e\equiv f\), then for any complete metric space \((M,d)\) and any \(\sigma\colon A\to\operatorname{Con}(M,d)\), \(\hat{\delta}_{\sigma}(e)=\hat{\delta}_{\sigma}(f)\)._
Unlike the situation with trace equivalence, it is not known if these axioms are complete with respect to subfractal measure semantics. We leave this as a conjecture.
**Conjecture 5.12** (Completeness).: _Figure 7 is a complete axiomatization of trace measure semantics. That is, for any \(e,f\in\mathsf{PTerm}\), if for any complete metric space \((M,d)\) and any \(\sigma\colon A\to\operatorname{Con}(M,d)\) we have \(\hat{\delta}_{\sigma}(e)=\hat{\delta}_{\sigma}(f)\), then \(e\equiv f\)._
We expect that Conjecture 5.12 can be proven in a similar manner to Theorem 4.6.
## 6 A Question about Regular Subfractals
Certain regular subfractals that have been generated by LTSs with multiple states happen to coincide with self-similar sets using a different alphabet of action symbols and under a different contraction operator interpretation. For example, the twisted Sierpinski gasket in Figure 2 is the self-similar set generated by the iterated function system consisting of the compositions \(\sigma_{a},\sigma_{b}\sigma_{b},\sigma_{b}\sigma_{c},\sigma_{c}\sigma_{b}\), and \(\sigma_{c}\sigma_{c}\).
Figure 7: Axioms for probabilistic trace equivalence. Above, \(e,e_{1},e_{2}\in\mathsf{PTerm}\), \(a\in A\), \(r,s\in[0,1]\), and \(rs\neq 1\). Also, in (AE), \(v\) is not free in \(e\).
**Question 1**.: Is every regular subfractal a self-similar set? In other words, are there regular subfractals which can only be generated by a multi-state LTS?
**Example 6.1**.: To illustrate the subtlety of this question, consider the following LTS.
The state \(x\) emits \((a,a,\ldots)\) (an infinite stream of \(a\)'s) and \((a,\ldots,a,b,b,\ldots)\), a stream with some finite number (possibly \(0\)) of \(a\)'s followed by an infinite stream of \(b\)'s. Now let \(M=\mathbb{R}\) with Euclidean distance and consider the contraction operator interpretation \(\sigma_{a}(r)=\frac{1}{2}r\) and \(\sigma_{b}(r)=\frac{1}{2}r+\frac{1}{2}\). Let \(K=\{0\}\cup\{\frac{1}{2^{n}}|n\geq 0\}\). Then \(K\) is the component of the solution at \(x\). This example is interesting because unlike the Twisted Sierpinski gasket in Figure 2, there is no obvious finite set of compositions \(\sigma_{a}\) and \(\sigma_{b}\) such that \(K\) is the self-similar set generated by that iterated function system.
There is an LTS \((X,\alpha)\) with \(X\) a singleton set \(\{x\}\), and a contraction operator interpretation whose solution at \(x\) is \(K\). We take the set of action labels to be \(B=\{f,g,h\}\) and use the contraction operator interpretation \(\sigma_{f}(r)=0\), \(\sigma_{g}(r)=1\) and \(\sigma_{h}(r)=\frac{1}{2}r\). It is easy to verify that \(K=\bigcup_{i\in\{f,g,h\}}\sigma_{i}(K)\).
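A quick numerical sanity check of this identity on a finite truncation of \(K\) (our own, purely illustrative):

```python
# Check that K = {0} U {2^-n : n >= 0} satisfies K = sigma_f(K) U sigma_g(K) U sigma_h(K)
# for sigma_f(r)=0, sigma_g(r)=1, sigma_h(r)=r/2, on the truncation 2^0 .. 2^-59.
K = {0.0} | {2.0 ** -n for n in range(60)}
image = {0.0, 1.0} | {r / 2 for r in K}      # sigma_f(K) U sigma_g(K) U sigma_h(K)
deepest = 2.0 ** -60                         # artefact of the truncation
print(K == image - {deepest})                # True
```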
But we claim that \(K\) is not obtainable using a single-state LTS and _the same contractions_\(\sigma_{a}(r)=\frac{1}{2}r\) and \(\sigma_{b}(r)=\frac{1}{2}r+\frac{1}{2}\), or using _any (finite) compositions_ of \(\sigma_{a}\) and \(\sigma_{b}\). Indeed, suppose there were such a finite collection \(\sigma_{1},\ldots\sigma_{n}\) consisting of (finite) compositions of \(\sigma_{a}\) and \(\sigma_{b}\) such that \(K=\bigcup_{i=1}^{n}\sigma_{i}(K)\). Since \(1\in K\), we must be using the stream \((b,b,b,\ldots)\) (since if there is an \(a\) at position \(n\), the number obtained would be \(\leq 1-\frac{1}{2^{n}}<1\)), so some \(\sigma_{i}\) must consist of a composition of \(\sigma_{b}\) some number \(m\geq 1\) of times with itself. Similarly, the only way to obtain \(0\) is with \((a,a,a,\ldots)\), so there must be some \(\sigma_{j}\) which is a composition of \(\sigma_{a}\) some number of times \(p\geq 1\) with itself. But then \(\lim_{n\to\infty}\sigma_{i}\circ\sigma_{j}\circ\sigma_{i}^{n}(r)=1-(\frac{2^ {p}-1}{2^{m+p}})>\frac{1}{2}\), since \(m,p\geq 1\). That point must be in the subset of \(\mathbb{R}\) generated by this LTS. However, it is not in \(K\), since \(\frac{1}{2}<1-(\frac{2^{p}-1}{2^{m+p}})<1\).
More generally, we cannot obtain \(K\) using a single-state LTS even if we allowed finite sums of compositions of \(\sigma_{a}\) and \(\sigma_{b}\).
Once again, it is possible to find a single state LTS whose corresponding subset of \(\mathbb{R}\) is \(K\), but to do this we needed to _change the alphabet and also the contractions_. Perhaps un-coincidentally, the constant operators are exactly the limits of the two contractions from the original interpretation. Our question is whether this can always be done.
On the other hand, the thesis of Boore [7] may contain an answer to Question 1. Boore presents a family of 2-state GIFSs whose _attractors_, the total unions of their regular subfractals, are not self-similar. Attractors of GIFSs are not precisely the same as regular subfractals, so additional work is required to adapt Boore's work to answer Question 1.
## 7 Related Work
This paper is part of a larger effort of examining topics in continuous mathematics from the standpoint of coalgebra and theoretical computer science. The topic itself is quite old, and originates perhaps with Pavlovic and Escardo's paper "Calculus in Coinductive Form" [24]. Another early contribution is Pavlovic and Pratt [25]. These papers proposed viewing some structures in continuous mathematics--the real numbers, for example, and power series expansions--in terms of final coalgebras and streams. The next stage in this line of work was a set of papers specifically about fractal sets and final coalgebras. For example, Leinster [17]
offered a very general theory of self-similarity that used categorical modules in connection with the kind of gluing that is prominent in constructions of self-similar sets. In a different direction, papers like [4] showed that for some very simple fractals (such as the Sierpinski gasket treated here), the final coalgebras were Cauchy completions of the initial algebras.
**Generalizations of IFSs.** Many generalizations of Hutchinson's self-similar sets have appeared in the literature. The generalization that most closely resembles our own is that of an attractor for a _directed-graph iterated function system_ (GIFS) [19]. A LTS paired with a contraction operator interpretation is equivalent data to that of a GIFS, and equivalent statements to Lemma 3.7 can be found for example in [9, 19, 20]. As opposed to the regular subfractal corresponding to one state, as we have studied above, the geometric object studied in the GIFSs literature is typically the union of the regular subfractals corresponding to all the states (in our terminology), and properties such as Hausdorff dimension and connectivity are emphasized. We also distinguish our structures from GIFSs because we need to allow the interpretations of the labels to vary in our semantics.
Another generalization is Mihail and Miculescu's notion of attractor for a _generalized iterated function system_[20]. A generalized IFS is essentially that of Hutchinson's IFS with multi-arity contractions--equivalent to a single-state labelled transition system where labels have 'higher arity'. A common generalization of GIFSs and generalized IFSs could be achieved by considering coalgebras of the form \(X\to\mathcal{P}(\coprod_{n\in\mathbb{N}}A_{n}\times X^{n})\) and interpreting each \(a\in A_{n}\) as an \(n\)-ary contraction. We suspect that a similar story to the one we have outlined in this paper is possible for this common generalization.
**Process algebra.** The process terms we use to specify labelled transition systems and labelled Markov chains are fragments of known specification languages. Milner used process terms to specify LTSs in [23], and we have repurposed his small-step semantics here. Stark and Smolka use probabilistic process terms to specify labelled Markov chains (in our terminology) in [28], and we have used them for the same purpose. Both of these papers also include complete axiomatizations of bisimilarity, and we have also repurposed their axioms.
However, fractal semantics is strictly coarser than bisimilarity, and in particular, bisimilarity of process terms is trace equivalence. Rabinovich added a single axiom to Milner's axiomatization to obtain a sound and complete axiomatization of trace equivalence of expressions [26], which allowed us to derive Theorem 4.6. In contrast, the axiomatization of trace equivalence for probabilistic processes is only well-understood for _finite_ traces (see Silva and Sokolova [27]), whereas our probabilistic process terms give rise to infinite traces. We use the trace semantics of Kerstan and Konig [15] because it takes into account infinite traces. Infinite trace semantics has yet to see a complete axiomatization in the literature.
**Other types of syntax.** In this paper, we used the specification language of \(\mu\)-terms as our basic syntax. As it happens, there are two other flavors of syntax that we could have employed. These are _iteration theories_[5], and terms in the Formal Language of Recursion \(FLR\), especially its \(FLR_{0}\) fragment. The three flavors of syntax for fixed point terms are compared in a number of papers: In [13], it was shown that there is an equivalence of categories between \(FLR_{0}\) structures and iteration theories, and Bloom and Esik make a similar connection between iteration theories and the \(\mu\)-calculus in [6]. Again, these results describe general matters of equivalence, but it is not completely clear that for a specific space or class of spaces that they are equally powerful or equally convenient specification languages. We feel this matter deserves some investigation.
**Equivalence under hypotheses.** A specification language fairly close to iteration theories was used by Milius and Moss to reason about fractal constructions in [22] under the guise of
_interpreted solutions_ to recursive program schemes [21]. Moreover, [22] contains important examples of reasoning about the equality of fractal sets under assumptions about the contractions. Based on the general negative results on reasoning from hypotheses in the logic of recursion [13], we would not expect a completeness theorem for fractal equivalence under hypotheses. However, we do expect to find sound logical systems which account for interesting phenomena in the area.
## 8 Conclusion
This paper connects fractals to trace semantics, a topic originating in process algebra. This connection is our main contribution, because it opens up a line of communication between two very different areas of study. The study of fractals is a well-developed area, and like most of mathematics it is pursued without a special-purpose specification language. When we viewed process terms as recipes for fractals, we provided a specification language that was not present in the fractals literature. Of course, one also needs a contraction operator interpretation to actually define a fractal, but the separation of syntax (the process terms) and semantics (the fractals obtained using contraction operator interpretations of the syntax) is something that comes from the tradition of logic and theoretical computer science. Similarly, the use of a logical system and the emphasis on soundness and completeness is a new contribution here.
All of the above opens questions about fractals and their specifications. Our most concrete question was posed in Section 6. We would also like to know if we can obtain completeness theorems allowing for extra equations in the axiomatization. Lastly, and most speculatively, since LTSs (and other automata) appear so frequently in decision procedures from process algebra and verification, we would like to know if our semantics perspective on fractals can provide new complexity results in fractal geometry.
We hope we have initiated a line of research where questions and answers come from both the analytic side and from theoretical computer science.
### Acknowledgements
Todd Schmid was partially funded by ERC Grant Autoprobe (grant agreement 10100269). Lawrence Moss was supported by grant #586136 from the Simons Foundation. We would like to thank Alexandra Silva and Dylan Thurston for helpful discussions. The images in Figures 1 and 2 were made using SageMath and GIMP.
|
2304.13226 | Cooperative Hierarchical Deep Reinforcement Learning based Joint Sleep
and Power Control in RIS-aided Energy-Efficient RAN | Energy efficiency (EE) is one of the most important metrics for envisioned 6G
networks, and sleep control, as a cost-efficient approach, can significantly
lower power consumption by switching off network devices selectively.
Meanwhile, the reconfigurable intelligent surface (RIS) has emerged as a
promising technique to enhance the EE of future wireless networks. In this
work, we jointly consider sleep and transmission power control for RIS-aided
energy-efficient networks. In particular, considering the timescale difference
between sleep control and power control, we introduce a cooperative
hierarchical deep reinforcement learning (Co-HDRL) algorithm, enabling
hierarchical and intelligent decision-making. Specifically, the meta-controller
in Co-HDRL uses cross-entropy metrics to evaluate the policy stability of
sub-controllers, and sub-controllers apply the correlated equilibrium to select
optimal joint actions. Compared with conventional HDRL, Co-HDRL enables more
stable high-level policy generations and low-level action selections. Then, we
introduce a fractional programming method for RIS phase-shift control,
maximizing the sum-rate under a given transmission power. In addition, we
proposed a low-complexity surrogate optimization method as a baseline for RIS
control. Finally, simulations show that the RIS-assisted sleep control can
achieve more than 16\% lower energy consumption and 30\% higher EE than
baseline algorithms. | Hao Zhou, Medhat Elsayed, Majid Bavand, Raimundas Gaigalas, Steve Furr, Melike Erol-Kantarci | 2023-04-26T01:26:02Z | http://arxiv.org/abs/2304.13226v2 | Cooperative Hierarchical Deep Reinforcement Learning based Joint Sleep, Power, and RIS Control for Energy-Efficient HetNet
###### Abstract
Energy efficiency (EE) is one of the most important metrics for 5G and future 6G networks to reduce energy costs and control carbon footprint. Sleep control, as a cost-efficient approach, can significantly lower power consumption by switching off network devices selectively. Meanwhile, the reconfigurable intelligent surface (RIS) has emerged as a promising technique to enhance the EE of 5G beyond and 6G networks. In this work, we jointly consider sleep and transmission power control for RIS-aided energy-efficient heterogeneous networks (HetNets). In particular, we first propose a fractional programming (FP) method for RIS phase-shift control, which aims to maximize the sum-rate under given transmission power levels. Then, considering the timescale difference between sleep control and power control, we introduce a cooperative hierarchical deep reinforcement learning (Co-HDRL) algorithm, including a cross-entropy enabled meta-controller for sleep control, and correlated equilibrium-based sub-controllers for power control. Moreover, we propose a surrogate optimization method as one baseline for RIS control, and conventional HDRL as another baseline for sleep and power control. Finally, simulations show that the RIS-assisted sleep control can achieve more than 16% lower energy consumption and 30% higher energy efficiency than baseline algorithms.
Reconfigurable Intelligent Surfaces, Hierarchical Deep Reinforcement Learning, energy efficiency.
## I Introduction
Energy efficiency (EE) is a critical metric for sustainable 5G and 6G networks. Sleep control has been one of the widely considered approaches to enhance the EE of radio access network (RAN) [1]. Sleep control turns network devices such as base stations (BS) to sleep mode selectively [2]. On the other hand, reconfigurable intelligent surface (RIS) has emerged as a promising technique for amplifying signals in 5G beyond and 6G networks. In particular, a huge number of small and low power consumption elements are integrated into RISs where each is equipped with reflect-arrays to amplify and forward the incoming signal. Since no power amplifier is needed, RISs consume much less energy than conventional relay transceivers. Therefore, RISs can become an ideal solution to improve the EE of RAN [3]. However, these potential solutions for energy-efficient RAN may greatly increase the network management complexity as multiple network functions should be jointly considered to adapt to network dynamics. Fortunately, machine learning techniques offer promising opportunities for intelligent network management and control [4]. The advantage of machine learning algorithms has been extensively studied in many works [5].
In this work, we jointly consider sleep control, transmission power control and RIS control to improve EE performance. Note that the BS transmission power control is involved because: i) if one BS enters the sleep mode, the rest active BSs may need to increase the transmission power to serve the user equipment (UE) that is previously associated with the BS that has switched to sleep mode; ii) BS transmit power and RIS phase shifts are usually jointly optimized, which is known as joint active and passive beamforming [6]; iii) power control can reduce the overall interference level, increase the average Signal-to-interference-plus-noise ratio (SINR), and further improve the EE [7]. However, compared with transmit power and RIS control, sleep control is a long-term decision that affects the network performance for several consecutive time slots. By contrast, power control applies faster decision-making to adapt to the changing traffic demand of UEs. Therefore, such timescale obstacles in decision-making prevent the application of conventional reinforcement learning algorithms [8].
To this end, we deploy a hierarchical architecture for joint sleep and power control, including a meta-controller for long-term sleep control and several sub-controllers for short-term power control. The sleep control is decided every few time slots as a high-level policy for sub-controllers, and sub-controllers can adjust the transmission power in each time slot accordingly. HRL has been used for joint relay selection and power optimization in [9] and multichannel sensing in [10]. But in the existing schemes, the meta-controller uses the average reward of sub-controllers as feedback to assess the high-level policy. Nevertheless, the low-level action selection is constantly changing, and the average reward metric may
have difficulty representing the sub-controller status. Learning multiple levels of policies simultaneously can be difficult due to the inherent instability. Specifically, changes in a policy at one level of the hierarchy may lead to changes in the policies of higher levels, making it difficult to jointly learn multiple levels of policies. For instance, a high average reward may be produced by very unstable sub-controllers that cannot guarantee a satisfying performance. Such unstable low-level policies can mislead the decision-making of higher-level controllers, which has become a critical issue for HRL applications [11]. Consequently, we propose a novel cooperative hierarchical deep reinforcement learning (Co-HDRL) algorithm. For high-level meta-controller, we apply a cross-entropy enabled policy to monitor the stability of sub-controllers, which aims at evaluating the reliability of low-level reward feedback. For low-level sub-controllers, we introduce a correlated equilibrium-based cooperation strategy to stabilize action selections. Therefore, Co-HDRL is expected to produce a stable performance under a hierarchical scheme while allowing a long-term RAN policy to interact with short-term optimizations of network functions.
On the other hand, compared with sleep and power control, RIS phase control has more stringent requirements for fast and real-time optimization [1]. Each RIS element should respond to the incoming signal immediately, and the channel condition such as phase difference may be randomly changed in the next time slot, which requires an optimization algorithm with a fast response. For example, [6] applies an alternating optimization algorithm for active and passive beamforming in RIS-assisted networks, and [12] utilizes fractional programming (FP) techniques to maximize the weighted sum-rate. To better combine sleep and power control with RIS control, we propose an FP-based algorithm to maximize the sum-rate of all UEs [13]. We first apply the quadratic transformation to tackle the multiple-ratio FP terms, and then transform the optimization into a semi-definite programming problem by using the well-known Schur complement.
The main contributions of this work are summarized as follows:
1) We provide a novel hierarchical, intelligent scheme by combining convex optimization-based FP with machine learning-enabled Co-HDRL to improve the EE of heterogeneous networks (HetNets). The Co-HDRL decides the sleep control and BS transmission power level, and then the transmission power is sent to FP for RIS phase-shift optimization.
2) An FP-based RIS phase-shift control method is introduced for multi-BS and multi-RIS HetNet scenarios. It applies an iterative optimization method to maximize the total transmission rate under given power levels. Moreover, we design a surrogate optimization-based baseline algorithm for RIS control, which yields an approximation for the objective function by using surrogate models.
3) We propose a Co-HDRL algorithm for the joint sleep and power control of HetNets, which is capable of handling problems with different timescale actions. More specifically, a cross-entropy enabled policy is applied for high-level meta-controller to evaluate the low-level stability, and correlated equilibrium is deployed for low-level sub-controllers to stabilize the joint action selections. In addition, such joint control capabilities of handling timescale differences are critical to many existing platforms such as O-RAN and associated control loops; i.e. non-RT (with decision horizon in hours) and near-RT RIC (with decision horizon in seconds or milliseconds) [14].
Finally, our simulations show that FP and Co-HDRL can achieve 20% higher throughput, 16% lower power consumption, and 35% higher energy efficiency than other baseline algorithms. Moreover, Co-HDRL presents more stable action selections and higher rewards than conventional HDRL algorithms.
The rest of this work is organized as follows. Section II presents the related work, and Section III gives the network and system model. The FP-based RIS control algorithm is shown in section IV, section V introduces the proposed Co-HDRL, and baseline algorithms are given in section VI. Finally, section VII shows the simulation results, and section VIII concludes this work.
## II Related Work
Machine learning techniques, including Q-learning [15], deep Q-learning [16, 17], transfer learning [18], actor-critic learning [19] have been widely used for wireless network management. For example, Q-learning and deep Q-learning are used in [17] for energy sharing of renewable energy powered base stations to reduce energy footprint. [19] combines actor-critic reinforcement learning with spatial-temporal networks for traffic-aware BS sleep control. [20] defined a deep Q-learning-based scheme with action-wise experience replay and adaptive reward scaling for sleep control. However, it is worth noting that most existing works require actions with the same timescales, which means all the actions have to be decided simultaneously. But in practice, the actions may belong to different agents that apply various decision-making timescales.
Consequently, HRL is proposed to overcome this issue [21]. [9] proposed an HRL-based method for joint relay selection and power optimization in two-hop cooperative relay networks to reduce outage probability. In [10], HRL is used for dynamic spectrum access control, which aims to reduce the complexity of the whole optimization process. In these works, the meta-controller uses the average reward of sub-controllers to evaluate the high-level policy. But the low-level stability issue is not considered since the low-level action selection is constantly changing [11, 22]. Then the unreliable reward feedback from sub-controllers may mislead the policy selection of meta-controllers and degrade the overall performance. In our former work [23], we provide a hierarchical Q-learning-based method for joint sleep and power control, but RIS control is not included in our prior work. In addition, the sub-controller stability is not considered in [23] by using conventional \(\epsilon\)-greedy policy for the action and goal selection for the sub-controller and meta-controller, which may lead to unstable policy selection. In this paper, we propose a Co-HDRL algorithm to overcome this issue, including a cross
entropy enabled meta-controller policy to monitor the low-level stability, and a correlated equilibrium to stabilize the action selection of sub-controllers.
Compared with existing sleep control and power control studies, another difference is that we investigate how RIS technique can contribute to sleep control and improve the EE of HetNets. RISs, as a cost-efficient approach, have become a promising technique to improve the EE of 5G beyond and 6G networks. [24] proposed two algorithms for the transmit power allocation and the phase shifts of RIS elements to maximize the EE metric. [25] studies the tradeoff between energy efficiency and spectral efficiency in RIS-aided networks. Different than these existing works, here, we consider a novel multi-BS and multi-RIS HetNet and study how RISs can assist sleep and power control. We focus on the phase shift optimization of RIS elements and propose an FP-based control method. The proposed RIS control method receives the transmission power information from Co-HDRL as input and optimizes the phase shift to maximize the sum-rate.
## III Network and System Model
As shown in Fig.1, we consider a multi-BS and multi-RIS HetNets environment. Firstly, the small base stations (SBSs) can enter sleep mode when the transmission demand drops, which will reduce the energy consumption and increase overall energy efficiency. Then the MBS will take over the active UEs that are previously covered by sleeping SBSs. Meanwhile, SBS can adjust its transmission power dynamically to achieve the desired SINR for attached UEs and reduce the interference on other SBSs. In this work, we propose a Co-HDRL method for joint sleep and power control of SBSs.
On the other hand, the direct transmission between BSs and UEs may suffer high penetration loss due to dense buildings. Then, RISs are deployed to reshape the signal transmission path from BS to UEs and increase the SINR. Note that one RIS may be shared by several BSs, which means the RIS control will simultaneously affect the performance of several BSs, and we propose an FP-based algorithm for the RIS phase-shift control.
### _RIS-Assisted Channel Model_
The direct transmissions between BS and UEs are usually considered as non-line-of-sight (NLOS) with a high penetration loss caused by dense buildings and complicated propagation environments. Consequently, we assume UEs mainly receive signals by the indirect RIS-assisted transmission that consists of BS-RIS and RIS-UE links [24].
RIS elements are usually deployed on the surface of high buildings, and then the BS-RIS link is considered as line-of-sight (LOS) link:
\[\mathbf{H}_{b,m}=g_{b,m}[\mathbf{h}_{b,m,1},...,\mathbf{h}_{b,m,n},...,\mathbf{h}_{b,m,|\mathcal{ N}_{m}|}], \tag{1}\]
where \(g_{b,m}\) is the path loss from \(b^{th}\) BS to \(m^{th}\) RIS, \(\mathcal{N}_{m}\) is the total number of elements of \(m^{th}\) RIS. The phase difference \(\mathbf{h}_{b,m,n}\) is given by \(\mathbf{h}_{b,m,n}=\exp\left(\frac{-2j\pi d_{b,m,n}}{\lambda}\right)\), where \(j\) is the imaginary unit, \(d_{b,m,n}\) is the distance from \(b^{th}\) BS to \(m^{th}\) RIS, \(\lambda\) is the signal wavelength.
Then, the signal will be reflected by RIS to UEs by a phase shift matrix:
\[\mathbf{\Theta}_{m}=\omega_{m}\text{diag}(\theta_{m,1},...,\theta_{m,n},...,\theta _{m,\mathcal{N}_{m}}), \tag{2}\]
where diag indicates the diagonal operator, \(\omega_{m}\) is the amplitude reflection coefficient of the \(m^{th}\) RIS, and \(\omega_{m}\in[0,1]\). \(\theta_{m,n}=e^{j\theta^{\prime}_{m,n}}\) is the phase shift of the \(n^{th}\) element of the \(m^{th}\) RIS with \(\theta^{\prime}_{m,n}\in\left\{0,\frac{2\pi}{2^{\mu}},\cdots,\frac{2\pi(2^{\mu}-1)}{2^{\mu}}\right\}\), and \(\mu\) is the resolution of the RIS element's phase shifter.
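For illustration, a quantized phase-shift matrix as in (2) can be constructed as follows (the element count, resolution, and amplitude coefficient are assumed values):

```python
# Small sketch of Eq. (2): a discrete phase-shift matrix Theta_m for one RIS
# with N_m elements and a mu-bit phase shifter (values below are assumptions).
import numpy as np

N_m, mu, omega_m = 8, 3, 1.0
levels = 2 * np.pi * np.arange(2 ** mu) / 2 ** mu      # {0, 2*pi/2^mu, ...}
phases = np.random.default_rng(0).choice(levels, N_m)  # one quantized phase per element
Theta_m = omega_m * np.diag(np.exp(1j * phases))
assert np.allclose(np.abs(np.diag(Theta_m)), omega_m)
```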
Considering the complex environment on the UE side, the RIS-UE link follows the Rician fading, which is a combination of LOS and NLOS transmissions [26]:
\[\mathbf{G}_{m,k}=\sqrt{\frac{\tau_{m,k}}{\tau_{m,k}+1}}\overline{\mathbf{G}}_{m,k}+ \sqrt{\frac{1}{\tau_{m,k}+1}}\widetilde{\mathbf{G}}_{m,k}, \tag{3}\]
where \(\tau_{m,k}\) is the Rician factor between \(m^{th}\) RIS and \(k^{th}\) UE. \(\overline{\mathbf{G}}_{m,k}\) is the deterministic LOS component, which is calculated in a similar way as (1). \(\widetilde{\mathbf{G}}_{m,k}\) is the NLOS component, which is given by independent and identically distributed complex Gaussian distribution.
For a downlink transmission between BS \(b\) and UE \(k\), the channel capacity is:
\[\small\small\begin{split} C_{b,k}=& b_{k}\log\bigg{(} 1+&\frac{P_{b}|\sum\limits_{m\in\mathcal{M}}\mathbf{H}_{b,m}\mathbf{\Theta} _{m}\mathbf{G}_{m,k}^{\dagger}|^{2}}{\sum\limits_{b^{\prime}\in\mathcal{B}_{-b}}P _{b^{\prime}}|\sum\limits_{m^{\prime}\in\mathcal{M}}\mathbf{H}_{b^{\prime},m^{ \prime}}\mathbf{\Theta}_{m^{\prime}}\mathbf{G}_{m^{\prime},k}^{\dagger}|^{2}+N_{0}^{2} }\bigg{)},\end{split} \tag{4}\]
where \(b_{k}\) is the bandwidth allocated to UE \(k\), \(\mathbf{G}_{m,k}^{\dagger}\) represents the conjugate transpose of vector \(\mathbf{G}_{m,k}\), \(N_{0}^{2}\) is the noise power, and \(P_{b}\) is the transmission power of \(b^{th}\) BS. \(\mathcal{B}_{-b}\) denotes the set of BSs except \(b^{th}\) BS, and \(\mathcal{M}\) is the set of RISs in the environment.
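For concreteness, the following Python sketch evaluates the cascaded channel model and the resulting SINR and capacity in Eqs. (1)-(4) for one UE; the network sizes, transmit powers, and random channel draws are illustrative assumptions rather than the simulation settings of this paper.

```python
# Minimal sketch (not the paper's code) of the RIS-assisted link model in
# Eqs. (1)-(4): cascaded BS->RIS->UE channels, a diagonal RIS phase-shift
# matrix, and the resulting SINR / capacity. All sizes and channel draws
# below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
B, M, N = 2, 2, 16          # BSs, RISs, elements per RIS (assumed)
P = np.array([1.0, 0.5])    # BS transmit powers P_b in Watts (assumed)
N0_sq = 1e-9                # noise power
bandwidth = 1e6             # Hz allocated to the UE of interest

# BS->RIS LOS channels H[b][m] (1 x N row vectors, Eq. (1), unit path loss here)
H = [[np.exp(-2j * np.pi * rng.uniform(0, 1, N)) for _ in range(M)] for _ in range(B)]
# RIS->UE channels G[m] for one UE (Rician fading of Eq. (3), simplified to CN(0,1))
G = [(rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2) for _ in range(M)]
# RIS phase-shift matrices Theta[m] with unit-modulus entries (Eq. (2), omega_m = 1)
Theta = [np.diag(np.exp(1j * rng.uniform(0, 2 * np.pi, N))) for _ in range(M)]

def cascaded_gain(b):
    """|sum_m H_{b,m} Theta_m G_{m,k}^dagger|^2 for the UE of interest."""
    return abs(sum(H[b][m] @ Theta[m] @ G[m].conj() for m in range(M))) ** 2

serving = 0                                   # the UE is served by BS 0
signal = P[serving] * cascaded_gain(serving)
interference = sum(P[b] * cascaded_gain(b) for b in range(B) if b != serving)
sinr = signal / (interference + N0_sq)
capacity = bandwidth * np.log2(1 + sinr)      # Eq. (4)
print(f"SINR = {sinr:.2f}, capacity = {capacity / 1e6:.2f} Mbps")
```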
Fig. 1: RIS-aided heterogeneous network.
### _Energy Consumption Model_
We assume only SBS can enter the sleep mode, and the energy consumption of the \(b^{th}\) SBS is [27]:
\[E_{b}=\left\{\begin{array}{ll}P_{active}+\delta_{BS}P_{b},&\text{ if }q_{b}=1;\\ P_{sleep},&\text{ if }q_{b}=0,\end{array}\right. \tag{5}\]
where \(q_{b}=1\) indicates active mode, and \(q_{b}=0\) means sleep mode. \(P_{active}\) is the constant energy consumption of SBS in active mode, \(\delta_{BS}\) is the slope of load-dependent energy consumption factor, \(P_{b}\) is the transmission power, and \(P_{sleep}\) is the SBS energy consumption in sleep mode.
### _Problem Formulation_
The overall objective is to maximize the overall EE, and the problem formulation is given by:
\[\max_{q_{b},P_{b},\mathbf{\Theta}_{m}} \frac{\sum_{b\in\mathcal{B}}\sum_{k\in\mathcal{K}_{b}}D_{b,k}}{ \sum_{b\in\mathcal{B}}E_{b}}-\phi\sum_{b\in\mathcal{B}}n_{b}\] (6) s.t. \[\left(1\right)\left(2\right)\left(3\right)\left(4\right)\left(5\right) \tag{6a}\]
where \(D_{b,k}\) is the throughput of \(k^{th}\) UE associated to \(b^{th}\) BS1. \(n_{b}\) is a binary variable: \(n_{b}=1\) means the \(b^{th}\) BS is overloaded; otherwise \(n_{b}=0\). Here, overloading indicates the transmission demand has exceeded the BS channel capacity, and consequently attached UEs may experience a long queuing delay. We define \(\phi\) as a penalty factor to avoid overloading and guarantee network performance. The power consumption of RIS elements is not included here because it is much lower than BS power in practice [24].
Footnote 1: Note that the throughput level \(D_{b,k}\) depends on the user traffic demand and available channel capacity, which means that BS can adjust the transmission power dynamically to save energy consumption when user’s traffic demand is off-peak. Therefore, such definitions can better adapt to wireless network dynamics.
The defined problem (6) includes three control variables \(q_{b}\), \(P_{b}\), and \(\mathbf{\Theta}_{m}\). First, the binary variable \(q_{b}\) in equation (5) indicates the sleep control of SBSs, which will affect the energy consumption \(\sum_{b\in\mathcal{B}}E_{b}\) in the objective. Second, the BS transmission power level \(P_{b}\) in equation (4) will affect both energy consumption and channel capacity. Finally, the channel capacity also depends on the RIS phase shift \(\mathbf{\Theta}_{m}\), which is given by equation (4). In summary, we aim to jointly consider the sleep control, transmission power control, and RIS phase-shift control to optimize the objective defined in equation (6).
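The energy model (5) and the EE objective with overload penalty in (6) can be evaluated directly; the following sketch uses illustrative power-model constants and traffic figures (all assumed) to show how the objective is computed.

```python
# Sketch (illustrative constants) of the BS energy model in Eq. (5) and the
# energy-efficiency objective with overload penalty in Eq. (6).
P_ACTIVE, P_SLEEP, DELTA_BS, PHI = 130.0, 75.0, 4.7, 10.0   # assumed values

def bs_energy(active, p_tx):
    """Eq. (5): load-dependent power of one BS."""
    return P_ACTIVE + DELTA_BS * p_tx if active else P_SLEEP

def ee_objective(throughputs, active_flags, tx_powers, capacities):
    """Eq. (6): total throughput / total energy, minus the overload penalty."""
    total_thr = sum(sum(thr_per_ue) for thr_per_ue in throughputs)
    total_energy = sum(bs_energy(q, p) for q, p in zip(active_flags, tx_powers))
    overloaded = sum(1 for thr_per_ue, cap in zip(throughputs, capacities)
                     if sum(thr_per_ue) > cap)
    return total_thr / total_energy - PHI * overloaded

# one MBS (always on) and two SBSs, one of them asleep
thr = [[2e6, 1e6], [3e6], []]          # per-UE throughput for each BS (bit/s)
print(ee_objective(thr, [True, True, False], [20.0, 6.0, 0.0], [8e6, 2e6, 5e6]))
```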
## IV Fractional Programming based RIS Phase-shift Control
In this section, we introduce the proposed FP-based RIS phase control. Based on equation (4), the SINR of UE \(k\) is:
\[\psi_{k}=\frac{P_{b}|\sum\limits_{m\in\mathcal{M}}\mathbf{H}_{b,m}\mathbf{\Theta} _{m}\mathbf{G}_{m,k}^{\dagger}|^{2}}{\sum\limits_{b^{\prime}\in\mathcal{B}_{ -b}}P_{b^{\prime}}|\sum\limits_{m^{\prime}\in\mathcal{M}}\mathbf{H}_{b^{\prime},m^ {\prime}}\mathbf{\Theta}_{m^{\prime}}\mathbf{G}_{m^{\prime},k}^{\dagger}|^{2}+ N_{0}^{2}} \tag{7}\]
The objective of RIS phase-shift control is to maximize the total data rate for all UEs, which is given by:
\[\max_{\mathbf{\Theta}_{m}} f_{1}(\mathbf{\Theta}_{m})=\sum_{k\in\mathcal{K}}b_{k}\log(1+\psi_{k})\] (8) s.t. \[|\theta_{m,n}|^{2}=1,m\in\mathcal{M},n\in\mathcal{N}_{m}, \tag{8a}\]
where \(\mathcal{K}\) is the set of UEs, and \(b_{k}\) is the bandwidth allocated to UE \(k\). In equation (8), note that the control variable is \(\mathbf{\Theta}_{m}\), and the BS transmission power \(P_{b}\) depends on the Co-HDRL algorithm, which will be introduced in the next section.
**Proposition 1**: _The problem (8) is equivalent to: [13]_
\[\max_{\mathbf{\Theta}_{m},\mathbf{\beta}} f_{2}(\mathbf{\Theta}_{m},\mathbf{\beta})=\sum_{k\in\mathcal{K}}\bigg{(}b_{k} \log(1+\beta_{k}) \tag{9}\] \[-b_{k}\beta_{k}+\frac{b_{k}(1+\beta_{k})\psi_{k}}{1+\psi_{k}} \bigg{)}\] \[\text{s.t.}(8a),\]
_where \(\mathbf{\beta}=[\beta_{1},\beta_{2},...,\beta_{|\mathcal{K}|}]\) is the auxiliary variables given by Lagrangian dual transform._
_Proof_: See Appendix A.
In equation (9), the Lagrangian dual transform has taken the ratio term out of the logarithm. To solve problem (9), we apply an iterative method that updates \(\mathbf{\Theta}_{m}\) and \(\mathbf{\beta}\) alternately. First, given \(\mathbf{\Theta}_{m}\), setting \(\frac{\partial f_{2}}{\partial\beta_{k}}=0\) yields \(\beta_{k}^{*}=\psi_{k}\). Then, given \(\beta_{k}^{*}\), optimizing \(\mathbf{\Theta}_{m}\) amounts to:
\[\max_{\mathbf{\Theta}_{m}} f_{3}(\mathbf{\Theta}_{m})=\sum_{k\in\mathcal{K}}\frac{b_{k}(1+ \beta_{k})\psi_{k}}{1+\psi_{k}} \tag{10}\] \[=\sum_{k\in\mathcal{K}}\frac{b_{k}(1+\beta_{k})P_{b}|\sum\limits_{ m\in\mathcal{M}}\mathbf{H}_{b,m}\mathbf{\Theta}_{m}\mathbf{G}_{m,k}^{\dagger}|^{2}}{ \sum\limits_{b^{\prime}\in\mathcal{B}}P_{b^{\prime}}|\sum\limits_{m^{\prime}\in \mathcal{M}}\mathbf{H}_{b^{\prime},m^{\prime}}\mathbf{\Theta}_{m^{\prime}}\mathbf{G }_{m^{\prime},k}^{\dagger}|^{2}+N_{0}^{2}}\] s.t. \[(8a),\]
For ease of notation, we define \(\mathbf{\hat{\theta}}_{m}\) by:
\[\mathbf{\hat{\theta}}_{m}=[\theta_{m,1},\theta_{m,2},...,\theta_{m,\mathcal{N}_{m}}] \in\mathbb{C}^{1\times N}, \tag{11}\]
and \(\sqrt{P_{b}}\mathbf{H}_{b,m}\mathbf{\Theta}_{m}\mathbf{G}_{m,k}^{\dagger}\) can be easily transformed to \(\omega_{m}\mathbf{\hat{\theta}}_{m}\text{diag}(\mathbf{H}_{b,m})\mathbf{G}_{m,k}^{ \dagger}\sqrt{P_{b}}\). For notation brevity, we further define \(\mathbf{v}_{b,m,k}=\omega_{m}\text{diag}(\mathbf{H}_{b,m})\mathbf{G}_{m,k}^{\dagger}\sqrt{P_{ b}}\). Then, equation (10) can be rewritten by:
\[\max_{\mathbf{\hat{\Theta}}}\ f_{3}(\mathbf{\hat{\Theta}})=\sum_{k\in\mathcal{K}}\frac{b_{k}(1+\beta_{k})|\sum\limits_{m\in\mathcal{M}}\mathbf{\hat{\theta}}_{m}^{\dagger}\mathbf{v}_{b,m,k}|^{2}}{\sum\limits_{b^{\prime}\in\mathcal{B}}|\sum\limits_{m^{\prime}\in\mathcal{M}}\mathbf{\hat{\theta}}_{m^{\prime}}^{\dagger}\mathbf{v}_{b^{\prime},m^{\prime},k}|^{2}+N_{0}^{2}}=\sum_{k\in\mathcal{K}}\frac{b_{k}(1+\beta_{k})|\mathbf{\hat{\Theta}}\mathbf{V}_{b,k}|^{2}}{\sum\limits_{b^{\prime}\in\mathcal{B}}|\mathbf{\hat{\Theta}}\mathbf{V}_{b^{\prime},k}|^{2}+N_{0}^{2}} \tag{12}\] \[\text{s.t.}\ (8a),\]
where \(\mathbf{\hat{\Theta}}=[\mathbf{\hat{\theta}}_{1}^{\dagger},\mathbf{\hat{\theta}}_{2}^{ \dagger},...,\mathbf{\hat{\theta}}_{|\mathcal{M}|}^{\dagger}]\) represents all the RIS phase control variables, and \(\mathbf{V}_{b,k}=[\mathbf{v}_{b,1,k},\mathbf{v}_{b,2,k},...,\mathbf{v}_{b,|\mathcal{M}|,k}]\).
**Proposition 2**: Based on quadratic transformation, equation (12) is equivalent to [13]:
\[\max_{\mathbf{\hat{\Theta}},\boldsymbol{\eta}}\ f_{4}(\mathbf{\hat{\Theta}},\boldsymbol{\eta})=\sum_{k\in\mathcal{K}}\Big(2\sqrt{b_{k}(1+\beta_{k})}\,\text{Re}\{\eta_{k}^{\dagger}\mathbf{\hat{\Theta}}^{\intercal}\mathbf{V}_{b,k}\}-|\eta_{k}|^{2}\big(\sum_{b^{\prime}\in\mathcal{B}}|\mathbf{\hat{\Theta}}^{\intercal}\mathbf{V}_{b^{\prime},k}|^{2}+N_{0}^{2}\big)\Big) \tag{13}\] \[\text{s.t.}\ (8a),\]
where \(\boldsymbol{\eta}=[\eta_{1},\eta_{2},...,\eta_{|\mathcal{K}|}]\) is a collection of auxiliary variables, and \(\text{Re}\{\cdot\}\) refers to the real part of a complex number.
_Proof_: See Appendix B.
Given proposition 2, we rewrite equation (13) to optimize \(\mathbf{\hat{\Theta}}\):
\[\max_{\mathbf{\hat{\Theta}}} f_{5}(\mathbf{\hat{\Theta}})=2\text{Re}\{\mathbf{\hat{\Theta}}^{ \intercal}\sum_{k\in\mathcal{K}}2\sqrt{b_{k}(1+\beta_{k})}\eta_{k}^{\intercal} \boldsymbol{V}_{b,k}\} \tag{14}\] \[-\mathbf{\hat{\Theta}}^{\intercal}\sum_{k\in\mathcal{K}}|\eta_{k }|^{2}\sum_{b^{\prime}\in\mathcal{B}}|\boldsymbol{V}_{b^{\prime},k}|^{2} \mathbf{\hat{\Theta}}+\sum_{k\in\mathcal{K}}|\eta_{k}|^{2}N_{0}^{2}\] \[=-\mathbf{\hat{\Theta}}^{\intercal}\Lambda\mathbf{\hat{\Theta}}+ 2\text{Re}\{\mathbf{\hat{\Theta}}^{\intercal}\Psi\}+\sum_{k\in\mathcal{K}}| \eta_{k}|^{2}N_{0}^{2}\] s.t. \[(8a),\]
where \(\Lambda=\sum_{k\in\mathcal{K}}|\eta_{k}|^{2}\sum\limits_{b^{\prime}\in \mathcal{B}}|\boldsymbol{V}_{b^{\prime},k}|^{2}\) and \(\Psi=\sum_{k\in\mathcal{K}}2\sqrt{b_{k}(1+\beta_{k})}\eta_{k}^{\intercal} \boldsymbol{V}_{b,k}\) for ease of notations. Note that equation (14) is a quadratically constrained quadratic programming problem, but the phase shift constraint \(|\theta_{m,n}|^{2}=1\) is non-convex. Therefore, we relax this constraint by allowing \(|\theta_{m,n}|^{2}\leq 1\) for convexity, which means:
\[\max_{\mathbf{\hat{\Theta}}} f_{6}(\mathbf{\hat{\Theta}})=-\mathbf{\hat{\Theta}}^{ \intercal}\Lambda\mathbf{\hat{\Theta}}+2\text{Re}\{\mathbf{\hat{\Theta}}^{ \intercal}\Psi\}\] (15) s.t. \[|\theta_{m,n}|^{2}\leq 1,m\in\mathcal{M},n\in\mathcal{N}_{m},\]
which is an optimization problem with strong convexity. The Lagrange dual of this problem can be written as an unconstrained problem:
\[\max_{\mathbf{\hat{\Theta}},\mathbf{\hat{\sigma}}} f_{7}(\mathbf{\hat{\Theta}},\mathbf{\hat{\sigma}})=-\mathbf{\hat{ \Theta}}^{\intercal}\Lambda\mathbf{\hat{\Theta}}+2\text{Re}\{\mathbf{\hat{ \Theta}}^{\intercal}\Psi\} \tag{16}\] \[-\sum_{m\in\mathcal{M}}\sum_{n\in\mathcal{N}_{m}}\sigma_{m,n}(| \theta_{m,n}|^{2}-1),\]
where \(\sigma_{m,n}\) is the Lagrange dual variable for each constraint with \(\mathbf{\hat{\sigma}}=[\boldsymbol{\sigma}_{1},...,\boldsymbol{\sigma}_{m},...,\boldsymbol{\sigma}_{\mathcal{M}}]\), and each \(\boldsymbol{\sigma}_{m}=[\sigma_{m,1},...,\)\(\sigma_{m,n},...,\sigma_{m,|\mathcal{N}_{m}|}]\).
**Proposition 3**: _the optimal value of equation (15) can be achieved by updating optimal \(\mathbf{\hat{\Theta}}\) and \(\mathbf{\hat{\sigma}}\) iteratively._
_Proof_: Setting \(\frac{\partial f_{7}}{\partial\mathbf{\hat{\Theta}}}=0\), then we have the optimal \(\mathbf{\hat{\Theta}}^{\star}\) by:
\[\mathbf{\hat{\Theta}}^{\star}=\frac{\Psi}{\Lambda+\text{diag}(\mathbf{\hat{ \sigma}})}. \tag{17}\]
Then, given \(\mathbf{\hat{\Theta}}^{\star}\), we differentiate (16):
\[\frac{\partial f_{7}}{\partial\sigma_{m,n}} =\frac{\partial(\sum\limits_{m\in\mathcal{M}}\sum\limits_{n\in \mathcal{N}_{m}}\sigma_{m,n}(1-|\theta_{m,n}|^{2}))}{\partial\sigma_{m,n}} \tag{18}\] \[=1-\frac{\partial(\mathbf{\hat{\Theta}}\mathbf{\hat{\Theta}}^{ \intercal}\text{diag}(\mathbf{\hat{\sigma}}))}{\partial\sigma_{m,n}}\] \[=1-|\theta_{m,n}|^{2},\]
Setting \(\frac{\partial f_{7}}{\partial\sigma_{m,n}}=0\) will bring \(|\theta_{m,n}|^{2}=1\), which means each element in \(\mathbf{\hat{\Theta}}^{\star}\) will satisfy the constraints (8a).
Finally, substituting (17) into (16) claims:
\[f_{8}=\frac{\Psi\Psi^{\dagger}}{\Lambda+\text{diag}(\mathbf{\hat{\sigma}})}+ tr(\text{diag}(\mathbf{\hat{\sigma}})), \tag{19}\]
which can be solved by the well-known Schur complement:
\[\max_{\mathbf{\hat{\sigma}},\kappa}\ f_{9}(\mathbf{\hat{\sigma}},\kappa)=\kappa-tr(\text{diag}(\mathbf{\hat{\sigma}})) \tag{20}\] \[\text{s.t.}\quad\begin{bmatrix}\Lambda+\text{diag}(\mathbf{\hat{\sigma}})&\Psi\\ \Psi^{\dagger}&-\kappa\end{bmatrix}\succeq 0.\]
Finally, equation (20) can be efficiently solved as a semidefinite programming (SDP) problem, and we summarize the FP-based RIS phase control in Algorithm 1, which applies an alternating optimization method.
```
1:Initialize: Wireless channel parameters and BS transmission powers. Setting iteration number \(y=1\), selecting a feasible initial value for \(\mathbf{\Theta}^{(y)}\) based on the constraints.
2:repeat
3: Given \(\mathbf{\Theta}^{(y)}\), updating auxiliary variable \(\beta_{k}^{*(y+1)}=\psi_{k}^{(y)}\) by equation (7).
4: Calculating \(\mathbf{\hat{\Theta}}^{(y+1)}\) and \(\boldsymbol{V}_{b,k}^{(y+1)}\) which has been defined in equation (12).
5: Calculating \(\eta^{*(y+1)}\) as in Appendix B, and generating \(\Lambda^{(y+1)}\) and \(\Psi^{(y+1)}\) accordingly.
6: Calculating \(\mathbf{\hat{\sigma}}^{(y+1)}\) by solving problem (20).
7: Updating \(\mathbf{\hat{\Theta}}^{(y+1)}\) by equation (17), and transforming \(\mathbf{\hat{\Theta}}^{(y+1)}\) to \(\mathbf{\Theta}^{(y+1)}\).
8:\(y=y+1\)
9:until Problem (9) converges, or the iteration number \(y\) reaches the predefined maximum value.
10:Output: Maximum data transmission rate and optimal RIS phase shifts under given transmission power of BSs.
```
**Algorithm 1** Fractional programming based RIS phase control
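As an illustration of the alternating structure of Algorithm 1, the sketch below runs the \(\beta\)-update, \(\eta\)-update, and \(\mathbf{\hat{\Theta}}\)-update on randomly drawn cascaded channels. For brevity, the SDP step in (20) is replaced by the unconstrained maximizer of (15) followed by an element-wise projection onto the unit circle, which is a low-complexity stand-in rather than the method of Algorithm 1; the channel draws, dimensions, and the use of \(\log_{2}\) are assumptions of the example.

```python
# Numerical sketch of the alternating loop in Algorithm 1 (illustrative only).
# The SDP dual step of (20) is replaced by the unconstrained maximizer of (15)
# followed by an element-wise projection onto the unit circle; a faithful
# implementation would solve (20) with an SDP solver instead.
import numpy as np

rng = np.random.default_rng(1)
K, B, N = 4, 2, 32                 # UEs, BSs, total RIS elements (assumed)
N0_sq, w = 1e-6, np.ones(K)        # noise power and bandwidth weights
serve = rng.integers(0, B, K)      # serving BS of each UE (assumed association)
# cascaded channels v_{b,k} in C^N, with sqrt(P_b) already absorbed into them
V = (rng.standard_normal((B, K, N)) + 1j * rng.standard_normal((B, K, N))) / np.sqrt(2 * N)

def sum_rate(theta):
    s = np.einsum("n,bkn->bk", theta.conj(), V)          # theta^H v_{b,k}
    p = np.abs(s) ** 2
    sig = p[serve, np.arange(K)]
    sinr = sig / (p.sum(axis=0) - sig + N0_sq)
    return (w * np.log2(1 + sinr)).sum(), p, s, sinr

theta = np.exp(1j * rng.uniform(0, 2 * np.pi, N))        # feasible initial phases
for _ in range(30):
    _, p, s, sinr = sum_rate(theta)
    beta = sinr                                          # Step 3: beta_k* = psi_k
    c = np.sqrt(w * (1 + beta))
    s_k = s[serve, np.arange(K)]                         # serving-link terms
    eta = c * s_k / (p.sum(axis=0) + N0_sq)              # Step 5: optimal eta_k
    A = np.einsum("k,bkn,bkm->nm", np.abs(eta) ** 2, V, V.conj())
    a = np.einsum("k,kn->n", c * eta.conj(), V[serve, np.arange(K)])
    theta = np.linalg.solve(A + 1e-9 * np.eye(N), a)     # unconstrained maximizer
    theta = theta / np.abs(theta)                        # project to |theta_n| = 1
rate, *_ = sum_rate(theta)
print(f"sum-rate after alternating updates: {rate:.3f} bit/s/Hz")
```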
## V Cooperative Hierarchical Deep Reinforcement Learning for Sleep Control and Power Control
We introduce the proposed Co-HDRL algorithm in this section. Co-HDRL applies a hierarchical architecture for decision-making, including a meta-controller for sleep control, and multiple sub-controllers for transmission power control.
### _Hierarchical SMDP, Meta-controller and Sub-controller Definitions_
In this section, we introduce a hierarchical semi-Markov decision process (SMDP), which is used to transform the defined problem formulation into an MDP context.
A conventional MDP scheme can be defined by a tuple \(<S,A,T,R>\), where \(S\) is the state set, \(A\) is the action set, \(T\) is the transition probability, and \(R\) is the reward function, respectively. As shown in Fig.2(a), the agents select an action \(a^{t}\) based on current state \(s^{t}\), then it receives the reward \(r^{t}\) and moves to the next state \(s^{t+\Delta t}\). However, conventional
MDP has difficulty in defining more complicated and realistic problems such as tasks with sparse rewards and different timescale actions. For example, sleep control is obviously a long-term decision, since turning on/off SBSs will greatly affect the network performance. By contrast, power control is a delicate and short-term decision to change the SINR of UEs. Therefore, we include a novel hierarchical SMDP framework to better handle the timescale difference between sleep control and power control.
Compared with conventional MDP, the main difference of SMDP lies in the hierarchical architecture. The SMDP here can be defined by a tuple \(<S,A,T,R,\mathcal{G}>\), where \(\mathcal{G}\) is the set of goals. For example, Fig. 2(b) includes three agents in different levels, and each agent has its own MDPs. Given current state \(s_{h}^{t}\), the highest level agent will first select a goal \(g_{h}^{t}\) at time slot \(t\), and this goal serves as a high-level policy instruction for middle-level agent from time slot \(t\) to \(t+4\Delta t\). Similarly, \(g_{m}^{t}\), which is the goal selected by middle-level agent, can affect the low-level action selection from \(t\) to \(t+2\Delta t\). On the other hand, low-level agents will provide reward feedback to higher-level agents, which can be used to evaluate higher-level goals. For instance, \(r_{l}^{t}\) and \(r_{l}^{t+\Delta t}\), which are low-level rewards, may affect the \(r_{m}^{t}\) calculation, i.e. the middle-level reward.
Moreover, these agents can apply different timescales for decision making: e.g., the highest level selects \(g_{h}^{t}\) every \(4\Delta t\), the middle level chooses \(g_{m}^{t}\) every \(2\Delta t\), and the low level makes decisions every \(\Delta t\). This hierarchical timescale setting enables higher flexibility for SMDP in more complicated and realistic environments, which makes it an ideal model to describe the joint sleep control and power control problem. Meanwhile, Fig. 2(b) reveals that the high-level agents rely on the reward feedback of lower levels for goal evaluation. Unstable feedback from low-level agents can mislead the high-level controllers, and sub-optimal policy may be selected, which will finally degrade the overall system reward. Note that SMDP has many variants, and here we define our SMDP to solve the hierarchical control problem [28]. We use three layers as an example in Fig. 2(b), but the proposed SMDP scheme is compatible with any number of layers.
Based on Fig. 2(b), we define an SMDP with two layers. The MBS is defined as a meta-controller for sleep control, which decides the long-term on/off status of attached SBSs. Then, each SBS is considered as a sub-controller that adjusts its short-term transmission power level. Sleep control is regarded as the high-level policy instruction for SBSs, and SBSs provide reward feedback to the meta-controller to evaluate the sleep control policies.
For the SBS sub-controller, we define the state, action and reward by:
* **State**: For the \(b^{th}\) SBS, the state \(s_{sub}\) is defined by the total transmission demand level of attached UEs: \[s_{sub}=\frac{\sum_{k\in\mathcal{K}_{b}}W_{b,k}}{W_{b}^{max}},\] (21) where \(W_{b,k}\) represents the transmission demand of UE \(k\), \(\mathcal{K}_{b}\) represents the set of UEs that are associated with \(b^{th}\) SBS. \(W_{b}^{max}\) is the max transmission demand of \(b^{th}\) SBS, which is referred to as a constant value to normalize transmission demand in the current time slot. It is assumed the UE daily transmission demand follows specific patterns as in [29].
* **Action:** The SBS sub-controller can adjust its transmission power level \(P_{b}\) to satisfy the transmission demand in the current time slot. Therefore, the action of \(b^{th}\) SBS sub-controller is \(a_{sub}=\{P_{b}\}\).
* **Low-level reward:** The low-level reward of sub-controller is: \[r_{sub}=\left\{\begin{array}{cc}\frac{\sum_{k\in\mathcal{K}_{b}}W_{b,k}}{ E_{b}}-\phi n_{b},&\text{if }q_{b}=1\\ 0,&\text{if }q_{b}=0\end{array}\right.\] (22) where \(\phi\) and \(n_{b}\) are overloading penalty factor and overload indicator, respectively. The overload definition has been given in equation (6). \(q_{b}\) has been defined in equation (5) as the sleep control indicator. \(r_{sub}\) aims to maximize the EE of \(b^{th}\) SBS, and we assume the \(r_{sub}\) is 0 in sleep mode.
The MBS is defined as a meta-controller, which will produce high-level sleep control policies for the SBS sub-controllers:
* **State**: The state of the MBS meta-controller includes the transmission demand level of all BSs: \[s_{meta}=\bigg{[}\frac{\sum\limits_{k\in\mathcal{K}_{1}}W_{1,k}}{W_{1}^{max}},\frac{\sum\limits_{k\in\mathcal{K}_{2}}W_{2,k}}{W_{2}^{max}},...,\frac{\sum \limits_{k\in\mathcal{K}_{|\mathcal{B}|}}W_{|\mathcal{B}|,k}}{W_{|\mathcal{B}|} ^{max}}\bigg{]},\] (23) where \(\mathcal{B}\) is the set of all BSs.
Fig. 2: MDP and SMDP comparison.
* **High-level goals**: Given the transmission demand level of all BSs, the meta-controller can produce sleep control decisions for SBS sub-controllers: \[g_{meta}=[q_{1},q_{2},...,q_{b},...,q_{|\mathcal{B}_{SBS}|}],\] (24) where \(q_{b}\) has been defined in equation (5) as the sleep mode indicator of the \(b^{th}\) SBS, and \(\mathcal{B}_{SBS}\) is the set of SBSs.
* **High-level reward**: The meta-controller is responsible for the overall performance of all BSs. Accordingly, the high-level reward is given by the objective function we have defined in the problem formulation: \[r_{meta}=\frac{\sum_{b\in\mathcal{B}}\sum_{k\in\mathcal{K}_{b}}D_{b,k}}{\sum _{b\in\mathcal{B}}E_{b}}-\phi\sum_{b\in\mathcal{B}}n_{b},\] (25)
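A minimal sketch of how the quantities in Eqs. (21)-(25) can be computed from per-UE traffic demands is given below; the demand figures, energies, capacities, and penalty factor are assumptions for illustration (the high-level goal (24) is simply a list of on/off flags and is passed around as such).

```python
# Sketch of the MDP quantities in Eqs. (21)-(25); traffic figures, energies
# and the penalty factor are illustrative assumptions.
PHI = 10.0

def sub_state(demands_b, w_max_b):                           # Eq. (21)
    return sum(demands_b) / w_max_b

def sub_reward(demands_b, energy_b, overloaded, active):     # Eq. (22)
    return (sum(demands_b) / energy_b - PHI * overloaded) if active else 0.0

def meta_state(demands, w_max):                              # Eq. (23)
    return [sum(d) / m for d, m in zip(demands, w_max)]

def meta_reward(throughputs, energies, overload_flags):      # Eq. (25) / objective (6)
    return sum(map(sum, throughputs)) / sum(energies) - PHI * sum(overload_flags)

demands = [[4e6, 2e6], [1e6], [3e6, 3e6]]                    # per-UE demand per BS (bit/s)
print(sub_state(demands[1], w_max_b=10e6))
print(sub_reward(demands[1], energy_b=160.0, overloaded=0, active=True))
print(meta_state(demands, [20e6, 10e6, 10e6]))
print(meta_reward(demands, [250.0, 160.0, 75.0], [0, 0, 1]))
```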
### _Co-HDRL Framework and Network Training_
The overall framework of our proposed Co-HDRL algorithm is given in Fig. 3, which includes one meta-controller and multiple sub-controllers. The goal selected by the meta-controller will decide the on/off status of sub-controllers.
In each controller, we deploy a double deep Q-learning (DDQN) scheme for the Q-value prediction. In conventional deep Q-learning, the loss function of network training is:
\[L(w)=Er(r^{t}+\gamma\max_{a}Q(s^{t+1},a,w^{\prime})-Q(s^{t},a^{t},w)), \tag{26}\]
where \(s^{t}\), \(a^{t}\) and \(r^{t}\) are the state, action and reward at time slot \(t\), respectively. \(w\) and \(w^{\prime}\) are the weight of the main and target networks. \(\gamma\) is the discount factor (\(0<\gamma<1\)). \(Q(s^{t},a^{t},w)\) is the current Q-value that is predicted by the main network, and \(Q(s^{t+1},a,w^{\prime})\) is the target Q-value produced by target network. In equation (26), note that both action selection and evaluation depend on the target network, which is indicated by \(\max\limits_{a}Q(s^{t+1},a,w^{\prime})\). Consequently, the max operator will constantly generate overoptimistic values for the loss function, which will further affect the Q-value prediction in main network training [30]. To this end, the DDQN scheme is proposed to separate the action selection and evaluation by:
\[\begin{split}& L(w)=Er(r^{t}+\\ &\gamma Q(s^{t+1},\arg\max_{a}Q(s^{t+1},a,w),w^{\prime})-Q(s^{t},a ^{t},w)),\end{split} \tag{27}\]
where \(\arg\max\limits_{a}Q(s^{t+1},a,w)\) indicates the main network selects the next action, and the target network evaluates the action by \(Q(s^{t+1},\arg\max\limits_{a}Q(s^{t+1},a,w),w^{\prime})\). Decoupling the action selection and evaluation can provide a more accurate target for the main network training, which will further reduce the Q-value prediction error.
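The difference between the targets in (26) and (27) is easy to see numerically. In the sketch below, two toy arrays stand in for the Q-values of the main and target networks at the next state (the actual controllers use trained Q-networks):

```python
# Sketch of the double-DQN target in Eq. (27). Two toy arrays stand in for the
# main and target networks; in the actual scheme these are learned Q-networks.
import numpy as np

gamma = 0.9
q_main_next = np.array([1.2, 3.4, 2.8])    # Q(s^{t+1}, a, w) for each action
q_target_next = np.array([1.0, 2.9, 3.1])  # Q(s^{t+1}, a, w') for each action
reward = 0.5

a_star = int(np.argmax(q_main_next))                    # selected by the main network
ddqn_target = reward + gamma * q_target_next[a_star]    # evaluated by the target network
dqn_target = reward + gamma * q_target_next.max()       # Eq. (26) target, for comparison
print(ddqn_target, dqn_target)  # approx. 3.11 vs. 3.29: DDQN avoids the max-operator bias
```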
In Co-HDRL, first, the meta-controller selects a high-level goal \(g_{meta}\), which indicates the sleep control decisions on sub-controllers. \(g_{meta}\) is temporarily fixed in the following several time slots, and sub-controllers select actions, receive rewards, and train their networks accordingly. The transmission power of sub-controllers is sent to the RIS control block for phase shift optimization. Based on the long-term performance of sub-controllers, the meta-controller receives an average reward \(r_{meta}\) from the wireless environment and moves to the next state \(s_{meta}\). The new experience tuple \(<s^{t}_{meta},g^{t}_{meta},r^{t}_{meta},s^{t+1}_{meta}>\) will be sent to the experience pool (shown as the left bottom block in Fig. 3). Then the agent will sample a random mini batch from the experience pool, and train the main network as equation (27). The target network will copy the main network weight after several training, which guarantees a stable target for the main network training.
Fig. 3: Overall architecture of the proposed Co-HDRL.
In Fig. 3, it is worth noting that we define a high-level cross-entropy calculation module (shown by the blue block on the top), and a low-level action selection cooperation module (indicated by the red block on the right). In HRL, the meta-controller produces goals to instruct sub-controllers,
and sub-controllers provide reward feedback to the meta-controller to evaluate the high-level goal. However, although the high-level goal \(g_{meta}\) is temporarily fixed, the action of sub-controllers may be constantly changing during this period. Indeed, the sub-controllers' exploration increases the uncertainty of the reward feedback to the meta-controller. Therefore, this feedback uncertainty can mislead the goal evaluation of meta-controller [11, 22]. To this end, we proposed a cross-entropy enabled policy for high-level meta-controller, and a correlated equilibrium-based cooperation strategy for low-level sub-controllers. Following, we will introduce two techniques in detail.
### _High-level Cross-entropy enabled Meta-controller Policy_
In this section, we propose a cross-entropy enabled policy for meta-controller. More specifically, we use the cross-entropy as a metric to evaluate the stationarity of sub-controllers' actions, then the defined metric is used for high-level goal exploration. For example, goals with high reward feedback but low stationarity may be unreliable, since the low stationarity metric means the sub-controllers are still exploring action combinations. On the other hand, goals that bring high stationarity can be less visited in the exploration phase, because the sub-controllers have entered a stable status and it requires less training [31]. Following, we first present the cross-entropy metric definition, then we introduce how to use the metric for goal exploration and selection.
Given a random variable \(x\), and \(N^{X}\) is the total number of possible outcomes of \(x\), then the entropy of \(x\) in set \(X\) is defined by:
\[I(X)=-\sum_{i=1}^{N^{X}}pr(x_{i}){\rm log}(pr(x_{i})), \tag{28}\]
where \(pr(x_{i})\) is the probability of \(x_{i}\) in set \(X\), and \(\sum_{i=1}^{N^{X}}pr(x_{i})=1\). \(I(X)\) indicates the average level of information and uncertainty of variable \(x_{i}\) in set \(X\).
Then we introduce the Kullback-Leibler divergence to define the relative entropy from one distribution \(X\) to another distribution \(Y\) of variable \(x\)[32] :
\[D_{KL}(X||Y)=\sum_{i=1}^{N^{X}}pr(x_{i}){\rm log}\frac{pr(x_{i})}{pr^{{}^{ \prime}}(x_{i})}, \tag{29}\]
where \(pr^{{}^{\prime}}(x_{i})\) is the probability of \(x_{i}\) in set \(Y\). \(D_{KL}(X||Y)\) indicates the difference of set \(X\) and \(Y\) in terms of probability distribution of \(x\).
In this work, we consider \(X\) as the set of action selection history of sub-controllers, and \(Y\) is the set of actions selected in the current time interval \(\Delta t\). Consequently, we can apply \(D_{KL}(X||Y)\) to measure the stationarity of the low-level action selection policy. The idea behind is that a stationary policy will produce similar action selections in different time intervals, which is indicated by a lower \(D_{KL}(X||Y)\) value. By contrast, a higher \(D_{KL}(X||Y)\) means sub-controllers require more training to stabilize the low-level policies.
**Proposition 4**: _The Kullback-Leibler divergence has the minimum value when \(pr(x_{i})=pr^{{}^{\prime}}(x_{i})\)._
**Proof 4**: _Based on Gibbs' inequality [33]:_
\[\begin{split} D_{KL}(X||Y)&=\sum_{i=1}^{N^{X}}pr(x_{i}){\rm log}\frac{pr(x_{i})}{pr^{{}^{\prime}}(x_{i})}\\ &=-\sum_{i=1}^{N^{X}}pr(x_{i}){\rm log}(\frac{pr^{{}^{\prime}}(x_{i})}{pr(x_{i})})\\ &\quad({\rm Using}\ \log(x)\leq x-1\ {\rm for\ all}\ x>0)\\ &\geq-\sum_{i=1}^{N^{X}}pr(x_{i})(\frac{pr^{{}^{\prime}}(x_{i})}{pr(x_{i})}-1)\\ &=\sum_{i=1}^{N^{X}}pr(x_{i})-\sum_{i=1}^{N^{X}}pr^{{}^{\prime}}(x_{i})\\ &=0.\end{split} \tag{30}\]
\(D_{KL}(X||Y)=0\) if and only if \(X\) and \(Y\) have the same distribution, i.e., \(pr(x_{i})=pr^{{}^{\prime}}(x_{i})\).
Moreover, we rewrite equation (29) by:
\[D_{KL}(X||Y) =\sum_{i=1}^{N^{X}}pr(x_{i}){\rm log}\frac{pr(x_{i})}{pr^{{}^{ \prime}}(x_{i})} \tag{31}\] \[=\sum_{i=1}^{N^{X}}pr(x_{i}){\rm log}(pr(x_{i}))-\sum_{i=1}^{N^{X}} pr(x_{i}){\rm log}(pr^{{}^{\prime}}(x_{i}))\] \[=-I(X)-\sum_{i=1}^{N^{X}}pr(x_{i}){\rm log}(pr^{{}^{\prime}}(x_{i})),\]
where \(I(X)\) is the entropy of the action selection distribution history, and \(Y\) is the action selection distribution in the current time interval. The history distribution \(X\) is generally more stable than the current distribution \(Y\). It means that the \(-\sum_{i=1}^{N^{X}}pr(x_{i}){\rm log}(pr^{{}^{\prime}}(x_{i}))\) term contributes more to the uncertainty; this term is known as the cross-entropy [34]. In this work, we use the cross-entropy to represent the stationarity of the sub-controllers.
We include multiple sub-controller in this work, and the cross-entropy of \(b^{th}\) sub-controller under high-level goal \(g_{meta}\) is:
\[I(X_{b,t-1},Y_{b,t}|g_{meta})=-\sum_{i=1}^{|A_{g_{meta}}|}pr(a_{i}){\rm log}(pr^{{}^{\prime}}(a_{i})), \tag{32}\]
where \(X_{b,t-1}\) is the accumulated action selection distribution of sub-controllers in the former \(t-1\) time slots under high-level goal \(g_{meta}\), \(Y_{b,t}\) is the action selection distribution under \(g_{meta}\) in the current time slot, and \(A_{g_{meta}}\) is the action set of sub-controllers under \(g_{meta}\). The cross-entropy metric \(I(X_{b,t-1},Y_{b,t}|g_{meta})\) quantifies the stationarity of sub-controller action selections under \(g_{meta}\).
Finally, in the exploration phase, the meta-controller selects high-level goals by:
\[pr(g_{meta}|s_{meta})=\frac{{\rm tanh}(\sum_{b\in\mathcal{B}_{-M}}I(X_{b,t-1},Y_{b,t}|g_{meta}))}{\sum\limits_{g_{meta}\in\mathcal{G}}{\rm tanh}(\sum_{b\in\mathcal{B}_{-M}}I(X_{b,t-1},Y_{b,t}|g_{meta}))}. \tag{33}\]
where \(\mathcal{B}_{-M}\) indicates the set of controllers except the MBS meta-controller. The \(\tanh\) indicates the Tanh function, which is applied to normalize all the cross-entropy values.
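The cross-entropy metric (32) and the exploration policy (33) can be computed directly from the recorded action selections. The following sketch is illustrative; the action histories are made up, and a small smoothing constant is an added assumption to keep the logarithm finite when an action has never been selected:

```python
# Sketch of the cross-entropy stationarity metric (32) and the exploration
# policy (33). Action histories are illustrative; epsilon-smoothing is an
# assumption to avoid log(0) for unseen actions.
import numpy as np

EPS = 1e-6

def action_distribution(actions, n_actions):
    counts = np.bincount(actions, minlength=n_actions) + EPS
    return counts / counts.sum()

def cross_entropy(history, current, n_actions):        # Eq. (32) for one sub-controller
    p = action_distribution(history, n_actions)        # accumulated distribution X
    q = action_distribution(current, n_actions)        # current-interval distribution Y
    return -np.sum(p * np.log(q))

def goal_probabilities(ce_per_goal):                   # Eq. (33)
    scores = np.tanh(np.array([sum(ce_list) for ce_list in ce_per_goal]))
    return scores / scores.sum()

# two goals, two sub-controllers each; one goal already yields stable behaviour
ce_per_goal = [
    [cross_entropy([0, 0, 1, 1], [0, 1], 3), cross_entropy([2, 2, 2], [2, 2], 3)],
    [cross_entropy([0, 1, 2, 0], [2, 2], 3), cross_entropy([1, 0, 1], [0, 0], 3)],
]
print(goal_probabilities(ce_per_goal))  # the less stable goal is explored more often
```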
In summary, equation (33) shows that goals that lead to lower stability will be more frequently selected in the exploration phase, which is indicated by a high cross-entropy value. It guarantees that sub-controllers can provide reliable and stable feedback to the meta-controller for goal evaluation. The meta-controllers and sub-controllers are updated by Fig.4. The meta-controller will first select a goal \(g^{t}_{meta}\) at time \(t\), which is temporarily fixed in the next \(N_{T}\Delta t\) time slot. Then, sub-controllers select the actions in each \(\Delta t\) under current state \(s_{sub}\) and high-level goal \(g^{t}_{meta}\). Finally, the actions and rewards of sub-controllers from \(t\) to \(t+N_{T}\Delta t\) are collected by meta-controllers for cross-entropy calculation and goal selection as shown by equation (33).
### _Low-level Sub-controllers Cooperation_
We propose a correlated equilibrium-based cooperation strategy for low-level sub-controllers. Correlated equilibrium is proposed as a multi-agent collaboration strategy [35]. Compared with existing methods such as Nash equilibrium, the existence and convergence of correlated equilibrium can be better guaranteed by linear programming solutions. Indeed, cooperative action selections can bring higher stability than independent action selections. With correlated equilibrium, the SBS sub-controllers choose actions by:
\[\max_{pr(\vec{a}_{sub}|s_{sub})}\sum_{\vec{a}_{sub}\in A_{sub}}pr(\vec{a}_{sub}|s_{sub})Q(s_{sub},\vec{a}_{sub},w) \tag{34}\] \[\text{s.t.}\quad 0\leq pr(\vec{a}_{sub}|s_{sub})\leq 1, \tag{34a}\] \[\sum_{\vec{a}_{sub}\in A_{sub}}pr(\vec{a}_{sub}|s_{sub})=1, \tag{34b}\] \[\sum_{\vec{a}_{-b}\in A_{-b}}pr(\vec{a}_{sub}|s_{sub})\big(Q(s_{sub},\vec{a}_{sub},w)-Q(s_{sub},\vec{a}_{-b},a_{b},w)\big)\geq 0, \tag{34c}\]
where \(pr(\vec{a}_{sub}|s_{sub})\) is the probability of selecting action combination \(\vec{a}_{sub}=(a_{1},a_{2},...,a_{|\mathcal{B}_{-M}|})\) under state \(s_{sub}\), \(\vec{a}_{-b}\) indicates the joint action of all sub-controllers except \(b^{th}\) sub-controller, and \(w\) is the main network weight. \(\mathcal{B}_{-M}\) has been defined in equation (33) as the set of controllers except the MBS meta-controller, \(A_{-b}\) is the set of \(\vec{a}_{-b}\).
Equation (34) uses Q-values predicted by the main network to represent the expected reward. It aims to maximize the total expected reward of all sub-controllers by finding an optimal action selection probability distribution \(pr(\vec{a}_{sub}|s_{sub})\). Constraints (34a) and (34b) ensure that \(pr(\vec{a}_{sub}|s_{sub})\) is a valid probability distribution. Constraint (34c) indicates that, under the recommended distribution, no sub-controller \(b\) can expect a higher reward by unilaterally deviating from the recommended joint action \(\vec{a}_{sub}\), which is the defining property of a correlated equilibrium.
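For illustration, the sketch below computes a correlated-equilibrium distribution by linear programming in the spirit of (34): it maximizes the expected total Q-value subject to the standard correlated-equilibrium constraints (no sub-controller gains by unilaterally deviating from the recommended joint action). The two Q-tables are illustrative, and the constraint enumeration follows the standard formulation rather than the exact notation of (34c).

```python
# Sketch of the correlated-equilibrium selection behind Eq. (34), solved as a
# linear program with scipy. Q[b][a1, a2] are illustrative per-sub-controller
# Q-values for the joint action (a1, a2).
import itertools
import numpy as np
from scipy.optimize import linprog

Q = [np.array([[5.0, 2.0], [3.0, 4.0]]),      # sub-controller 1
     np.array([[4.0, 1.0], [2.0, 5.0]])]      # sub-controller 2
n_agents, n_actions = 2, 2
joint = list(itertools.product(range(n_actions), repeat=n_agents))
idx = {a: i for i, a in enumerate(joint)}

c = -np.array([sum(Q[b][a] for b in range(n_agents)) for a in joint])  # maximize welfare
A_ub, b_ub = [], []
for b in range(n_agents):
    for a_b in range(n_actions):              # recommended action of agent b
        for dev in range(n_actions):          # possible unilateral deviation
            if dev == a_b:
                continue
            row = np.zeros(len(joint))
            for a in joint:
                if a[b] != a_b:
                    continue
                a_dev = list(a); a_dev[b] = dev
                row[idx[a]] = Q[b][tuple(a_dev)] - Q[b][a]   # gain from deviating
            A_ub.append(row); b_ub.append(0.0)
A_eq, b_eq = [np.ones(len(joint))], [1.0]

res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
              A_eq=np.array(A_eq), b_eq=b_eq, bounds=(0, 1), method="highs")
for a, p in zip(joint, res.x):
    print(a, round(float(p), 3))
```

Note that the number of LP variables grows with the size of the joint action space, which is one practical reason to keep each SBS's power levels to a small discrete set.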
Finally, the proposed Co-HDRL algorithm is summarized in Algorithm 2.
### _Computational Complexity Analyses_
In this section, we analyze the computational complexity of the proposed algorithms. For the RIS control part, the complexity lies in solving equation (20), which is a semidefinite programming (SDP) problem. SDP can be efficiently solved by many methods such as the interior point method, first-order methods, and so on. Based on the model proposed in [36], the computational complexity is given by \(O(\sqrt{\rho}(c\rho^{2}+c^{l}+\rho^{l})\mathrm{log}(\frac{1}{\zeta}))\), where \(\rho\) is the variable size, \(c\) is the number of constraints, \(l\) is the cost of matrix multiplication, and \(\zeta\) is the relative accuracy. For the Co-HDRL part, the computational complexity depends on neural network training. In this paper, we deploy Long Short-Term Memory (LSTM) networks for Q-value prediction, which are a special type of recurrent neural network that can better capture the long-term dependency of input data. The computational complexity of updating LSTM networks is \(O(\alpha^{2}\chi^{2})\), where \(\alpha\) indicates the number of memory blocks, and \(\chi\) represents the number of memory cells per block [30].
## VI Baseline Algorithms
### _Hierarchical Deep Reinforcement Learning_
To better demonstrate the capability of our proposed Co-HDRL algorithm, we propose a conventional hierarchical deep reinforcement learning (HDRL) based method as a benchmark.
Fig. 4: Co-HDRL system update between meta-controller and sub-controllers.
In HDRL, the meta-controller and sub-controllers apply an \(\epsilon\)-greedy policy for goal and action selections, respectively, and there is neither the high-level cross-entropy policy nor low-level cooperation between agents. The HDRL baseline is given in Algorithm 3.
```
1:Initialize: Wireless network and Co-HDRL parameters
2:for every \(N_{T}\Delta t\), the MBS do
3: Updating the cross-entropy metric under \(g_{meta}\) based on the action selection history of the last \(N_{T}\Delta t\) time slots.
4: Calculating the average EE metric of the former \(N_{T}\Delta t\) time slots as the high-level reward \(r_{meta}\) by equation (6).
5: Updating new state \(s_{meta}\), and saving \((s_{meta},g_{meta},\)\(r_{meta},s^{\prime}_{meta})\) to the experience pool.
6: Sampling a mini-batch from the experience pool randomly. Generating target Q-values \(Q^{Tar}(s_{meta},g_{meta})\)= \(\left\{\begin{array}{cc}r_{meta}&if\ done\\ r_{meta}+\gamma Q(s^{\prime}_{meta},\arg\max\limits_{g}Q(s^{\prime}_{meta}, g,w),w^{\prime})&else\end{array}\right.\)
7: Updating \(w\) using gradient descent by minimizing the loss \(L(w)=Er\big{[}(Q^{Tar}(s_{meta},g_{meta})-Q(s_{meta},g_{meta},w))^{2}\big{]}\).
8: Copying \(w\) to \(w^{\prime}\) after several training steps.
9: Selecting a high-level goal \(g_{meta}\) of the next \(N_{T}\Delta t\) by following method:
10: with probability \(\epsilon\), selecting high-level goal \(g_{meta}\) by equation (33); otherwise, choosing goal \(g_{meta}\) by the greedy policy \(\arg\max\limits_{g_{meta}}Q(s_{meta},g_{meta},w)\).
11:endfor
12:for every \(\Delta t\), active SBSs do
13: Selecting the transmission power level \(a_{sub}\) by:
14: with probability \(\epsilon\), sub-controllers select \(a_{sub}\) randomly; otherwise, choosing joint action \(\vec{a}_{sub}\) by correlated equilibrium as equation (34).
15: Sending the transmission power to the RIS control **Algorithm 1**, then receiving the data transmission rate from **Algorithm 1**, and calculating the low-level reward.
16: Updating new state \(s_{sub}\), and saving \((s_{sub},a_{sub},\)\(r_{sub},s^{\prime}_{sub})\) to the experience pool. Each sub-controller samples a mini-batch from its experience pool, and trains the network similar to the meta-controller.
17:endfor
18:Output: Performance of the network and the learning algorithm.
```
**Algorithm 2** Co-HDRL based joint sleep and power control
### _Surrogate Optimization based RIS phase control_
This section introduces the surrogate optimization-based RIS phase-shift control as a baseline algorithm. In practice, the phase shift can only be selected from a set of fixed values, which mainly depends on the predefined resolution settings of phase shifters. Therefore, we formalize the RIS control problem to maximize the total transmission rate of all UEs by:
\[\max_{\mathcal{Z}}\quad f_{10}(\mathcal{Z})=\sum_{k\in\mathcal{K}}b_{k}\log\left(1+\frac{P_{b}|\sum\limits_{m\in\mathcal{M}}\boldsymbol{H}_{b,m}\boldsymbol{\Theta}_{m}\boldsymbol{G}^{\dagger}_{m,k}|^{2}}{\sum\limits_{b^{\prime}\in\mathcal{B}_{-b}}P_{b^{\prime}}|\sum\limits_{m^{\prime}\in\mathcal{M}}\boldsymbol{H}_{b^{\prime},m^{\prime}}\boldsymbol{\Theta}_{m^{\prime}}\boldsymbol{G}^{\dagger}_{m^{\prime},k}|^{2}+N_{0}^{2}}\right) \tag{35}\] \[\text{s.t.}\quad\theta^{{}^{\prime}}_{m,n}=z_{m,n}\frac{2\pi}{2^{\mu}}, \tag{35a}\] \[z_{m,n}\in\{0,1,\cdots,(2^{\mu}-1)\}, \tag{35b}\]
where \(\theta^{{}^{\prime}}_{m,n}\) has been defined in equation (2) as the RIS phase-shift angle. \(z_{m,n}\) belongs to the matrix \(\mathcal{Z}\) and decides the phase-shift angle \(\theta^{{}^{\prime}}_{m,n}\). Note that the decision variable \(z_{m,n}\) can only be selected from fixed integer values, which means equation (35) is a Mixed Integer Nonlinear Programming (MINLP) problem; therefore, a surrogate optimization method is applied to solve it.
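As a small illustration of constraints (35a)-(35b), the snippet below enumerates the discrete phase-shift angles available with a \(\mu\)-bit phase shifter; the values are purely illustrative.

```
import numpy as np

mu = 2                                       # phase-shifter resolution in bits
z_values = np.arange(2 ** mu)                # z_{m,n} in {0, 1, ..., 2^mu - 1}
theta = z_values * 2 * np.pi / (2 ** mu)     # theta'_{m,n} = z_{m,n} * 2*pi / 2^mu
print(np.degrees(theta))                     # [0. 90. 180. 270.] for 2-bit resolution
```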
Different from the FP-based method, the surrogate optimization treats the optimization as a black-box problem [37]. It yields a reasonable and accurate approximation of the original
optimization problem by a surrogate model, which avoids the complexity of designing a dedicated optimization solution. The proposed surrogate optimization-based RIS phase control is summarized in Algorithm 4. Note that there are many surrogate models and here we choose the radial basis function model for its low computation complexity.
```
1:Initialize: Wireless channel and surrogate optimization parameters, and BS transmission powers. Setting iteration number \(y=1\), and initializing the surrogate model.
2:repeat
3: Obtaining an estimated solution to equation (35) by optimizing the surrogate.
4: Evaluating the surrogate model at the estimated solution produced in the last step.
5: Updating the radial basis function-based surrogate model using the new high-fidelity model data.
6:\(y=y+1\)
7:until Problem (35) converges, or the iteration number \(y\) reaches the predefined maximum value.
8:Output: Maximum data transmission rate and optimal RIS phase shifts under given transmission power of BSs.
```
**Algorithm 4** Surrogate optimization based RIS phase control
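A compact Python sketch of this surrogate loop is given below; the black-box rate function, the candidate sampling strategy, and all parameter values are illustrative assumptions rather than the exact implementation of Algorithm 4.

```
import numpy as np
from scipy.interpolate import RBFInterpolator

def surrogate_optimize(rate_fn, n_vars, levels, n_init=15, n_iter=30, n_cand=200, seed=0):
    # Maximize a black-box rate function (standing in for evaluating (35)) over
    # integer phase-shift indices z in {0,...,levels-1}^n_vars with an RBF surrogate.
    rng = np.random.default_rng(seed)
    X = np.unique(rng.integers(0, levels, size=(n_init, n_vars)), axis=0).astype(float)
    y = np.array([rate_fn(x) for x in X])                 # initial high-fidelity samples
    for _ in range(n_iter):
        surrogate = RBFInterpolator(X, y)                 # (re)fit the RBF surrogate
        cand = rng.integers(0, levels, size=(n_cand, n_vars)).astype(float)
        best = cand[np.argmax(surrogate(cand))]           # optimize over the surrogate
        if any(np.array_equal(best, x) for x in X):       # skip already-evaluated points
            continue
        X = np.vstack([X, best])
        y = np.append(y, rate_fn(best))                   # evaluate the true objective
    i = int(np.argmax(y))
    return X[i], y[i]

# Toy black-box: the rate peaks at some unknown phase-index configuration.
optimum = np.array([3, 1, 2, 0, 3, 2])
rate = lambda z: -np.sum((np.asarray(z) - optimum) ** 2)
z_best, val = surrogate_optimize(rate, n_vars=6, levels=4)
print(z_best, val)
```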
## VII Performance Evaluation
### _Simulation Settings_
We consider a HetNet that includes 1 MBS and 3 SBSs. The radii of the MBS and SBS are 400 m and 100 m, respectively. The carrier frequency is 4 GHz for both the MBS and SBSs [38]. The load-dependent power consumption factors are 4.7 and 2.6 for the MBS and SBS, respectively [29]. The whole cell includes 20 UEs that are randomly distributed. 6 RISs are uniformly distributed, and each contains 10 elements with 2-bit resolution. For learning settings, we apply 2 LSTM networks as hidden layers for each controller. The hyperparameters such as the neural network learning rate and training frequency are selected by the grid search method, which means we try different parameter combinations and find the best results accordingly. The simulations are repeated for 10 runs in MATLAB with 95% confidence intervals, and other
Fig. 5: Performance comparison of sleep control and RIS.
settings are summarized in Table I.
### _Performance Comparison under Sleep Control and RIS_
This section investigates the performance under sleep control and RIS techniques. The proposed FP and Co-HDRL methods are deployed for RIS control and sleep control, respectively. As shown in Fig.5, four cases are considered by combining sleep control with RIS control.
Fig. 5(a) shows that sleep control can significantly save power consumption by switching off SBSs intelligently. Combining sleep control with RIS can achieve the lowest energy consumption. Moreover, Fig. 5(b) demonstrates that RIS can greatly increase the average throughput, which can be explained by RIS' capability to reshape the signal transmission path. Consequently, as shown by Fig. 5(c), sleep control + RIS produces the highest energy efficiency, no sleep control + no RIS shows the lowest energy efficiency, and the other two combinations present a comparable performance.
To better explain how RIS can help reduce energy consumption, we compare the daily SBS on/off status of RIS and no-RIS condition in Fig.5(d). The vertical axis is the probability of turning on SBSs, which indicates the power consumption level during specific time slots. The blue shade represents the preset traffic load pattern. Fig.5(d) shows that most SBSs are switched off during the low-peak period (from 3:00 to 7:00) in both RIS and no-RIS cases. However, when the traffic load increases to mid-peak and on-peak periods, no-RIS case has to turn on most SBSs to satisfy the increasing traffic demand; otherwise, the overloading may lead to a high penalty. By contrast, using RIS can greatly increase the channel capacity, which means the MBS can still switch off most SBSs to save energy without causing overloading.
In summary, the simulations in Fig.5 demonstrate that combining sleep control and RIS can greatly save energy consumption, increase throughput and achieve higher energy efficiency.
### _RIS Control Strategy Analyses_
Next, we compare the proposed FP-based RIS control method with the surrogate optimization strategy.
Firstly, Fig. 6(a) shows the convergence performance of surrogate optimization in one episode. One can observe that surrogate optimization relies on random and adaptive samples to find better solutions. The surrogate model is constantly updated in each iteration to provide a more accurate estimation of the given problem. By contrast, in Fig. 6(b), our proposed FP method reaches a much faster convergence. This faster convergence can be explained by the dedicated designed FP optimization model for RIS control. In the proposed FP, the alternative optimization method guarantees that the next iteration can always perform better than the former iterations.
Similarly, we consider the combination of sleep control with different RIS control methods, and the power consumption, throughput and EE results are given in Fig. 6(c), (d) and (e), respectively. Combining FP-based RIS control with sleep control can bring lower power consumption, higher throughput and EE than other cases. The proposed FP method can achieve 35% lower power consumption and 16% higher EE when
Fig. 6: Performance comparison of difference RIS control strategies.
the peak traffic load is 8 Mbps. Moreover, the SBS on/off status is shown in Fig. 6(f). It demonstrates that FP-based RIS control can maintain low power consumption even in peak-load periods (from 17:00 to 23:00). On the contrary, surrogate-based RIS control has to turn on SBSs to adapt to the increasing transmission demand to prevent the overloading penalty.
### _Machine Learning Algorithm Comparison_
In this section, we compare the proposed Co-HDRL algorithm with conventional HDRL, which aims to prove that the cooperation strategy can bring better performance. The EE and reward performance of 200 episodes are given in Fig.7(a) and (b), in which the proposed Co-HDRL algorithm achieves a higher EE and reward than the HDRL baseline. In Co-HDRL, we apply the cross-entropy-based policy for meta-controller to monitor the stability of sub-controllers, and the correlated equilibrium is deployed for joint action selection of sub-controllers. These cooperation methods enable higher stability for sub-controller action selection and training, which brings better performance than conventional HDRL. In Fig.7(c), we further present the stationarity of various low-level action selection strategies. The proposed multi-agent correlated equilibrium enables a more stable action selection than independent action selection that is applied in HDRL, which is indicated by a lower stationarity metric. Meanwhile, Fig.7(c) demonstrates that correlated equilibrium can reach a stable state much faster than the independent action selection method.
Meanwhile, the satisfying performance of Co-HDRL can also be observed in Fig.7(d), in which the Co-HDRL switches on/off SBSs dynamically during on-peak and off-peak periods. By contrast, HDRL produces the same sleep control decision regardless of different traffic load conditions (traffic patterns shown by the light blue shade). Moreover, the Co-HDRL performance from 4 to 10 Mbps is presented in Fig.7(e), in which the proposed Co-HDRL makes decisions intelligently under various traffic load levels. For example, when the traffic load level reaches 10 Mbps, the meta-controller turns on most SBSs to serve the high transmission demand. On the other hand, when the traffic load is 4 Mbps, most SBSs are switched off to reduce power consumption.
We show the average data rate under various RIS element numbers and phase shift resolutions in Fig.7(f). It demonstrates that increasing the number of RIS elements can improve the system data rate. Meanwhile, higher phase shift resolutions can achieve a higher data rate, because the phase can be more accurately shifted.
### _Performance under various cell sizes and SBS numbers._
Finally, this subsection investigates the algorithm performance with different cell sizes and numbers of SBSs. Specifically, Fig. 8(a) shows that increasing cell size will lower EE and increase power consumption, which can be explained by lower SINR due to a larger cell radius. Meanwhile, the EE in Fig. 8(b) increases first and then decreases. In the increasing phase, the MBS has enough channel capacity to support the UEs that are previously associated with the SBSs in sleep mode. However, when the number of SBSs keeps increasing,
Fig. 7: Performance analyses of Co-HDRL and HDRL.
the probability of turning on SBSs increases as illustrated in Fig. 8(b). The main reason is that MBS cannot serve all the UEs when SBSs enter sleep mode, and some SBSs must keep active to fulfill the traffic demand. Consequently, the energy efficiency becomes lower with the increasing number of SBSs.
## VIII Conclusion
Machine learning is a promising technique for network management in 5G and future 6G networks. In this work, we propose a cooperative hierarchical deep reinforcement learning method for RIS-assisted energy efficient RAN, including a cross-entropy enabled meta-controller policy and cooperative action selections for sub-controllers. Besides, a fractional programming-based RIS control strategy is introduced for phase shift control. The proposed architecture enables joint sleep, power, and RIS control with different timescales, increasing the flexibility for RAN management. This is particularly important for 6G and O-RAN where intelligence will be more abundant and control will span several time scales. The proposed method is compared with conventional hierarchical deep reinforcement learning and surrogate optimization-based RIS control, and the simulations show that the proposed algorithms achieve better energy efficiency. In the future, we will focus more on the RIS control methods, and investigate how the RIS location affects the network performance. In addition, we will investigate the effectiveness of the proposed method through experiments and empirical studies.
## Appendix A Proof of Proposition 1
In equation (9), when \(\psi_{k}\) is fixed, note that \(f_{2}\) is a differentiable concave function for \(\mathbf{\beta}\) with \(\frac{\partial^{2}f_{2}}{\partial\mathbf{\beta}^{2}}<0\). Then, we have
\[\frac{\partial f_{2}(\mathbf{\Theta}_{m},\mathbf{\beta})}{\partial\beta_ {k}} =\frac{\partial\bigg{(}b_{k}\log(1+\beta_{k})-b_{k}\beta_{k}+\frac {b_{k}(1+\beta_{k})\psi_{k}}{1+\psi_{k}}\bigg{)}}{\partial\beta_{k}}\] \[=\frac{b_{k}}{1+\beta_{k}}-b_{k}+\frac{b_{k}\psi_{k}}{1+\psi_{k}}\] \[=b_{k}(\frac{\psi_{k}}{1+\psi_{k}}-\frac{\beta_{k}}{1+\beta_{k}})\] \[=b_{k}\frac{\psi_{k}-\beta_{k}}{(1+\psi_{k})(1+\beta_{k})}\]
Setting \(\frac{\partial f_{2}}{\partial\beta_{k}}=0\), we obviously have \(\beta_{k}^{*}=\psi_{k}\), where \(\beta_{k}^{*}\) is the optimal \(\beta_{k}\). Substituting \(\beta_{k}^{*}\) back into equation (9) will recover (8). As such, the optimal values of these two equations are equivalent.
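As a quick numerical sanity check of this stationary point, the snippet below evaluates \(f_{2}\) on a grid for illustrative values of \(b_{k}\) and \(\psi_{k}\) (not taken from the paper) and confirms the maximum at \(\beta_{k}^{*}=\psi_{k}\).

```
import numpy as np

b_k, psi_k = 1.0, 2.0                         # illustrative values only
f2 = lambda beta: b_k*np.log(1 + beta) - b_k*beta + b_k*(1 + beta)*psi_k/(1 + psi_k)
betas = np.linspace(0.01, 10, 100000)
print(betas[np.argmax(f2(betas))])            # ~2.0, i.e. beta_k* = psi_k
print(np.isclose(f2(psi_k), f2(betas).max(), atol=1e-6))   # True
```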
## Appendix B Proof of Proposition 2
In equation (12), for ease of notation, we use \(A_{k}\) to represent \(\sqrt{b_{k}(1+\beta_{k})}(\mathbf{\Theta}\mathbf{V}_{b,k})\), and \(B_{k}\) to represent \(\sum\limits_{b^{\prime}\in\mathcal{B}_{-b}}|\mathbf{\Theta}\mathbf{V}_{b^{\prime},k}|^{2}+N_{0}^{2}\). Then, we can rewrite equation (13) as \(2\text{Re}\{\eta_{k}^{\dagger}A_{k}\}-\eta_{k}^{\dagger}B_{k}\eta_{k}\), which can be reformulated as \(\eta_{k}^{\dagger}A_{k}+A_{k}^{\dagger}\eta_{k}-\eta_{k}^{\dagger}B_{k}\eta_{k}\). By completing the square, we rewrite it as \(A_{k}^{\dagger}B_{k}^{-1}A_{k}-(\eta_{k}-B_{k}^{-1}A_{k})^{\dagger}B_{k}(\eta_{k}-B_{k}^{-1}A_{k})\). It is obvious that the optimal value is attained at \(\eta_{k}^{*}=B_{k}^{-1}A_{k}\), which means
\[\eta_{k}^{*}=\frac{\sqrt{b_{k}(1+\beta_{k})}\mathbf{\Theta}\mathbf{V}_{b,k}}{\sum\limits_{b^{\prime}\in\mathcal{B}_{-b}}|\mathbf{\Theta}\mathbf{V}_{b^{\prime},k}|^{2}+N_{0}^{2}}.\]
Then substituting \(\eta_{k}^{*}\) back into equation (13) gives the optimal value \(\sum_{k\in\mathcal{K}}\frac{b_{k}(1+\beta_{k})|\mathbf{\Theta}\mathbf{V}_{b,k}|^{2}}{\sum\limits_{b^{\prime}\in\mathcal{B}_{-b}}|\mathbf{\Theta}\mathbf{V}_{b^{\prime},k}|^{2}+N_{0}^{2}}\), which is exactly the objective of equation (12). As such, the equivalence is demonstrated.
## Acknowledgement
We would like to thank Dr. Long Kong for his useful discussions when starting this work.
|
2306.02552 | User Behavior Simulation with Large Language Model based Agents | Simulating high quality user behavior data has always been a fundamental
problem in human-centered applications, where the major difficulty originates
from the intricate mechanism of human decision process. Recently, substantial
evidences have suggested that by learning huge amounts of web knowledge, large
language models (LLMs) can achieve human-like intelligence. We believe these
models can provide significant opportunities to more believable user behavior
simulation. To inspire such direction, we propose an LLM-based agent framework
and design a sandbox environment to simulate real user behaviors. Based on
extensive experiments, we find that the simulated behaviors of our method are
very close to the ones of real humans. Concerning potential applications, we
simulate and study two social phenomenons including (1) information cocoons and
(2) user conformity behaviors. This research provides novel simulation
paradigms for human-centered applications. | Lei Wang, Jingsen Zhang, Hao Yang, Zhiyuan Chen, Jiakai Tang, Zeyu Zhang, Xu Chen, Yankai Lin, Ruihua Song, Wayne Xin Zhao, Jun Xu, Zhicheng Dou, Jun Wang, Ji-Rong Wen | 2023-06-05T02:58:35Z | http://arxiv.org/abs/2306.02552v3 | # RecAgent: A Novel Simulation Paradigm for Recommender Systems
###### Abstract
Recommender system has deeply revolutionized people's daily life and production, bringing a large amount of business value. In the recommendation domain, simulation and real data-based studies are two typical research paradigms, with each having different advantages. Previously, real data-based studies occupied more important positions, since accurately simulating the user preference is quite difficult. Recently, large language models (LLM) have shown great potential to achieve human-like intelligence, which provides new opportunities to overcome the shortcomings of simulation-based studies and thus highlights their advantages, such as many more application scenarios and cheaper data acquisition strategies. To shed light on this direction, in this paper, we introduce an LLM-based recommender simulator called RecAgent. Our simulator is composed of two modules: (1) the user module and (2) the recommender module. The user module can browse the recommendation website, communicate with other users and broadcast messages on the social media. The recommender module is designed to provide search or recommendation lists to the users, and one can design different models to implement the recommender. All the users take actions based on LLMs, and can freely evolve like in the real world. We present several case studies to demonstrate that the users in our simulator can indeed behave in a reasonable manner as expected. Our project has been released at [https://github.com/RUC-GSAI/YuLan-Rec](https://github.com/RUC-GSAI/YuLan-Rec).
## 1 Introduction
As an active research field, recommender system exemplifies a dynamic intersection of academia and industry, contributing a wealth of insights and innovations [25; 11; 10; 26; 6; 5; 26]. The bedrock of these systems is an accurate understanding of user preferences [25; 26; 32; 5; 28; 10; 11], playing a pivotal role in enhancing the design and evaluation of system strategies.
Traditional works in the recommender system field generally adhere to real data-based studies [10; 11; 26; 31; 6], grounded in accumulating user behavioral data through either interaction with online environments or enlistment of annotators. However, this paradigm confronts two salient challenges. Firstly, this paradigm proves resource-intensive and lacks sustainability. This limitation restricts its utility to a narrow band of fixed problems, thereby impeding swift adaptation to burgeoning new problems in Web 2.0 (_e.g._, RL-based recommendation and explainable recommendation [7; 27; 34; 2; 8]). Second, the richness of user interaction data in real-world contexts can be difficult to capture comprehensively. For instance, a user might choose to watch a film based on a friend's casual mention in conversation, an influencing factor that is difficult to capture through the lens of a recommender system. These hurdles significantly impact the trajectory of our in-depth investigation into recommender systems.
Simulation-based studies serve as a crucial counterpoint to research primarily reliant on real-world data-based studies. These simulation-based approaches [35; 33; 12; 20; 1] are often more cost-effective and adaptable to various application contexts. For instance, in the field of reinforcement learning (RL), given the substantial costs associated with agent-environment interactions in real-world scenarios, numerous works are assessed leveraging environmental simulators (_e.g._, gym and MuJoCo) [35; 16; 24; 3]. This evidences the value of simulation-based studies in advancing our understanding and development of complex systems. Within the realm of recommender systems, the utility of simulation-based studies [13; 23; 15; 17; 2; 8] can potentially be contentious. Unlike other domains where simulators can be intuitively constructed with relative ease (_e.g._, simulating car moving process based on objective physical laws), simulating user subjective preferences in the domain of recommendation presents a far greater complexity. This highlights the unique challenges inherent to the recommendation field, where understanding and modeling user behavior can be significantly more intricate.
Recently, large language models (LLM) [30; 29; 9; 4; 18; 21] have shown great potential to achieve human-like intelligence. Given this advancement, we posit that it is opportune to shift our focus toward simulation-based studies in the field of recommender systems. LLMs could potentially offer a more profound comprehension of user preferences, thereby enabling the simulation of subjective information, a previously challenging endeavor. This underscores the potentially transformative impact of LLMs in the recommender system field. In this paper, we open the direction of LLM-based recommendation simulation studies. Specifically, we build a recommender simulator (see Figure 1), which is composed of two components. The first one is a user module, which can visit the recommendation website, communicate with the other users and send messages on social media. The second one is a recommender module, which is designed to return the search or recommendation lists to the users. All the users behave and produce their thoughts based on LLMs, and can freely evolve in the simulator. In the following, we first introduce the detailed architecture of our simulator and then present several case studies on the simulated user behaviors. Finally, we introduce many potential opportunities brought by our simulator in the recommendation domain.
## 2 RecAgent
There are two major components in our simulator, that is, the user module and the recommender module. The user module can (1) browse the recommendation website, (2) chat with the other users or (3) broadcast messages on the social media. The recommender module is responsible for generating the search or recommendation results for the users. We summarize the interactions between the user and recommender and different users in Figure 2. In the following, we first introduce the basic environment setup of our simulator, and then detail the user and recommender modules separately.
### Environment Setup
The core of our simulator includes a server implemented based on LangChain1 and a front-end realized based on Gradio2. The virtual world in the simulator evolves in a round by round manner. In every round, each user takes an action, and the server organizes the action information, and sends it
Figure 1: Visual interface of RecAgent
to the front-end, which is responsible for rendering the action in the visual interface. For example, if a user decides to chat with her friend, then the server delivers this information to the front-end, which highlights the user icon and presents a label indicating that she is chatting. The front-end includes the user avatars, the simulator logs and the relations between different users. We also provide start, stop, and reset buttons to control the running process of our simulator. The complete interface of the front-end is presented in Figure 1.
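The following is a minimal Python sketch of this round-by-round loop; the class and method names are illustrative assumptions rather than RecAgent's actual code, and the LLM query for the next action is replaced by a random stub.

```
import random
from dataclasses import dataclass, field

@dataclass
class User:
    name: str
    memory: list = field(default_factory=list)

    def take_action(self):
        # Stand-in for the LLM query described in the next subsection; here the
        # three top-level actions are simply sampled at random.
        return random.choice(["RECOMMENDER", "SOCIAL", "NOTHING"])

def run_simulation(users, n_rounds):
    log = []
    for t in range(n_rounds):                  # the virtual world evolves round by round
        for user in users:
            action = user.take_action()        # one action per user per round
            user.memory.append((t, action))    # update the user's memory bank
            log.append(f"round {t}: {user.name} -> {action}")  # sent to the front-end
    return log

print("\n".join(run_simulation([User("Tommie"), User("Eve")], n_rounds=2)))
```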
### The User Module
Intuitively, the user behaviors in a real-world recommender system are influenced by both internal and external factors. Examples of the internal factors include the user's temperament, habits, and gender, which can be explicitly described as a prompt to query the LLM. For the external factors, we mainly consider the influence of the other users. To simulate the user influences, we design two scenarios, that is, one-to-one chatting, and one-to-many broadcasting on social media.
Following the above considerations, we allow each user in our simulator to take three types of actions: (1) entering the recommender system, (2) entering the social network (for one-to-one chatting or one-to-many broadcasting) and (3) do nothing. Each user maintains a memory bank, which can be updated by all the above actions or the reflections summarized by the user herself. At the beginning of each round, the user can choose to do something or nothing using the following prompt3:
Footnote 3: It should be noted that, for saving the space, we do not present the basic information of the user (_e.g._, the user memory) in the prompt
<user> must take one of the actions below: (1) Enter the recommender system. If so, <user> will be recommended some movies, from which <user> can watch some movies, or search for movies by himself. (2) Enter the Social Network website. <user> can chat privately with friends or publish a post to all people <user> knows. (3) Do Nothing. What action would <user> like to take? Respond in one line. If <user> want to enter the Recommender website, write: [RECOMMENDER]:: <user> enter the Recommender website. If <user> want to enter the Social Network website, write: [SOCIAL]:: <user> enter the Social Network website. If <user> want to do nothing, write: [NOTHING]:: <user> does nothing.
#### 2.2.1 Entering the recommender system
Intuitively, a user may visit a website by two different manners: (1) the user has specific goals, and directly search the website for his desired information, and (2) the user does not have explicit
Figure 2: Overall framework of RecAgent. We use different colors to represent different roles in our simulator. The positions where we need to query LLMs are highlighted by the labels of ChatGPT
purposes, and aimlessly browses the website, hoping to obtain the information of interest by the recommendations from the website. Inspired by such intuitions, we allow the users to interact with the recommender system by both searching items and receiving recommendations.
In specific, once a user enters the recommender system, she will be firstly presented with an initial homepage recommendation. Then there are four following actions that the user can take, including: (1) searching movies, (2) watching movies, (3) going to the next page and (4) leaving the system. We ask the user which action she would like to take based on the following prompt:
<user> must take one of the four actions below: (1) Watch some movies in the item list returned by the recommender system. (2) See the next page. (3) Search items. (4) Leave the recommender system. If <user> has recently heard about a particular movie on a social networking website, <user> might want to search for that movie on the recommender system. What action would <user> like to take? Respond in one line. If <user> wants to watch movies on the recommender system, write: [WATCH]:: movie names in the item list returned by the recommendation system, only movie names, separated by semicolons. If <user> wants to see the next page, write: [NEXT]:: <user> see the next page. If <user> wants to search for a specific item, write: [SEARCH]:: single, specific item name <user> wants to search. If <user> wants to leave the recommender system, write: [LEAVE]:: <user> leaves the recommender system.
In the following, we present more details on the above four actions in the recommender system.
**Searching movies:** This action corresponds to the scenario where the user has specific goals. The user will search the system for the movies that she would like to watch. We use the following prompt to obtain the query of the user:
<user> is searching in recommender system. If you are going to pretend to be <user>, what movies would you be interested in and search for in the system?
According to the user query, the recommender system will return the search results based on the similarities between the queried movies and candidates in the database.
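A minimal sketch of such similarity-based retrieval is shown below; the token-overlap matching function is an assumption made for illustration, as the actual system may rely on embeddings or other similarity measures.

```
import re

def tokens(s):
    return set(re.findall(r"[a-z0-9]+", s.lower()))

def search(query, candidates, k=5):
    # Rank candidate titles by Jaccard similarity of their tokens with the query.
    sim = lambda c: len(tokens(query) & tokens(c)) / len(tokens(query) | tokens(c))
    return sorted(candidates, key=sim, reverse=True)[:k]

movies = ["Lion King, The (1994)", "Jungle Book, The (1994)", "Jungle Book, The (1967)",
          "Cheetah (1989)", "Tarzan (1999)", "Fatal Attraction (1987)"]
print(search("The Lion King", movies, k=3))
```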
**Watching movies:** This action is triggered once the user has found the movies of interests (either in the context of searching or recommendation). Intuitively, a real user may produce much feelings after watching a movie, which will be stored in her memory and influence her future cognition and behaviors. In our simulator, we also query the user feelings on the watched movies, and leverage such information to update the user memory bank, where we use the following prompt to accomplish these operations:
<user> has just finished watching items. <user> has not seen these movies before. If you are going to pretend to be <user>, how will you feel about each movie after watching it?
**Going to the next page:** In our simulator, the recommender system returns a search or recommendation result including five movies each time. If the user is not satisfied with any movie in the current results, she may decide to ask for the next page of results. If this action is taken in the context of searching, then the results are produced based on the movie similarities. Otherwise, the results
are generated based on the recommendation algorithms embedded in the recommender module. It should be noted that one can flexibly design different recommendation algorithms in our simulator to evaluate their influence on the user behaviors.
**Leaving the system:** To simulate more reliable user behaviors, we allow the user to exit the recommender system at any step, for example, after the user watches a movie or the user does not find any movie of interest.
#### 2.2.2 Chatting with the other users and broadcasting messages on the social media
In the real world, the behaviors of the users in a recommender system may be not only determined by the factors inside the system. There can be also many external factors, among which, we believe, the information dissemination between different users is an important one, for example, a user may watch a movie because of her friends' recommendation or the movie is widely discussed on the social media. In reality, people may disseminate information in different ways, and we abstract them into two types in our simulator: (1) one-to-one chatting and (2) one-to-many broadcasting.
More specifically, at each round of the simulator, if the user decide to enter the social network, then we determine whether she would like to chat or broadcast based on the following prompt:
<user> must take one of the two actions below: (1) Chat with one acquaintance about movies recently watched on the recommender system, or movies heard about on a social networking website. (2) Publish a post to all acquaintances about movies recently watched on the recommender system, or heard about on a social networking website. What action would <user> like to take? Respond in one line. If <user> wants to chat with an acquaintance, write: [CHAT]::acquaintance's name;what to say. If <user> wants to publish a post to all acquaintances, write: [POST]::what to post.
For one-to-one chatting, a straightforward method is recurrently adding the previous dialog history into the prompt to query the next response. However, this method is quite inefficient, since one needs to frequently access the LLMs. To overcome this shortcoming, we first initialize the chatting background, and let the LLM generate the complete dialog at one time, which can greatly enhance the simulation efficiency.
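The following sketch illustrates this one-shot dialog generation; the prompt wording, the `llm` callable, and the output format it parses are illustrative assumptions, with a stub standing in for the real LLM call.

```
def generate_chat(llm, user_a, user_b, background):
    # Build a single prompt from the chatting background and parse the whole
    # dialogue from one completion, instead of one LLM call per turn.
    prompt = (f"{user_a} and {user_b} are chatting about movies.\n"
              f"Background: {background}\n"
              f"Write the complete dialogue, one turn per line, as 'Name: utterance'.")
    raw = llm(prompt)                                      # a single LLM call
    turns = []
    for line in raw.strip().splitlines():
        name, _, utterance = line.partition(":")
        turns.append((name.strip(), utterance.strip()))
    return turns

fake_llm = lambda p: "Eve: Any good mysteries lately?\nTommie: Try <A Chef in Love>."
print(generate_chat(fake_llm, "Eve", "Tommie", "Tommie just watched <A Chef in Love>."))
```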
For one-to-many broadcasting, we obtain the message that the user would like to post based on the following prompt:
<user> wants to publish a post to all acquaintances. The post should be related to recently watched movies on the recommender system. If you are going to pretend to be <user>, what will you post? Respond in one line.
Once the user has posted a message, all her friends on the social media can receive this message, and may chat with her, which launches another one-to-one chatting thread.
### The Recommender Module
In our simulator, the recommender module is implemented with the real search and recommendation algorithms, which aims to respond to the user queries and generate recommendation lists for the users. The recommender module is highly flexible, where one can easily change the search/recommendation strategies, and initialize the candidate items with different public available or simulated datasets.
While our current recommender module is designed for general recommendation, it can be easily extended to other scenarios by slightly changing the existing settings. For example, one can simulate conversational recommendation by allowing chats between the users and the system.
After simulating the user behaviors, all the information in the recommender module can be exported for offline studies, and one can also import offline datasets into the recommender module for user behavior simulation.
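A minimal sketch of such a pluggable recommender module is given below; the class and method names, the default ranking rule, and the logging format are illustrative assumptions rather than RecAgent's actual interface.

```
class RecommenderModule:
    def __init__(self, items, page_size=5, rank_fn=None):
        self.items = list(items)                   # candidate items (e.g. a MovieLens dump)
        self.page_size = page_size
        # Any recommendation algorithm can be plugged in here; alphabetical
        # ranking is only a placeholder.
        self.rank_fn = rank_fn or (lambda user, items: sorted(items))
        self.interaction_log = []                  # exportable for offline studies

    def recommend(self, user, page=0):
        ranked = self.rank_fn(user, self.items)
        start = page * self.page_size              # "next page" simply advances the offset
        result = ranked[start:start + self.page_size]
        self.interaction_log.append((user, "recommend", page, result))
        return result

rec = RecommenderModule(["Coma (1978)", "Kolya (1996)", "Wings (1927)", "Tarzan (1999)",
                         "Bandits (1997)", "Montana (1998)"], page_size=3)
print(rec.recommend("Sophia", page=0))
print(rec.recommend("Sophia", page=1))
```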
## 3 Case Studies
In this section, we show some examples of the user behaviors in our simulator.
**A user browses the recommendation website, watches several movies and generates feelings.**
\(\bullet\) Sophia enters the Recommender website. Sophia is going to recommender system website.
\(\bullet\) Sophia is recommended ['<Naked Gun 33 1/3: The Final Insult (1994)>', "<All the Rage (a.k.a. It's the Rage) (1999)>", '<Police Academy 3: Back in Training (1986)>', '<Dead Zone, The (1983)>', '<Bandits (1997)>'].
\(\bullet\) Sophia is looking next page.
\(\bullet\) Sophia is recommended ['<It Happened One Night (1934)>', '<Hard 8 (a.k.a. Sydney, a.k.a. Hard Eight) (1996)>', '<Third Miracle, The (1999)>', '<Tie Me Up! Tie Me Down! (1990)>', '<Sunset Park (1996)>'].
\(\bullet\) Sophia is looking next page.
\(\bullet\) Sophia is recommended ['<Little Princess, The (1939)>', '<Love Stinks (1999)>', '<Once Upon a Time... When We Were Colored (1995)>', '<Frank and Ollie (1995)>', '<Corruptor, The (1999)>'].
\(\bullet\) Sophia is looking next page.
\(\bullet\) Sophia is recommended ['<Thing From Another World, The (1951)>', '<Chamber, The (1996)>', '<Coma (1978)>', '<How to Make an American Quilt (1995)>', '<One Little Indian (1973)>'].
\(\bullet\) Sophia watched <Thing From Another World, The (1951)>; <Coma (1978)>
\(\bullet\) Sophia feels: <Thing From Another World, The (1951)>: I found it to be an intriguing and suspenseful sci-fi classic with great performances and direction. <Coma (1978)>: It was a thrilling and thought-provoking medical drama that kept me on the edge of my seat.
**A user browses the next page of the recommendation system.**
\(\bullet\) Olivia enters the Recommender website.
\(\bullet\) Olivia is going to recommender system website.
\(\bullet\) Olivia is recommended ['<Fantastic Voyage (1966)>', '<Straight Story, The (1999)>', '<Pajama Game, The (1957)>', '<Money Talks (1997)>', '<Wings (1927)>'].
\(\bullet\) Olivia is looking next page.
\(\bullet\) Olivia is recommended ['<Calendar Girl (1993)>', '<Kolya (1996)>', '<Fatal Attraction (1987)>', '<Graveyard Shift (1990)>', '<Montana (1998)>'].
\(\bullet\) Olivia watched <Calendar Girl (1993)>; <Kolya (1996)>; <Fatal Attraction (1987)>; <Graveyard Shift (1990)>
**A user searches a specific movie in the recommendation system.**
\(\bullet\) Olivia is recommended ['<Surf Nazis Must Die (1987)>', '<Shanghai Surprise (1986)>', '<My Son the Fanatic (1998)>', '<Avengers, The (1998)>', '<SubUrbia (1997)>'].
\(\bullet\) Olivia is searching "The Lion King" in the recommender system..
\(\bullet\) Olivia is recommended ['Lion King, The (1994)', 'Jungle Book, The (1994)', 'Jungle Book, The (1967)', 'Cheetah (1989)', 'Tarzan (1999)'].
\(\bullet\) Olivia watched <Lion King, The (1994)>;<Jungle Book, The (1994)>;<Jungle Book, The (1967)>
**A user chats with the other ones on her recently watched movies.**
* Tommie watched <A Chef in Love (1996)>; <Killing, The (1956)>; <Mary Shelley's Frankenstein (1994)>; <Repulsion (1965)>; <Mariachi, El (1992)>
* Tommie feels:<A Chef in Love (1996)>: Impressed by the beautiful cinematography and the unique love story set in the culinary world.
\(\bullet\) Eve enters the Social Network website
\(\bullet\) Eve is going to social media.
\(\bullet\) Eve is chatting with Tommie. Said: Hi Tommie, have you watched any good romantic or mystery movies lately? I'm always on the lookout for recommendations.
\(\bullet\) Tommie said "Actually, I recently watched a really unique movie called <A Chef in Love>. It's not a typical romantic movie, but it's definitely worth checking out if you're looking for something different. And as for mystery, I haven't watched anything in that genre recently, but I'll keep an eye out for you, Eve."
\(\bullet\) Eve said "Thanks for the recommendation, Tommie! I'll definitely check out <A Chef in Love>. Let me know if you come across any good mystery movies in the future. It was nice chatting with you!"
**A user broadcasts messages to share her feelings on the recently watched movies.**
\(\bullet\) Eve enters the Social Network website
\(\bullet\) Eve is going to social media.
\(\bullet\) Eve is posting.
\(\bullet\) Eve posted: "Just watched <Gone Girl> and <A Walk to Remember> and they were both amazing! Any other romance or mystery recommendations for my next movie night? #movienight #romance #mystery #drama"
\(\bullet\) ['Tommie', 'Alice', 'Michael', 'Jake', 'Olivia', 'Lei'] get this post.
**A user responds to the post of the other users.**
\(\bullet\) Tommie enters the Social Network website.
\(\bullet\) Tommie is going to social media.
\(\bullet\) Tommie is posting.
\(\bullet\) Tommie posted: "I'm feeling anxious today, but watching a good comedy always helps me feel better! Any recommendations?"
\(\bullet\) Eve enters the Social Network website
\(\bullet\) Eve is going to social media.
\(\bullet\) Eve is chatting with Tommie. Said: Hi Tommie, I saw your post about feeling anxious. Have you seen the movie "Crazy, Stupid, Love"? It always makes me laugh and feel better when I'm feeling down.
\(\bullet\) Tommie said "Thanks for the recommendation! I'll definitely check it out. Do you have any other favorite comedies?"
\(\bullet\) Eve said "Sure, I have a few more favorites. Have you seen 'The Hangover' or 'Bridesmaids'? They're both hilarious. Let me know what you think of 'Crazy, Stupid, Love' after you watch it. Take care!"
\(\bullet\) Tommie said "Thanks, I'll add those to my list! And I'll definitely let you know what I think of 'Crazy, Stupid, Love'. Thanks for the chat, Eve. Have a great day!"
\(\bullet\) Eve said "You're welcome, Tommie! Glad I could help. Have a great day tool!"
**A user searches movies recommended from the other users.**
\(\bullet\) Tommie said: "Actually, Eve, I have a question. Do you have any recommendations for movies with great design? I really appreciate good visual aesthetics in movies."
\(\bullet\) Eve said: "Of course, Tommie! I love movies with great design too. One movie that comes to mind is <Blade Runner 2049>. The visuals are stunning and really transport you into the world of the movie. Another movie that has great design is
<The Grand Budapest Hotel>, which has a whimsical and colorful aesthetic. Have you seen either of those?"
\(\bullet\) Tommie enters the Recommender website.
\(\bullet\) Tommie is going to recommender system website.
\(\bullet\) Tommie is recommended ['<Last of the Mohicans, The (1992)>', '<Rocky IV (1985)>', '<Story of Us, The (1999)>', '<Verdict, The (1982)>', '<Chopping Mall (a.k.a. Killbolts) (1986)>'].
\(\bullet\) Tommie is searching: I would be interested in searching for <The Grand Budapest Hotel>.
## 4 Potential Opportunities Brought by RecAgent
In this section, we discuss the potential utilization of RecAgent across several applications.
**Cold Start Recommendation**. In the recommendation domain, the cold start problem has long troubled researchers [14; 22]. We believe that RecAgent may provide new opportunities for alleviating this problem. To begin with, one can align the profiles of the cold start users in the real and virtual worlds. Then, even if the users in the real world have no (or a small number of) interactions with the items, we can observe how their projections in the virtual world behave, and collect these behaviors to augment the real-world data for learning more accurate recommender models.
**Social Recommendation**. In the field of social recommendation, previous models mostly assume that the users with a social connection may behave similarly. However, such an assumption is too strong; in our simulator, one can observe that even if two users are close friends, their preferences can still be quite different. In our simulator, the information dissemination process among the users on the social media is completely transparent. Thus, we have sufficient knowledge to learn the influence of social connections without needing to impose strong assumptions.
**RL-based Recommendation**. Recently, reinforcement learning (RL) based recommender models have attracted increasing attention, aiming to model the user long-term engagement. An important problem in RL-based recommendation is the lack of accurate simulators. RecAgent can be a natural solution to this problem, and one can design different prompts for querying the user feedback on a recommendation list based on LLM. Here, LLM basically plays the role of reward model. Compared with the previous reward models, LLM can better understand the users, and thus can provide more accurate rewards.
**Explainable Recommendation**. Providing explanations for the recommendation results has been shown to be significant for improving user trust in and satisfaction with the system. However, how to evaluate the explanations is a very hard problem. Based on RecAgent, we may directly ask the LLM about the users' feelings on the explanations. For example, "without the explanation, would you like to interact with the item?" (the necessity of the explanation), "Does the explanation provide you with sufficient information for the items?" (the informativeness of the explanation), "Is the explanation persuasive enough for you?" (the persuasiveness of the explanation).
We believe that there are many more scenarios (_e.g._, fairness aware recommendation, debiased recommendation) that RecAgent can be leveraged to improve traditional recommendation studies. As RecAgent becomes more perfect, it may even change existing industry recommender deployment processes. For example, people can maintain an image of their online environment, and then simulate the effect of the newly developed models (by replacing the recommender module in RecAgent). At last, the model with the best simulation results is deployed online.
## 5 Related Work
Simulation-based studies in the recommendation domain are not a new concept, and there have been many previous efforts to build recommender simulators. For example, RecSim [13] is a simulation platform, where the users and system can sequentially interact with each other, and one can freely assign different features to the users or items. Virtual Taobao [23] can simulate users which are aligned with the real-world datasets. MINDSim [15] is a user simulator to produce reasonable user behaviors in a news website. Despite effectiveness, all the above simulators are based on simple human-designed rules, and the user behaviors are usually constrained by strong assumptions. In
contrast, we base our simulator on LLMs, which can better understand the users, and the user behaviors are fully determined by LLMs without any external assumptions.
Our simulator is inspired by a pioneering work called generative agent [19]. However, this work aims to simulate the human's daily life in a general manner. Another related work is AgentVerse4, which simulates the interactions between the students and teacher in the education domain. Different from the previous work, our simulator focuses on the recommendation, which, we believe, is another killer application of the idea of LLM-based simulation. Unlike CV, NLP and many other domains, recommender system is a highly subjective task, and LLMs can better understand humans, which provides new opportunities for the previously unimaginable simulation-based studies.
Footnote 4: [https://github.com/OpenBMB/AgentVerse](https://github.com/OpenBMB/AgentVerse)
## 6 Conclusion and Future Work
In this paper, we introduce a novel recommender simulator based on LLM. Our simulator includes two modules, which simulate the users and recommender system, respectively. We design different actions for the users and system, and let them freely evolve in the simulator. We observe many interesting user behaviors which are well aligned with real human understanding. This paper makes a first step towards LLM-based simulation studies in the recommendation domain. We believe that there is much room left for improvement. To begin with, our simulator only has 10 users, which is still a toy model. In the future, one can introduce more users, and study how to design a tailored strategy to enhance the running efficiency. Then, one can also introduce more user behaviors other than interacting with the recommender system and social media, which may help capture real user preferences more comprehensively.
|
2305.14660 | Complex Mathematical Symbol Definition Structures: A Dataset and Model
for Coordination Resolution in Definition Extraction | Mathematical symbol definition extraction is important for improving
scholarly reading interfaces and scholarly information extraction (IE).
However, the task poses several challenges: math symbols are difficult to
process as they are not composed of natural language morphemes; and scholarly
papers often contain sentences that require resolving complex coordinate
structures. We present SymDef, an English language dataset of 5,927 sentences
from full-text scientific papers where each sentence is annotated with all
mathematical symbols linked with their corresponding definitions. This dataset
focuses specifically on complex coordination structures such as "respectively"
constructions, which often contain overlapping definition spans. We also
introduce a new definition extraction method that masks mathematical symbols,
creates a copy of each sentence for each symbol, specifies a target symbol, and
predicts its corresponding definition spans using slot filling. Our experiments
show that our definition extraction model significantly outperforms RoBERTa and
other strong IE baseline systems by 10.9 points with a macro F1 score of 84.82.
With our dataset and model, we can detect complex definitions in scholarly
documents to make scientific writing more readable. | Anna Martin-Boyle, Andrew Head, Kyle Lo, Risham Sidhu, Marti A. Hearst, Dongyeop Kang | 2023-05-24T02:53:48Z | http://arxiv.org/abs/2305.14660v1 | # Complex Mathematical Symbol Definition Structures:
###### Abstract
Mathematical symbol definition extraction is important for improving scholarly reading interfaces and scholarly information extraction (IE). However, the task poses several challenges: math symbols are difficult to process as they are not composed of natural language morphemes; and scholarly papers often contain sentences that require resolving complex coordinate structures. We present SymDef, an English language dataset of 5,927 sentences from full-text scientific papers where each sentence is annotated with all mathematical symbols linked with their corresponding definitions. This dataset focuses specifically on complex coordination structures such as "respectively" constructions, which often contain overlapping definition spans. We also introduce a new definition extraction method that masks mathematical symbols, creates a copy of each sentence for each symbol, specifies a target symbol, and predicts its corresponding definition spans using slot filling. Our experiments show that our definition extraction model significantly outperforms RoBERTa and other strong IE baseline systems by 10.9 points with a macro F1 score of 84.82. With our dataset and model, we can detect complex definitions in scholarly documents to make scientific writing more readable.1
Footnote 1: Our code and dataset are publicly available at [https://github.com/minnesotanlp/taddex](https://github.com/minnesotanlp/taddex)
## 1 Introduction
As the volume of scientific publishing increases, it is becoming crucial to develop more sophisticated analysis tools and user interfaces for helping scientists make sense of this ever-growing bounty of knowledge. One particular concern is the ability to accurately extract definitions for mathematical symbols. See Figure 1 for one potential use case for mathematical symbol extraction. We find mathematical symbol definition extraction crucial enough to warrant corpora and models tailored to this specific problem.
For definition recognition to be used in user-facing applications, it must achieve a high precision that has not yet been seen in work to date. This task is complicated by the fact that scientific papers often contain multiple symbols and definitions in one sentence, and their spans may be nested or overlapping. Analysis of these symbols and definitions must be coordinated such that the correct definitions are applied to each symbol. Consider for example, the following sentence fragment:
\[\ldots\mathbf{A},\,\mathbf{C}\text{ and }\boldsymbol{\upsilon}\text{ denote the within-layer adjacency, between-layer adjacency and the community label matrix, respectively.}\]
In this case, we wish to define \(\mathbf{A}\) as "within-layer adjacency matrix", \(\mathbf{C}\) as "between-layer adjacency matrix", and \(\boldsymbol{\upsilon}\) as "community label matrix".
For human readers, the word "respectively" immediately clarifies which definition is associated
Figure 1: Reading interfaces such as ScholarPhi Head et al. (2021) could use math symbol definition extraction to surface symbol definitions as needed. This would save the reader from having to flip between paper sections to look up the definitions of terms in mathematical expressions and algorithms, as in this example from Gu et al. (2018).
with each symbol. However, even this simple "respectively" construction is not obvious to an NLP algorithm, due to the fact that the definitions for \(\mathbf{A}\) and \(\mathbf{C}\) are split and overlap with the definition for \(\upsilon\). Little research has been done on the "respectively" construct specifically, but other work has found resolution of coordination to be important for resolving hard NLP problems. An error analysis by Fader et al. (2011) when working on information extraction found that 52% of errors were in part due to coordination. Information extraction in biosciences (Ogren, 2010; Kolluru et al., 2020; Saha and Mausam, 2018) builds on this insight by attempting to resolve coordination relations directly. Cohen et al. (2009) showed that F-scores for recognition of protein-protein structure could be significantly increased by more accurately recognizing coordination structure (using manual rules, assuming distributed semantics, and using post-processing for specific cases). Furthermore, systems that rely on token-wise structured prediction techniques such as IOB tagging are insufficient to capture complex coordination patterns due to their inability to accommodate overlapping entities.
In order to address the need for improved coordination resolution in scientific information extraction tasks, we developed SymDef, a corpus of scientific papers with a high frequency of complex coordination patterns. Annotations within SymDef are comprised of mathematical symbols masked as SYMBOL and their sometimes overlapping definitions. This corpus provides an interesting resource for study of complex coordination problems, not only because it contains a high frequency of coordination patterns, but also because the symbols are masked. Because the representations of each symbol are not differentiated from one another, the structure and syntax of the sentences are challenging to identify.
We achieved strong results on our SymDef dataset using a simple but effective method to find the complex mapping between multiple symbols and definitions. Specifically, we decompose the structured prediction problem into multiple passes of definition recognition, with one pass per symbol. For instance, our method would target the example sentence three times, once for each symbol in \(\{\mathbf{A}\), \(\mathbf{C}\), \(\upsilon\}\), and return the following symbol and definition pairs: <\(\mathbf{A}\), "within-layer adjacency matrix">, <\(\mathbf{C}\), "between-layer adjacency matrix">, and <\(\upsilon\), "community label matrix">. Since the model recognizes definitions based on a given target symbol, our model is called a _target-based_ model.
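A minimal sketch of this per-symbol expansion is shown below; the marker tokens and data layout are illustrative assumptions rather than the exact preprocessing used for SymDef.

```
def expand_per_symbol(tokens, symbol_positions):
    # Mask every math symbol, then emit one copy of the sentence per symbol with
    # only the target occurrence marked, so the tagger fills that symbol's slot.
    masked = ["SYMBOL" if i in symbol_positions else tok for i, tok in enumerate(tokens)]
    copies = []
    for target in symbol_positions:
        copy = list(masked)
        copy[target] = "[TARGET-SYMBOL]"
        copies.append((target, copy))
    return copies

sentence = ("A , C and v denote the within-layer adjacency , between-layer adjacency "
            "and the community label matrix , respectively .").split()
for pos, copy in expand_per_symbol(sentence, symbol_positions=[0, 2, 4]):
    print(pos, " ".join(copy))
```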
Our contributions are the following:
* SymDef: a collection of 5,927 sentences from the full texts of 21 scientific papers, with symbols and definitions annotated when present. The papers included contain sentences with complex coordination patterns (such as containing more than two "and"s or "or"s, and/or the word "respectively"). In total, the dataset contains 913 sentences with complex coordination patterns.
* The development of a novel target-based approach to definition recognition that isolates one symbol at a time and finds the definition for that symbol within syntactically complex sentences. Our system outperforms two IE baselines and few-shot GPT3 inference by large margins.
## 2 Related Work
We discuss previous efforts towards resolving coordination problems, related work in definition recognition, and relevant definition extraction corpora.
**Syntactic Structure Recognition**. Coordination is well-studied in linguistics, but analysis is generally in terms of syntactic structure and logical constraints. For example, Hara et al. (2009) focus on creating tree structures or parses for coordination where determining scope is sufficient. Some notable sub-cases such as Argument-Cluster Coordination or right-node raising are often addressed in some way (Ficler and Goldberg, 2016). There is also work determining the phrase boundaries of components in coordinate structures (Shimbo and Hara, 2007; Ficler and Goldberg, 2016).
While previous work on the syntactic structure of linguistic coordination is useful, definition structures in our work are sometimes more flexible or varied. Furthermore, Dalrymple and Kehler (1995) found that determining the meaning of "respectively" constructions is based on semantics and pragmatics, not the syntax of coordinating conjunctions, and Ogren (2011) found that a parser-free approach works better than one based on a syntactic parse for interpretation of coordinate structures.
Teranishi et al. (2017) propose a neural model that uses similarity and substitutability to predict coordinate spans. Other work has focused on the
problem of splitting sentences into two semantically equivalent ones (Ogren, 2010). However, none of the previous work on coordinated definition structures is applied towards using the resolution of coordination patterns for the extraction of term-definition pairs.
Closest to our work is that of Saha and Mausam (2018) which splits conjunctions to form multiple coherent simple sentences before extracting relation tuples. One constraint is that multiple coordination structures in a sentence must either be disjoint or completely nested, which is more restrictive than our approach.
**Definition Recognition**. We have found that the "respectively" construct is frequently used in the definition of mathematical terms, but its use is not discussed in the literature on definition detection. Others have noted the importance of complex conjunctions in biomedical texts: Ogren (2010) notes that there are 50% more conjunctions in biomedical scientific text than in newswire text, and Tateisi et al. (2008) also found that coordinating conjunctions occur nearly twice as often in biomedical abstracts as in newswire text. This greater frequency of complex conjunctions in scientific and biomedical texts is significant, as Saha et al. (2017) found that coordination was the leading cause of IE recall errors.
Also relevant to our work is that of Dai (2018), who summarized the state of the art in discontiguous span recognition, and Dai et al. (2020), who proposed a transition-based model with generic neural encoding for discontinuous named entity recognition, focusing on separated components and overlapping components.
Span-based information extraction models such as SciIE (Luan et al., 2018) and DyGIE++ (Wadden et al., 2019) are relevant for the task of extracting overlapping or nested entities in that span representations are better suited to capture overlapping tokens than traditional IOB tagging approaches; for this reason, we use SciIE and DyGIE++ as baseline models (see Section 5).
**Related Corpora**.
There are a few related datasets annotated for definition extraction. The word-class lattices (WCL) dataset (Navigli et al., 2010) comprises 4,564 sentences from the Wikipedia corpus, 1,717 of which have a single definition and 2,847 of which contain false definitions (patterns that resemble definitions but do not qualify as such). The W00 dataset (Jin et al., 2013) contains 2,512 sentences taken from 234 workshop papers from the ACL Anthology, 865 of which contain one or more non-overlapping definitions.
The Definition Extraction from Texts (DEFT) corpus (Spala et al., 2019) was developed with the intention to provide a more robust corpus of definition annotations with a higher incidence of complex data samples than the WCL and W00 datasets. DEFT includes 2,443 sentences from 2017 SEC filings and 21,303 sentences from open source textbooks on a number of subjects, with a total of 11,004 definition annotations. The DEFT corpus accommodates cross-sentence definitions and multiple definitions per sentence, but not overlapping or nested terms and definitions.
Observing that the extraction of definitions from math contexts requires specialized training corpora, the authors of the Wolfram Mathworld (WFM) corpus (Vanetik et al., 2020) developed a corpus of full sentence definitions. This corpus comprises 1,793 sentences taken from 2,352 articles from Wolfram Mathworld, 811 of which contain a single definition.
Most similar to our corpus is the NTCIR Math Understanding Subtask corpus (Kristianto et al., 2012). This corpus contains 10 ArXiv papers with annotated math expressions and their descriptions. Similarly to ours, the annotation scheme allows for discontinuous descriptions. The primary difference between SymDef and the NTCIR corpus is SymDef's focus on overlapping definition and respectively cases. The 21 papers in SymDef were specifically selected because they had relatively high counts of the word "respectively" and sentences with multiple "and"s, and our approach accommodates overlapping definitions (see Section 3 for details).
## 3 SymDef: Coordination Dataset
SymDef is annotated for the coordination of mathematical symbols and their definitions in order to provide a resource for training smart reading interfaces to recognize symbol definitions with a high level of precision. The corpus contains 5,927 English language sentences from the full texts of 21 machine learning papers published on arXiv2. These papers were selected by ranking arXiv publications from 2012 to 2022 by the
number of mathematical symbols and coordination patterns. This ranking was performed by counting qualifying coordination patterns in each paper, where higher coordination pattern counts were prioritized. These counts were determined per paper using regex pattern matching, searching for the strings "respectively" and ", and". The highest ranked papers were manually inspected and 21 papers were chosen based on the prevalence of symbol-definition pairs.
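The counting step above can be reproduced with a few lines of standard string matching. The sketch below is illustrative only: the function name and the toy `papers` mapping are ours, and the final selection additionally involved the manual inspection described above.

```python
import re

def coordination_score(text: str) -> int:
    """Count the coordination cues used for ranking: the word
    "respectively" and occurrences of ", and"."""
    respectively = len(re.findall(r"\brespectively\b", text, flags=re.IGNORECASE))
    comma_and = len(re.findall(r",\s+and\b", text))
    return respectively + comma_and

# Illustrative usage: rank candidate papers by their coordination score.
papers = {"paper_A": "..., and ..., respectively.", "paper_B": "plain text"}
ranked = sorted(papers, key=lambda name: coordination_score(papers[name]), reverse=True)
```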
The first round of annotations was performed by a subset of the authors. This round contributed to the majority of the dataset, resulting in the annotation of 5,661 sentences comprising the full texts of 20 papers.
Additional data were created to supplement the train dataset by annotating another paper containing 226 sentences. These annotations were performed by two domain experts hired through Upwork, one holding a PhD and the other a doctoral student, both in mathematics. The annotators were selected from a set of four applicants to the Upwork listing, all of whom reside in the United States. During the screening process, each of the four applicants was provided with training videos and written documentation in advance, and was trained for 10-30 minutes on 10 example sentences. Their example annotations were monitored and they were asked questions about their process. Upwork annotators were compensated during training and during the annotation task with an hourly rate of $25.00. Each annotator tracked their hours and was paid $543.75 for their work. Upwork applicants were informed of the intended use of the data in the job description, and signed an Agreement form.
All annotations were performed using the annotation software BRAT3.
Footnote 3: [https://brat.nlplab.org/](https://brat.nlplab.org/), view license here
### Annotation Schema
The annotation goal for our dataset was to link symbols with the spans of text that define them at the sentence level. In our formulation, definition spans must declare the meaning of each target symbol; a detailed description of the annotation scheme appears in Appendix A. For example, definition spans may state what the symbol stands for, what a function does, or the datatype it represents. In the case that the symbol represents a collection, the definition may serve to describe the elements contained by the symbol. However, candidate phrases that merely assign a value to the symbol, describe how the symbol is used or computed, or define the symbol with an equation are not valid. Definition spans do not have to contain contiguous text, as they may be split across different parts of the sentence. Furthermore, definitions are permitted to overlap with each other and with symbols as seen in Figure 2.
### Inter-Annotator Agreement
In Table 1, precision, recall, and F1 scores for exact term and definition matches were calculated to determine the inter-annotator agreement between the Upwork annotators over a subset of 266 sentences. Additionally, the mean percentage of overlapping tokens for definition spans was calculated. There was significant agreement between annotators for term identification, earning an F1 score of 0.9. Definition identification was more difficult, yielding an F1 score of 0.67 for exact span matches. However, on average 85% of each definition span overlapped between annotators, indicating that, while it is difficult to find the exact span boundaries, annotators were still in agreement on parts of the definition span.
Of the definition annotations that are not perfect matches, 26 of the annotations from one annotator are contained in the annotations from the other. 126 overlap without containment, with an average number of overlapping words of 4.8. Additionally, 7 of the annotations differ completely, without any overlap.
A review of 1,442 test samples found 76 annotator errors. 46 of these errors were missed definitions. 10 definition spans were nearly correct but contained extra words. 6 were invalid definitions. The remaining errors had to do with improperly defining enumerator variables.
Figure 2: An annotation example for sentences with nested symbols and definitions. \(l\) is defined as “layer”, \(h^{l}\) is defined as “hidden representation at layer \(l\)”, \(h^{0}\) is defined as “input \(x\)”, and \(x\) is defined as “input”.
### Dataset Characteristics
We measure the structural complexity of SymDef by considering how many symbols and definitions there are per sentence and how difficult they are to link, and how many sentences contain overlapping or nested symbols and definitions.
**Coordination of Multiple Terms and Definitions**. There are a few characteristics to consider when evaluating the difficulty of coordinating multiple terms and definitions, including: the number of terms and definitions in positive sentences; whether or not every symbol is defined in the sentence (some annotated symbols do not have definitions); and how frequently the terms and definitions are collated (e.g. SYM...DEF...SYM...DEF...). The rationale is that an equal number of collated symbols and definitions could be easily coordinated using a simple rule.
The WCL and WFM corpora contain only one definition per sentence. We compare SymDef with the W00 and DEFT corpora, which sometimes contain multiple terms and definitions per sentence.
**Overlapping Symbols and Definitions**. SymDef is uniquely focused on the problem of overlapping symbols and definitions, containing 179 sentences with overlapping spans (13% of positive sentences). Furthermore, many sentences with overlap contained multiple instances of overlapped symbols and definitions. Across all positive sentences there were 480 instances of overlapping, implying that sentences with overlap contain 2.68 instances on average. The W00 and DEFT datasets do not contain overlapping annotations.
## 4 TaDDEx: Coordination Resolution through Targeted Definition Extraction
Our aim is to coordinate multiple terms and definitions through targeted definition detection. This is achieved by implementing a target-based definition detection model where the target is one symbol from the sample sentence for which a definition must be recognized.
### Targeting Individual Symbols in Complex Coordination
Mathematical symbols are masked with the term SYMBOL. Sentences with more than one symbol are duplicated once for each additional symbol. For each sample, the symbol for which a definition should be found is tagged as "</s>SYMBOL</s>". In this way, each sentence is queried once for each mathematical symbol it contains. For example, the following sentence from Zhu et al. (2019)
_And the top-left corner and the bottom-right corner of the predicted projected box are \((i-S\hat{o}_{t_{i,j}},j-S\hat{o}_{t_{i,j}})\) and \((i+S\hat{o}_{b_{i,j}},j+S\hat{o}_{r_{i,j}}|)\) respectively._
would be split into the following two sentences:
_And the top-left corner and the bottom-right corner of the predicted projected box are </s>SYMBOL</s> and SYMBOL respectively._
_And the top-left corner and the bottom-right corner of the predicted projected box are SYMBOL and </s>SYMBOL</s> respectively._
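A minimal sketch of this per-symbol expansion is shown below. The `</s>SYMBOL</s>` marker follows the convention described above, while the function name and whitespace tokenization are our own simplifications rather than the released implementation.

```python
from typing import List

SYMBOL = "SYMBOL"

def expand_targets(tokens: List[str]) -> List[List[str]]:
    """Duplicate a symbol-masked sentence once per symbol and tag a single
    target occurrence per copy, so each copy queries one symbol's definition."""
    samples = []
    for i, token in enumerate(tokens):
        if token == SYMBOL:
            copy = list(tokens)
            copy[i] = f"</s>{SYMBOL}</s>"
            samples.append(copy)
    return samples

sentence = ("And the top-left corner and the bottom-right corner of the "
            "predicted projected box are SYMBOL and SYMBOL respectively .").split()
for sample in expand_targets(sentence):
    print(" ".join(sample))
```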
\begin{table}
\begin{tabular}{c c c c} \hline \hline & **Term** & **Definition** & **Overlap** \\ \hline Precision & \(.88\pm.08\) & \(.65\pm.12\) & \\ \hline Recall & \(.94\pm.05\) & \(.69\pm.11\) & \(85\%\pm 7\%\) \\ \hline F1 & \(.90\pm.06\) & \(.67\pm.11\) & \\ \hline \hline \end{tabular}
\end{table}
Table 1: IAA scores for exact term matches, exact definition matches, and mean percent of definition tokens that overlap in SymDef.
Figure 3: The TaDDEx model. A sentence with \(n\) symbols is expanded into \(n\) samples. Each sample is input into the RoBERTa model individually such that a predicted definition can be recognized for each target symbol.
### Definition Recognition from Target Symbol
After an individual symbol is targeted and split into separate instances, we detect a definition of the target symbol. Our model builds on the state-of-the-art definition recognition model called Heuristically-Enhanced Deep Definition Extraction (HEDDEx) (Kang et al., 2020). HEDDEx is trained with multi-task learning on two objectives: it first performs slot-tagging using a Conditional Random Field (CRF) sequence prediction model. The model assigns each token in a sentence one of five tags: term ("B-TERM", "I-TERM"), definition ("B-DEF", "I-DEF"), or other ("O"). At the same time, a binary classifier is trained to predict a label indicating if the sentence contains a definition.
In detail, after tokenizing the sentences using the ScispaCy4 pipeline en_core_sci_md (Neumann et al., 2019), we encode the input with a Transformer encoder fine-tuned on the task of definition recognition. Following Kang et al. (2020), we choose the best performing Transformer encoder, RoBERTa (Liu et al., 2019), as our main framework. We used the large version of RoBERTa from Huggingface5 (Wolf et al., 2020). The CRF prediction model we used is torch-crf6.
Footnote 4: [https://allenai.github.io/scispacy/](https://allenai.github.io/scispacy/), Apache License 2.0
Footnote 5: [https://huggingface.co/docs/transformers/model_doc/roberta](https://huggingface.co/docs/transformers/model_doc/roberta), Apache License 2.0
Footnote 6: [https://github.com/yumoh/torchcrf](https://github.com/yumoh/torchcrf), MIT Licence
We also provide additional syntactic features as input, namely parts of speech, syntactic dependencies, abbreviations, and entities, extracted using ScispaCy.
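For concreteness, a stripped-down sketch of this architecture is given below. It keeps the RoBERTa encoder, the token-level tag head, and the sentence-level classifier, but omits the CRF layer and the extra syntactic features, so it approximates rather than reproduces the HEDDEx/TaDDEx implementation; class and variable names are ours.

```python
import torch.nn as nn
from transformers import RobertaModel, RobertaTokenizerFast

TAGS = ["O", "B-TERM", "I-TERM", "B-DEF", "I-DEF"]

class TargetedDefinitionTagger(nn.Module):
    """RoBERTa encoder with a per-token tag head (slot tagging) and a
    sentence-level head predicting whether the sentence has a definition."""

    def __init__(self, model_name: str = "roberta-large"):
        super().__init__()
        self.encoder = RobertaModel.from_pretrained(model_name)
        hidden = self.encoder.config.hidden_size
        self.tag_head = nn.Linear(hidden, len(TAGS))
        self.cls_head = nn.Linear(hidden, 2)

    def forward(self, input_ids, attention_mask):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        tag_logits = self.tag_head(out.last_hidden_state)        # (B, T, |TAGS|)
        cls_logits = self.cls_head(out.last_hidden_state[:, 0])  # first token
        return tag_logits, cls_logits

tokenizer = RobertaTokenizerFast.from_pretrained("roberta-large")
batch = tokenizer(["the corners are </s>SYMBOL</s> and SYMBOL respectively"],
                  return_tensors="pt", truncation=True, max_length=100)
model = TargetedDefinitionTagger()
tag_logits, cls_logits = model(batch["input_ids"], batch["attention_mask"])
```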
## 5 Experiments
**Datasets**. The dataset is split randomly into train, dev, and test splits. The full texts of papers are kept together for the test set (i.e., sentences in the test set are not members of papers in the train set). The training set contains 4,930 samples after splitting each sentence into samples according to the number of symbols. The dev and test sets contain 1,442 samples each. The data is managed using PyTorch's dataloader7 (Paszke et al., 2019).
Footnote 7: [https://pytorch.org/docs/stable/data.html](https://pytorch.org/docs/stable/data.html), view license here
**Baselines**. We trained and tested two span-based information extraction models on our dataset, SciIE (Luan et al., 2018) and DyGIE++ (Wadden et al., 2019). We transformed our dataset into the SciIE format, where TERM and DEF are named entities, and DEFINITION-OF is the relation between coordinated terms and their definitions. Mathematical symbols were masked with SYMBOL, but the models were not pointed towards a targeted symbol. Instead, the models were tasked with extracting multiple TERM and DEFINITION pairs per training sample. Each model's ability to coordinate multiple terms and definitions was measured by looking at its ability to extract DEFINITION-OF relations between the named entities. Details on the setup for these experiments can be found in Appendix B.
We also calculated zero-, one-, and few-shot GPT3 baselines using text-davinci-003 in a question-answer format. For details on the experimental setup and post-processing, see Appendix C.
**Training**. For TaDDEx, we trained RoBERTa large (Liu et al., 2019) on the tokenized samples
\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline dataset & \# positive sentences & total terms (terms per sentence) & total defs (defs per sentence) & \# equal term and def. counts & \# collated terms and defs & term \(\mid\) def IAA \\ \hline SymDef & 1,403 & 3,290 **(2.34)** & 1,713 **(1.22)** & 681 **(49\%)** & 576 **(41\%)** & **0.90 \(\mid\) 0.67** \\ W00 & 865 & 959 (1.11) & 908 (1.05) & 725 (84\%) & 699 (81\%) & - \(\mid\) - \\ DEFT & **7,311** & 7,847 (1.07) & 7,262 (0.99) & 5,220 (72\%) & 6,582 (90\%) & 0.80 \(\mid\) 0.54 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Column 2 shows the total number of sentences containing at least one term. Column 4 shows the total number of definitions. Columns 5 and 6 show the number of samples containing an equal number of terms and definitions and the number with collated terms and definitions. Column 7 shows the reported Inter-Annotator Agreement scores (DEFT was evaluated using Krippendorff’s alpha). Boldface indicates the best value per column.
and syntactic features from the training set for 50 epochs using a batch size of 12, and maximum sequence length of 100. AdamW8 is used for optimization, with a learning rate of 2e\(-\)5 and Adam's epsilon set to 1e\(-\)6. These hyperparameter settings were based on the results of the parameter sweep performed for Kang et al. (2020). After each epoch, the model is validated on the dev set, and the model weights are updated upon improved performance. Loss is calculated using cross entropy loss9.
Footnote 8: [https://huggingface.co/transformers/v3.0.2/main_classes/optimizer_schedules.html](https://huggingface.co/transformers/v3.0.2/main_classes/optimizer_schedules.html)
Footnote 9: [https://pytorch.org/docs/stable/generated/torch.nn.CrossEntropyLoss.html](https://pytorch.org/docs/stable/generated/torch.nn.CrossEntropyLoss.html)
**Evaluation Metrics**. We used IOB tagging to evaluate model performance, where words in the sample sentence that are not a part of a term or definition are assigned "O", terms are assigned "B-TERM", and definition spans are indicated with the tags "B-DEF" (for the first word in the span) and "I-DEF" (for the remaining words in the span). We ultimately merged the "B-DEF" and "I-DEF" tags. The predicted labeling is compared with the ground truth by calculating the macro F1, precision, and recall scores for the three classes "O", "B-TERM", and "I-DEF". We also report the F1, precision, and recall scores for "B-TERM" and "I-DEF" individually. All scores were calculated for all models using scikit-learn (Pedregosa et al., 2011).
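The token-level scoring can be reproduced with scikit-learn as sketched below; the toy gold and predicted sequences are illustrative, and the real evaluation runs over the flattened tags of the whole test set.

```python
from sklearn.metrics import precision_recall_fscore_support

LABELS = ["O", "B-TERM", "I-DEF"]  # "B-DEF" merged into "I-DEF" as described above

# Toy token-aligned tag sequences (flattened over all test sentences).
gold = ["O", "B-TERM", "O", "I-DEF", "I-DEF", "O"]
pred = ["O", "B-TERM", "O", "I-DEF", "O", "O"]

p, r, f1, _ = precision_recall_fscore_support(
    gold, pred, labels=LABELS, average="macro", zero_division=0)
print(f"macro  P={p:.2f}  R={r:.2f}  F1={f1:.2f}")

# Per-label scores give the Term ("B-TERM") and Definition ("I-DEF") columns.
per_p, per_r, per_f1, _ = precision_recall_fscore_support(
    gold, pred, labels=LABELS, average=None, zero_division=0)
```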
\begin{table}
\begin{tabular}{c c c c c} \hline \hline
**Model** & \multicolumn{2}{c}{**Macro**} & **Term** & **Def** \\ \hline \multirow{3}{*}{TaDDEx (ours)} & F & **84.82** & **81.54** & **73.56** \\ & P & 82.08 & 74.83 & 71.91 \\ & R & **88.04** & 89.56 & **75.28** \\ \hline \multirow{3}{*}{HEDDEx} & F & 64.13 & 64.63 & 36.03 \\ & P & 64.80 & 61.68 & 44.37 \\ & R & 64.26 & 67.87 & 30.33 \\ \hline \multirow{3}{*}{SciIE} & F & 63.22 & 53.16 & 37.49 \\ & P & 84.76 & 79.53 & 76.47 \\ & R & 54.85 & 39.92 & 24.83 \\ \hline \multirow{3}{*}{DyGIE++ (Wadden et al., 2019)} & F & 73.92 & 65.44 & 57.03 \\ & P & **98.02** & **98.41** & **97.05** \\ & R & 63.12 & 49.01 & 40.38 \\ \hline \multirow{3}{*}{GPT3 (few-shot) (Brown et al., 2020)} & F & 50.51 & 66.30 & 37.22 \\ & P & 43.79 & 50.53 & 25.06 \\ \cline{1-1} & R & **66.53** & **96.39** & 72.31 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Comparison of definition recognition systems on SymDef. F1, precision, and recall scores. The Macro scores were calculated by finding the mean of the individual scores for each of the three labels “O”, “I-DEF”, and “I-TERM”. The Term and Definition scores are a binary measure of the system’s ability to classify Terms and Definitions.
Figure 4: An example ground-truth annotation in the test set of SymDef: (left) a complex sample including the terms, definitions, and relations between them. (right) Eight ground-truth and predicted term-definition pairs. Exact correct definitions are shown in blue. Empty output is shown as “-”. From Zhang et al. (2017).
### Main Results
The evaluation scores can be seen for TaDDEx and the baseline systems in Table 3. Results were generated with a single run. Both IE baseline models were able to extract the named entities TERM and DEF, as well as the relation DEFINITION-OF. See Table 3 for the resulting scores.
Figure 4 shows a sample from the test set containing a complicated coordination. This sample has 8 terms and 8 definitions, some of which are overlapping.
### Error Analysis
Of the 1,442 test samples, our system made incorrect predictions for 135 samples. Of the 135 errors, \(28\) (\(20.7\%\)) of them were false negatives, 33 (\(24.4\%\)) of them were false positives, and 74 (\(54.8\%\)) were labeled incorrectly. Often, the system's predicted definition overlapped with the ground truth, but added or omitted tokens. Sometimes, the system incorrectly labeled definitions in sentences without a symbol definition.
There was not a strong correlation between system accuracy and the total number of symbols in the sample for the TaDDEx model and GPT3 baselines, but HEDDEx, SciIE, and DyGIE++ performed much better for samples with fewer symbols (see Figure 5). All three systems performed perfectly on sentences without a symbol. TaDDEx was least accurate for sentences with six or ten symbols, but did not generally perform worse as the number of symbols increased: the mean macro F1 score for samples with between 1 and 5 symbols was 85.03 with standard deviation \(\pm 5.48\), and the mean score for samples with between 6 and 10 symbols was \(85.11\pm 9.15\). SciIE's scores decreased as the number of symbols per sample increased from 0 to 5 symbols, remained stable from 5 to 9 symbols (scores ranging between 47.51 and 50.56), then dropped to 40.39 for samples with ten symbols. DyGIE++ assigned "O" to every token, yielding a perfect score for samples with zero symbols, and between 31.84 and 32.84 for all other samples. These results are significant, because they show that the targeted definition recognition method is better at complex term-definition coordination than traditional span-based information extraction techniques.
## 6 Limitations and Future Work
Having to point the model to the term targeted for definition identification requires prior knowledge of the terms in the dataset. This requires either a dataset with annotated terms such as SymDef, or an initial classification step to extract the terms for each sentence.
Within the domain of our SymDef dataset, terms are restricted to mathematical expressions, which are masked with the single token SYMBOL. One limitation of our model is that it underperforms for non-symbolic terms. However, we emphasize that the problem of mathematical symbol definition extraction is important enough that it is appropriate to target an approach specifically to this problem. Furthermore, we believe the inability of information extraction systems such as DyGIE++ and SciIE to adapt to the challenges of SymDef warrants the development of approaches that work specifically for the extraction of mathematical symbol definitions.
## 7 Potential Risks
A system that surfaces automatically extracted definitions to readers without 100% accuracy will occasionally surface an inaccurate definition. Intelligent reading interfaces that use definition extraction run the risk of providing wrong information to the reader. Furthermore, the "illusion of clarity" that
Figure 5: (a) The macro F1 score based on the number of symbols in the sample, and (b) the difference in scores calculated by subtracting baseline F1 scores from TaDDEx.
information systems provide can elicit a false sense of complete understanding in human users, which discourages the users from looking more closely (Nguyen, 2021).
## 8 Conclusion
In this paper we describe the creation of a dataset of 21 scientific papers containing a high concentration of complex term and definition coordinations. We also provide a novel methodology for recognizing multiple coordinated term and definition pairs by targeting one term at a time. Our results show improvement over span-based approaches to relation extraction. Particularly promising is the consistency that our model maintains as the number of symbols per sentence increases.
|
2307.14019 | One-Nearest Neighborhood Guides Inlier Estimation for Unsupervised Point
Cloud Registration | The precision of unsupervised point cloud registration methods is typically
limited by the lack of reliable inlier estimation and self-supervised signal,
especially in partially overlapping scenarios. In this paper, we propose an
effective inlier estimation method for unsupervised point cloud registration by
capturing geometric structure consistency between the source point cloud and
its corresponding reference point cloud copy. Specifically, to obtain a high
quality reference point cloud copy, an One-Nearest Neighborhood (1-NN) point
cloud is generated by input point cloud. This facilitates matching map
construction and allows for integrating dual neighborhood matching scores of
1-NN point cloud and input point cloud to improve matching confidence.
Benefiting from the high quality reference copy, we argue that the neighborhood
graph formed by inlier and its neighborhood should have consistency between
source point cloud and its corresponding reference copy. Based on this
observation, we construct transformation-invariant geometric structure
representations and capture geometric structure consistency to score the inlier
confidence for estimated correspondences between source point cloud and its
reference copy. This strategy can simultaneously provide the reliable
self-supervised signal for model optimization. Finally, we further calculate
transformation estimation by the weighted SVD algorithm with the estimated
correspondences and corresponding inlier confidence. We train the proposed
model in an unsupervised manner, and extensive experiments on synthetic and
real-world datasets illustrate the effectiveness of the proposed method. | Yongzhe Yuan, Yue Wu, Maoguo Gong, Qiguang Miao, A. K. Qin | 2023-07-26T08:04:01Z | http://arxiv.org/abs/2307.14019v1 | # One-Nearest Neighborhood Guides Inlier Estimation for Unsupervised Point Cloud Registration
###### Abstract
The precision of unsupervised point cloud registration methods is typically limited by the lack of reliable inlier estimation and self-supervised signal, especially in partially overlapping scenarios. In this paper, we propose an effective inlier estimation method for unsupervised point cloud registration by capturing geometric structure consistency between the source point cloud and its corresponding reference point cloud copy. Specifically, to obtain a high quality reference point cloud copy, an One-Nearest Neighborhood (1-NN) point cloud is generated by input point cloud. This facilitates matching map construction and allows for integrating dual neighborhood matching scores of 1-NN point cloud and input point cloud to improve matching confidence. Benefiting from the high quality reference copy, we argue that the neighborhood graph formed by inlier and its neighborhood should have consistency between source point cloud and its corresponding reference copy. Based on this observation, we construct transformation-invariant geometric structure representations and capture geometric structure consistency to score the inlier confidence for estimated correspondences between source point cloud and its reference copy. This strategy can simultaneously provide the reliable self-supervised signal for model optimization. Finally, we further calculate transformation estimation by the weighted SVD algorithm with the estimated correspondences and corresponding inlier confidence. We train the proposed model in an unsupervised manner, and extensive experiments on synthetic and real-world datasets illustrate the effectiveness of the proposed method.
## 1 Introduction
With the rapid development of 3D data acquisition technology, point cloud data collected by LiDAR [42], Structured Light Sensors [29], and Stereo Cameras [9] has become ubiquitous in various 3D computer vision and robotics applications [20, 6], such as autonomous driving [15], surgical navigation [23], and simultaneous localization and mapping [11]. In such applications, rigid body point cloud registration, which aims to find a rigid transformation to align one point cloud to another, plays an essential role.
The recent advances have been dominated by learning-based methods. Most of these methods focus on solving the point cloud registration task in a supervised manner [21, 40, 33, 16, 39]. They require labeled data as the supervision signal to learn effective representations. However, obtaining labeled data is cumbersome and time-consuming, which may hinder applications in real scenarios. To address this limitation, unsupervised point cloud registration
Figure 1: A toy example of the single neighborhood and the proposed 1-NN strategy. (a) shows that a false matching (C3) has a very high matching score because of high single neighborhood similarity and similar contextual features. (b) shows that the matching score is adjusted with the help of the 1-NN strategy, and C3 is correctly judged as a false matching.
methods have gradually attracted attention. Global alignment difference is a widely used optimization signal in unsupervised methods, which learn the optimal rigid transformation to align a point cloud pair by minimizing the Chamfer Distance [24, 43]. Nevertheless, the Chamfer Distance is very sensitive to the presence of outliers and cannot provide an effective self-supervised signal to guide inlier estimation on partially overlapping point clouds.
To tackle the issues mentioned above, we propose an effective inlier estimation method for unsupervised point cloud registration by capturing geometric structure consistency between the source point cloud and its corresponding reference point cloud copy. Two crucial questions remain to be addressed in order to make unsupervised inlier estimation a success: _How can a high quality corresponding reference point cloud copy be obtained? Can a single neighborhood strategy provide a reliable matching map for generating the reference copy?_
We provide answers to both questions. Our key insight is that relying solely on a single neighborhood is unreliable for generating the matching map. As shown in Figure 1(a), the false matching C3 is mistakenly identified as a correct one due to high single neighborhood similarity and similar contextual features. Interestingly, introducing the closest-point neighborhood of the raw point to aid the judgement can greatly alleviate this dilemma. The reason is that at least one of the raw point and its closest point has a correct matching. Even if both points are false matchings, the integrated dual neighborhood matching score in the matching map is typically quite small and the real impact is limited. We define this strategy as One-Nearest Neighborhood (1-NN), shown in Figure 1(b). The dual neighborhood matching score of C3 decreases due to the low matching score of the 1-NN strategy, and thus C3 is correctly judged as a false matching.
Motivated by the discussion above, we propose the dual neighborhood fusion matching module to facilitate matching map construction and integrate dual neighborhood matching scores to generate a high quality reference point cloud copy. This module employs a 1-NN strategy, which selects the closest point from the input point cloud to generate a 1-NN point cloud. Benefiting from the high quality reference copy, the neighborhood graph formed by an inlier and its neighborhood should be consistent between the source point cloud and its corresponding reference copy, while an outlier is just the opposite. Based on this observation, we propose the geometric neighborhood inlier estimation module to construct effective transformation-invariant geometric structure representations and capture their consistency to score the inlier confidence for each estimated correspondence between the source point cloud and its corresponding reference copy. This module simultaneously provides an effective geometric-structure-based byproduct that serves as a self-supervised signal for model optimization. To demonstrate the efficacy of the proposed method, we conduct extensive experiments on the synthetic datasets ModelNet40 [35] and Augmented ICL-NUIM [7] and the real-world dataset 7Scenes [41]. Experimental results illustrate that capturing geometric structure consistency between the source point cloud and its corresponding reference copy is effective for inlier estimation. To summarize, our contributions are as follows:
* We propose the dual neighborhood fusion matching module to facilitate matching map construction, which can generate a high quality reference copy for the source point cloud.
* Based on the high quality reference copy, we design a geometric neighborhood inlier estimation module to score the inlier confidence for each estimated correspondence between the source point cloud and its reference copy.
* In the unsupervised setting, instead of using the ground-truth transformation, we construct a geometric structure consistency objective based on a transformation-invariant self-supervised signal for model training and optimization.
## 2 Related Work
**Traditional point cloud registration methods.** Most traditional methods need a good initial transformation and converge to a local minimum near the initialization point. One of the most influential methods is the Iterative Closest Point (ICP) algorithm [4], which begins with an initial transformation and iteratively alternates between solving two simple subproblems: finding the closest points as correspondences under the current transformation, and computing the optimal transformation by SVD [19] based on the identified correspondences. Though ICP can achieve high-precision registration, it is susceptible to initial perturbations. In recent years, variants of ICP have been proposed [30, 38, 5, 14, 27], which mitigate the defects of ICP and enhance the registration accuracy [3]. However, these methods retain a few essential drawbacks. Firstly, they depend strongly on the initialization. Secondly, it is difficult to integrate them into a deep learning pipeline as they lack differentiability. Thirdly, explicit estimation of corresponding points leads to quadratic complexity scaling with the number of points [28], which can introduce significant computational challenges.
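To make the ICP alternation concrete, a minimal point-to-point sketch in NumPy/SciPy is given below; it assumes roughly aligned inputs and omits the robustness improvements of the ICP variants discussed above, and the function names are ours.

```python
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(P, Q):
    """Closed-form least-squares rigid transform (SVD / Kabsch) mapping P onto Q."""
    cp, cq = P.mean(0), Q.mean(0)
    H = (P - cp).T @ (Q - cq)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # avoid reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cq - R @ cp

def icp(P, Q, iters=30):
    """Vanilla point-to-point ICP: alternate closest-point matching and SVD."""
    tree, P_cur = cKDTree(Q), P.copy()
    R_total, t_total = np.eye(3), np.zeros(3)
    for _ in range(iters):
        _, idx = tree.query(P_cur)              # closest point in Q for each source point
        R, t = best_rigid_transform(P_cur, Q[idx])
        P_cur = P_cur @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total
```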
**Learning-based registration methods.** At present, most learning-based methods are based on supervision [33, 40, 26, 12]. PointnetLK [1] is a classical correspondence-free method, which calculates global feature descriptors through PointNet and iteratively uses the Inverse Compositional formulation and LK algorithm (IC-LK) [22, 37] to minimize distance between the descriptors to achieve registration. RPM-Net [39] utilizes the differentiable Sinkhorn
layer and annealing to get soft assignments of point correspondences from hybrid features learned from both spatial coordinates and local geometry.
Recently, unsupervised point cloud registration has gained increasing attention due to its applicability in scenarios where labeled training data is scarce or unavailable. Some methods have been proposed to address this challenge and achieved promising results [18, 10, 8, 2, 17]. The feature-metric point cloud registration framework (FMR) [16] enforces the optimisation of registration by minimising a feature-metric projection error with an autoencoder-based network. Unfortunately, its registration performance declines significantly on partial data due to the lack of inlier estimation. RIE [31] proposes an inlier estimation method that captures the graph-structure difference between the source point cloud and a reference point cloud copy generated by a single neighborhood strategy. However, the single neighborhood strategy can prevent the generation of a high quality reference copy and thus affects inlier estimation. In comparison, our method achieves stable and reliable reference copy generation with 1-NN for inlier estimation and can provide an effective self-supervised signal for model optimization.
## 3 Methodology
We first introduce notations utilized throughout this paper. Given two point clouds: source point cloud \(\mathcal{P}=\{\mathbf{p}_{i}\in\mathbb{R}^{3}\mid i=1,\ldots,N\}\) and reference point cloud \(\mathcal{Q}=\{\mathbf{q}_{j}\in\mathbb{R}^{3}\mid j=1,\ldots,M\}\), where each point is represented as a vector of \((x,y,z)\) coordinates. Point cloud registration task aims to estimate a rigid transformation \(\{\mathbf{R},\mathbf{t}\}\) which accurately aligns \(\mathcal{P}\) and \(\mathcal{Q}\), with a 3D rotation \(\mathbf{R}\in SO(3)\) and a 3D translation \(\mathbf{t}\in\mathbb{R}^{3}\). The transformation can be solved by:
\[\min_{\mathbf{R},\mathbf{t}}\sum_{(\mathbf{p}_{i}^{*},\mathbf{q}_{j}^{*})\in \mathcal{H}^{*}}\|\mathbf{R}\cdot\mathbf{p}_{i}^{*}+\mathbf{t}-\mathbf{q}_{j} ^{*}\|_{2}^{2}, \tag{1}\]
where \(\mathcal{H}^{*}\) is the set of ground-truth correspondences between \(\mathcal{P}\) and \(\mathcal{Q}\). We propose an effective inlier estimation method for unsupervised point cloud registration by capturing geometric structure consistency between the source point cloud and its corresponding reference point cloud copy. To obtain a high quality reference copy, we design a dual neighborhood fusion matching module to facilitate matching map construction with the 1-NN strategy. Benefiting from the high quality reference copy, we design the geometric neighborhood inlier estimation module to construct effective transformation-invariant geometric structure representations and capture their consistency to score the inlier confidence for each estimated correspondence between the source point cloud and its reference copy. The pipeline is illustrated in Figure 2.
### Dual Neighborhood Fusion Matching Module
The matching score is typically calculated by the single point feature distance [39] or single neighborhood integration [31] when constructing the matching map. These methods may suffer from false matching due to similar contextual features, as illustrated in Figure 1. In this paper, we propose the 1-NN strategy to generate a 1-NN point cloud, which can alleviate this dilemma and facilitate matching map construction for obtaining a high quality reference point cloud copy.
We first define the 1-NN point clouds \(\widehat{\mathcal{P}}\) and \(\widehat{\mathcal{Q}}\), which collect the closest point for each point in \(\mathcal{P}\) and \(\mathcal{Q}\), respectively:
\[\begin{split}&\widehat{\mathcal{P}}=\{\widehat{\mathbf{p}}_{i} \mid\widehat{\mathbf{p}}_{i}=\arg\min_{\mathbf{p}_{m}}\|\mathbf{p}_{m}- \mathbf{p}_{i}\|_{2}^{2},\,i\neq m\},\\ &\widehat{\mathcal{Q}}=\{\widehat{\mathbf{q}}_{j}\mid\widehat{ \mathbf{q}}_{j}=\arg\min_{\mathbf{q}_{m}}\|\mathbf{q}_{m}-\mathbf{q}_{j}\|_{2 }^{2},\,j\neq m\}.\end{split} \tag{2}\]
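A small sketch of Equation 2 is shown below: querying two neighbours from a k-d tree and discarding the first (the point itself) yields the 1-NN point cloud; the function name is ours.

```python
import numpy as np
from scipy.spatial import cKDTree

def one_nn_point_cloud(points: np.ndarray) -> np.ndarray:
    """For every point, return its closest *other* point (Equation 2)."""
    _, idx = cKDTree(points).query(points, k=2)   # idx[:, 0] is the point itself
    return points[idx[:, 1]]

P = np.random.rand(2048, 3).astype(np.float32)
P_hat = one_nn_point_cloud(P)                      # same shape as P
```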
We construct a local patch \(\mathcal{N}_{\circ}\) with \(K\)-nearest neighborhood for each point and extract local features with Dynamic Graph CNN [34] for input point cloud and 1-NN point cloud. Associated learned features are denoted as \(\mathcal{F}_{\mathcal{P}}\), \(\mathcal{F}_{\widehat{\mathcal{P}}}\), \(\mathcal{F}_{\mathcal{Q}}\) and \(\mathcal{F}_{\widehat{\mathcal{Q}}}\), respectively. Then, matching scores are calculated in pairs with the normalized negative feature distance for input point cloud and 1-NN point cloud:
\[\begin{split}&\mathbf{M}_{i,j}=\text{softmax}\left(-\mathbf{D}_{i,1},- \mathbf{D}_{i,2},\ldots,-\mathbf{D}_{i,M}\right)_{j},\\ &\widehat{\mathbf{M}}_{i,j}=\text{softmax}\left(-\widehat{ \mathbf{D}}_{i,1},-\widehat{\mathbf{D}}_{i,2},\ldots,-\widehat{\mathbf{D}}_{ i,M}\right)_{j},\end{split} \tag{3}\]
where \(\mathbf{D}_{i,j}=\|\mathcal{F}_{\mathbf{p}_{i}}-\mathcal{F}_{\mathbf{q}_{j}}\|_ {2}\) and \(\widehat{\mathbf{D}}_{i,j}=\|\mathcal{F}_{\widehat{\mathbf{p}}_{i}}-\mathcal{F} _{\widehat{\mathbf{q}}_{j}}\|_{2}\) denote the Euclidean distance between learned local features. Based on the matching scores, we construct dual neighborhood matching map by fusing and averaging matching scores of each \(\mathcal{N}_{\circ}\):
\[\mathbf{G}_{i,j}=\frac{1}{K}\sum_{\mathbf{p}_{i^{\prime}}\in\mathcal{N}_{ \mathbf{p}_{i}}}\sum_{\mathbf{q}_{j^{\prime}}\in\mathcal{N}_{\mathbf{q}_{j} }}\mathbf{M}_{i^{\prime},j^{\prime}}+\frac{1}{K}\sum_{\widehat{\mathbf{p}}_{i ^{\prime}}\in\mathcal{N}_{\widehat{\mathbf{p}}_{i}}}\sum_{\widehat{\mathbf{q} }_{j^{\prime}}\in\mathcal{N}_{\widehat{\mathbf{q}}_{j}}}\widehat{\mathbf{M}}_{ i^{\prime},j^{\prime}}. \tag{4}\]
A higher matching score indicates a consistently larger matching probability, thus highlighting the superior matching quality offered by 1-NN compared to the single neighborhood. The explanation is as follows: at least one of \(\mathbf{M}_{i,j}\) and \(\widehat{\mathbf{M}}_{i,j}\) corresponds to a correct matching, thus leading to a high dual neighborhood matching score in \(\mathbf{G}_{i,j}\). Even in cases where \(\mathbf{M}_{i,j}\) and \(\widehat{\mathbf{M}}_{i,j}\) are both false matchings, the dual neighborhood matching score reflected in \(\mathbf{G}_{i,j}\) is typically quite small and the real impact is limited. Finally, we formulate the final matching map as:
\[\begin{split}\mathbf{F}_{i,j}=&\text{softmax}\left(- \mathbf{D}_{i,1}^{\prime},-\mathbf{D}_{i,2}^{\prime},\ldots,-\mathbf{D}_{i,M}^{ \prime}\right)_{j},\\ \mathbf{D}_{i,j}^{\prime}=&\exp\left(\alpha-\mathbf{G }_{i,j}\right)*\left(\mathbf{D}_{i,j}+\widehat{\mathbf{D}}_{i,j}\right), \end{split} \tag{5}\]
where \(\mathbf{D}_{i,j}^{\prime}\) is negatively related to the matching score \(\mathbf{G}_{i,j}\). We utilize the exponential strategy to control the changing
ratio, along with a hyper-parameter \(\alpha\) to control the influence of the dual neighborhood matching [31]. Based on the high quality matching map \(\mathbf{F}\in\mathbb{R}^{N\times M}\), we generate a reference point cloud copy \(\widetilde{\mathcal{Q}}\in\mathbb{R}^{N\times 3}\) including matching for each point \(\mathbf{p}_{i}\) in source point cloud \(\mathcal{P}\):
\[\widetilde{\mathbf{q}}_{i}=\sum_{j=1}^{M}\mathbf{F}_{i,j}\cdot\mathbf{q}_{j}. \tag{6}\]
Note that Equation 6 means that the reference point cloud copy \(\widetilde{\mathcal{Q}}\) contains the predicted correspondence of each point \(\mathbf{p}_{i}\) in the source point cloud. Benefiting from the high quality matching map \(\mathbf{F}\in\mathbb{R}^{N\times M}\), the reference point cloud copy is also of high quality and provides an excellent precondition for inlier estimation. In particular, if \(\mathbf{p}_{i}\) is an inlier, the estimated correspondence \(\widetilde{\mathbf{q}}_{i}\) will appear in the correct position in the reference copy, making it convenient to estimate inliers by capturing geometric structure neighborhood consistency. Conversely, if \(\mathbf{p}_{i}\) is an outlier, the estimated correspondence tends to have an unstable position.
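For small point clouds, the matching pipeline of Equations 3-6 can be summarized with the dense NumPy sketch below; the variable names and the precomputed K-nearest-neighbour index arrays are our own conventions, not the released implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def soft_reference_copy(Fp, Fq, Fp_hat, Fq_hat, nbr_p, nbr_q, Q, alpha=1.0):
    """Fp, Fq: (N, C), (M, C) features of the input clouds; Fp_hat, Fq_hat:
    features of the 1-NN clouds; nbr_p, nbr_q: (N, K), (M, K) neighbour
    indices; Q: (M, 3) reference cloud. Returns the soft copy (N, 3)."""
    D = np.linalg.norm(Fp[:, None] - Fq[None], axis=-1)            # Equation 3
    D_hat = np.linalg.norm(Fp_hat[:, None] - Fq_hat[None], axis=-1)
    M_map, M_hat = softmax(-D), softmax(-D_hat)

    def nbr_avg(M):                                                # one term of Equation 4
        return M[nbr_p][:, :, nbr_q].sum(axis=(1, 3)) / nbr_p.shape[1]

    G = nbr_avg(M_map) + nbr_avg(M_hat)                            # dual neighbourhood score
    F = softmax(-np.exp(alpha - G) * (D + D_hat))                  # Equation 5
    return F @ Q                                                   # Equation 6
```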
### Geometric Neighborhood Inlier Estimation Module
In this section, we explore the geometric structure neighborhood consistency between the source point cloud and its reference point cloud copy for reliable inlier estimation. As shown in Figure 3, the neighborhood graph formed by an inlier and its neighborhood points (\(\mathcal{N}_{\mathbf{p}_{i}}\) and \(\mathcal{N}_{\widetilde{\mathbf{q}}_{i}}\)) should be consistent between the source point cloud and its reference copy. Conversely, the neighborhood graph formed by an outlier has a significantly different geometric structure between \(\mathcal{N}_{\mathbf{p}_{i}}\) and \(\mathcal{N}_{\widetilde{\mathbf{q}}_{i}}\), since the estimated correspondence \(\widetilde{\mathbf{q}}_{i}\) tends to have an unstable position, as illustrated in Section 3.1, resulting in a chaotic neighborhood. Based on the above observation, we propose a geometric neighborhood inlier estimation module to construct effective transformation-invariant geometric structure representations, and adaptively capture the geometric structure consistency between \(\mathcal{N}_{\mathbf{p}_{i}}\) and \(\mathcal{N}_{\widetilde{\mathbf{q}}_{i}}\) to score the inlier confidence for each estimated correspondence.
We first construct a learnable neighborhood graph by transformation-invariant geometric structure representations of \(\mathcal{N}_{\mathbf{p}_{i}}\) and \(\mathcal{N}_{\widetilde{\mathbf{q}}_{i}}\), which consists of edge representation and angle representation:
\[\mathbf{e}_{i,k}^{\mathbf{p}}=\mathbf{p}_{i}-\mathbf{p}_{k},\ \mathbf{a}_{r,s}^{ \mathbf{p}}=\angle\left(\mathbf{e}_{i,r}^{\mathbf{p}},\ \mathbf{e}_{i,s}^{ \mathbf{p}}\right),\] \[\mathbf{e}_{i,k}^{\overline{\mathbf{q}}}=\widetilde{\mathbf{q}} _{i}-\widetilde{\mathbf{q}}_{k},\ \mathbf{a}_{r,s}^{\overline{\mathbf{q}}}=\angle\left(\mathbf{e}_{i,r}^{ \overline{\mathbf{q}}},\ \mathbf{e}_{i,s}^{\overline{\mathbf{q}}}\right), \tag{7}\] \[i\neq k,\ k=1,\ldots,K,r,s\in\{1,\ldots,K\}\]
where \(\mathbf{p}_{k}\) and \(\widetilde{\mathbf{q}}_{k}\) are the points in \(\mathcal{N}_{\mathbf{p}_{i}}\) and \(\mathcal{N}_{\widetilde{\mathbf{q}}_{i}}\), respectively. The numerically robust operator \(\angle\left(\cdot_{1},\cdot_{2}\right)\) is computed as:
\[\angle\left(\cdot_{1},\cdot_{2}\right)=\text{atan2}\left(\left\|\cdot_{1} \times_{2}\right\|,\left(\cdot_{1}\right)\cdot\left(\cdot_{2}\right)\right), \tag{8}\]
which provides results in the range \([0,\pi)\). Moreover, these representations express sensitive and discriminative geometric structure in the point cloud, and provide adequate geometric cues for the subsequent pipeline.
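A sketch of Equations 7-8 for a single point and its K neighbours is given below; the function name and toy data are ours.

```python
import numpy as np

def edge_angle_features(center: np.ndarray, nbrs: np.ndarray):
    """Edges e_{i,k} = p_i - p_k and all pairwise included angles a_{r,s}
    for one point `center` (3,) and its K neighbours `nbrs` (K, 3)."""
    edges = center[None, :] - nbrs                               # (K, 3), Equation 7
    cross = np.cross(edges[:, None, :], edges[None, :, :])       # (K, K, 3)
    dot = np.einsum("kd,ld->kl", edges, edges)                   # (K, K)
    angles = np.arctan2(np.linalg.norm(cross, axis=-1), dot)     # Equation 8
    return edges, angles

# The same features computed on the source neighbourhood and on the
# reference-copy neighbourhood are what Equation 9 compares.
p_i, neighbourhood = np.random.rand(3), np.random.rand(8, 3)
e, a = edge_angle_features(p_i, neighbourhood)
```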
In order to better capture the neighborhood relevance and promote contextual message propagation, we utilize Multi
Figure 2: The pipeline of the proposed method. We first extract local features of input point cloud and 1-NN point cloud with DGCNN. The dual neighborhood fusion matching module (DNFMM) aims to facilitate the construction of matching map \(\mathbf{F}\) and improve matching confidence for generating high quality reference point cloud copy \(\widetilde{\mathcal{Q}}\). Based on the observation that the neighborhood graph formed by inlier and its neighborhood should have geometric structure consistency between \(\mathcal{P}\) and \(\widetilde{\mathcal{Q}}\), the geometric neighborhood inlier estimation module (GNIEM) constructs transformation-invariant geometric structure representations (edge and included angle) and captures their consistency to score the inlier confidence for each estimated correspondences between \(\mathcal{P}\) and \(\widetilde{\mathcal{Q}}\). Finally, the transformation is estimated with SVD method via the reliable inliers correspondence.
layer Perceptron (MLP) \(f_{\theta}\) with parameters \(\theta\) to fuse representations and characterize the consistency between the neighborhoods by the subtraction of the fused geometric representations:
\[\mathbf{d}_{i,k}=f_{\theta}\left(\text{concat}\left(\mathbf{e}_{i,k}^{\mathbf{p} },\mathbf{a}_{r,s}^{\mathbf{p}}\right)\right)-f_{\theta}\left(\text{concat} \left(\mathbf{e}_{i,k}^{\widetilde{\mathbf{q}}},\mathbf{a}_{r,s}^{\widetilde{ \mathbf{q}}}\right)\right). \tag{9}\]
Next, we further adaptively learn the attention coefficients \(\delta_{i,k}\) of each geometric structure consistency:
\[\delta_{i,k}=\text{softmax}\left(f_{\mu}\left(\mathbf{d}_{i,1}\right),f_{\mu} \left(\mathbf{d}_{i,2}\right),\ldots,f_{\mu}\left(\mathbf{d}_{i,K}\right) \right)_{k}, \tag{10}\]
where \(f_{\mu}\) is another MLP with parameters \(\mu\). Then, we calculate the inlier confidence \(\mathbf{w}_{i}\) of the correspondence (\(\mathbf{p}_{i},\widetilde{\mathbf{q}}_{i}\)) by aggregating the weighted geometric structure consistency:
\[\mathbf{w}_{i}=1-\text{Tanh}\left(\left|l\left(\sum_{k=1}^{K}\delta_{i,k}\;* \;\mathbf{d}_{i,k}\right)\right|\right), \tag{11}\]
where \(l\) is a linear function. Finally, we select the correspondences with the \(N_{c}\) largest weights as reliable inlier correspondences between the source and its reference copy:
\[\mathcal{C}_{h}=\left\{\left(\mathbf{p}_{h},\widetilde{\mathbf{q}}_{h}\right) \mid h\in\text{topk}\left(\mathbf{w}_{i}\right),i=1,\ldots,N_{c}\right\}. \tag{12}\]
We can solve the transformation \(\left\{\mathbf{R}_{est},\mathbf{t}_{est}\right\}\) in closed form using weighted SVD based on the reliable inlier correspondences, which has been shown to be differentiable in [25]:
\[\mathbf{R}_{est},\mathbf{t}_{est}=\min_{\mathbf{R},\mathbf{t}}\sum_{\left( \mathbf{p}_{i},\widetilde{\mathbf{q}}_{i}\right)\in\mathcal{C}_{h}}\mathbf{ w}_{i}\|\mathbf{R}\cdot\mathbf{p}_{i}+\mathbf{t}-\widetilde{\mathbf{q}}_{i}\|_{2}^{2}. \tag{13}\]
We utilize an iterative scheme to update the source point cloud \(\mathcal{P}\) with \(\left\{\mathbf{R}_{est},\mathbf{t}_{est}\right\}\).
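Equation 13 has the standard weighted Kabsch/SVD closed form; a NumPy sketch is given below, with toy data standing in for the selected correspondences and confidences of Equation 12.

```python
import numpy as np

def weighted_svd_transform(P, Q, w):
    """Weighted closed-form solution of Equation 13 for correspondences
    (P[i], Q[i]) with inlier confidences w[i]."""
    w = w / w.sum()
    cp, cq = (w[:, None] * P).sum(0), (w[:, None] * Q).sum(0)
    H = (P - cp).T @ (w[:, None] * (Q - cq))
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))           # avoid reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cq - R @ cp

# Toy usage: keep the N_c most confident correspondences (Equation 12), then solve.
P_src, w_all = np.random.rand(512, 3), np.random.rand(512)
Q_copy = P_src + 0.05                                 # placeholder reference copy
keep = np.argsort(-w_all)[:256]
R_est, t_est = weighted_svd_transform(P_src[keep], Q_copy[keep], w_all[keep])
```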
Besides, this inlier estimation method provides a useful byproduct for unsupervised learning, which can serve as a reliable self-supervised signal. More details are given in the following section.
### Optimization
We construct a loss function for model optimization, which consists of four parts. We then train the proposed model in an unsupervised manner instead of using the ground-truth transformations.
**Global Consistency Loss.** We investigate the global consistency loss between the final transformed source point cloud \(\mathcal{P}^{\prime}\) and the reference point cloud \(\mathcal{Q}\). We utilize the Huber function to assemble the global consistency loss, which is defined as follows:
\[\mathcal{L}_{gc}=\sum_{\mathbf{p}^{\prime}\in\mathcal{P}^{\prime}}H_{\beta} \left(\min_{\mathbf{q}\in\mathcal{Q}}\|\mathbf{p}^{\prime}-\mathbf{q}\|_{2}^{2 }\right)+\sum_{\mathbf{q}\in\mathcal{Q}}H_{\beta}\left(\min_{\mathbf{p}^{ \prime}\in\mathcal{P}^{\prime}}\|\mathbf{q}-\mathbf{p}^{\prime}\|_{2}^{2} \right). \tag{14}\]
However, relying solely on the global consistency loss is detrimental to the accuracy and reliability of our model, since the model may still converge to a sub-optimal solution due to the existing outliers, and much of the potential information in the point cloud is wasted. Hence, it is critical to mine the potential self-supervised signals in the point cloud and construct loss functions based on other existing elements.
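A PyTorch sketch of Equation 14 is shown below, using `torch.nn.HuberLoss` as a stand-in for \(H_{\beta}\) (the exact form of the Huber function is not specified here, so this is an approximation); the function name is ours.

```python
import torch

def global_consistency_loss(P_t: torch.Tensor, Q: torch.Tensor, beta: float = 1.0):
    """Two-sided, Huber-wrapped Chamfer distance (Equation 14).
    P_t: (N, 3) transformed source; Q: (M, 3) reference point cloud."""
    d2 = torch.cdist(P_t, Q) ** 2                     # (N, M) squared distances
    p_to_q = d2.min(dim=1).values                     # nearest Q point for each source point
    q_to_p = d2.min(dim=0).values                     # nearest source point for each Q point
    huber = torch.nn.HuberLoss(reduction="sum", delta=beta)
    return (huber(p_to_q, torch.zeros_like(p_to_q)) +
            huber(q_to_p, torch.zeros_like(q_to_p)))

loss = global_consistency_loss(torch.rand(1024, 3), torch.rand(1024, 3))
```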
**Dual Neighborhood Consistency Loss.** Based on the reliable inlier correspondences in Equation 12, we denote the inlier sets of the source and its reference point cloud copy as \(\mathbf{X}\in\mathbb{R}^{N_{c}\times 3}\) and \(\mathbf{Y}\in\mathbb{R}^{N_{c}\times 3}\), respectively. We utilize the neighborhoods of the inliers to construct a consistency objective, which aims to minimize the registration error between each pair of neighborhoods \(\mathcal{N}_{\mathbf{x}_{i}}\) and \(\mathcal{N}_{\mathbf{y}_{i}}\):
\[\mathcal{L}_{in}=\sum_{\mathbf{x}_{i}\in\mathbf{X}_{\mathbf{y}_{i}}\in \mathbf{Y}}\sum_{\mathbf{p}_{j}\in\mathcal{N}_{\mathbf{x}_{i}}\cdot\widetilde{ \mathbf{q}}_{j}\in\mathcal{N}_{\mathbf{y}_{i}}}\left\|\mathbf{R}_{est}\mathbf{ p}_{j}+\mathbf{t}_{est}-\widetilde{\mathbf{q}}_{j}\right\|_{2}, \tag{15}\]
where \(\mathcal{N}_{\mathbf{x}_{i}}\) is transformed to align with \(\mathcal{N}_{\mathbf{y}_{i}}\).
**Geometric Structure Consistency Loss.** The geometric signal buried in the point cloud is readily ignored, which hinders unsupervised inlier estimation. To address this issue, we design a geometric neighborhood loss with the reliable geometric self-supervised signal proposed in Section 3.2:
\[\mathcal{L}_{gs}= \sum_{\mathbf{x}_{i}\in\mathbf{X},\mathbf{y}_{i}\in\mathbf{Y}} \sum_{\mathbf{p}_{j}\in\mathcal{N}_{\mathbf{x}_{i}}\cdot\widetilde{\mathbf{q} }_{j}\in\mathcal{N}_{\mathbf{y}_{i}}}\left\|\mathbf{e}_{i,j}^{\mathbf{p}}- \mathbf{e}_{i,j}^{\widetilde{\mathbf{q}}}\right\|_{2}+ \tag{16}\] \[\sum_{\mathbf{x}_{i}\in\mathbf{X}_{\mathbf{y}_{i}}\in\mathbf{Y}} \sum_{\mathbf{p}_{j}\in\mathcal{N}_{\mathbf{x}_{i}}\cdot\widetilde{\mathbf{q} }_{j}\in\mathcal{N}_{\mathbf{y}_{i}}}\left\|\mathbf{a}_{r,s}^{\mathbf{p}}- \mathbf{a}_{r,s}^{\widetilde{\mathbf{q}}}\right\|_{2},\]
where \(\mathbf{a}_{r,s}^{\mathbf{p}}\) and \(\mathbf{a}_{r,s}^{\widetilde{\mathbf{q}}}\) are calculated from \(\mathbf{e}_{i,j}^{\mathbf{p}}\) and \(\mathbf{e}_{i,j}^{\widetilde{\mathbf{q}}}\) following Equation 7.
**Spatial Consistency Loss.** We further seek to eliminate the spatial difference between the estimated correspondence and the real correspondence for each selected inlier
Figure 3: A toy example of inlier and outlier. The neighborhood graph formed by inlier and its neighborhood points should have geometric structure consistency (edge and included angle) between source point cloud and its reference point cloud copy. The outlier is just the opposite.
\(\mathbf{x}_{i}\), and utilize a spatial consistency loss with cross-entropy to sharpen the matching map:
\[\mathcal{L}_{sc}=-\frac{1}{|\mathbf{X}|}\sum_{\mathbf{x}_{i}\in\mathbf{X}}\sum_{ j=1}^{M}[\![j=\arg\max_{j^{\prime}}\mathbf{G}_{i,j^{\prime}}]\!]\log\mathbf{G}_{i,j}, \tag{17}\]
where \([\![\cdot]\!]\) is the Iverson bracket. The spatial consistency loss encourages higher matching probabilities, and thus the estimated correspondence point in the reference copy tends to have a stable position.
Since our work utilizes an iterative scheme, we compute the loss at each of the \(N_{l}\) iterations and form the weighted sum loss:
\[\mathcal{L}=\sum_{l=1}^{N_{l}}\left(\mathcal{L}_{gc}^{l}+\gamma\mathcal{L}_{ in}^{l}+\rho\mathcal{L}_{gs}^{l}+\lambda\mathcal{L}_{sc}^{l}\right), \tag{18}\]
where \(\gamma\), \(\rho\) and \(\lambda\) are trade-off parameters that control the corresponding loss terms.
## 4 Experiments
### Experimental Setup
We evaluate the proposed method on the synthetic datasets ModelNet40 [35] and Augmented ICL-NUIM [7], and the real-world dataset 7Scenes [32]. ModelNet40 contains 12,308 CAD models of 40 different object categories. Augmented ICL-NUIM consists of 1,478 synthetic models generated by applying data augmentation to the original 739 scan pairs. 7Scenes is a widely used indoor dataset with 7 scenes including Chess, Fires, Heads, Office, Pumpkin, RedKitchen and Stairs.
We compare our method to traditional methods and recent learning-based methods. The traditional methods include ICP [4], FGR [16] and FPFH + RANSAC [13]. The recent learning-based methods include IDAM [21], FMR [16], RPMNet [39], CEMNet [17] and RIE [31]. For consistency with previous work, we measure Mean Isotropic Error (MIE) and Mean Absolute Error (MAE). All metrics should be zero if the reference point cloud aligns with the source point cloud perfectly.
### ModelNet40 Dataset and Evaluation
We first evaluate registration on ModelNet40, where each point cloud contains 2,048 points randomly sampled from mesh faces and normalized into a unit sphere. We randomly generate three Euler angle rotations within \([0^{\circ},45^{\circ}]\) and translations within \([-0.5,0.5]\) on each axis as the rigid transformation during training. Note that, to simulate partial-to-partial registration, we crop the reference point cloud \(\mathcal{Q}\) and the source point cloud \(\mathcal{P}\) respectively, and retain 70% of the points. All experiments on ModelNet40 use the same settings.
**Unseen Objects.** Our models are trained and tested on datasets comprising samples belonging to the same categories, and both the training and test sets are obtained without any preprocessing or manipulation. We apply a random transformation to the reference point cloud \(\mathcal{Q}\) to generate the corresponding source point cloud \(\mathcal{P}\). Table 1 shows quantitative results of the various algorithms under the current experimental settings. The proposed method substantially outperforms all baselines on all metrics. We can observe that our method even outperforms the supervised IDAM, RPMNet and FMR by a large margin. Benefiting from the high quality reference point cloud copy and reliable inlier estimation, our method attains highly accurate registration and improves the registration accuracy by an order of magnitude. To show the effect of our proposed approach clearly, a qualitative comparison of the registration results can be found in Figure 4. Our method has minimal impact on the shape and achieves the best performance even on asymmetric shapes.
**Unseen Categories.** To verify the generalization ability across categories, we train the models on the first 20 categories and test on the remaining unseen categories. The results are summarized in Table 1. We can observe that the majority of baselines consistently exhibit lower performance on the unseen categories, especially the learning-based methods. In contrast, traditional algorithms are less susceptible to this issue due to the insensitivity of handcrafted methods to shape variance [36]. Our registration process remains highly precise, achieving the lowest error across all metrics, while also maintaining acceptable levels of fluctuation.
**Gaussian Noise.** In order to assess performance in the presence of noise, which is commonly encountered in real-world point clouds, we train our model on noise-free data and then evaluate all baselines using a test set featuring Gaussian noise. We randomly and independently generate noisy points to introduce noise into the source point cloud and reference point cloud by sampling from \(\mathcal{N}(0,0.5)\) and clipping to \([-1.0,1.0]\). This experiment is significantly more challenging, as constructing the matching map and reference copy becomes much more difficult. As shown in Table 1, our method outperforms the other baselines. In addition, a visualization of registration results can be found in Figure 5. We can observe that the Gaussian noise does not affect the registration of the main body of the point cloud. The experimental results are robust to the noise and indirectly confirm the positive guiding effect of the 1-NN strategy on inlier estimation.
### Other Datasets and Evaluation
We further conduct comparison evaluation on other datasets: 7Scenes and Augmented ICL-NUIM. We sample the reference point clouds to 2,048 points and randomly sample three Euler angle rotations within \([0^{\circ},45^{\circ}]\) and translations within \([-0.5,0.5]\) on each axis as the rigid
transformation to obtain source point clouds, then downsample the point clouds to 1,536 points to generate the partial data. As demonstrated in Table 3, our method achieves substantially higher registration precision on all criteria on Augmented ICL-NUIM and 7Scenes, especially in terms of rotation error. Due to space limits, we present more visualization results and quantitative comparison results in the appendix. In summary, our method achieves the best performance and generalizes well to real-world datasets.
### Ablation Study and Analysis
In this section, we conduct extensive ablation studies for a better understanding of the various modules in our method on ModelNet40 and 7Scenes. Due to space limit, we present more ablation studies in appendix.
**Dual Neighborhood Fusion Matching Module.** We first conduct an ablation study on the proposed 1-NN point cloud in the dual neighborhood fusion matching module. We replace this component with the single neighborhood. As shown in the second and seventh rows of Table 2, applying only the single neighborhood brings no performance gain, since it confuses the matching map and thereby lowers the quality of the generated reference point cloud copy. Therefore, we fuse the 1-NN point cloud to enhance the neighborhood matching map, which promotes the generation of a correct reference point cloud copy.
\begin{table}
\begin{tabular}{l|c c c|c c c c|c c c c} \hline \multirow{2}{*}{Method} & \multicolumn{4}{c|}{Unseen Objects} & \multicolumn{4}{c|}{Unseen Categories} & \multicolumn{4}{c}{Gaussian Noise} \\ & MAE(**R**) & MAE(**t**) & MIE(**R**) & MIE(**t**) & MAE(**R**) & MAE(**t**) & MIE(**R**) & MIE(**t**) & MAE(**R**) & MIE(**t**) \\ \hline ICP [4] (\(\bigcirc\)) & 3.4339 & 0.0114 & 6.7706 & 0.0227 & 3.6099 & 0.0116 & 7.0556 & 0.0228 & 4.6441 & 0.0167 & 9.2194 & 0.0333 \\ FGR [16] (\(\bigcirc\)) & 0.5972 & 0.0021 & 1.1563 & 0.0041 & 0.4579 & 0.0016 & 0.8442 & 0.0032 & 1.0676 & 0.0036 & 2.0038 & 0.0072 \\ FPFH+RANSAC [13] (\(\bigcirc\)) & 0.7031 & 0.0025 & 1.2772 & 0.0050 & 0.4427 & 0.0021 & 0.9447 & 0.0043 & 1.4316 & 0.0061 & 2.5345 & 0.0120 \\ IIDAM [21] (**a**) & 0.4243 & 0.0020 & 0.8170 & 0.0040 & 0.4809 & 0.0028 & 0.9157 & 0.0055 & 2.3076 & 0.0124 & 4.5332 & 0.0246 \\ RPMNet [39] (**\(\blacktriangle\)**) & 0.0051 & **0.0000** & 0.0201 & **0.0000** & 0.0064 & 0.0001 & 0.0207 & 0.0001 & 0.0075 & **0.0000** & 0.0221 & 0.0001 \\ FMR [16] (**\(\blacktriangle\)**) & 3.6497 & 0.0101 & 7.2810 & 0.0200 & 3.8594 & 0.0114 & 7.6450 & 0.0225 & 18.0355 & 0.0536 & 35.7986 & 0.1063 \\ ECMNet[17] (\(\bigtriangle\)) & 0.1385 & 0.0001 & 0.2489 & 0.0002 & 0.0084 & 0.0002 & 0.1405 & 0.0003 & 10.7026 & 0.0393 & 21.1836 & 0.0781 \\ RIE [31] (\(\bigtriangle\)) & 0.0033 & **0.0000** & 0.0210 & **0.0000** & 0.0059 & **0.0000** & 0.0228 & 0.0001 & 0.0069 & 0.0001 & 0.0230 & 0.0001 \\ \hline Ours (\(\bigtriangle\)) & **0.0006** & **0.0000** & **0.0195** & **0.0000** & **0.0007** & **0.0000** & **0.0182** & **0.0000** & **0.0006** & **0.0000** & **0.0193** & **0.0000** \\ \hline \end{tabular}
\end{table}
Table 1: Evaluation results on ModelNet40. Bold indicates the best performance and underline indicates the second-best performance. (\(\bigcirc\)), (\(\blacktriangle\)) and (\(\bigtriangle\)) denote the traditional, supervised and unsupervised methods, respectively.
Figure 4: Qualitative comparison of the registration results on unseen objects data (blue: source point cloud, yellow: reference point cloud, green: transformed source point cloud).
Figure 5: Visualization of registration results on the noisy ModelNet40 (blue: source point cloud, yellow: reference point cloud, green: transformed source point cloud). The gray line is intended to separate the point cloud and prevent scattered points from impacting the clarity of visualization.
**Geometric Neighborhood Inlier Estimation Module.** We further evaluate the effect of the geometric neighborhood inlier estimation module. In this experiment, we do not construct transformation-invariant geometric structure representations for the neighborhood but rather use raw coordinates to estimate inliers. As shown in the first and seventh rows of Table 2, the coordinate-based variant performs significantly worse, because raw coordinates do not provide transformation-invariant representations, leading to high stochasticity in inlier estimation. Note that the geometric neighborhood inlier estimation module provides a self-supervised signal for our model, and \(\mathcal{L}_{gs}\) is designed based on this signal. Therefore, when the geometric neighborhood is removed, the loss function \(\mathcal{L}_{gs}\) does not participate in optimization.
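To illustrate what a transformation-invariant neighborhood representation looks like in contrast to raw coordinates, here is a minimal sketch based on sorted neighbor distances; it is an illustrative stand-in, not the module's actual feature design.

```python
import numpy as np

def invariant_neighborhood_descriptor(points, k=8):
    """Describe each point's k-NN neighborhood by its sorted neighbor distances,
    which are unchanged under any rigid (rotation + translation) transformation."""
    dist = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    nn_idx = np.argsort(dist, axis=1)[:, 1:k + 1]             # skip the point itself
    nn_dist = np.take_along_axis(dist, nn_idx, axis=1)
    return np.sort(nn_dist, axis=1)

# Sanity check: the descriptor is identical before and after a rigid transform.
rng = np.random.default_rng(0)
pts = rng.normal(size=(64, 3))
c, s = np.cos(0.7), np.sin(0.7)
R = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
assert np.allclose(invariant_neighborhood_descriptor(pts),
                   invariant_neighborhood_descriptor(pts @ R.T + 1.5))
```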
**Loss Function.** Comparing rows 3\(\sim\)6 in Table 2, we evaluate the performance of the model with different loss functions. Overall, removing any part of \(\mathcal{L}\) degrades the performance of the model. The error observed in the fourth row is extremely large, primarily due to the absence of any optimization objective related to the transformation in the loss function. This significantly reduces the registration performance of the model. In particular, this ablation study confirms our prediction that \(\mathcal{L}_{gs}\) and \(\mathcal{L}_{sc}\) provide a reliable and effective self-supervised signal and improve the matching map construction.
## 5 Conclusion
We propose an effective inlier estimation method for unsupervised point cloud registration, which aims to capture geometric structure consistency between the source point cloud and its corresponding reference point cloud copy. We design the 1-NN point cloud to facilitate matching map construction and obtain a high-quality reference copy. Based on the high-quality reference copy and the observation that the neighborhood graph formed by an inlier and its neighboring points should exhibit geometric structure consistency between the source and its reference copy, we design a geometric neighborhood inlier estimation module that scores the inlier confidence of each estimated correspondence and simultaneously provides an effective geometry-based self-supervised signal for model optimization. Finally, we conduct extensive experiments on ModelNet40, Augmented ICL-NUIM and 7Scenes, demonstrating that our unsupervised framework achieves outstanding performance and that the 1-NN strategy effectively guides inlier estimation. Moreover, the visualizations of complete predictions demonstrate that the results are faithful and plausible.
\begin{table}
\begin{tabular}{c c c c c|c c c|c c c c} \hline \hline \multirow{2}{*}{\begin{tabular}{c} DNFM \\ Module \\ \end{tabular} } & \multirow{2}{*}{\begin{tabular}{c} GNIE \\ Module \\ \end{tabular} } & \multirow{2}{*}{\begin{tabular}{c} \(\mathcal{L}_{gc}\) \\ \end{tabular} } & \multirow{2}{*}{\begin{tabular}{c} \(\mathcal{L}_{in}\) \\ \end{tabular} } & \multirow{2}{*}{
\begin{tabular}{c} \(\mathcal{L}_{gs}\) \\ \end{tabular} } & \multicolumn{4}{c|}{ModelNet40} & \multicolumn{4}{c}{7Scenes} \\ & & & & MAE(**R**) & MAE(**t**) & MIE(**R**) & MIE(**R**) & MAE(**R**) & MAE(**t**) & MIE(**R**) & MIE(**R**) \\ \hline \hline & & & & & & & 0.0919 & 0.0013 & 0.1733 & 0.0026 & 3.4771 & 0.0195 & 7.0834 & 0.0368 \\ & & & & & & 0.0106 & 0.0001 & 0.0271 & 0.0003 & 0.0051 & **0.0000** & 0.0240 & **0.0001** \\ & & & & & & 0.0020 & **0.0000** & 0.0210 & **0.0000** & 0.0189 & 0.0002 & 0.0282 & 0.0003 \\ & & & & & & 0.1774 & 0.0018 & 0.2799 & 0.0037 & 0.8186 & 0.0194 & 1.5426 & 0.0375 \\ & & & & & & 0.0140 & 0.0002 & 0.0343 & 0.0004 & 0.0111 & 0.0001 & 0.0285 & 0.0002 \\ & & & & & & 0.0103 & 0.0001 & 0.0281 & 0.0002 & 0.0262 & 0.0002 & 0.0455 & 0.0004 \\ & & & & & & **0.0006** & **0.0000** & **0.0195** & **0.0000** & **0.0036** & **0.0000** & **0.0191** & **0.0001** \\ \hline \hline \end{tabular}
\end{table}
Table 2: Ablation study of different components on ModelNet40 and 7Sences. DNFM Module: Dual Neighborhood Fusion Matching Module; GNIE Module: Geometric Neighborhood Inlier Estimation Module; \(\mathcal{L}_{gc}\), \(\mathcal{L}_{in}\), \(\mathcal{L}_{gs}\) and \(\mathcal{L}_{sc}\): Each Loss Function in Equation 18.
\begin{table}
\begin{tabular}{l|c c c c} \hline \hline Method & MAE(**R**) & MAE(**t**) & MIE(**R**) & MIE(**t**) \\ \hline \multicolumn{5}{c}{Augmented ICL-NUIM} \\ \hline ICP [4] (\(\bigcirc\)) & 2.4022 & 0.0699 & 4.4832 & 0.1410 \\ FGR [16] (\(\bigcirc\)) & 2.2477 & 0.0808 & 4.1850 & 0.1573 \\ FPFH+RANSAC [13] (\(\bigcirc\)) & 1.2349 & 0.0429 & 2.3167 & 0.0839 \\ IADM [21] (\(\blacktriangle\)) & 4.4153 & 0.1385 & 8.6178 & 0.2756 \\ RPMNet [39] (\(\blacktriangle\)) & 0.3267 & 0.0125 & 0.6277 & 0.0246 \\ FMR [16] (\(\blacktriangle\)) & 1.1085 & 0.0398 & 2.1323 & 0.0786 \\ CEMNet[17] (\(\triangle\)) & 0.2374 & 0.0005 & 0.3987 & 0.0010 \\ RIE [31] (\(\triangle\)) & 0.0492 & 0.0023 & 0.0897 & 0.0049 \\ \hline Ours (\(\triangle\)) & **0.0005** & **0.0002** & **0.0210** & **0.0004** \\ \hline \multicolumn{5}{c}{7Scenes} \\ \hline ICP [4] (\(\bigcirc\)) & 6.0091 & 0.0130 & 13.0484 & 0.0260 \\ FGR [16] (\(\bigcirc\)) & 0.0919 & 0.0004 & 0.1705 & 0.0008 \\ FPFH+RANSAC [13] (\(\bigcirc\)) & 1.2325 & 0.0062 & 2.1875 & 0.0124 \\ IIDAM [21] (\(\blacktriangle\)) & 5.6727 & 0.0303 & 11.5949 & 0.0629 \\ RPMNet [39] (\(\blacktriangle\)) & 0.3885 & 0.0021 & 0.7649 & 0.0042 \\ FMR [16] (\(\blacktriangle\)) & 2.5438 & 0.0072 & 4.9089 & 0.0150 \\ CEMNet [17] (\(\triangle\)) & 0.0559 & 0.0001 & 0.0772 & 0.0003 \\ RIE [31] (\(\triangle\)) & 0.0121 & 0.0001 & 0.0299 & **0.0001** \\ \hline Ours (\(\triangle\)) & **0.0036** & **0.0000** & **0.0191** & **0.0001** \\ \hline \hline \end{tabular}
\end{table}
Table 3: Evaluation results on Augmented ICL-NUIM and 7Scenes. Bold indicates the best performance and underline indicates the second-best performance. (\(\bigcirc\)), (\(\blacktriangle\)) and (\(\triangle\)) denote the traditional, supervised and unsupervised methods, respectively. |
2306.12999 | Strong-field photoionization by circularly polarized light | We demonstrate that strong-field ionization of atoms driven by circularly
polarized light becomes an adiabatic process when described in the frame
rotating with the laser field. As a direct consequence, a conservation law
emerges: in the rotating frame the energy of the tunneling electron is
conserved for rotationally invariant potentials. This conservation law, arising
from a classical picture, is retrieved through a proper classical-quantum
correspondence when considering the full quantum system, beyond the Strong
Field Approximation. | Jonathan Dubois, Camille Lévêque, Jérémie Caillat, Richard Taïeb, Ulf Saalmann, Jan-Michael Rost | 2023-06-22T15:56:49Z | http://arxiv.org/abs/2306.12999v1 | # Strong-field photoionization by circularly polarized light
###### Abstract
We demonstrate that strong-field ionization of atoms driven by circularly polarized light becomes an adiabatic process when described in the frame rotating with the laser field. As a direct consequence, a conservation law emerges: in the rotating frame the energy of the tunneling electron is conserved for rotationally invariant potentials. This conservation law, arising from a classical picture, is retrieved through a proper classical-quantum correspondence when considering the full quantum system, beyond the Strong Field Approximation.
_Introduction_ Tunnel ionization is a fundamental quantum process which plays a key role in probing techniques for measuring the real-time motion of electrons inside atoms and molecules [1; 2; 3; 4; 5]. In order to probe experimentally this dynamics on an attosecond timescale, infrared and near-infrared laser pulses are most commonly employed. When the laser intensity is strong enough [6; 7; 8], a valence electron of an atom or molecule can tunnel ionize through the potential barrier induced by the non-perturbative field. The subsequent photoelectron dynamics, governed by field-driven rescattering [9], lead to highly non-linear phenomena such as above-threshold ionization [10] or high-harmonic generation [11] that have been subsequently developed to design "self-probing" spectroscopies with unprecedented time and space resolutions [12; 13]. Controlling the conditions under which tunnel ionization occurs, predicting the phase-space configuration of the photoelectron wavepacket after tunneling, and modelling the tunneling rates, are essential theoretical steps for interpreting and decoding the experimental measurements which allow one to retrieve attosecond-resolved information on electron dynamics.
The essence of tunnel ionization is efficiently captured by adiabatic, quasi-static theories such as ADK [6; 14; 15]. However, in the infrared regime, the characteristic tunneling time of the electron and the laser period are on the same order of magnitude, such that the energy of the electron under the barrier is significantly affected by nonadiabatic sub-cycle couplings [16; 17; 18; 19]. As a consequence, the energy of the electron during tunnel ionization is strongly influenced by the oscillations of the laser field, and the tunneling electron gains energy on the order of an electron-volt [20; 21; 22]. These energy changes of the electron upon tunneling are called nonadiabatic effects [19], and the energy of the electron right after tunneling is hard to assess. In atoms, this assessment is commonly made by neglecting the interaction between the electron and the ion [16; 17; 18] in the framework of the so-called strong-field approximation [23; 24] (SFA). The SFA not only provides analytic formulas for ionization phenomena, it also unravels the classical behavior of the electron subjected to strong laser fields. It is thus an essential ingredient for the design and interpretation of time-resolved experiments using intense laser fields.
While the essentials of strong-field physics can be addressed by considering linearly polarized pulses, circularly polarized (CP) fields offer valuable additional experimental ways of probing ultrafast dynamics in atoms and molecules, as demonstrated with the "Attoclock" setup [25; 26]. There, information on the target and on the tunneling process is directly encoded in the photoelectron momentum distributions [27] since driving tunnel-ionized electrons with CP fields dramatically reduces recollision. However, the semiclassical treatment of quantum strong-field tunneling has raised several debates on the time spent by the electron under the potential barrier [27; 28; 29; 30]. It is shown that the ion-electron interaction plays a crucial role and cannot be overlooked. While intense CP light presents a unique avenue for probing the chirality of molecules, a major obstacle remains: predicting and controlling the phase-space configuration of the electron while fully taking into account the ion-electron interaction and nonadiabatic effects during tunnel ionization.
In this letter, we show that tunnel ionization of electrons by CP fields obeys a classical conservation law. By choosing a proper reference frame and by fully taking into account the ion-electron interaction, the nonadiabatic effects occurring on short timescales in the laboratory frame (LF) are transformed into adiabatic ones. Making use of this conservation law unravels the intrinsic link between the angular momentum of the electron and its energy variations in CP fields. We also show that this conservation law is, in fact, present in the SFA, and notably supports the classical picture of the process. Atomic units are used unless stated otherwise.
_Laboratory frame_ We consider a single active electron in an atom interacting with a classical electric field in the dipole approximation within the length gauge. The Hamiltonian governing the dynamics is
\[\mathrm{H}(t)=\frac{\mathbf{p}^{2}}{2}+V(\mathbf{r})+\mathbf{r}\cdot\mathbf{F} (t), \tag{1}\]
where \(\mathbf{r}\) is the electron position, \(V(\mathbf{r})\) is the ion-electron energy potential and \(\mathbf{p}\)=\(-\mathrm{i}\boldsymbol{\nabla}\) is the momentum operator. The Hamiltonian (1) is here expressed in the LF, i.e. the frame in which the electrons are detected in experiments. We consider ion-electron energy potentials which are invariant under rotations, which typically corresponds to atomic potentials [31] (and to some extent to molecules modeled by a continuous potential such as benzene [32]
and buckminsterfullerene [33]). We used soft-Coulomb potentials [34] with parameters adapted to model He [35] and Ne [36] in the single active electron approximation (see Supplemental Material [37]). To address the specificities of CP driven dynamics, we performed simulations for He initially in its 1s (\(m\)=0) state (with ionization potential \(I_{p}\)=24.3 eV), or Ne initially in each of its 2p\({}_{\pm}\) (\(m\)=\(\pm\)1) states (with ionization potential \(I_{p}\)=21.6 eV). The time-dependent CP laser electric field is defined as F(\(t\))=\(-\partial_{t}\mathbf{A}(t)\) where \(\mathbf{A}(t)\) is the associated vector potential
\[\mathbf{A}(t)=\frac{F}{\omega}f(t)\Big{(}\mathbf{e}_{x}\cos(\omega t)+\mathbf{ e}_{y}\sin(\omega t)\Big{)}. \tag{2}\]
Considering \(\mathbf{e}_{z}\) (orthogonal to the polarization plane) as the quantization axis, an \(m\)=\(+1\) (\(-1\)) electron is therefore co-rotating (counter-rotating) with the laser field. The amplitude of the laser is \(F\), its intensity is \(I\)=\(2F^{2}\) and its frequency is \(\omega\). Here, we consider an IR wavelength of 800 nm and a 2-cycle sin\({}^{4}\) envelope given as \(f(t)=\cos(\pi t/\tau)^{4}\) for \(|t|\)\(\leq\)\(\tau/2\) and zero otherwise, with \(\tau\)=\(4\times\)2\(\pi/\omega\).
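For reference, a short numerical sketch of this pulse and of its rotating-frame counterpart is given below (atomic units). The field amplitude \(F\) is left as a free parameter, the 800 nm frequency is the standard conversion value, and the rotation sign is chosen so that the rotated vector potential points along \(\mathbf{e}_{x}\), consistent with the rotating-frame expressions used below.

```python
import numpy as np

def vector_potential(t, F, omega, tau):
    """A(t) of Eq. (2): circular polarization with a cos^4 envelope of total duration tau."""
    f = np.where(np.abs(t) <= tau / 2, np.cos(np.pi * t / tau) ** 4, 0.0)
    return (F / omega) * f[..., None] * np.stack([np.cos(omega * t), np.sin(omega * t)], axis=-1)

def electric_field(t, F, omega, tau, dt=1e-3):
    """F(t) = -dA/dt, evaluated with a central finite difference."""
    return -(vector_potential(t + dt, F, omega, tau) -
             vector_potential(t - dt, F, omega, tau)) / (2 * dt)

def to_rotating_frame(vec, t, omega):
    """Apply R_omega(t) to a planar vector; the fast carrier oscillation drops out."""
    c, s = np.cos(omega * t), np.sin(omega * t)
    return np.stack([c * vec[..., 0] + s * vec[..., 1],
                     -s * vec[..., 0] + c * vec[..., 1]], axis=-1)

omega = 0.05695                      # 800 nm photon energy in atomic units
tau = 4 * 2 * np.pi / omega          # total pulse duration used in the text
t = np.linspace(-tau / 2, tau / 2, 4001)
A = vector_potential(t, F=0.1, omega=omega, tau=tau)
A_rot = to_rotating_frame(A, t, omega)   # ~ (F/omega) f(t) e_x, purely along x
```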
The (\(\mathbf{r},\mathbf{p}\)) phase-space distributions, right after tunneling and fully taking into account the ion-electron interaction, are obtained using the backpropagation method [22; 29]. For this, the wavefunction \(\psi(\mathbf{r},t)\), starting from the initial state \(\psi_{0}(\mathbf{r})\), is propagated forward quantum mechanically using the time-dependent Schrodinger equation (TDSE) i\(\partial_{t}\psi\)=H(\(t\))\(\psi\), until one cycle after the end of the laser pulse, i.e. \(T\)=\(3\times\)2\(\pi/\omega\). From \(\psi(\mathbf{r},T)\), we can extract the classical phase-space distribution at \(T\), which is then propagated backward, using Hamilton's equations until we match the tunneling condition, corresponding to the vanishing of the longitudinal momentum [22; 29]. These equations are defined from the classical analog of H (Eq. (1)), hereafter referred to as \(\mathcal{H}(\mathbf{r},\mathbf{p},t)\) (all through the text, calligraphic letters stand for classical analogs).
Figure 1 displays the reconstructed energy distributions in the He case [frames (a), red] and in the Ne cases [frames (c), green (\(m\)=\(-1\)) and blue (\(m\)=\(+1\))]. Four main features emerge from this figure. First, we see in frame (c1) that the ionization probability is larger for an initially counter-rotating electron than for a co-rotating one, in agreement with [20; 26]. Second, frames (a1) and (c1) show that, starting from a delta-function at \(-I_{p}\), the energy distribution shifts towards higher energies (by around 1 eV) and gets a width of about 2 eV during the tunneling process. This is a clear signature of non-adiabatic effects and was also observed in [22]. Third, frame (c1) also shows that, for oriented initial states, the photoelectron peak shifts towards higher energies roughly twice more, and with a smaller width, for \(m\)=\(-1\) than for \(m\)=\(+1\). These three first features are supported by the quantitative data reported in the first two lines of table 1. Finally, in frames (a2) and (c2) we observe that the obtained comma-shaped electron distributions in the energy-position plane lie close to the potential barrier at the peak amplitude of the laser field (the classically forbidden regions are indicated by grey areas), regardless of its initial energy and its magnetic quantum number. These large energy variations during tunneling can be assessed either using SFA [20], and thus neglecting the ion-electron interaction, or going into the rotating frame where we can interpret them on subcycle timescales.
_Rotating frame perspective_ The rotating frame (RF) has already been used either in the classical context for
Figure 1: Configuration of the electron after tunnel ionization obtained by the backpropagation method [22; 29]. (a) and (c) are the distributions in the LF. (b) and (d) distributions in the RF. Left panels are the distributions of the energy after tunneling (normalized with respect to their ionization probability), right panels are the distributions in energy and in position along the field direction (\(\mathbf{F}(t)\) or \(\widetilde{\mathbf{F}}(t)\)) after tunneling. The grey regions indicate the classically forbidden region of the electron in the LF and in the RF at the peak amplitude of the laser field (i.e., at time \(t\)=0). The upper panels are for He for the initial state 1s (red) for \(I\)=8\(\times\)10\({}^{14}\) W cm\({}^{-2}\), and the lower panels are for Ne for the initial state 2p\({}_{-}\) (green) and 2p\({}_{+}\) (blue) for \(I\)=6\(\times\)10\({}^{14}\) W cm\({}^{-2}\). The dotted curves are complex trajectories of tunneling electron with initial (open circles) and final conditions (solid circles).
\begin{table}
\begin{tabular}{c|c c c} Atom & He 1s & Ne 2p\({}_{-}\) & Ne 2p\({}_{+}\) \\ \hline \(e\) (eV) & 1.642 & 2.362 & 0.958 \\ \(\Delta e\) (eV) & 2.064 & 1.839 & 2.070 \\ \hline \(\widetilde{e}\) (eV) & \(-0.032\) & 0.048 & \(-0.013\) \\ \(\Delta\widetilde{e}\) (eV) & 0.113 & 0.102 & 0.120 \\ \hline \(\theta\) (deg.) & 0.452 & 1.331 & 0.205 \\ \(\Delta\theta\) (deg.) & 6.728 & 7.561 & 6.632 \\ \end{tabular}
\end{table}
Table 1: Statistical quantities characterizing the energy and angular distributions of the electron after tunneling in the laboratory frame and in the rotating frame obtained by the semiclassical backpropagation method (see text). In the former, the distribution of \(E\)+\(I_{p}\) is fitted by a gaussian with mean value \(e\) and standard deviation \(\Delta e\). In the latter, the distribution of \(\widetilde{E}\)+\((I_{p}\)+\(m\omega)\) is fitted by a gaussian with mean value \(\widetilde{e}\) and standard deviation \(\Delta\widetilde{e}\). The distribution of \(\angle(-\mathbf{r},\mathbf{F}(t))\) is fitted by a gaussian with mean value \(\theta\) and standard deviation \(\Delta\theta\).
strong-field physics in [38; 39], or in the quantum context for high-harmonic generation by bicirculary polarized pulses [40] and for ionization by microwaves [41]. Switching to the frame which rotates with the laser field is formally achieved by means of the time-dependent matrix \(R_{\omega}(t)\) associated with the rotation of angle \(\omega t\) around \(\mathbf{e}_{z}\). In the RF associated with the CP field, the vector potential is \(\widetilde{\mathbf{A}}(t)\)=\(R_{\omega}(t)\mathbf{A}(t)\)=\((F/\omega)f(t)\mathbf{e}_{x}\) and the laser electric field is \(\widetilde{\mathbf{F}}(t)\)=\(R_{\omega}(t)\mathbf{F}(t)\)=\(-(F/\omega)(\dot{f}(t)\mathbf{e}_{x}+f(t)\omega\mathbf{e}_{y})\). The fast carrier oscillations at frequency \(\omega\) present in the LF disappear in the RF.
Wavefunctions \(\widetilde{\psi}\) in the RF frame are related to \(\psi\) in the LF frame by the unitary transformation
\[\widetilde{\psi}(\mathbf{r},t)=\exp\left(\mathrm{i}\,\omega t\,\mathrm{L}_{z} \right)\psi(\mathbf{r},t)\equiv\psi(R_{\omega}^{-1}(t)\mathbf{r},t), \tag{3}\]
where \(\mathrm{L}_{z}\)=\(\mathbf{r}\)\(\times\)\(\mathbf{p}\)\(\cdot\)\(\mathbf{e}_{z}\) is the angular momentum normal to the polarization plane and \(\widetilde{\psi}\) is the wavefunction of the electron in the RF. In the RF, the TDSE becomes \(\mathrm{i}\partial_{t}\widetilde{\psi}\)=\(\widetilde{\mathrm{H}}(t)\widetilde{\psi}\) with Hamiltonian
\[\widetilde{\mathrm{H}}(t)=\frac{\mathbf{p}^{2}}{2}+V(\mathbf{r})-\omega \mathrm{L}_{z}+\mathbf{r}\cdot\widetilde{\mathbf{F}}(t), \tag{4}\]
where the Coriolis term \(\omega\mathrm{L}_{z}\) results from the time-dependent rotation (3) from the LF to the RF. Due to the rotational invariance of Hamiltonian (4) in absence of laser field, it shares the same field-free eigenstates as Hamiltonian H (1) but shifted in energy due to the Coriolis term. Thus, the ionization potential in the RF is \(\widetilde{I}_{p}\)=\(I_{p}\)+\(m\omega\) with \(\widetilde{\mathrm{H}}(-\infty)\psi_{0}\)=\(-\widetilde{I}_{p}\psi_{0}\). The classical Hamiltonian in the RF is denoted \(\widetilde{\mathcal{H}}(\widetilde{\mathbf{r}},\widetilde{\mathbf{p}},t)\) where \(\widetilde{\mathbf{r}}\) and \(\widetilde{\mathbf{p}}\) are the phase-space variables of the electron in the RF. Hamiltonian \(\widetilde{\mathcal{H}}\) is obtained either by using the classical analogy of (4), or equivalently by performing the time-dependent canonical transformation \(\widetilde{\mathbf{r}}\)=\(R_{\omega}(t)\mathbf{r}\) and \(\widetilde{\mathbf{p}}\)=\(R_{\omega}(t)\mathbf{p}\) from \(\mathcal{H}\).
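For completeness, the Coriolis term can be traced back to the time derivative of the unitary transformation (3): differentiating \(\widetilde{\psi}=\exp\left(\mathrm{i}\,\omega t\,\mathrm{L}_{z}\right)\psi\) gives \[\mathrm{i}\partial_{t}\widetilde{\psi}=-\omega\mathrm{L}_{z}\widetilde{\psi}+\mathrm{e}^{\mathrm{i}\omega t\mathrm{L}_{z}}\,\mathrm{H}(t)\,\mathrm{e}^{-\mathrm{i}\omega t\mathrm{L}_{z}}\,\widetilde{\psi}\,,\] so that \(\widetilde{\mathrm{H}}(t)=\mathrm{e}^{\mathrm{i}\omega t\mathrm{L}_{z}}\mathrm{H}(t)\mathrm{e}^{-\mathrm{i}\omega t\mathrm{L}_{z}}-\omega\mathrm{L}_{z}\). The kinetic term and the rotationally invariant \(V(\mathbf{r})\) commute with \(\mathrm{L}_{z}\), while the dipole term is rotated into \(\mathbf{r}\cdot\widetilde{\mathbf{F}}(t)\), which reproduces Eq. (4).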
In the RF, the time-dependence of the electric field is reduced to its envelope \(f(t)\), which (i) varies on timescales longer than a laser cycle and (ii) can play the role of an _adiabatic parameter_. Since tunnel ionization occurs on timescales shorter than the laser cycle, the classical picture predicts that the energy of the electron is approximately conserved during tunneling. Hence, we expect in the RF after tunneling the energy
\[\widetilde{E}\approx-\big{(}I_{p}+m\omega\big{)}. \tag{5}\]
This is indeed confirmed by the results of the backpropagation method displayed in Fig. 2 where we show the distribution of energy gained by the electron _during_ tunnel ionization in the RF, i.e. \(\Delta\widetilde{E}\)=\(\widetilde{E}\)+\((I_{p}\)+\(m\omega)\). In table 1, we report for all initial states a shift of the photoelectron peak position in the RF (\(\widetilde{e}\)) of a few tens of meV, i.e. two orders of magnitude lower than in the LF (\(e\)). The peak width in the RF (\(\Delta\widetilde{e}\)) is around 100 meV, i.e. one order of magnitude lower than in the LF (\(\Delta e\)). We have checked that the conservation law given in Eq. 5, within the tunneling regime, is robust with respect to field intensities and frequencies. This confirms that tunnel ionization in the RF occurs adiabatically on the energy isosurface \(\widetilde{\mathcal{H}}(\widetilde{\mathbf{r}},\widetilde{\mathbf{p}},t)\)=\(\widetilde{E}\) in phase space, where \(\widetilde{E}\) results from the \(m\)- and \(\omega\)-dependent electron-ion coupling and laser interactions, see Eq. 5. Note that this is a clear confirmation from TDSE calculations of the extension to the adiabatic regime of the ADK assumption that tunnel ionization occurs on a constant energy surface.
In the RF we can define an effective potential [37] as
\[\widetilde{V}_{\mathrm{eff}}(\mathbf{r},t)=V(\mathbf{r})-\frac{\omega^{2}}{2} \left(\mathbf{e}_{z}\times\mathbf{r}\right)^{2}+\mathbf{r}\cdot\widetilde{ \mathbf{F}}(t). \tag{6}\]
The classically forbidden regions where tunnelling takes place, at the peak of the pulse envelop (grey areas in Figs. 1b and 1d), are bounded by \(\widetilde{V}_{\mathrm{eff}}(\mathbf{r},0)\). These regions dominate the tunneling dynamics. As observed in Fig. 1d, the RF potential barrier is thinner and the energy gap between the top of the barrier and \(\widetilde{I}_{p}\) is smaller for a 2p\({}_{+}\) electron (smaller \(\widetilde{I}_{p}\)) than for a 2p\({}_{-}\) one (larger \(\widetilde{I}_{p}\)). Since tunneling is strongly suppressed with increasing classically forbidden area, the ionization probability for a counter-rotating electron is larger than for a co-rotating one in strong CP fields in this typical regime, in agreement with the SFA [20], numerical simulations [36] and experimental measurements [26].
The last two lines of table 1 reveal that the electron ionizes mainly along the laser electric field direction, as predicted from tunneling theories [16; 17; 18; 20; 21; 42], and that the position of the electron after tunneling is approximately given by \(\widetilde{\mathbf{r}}\)=\(-r_{0}\widetilde{\mathbf{F}}(t)/|\widetilde{\mathbf{F}}(t)|\). The radius \(r_{0}\) can be determined using the conservation law
Figure 2: Distributions of the energy variation in the rotating frame \(\Delta\widetilde{E}\)=\(\widetilde{E}\)+\((I_{p}\)+\(m\omega)\) (filled lines), in the laboratory frame \(\Delta E\)=\(E\)+\(I_{p}\) (solid lines) and distribution of the inertial energy variation \(\omega\Delta\mathcal{L}_{z}\)=\(\omega(\mathcal{L}_{z}\)\(-\)\(m)\) (dashed lines), corresponding to the distributions of Fig. 1.(1) shifted by their initial energy and normalized to unity. All the data were obtained by the backpropagation method and normalized to unity. The gaussian fit parameters of \(\Delta E\) and \(\Delta\widetilde{E}\) are given in table 1.
in the RF (Eq. 5). It corresponds to the position of the outermost intersection between the effective potential \(\widetilde{V}_{\rm eff}({\bf r},t)\) and the initial energy \(-\widetilde{I}_{p}\), i.e. the solution of \(\widetilde{V}_{\rm eff}(-r_{0}\widetilde{\bf F}(t)/|\widetilde{\bf F}(t)|,t)=-\widetilde{I}_{p}\). In Fig. 3, \(r_{0}\) at the peak of the envelope is indicated by vertical dotted lines: it quantitatively matches the minimum exit distance after tunneling, given by the distributions plotted on the same figure. Finally, in the RF, the momentum of the electron, when exiting the potential barrier, is \(\widetilde{\bf p}\)=\(\omega{\bf e}_{z}\times\widetilde{\bf r}\) (zero kinetic energy, see also [43]). It is therefore perpendicular to the electric field and non-zero, in agreement with tunneling theories [16; 17; 18; 20].
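A numerical sketch of this construction is given below. The soft-Coulomb parameters \(Z\) and \(a\) are placeholders (the values actually used are given in the Supplemental Material), and the intensity-to-field conversion follows the convention \(I=2F^{2}\) stated above, with \(3.51\times 10^{16}\,\mathrm{W\,cm^{-2}}\) as the atomic unit of intensity.

```python
import numpy as np
from scipy.optimize import brentq

def tunnel_exit_radius(F, omega, I_p, m, Z=1.0, a=0.1):
    """Solve V_eff(-r0 * Fhat, 0) = -(I_p + m*omega) for the exit radius r0 (a.u.)."""
    I_p_tilde = I_p + m * omega

    def v_eff(r):
        # Along the instantaneous field direction: soft-Coulomb attraction,
        # the -omega^2 r^2 / 2 term of Eq. (6), and the dipole term -F*r.
        return -Z / np.sqrt(r**2 + a**2) - 0.5 * omega**2 * r**2 - F * r

    r_grid = np.linspace(1e-3, 100.0, 20000)
    g = v_eff(r_grid) + I_p_tilde
    crossings = np.where(np.sign(g[:-1]) != np.sign(g[1:]))[0]
    if len(crossings) == 0:
        raise ValueError("no barrier crossing found (over-the-barrier regime?)")
    i = crossings[-1]                         # outermost intersection
    return brentq(lambda r: v_eff(r) + I_p_tilde, r_grid[i], r_grid[i + 1])

omega = 0.05695                               # 800 nm, atomic units
F = np.sqrt((8e14 / 3.51e16) / 2.0)           # I = 8e14 W/cm^2 with I = 2 F^2
print(tunnel_exit_radius(F, omega, I_p=24.3 / 27.211, m=0))   # He 1s example
```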
The knowledge gained from the RF perspective can now be used to deepen our understanding of both the phase-space configuration after tunneling in the LF, and the classical-quantum correspondence of tunnel ionization. As at each time \({\cal H}({\bf r},{\bf p},t)\)=\(\widetilde{\cal H}(\widetilde{\bf r},\widetilde{\bf p},t)\)+\(\omega{\cal L}_{z}\) and according to Eq. (5), the angular momentum variations of the electron during tunnel ionization are directly converted into energy in the LF. Thus, we have
\[\Delta E\approx\omega\,\Delta{\cal L}_{z}, \tag{7}\]
with \(\Delta E\)=\(E\)+\(I_{p}\) and \(\Delta{\cal L}_{z}\)=\({\cal L}_{z}\)\(-m\) the energy and angular momentum variations of the electron before and after tunneling. Figure 2 shows a comparison between \(\Delta E\) and \(\omega\Delta{\cal L}_{z}\), and the perfect agreement between the two curves confirms our findings resulting in the conversion law (7).
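The identity behind Eq. (7) can be checked numerically with a single classical trajectory: for a rotationally invariant potential and a constant-envelope CP field, \(E(t)-\omega\mathcal{L}_{z}(t)\) is exactly conserved. The sketch below uses an illustrative soft-Coulomb potential and arbitrary initial conditions (all in atomic units); it is a consistency check, not a reproduction of the backpropagation calculation.

```python
import numpy as np
from scipy.integrate import solve_ivp

omega, F0, a = 0.05695, 0.1, 0.1     # illustrative parameters, atomic units

def rhs(t, s):
    x, y, px, py = s
    field = F0 * np.array([np.sin(omega * t), -np.cos(omega * t)])    # F(t) = -dA/dt with f = 1
    grad_v = np.array([x, y]) / (x**2 + y**2 + a**2) ** 1.5            # gradient of -1/sqrt(r^2+a^2)
    return [px, py, -grad_v[0] - field[0], -grad_v[1] - field[1]]

sol = solve_ivp(rhs, (0.0, 4 * 2 * np.pi / omega), [6.0, 0.0, 0.0, 0.3],
                rtol=1e-10, atol=1e-12, dense_output=True)
t = np.linspace(sol.t[0], sol.t[-1], 500)
x, y, px, py = sol.sol(t)
Fx, Fy = F0 * np.sin(omega * t), -F0 * np.cos(omega * t)
E = 0.5 * (px**2 + py**2) - 1.0 / np.sqrt(x**2 + y**2 + a**2) + x * Fx + y * Fy
Lz = x * py - y * px
print(np.ptp(E - omega * Lz))        # ~0 up to integration error
```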
_Strong-field approximation_ Finally, we aim to validate the conservation law (5), originating from a classical picture, with the commonly used SFA approach [23; 24; 44]. We start in the LF from Hamiltonian (1). Within SFA, one uses an ansatz for the electronic wavefunction as the sum of the initial bound state \(\psi_{0}\) and an ionized wavepacket \(\varphi\), i.e. \(\psi({\bf r},t)=\exp({\rm i}I_{p}t)\psi_{0}({\bf r})+\varphi({\bf r},t)\). After substituting this approximation in the TDSE, one obtains
\[\left({\rm i}\,\partial_{t}-\frac{{\bf p}^{2}}{2}-{\bf r}\cdot{\bf F}(t)\right)\varphi({\bf r},t)=s({\bf r},t), \tag{8a}\] where the ion-electron interaction on the ionizing wavepacket is neglected [23] and \[s({\bf r},t)=\left({\bf r}\cdot{\bf F}(t)\right)\,\exp({\rm i}I_{p}t)\,\psi_{0}({\bf r}), \tag{8b}\]
is the electron source of the ionizing wavepacket which solely depends on the initial state [48]. In the Green function formalism, the dynamics of the ionizing wavepacket is then governed by
\[\varphi({\bf r},t)=\int_{-\infty}^{t}{\rm d}t^{\prime}\int{\rm d}{\bf r}^{ \prime}\;G({\bf r},t;{\bf r}^{\prime},t^{\prime})\,s({\bf r}^{\prime},t^{ \prime}). \tag{9}\]
The Green function \(G({\bf r},t;{\bf r}^{\prime},t^{\prime})\) can be expressed exactly in terms of the classical action \({\cal S}({\bf r},t;{\bf r}^{\prime},t^{\prime})\) solution of the Hamilton-Jacobi equation since \({\cal H}\) is linear in position for \(V\)=0 [45; 46; 37]. In the RF, the dynamics of \(\varphi\) is obtained by performing the transformation (3)
\[\widetilde{\varphi}({\bf r},t)=\int_{-\infty}^{t}{\rm d}t^{\prime}\int{\rm d}{\bf r}^{\prime}\;\widetilde{G}({\bf r},t;{\bf r}^{\prime},t^{\prime})\,\widetilde{s}({\bf r}^{\prime},t^{\prime}), \tag{10a}\] with the source term \(\widetilde{s}(R_{\omega}(t){\bf r},t)=s({\bf r},t)\). There, the initial state accumulates a time-dependent phase, i.e. \(\psi_{0}(R_{\omega}^{-1}(t){\bf r})=\exp({\rm i}m\omega t)\psi_{0}({\bf r})\). Hence the source term becomes \[\widetilde{s}({\bf r},t)=\left({\bf r}\cdot\widetilde{\bf F}(t)\right)\,\exp\left({\rm i}\widetilde{I}_{p}t\right)\psi_{0}({\bf r}). \tag{10b}\]
The Green function becomes \(\widetilde{G}(R_{\omega}(t){\bf r},t;R_{\omega}(t^{\prime}){\bf r}^{\prime},t^ {\prime})\)=\(G\big{(}{\bf r},t;{\bf r}^{\prime},t^{\prime}\big{)}\), or equivalently the classical action becomes \(\widetilde{\cal S}(R_{\omega}(t){\bf r},t;R_{\omega}(t^{\prime}){\bf r}^{\prime},t ^{\prime})\)=\(\mathcal{S}\big{(}{\bf r},t;{\bf r}^{\prime},t^{\prime}\big{)}\). Note that we have numerically verified that Eq. (10a) (or equivalently Eq. (9)) reproduces with great fidelity the ionization probabilities of [20] and [36]. Analytic expressions can be found in [47]. On short timescales, around the peak amplitude of the laser field, the pulse envelop is \(f(t)\)=1, and the Green function in the RF becomes invariant under translation in time [37]
\[\widetilde{G}({\bf r},t;{\bf r}^{\prime},t^{\prime})=\widetilde{G}({\bf r},t- t^{\prime};{\bf r}^{\prime},0), \tag{11}\]
regardless of the interaction potential. This clearly indicates that the energy of the ionizing wavepacket is conserved during tunnel ionization. The time-integral in (10a) can be substituted by the time-independent Green function \(\widetilde{G}({\bf r},{\bf r}^{\prime};-\widetilde{I}_{p})\) propagating the electron from \({\bf r}^{\prime}\) to \({\bf r}\) on a constant energy level \(-\widetilde{I}_{p}\)=\(-(I_{p}\)+\(m\omega)\). The complex trajectories obtained by the saddle point approximation [23] are depicted by dotted lines in Figs. 1b and 1d. They reproduce well the final energy in the LF and in the RF, and show that the energy of the electron in the RF does not change during tunneling.
As a final remark we would like to add that the present scheme of adiabatic time-dependent motion in the RF can be extended to potentials which are not rotationally symmetric, as long as the resulting time-dependence of the potential in the rotating frame is slow, i.e. adiabatic. Generally speaking, this will be the case for potentials smoothly varying in space, which is the typical case for molecules and more complex systems.
Figure 3: Distributions of the distance of birth of the electron from the origin obtained by the backpropagation method and normalized to unity. The vertical dotted lines are the position of the effective potential barrier \(r_{0}\) computed from \(\widetilde{V}_{\rm eff}({\bf r},0)\)=\(-\widetilde{I}_{p}\) (see Eq. (6)) with tunneling exit at the peak amplitude of the laser field \({\bf r}\)=\(-r_{0}{\bf F}(0)/|{\bf F}(0)|\).
_Summary_ We have shown that electrons subjected to strong CP laser pulses obey classical conservation laws, confirmed by semiclassical treatments. These conservation laws offer a clear characterization of the tunnel ionization process, and provide a powerful tool for deeper analysis. In addition, analysis in the rotating frame together with the conservation laws offers a promising avenue for predicting and controlling the phase-space configuration of the electron after tunnel ionization for more complex systems, such as molecules.
_Acknowledgements_ JD acknowledges ATTOCOM funded by the Agence Nationale de la Recherche. JD acknowledges Kieran Fraser, Panos Giannakeas, Andrew Hunter and Gabriel Lando for fruitful discussions.
|
2307.03565 | MALIBO: Meta-learning for Likelihood-free Bayesian Optimization | Bayesian optimization (BO) is a popular method to optimize costly black-box
functions. While traditional BO optimizes each new target task from scratch,
meta-learning has emerged as a way to leverage knowledge from related tasks to
optimize new tasks faster. However, existing meta-learning BO methods rely on
surrogate models that suffer from scalability issues and are sensitive to
observations with different scales and noise types across tasks. Moreover, they
often overlook the uncertainty associated with task similarity. This leads to
unreliable task adaptation when only limited observations are obtained or when
the new tasks differ significantly from the related tasks. To address these
limitations, we propose a novel meta-learning BO approach that bypasses the
surrogate model and directly learns the utility of queries across tasks. Our
method explicitly models task uncertainty and includes an auxiliary model to
enable robust adaptation to new tasks. Extensive experiments show that our
method demonstrates strong anytime performance and outperforms state-of-the-art
meta-learning BO methods in various benchmarks. | Jiarong Pan, Stefan Falkner, Felix Berkenkamp, Joaquin Vanschoren | 2023-07-07T12:57:10Z | http://arxiv.org/abs/2307.03565v3 | # MALIBO: Meta-learning for Likelihood-free Bayesian Optimization
###### Abstract
Bayesian optimization (BO) is a popular method to optimize costly black-box functions. While traditional BO optimizes each new target task from scratch, meta-learning has emerged as a way to leverage knowledge from related tasks to optimize new tasks faster. However, existing meta-learning BO methods rely on surrogate models that suffer from scalability issues and are sensitive to observations with different scales and noise types across tasks. Moreover, they often overlook the uncertainty associated with task similarity. This leads to unreliable task adaptation when only limited observations are obtained or when the new tasks differ significantly from the related tasks. To address these limitations, we propose a novel meta-learning BO approach that bypasses the surrogate model and directly learns the utility of queries across tasks. Our method explicitly models task uncertainty and includes an auxiliary model to enable robust adaptation to new tasks. Extensive experiments show that our method demonstrates strong anytime performance and outperforms state-of-the-art meta-learning BO methods in various benchmarks.
## 1 Introduction
Bayesian optimization (BO) is a widely used framework to optimize expensive black-box functions [54], with applications including material design [21] and automated machine learning [31]. In traditional BO, a probabilistic surrogate model and an acquisition function are employed to propose the next query candidate for optimization. The surrogate model, often a Gaussian process (GP), models the black-box function and provides uncertainty estimates, enabling a balance between exploration and exploitation. The acquisition function determines the utility of potential queries based on a specific exploration-exploitation trade-off.
While BO typically focuses on each new target task individually, recent approaches leverage information from previous runs on related tasks through transfer learning [68] and meta-learning [63] to _warm-start_ BO. In this context, each _task_ denotes the optimization of a specific black-box function and we assume that related tasks share similarities with the target task, e.g. tuning the same neural network on multiple datasets. The information from related tasks is assumed to be available, for instance, through a public repository [64] or repeated experiments. Prior knowledge from related tasks can be used to build informed surrogate models [53; 72; 17; 46; 70; 5], restrict the search space [47], or initialize the optimization with configurations that generally score well [15; 50; 65].
However, many of these approaches require a surrogate model to approximate the target function, which gives rise to several issues: (i) GP-based methods scale poorly with the number of observations as well as number of tasks, due to their cubic computational complexity [49]. (ii) In practice, observations across tasks can have different scales, e.g. the validation error of an algorithm can be high on one dataset and low on another. Although normalization can be applied to the data from related tasks, normalizing the target task data is often challenging, especially when only a few observations are available to estimate its range. As a result, regression-based surrogate models can
struggle to adequately transfer knowledge from related tasks [2; 73; 16; 50; 70]. (iii) While GPs typically assume the observation noise to be Gaussian and homoscedastic, real-world observations often have different noise distributions and can be heteroscedastic. This discrepancy can lead to poor meta-learning and optimization performance [51]. Moreover, when adapting to tasks that have limited observations (e.g. early iterations during optimization) or tasks that significantly differ from related tasks, estimating the task similarity becomes challenging due to the scarcity of relevant task information. Hence, it is desirable to explicitly model the uncertainty inherent to such tasks [19; 5]. Nevertheless, many existing methods warm-start BO by only modeling relations between tasks deterministically [15; 72; 65], making the optimization unreliable.
To tackle these limitations, we propose a novel and scalable meta-learning BO approach inspired by the idea of likelihood-free Bayesian optimization (LFBO) [56]. Our method overcomes the limitations of surrogate modeling by directly modeling the acquisition function, which makes less stringent assumptions about the observed values compared to GPs. This enables effective learning across tasks with varying scales and noises. To account for task uncertainty, e.g. lack of task information or when the target task is distinctly different from related tasks, we introduce a probabilistic meta-learning model to capture the task uncertainty, as well as a novel adaptation procedure based on gradient boosting to robustly and efficiently adapt to each new task.
This paper makes the following contributions: (i) We propose a scalable and robust meta-learning BO approach that directly models the acquisition function of a given task based on knowledge from related tasks, while being able to cope with heterogeneous observation scales and noise types across tasks. (ii) We use a probabilistic model to meta-learn the task distribution, enabling us to account for the uncertainty inherent in each target task. (iii) We ensure robust adaptation to new tasks that are not well captured by meta-learning by adding a novel adaptation procedure based on gradient boosting.
## 2 Related Work
**Meta-learning Bayesian optimization.** Various methods have been proposed to improve the data-efficiency of BO through meta-learning [63] or transfer learning [68], and have shown effectiveness in diverse applications [1; 18].
One line of work focuses on the initialization of the optimization (_initial design_) by reducing the search space [47; 38] or reusing promising configurations from similar tasks. Task similarity can be determined using hand-crafted features [15] or learned through neural networks (NNs) [33]. Another approach involves estimating the utility of a given configuration across the current and prior tasks using heuristics [71] or learning-based [65] techniques. Transfer learning is also employed to modify the surrogate model using multi-task GPs [60; 62], additive GP models [24; 39], weighted combinations of independent GPs [53; 72; 16], or shared feature representation learned across tasks [46; 70].
Several methods _simultaneously_ learn the initial design and modify the surrogate model. For instance, BOHAMIANN [57] applies task-specific embeddings for BO and adopts a Bayesian NN as the surrogate model, which is computationally expensive and hard to train. ABLR [46] and BANNER [5] both leverage a NN to learn a shared feature representation across tasks and task-specific Bayesian linear regression (BLR) layers for scalability and adaptability. While ABLR adapts to new tasks by fine-tuning the whole network for every new observation, BANNER meta-learns a task-independent mean function and only fine-tunes the BLR layer during optimization. However, both methods are sensitive to changes in scale and noise across tasks due to their smoothness and noise assumptions. To address this, Gaussian Copula Process Plus Prior (GC3P) [50] transforms the observed values via the empirical cumulative distribution function (CDF) and fits a NN across all related tasks. Although GC3P warm-starts the optimization by using a NN to predict the mean for a GP on the target task, its scalability is limited by its GP surrogate.
Likelihood-free acquisition functionsBayesian optimization does not require an explicit model of the likelihood of the observed values [23] and can be done by directly approximating the acquisition function. Tree-structured Parzen estimator (TPE) [4] phrases BO as a density ratio estimation problem [59] and uses the density ratio over 'good' and 'bad' configurations as an acquisition function. BORE [61] estimates the density ratio through class probability estimation [48; 59], which is equivalent to modeling the acquisition function with a binary classifier. Its regret analysis and extension
to parallel optimization exist [44]. By transforming the acquisition function into a variational problem, likelihood-free BO (LFBO) [56] uses the probabilistic predictions of a classifier to directly approximate the acquisition function. In this paper, we leverage the flexibility of likelihood-free acquisition functions and combine it with a meta-learning model to obtain a sample-efficient, scalable, and robust BO method.
## 3 Problem Statement and Background
**Meta-learning Bayesian optimization.** Bayesian optimization (BO) aims to optimize a target black-box function \(f:\mathcal{X}\to\mathbb{R}\) over \(\mathbf{x}\in\mathcal{X}\). In the case of meta-learning, \(T\) related black-box functions \(\{f^{t}(\cdot)\}_{t=1}^{T}\) are given in advance, each with the same domain \(\mathcal{X}\). The optimization is warm-started with previous evaluations on the related functions, \(\mathcal{D}^{\text{meta}}=\{\mathcal{D}^{t}\}_{t=1}^{T}\) with \(\mathcal{D}^{t}=\{(\mathbf{x}_{i}^{t},y_{i}^{t})\}_{i=1}^{N^{t}}\), where \(y_{i}^{t}=f^{t}(\mathbf{x}_{i}^{t})+\epsilon^{t}\) are evaluations corrupted by noise \(\epsilon^{t}\) and \(N^{t}=|\mathcal{D}^{t}|\) is the number of observations collected from task \(f^{t}\). Given a new task, at step \(N+1\), BO proposes \(\mathbf{x}_{N+1}\) and obtains a noisy observation from the target function \(y_{N+1}=f(\mathbf{x}_{N+1})+\epsilon\), with \(\epsilon\) drawn i.i.d. from some distribution \(p_{\epsilon}\). To obtain the proposal \(\mathbf{x}_{N+1}\), a probabilistic surrogate model is first fitted on the \(N\) previous observations on the target function \(\mathcal{D}_{N}=\{(\mathbf{x}_{i},y_{i})\}_{i=1}^{N}\) and the related functions \(\mathcal{D}^{\text{meta}}\). For simplicity, we denote \(\mathcal{D}:=\mathcal{D}_{N}\cup\mathcal{D}^{\text{meta}}\). The resulting model is then used to compute an acquisition function, for example, the expected utility of a given query \(\mathbf{x}\) via
\[\alpha^{\text{U}}(\mathbf{x};\mathcal{D},\tau)=\mathbb{E}_{y\sim p(y|\mathbf{ x},\mathcal{D})}[U(y;\tau)]=\int U(y;\tau)p(y\mid\mathbf{x},\mathcal{D})\, \mathrm{d}y\,, \tag{1}\]
where \(U(y;\tau)\) is a chosen utility function with a threshold \(\tau\) that decides the utility of observing \(y\) at \(\mathbf{x}\) and controls the exploration-exploitation trade-off [69; 23]. The predictive distribution \(p(y\mid\mathbf{x},\mathcal{D})\) is given by the probabilistic surrogate model and the maximizer \(\mathbf{x}_{N+1}=\arg\max_{\mathbf{x}\in\mathcal{X}}\alpha(\mathbf{x};\mathcal{D},\tau)\) is the proposed candidate. The most common examples that take the form of Equation (1) are Expected Improvement (EI) [40] and Probability of Improvement (PI) [37]. Many other acquisition functions exist, for example, UCB [58], Entropy Search [27; 28; 66] and Knowledge Gradient [20]. We refer to Shahriari et al. [54] for details.
**Likelihood-free acquisition functions.** Likelihood-free acquisition functions model the utility of a query without explicitly modeling the predictive distribution. For example, the tree-structured Parzen estimator (TPE) [4] dismisses the surrogate for the outcomes and instead models two densities that split the observations w.r.t. a threshold \(\tau\), namely \(\ell(\mathbf{x})=p(\mathbf{x}\mid y\leq\tau,\mathcal{D}_{N})\) and \(g(\mathbf{x})=p(\mathbf{x}\mid y>\tau,\mathcal{D}_{N})\) for the promising and non-promising data distributions, respectively. The threshold \(\tau\) relates to the \(\gamma\)-th quantile of the observed outcomes via \(\gamma=\Phi(\tau):=p(y\leq\tau)\). In fact, the resulting density ratio (DR) \(\alpha^{\text{DR}}(\mathbf{x};\mathcal{D}_{N},\tau)=\ell(\mathbf{x})/g(\mathbf{x})\) is shown to have the same maximum as PI [56; 23].
BORE [61] improves several aspects of TPE by directly estimating the density ratio instead of solving the more challenging problem of modeling two independent densities as an intermediate step. It rephrases the density ratio estimation as a binary classification problem where all observations within the same class have the same importance. Specifically, they show \(\alpha^{\text{DR}}(\mathbf{x};\mathcal{D}_{N},\tau)\propto C_{\boldsymbol{ \theta}}(\mathbf{x})=p(k=1\mid\mathbf{x},D_{N},\tau)\), where \(k=\mathbbm{1}\left(y\leq\tau\right)\) represents the binary class labels for classification and the classifier \(C_{\boldsymbol{\theta}}\) has learnable parameters \(\boldsymbol{\theta}\).
Likelihood-free BO (LFBO) [56] provides a general framework to directly learn the acquisition function that takes the specific form as in Equation (1) through a classifier. By rephrasing the integral as a variational problem, LFBO involves solving a weighted classification problem with noisy labels for the class \(k=1\), where the weights correspond to utilities. It is shown that for EI, where \(U(y;\tau):=\max(\tau-y,0)\), optimizing the following objective yields a classifier to estimate the EI acquisition function:
\[\mathcal{L}^{\text{LFBO}}(\boldsymbol{\theta};\mathcal{D}_{N},\tau)=-\mathbb{E} _{(\mathbf{x},y)\sim\mathcal{D}_{N}}\big{[}\max(\tau-y,0)\ln C_{\boldsymbol{ \theta}}(\mathbf{x})+\ln(1-C_{\boldsymbol{\theta}}(\mathbf{x}))\big{]}\,. \tag{2}\]
The resulting classifier separates promising and non-promising configurations with probabilistic prediction that can be interpreted as utility of queries, which leads to scale-invariant models without noise assumption and allows the application of any classification method [61; 56]. Further details of the algorithms are provided in Appendix A.
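To make this concrete, the sketch below fits an EI-style LFBO acquisition function with an off-the-shelf classifier. Each observation contributes a positive-class term weighted by \(\max(\tau-y,0)\) and a negative-class term with unit weight, which can be realized by duplicating the data with per-sample weights; the use of logistic regression here is only for illustration, since any probabilistic classifier can be plugged in.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_lfbo_classifier(X, y, gamma=1.0 / 3.0):
    """Fit a classifier whose predicted probability approximates the EI acquisition of Eq. (2)."""
    tau = np.quantile(y, gamma)
    w_pos = np.maximum(tau - y, 0.0)                      # utility weights for the k = 1 terms
    X_aug = np.vstack([X, X])
    k_aug = np.concatenate([np.ones(len(X)), np.zeros(len(X))])
    w_aug = np.concatenate([w_pos, np.ones(len(X))])      # k = 0 terms enter with weight 1
    clf = LogisticRegression().fit(X_aug, k_aug, sample_weight=w_aug)
    # clf.predict_proba(x)[:, 1] is a monotone transform of EI, so its argmax matches the EI maximizer.
    return clf, tau
```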
## 4 Methodology
In this section, we introduce our Meta-learning for LIkelihood-free BO (MALIBO) method, which extends LFBO with an effective meta-learning approach. An illustration of our method on a one-dimensional problem is shown in Figure 1. Our approach uses a neural network to meta-learn both a task-agnostic model based on features learned across tasks (right panel in Figure 1), and a task-specific component providing uncertainty estimation to adapt to new tasks. Additionally, we use Thompson sampling (dashed lines in Figure 1) as a exploratory strategy to account for the task uncertainty. Finally, as explained below, we apply gradient boosting as a residual prediction model to enable our model to adapt to tasks that are not well captured by our meta-learned model.
Network structureMALIBO uses a structured neural network that combines a meta-learned task-agnostic model with task-specific layer. We show an overview in Figure 2 and provide details for the choices below. Following previous works [5; 46], our meta-learning model uses a deterministic, task-agnostic model to map the input into features \(\mathbf{\Phi}=\phi(\mathbf{x})\), where \(\phi:\mathcal{X}\rightarrow\mathbb{R}^{d}\) is a learnable feature mapping shared across all tasks and \(d\) is the predefined dimensionality of the feature space. We use a Residual Feedforward Network (ResFFN) for learning \(\phi\), which has been shown to be robust to network hyperparameters and generalizes well to different problems [30]. To enable our model to provide good initial proposals, we introduce a task-agnostic mean prediction layer \(m:\mathbb{R}^{d}\rightarrow\mathbb{R}\) that learns the promising areas from the related tasks. We refer to the combined task-agnostic components \(m\) and \(\phi\) as \(g_{\mathbf{\omega}}\) (shown in blue), which is parameterized by \(\mathbf{\omega}\). To allow adaptation on each task \(t\), we use a task prediction layer \(r_{t}:\mathbb{R}^{d}\rightarrow\mathbb{R}\), which is parameterized by layer weights \(\mathbf{z}_{t}\in\mathcal{Z}\subseteq\mathbb{R}^{d}\). Since each \(\mathbf{z}_{t}\) embeds in a low dimensional latent space \(\mathcal{Z}\) and is a unique vector for each task, we refer to \(\mathbf{z}_{t}\) as the task-specific embedding and to \(\{\mathbf{z}_{t}\}\) as the set of embeddings for all meta-tasks. We will train our model such that the \(\{\mathbf{z}_{t}\}\) follow a known distribution \(p(\mathcal{Z})\) and discuss below how to use this as a prior for target task adaptation. Lastly, in order to obtain classification outputs as in LFBO, we apply logistic regression subsequently to produce probabilistic class predictions \(p(k=1\mid\mathbf{x})\). The prediction for an observation in task \(t\) is then given by \(C(\mathbf{x}_{t})=\sigma(m(\mathbf{\Phi})+\mathbf{z}_{t}^{\intercal}\mathbf{ \Phi})\).
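A compact PyTorch sketch of this architecture is shown below; the layer widths, the number of residual blocks, and the feature dimensionality are illustrative placeholders rather than the values used in the experiments.

```python
import torch
import torch.nn as nn

class MALIBOClassifier(nn.Module):
    """Task-agnostic feature map phi + mean head m, plus per-task embeddings z_t."""

    def __init__(self, input_dim, feat_dim=32, hidden=128, num_tasks=10):
        super().__init__()
        self.inp = nn.Linear(input_dim, hidden)
        self.block = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(),
                                   nn.Linear(hidden, hidden))
        self.out = nn.Linear(hidden, feat_dim)                    # feature map phi(x)
        self.mean = nn.Linear(feat_dim, 1)                        # task-agnostic head m
        self.z = nn.Parameter(torch.randn(num_tasks, feat_dim))   # task embeddings z_t

    def features(self, x):
        h = torch.relu(self.inp(x))
        h = torch.relu(h + self.block(h))                         # one residual block
        return self.out(h)

    def forward(self, x, task_idx):
        phi = self.features(x)
        logit = self.mean(phi).squeeze(-1) + (phi * self.z[task_idx]).sum(-1)
        return torch.sigmoid(logit)                               # p(k = 1 | x, task)
```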
Figure 1: Illustration of meta-learning the acquisition function. Left: The top panel shows observations from 10 related tasks and the target task. The top performing observations (\(\tau=\Phi^{-1}(\gamma),\gamma=1/3\)) in each task are shown in red, the rest in blue. The bottom panel shows the maximum-a-posteriori estimate of the acquisition function in solid while the Thompson samples are shown in dashed curves. Right: Features learned by our model, showing that MALIBO successfully identifies the promising areas, while the Thompson samples show variability in the meta-learned acquisition function.
Figure 2: Schematic representation of our meta-learning classifier. A Residual Feedforward Network (ResFFN) maps the input \(\mathbf{x}\) via a shared feature mapping function \(\phi\). From this, we construct a task-agnostic mean prediction \(m(\mathbf{\Phi})\) and task-specific embedding \(\mathbf{z}_{t}\), which is distributed according to a prior distribution \(p(\mathcal{Z})\). The feature mapping function \(\phi\) and mean prediction \(m\) are fixed after meta-training, denoted by the task-agnostic component \(g_{\mathbf{\omega}}\). Finally, we add and convert them to a class prediction.
**Meta-learning.** Directly optimizing \(\mathcal{L}^{\text{LFBO}}\) to meta-learn our model would lead to task embeddings that do not conform to any particular prior task distribution \(p(\mathcal{Z})\), and thus render task adaptation difficult and unreliable [19]. Therefore, we regularize the task embeddings \(\{\mathbf{z}_{t}\}\) during training to enable Bayesian inference. In addition, such regularization can also avoid overfitting in the task space \(\mathcal{Z}\) and improve the generalization performance of our model. Specifically, we assume the prior of the task embeddings to be a multivariate normal (MVN), \(p(\mathcal{Z})=\mathcal{N}(\mathbf{0},\mathbf{I})\), and apply a regularization term to bring the empirical distribution of the \(\{\mathbf{z}_{t}\}\) close to the prior distribution. The loss used for training on the meta-data reads:
\[\mathcal{L}^{\text{meta}}(\boldsymbol{\omega},\{\mathbf{z}_{t}\}_{t=1}^{T})= \frac{1}{T}\sum_{t=1}^{T}\mathcal{L}^{\text{LFBO}}(\boldsymbol{\omega}, \mathbf{z}_{t};\mathcal{D}^{t},\tau)+\lambda\mathcal{R}(\{\mathbf{z}_{t}\}_{t= 1}^{T};p(\mathcal{Z}))\,, \tag{3}\]
where the first term is the loss function from LFBO as in Equation (2), which weights the observations in the meta-data by their improvements, and the second term \(\mathcal{R}\) is the regularization term weighted by \(\lambda\). We regularize the empirical distribution of \(\{\mathbf{z}_{t}\}\) to match the Gaussian prior in a tractable way [52, 5]:
\[\mathcal{R}(\{\mathbf{z}_{t}\}_{t=1}^{T};p(\mathcal{Z}))=\lambda_{\text{KS}} \sum_{j=1}^{d}(F([\mathbf{z}_{t}]_{j})-\Phi([\mathbf{z}_{t}]_{j}))^{2}+\lambda _{\text{Cov}}\|\mathbf{I}-\text{Cov}(\{\mathbf{z}_{1},\dots,\mathbf{z}_{T}\}) \|_{\text{F}}^{2}\,, \tag{4}\]
where the first term matches the marginal CDFs, similarly to a Kolmogorov-Smirnov (KS) test, and the second term matches the empirical covariance of the task embeddings to the covariance of the prior. The hyperparameters \(\lambda_{\text{KS}}\) and \(\lambda_{\text{Cov}}\) encode the trade-off between these two terms. Here \(F\) denotes the empirical CDF and Cov the empirical covariance matrix. For more details we refer to Appendix C.
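As an illustration, the sketch below evaluates the two regularization terms of Equation (4) with NumPy/SciPy (in practice this would be computed inside an autodiff framework during meta-training; the exact reduction over tasks and dimensions is our assumption):

```python
# Hedged sketch of the task-embedding regularizer in Eq. (4): the first term
# matches the per-dimension empirical CDF of {z_t} to the standard normal CDF
# (a Kolmogorov-Smirnov-like penalty), the second matches the empirical
# covariance to the identity.  Weighting and reduction details are assumptions.
import numpy as np
from scipy.stats import norm

def embedding_regularizer(Z, lam_ks=1.0, lam_cov=1.0):
    """Z has shape (T, d): one embedding z_t per meta-task."""
    T, d = Z.shape
    ks = 0.0
    for j in range(d):
        zj = np.sort(Z[:, j])
        F_emp = (np.arange(1, T + 1) - 0.5) / T   # one common empirical-CDF convention
        ks += np.sum((F_emp - norm.cdf(zj)) ** 2)
    cov = np.cov(Z, rowvar=False)                 # empirical covariance of the embeddings
    cov_term = np.sum((np.eye(d) - cov) ** 2)     # squared Frobenius distance to the identity
    return lam_ks * ks + lam_cov * cov_term

Z = np.random.default_rng(1).normal(size=(64, 8)) # embeddings close to the prior
print(embedding_regularizer(Z))                   # small for prior-like samples
```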
We only consider a uni-modal Gaussian prior in this work, as we will show it already demonstrates strong performance against other baselines. For more complex task distributions, one could extend it with a multi-modal Gaussian prior [52].
**Task adaptation.** After meta-training, the model can efficiently adapt to new tasks by estimating an embedding \(\mathbf{z}\) based on the learned feature mapping function \(\phi\). In principle, one could use a maximum likelihood classifier obtained by directly optimizing Equation (2) w.r.t. \(\mathbf{z}\). However, such a classifier does not consider the task uncertainty and would suffer from unreliable adaptation [19] and over-exploitation [61, 56, 44]. Furthermore, when a disparity arises between the distribution of the meta-data and the non-i.i.d. data collected during optimization, a probabilistic model is informed of it through its uncertainty estimates and can therefore rely less on the knowledge learned from the meta-data. Therefore, we propose to use a Bayesian approach for task adaptation, which makes our classifier uncertainty-aware and more exploratory.
Consider the task embedding \(\mathbf{z}\) for the target task follows a distribution \(p(\mathbf{z}\mid\mathcal{D}_{N})\) after \(N\) observations, then the predictive distribution of our model can be written as
\[C(\mathbf{x};\boldsymbol{\omega},\mathcal{D}_{N})=\int p(k=1\mid\boldsymbol{ \omega},\mathbf{z})p(\mathbf{z}\mid\mathcal{D}_{N})\,\mathrm{d}\mathbf{z}\,, \tag{5}\]
which accounts for the epistemic uncertainty in the task embedding. Since the parameters \(\boldsymbol{\omega}\) of task-agnostic model \(g_{\boldsymbol{\omega}}\) are fixed after meta-training, we denote our classifier as \(C(\mathbf{x})\) for simplicity.
As there is no analytical way to evaluate the integration in Equation (5), we have to resort to approximation methods, such as variational inference [25, 7], Laplace approximation [6, 41], and Markov chain Monte Carlo [42, 29]. We consider the Laplace approximation for the task posterior distribution \(p(\mathbf{z}\mid\mathcal{D}_{N})\) as a fast and scalable method, and show its competitive performance against other more expensive alternatives in Appendix E.3.
Laplace's method fits a Gaussian distribution around the maximum-a-posteriori (MAP) estimate of the distribution and matches the second-order derivative at the optimum. In the first step, we obtain the MAP estimate by maximizing the posterior of our classifier \(C\) parameterized by \(\mathbf{z}\). To be consistent with the regularization used during meta-training, we use a standard, isotropic Gaussian prior for the weights: \(p(\mathbf{z})=\mathcal{N}(\mathbf{z}\mid\mathbf{0},\mathbf{I})\). Given observations \(\mathcal{D}_{N}\), the negative logarithm of the posterior \(p(\mathbf{z}\mid\mathcal{D}_{N})\) is, up to an additive constant,
\[\mathcal{L}^{\text{MALIBO}}(\mathbf{z})=\frac{1}{2}\mathbf{z}^{\mathsf{T}} \mathbf{z}-\sum_{n=1}^{N}\left(k_{n}(\tau-y)\ln\hat{k}_{n}+\ln(1-\hat{k}_{n}) \right)\,, \tag{6}\]
where \(\hat{k}=\sigma(m(\mathbf{\Phi})+\mathbf{z}^{\mathsf{T}}\mathbf{\Phi})\) is the class probability prediction and the MAP estimate of the weights is given by \(\mathbf{z}_{\text{MAP}}=\arg\min_{\mathbf{z}\in\mathcal{Z}}\mathcal{L}^{\text{MALIBO}}(\mathbf{z})\). As a second step, we compute the negative Hessian of the log posterior
\[\mathbf{\Sigma}_{N}^{-1}=\nabla\nabla\mathcal{L}^{\text{MALIBO}}=\mathbf{ \Sigma}_{0}^{-1}+\sum_{n=1}^{N}(k_{n}(\tau-y)+1)\hat{k}_{n}(1-\hat{k}_{n}) \mathbf{\Phi}_{n}\mathbf{\Phi}_{n}^{\mathsf{T}}\,, \tag{7}\]
which serves as the precision matrix for the approximated posterior \(q(\mathbf{z})=\mathcal{N}(\mathbf{z}\mid\mathbf{z}_{\text{MAP}},\mathbf{ \Sigma}_{N})\). Therefore Equation (5) can be approximated as
\[C(\mathbf{x})\simeq\int p(k=1\mid\mathbf{\omega},\mathbf{z})q(\mathbf{z})\,\text{ d}\mathbf{z}\,. \tag{8}\]
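A compact sketch of the two steps is given below. For clarity it uses a plain, unweighted logistic loss in the frozen feature space and drops the mean head \(m(\mathbf{\Phi})\); the paper's Equation (6) additionally weights the positive class by the improvement. This is an illustration, not the authors' code:

```python
# Hedged sketch of Laplace-approximated task adaptation in the meta-learned
# feature space: gradient descent to the MAP embedding, then the negative
# Hessian of the log posterior as the precision matrix of the Gaussian q(z).
import numpy as np

def laplace_task_posterior(Phi, k, n_steps=2000, lr=0.1):
    """Phi: (N, d) frozen features of the observations; k: (N,) binary labels."""
    N, d = Phi.shape
    z = np.zeros(d)
    for _ in range(n_steps):
        p = 1.0 / (1.0 + np.exp(-Phi @ z))
        grad = z + Phi.T @ (p - k)            # d/dz [ 0.5 z.z - sum(k log p + (1-k) log(1-p)) ]
        z -= lr * grad / N                    # small, stable gradient steps to the MAP
    p = 1.0 / (1.0 + np.exp(-Phi @ z))
    precision = np.eye(d) + (Phi * (p * (1.0 - p))[:, None]).T @ Phi
    return z, np.linalg.inv(precision)        # z_MAP and Sigma_N

rng = np.random.default_rng(2)
Phi = rng.normal(size=(20, 8))                # illustrative features of 20 observations
k = (rng.uniform(size=20) < 0.3).astype(float)
z_map, Sigma = laplace_task_posterior(Phi, k)
print(z_map.round(2), np.trace(Sigma).round(2))
```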
**Uncertainty-based exploration.** During the early phase of optimization, every meta-learning model has to reason about the target task properties based only on the limited data available, which can lead to highly biased results and over-exploitation [19]. Moreover, LFBO has been shown to suffer from a similar issue even without meta-learning [61; 56; 44]. Therefore, we propose to use Thompson sampling based on task uncertainty to construct a more exploratory acquisition function; the resulting sampled prediction is given by
\[\hat{C}(\mathbf{x})=\sigma\left(m(\mathbf{\Phi})+\hat{\mathbf{z}}^{\mathsf{T} }\mathbf{\Phi}\right),\quad\hat{\mathbf{z}}\sim q(\mathbf{z})\,. \tag{9}\]
Besides stronger exploration in the early phases of optimization, Thompson sampling also enables us to extend MALIBO to parallel BO by using multiple Thompson samples of the acquisition function in parallel. It is shown that this bypasses the sequential scheme of traditional BO, without introducing the common computational burden of more sophisticated methods [32]. We believe this to be a valuable strategy for parallelization and briefly explore it in Appendix F.
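For example, a batch of parallel suggestions can be obtained by drawing several embeddings from the approximate posterior \(q(\mathbf{z})\) and maximizing each sampled acquisition independently over a candidate pool (a minimal sketch with made-up shapes; in practice the candidate features come from the frozen ResFFN):

```python
# Sketch of parallel suggestions via multiple Thompson samples: each sample
# z_s ~ q(z) defines its own acquisition function, which is maximized over a
# fixed candidate pool.  Shapes and the candidate pool are illustrative.
import numpy as np

def propose_batch(Phi_cand, X_cand, z_map, Sigma, batch_size, rng):
    proposals = []
    for _ in range(batch_size):
        z_s = rng.multivariate_normal(z_map, Sigma)     # one Thompson sample of the embedding
        acq = 1.0 / (1.0 + np.exp(-Phi_cand @ z_s))     # sampled acquisition values
        proposals.append(X_cand[np.argmax(acq)])        # its maximizer on the pool
    return np.array(proposals)

rng = np.random.default_rng(3)
d, n_cand = 8, 256
X_cand = rng.uniform(size=(n_cand, 5))                  # candidate inputs
Phi_cand = rng.normal(size=(n_cand, d))                 # their (assumed) meta-learned features
batch = propose_batch(Phi_cand, X_cand, np.zeros(d), np.eye(d), batch_size=4, rng=rng)
print(batch.shape)                                      # (4, 5): four parallel queries
```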
**Gradient boosting as a residual prediction model.** Operating in a meta-learned feature space enables fast task adaptation for our Bayesian classifier. However, it relies on the assumption that the meta-data is sufficient and representative of the task distribution, which does not always hold in practice. Moreover, a distribution mismatch between the observations \(\mathcal{D}_{N}\) and the meta-data \(\mathcal{D}^{\text{meta}}\) can arise when \(\mathcal{D}_{N}\) is generated by an optimization process while \(\mathcal{D}^{\text{meta}}\) consists of, e.g., i.i.d. samples.
We employ a residual model independent of the meta-learning model, such that, even given non-informative features, our classifier is able to regress to an optimizer that operates in the input space \(\mathcal{X}\). We propose to use gradient boosting (GB) [22] as a residual prediction model for classification, which consists of an ensemble of weak learners that are sequentially trained to correct the errors from the previous ones. Specifically, we replace the first weak learner by a strong learner, namely, our meta-learned classifier. With Thompson sampling, our classifier can be written as
\[C_{\text{GB}}(\mathbf{x})=\sigma\left(m(\mathbf{\Phi})+\hat{\mathbf{z}}^{ \mathsf{T}}\mathbf{\Phi}+\sum_{i=1}^{M}h_{i}(\mathbf{x})\right)\,, \tag{10}\]
where each \(h_{i}\) represents the \(i\)-th trained base-learner performing the error correction from gradient boosting. In addition to robust task adaptation, this approach offers two advantages: First, gradient boosting does not require an additional weighting scheme for combining different classifiers and automatically determines the weight of the meta-learned model; Second, gradient boosting demonstrates strong performance for LFBO on various benchmarks [56], so our classifier remains competitive even when meta-learning fails, as shown in Appendix E.2.
The resulting residual model is trained solely on data collected during optimization, and thus might overfit in the early iterations with limited data. To avoid this, we apply gradient boosting only after a few iterations of Thompson sampling exploration and train it with early stopping. Note that this does not diminish the usefulness of the residual model, because our goal is to encourage exploration in early iterations as outlined in Section 4, and gradually rely more on the knowledge from the target task once sufficiently many observations have been obtained. We refer to Appendix H for details.
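One way to realize this combination, assuming scikit-learn's gradient boosting with its `init` hook (our illustration, not necessarily the authors' implementation), is to wrap the frozen meta-learned classifier as the initial estimator, so that the boosting stages are fit to the residuals of its predictions, in the spirit of Equation (10); early stopping is controlled by `n_iter_no_change`:

```python
# Hedged sketch: use the frozen meta-learned classifier as the first learner of
# a gradient-boosting ensemble via scikit-learn's `init` hook.  The meta-learned
# predictor here is a toy stand-in; labels and data are illustrative.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

class MetaLearnedInit:
    """Frozen meta-learned classifier exposing the estimator API expected by `init`."""
    def __init__(self, predict_fn):
        self.predict_fn = predict_fn              # x -> P(k=1 | x), already meta-trained

    def fit(self, X, y, sample_weight=None):      # nothing to train; the model is frozen
        return self

    def predict_proba(self, X):
        p = np.clip(self.predict_fn(X), 1e-6, 1 - 1e-6)
        return np.column_stack([1 - p, p])

rng = np.random.default_rng(4)
X = rng.uniform(size=(60, 5))
y = (X[:, 0] + 0.1 * rng.normal(size=60) < 0.4).astype(int)   # toy "improvement" labels

meta_model = MetaLearnedInit(lambda X: 1.0 / (1.0 + np.exp(-(0.4 - X[:, 0]) * 5)))
clf = GradientBoostingClassifier(init=meta_model, n_estimators=50,
                                 n_iter_no_change=5, validation_fraction=0.2)
clf.fit(X, y)
print(clf.predict_proba(X[:3])[:, 1])             # meta prediction + boosted residual correction
```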
## 5 Experiments
In this section, we first show a preliminary ablation study to exhibit the effect of using Thompson sampling and gradient boosting. We then describe the experiments conducted to empirically evaluate
our method. For the choice of problems, we focus on automated machine learning (AutoML), i.e., hyperparameter optimization (HPO) and neural architecture search (NAS). To study robustness to data with heterogeneous scale and noise, we evaluate our method on synthetic functions with multiplicative noise. We show that MALIBO outperforms other state-of-the-art meta-learning BO methods in AutoML problems and is robust to heterogeneous scale and noise.
**Baselines.** We compare our method against state-of-the-art baselines across all problems. As methods without meta-learning, we picked random search [3], LFBO [56] and GP-UCB [58] for our experiments. For meta-learning BO methods, we chose ABLR [46], RGPE [16], GC3P [50] and MetaBO [65] as representative algorithms. Additionally, we consider a simple baseline for extending LFBO with meta-learning, called LFBO+BB, which combines LFBO with bounding-box search space pruning [47] as a meta-learning approach. For all LFBO-based methods, including MALIBO, we set the required threshold hyperparameter \(\gamma=1/3\) following [61, 56].
**Evaluation metrics.** To aggregate performance across tasks, we use _normalized regret_ as the quantitative performance measure for AutoML problems [72, 50]. This is defined as \(\min_{\mathbf{x}\in\mathcal{X}_{n}}(f^{t}(\mathbf{x})-f^{t}_{\text{min}})/(f^{t}_{\text{max}}-f^{t}_{\text{min}})\), where \(\mathcal{X}_{n}\) denotes the set of inputs that have been selected by an optimizer up to iteration \(n\), and \(f^{t}_{\text{min}}\) and \(f^{t}_{\text{max}}\) respectively represent the minimum and the maximum objective computed across all offline evaluations available for task \(t\). We report the mean of normalized regrets across all tasks within a benchmark as the aggregated result. For all benchmarks, we report the results by mean and standard error across _100 random runs_.
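For reference, this aggregation can be sketched as follows (variable names are ours):

```python
# Sketch of the normalized-regret aggregation: per task, the best objective
# value found so far is rescaled by the offline min/max for that task, then
# curves are averaged across tasks.
import numpy as np

def normalized_regret(trajectory, f_min, f_max):
    """trajectory: objective values in query order for one task."""
    best_so_far = np.minimum.accumulate(trajectory)
    return (best_so_far - f_min) / (f_max - f_min)

def aggregate(trajectories, f_mins, f_maxs):
    """Mean normalized regret across tasks, per iteration."""
    curves = [normalized_regret(tr, lo, hi)
              for tr, lo, hi in zip(trajectories, f_mins, f_maxs)]
    return np.mean(curves, axis=0)

rng = np.random.default_rng(5)
trajs = [rng.uniform(0.2, 1.0, size=30) for _ in range(10)]   # illustrative trajectories
print(aggregate(trajs, [0.1] * 10, [1.0] * 10)[:5])
```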
**Effects of exploration and residual prediction.** We first demonstrate the effect of Thompson sampling and the residual prediction model by optimizing a Forrester function [55] as a toy example. Using the meta-learned model shown in Figure 1, MALIBO performs task adaptation for a new Forrester function for \(10\) iterations. We compare the results of MALIBO against a variant without the proposed Thompson sampling and gradient boosting, which only uses the approximated posterior predictive distribution in Equation (8), evaluated with the probit approximation [41], as the acquisition function. As shown in Figure 3, MALIBO without Thompson sampling fails to adapt to the new task: it explores little and optimizes greedily around a local optimum. This greedy optimization occurs due to the strong dependence of LFBO on a good initialization to not over-exploit. In contrast, the proposed MALIBO allows the queries to cover both possible optima by encouraging exploration. In addition, gradient boosting performs a refinement beyond the smooth meta-learned acquisition function, which can be seen in the discontinuities of the predictions. By suppressing the predicted utility in the
Figure 3: Effects of Thompson sampling and residual prediction on optimizing a Forrester function. Colored circles denote the optimization queries (from bright to dark), the dashed curve denotes a Thompson sample of the acquisition function, and the orange curve shows the sample combined with gradient boosting.
less promising areas, gradient boosting refines the acquisition function and shifts the proposed query to a lower value region.
**Real-world benchmarks.** We empirically evaluate our method on various real-world optimization tasks, focusing on AutoML problems, including neural architecture search (NASBench201 [12]), hyperparameter optimization for neural networks (HPOBench [35]) and machine learning algorithms (MLBench [14]).
In NASBench201, we consider designing a neural cell with \(6\) discrete parameters, totaling \(15,625\) unique architectures, evaluated on CIFAR-10, CIFAR-100 [36] and ImageNet-16 [9]. We aim to find the architecture that yields the highest validation accuracy. For HPOBench, we aim to find the optimal hyperparameters for a two-layer feed-forward regression network on four popular UCI datasets [13]. The search space is \(9\)-dimensional and the optimization objective is the validation mean squared error after training the corresponding network configuration. In MLBench, we picked \(5\) algorithms (SVM, LogReg, XGBoost, RandomForest and MLP) from the ML benchmark suite [14]. They are evaluated on \(20\) OpenML tasks [64], except for MLP with only 8 tasks (as defined in the benchmark). The search space dimensions range from \(2\) (SVM) to \(5\) (MLP), with the same objective as in NASBench201. We provide details of the benchmarks in Appendix I.
To train and evaluate the meta-learning BO methods, we conduct our experiment in a leave-one-task-out way: all meta-learning methods use one task as the target task and all others as related tasks. In this way, every task in a benchmark has been picked as the target task once. To construct the meta-datasets for meta-learning, we randomly select \(N\) configuration-objective pairs from the related tasks. For NASBench201 and HPOBench, we set \(N\) to be \(512\), while for MLBench, \(N\) is \(128\). All meta-learning methods, except MetaBO, are trained from scratch for each independent run, to account for variations due to the randomly sampled meta-data. Because of its long training time, MetaBO is trained once for each target problem on more meta-data than other methods to avoid limiting its performance with a bad subsample. We show its results only for HPOBench and NASBench201, and refer to Appendix H for details.
The aggregated results for all three benchmarks are summarized in Figure 4. It is evident that MALIBO consistently achieves strong anytime performance, surpassing other methods that either exhibit poor warm-starting or experience early saturation of performance. Notably, MALIBO outperforms other methods by a large margin in HPOBench, primarily because we focus on minimizing the validation error of a regression model in this benchmark. This task poses a significant challenge for GP-based regression models, as the observation values undergo abrupt changes and have varying scales across tasks, thereby violating the smoothness and noise assumptions inherent in these models. In all benchmarks, GC3P performs competitively only after the Copula process is fitted, and LFBO matches its final performance. LFBO+BB exhibits similar performance to MALIBO in warm-starting and converges quickly, but the search space pruning technique forbids the method from exploring regions beyond the promising areas in the meta-data, making its final performance even worse than that of its non-meta-learning counterpart. ABLR and RGPE perform poorly on most of the benchmarks, except for MLBench, because their meta-learning techniques require more meta-data for effective warm-starting, making them less data-efficient than MALIBO. MetaBO shows strong warm-starting performance in HPOBench, while it fails in NASBench. This is because the tasks in NASBench are much more diverse than in HPOBench, and MetaBO fails to transfer
Figure 4: Aggregated normalized regrets for different BO algorithms on real-world AutoML problems.
knowledge from tasks that are different from the target task. Moreover, MetaBO adapts poorly to the task, as reported in other studies [70; 67]. For more experimental results, we refer to Appendix G.
**Runtime analysis.** To confirm the scalability of MALIBO, we compared its runtime against the baseline methods on different benchmarks. We observed that the latent features and the Laplace approximation introduce only a negligible overhead compared to LFBO, and that MALIBO's runtime grows slowly with the number of observations. All other meta-learning methods, except for LFBO+BB, are considerably slower than MALIBO. We show our detailed experimental results in Appendix D.
**Robustness against heterogeneous noise.** We use synthetic function ensembles [5] to test the robustness against heterogeneous noise in the data. We focus on the Hartmann3 function ensemble [11], a three-dimensional problem with four local minima. Their locations and the global minimum vary across the functions in the ensemble. See Appendix I for more details.
To avoid biasing this experiment towards a single method, we use a heteroscedastic noise model that is incompatible with the noise assumptions of all the methods. In particular, this violates the GP methods' and ABLR's assumption of homoscedastic, Gaussian noise. GC3P makes a similar assumption after the nonlinear transformation of the observation values, which does not translate to any well-known noise model. LFBO, LFBO+BB and MALIBO make no explicit noise assumptions, but optimize for the best mean. We choose a multiplicative noise, i.e. \(y=f(\mathbf{x})\cdot(1+\epsilon\cdot n)\), where \(n\sim\mathcal{N}(0,1)\). The noise corrupts observations with larger values more, while having a smaller effect on those with lower values. To probe the robustness at different noise levels, we evaluate \(\epsilon\in\{0,0.1,1.0\}\). For meta-training, we randomly sampled \(512\) noisy observations from \(256\) functions in the ensemble. We show our results in Figure 5, where we can see that, across all noise levels, our method learns a meaningful prior for the optimization. The GP-based methods, especially RGPE, despite their strong performance in the noise-free case, degrade significantly with increasing noise levels.
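The corruption itself is straightforward to reproduce (a small illustrative snippet; the Hartmann3 ensemble is not included here):

```python
# Sketch of the multiplicative-noise corruption used in this experiment:
# y = f(x) * (1 + eps * n), n ~ N(0, 1), so larger objective values are
# corrupted more strongly than smaller ones.
import numpy as np

def corrupt(f_values, eps, rng):
    return f_values * (1.0 + eps * rng.normal(size=f_values.shape))

rng = np.random.default_rng(7)
f_values = np.array([0.05, 0.5, 2.0])      # illustrative objective values
for eps in (0.0, 0.1, 1.0):                # the three noise levels studied
    print(eps, corrupt(f_values, eps, rng))
```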
## 6 Conclusion
We introduced Meta-learning for LIkelihood-free BO (MALIBO), which models the acquisition function directly from observations and couples it with meta-learning. The method is computationally efficient and robust to heterogeneous scale and noise in the meta-data, which is challenging for other methods. In addition to the improved data efficiency, MALIBO uses a Bayesian classifier with Thompson sampling to account for task uncertainty, enabling our model to adapt to the tasks reliably. To ensure robust adaptation to tasks that are not captured by the meta-learning, we incorporate gradient boosting into our model. Empirical results demonstrate superior performance on real-world benchmarks, as well as synthetic benchmarks with heteroscedastic noise.
Despite the promising experimental results, some limitations of the method should be noted. (i) The exploration-exploitation parameter \(\tau\) in likelihood-free BO algorithms could be treated more carefully, e.g. via a probabilistic treatment [61]. (ii) The value of the regularization hyperparameter \(\lambda\) used throughout all experiments seems robust, but could lead to sub-optimal behavior on other problems. (iii) For more complex task distributions, our uni-modal prior could become a limiting factor. While a GMM generalization exists [52], there is no guarantee for its performance within MALIBO.
Figure 5: Normalized regret for different BO algorithms on Hartmann3 ensembles (\(D=3\)) with various levels of multiplicative noise. |
2306.14330 | Energy correlations in heavy states | We study energy correlations in states created by a heavy operator acting on
the vacuum in a conformal field theory. We argue that the energy correlations
in such states exhibit two characteristic regimes as functions of the angular
separations between the calorimeters: power-like growth at small angles
described by the light-ray OPE and slowly varying, or ``flat'', function at
larger angles. The transition between the two regimes is controlled by the
scaling dimension of the heavy operator and the dynamics of the theory. We
analyze this phenomenon in detail in the planar ${\cal N}=4$ SYM theory both at
weak and strong coupling. An analogous transition was previously observed in
QCD in the measurement of the angular energy distribution of particles
belonging to the same energetic jet. In that case it corresponds to the
transition from the light-ray OPE, perturbative regime described in terms of
correlations between quarks and gluons to the flat, non-perturbative regime
described in terms of correlations between hadrons. | D. Chicherin, G. P. Korchemsky, E. Sokatchev, A. Zhiboedov | 2023-06-25T20:01:18Z | http://arxiv.org/abs/2306.14330v2 | # Energy correlations in heavy states
###### Abstract
We study energy correlations in states created by a heavy operator acting on the vacuum in a conformal field theory. We argue that the energy correlations in such states exhibit two characteristic regimes as functions of the angular separations between the calorimeters: power-like growth at small angles described by the light-ray OPE and slowly varying, or "flat", function at larger angles. The transition between the two regimes is controlled by the scaling dimension of the heavy operator and the dynamics of the theory. We analyze this phenomenon in detail in the planar \(\mathcal{N}=4\) SYM theory both at weak and strong coupling. An analogous transition was previously observed in QCD in the measurement of the angular energy distribution of particles belonging to the same energetic jet. In that case it corresponds to the transition from the light-ray OPE, perturbative regime described in terms of correlations between quarks and gluons to the flat, non-perturbative regime described in terms of correlations between hadrons.
Footnote †: preprint: CERN-TH-2023-109, IPhT-T23/051, LAPTH-034/23
## 1 Introduction and summary
* 2 Energy correlations in the free scalar theory
* 3 Energy correlations in \({\cal N}=4\) SYM
* 3.1 Energy-energy correlation
* 3.2 Sum rules
* 3.3 Born contribution
* 3.4 Contact terms for \(K=2\)
* 4 Energy correlations at weak coupling
* 4.1 Recurrence relation for the perturbative corrections
* 4.2 One-loop correction
* 4.3 Two-loop correction
* 5 Heavy sources at weak coupling
* 5.1 Mellin approach
* 5.2 Leading term of the large \(K\) expansion
* 5.3 Integral relation between the kernels at large \(K\)
* 5.4 Two-loop corrections to the large \(K\) asymptotics
* 5.5 Subleading corrections
* 6 Contact terms
* 6.1 Sum rule approach
* 6.2 Mellin approach
* 6.3 Two-loop contact terms from the sum rules
* 7 Event shapes at strong coupling
* 7.1 Supergravity approximation
* 7.2 Stringy correction
* 8 Clustering in CFT
* 8.1 Clustering of local operators in a heavy state
* 8.2 Event shapes
* 8.3 Planar theories
* A Event shapes from correlation functions
* B Four-point correlation functions of half-BPS scalar operators
* C Detector kernel of the Mellin representation
* D Plus-distributions
## 1 Introduction and summary
The energy correlations are among the best studied observables both experimentally and theoretically [1]. They measure the flux of energy deposited in calorimeters located at different points on the celestial sphere and carry information about the dynamics of the underlying theory. A lot of activity has recently been devoted to studying the energy correlations in QCD and in the maximally supersymmetric \(\mathcal{N}=4\) SYM theory. The latter serves as a very useful toy model that one can use to develop new techniques for computing these observables in QCD. Moreover, it allows us to understand certain properties of the energy correlations that cannot be explained within the conventional perturbative QCD approach.
As an example, consider the energy-energy correlation (EEC) measuring the angular distribution of the energy of the particles that enter into two calorimeters separated by the relative angle \(0\leq\theta\leq\pi\)[2, 3]. At large total energy \(Q\), the hadronic final states consist of collimated beams of energetic particles, or jets. In the energy correlations, jets manifest themselves as peaks located at small angles \(\theta\), as well as at finite \(\theta\) corresponding to the angular separation between the jets.
For a small angle \(\theta\), the EEC describes the correlation between particles belonging to the same jet. The analysis of the experimental data shows [4] that it behaves differently at small angles, depending on how \(\theta\) compares with the non-perturbative parameter \(\theta_{0}\simeq\Lambda_{\rm QCD}/Q\) given by the ratio of the QCD hadronization scale and the total energy,1
Footnote 1: The **flat** region corresponds to energy correlations which are slowly-varying functions of the angle. The **OPE** region corresponds to energy correlations which exhibit a simple power-like behavior controlled by the operator product expansion (OPE) between the energy calorimeters.
\[\mbox{\bf Flat}:\quad\mbox{EEC}\sim\mbox{const},\qquad\qquad\mbox{\bf OPE}: \quad\mbox{EEC}\sim 1/\theta^{2-\gamma}\,. \tag{1.1}\]
In QCD, the energy correlations are flat for \(\theta\lesssim\theta_{0}\ll 1\), and they exhibit a power-like growth for \(\theta_{0}\lesssim\theta\lesssim 1\). The scaling behavior EEC \(\sim 1/\theta^{2-\gamma}\) can be derived from the light-ray OPE applied to the energy calorimeters [5, 6, 7, 8, 9], and the exponent \(\gamma\) can be computed at weak coupling as a power series in the QCD coupling constant.2
Footnote 2: In CFT \(\gamma\) is the anomalous dimension of the spin-3, signature plus (continued from even spins) operator on the stress-energy tensor Regge trajectory [5, 10, 11].
Such a change of behavior, from OPE to flat as \(\theta\) decreases, corresponds to the transition from the perturbative regime described in terms of correlations between quarks and gluons to the non-perturbative regime described in terms of correlations between hadrons. To see it, notice that the energy scale that characterizes the branching of particles at small
angles is determined by their relative transverse momenta \(Q\theta\). For small transverse momenta \(Q\theta=O(\Lambda_{\rm QCD})\) the theory becomes strongly coupled. Conversely, the perturbative approach is justified for \(\theta\gg\Lambda_{\rm QCD}/Q\). The behavior \({\rm EEC}\sim\theta^{0}\) can be reproduced if one thinks about the final state as consisting of a dense cloud of hadrons weakly interacting with one another. Describing the transition to this regime requires control over the non-perturbative QCD.
A similar change of behavior of the energy-energy correlation at small angles also takes place in the \({\cal N}=4\) SYM theory as the 't Hooft coupling varies, but the underlying mechanism is slightly different. This theory is conformal and it looks alike at short and large distances. At weak coupling, the EEC in \({\cal N}=4\) SYM has the same power-like behavior at small angles as in QCD. This behavior changes as one goes to the limit of strong 't Hooft coupling constant \(\lambda=g_{\rm YM}^{2}N_{c}\gg 1\). Increasing the value of the coupling constant, one enhances the production of particles (gauge, gaugino and scalars) in the final state. At strong coupling, the final state in \({\cal N}=4\) SYM consists of an infinite number of soft particles whose energy is distributed homogeneously on the celestial sphere. As a consequence, for \(\lambda\to\infty\) the energy-energy correlation does not depend on the angle, \({\rm EEC}(\theta)=1+O(1/\lambda)\). As in the case of QCD, the change of behavior of the EEC as a function of the angle is associated with the presence of a large number of particles in the final state. To describe the transition in detail, one needs to know the energy correlation in \({\cal N}=4\) SYM for an arbitrary 't Hooft coupling, which is out of reach at present.
In this paper, we point out that there exists another physical mechanism of transitioning between the two characteristic regimes (1). For simplicity we restrict our attention to conformal field theory (CFT), but the basic mechanism should be applicable to theories with dimensionful scales, such as generic four-dimensional gauge theories including QCD. Because the transition is driven by the large number of particles produced in the final state, we can create such a final state by exciting the vacuum with a "heavy" operator \(O_{H}(x)\) carrying a large scaling dimension \(\Delta_{H}\gg 1\). Physical intuition suggests that for \(\Delta_{H}\to\infty\) the state \(|H(q)\rangle=\int d^{4}x\,e^{iqx}O_{H}(x)|0\rangle\) should contain arbitrarily many soft particles which are largely uncorrelated. As a consequence, for \(\Delta_{H}\to\infty\) the energy correlations are expected to be angle-independent, up to corrections suppressed by a power of \(1/\Delta_{H}\).
In order to formulate this property, it is convenient to introduce the energy flow operator \({\cal E}(n)\) which measures the energy flux in the direction specified by a null vector \(n^{\mu}=(1,\vec{n})\) with \(\vec{n}^{2}=1\)[12; 13; 14]. In the rest frame of the source, for \(q^{\mu}=(Q,\vec{0})\), its expectation value in the state \(|H(q)\rangle\) is determined by the total momentum, \(\langle{\cal E}(n)\rangle_{H}=Q/(4\pi)\). It does not depend on the choice of the source operator and yields the total energy after integration over the celestial sphere, \(\int d^{2}\vec{n}\,\langle{\cal E}(n)\rangle_{H}=Q\).
The multi-point energy correlations are given by the expectation value of the product of several flow operators, in the state \(|H(q)\rangle\) created by a local operator with scaling dimension \(\Delta_{H}\),
\[\underbrace{{\rm EE},\ldots{\rm E}}_{k}{\rm C}=\langle{\cal E}(n_{1})\ldots{ \cal E}(n_{k})\rangle_{H}=\left(\frac{Q}{4\pi}\right)^{k}\langle\widehat{{ \cal E}}(n_{1})\ldots\widehat{{\cal E}}(n_{k})\rangle_{H}\,, \tag{2}\]
where \(\widehat{\mathcal{E}}(n)=\mathcal{E}(n)/\langle\mathcal{E}(n)\rangle_{H}\) is the normalized energy flow operator with \(\langle\widehat{\mathcal{E}}(n)\rangle_{H}=1\). To study their behavior in the limit \(\Delta_{H}\to\infty\), we find it convenient to apply the cumulant expansion
\[\langle\widehat{\mathcal{E}}(n_{1})\ldots\widehat{\mathcal{E}}(n_{k})\rangle_{ H}=1+\sum_{i<j}\langle\widehat{\mathcal{E}}(n_{i})\widehat{\mathcal{E}}(n_{j}) \rangle_{c}+\sum_{i<j<m}\langle\widehat{\mathcal{E}}(n_{i})\widehat{\mathcal{E} }(n_{j})\widehat{\mathcal{E}}(n_{m})\rangle_{c}+\ldots\, \tag{3}\]
where \(\langle\ldots\rangle_{c}\) denotes the connected correlation. Here the terms \(\langle\widehat{\mathcal{E}}(n_{i})\widehat{\mathcal{E}}(n_{j})\rangle_{c}\) depend on the single relative angle between the \(i\)th and \(j\)th detectors; \(\langle\widehat{\mathcal{E}}(n_{i})\widehat{\mathcal{E}}(n_{j})\widehat{ \mathcal{E}}(n_{m})\rangle_{c}\) depend on the three relative angles between the \(i\)th, \(j\)th, and \(k\)th detectors, etc. Starting from four detectors, for \(k\geq 4\), the expansion (3) involves non-linear terms, the simplest being \(\langle\widehat{\mathcal{E}}(n_{i})\widehat{\mathcal{E}}(n_{j})\rangle_{c} \langle\widehat{\mathcal{E}}(n_{m})\widehat{\mathcal{E}}(n_{q})\rangle_{c}\), see e.g. [15] and Appendix A.
We conjecture that when the source becomes heavy, \(\Delta_{H}\gg 1\), each subsequent term in (3) is further suppressed compared to the previous one, namely
\[\langle\widehat{\mathcal{E}}(n_{1})\ldots\widehat{\mathcal{E}}(n_{k})\rangle_ {c}=O\!\left(\frac{1}{\Delta_{H}^{h_{k}}}\right),\qquad 0<h_{2}<h_{3}< \ldots\, \tag{4}\]
The values \(h_{k}\) and the precise form of \(\langle\widehat{\mathcal{E}}(n_{1})\ldots\widehat{\mathcal{E}}(n_{k})\rangle_ {c}\) depend on the theory and the physical state in question. Notice that in (4), when taking \(\Delta_{H}\to\infty\), we keep the angles between the detectors, as well as the number of detectors, fixed.
The same property was observed before in strongly coupled gauge theories with gravity duals [5]. As mentioned above, in this case the multi-particle final state is not generated by considering a heavy source, but by the strongly coupled dynamics of the theory. The role of \(\Delta_{H}\) is played by the 't Hooft coupling \(\lambda\), and \(h_{k}=\frac{k}{2}\). In contrast, we would like to emphasize that the relation (4) holds in the limit of large \(\Delta_{H}\) for an arbitrary coupling (including the free theory!).
In this paper we study the energy correlations (2) in planar \(\mathcal{N}=4\) SYM. To define the heavy state \(|H(q)\rangle\), we choose the operator \(O_{H}(x)\) to be a half-BPS scalar operator of the form \(O_{H}(x)=\mathrm{tr}[\phi^{K}(x)]\). Its scaling dimension is protected from quantum corrections, \(\Delta_{H}=K\), and the heavy state limit corresponds to \(K\to\infty\). Another advantage of this choice is that the two-point correlation \(\langle\widehat{\mathcal{E}}(n_{1})\widehat{\mathcal{E}}(n_{2})\rangle_{c}\), defining the leading angular-dependent contribution in (3), can be computed explicitly both at weak and strong coupling for arbitrary \(K\geq 2\). This computation is done most efficiently by the method based on correlation functions and Mellin transforms, developed in [16; 17; 18; 19; 20; 5].
Using the explicit expression for \(\langle\widehat{\mathcal{E}}(n_{1})\widehat{\mathcal{E}}(n_{2})\rangle_{c}\), we can verify that its dependence on the relative angle \(\cos\theta=(\vec{n}_{1}\,\vec{n}_{2})\) is a function of the scaling dimension of the source \(K\). We find that for \(K=2\), the energy-energy correlation at weak coupling in \(\mathcal{N}=4\) SYM has a shape very similar to that in perturbative QCD. Namely, it is peaked around the end points, \(\theta=0\) and \(\theta=\pi\), and is flat in between. This shape corresponds to a final state containing two jets, one per each scalar field in the definition of the source operator \(\mathrm{tr}[\phi^{2}(x)]\). As \(K\) increases, we observe that the peak at \(\theta=\pi\) disappears and the function flattens out for \(0<\theta<\pi\). Moreover, \(\langle\widehat{\mathcal{E}}(n_{1})\widehat{\mathcal{E}}(n_{2})\rangle_{c}\) vanishes as \(1/K\) in the limit \(K\to\infty\). To the lowest order in the
coupling we find, up to corrections suppressed by powers of \(\lambda\) and \(1/K\),
\[\langle\widehat{\mathcal{E}}(n_{1})\widehat{\mathcal{E}}(n_{2})\rangle_{c}=\frac{ \lambda}{8\pi^{2}K}\left[\frac{3}{z_{+}}+2\text{Li}_{2}(1-z)-6\log z-2\zeta_{2} -\frac{13}{2}+\frac{5}{2}\delta(z)\right]+\ldots\,, \tag{5}\]
where \(z=(1-(\vec{n}_{1}\vec{n}_{2}))/2\) in the rest frame of the source and \(1/z_{+}\) is the plus-distribution. The result (5) is in agreement with our expectation (4) for \(\Delta_{H}=K\) and \(h_{k}=k-1\).
The relation (5) holds at weak coupling for \(K\gg 1\). The situation changes at strong coupling for \(\lambda\gg K\gg 1\). In this limit, we find that the energy-energy correlation does not depend on \(K\) and it vanishes as \(1/\lambda\),
\[\langle\widehat{\mathcal{E}}(n_{1})\widehat{\mathcal{E}}(n_{2})\rangle_{c}= \frac{4\pi^{2}}{\lambda}\left(1-6z(1-z)\right)\,. \tag{6}\]
The relations (5) and (6) illustrate two different mechanisms of producing slowly-varying energy distributions. At weak coupling, the leading correction is controlled by the scaling dimension of the source. At strong coupling, it comes from the evolution of the state and is controlled by the coupling constant. In the case of QCD it is natural to expect that the suppression parameter \(1/K\) is fixed by the multiplicity of the produced hadrons. It is accompanied by a non-perturbative angle-dependent function.
Above we focused our discussion on energy correlations in QCD and its conformal cousin \(\mathcal{N}=4\) SYM. It is interesting to ask what happens for event shapes, or matrix elements of light-ray operators, in a general interacting CFT when the source becomes heavy.3 We present some evidence that the clustering structure (4) is valid for energy correlations in any CFT. Let us briefly explain the reason for that. According to the state-operator correspondence, a local operator \(O_{H}(x)\) with scaling dimension \(\Delta_{H}\) can be associated with an energy eigenstate \(E_{\text{cyl}}=\Delta_{H}/R\) in a CFT defined on a cylinder \(\mathbb{R}\times S^{d-1}\), where \(R\) is the radius of the sphere. A heavy operator with large scaling dimension \(\Delta_{H}\gg 1\) corresponds to a highly excited energy eigenstate. In an interacting theory it is expected to look thermal when simple enough observables are considered [22] (the statement known as the Eigenstate Thermalization Hypothesis). The clustering structure of energy correlations (4) in a heavy state is then related to the clustering of correlation functions of stress-energy tensors in the thermal state as the separation between the operators becomes large [23].
Footnote 3: In an upcoming work [21], event shapes in the large charge limit of CFTs with global \(U(1)\) symmetry that admit a superfluid description are studied. The results of [21] are compatible with the proposal of the present paper.
The paper is organized as follows. In Section 2 we compute the multi-point energy correlations \(\langle\mathcal{E}(n_{1})\ldots\mathcal{E}(n_{k})\rangle_{H}\) in a free theory and show that they satisfy (4) in the limit of large scaling dimension of the source. In Section 3 we consider the energy-energy correlation \(\langle\mathcal{E}(n_{1})\mathcal{E}(n_{2})\rangle_{H}\) in planar \(\mathcal{N}=4\) SYM at weak coupling. We choose the source operator to be the simplest half-BPS operator built out of \(K\) scalar fields and examine the dependence of the energy correlation on \(K\) at the leading order in the coupling constant (Born level). In Section 4 we compute the one- and two-loop perturbative corrections to the energy-energy correlation for
arbitrary \(K\) and show that they verify the relation (4) at large \(K\). In Section 5 we present an approach that allows us to compute the \(O(1/K)\) correction to (4) directly, without going through the details at finite \(K\). In Section 6 we obtain the contact terms necessary to describe the singular behavior of the energy correlations at the end points, for finite \(K\) and for \(K\to\infty\). In Section 7 we compute the same observable at strong coupling including the first stringy correction. In Section 8 we present arguments in favor of (4) in a generic interacting CFT. The paper contains several appendices and ancillary files.
## 2 Energy correlations in the free scalar theory
The energy correlations for a heavy source are expected to have the general form (4). In this section, we show that the relations (4) are indeed satisfied in the free scalar theory.
We consider a free massless scalar field \(\phi(x)\) and choose the source operator to be \(O_{K}(x)=\phi^{K}(x)\). It creates the state \(|H(q)\rangle=\int d^{4}x\,e^{iqx}O_{K}(x)|0\rangle\) containing \(K\) massless scalar particles with the total momentum \(q^{\mu}\) (with \(q^{0}>0\) and \(q^{2}>0\)).
The differential cross-section describing the probability to find \(K\) particles in a final state with on-shell momenta \(p_{i}\) (with \(p_{i}^{2}=0\)) is given by
\[d\sigma_{K}=(2\pi)^{4}\delta^{(4)}(q-\sum_{i=1}^{K}p_{i})\prod_{n=1}^{K}d \text{LIPS}(p_{n})\,, \tag{5}\]
where \(d\text{LIPS}(p)\) is the Lorentz invariant phase space measure of a particle with momentum \(p\),
\[d\text{LIPS}(p)=\frac{d^{4}p}{(2\pi)^{4}}2\pi\delta_{+}(p^{2})\,, \tag{6}\]
Figure 1: Multi-point energy correlation \(\langle\mathcal{E}(n_{1})\dots\mathcal{E}(n_{k})\rangle\) in the final state created by a heavy source.
with \(\delta_{+}(p^{2})=\delta(p^{2})\theta(p^{0})\). The total cross-section is given by
\[\sigma_{K}(q)=\int d\sigma_{K}=\int d^{4}x\,e^{iqx}\prod_{n=1}^{K} \int d\text{LIPS}(p_{n})\,\text{e}^{-ip_{n}x}. \tag{3}\]
In this representation, the phase space integral factorizes into a product of two-point Wightman functions
\[D(x)=\langle\phi(x)\bar{\phi}(0)\rangle=\int d\text{LIPS}(p)\, \text{e}^{-ipx}=\frac{1}{4\pi^{2}(-x^{2}+i0x^{0})}\,, \tag{4}\]
where the '\(+i0x^{0}\)' prescription ensures that the Fourier transform is different from zero for \(p^{0}>0\). The calculation shows that
\[\sigma_{K}(q)=\int d^{4}x\ \text{e}^{iqx}[D(x)]^{K}= \theta(q^{0})\theta(q^{2})\frac{2\pi^{3}(q^{2}/4)^{K-2}}{(2\pi)^{2K}\Gamma(K) \Gamma(K-1)}\,. \tag{5}\]
Notice that \(\sigma_{K}(q)\) grows as a power of \(q^{2}\) with the exponent being a linear function of the weight (or scaling dimension) \(K\) of the operator \(O_{K}\).
Let us now examine the \(k-\)point energy correlation in the \(K-\)particle final state described by the differential distribution (1). It is given by
\[\langle\mathcal{E}(n_{1})\ldots\mathcal{E}(n_{k})\rangle=\frac{K! \left/(K-k)!\right.}{\sigma_{K}(q)}\int d\sigma_{K}\,w_{n_{1}}(p_{1})\ldots w_ {n_{k}}(p_{k})\,, \tag{6}\]
where the weight factor \(w_{n}(p)\) selects the particle in the final state moving along the null direction \(n^{\mu}=(1,\vec{n})\) (with \(\vec{n}^{2}=1\)),
\[w_{n}(p)=p^{0}\delta^{(2)}\!\left(\frac{\vec{p}}{p^{0}}-\vec{n} \right). \tag{7}\]
The combinatorial factor \(K!\left/(K-k)!\right.\) in the numerator of (6) is due to the Bose symmetry of the scalar particles. It counts the total number of events in which \(k\) out of \(K\) particles enter the calorimeters. The diagrammatic representation of the relation (6) is shown in Figure 1.
Substituting (1) into (6) we can express \(\langle\mathcal{E}(n_{1})\ldots\mathcal{E}(n_{k})\rangle\) as an integral over the phase space of the \(K\) particles. The integral over the undetected \((K-k)\) particles gives rise to the total cross-section \(\sigma_{K-k}(\hat{q})\) where \(\hat{q}^{\mu}=q^{\mu}-\sum_{i=1}^{k}p_{i}^{\mu}\) is the total momentum of these particles. Then, \(\langle\mathcal{E}(n_{1})\ldots\mathcal{E}(n_{k})\rangle\) is given by the integral over the phase space of the detected \(k\) particles,
\[\langle\mathcal{E}(n_{1})\ldots\mathcal{E}(n_{k})\rangle=\frac{K! \left/(K-k)!\right.}{\sigma_{K}(q)}\int d\sigma_{k}\,w_{n_{1}}(p_{1})\ldots w_ {n_{k}}(p_{k})\,\sigma_{K-k}(\hat{q})\,. \tag{8}\]
Using (7), this expression can be written as a \(k-\)fold integral over the energies \(w_{i}=p_{i}^{0}\) of the detected particles,
\[\langle\mathcal{E}(n_{1})\ldots\mathcal{E}(n_{k})\rangle=\frac{c_ {K}}{c_{K-k}}\int_{0}^{\infty}\prod_{i=1}^{k}\frac{d\omega_{i}\,\omega_{i}^{2} }{2q^{2}(2\pi)^{3}}\theta(\hat{q}^{0})\theta(\hat{q}^{2})(\hat{q}^{2}/q^{2})^ {K-k-2}\,, \tag{9}\]
where \(\hat{q}^{\mu}=q^{\mu}-\sum_{i=1}^{k}\omega_{i}n_{i}^{\mu}\) and \(c_{K}=(4\pi)^{2K}K!\,(K-1)!\,(K-2)!\,.\)
The integral (9) involves the ratio of the invariant mass of the undetected particles and the total energy,
\[\frac{\hat{q}^{2}}{q^{2}}=1-\sum_{i=1}^{k}\frac{2(qn_{i})}{q^{2}} \omega_{i}+2\sum_{i<j}\frac{(n_{i}n_{j})}{q^{2}}\omega_{i}\omega_{j}\,. \tag{10}\]
Notice that \(\hat{q}^{2}/q^{2}<1\) and, therefore, for large \(K\) the dominant contribution to the integral (9) comes from the integration over the region \(\hat{q}^{2}/q^{2}=1+O(1/K)\), or equivalently \(\omega_{i}=O(1/K)\). Changing variables as
\[\omega_{i}=\frac{q^{2}}{2(K-k-2)(qn_{i})}\varepsilon_{i}\,, \tag{11}\]
we find in the limit \(K\to\infty\) with \(k\) held fixed
\[(\hat{q}^{2}/q^{2})^{K-k-2}=e^{-\sum_{i}\varepsilon_{i}}\left[1- \frac{1}{K}\!\left(\sum_{i<j}\varepsilon_{i}\varepsilon_{j}(1-z_{ij})+\frac{1} {2}\sum_{i}\varepsilon_{i}^{2}\right)+O(1/K^{2})\right], \tag{12}\]
where \(z_{ij}\) are dimensionless variables
\[z_{ij}=\frac{q^{2}(n_{i}n_{j})}{2(qn_{i})(qn_{j})}\,. \tag{13}\]
In the rest frame of the source, for \(q^{\mu}=(Q,\vec{0})\) and \(n_{i}=(1,\vec{n}_{i})\), these variables are related to the relative angles between the detectors on the sphere, \(z_{ij}=(1-\cos\theta_{ij})/2\).
Combining together (9) and (12) we find that the energy correlations in the large \(K\) limit take a remarkably simple form,
\[\langle\mathcal{E}(n_{1})\ldots\mathcal{E}(n_{k})\rangle=\left( \prod_{m=1}^{k}\frac{(q^{2})^{2}}{2(qn_{m})^{3}}\right)\int_{0}^{\infty}\prod _{i=1}^{k}\frac{d\varepsilon_{i}\,\varepsilon_{i}^{2}}{4\pi}\,e^{-\varepsilon _{i}}\] \[\times\left[1-\frac{1}{K}\!\left(\sum_{i<j}\varepsilon_{i} \varepsilon_{j}(1-z_{ij})+\frac{1}{2}\sum_{i}\varepsilon_{i}^{2}-\frac{3}{2}k (k+3)\right)+O(1/K^{2})\right]. \tag{14}\]
The integral in this relation describes an ensemble of \(k\) noninteracting particles whose energies \(\omega=\varepsilon Q/(2K)\) are distributed according to the law \(dP(\varepsilon)=d\varepsilon\,\varepsilon^{2}e^{-\varepsilon}\). The first term in the brackets in (14) is independent of the angular variables \(z_{ij}\) defined in (13). It describes a homogeneous distribution of the energy on the celestial sphere. The angular dependence comes from the second term, which is suppressed by a factor of \(1/K\) and involves a quadratic polynomial in the energy variables \(\varepsilon_{i}\). To higher orders in \(1/K\), the coefficient of \(1/K^{p}\) is given by a polynomial in \(\varepsilon_{i}\) of degree \(2p\). As we show below, it contributes to the connected part of the \(p-\)point correlations in (3).
The calculation of the integral in (14) yields
\[\langle{\cal E}(n_{1})\ldots{\cal E}(n_{k})\rangle=\left(\prod_{i=1}^{k}\frac{(q^ {2})^{2}}{4\pi(qn_{i})^{3}}\right)\left[1+\frac{1}{K}\!\left(\!\sum_{1\leq i<j \leq k}9z_{ij}-3k(k-1)\right)+O(1/K^{2})\right]. \tag{15}\]
Switching to the normalized operators \(\widehat{\cal E}(n_{i})\) on the left-hand side of (15), the term of order \(1/K\) on the right-hand side is identified with the sum of the two-point connected correlations in (3),
\[\langle\widehat{\cal E}(n_{i})\widehat{\cal E}(n_{j})\rangle_{c}=\frac{1}{K} \left(9z_{ij}-6\right)+O(1/K^{2})\,. \tag{16}\]
It is straightforward to compute the subleading terms in (15). Bringing them to the form (3), we can determine the higher-point connected correlations, e.g.
\[\langle\widehat{\cal E}(n_{i})\widehat{\cal E}(n_{j})\widehat{ \cal E}(n_{m})\rangle_{c}=\frac{1}{K^{2}} \Big{[}108(z_{ij}z_{im}+z_{ij}z_{jm}+z_{im}z_{jm})\] \[\qquad\qquad-126(z_{ij}+z_{im}+z_{jm})+114\Big{]}+O(1/K^{3})\,. \tag{17}\]
The relations (16) and (17) take the expected form (4) with \(\Delta_{H}=K\) and \(h_{k}=k-1\).
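The result (16) can also be checked numerically by sampling the flat massless phase space (1) with the standard RAMBO algorithm and histogramming the pair weights \(E_{i}E_{j}/Q^{2}\) in the angular variable \(z\). The short script below (our illustration, not part of the paper) estimates the normalized two-point correlation in the rest frame and compares its connected part with \((9z-6)/K\); the contact term at \(z=0\) comes from the self-correlations \(i=j\) and is therefore not probed by the \(i\neq j\) pairs.

```python
# Illustrative Monte Carlo check of the free-theory EEC in the large-K limit,
# using the standard RAMBO algorithm for flat massless phase space (Q = 1).
import numpy as np

def rambo_massless(K, Q, rng):
    """Flat K-body massless phase space with total momentum (Q, 0, 0, 0)."""
    c = 2.0 * rng.uniform(size=K) - 1.0
    phi = 2.0 * np.pi * rng.uniform(size=K)
    e = -np.log(rng.uniform(size=K) * rng.uniform(size=K))
    s = np.sqrt(1.0 - c**2)
    q = np.stack([e, e * s * np.cos(phi), e * s * np.sin(phi), e * c], axis=1)
    Qtot = q.sum(axis=0)
    M = np.sqrt(Qtot[0]**2 - Qtot[1:] @ Qtot[1:])
    b, gamma, x = -Qtot[1:] / M, Qtot[0] / M, Q / M
    a = 1.0 / (1.0 + gamma)
    bq = q[:, 1:] @ b
    E = x * (gamma * q[:, 0] + bq)                       # boosted and rescaled energies
    p = x * (q[:, 1:] + np.outer(q[:, 0], b) + np.outer(a * bq, b))
    return E, p

def connected_eec(K, n_events=20000, n_bins=20, seed=8):
    rng = np.random.default_rng(seed)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    hist = np.zeros(n_bins)
    iu = np.triu_indices(K, k=1)
    for _ in range(n_events):
        E, p = rambo_massless(K, 1.0, rng)
        n = p / E[:, None]                               # unit direction vectors
        z = 0.5 * (1.0 - n @ n.T)                        # angular variable per pair
        w = np.outer(E, E)                               # E_i E_j / Q^2 with Q = 1
        hist += 2.0 * np.histogram(z[iu], bins=edges, weights=w[iu])[0]   # ordered pairs
    zc = 0.5 * (edges[:-1] + edges[1:])
    return zc, hist / (n_events * np.diff(edges)) - 1.0  # subtract the disconnected part

K = 20
zc, conn = connected_eec(K)
print(np.c_[zc, conn, (9.0 * zc - 6.0) / K][:5])          # MC estimate vs (9z - 6)/K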
The relations (15), (16) and (17) are valid for \(n_{i}\neq n_{j}\) and they do not take into account the contribution of contact terms localized at \(n_{i}=n_{j}\). The presence of such terms can be detected as follows. By definition, \(\langle{\cal E}(n_{1})\ldots{\cal E}(n_{k})\rangle\) measures the energy flux on the celestial sphere in the directions specified by the unit vectors \(\vec{n}_{1},\ldots,\vec{n}_{k}\). The integral of \({\cal E}(n)n^{\mu}\) over the unit sphere \(\vec{n}^{2}=1\) yields the total momentum of the final state. As a consequence, the energy correlation has to satisfy the sum rule
\[\int d^{2}n_{k}\,n_{k}^{\mu}\,\langle{\cal E}(n_{1})\ldots{\cal E}(n_{k}) \rangle=q^{\mu}\langle{\cal E}(n_{1})\ldots{\cal E}(n_{k-1})\rangle\,. \tag{18}\]
One can check that the leading \(O(K^{0})\) term in (15) verifies this relation but this is not the case for the \(O(1/K)\) term. The reason for this is that the integral in (18) receives contributions from contact terms localized at coincident directions \(n_{i}\).
A contact term proportional to the product of \(L\) delta functions on the sphere, \(\delta(n_{1},n_{2})\ldots\delta(n_{1},n_{L+1})\), comes with the combinatorial factor \(\binom{K}{k}/\binom{K}{k-L}\sim 1/K^{L}\). To the leading order in \(1/K\), only the contact terms with \(L=1\) contribute. They are proportional to \(\sum_{1\leq i<j\leq k}\delta^{(2)}(\vec{n}_{i},\vec{n}_{j})\). The sum rule (18) allows us to fix their contribution to the energy correlation (15).
For instance, the contact term for the two-point connected correlation (16) is given by \(\frac{3}{2K}\delta(z_{ij})\). Adding it to (16), one can verify that the two-point correlation \(\langle{\cal E}(n_{1}){\cal E}(n_{2})\rangle\) satisfies the sum rule (18). A simple corollary of the sum rule (18) for the three-point energy correlation is that
\[\int d^{2}n_{m}\,\epsilon_{\mu\nu\rho\sigma}q^{\mu}n_{i}^{\nu}n_{j}^{\rho}n_{ m}^{\sigma}\,\langle{\cal E}(n_{i}){\cal E}(n_{j}){\cal E}(n_{m})\rangle=0. \tag{19}\]
In distinction to (18), it is not sensitive to contact terms. We have checked that \(\langle\widehat{\mathcal{E}}(n_{i})\widehat{\mathcal{E}}(n_{j})\widehat{ \mathcal{E}}(n_{m})\rangle_{c}\) in (17) satisfies (19).
We conclude this section by reiterating our main conjecture (4): The connected correlations in the free theory have the expected heavy weight behavior with \(\Delta_{H}=K\) and \(h_{k}=k-1\) (with \(k=2,3,\dots\)).
## 3 Energy correlations in \(\mathcal{N}=4\) SYM
In the previous section, we demonstrated that the energy correlation in a final state consisting of a large number \(K\) of free scalar particles is given by simple expressions like (16) and (17). The question arises, how does the interaction between the particles in the final state affect this result?
### Energy-energy correlation
In order to address this question, we consider the energy correlations in a particular four-dimensional gauge theory - the maximally supersymmetric \(\mathcal{N}=4\) Yang-Mills theory with gauge group \(SU(N_{c})\). For the sake of simplicity we shall concentrate on the two-point correlation \(\langle\mathcal{E}(n_{1})\mathcal{E}(n_{2})\rangle\). To create a final state with a large number of particles produced, we excite the vacuum by acting on it with a half-BPS operator of the form
\[O_{K}(x)=\text{tr}[\phi^{K}(x)]\,,\qquad\quad\phi(x)=\sum_{I=1}^{6}Y^{I}X^{I}\,. \tag{19}\]
It is built of six real scalar fields \(X^{I}\) with \(Y^{I}\) being an auxiliary null complex vector of \(SO(6)\) defining the orientation of \(O_{K}(x)\) in the isotopic \(R-\)space. This operator has the \(R-\)charge of the \([0,K,0]\) representation of \(SO(6)\sim SU(4)\), as well as conformal weight (or scaling dimension) \(K\), the latter being protected from quantum corrections by \(\mathcal{N}=4\) superconformal symmetry. It creates \(K\) scalar particles out of the vacuum. Going to the limit \(K\to\infty\), we encounter the final state discussed in the previous section.
The two-point correlation function of the operators (19) is protected from quantum corrections and is given by the product of \(K\) free scalar propagators (4) multiplied by an \(SU(N_{c})\) color factor. In the planar limit we have
\[\langle O_{K}(x)\bar{O}_{K}(0)\rangle=\frac{(Y\bar{Y})^{K}}{(4\pi^{2})^{K}} \frac{KN_{c}^{K}}{(-x^{2}+i0x^{0})^{K}}\,, \tag{20}\]
where \((Y\bar{Y})=\sum_{I=1}^{6}Y^{I}(Y^{I})^{*}\). Notice that the operators are not time ordered. To simplify the formulae, we put \((Y\bar{Y})=1\) in what follows. Then, the total cross-section for producing an arbitrary number of particles in the final state is given by (see (A.4) in [16])
\[\sigma_{\text{tot}}(q)=\int d^{4}x\ \text{e}^{iqx}\langle O_{K}(x)\bar{O}_{K}( 0)\rangle=KN_{c}^{K}\sigma_{K}(q)\,, \tag{21}\]
where \(\sigma_{K}(q)\) is defined in (5). Like (15), it is protected from quantum corrections.
The relation (16) expresses the total cross-section as the Fourier integral of the two-point Wightman correlation function (15). It is equivalent to (3) after inserting the sum over the final states between the operators. In a similar manner, the energy correlations (6) also admit a representation in terms of correlation functions involving two scalar operators (15), as well as the energy flow operators \({\cal E}(n_{i})\) (for the definition see Appendix A). The latter play the role of calorimeters detecting the particles in the final state.
For instance, the two-point energy correlation is given by
\[\langle{\cal E}(n_{1}){\cal E}(n_{2})\rangle_{K}=\sigma_{\rm tot}^{-1}(q)\int d ^{4}x\ {\rm e}^{iqx}\langle O_{K}(x){\cal E}(n_{1}){\cal E}(n_{2})\bar{O}_{K}(0) \rangle\,. \tag{17}\]
Lorentz symmetry allows us to fix the form of the correlation,
\[\langle{\cal E}(n_{1}){\cal E}(n_{2})\rangle_{K}=\frac{(q^{2})^{4}}{2(qn_{1}) ^{3}(qn_{2})^{3}}\,\frac{{\cal F}_{K}(z)}{(4\pi)^{2}}\,, \tag{18}\]
up to an arbitrary function \({\cal F}_{K}(z)\) of the Lorentz covariant angular separation \(z\equiv z_{12}\) between the detectors (recall (13)),
\[z=\frac{q^{2}(n_{1}n_{2})}{2(qn_{1})(qn_{2})}=\frac{1}{2}(1-( \vec{n}_{1}\vec{n}_{2}))=\frac{1}{2}(1-\cos\theta_{12})\,. \tag{19}\]
Here \(\theta_{12}\) is the angle between the unit vectors \(\vec{n}_{1}\) and \(\vec{n}_{2}\) in the rest frame of the source, \(q^{\mu}=(Q,\vec{0})\). The kinematical factor on the right-hand side of (18) has Lorentz weight \((-3)\) under the independent rescaling of the detector vectors \(n_{i}^{\mu}\), matching the corresponding weight of the energy operators \({\cal E}(n_{i})\) on the left-hand side (see (16)-(17)). The factor \((q^{2})^{4}\) in the numerator corresponds to the scaling dimension \((+1)\) of \({\cal E}(n_{i})\).
For \(K=2\) the energy correlation (18) has been studied both at weak and strong coupling [5; 16; 17; 18; 19; 20; 24; 25; 26; 27]. Our goal in this paper is to extend these results to \(K\geq 3\) and to understand the properties of the energy correlation in the limit of large \(K\). We show below that at large \(K\) the function \({\cal F}_{K}(z)\) satisfies
\[{\cal F}_{K}(z)=2+O(1/K)\,. \tag{20}\]
Substituting this relation in (18) we reproduce (4) for \(\Delta_{H}=K\) and \(h_{2}=1\).
In distinction with the total cross-section, the function \({\cal F}_{K}(z)\) is not protected in \({\cal N}=4\) SYM and depends on the coupling constant \(g_{\rm YM}^{2}\), as well as on the rank of the gauge group \(N_{c}\). In the planar limit, for \(N_{c}\to\infty\) with the 't Hooft coupling constant \(\lambda=g_{\rm YM}^{2}N_{c}\) held fixed, it admits a weak coupling expansion,
\[{\cal F}_{K}(z)={\cal F}_{K}^{(0)}(z)+\sum_{\ell=1}^{\infty}\left( \frac{\lambda}{4\pi^{2}}\right)^{\ell}{\cal F}_{K}^{(\ell)}(z)\,, \tag{21}\]
where \({\cal F}_{K}^{(0)}(z)\) refers to the Born approximation and \({\cal F}_{K}^{(\ell)}(z)\) with \(\ell\geq 1\) denotes the \(\ell-\)loop perturbative correction.
In the Born approximation, the energy correlation (14) is given by (10) evaluated for \(k=2\). Matching (14) with (16) we find that in the limit of large \(K\) the scaling function \({\cal F}_{K}^{(0)}(z)\) is given by
\[{\cal F}_{K}^{(0)}(z)=2+\frac{1}{K}(18z-12+3\delta(z))+O(1/K^{2})\,. \tag{17}\]
As was explained in the previous section, this relation describes an ensemble of \(K\) non-interacting particles whose energies are distributed according to the law (15). Turning on the interaction between these particles, one generates additional correlations of their energies. They are described by the functions \({\cal F}_{K}^{(\ell)}(z)\) in (10) which we compute below for \(\ell=1,2\). We will show that at large \(K\) the loop corrections scale as \({\cal F}_{K}^{(\ell)}(z)\sim 1/K\) for \(\ell\geq 1\).
### Sum rules
The function \({\cal F}(z)\) satisfies nontrivial conditions that follow from the sum rules (19).
We recall that the integral of the energy flow operator over the celestial sphere yields the total energy-momentum of the final state, \(\int d^{2}n\,n^{\mu}\langle{\cal E}(n)\rangle_{K}=q^{\mu}\). Combined with (19), this yields the following relations in the rest frame of the source,
\[\int d^{2}n_{1}\int d^{2}n_{2}\langle{\cal E}(n_{1}){\cal E}(n_{2 })\rangle_{K}=Q^{2}\,,\] \[\int d^{2}n_{1}\int d^{2}n_{2}\,(1-(\vec{n}_{1}\vec{n}_{2})) \langle{\cal E}(n_{1}){\cal E}(n_{2})\rangle_{K}=Q^{2}\,. \tag{18}\]
Substituting the general expression (14) of \(\langle{\cal E}(n_{1}){\cal E}(n_{2})\rangle_{K}\) and using (15), we find that \({\cal F}_{K}(z)\) has to satisfy the sum rules [6; 7; 8]
\[\int_{0}^{1}dz\,z\,{\cal F}_{K}(z)=\int_{0}^{1}dz\,(1-z)\,{\cal F}_{K}(z)=1\,. \tag{19}\]
These relations hold for an arbitrary coupling constant \(\lambda\).
Replacing \({\cal F}_{K}(z)\) with its perturbative expansion (10) and matching the coefficients of the powers of \(\lambda\) on both sides of (19) we find that the Born approximation \({\cal F}_{K}^{(0)}(z)\) alone produces the required right-hand side of (19),
\[\int_{0}^{1}dz\,z{\cal F}_{K}^{(0)}(z)=\int_{0}^{1}dz\,(1-z)\,{\cal F}_{K}^{( 0)}(z)=1\,. \tag{20}\]
The sum rules for the loop corrections \({\cal F}_{K}^{(\ell)}(z)\) are
\[\int_{0}^{1}dz\,z{\cal F}_{K}^{(\ell)}(z)=\int_{0}^{1}dz\,(1-z)\,{\cal F}_{K}^ {(\ell)}(z)=0\,,\qquad\ell\geq 1\,. \tag{21}\]
It is straightforward to verify that (17) satisfies the sum rules (20). Notice that the function accompanying \(1/K\) in (17) gives zero contribution to the sum rules (20).
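Both statements can be verified symbolically in a few lines (an illustrative check, not part of the paper): the \(1/K\) piece of (17), including the contact term \(3\delta(z)/K\), integrates to zero in both sum rules, so the totals remain equal to 1.

```python
# Symbolic check that the large-K Born result (17) satisfies the sum rules (20),
# and that the contact term 3*delta(z)/K is needed for the second one.
import sympy as sp

z, K = sp.symbols('z K', positive=True)
one_over_K_part = (18*z - 12)/K          # regular piece of the 1/K term in (17)
contact = 3/K                            # coefficient of delta(z); contributes only at z = 0

rule1 = sp.integrate(z*(2 + one_over_K_part), (z, 0, 1))                 # delta(z) drops out
rule2 = sp.integrate((1 - z)*(2 + one_over_K_part), (z, 0, 1)) + contact # delta(z) gives +3/K
print(sp.simplify(rule1), sp.simplify(rule2))                             # both reduce to 1
```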
### Born contribution
In this subsection, we compute the Born contribution to (3.1) at finite \(K\) and study the transition from \(K=2\) to the large \(K\) behavior (3.1).
In the previous section, we used the conventional amplitude approach to obtain the integral representation (2.9) for this contribution. Let us show how the same result follows from the representation (3.4) of the energy correlations in terms of correlation functions.
The energy flow operator \({\cal E}(n)\) is given by an integral involving the stress-energy tensor of \({\cal N}=4\) SYM, see (A.1) in Appendix A. The latter is bilinear in the scalar fields \(X^{I}\) constituting the source operators (3.1). As a consequence, the calculation of the four-point correlation function on the right-hand side of (3.4) in the Born approximation reduces to performing Wick contractions between the \(K\) scalar fields in the operators \(O_{K}(x)\) (source) and \(\bar{O}_{K}(0)\) (sink) and the two pairs of scalar fields in the energy flow operators \({\cal E}(n_{1})\) and \({\cal E}(n_{2})\).4
Footnote 4: A simple example of such a calculation is shown in (A.14) in Appendix A.
In this way, in the planar limit the correlation function \(\langle O_{K}(x){\cal E}(n_{1}){\cal E}(n_{2})\bar{O}_{K}(0)\rangle^{(0)}\) can be factorized into a product of \((K-2)\) propagators connecting the source and sink and the same correlation function evaluated for \(K=2\),
\[\langle O_{K}(x){\cal E}(n_{1}){\cal E}(n_{2})\bar{O}_{K}(0)\rangle^{(0)}= \frac{1}{4}K^{2}(K-1)[N_{c}D(x)]^{K-2}\langle O_{2}(x){\cal E}(n_{1}){\cal E}( n_{2})\bar{O}_{2}(0)\rangle^{(0)}\,, \tag{3.14}\]
where the scalar propagator \(D(x)\) is given by (2.4) and the superscript '\((0)\)' indicates the Born approximation. The relation (3.14) is represented diagrammatically in Figure 2. The combinatorial \(K-\)dependent factor in (3.14) counts the number of contributing diagrams.
Substituting (3.14) into (3.4) we find that \(\langle{\cal E}(n_{1}){\cal E}(n_{2})\rangle^{(0)}_{K}\) is the convolution of the Fourier transform of \(D^{K-2}(x)\) and \(\langle{\cal E}(n_{1}){\cal E}(n_{2})\rangle^{(0)}_{K=2}\). The former is given by the function \(\sigma_{K-2}\) defined in (5) and the latter is given in (3.22) below.
Figure 2: The correlation function (3.14) in the Born approximation and in the planar limit. Black and grey blobs depict scalar operators and energy detectors, respectively. A thin line represents a scalar propagator, a thick line represents a collection of scalar propagators. The total number of scalar propagators attached to the black blobs equals \(K\). In the planar limit, the contributing diagrams have the topology of a sphere with four marked points. The two types of diagrams shown on the left and right panels differ by the number of lines separating the two energy detectors (no line on the left, at least one line on the right).
This leads to
\[\langle\mathcal{E}(n_{1})\mathcal{E}(n_{2})\rangle_{K}^{(0)}=\frac{K^ {2}(K-1)}{(4\pi)^{2}\sigma_{\rm tot}(q)}N_{c}^{K}\int_{0}^{\infty}d\tau_{1}d \tau_{2}\,\tau_{1}^{2}\tau_{2}^{2}\,\sigma_{K-2}(q-n_{1}\tau_{1}-n_{2}\tau_{2})\,, \tag{3.15}\]
where \(n_{1}\tau_{1}\) and \(n_{2}\tau_{2}\) are the momenta of the particles entering the calorimeters. Replacing \(\sigma_{K-2}\) with its expression (5) and changing the integration variables as \(\tau_{i}=\omega_{i}q^{2}/(2(qn_{i}))\), we find that (3.15) takes the expected form (3.5) with
\[\mathcal{F}_{K}^{(0)}(z)= \frac{1}{2}(K-3)(K-2)^{2}(K-1)^{2}K\] \[\times\int_{0}^{1}d\omega_{1}d\omega_{2}\,\omega_{1}^{2}\omega_{2 }^{2}\,(1-\omega_{1}-\omega_{2}+z\omega_{1}\omega_{2})^{K-4}\,, \tag{3.16}\]
where the integration is restricted to the region \(1-\omega_{1}-\omega_{2}+z\omega_{1}\omega_{2}\geq 0\).
The result can be expressed in terms of a hypergeometric function,
\[\mathcal{F}_{K}^{(0)}(z)=\frac{2(K-2)(K-1)}{(K+1)(K+2)}\,_{2}F_{1} \left(3,3;K+3|z\right). \tag{3.17}\]
This relation is valid for \(0<z<1\). The values of \(z=0\) and \(z=1\) correspond to the situation where the two detectors are located, respectively, atop of each other or antipodal to each other. In general, \(\mathcal{F}_{K}(z)\) contains contact terms proportional to \(\delta(z)\) and \(\delta(1-z)\). We discuss them below in Section 3.4.
For specific values of \(K\) the relation (3.17) simplifies, e.g. 5
Footnote 5: For \(K=2\) the function \(\mathcal{F}_{K=2}^{(0)}(z)\) is given by a sum of contact terms, see (3.22).
\[\mathcal{F}_{K=2}^{(0)}(z) =0\,,\] \[\mathcal{F}_{K=3}^{(0)}(z) =\frac{18(z-2)}{z^{4}}-6(z^{2}-6z+6)\frac{\log(1-z)}{z^{5}}\,,\] \[\mathcal{F}_{K=4}^{(0)}(z) =\frac{8\left(10z^{2}-39z+30\right)}{z^{5}}+\frac{24(1-z)\left(z ^{2}-8z+10\right)\log(1-z)}{z^{6}}\,, \tag{3.18}\]
where \(0<z<1\).
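The closed form (3.17) can be checked against the explicit expressions (3.18) and against the sum rules by direct numerical evaluation. The following sketch (Python with mpmath, shown only as an illustration) compares the two representations for \(K=3\) and evaluates the integrals quoted in (3.23); the missing \(3/(K+1)\) in the second integral is restored by the contact term discussed in Section 3.4.

```python
# Numerical cross-check of the Born-level result for K=3:
# (i) the hypergeometric representation (3.17) agrees with the explicit
#     expression in (3.18);
# (ii) int_0^1 dz z F = 1 and int_0^1 dz (1-z) F = 1 - 3/(K+1), cf. (3.23).
from mpmath import mp, mpf, hyp2f1, log, quad

mp.dps = 30
K = 3

def F_hyp(z):
    # equation (3.17)
    return mpf(2*(K - 2)*(K - 1))/((K + 1)*(K + 2))*hyp2f1(3, 3, K + 3, z)

def F_explicit(z):
    # the K=3 line of equation (3.18)
    z = mpf(z)
    return 18*(z - 2)/z**4 - 6*(z**2 - 6*z + 6)*log(1 - z)/z**5

for zval in [mpf('0.1'), mpf('0.5'), mpf('0.9')]:
    assert abs(F_hyp(zval) - F_explicit(zval)) < mpf('1e-15')

# the sliver (1 - 1e-20, 1) is excluded; for K=3 it contributes only O(1e-19)
one = 1 - mpf('1e-20')
print(quad(lambda z: z*F_hyp(z), [0, one]))              # ~ 1
print(quad(lambda z: (1 - z)*F_hyp(z), [0, one]),        # ~ 1 - 3/(K+1) = 0.25
      1 - mpf(3)/(K + 1))
```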
The function \(\mathcal{F}_{K}^{(0)}(z)\) is plotted in Figure 3 for several values of \(K\). For \(z\to 0\) it approaches a finite value,
\[\mathcal{F}_{K}^{(0)}(z)=\frac{2(K-2)(K-1)}{(K+1)(K+2)}+O(z)\,. \tag{3.19}\]
For \(z\to 1\) it grows logarithmically for \(K=3\) and stays finite for \(K\geq 4\),
\[\mathcal{F}_{K=3}^{(0)}(z) =-6\log(1-z)-18+O(1-z)\,,\] \[\mathcal{F}_{K\geq 4}^{(0)}(z) =\frac{2K}{K-3}+O(1-z)\,. \tag{3.20}\]
In the large \(K\) limit we get instead
\[{\cal F}_{K}^{(0)}(z)\stackrel{{ K\gg 1}}{{=}}2+\frac{6}{K}(3z-2)+O(1/K^{2})\,, \tag{3.21}\]
in agreement with (3.9). We repeat that the \(z-\)dependence only appears at the level of the \(O(1/K)\) corrections.
We observe that the function \({\cal F}_{K}^{(0)}(z)\) flattens out as \(K\) increases and becomes constant for \(K\to\infty\). This property is in agreement with our expectation (1.4) that the correlations between the particles in the final state are suppressed at large \(K\).
### Contact terms for \(K=2\)
As was mentioned above, the relation (3.17) is valid for \(0<z<1\). One of the reasons for this is that \({\cal F}_{K}^{(0)}(z)\) is expected to contain contact terms localized at \(z=0\) and \(z=1\). The easiest way to reveal their presence is to apply the sum rules (3.12).
For \(K=2\) the naive answer \({\cal F}_{K=2}^{(0)}(z)=0\) in (3.18) does not satisfy the sum rules (3.12). In reality, in this case the final state consists of two particles that move back-to-back in the rest frame of the source. As a consequence, the energy-energy correlation is different from zero only if the two calorimeters are atop of each other on the celestial sphere (\(z=0\)) or antipodal (\(z=1\)). This means that it is given by the sum of two contact terms, \({\cal F}_{K=2}^{(0)}(z)=C_{1}\delta(z)+C_{2}\delta(1-z)\), with coefficients that can be determined from the sum rules (3.12),
\[{\cal F}_{K=2}^{(0)}(z)=\delta(z)+\delta(1-z)\,. \tag{3.22}\]
Figure 3: Born approximation of the energy-energy correlation \({\cal F}_{K}^{(0)}(z)\) for several values of the source weight \(K\) as a function of the angular separation \(z=(1-\cos\theta_{12})/2\), see (3.17). For finite \(K\), it is peaked around the end-point \(z=1\). It flattens out as \(K\) increases and approaches the constant value \(2\) for \(K\to\infty\) (the curve labeled ‘asympt’).
Substituting this relation into (3.1), one reproduces the known expression for the energy correlation \(\langle\mathcal{E}(n_{1})\mathcal{E}(n_{2})\rangle_{K=2}\) in the Born approximation.
For \(K\geq 3\) we use (3.17) to get
\[\int_{\epsilon}^{1-\epsilon}dz\,z\,\mathcal{F}_{K}^{(0)}(z)=1+O( \epsilon)\,,\] \[\int_{\epsilon}^{1-\epsilon}dz\,(1-z)\mathcal{F}_{K}^{(0)}(z)=1- \frac{3}{K+1}+O(\epsilon)\,, \tag{3.23}\]
where a small parameter \(\epsilon>0\) was introduced to exclude the contribution of the end-points \(z=0\) and \(z=1\). In close analogy with the case \(K=2\), the latter is expected to be given by a sum of contact terms \(\delta(z)\) and \(\delta(1-z)\). Comparing (3.23) with the sum rules (3.12), we observe that the contact terms should contribute zero to the first sum rule and \(3/(K+1)\) to the second one. Put together, these conditions fix the coefficients of the contact terms.
The resulting expression for the function \(\mathcal{F}_{K}^{(0)}(z)\) is
\[\mathcal{F}_{K\geq 3}^{(0)}(z)=\frac{2(K-1)(K-2)}{(K+1)(K+2)}\,_{2}F_{1}\,(3,3;3+K|z)+\frac{3\delta(z)}{K+1}\,. \tag{3.24}\]
As compared with (3.17), it is valid for \(0\leq z\leq 1\). At large \(K\) the contact term in (3.24) agrees with (3.9). Combined with (3.1), the relation (3.24) yields the expression for the energy-energy correlation in planar \(\mathcal{N}=4\) SYM, to the lowest order in the coupling constant.
For finite \(K\), the function (3.24) is peaked around the end-point \(z=1\), indicating that, as in the two-jet final state, most of the energy in the final state is deposited at calorimeters located at two antipodal points on the celestial sphere. Going to the limit \(K\to\infty\) we find that, in agreement with (1.4), the energy-energy correlation ceases to depend on the angular separation of the calorimeters. This means that the jets disappear and the energy is homogeneously distributed on the celestial sphere.
## 4 Energy correlations at weak coupling
In this section, we study the corrections to the energy correlation (3.1) in the planar \(\mathcal{N}=4\) SYM theory due to the interaction between the particles in the final state. At weak coupling, the Feynman diagrams contributing to (3.1) can be obtained from those shown in Figure 2 by adding interaction vertices. In the planar limit, the interaction can only occur between adjacent scalar particles. Depending on the choice of these particles we can distinguish three cases of interaction between: (i) undetected particles, (ii) one detected particle and undetected ones, and (iii) two detected and undetected particles. The first and the second class of diagrams constitute the total cross-section (2.5) and the single energy correlation \(\langle\mathcal{E}(n)\rangle\). Because both quantities are protected from quantum corrections in \(\mathcal{N}=4\) SYM, the above mentioned diagrams do not contribute to the unprotected energy correlations (3.1). We are therefore left with the Feynman diagrams in which two detected particles interact among themselves and with other undetected
particles. To the first few orders in the 't Hooft coupling constant, examples of such diagrams are shown in Figure 4.
As follows from the form of these diagrams, at order \(O(\lambda^{\ell})\) the interaction can affect \((\ell+1)\) particles at most. At order \(O(\lambda)\) the two detected particles can only interact with each other. At order \(O(\lambda^{2})\) there is an additional possibility for them to interact with one undetected particle (spectator). At order \(O(\lambda^{\ell})\) the number of spectators cannot exceed \((\ell-1)\).
For \(\ell<K\) the number of interacting particles \(\ell+1\) is smaller or equal to the number \(K\) of particles produced (this is equivalent to the absence of wrapping corrections to the energy correlation (10)). The remaining \((K-\ell-1)\) particles propagate freely from the source to the sink. In close analogy with (11), their contribution to the correlation function \(\langle O_{K}(x){\cal E}(n_{1}){\cal E}(n_{2})\bar{O}_{K}(0)\rangle\) is just a power of the free scalar propagator,
\[\langle O_{K}(x){\cal E}(n_{1}){\cal E}(n_{2})\bar{O}_{K}(0)\rangle^{(\ell)}= \frac{K^{2}}{(\ell+1)^{2}}[N_{c}D(x)]^{K-\ell-1}\langle O_{\ell+1}(x){\cal E} (n_{1}){\cal E}(n_{2})\bar{O}_{\ell+1}(0)\rangle^{(\ell)}. \tag{12}\]
The combinatorial factor on the right-hand side gives the ratio of the number of Wick contractions of scalars in the two correlators in the planar limit. We would like to emphasize that the relation (12) is only valid for \(\ell\leq K-1\). In particular, in the limit \(K\to\infty\) it holds to any given order of the weak coupling expansion.
At one and two loops the relation (12) reads
\[\langle O_{K}(x){\cal E}(n_{1}){\cal E}(n_{2})\bar{O}_{K}(0)\rangle ^{(1)} =\frac{K^{2}}{4}[N_{c}D(x)]^{K-2}\langle O_{2}(x){\cal E}(n_{1}){ \cal E}(n_{2})\bar{O}_{2}(0)\rangle^{(1)}\,, \tag{13}\] \[\langle O_{K}(x){\cal E}(n_{1}){\cal E}(n_{2})\bar{O}_{K}(0) \rangle^{(2)} =\frac{K^{2}}{9}[N_{c}D(x)]^{K-3}\langle O_{3}(x){\cal E}(n_{1}){ \cal E}(n_{2})\bar{O}_{3}(0)\rangle^{(2)}\,. \tag{14}\]
Notice that the first relation holds for \(K\geq 2\) and the second one for \(K\geq 3\).
Figure 4: Diagrammatic representation of the relation (12). The heavy operator creates \(K\) particles. Two detected particles interact with \((\ell-1)\) particles, the remaining \((K-\ell-1)\) undetected particles propagate freely from the source to the sink.
Comparing the last two relations with (3.14) we observe an important difference. At large \(K\) the expression on the right-hand side of (4.2) and (4.3) is suppressed by a factor of \(1/K\) as compared with (3.14). The reason for this is that the leading \(O(K^{3})\) contribution to the Born level result (3.14) comes from the diagrams shown in Figure 2 on the right panel. These diagrams do not contribute to (4.1) in the planar limit as explained above. In application to the energy correlation (3.4), (3.5) this implies that the loop corrections to the scaling function \({\cal F}_{K}(z)\) are suppressed by a factor of \(1/K\) as compared to the Born contribution (3.9).
### Recurrence relation for the perturbative corrections
The relations (4.2) and (4.3) can be used to compute the correction to the energy correlation (3.8) at one and two loops. Let us introduce the auxiliary functions
\[G_{K}^{(0)}(q)=\int d^{4}x\,e^{iqx}[D(x)]^{K-2}N_{c}^{-2}\langle O_{2}(x){\cal E}(n_{1}){\cal E}(n_{2})\bar{O}_{2}(0)\rangle^{(0)}\,,\] \[G_{K}^{(\ell)}(q)=\int d^{4}x\,e^{iqx}[D(x)]^{K-\ell-1}N_{c}^{-\ell-1}\langle O_{\ell+1}(x){\cal E}(n_{1}){\cal E}(n_{2})\bar{O}_{\ell+1}(0)\rangle^{(\ell)}\,, \tag{4.4}\]
where \(\ell=1,2\) and the scalar propagator \(D(x)\) is given by (2.4). They satisfy the differential equation
\[\square_{q}\,G_{K}^{(\ell)}(q)=G_{K-1}^{(\ell)}(q)\,. \tag{4.5}\]
Below we show that it yields a recurrence relation for the functions \({\cal F}_{K}^{(\ell)}(z)\) (for \(\ell\) fixed) that allows us to efficiently determine the loop corrections to the energy correlation (3.8).
We combine equations (3.14) and (4.1) together with (3.4) and (3.5) to get
\[{\cal F}_{K}^{(0)}(z) =\pi^{2}(K-1)K\frac{(n_{1}n_{2})^{3}}{z^{3}}\frac{G_{K}^{(0)}(q)} {q^{2}\sigma_{K}(q^{2})}\,,\] \[{\cal F}_{K}^{(\ell)}(z) =4\pi^{2}K\frac{(n_{1}n_{2})^{3}}{(\ell+1)^{2}z^{3}}\frac{G_{K}^{ (\ell)}(q)}{q^{2}\sigma_{K}(q^{2})}\,, \tag{4.6}\]
where \(\sigma_{K}(q^{2})\) is given by (2.5). Next, the relation (4.5) leads to differential equations for the functions \({\cal F}_{K}^{(0)}(z)\) and \({\cal F}_{K}^{(\ell)}(z)\),
\[\left[\mathbb{D}-K(K-1)\right]{\cal F}_{K}^{(0)}(z)+K(K-1){\cal F }_{K-1}^{(0)}(z) =0\,, \tag{4.7}\] \[\left[\mathbb{D}-K(K-1)\right]{\cal F}_{K}^{(\ell)}(z)+K(K-2){\cal F }_{K-1}^{(\ell)}(z) =0\,, \tag{4.8}\]
where \(\ell\geq 1\) and \(\mathbb{D}\) is the second-order differential operator
\[\mathbb{D}=\frac{1}{z}\frac{d}{dz}(1-z)\frac{d}{dz}z^{3}\,. \tag{4.9}\]
The relations (4.7) and (4.8) are valid for \(K\geq 3\) and \(K\geq 2+\ell\), respectively.
One can check that the Born approximation result (3.17) verifies (4.7). In fact, we could have obtained (3.17) (up to an overall normalization) by solving the recurrence relation (4.7) starting from \({\cal F}_{K=2}^{(0)}(z)=0\) (up to contact terms) and requiring \({\cal F}_{K}^{(0)}(z)\) to be regular as \(z\to 0\).
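This statement is easy to test numerically. The sketch below (Python with mpmath, for illustration only) applies the operator \(\mathbb{D}\) of (4.9) to the closed form (3.17), using the standard derivative rule for the hypergeometric function, and checks the recurrence (4.7) at a few sample points.

```python
# Check that the Born-level result (3.17) obeys the recurrence differential
# equation (4.7), [D - K(K-1)] F_K + K(K-1) F_{K-1} = 0, where D is the
# operator (4.9). The derivatives of 2F1 are taken with the standard rule
# d/dz 2F1(a,b;c;z) = (a b / c) 2F1(a+1,b+1;c+1;z), so no numerical
# differentiation is needed.
from mpmath import mp, mpf, hyp2f1

mp.dps = 30

def F_and_derivatives(K, z):
    # F_K^(0)(z) of (3.17) together with its first and second derivatives
    c = mpf(2*(K - 2)*(K - 1))/((K + 1)*(K + 2))
    f0 = c*hyp2f1(3, 3, K + 3, z)
    f1 = c*mpf(9)/(K + 3)*hyp2f1(4, 4, K + 4, z)
    f2 = c*mpf(9*16)/((K + 3)*(K + 4))*hyp2f1(5, 5, K + 5, z)
    return f0, f1, f2

def D_of_F(K, z):
    # D F = -(3 z F + z^2 F') + (1-z)(6 F + 6 z F' + z^2 F''), obtained by
    # expanding (1/z) d/dz (1-z) d/dz z^3 F
    f0, f1, f2 = F_and_derivatives(K, z)
    return -(3*z*f0 + z**2*f1) + (1 - z)*(6*f0 + 6*z*f1 + z**2*f2)

for K in [4, 5, 7]:
    for z in [mpf('0.2'), mpf('0.6')]:
        FK  = F_and_derivatives(K, z)[0]
        FK1 = F_and_derivatives(K - 1, z)[0]
        residual = D_of_F(K, z) - K*(K - 1)*FK + K*(K - 1)*FK1
        assert abs(residual) < mpf('1e-20')

print("recurrence (4.7) verified at the sample points")
```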
### One-loop correction
In this subsection we show that the recurrence relation (4.8) can be effectively used to compute the energy correlation function \({\cal F}_{K}^{(1)}(z)\) at one loop. The starting point of the recurrence is the one-loop expression for this function at \(K=2\)[17]6
Footnote 6: This result has been obtained by the Mellin method which we review in Section 5.
\[{\cal F}_{K=2}^{(1)}(z)=-\frac{\log(1-z)}{z^{2}(1-z)}\,, \tag{4.10}\]
that is valid for \(0<z<1\).
#### Boundary conditions
The solutions to the second-order differential equations (4.8) are defined up to the zero modes of the operator \((\mathbb{D}-K(K-1))\),
\[{\cal F}_{K}^{(1)}\to{\cal F}_{K}^{(1)}+c_{1}\,z^{K-3}\,_{2}F_{1} \left(\genfrac{}{}{0.0pt}{}{K,K}{2K}\bigg{|}z\right)+c_{2}\,z^{-2-K}\,_{2}F_{1} \left(\genfrac{}{}{0.0pt}{}{1-K,1-K}{2-2K}\bigg{|}z\right)\,. \tag{4.11}\]
This freedom can be fixed by imposing boundary conditions. Notice that the last term on the right-hand side of (4.11) grows as \(z^{-K-3}\) for \(z\to 0\), thus producing a divergent contribution to the sum rules (3.13). To avoid it, we put \(c_{2}=0\).
According to (4.10), the function \({\cal F}_{K=2}^{(1)}(z)\) behaves as \(O(1/z)\) at small \(z\). Examining the differential equation (4.8) for \(K=3,4,\dots\), one finds that its solution has to have the following asymptotic behavior at small \(z\),
\[{\cal F}_{K}^{(1)}(z)=\frac{a_{K}}{z}+b_{K}\log z+O(z^{0})\,. \tag{4.12}\]
For \(K=2\) we find from (4.10) that \(a_{2}=1\) and \(b_{2}=0\).
To fix the coefficient \(c_{1}\) in (4.11) we examine the behavior of the function \({\cal F}_{K}^{(1)}(z)\) around \(z=1\). In the Born approximation, we deduce from (3.20) that \({\cal F}_{K}^{(0)}(z)\) grows logarithmically for \(K=3\) and it approaches a finite value for \(K\geq 4\). Requiring \({\cal F}_{K}^{(1)}(z)\) to stay finite for \(K\geq 4\) as \(z\to 1\) implies \(c_{1}=0\). This condition can be derived from the sum rules (3.13) as follows. Multiplying (4.8) by \(z\) and integrating by parts, we obtain
\[\int_{0}^{1-\epsilon}dz\,\partial_{z}[(1-z)\partial_{z}(z^{3}\,{ \cal F}_{K}^{(1)}(z))]=[(1-z)\partial_{z}(z^{3}\,{\cal F}_{K}^{(1)}(z))]\Big{|} _{z=1-\epsilon}=O(\epsilon)\,. \tag{4.13}\]
The boundary term at \(z=0\) vanishes due to (4.12). This derivation relies on the sum rules (3.13), i.e. we assume that for any \(K\geq 3\) the relation \(\int_{0}^{1-\epsilon}dz\,z{\cal F}_{K}^{(1)}(z)=O(\epsilon)\) holds. As we show below, it is satisfied for \(K\geq 4\) only. We therefore deduce from (4.13) that \({\cal F}_{K}^{(1)}(z)\) should stay finite at \(z\to 1\) and \(K\geq 4\). This leads to \(c_{1}=0\) because the term proportional to \(c_{1}\) in (4.11) grows logarithmically as \(z\to 1\).
For \(K=3\) one can check using (4.10) that \(\int_{0}^{1-\epsilon}dz\,z{\cal F}_{2}^{(1)}(z)=\frac{1}{2}\log^{2}\epsilon\) as \(\epsilon\to 0\). In this case, in order to satisfy the sum rule (3.13), one has to add to (4.10) a contact term localized
at \(z=1\). As a result, for \(K=3\) the right-hand side of (4.13) scales as \(\frac{3}{2}\log^{2}\epsilon\) leading to \({\cal F}^{(1)}_{K=3}(z)=\frac{1}{2}\log^{3}(1-z)\) at \(z\to 1\). The term proportional to \(c_{1}\) in (4.11) yields a subleading contribution in this limit. To fix the coefficient \(c_{1}\) we can use the sum rule (3.13). 7
Footnote 7: For \(K>3\) the sum rule (3.13) leads to \(c_{1}=0\).
We can now turn to solving the differential equation (4.8) supplemented with the boundary conditions specified above. Our strategy is as follows: we first solve explicitly the differential equation for \({\cal F}^{(1)}_{K=3}(z)\). Based on the form of \({\cal F}^{(1)}_{K=3}(z)\) we then work out the general form of the solution for \(K\geq 4\). Finally, we combine all \({\cal F}^{(1)}_{K}(z)\) into a two-variable generating function and provide a closed-form expression for it. In this way we determine the one-loop energy correlation \({\cal F}^{(1)}_{K}(z)\) for any weight \(K\).
#### Solution for \(K=3\)
Solving the differential equation (4.8) for \(K=3\), we replace \({\cal F}^{(1)}_{K=2}(z)\) with its expression (4.10) and pick the solution that satisfies (4.12). The resulting expression for \({\cal F}^{(1)}_{K=3}(z)\) is given by a linear combination of special functions of different transcendental weights accompanied by seven polynomials of \(z\),
\[{\cal F}^{(1)}_{3}(z) =\frac{1}{z^{5}}\Big{[}c_{1}^{[3]}(z)L(z)+c_{2}^{[3]}(z)\text{Li }_{2}(z)+c_{3}^{[3]}(z)\log(z)\log(1-z)\] \[\quad+c_{4}^{[3]}(z)\log^{2}(1-z)+c_{5}^{[3]}(z)\log(1-z)+c_{6}^{[ 3]}(z)\log(z)+c_{7}^{[3]}(z)\Big{]}\,. \tag{4.14}\]
Here the function \(L(z)\) contains only terms of the maximal logarithmic weight 3: 8
Footnote 8: The terms in the second line are not included in the polynomials \(c_{5}^{[3]}\) and \(c_{7}^{[3]}\) in order to keep their coefficients rational.
\[L(z) :=\text{Li}_{3}(1-z)+\frac{1}{2}\text{Li}_{2}(z)\log(1-z)-\frac{1 }{12}\log^{3}(1-z)+\frac{1}{2}\log^{2}(1-z)\log(z)\] \[\quad-\zeta_{2}\log(1-z)-\zeta_{3}\,. \tag{4.15}\]
The coefficients \(c_{m}^{[3]}(z)\) (with \(m=1,\ldots,7\)) are polynomials of degree 2:
\[c_{1}^{[3]} =-6\left(z^{2}-6z+6\right), c_{2}^{[3]} =18(3-2z)\,,\] \[c_{3}^{[3]} =9\left(z^{2}-6z+6\right), c_{4}^{[3]} =-9(z-3)(z-1)\,,\] \[c_{5}^{[3]} =9(z-1)(4z-9)\,, c_{6}^{[3]} =27(2-z)z,\] \[c_{7}^{[3]} =9(3-2z)z\,. \tag{4.16}\]
The coefficients \(c_{5}^{[3]}\) and \(c_{7}^{[3]}\) depend linearly on the zero-mode coefficient \(c_{1}\) in (4.11). As explained above, its value is fixed by the sum rule \(\int_{0}^{1}dz\,z{\cal F}^{(1)}_{3}(z)=0\).
The function \({\cal F}^{(1)}_{3}(z)\) is plotted in Figure 5. For \(z\to 0\) it has the asymptotic behavior
\[{\cal F}^{(1)}_{K=3}(z)=\frac{3}{4z}-\frac{3}{10}\log(z)+\frac{24}{25}+O\left( z\right)\,, \tag{4.17}\]
in agreement with (4.12). For \(z\to 1\) we find
\[\mathcal{F}^{(1)}_{K=3}(z)=\frac{1}{2}\log^{3}(1-z)+\frac{1}{2}\pi^{2}\log(1-z)+6 \zeta(3)+3\pi^{2}+9+O(1-z)\,. \tag{4.18}\]
As expected, this function grows as a power of \(\log(1-z)\).
It is straightforward to verify that the function (4.14) does not satisfy the second sum rule in (3.13), \(\int_{0}^{1}dz\,(1-z)\mathcal{F}^{(1)}_{3}(z)\neq 0\). As in the Born approximation (3.23), this indicates that \(\mathcal{F}^{(1)}_{3}(z)\) should contain a contact term \(\sim\delta(z)\). It is needed to regularize the contribution of the pole in (4.17) to \(\int_{0}^{1}dz\,(1-z)\mathcal{F}^{(1)}_{3}(z)\). More details about the contact terms can be found in Section 6.
#### General solution for \(K\geq 4\)
The solution of the recurrence relation (4.8) for any \(K\geq 4\) takes the same form as (4.14),
\[\mathcal{F}^{(1)}_{K}(z) = \frac{1}{z^{K+2}}\Big{[}c^{[K]}_{1}(z)L(z)+c^{[K]}_{2}(z)\text{ Li}_{2}(z)+c^{[K]}_{3}(z)\log(z)\log(1-z) \tag{4.19}\] \[+c^{[K]}_{4}(z)\log^{2}(1-z)+c^{[K]}_{5}(z)\log(1-z)+c^{[K]}_{6}(z )\log(z)+c^{[K]}_{7}(z)\Big{]}\,,\]
where \(c^{[K]}_{m}(z)\) (with \(m=1,\ldots,7\)) are polynomials of degree \(K-1\). They are determined recursively by solving (4.8) and requiring \(\mathcal{F}^{(1)}_{K}(z)\) to be finite for \(z\to 1\) and to satisfy (4.12) for \(z\to 0\). In particular, the polynomial \(c^{[K]}_{1}(z)\) has a simple closed form,
\[c^{[K]}_{1}(z)=-K(K-2)\,(z-1)^{K-3}\left[K(K+1)-4Kz+2z^{2}\right]\,. \tag{4.20}\]
Instead of presenting explicit formulas for the remaining polynomials at each weight \(K\), in the next subsection we provide a generating function for all \(\mathcal{F}^{(1)}_{K}(z)\).
The functions \(\mathcal{F}^{(1)}_{K}(z)\) are plotted in Figure 5 for several values of the weight \(K\). For small \(z\) they behave as
\[\mathcal{F}^{(1)}_{K}(z)=\frac{3}{K+1}z^{-1}-\frac{6(K-2)}{(K+1)(K+2)}\log z+ O(z^{0})\,, \tag{4.21}\]
in agreement with (4.12). For \(K=2\) and \(K=3\), they also agree with (4.10) and (4.17), respectively. For \(z\to 1\), we find
\[\mathcal{F}^{(1)}_{K}(z=1)=-\,\frac{9}{(K-3)^{2}}+\frac{6}{K-2}+\frac{24\zeta _{2}-15-12H^{(2)}_{K-4}}{12(K-1)}-\frac{24\zeta_{2}+33-12H^{(2)}_{K-4}}{4(K-3) }\,, \tag{4.22}\]
where \(H^{(2)}_{K-4}=\sum_{p=1}^{K-4}p^{-2}\) is the generalized harmonic number of order two. The relation (4.22) is valid for \(K\geq 4\).
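For orientation, the following small numerical sketch (Python with mpmath, not taken from the ancillary files) evaluates (4.22) for increasing \(K\) and illustrates that \(K\,\mathcal{F}^{(1)}_{K}(z=1)\) slowly approaches the limiting value \(-7/2-\pi^{2}/3\) quoted in the caption of Figure 5 and in (4.29) below.

```python
# Evaluate the end-point values F_K^(1)(z=1) from (4.22) and watch
# K * F_K^(1)(1) approach the limiting value -7/2 - 2*zeta(2) = -7/2 - pi^2/3.
from mpmath import mp, mpf, zeta, pi

mp.dps = 25

def H2(n):
    # generalized harmonic number of order two, H_n^(2) = sum_{p=1}^n 1/p^2
    return sum(mpf(1)/p**2 for p in range(1, n + 1))

def F1_at_1(K):
    # equation (4.22), valid for K >= 4
    z2 = zeta(2)
    return (-9/mpf(K - 3)**2 + 6/mpf(K - 2)
            + (24*z2 - 15 - 12*H2(K - 4))/(12*mpf(K - 1))
            - (24*z2 + 33 - 12*H2(K - 4))/(4*mpf(K - 3)))

limit = -mpf(7)/2 - pi**2/3          # ~ -6.7899
for K in [4, 10, 100, 1000, 10**4]:
    print(K, K*F1_at_1(K), limit)
```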
#### Generating function
It is convenient to introduce an auxiliary variable \(t\) and combine the one-loop functions \({\cal F}_{K}^{(1)}(z)\) of all weights \(K\) in the generating function
\[G(z,t):=\sum_{K\geq 2}t^{K}{\cal F}_{K}^{(1)}(z)\,. \tag{4.23}\]
Replacing \({\cal F}_{K}^{(1)}(z)\) with its expression (4.19), we find that each of the seven polynomials \(c_{m}^{[K]}(z)\) (with \(m=1,\ldots,7\)) gives rise to a function
\[G_{m}(z,t):=\frac{1}{z^{2}}\sum_{K\geq 3}\left(\frac{t}{z}\right)^{K}c_{m}^{[K ]}(z)\;,\quad m=1,\ldots,7\,. \tag{4.24}\]
Then the generating function (4.23) takes the form
\[G(z,t)= G_{1}(z,t)\,L(z)+G_{2}(z,t)\,{\rm Li}_{2}(z)+G_{3}(z,t)\log(z)\log(1- z)+G_{4}(z,t)\log^{2}(1-z)\] \[+G_{5}(z,t)\log(1-z)+G_{6}(z,t)\log(z)+G_{7}(z,t)+t^{2}{\cal F}_{ K=2}^{(1)}\,. \tag{4.25}\]
Figure 5: The one-loop correction \((K+1)\,z\,{\cal F}_{K}^{(1)}(z)\) for several values of the weight \(K\). The limiting curve for \(K\to\infty\) is labeled ‘asympt’, see (4.27). Owing to the normalization prefactor, all plots originate from 3 at \(z=0\), see (4.21). For \(z\to 1\), the one-loop correction has a pole at \(K=2\), a logarithmic singularity at \(K=3\) and it is finite for \(K\geq 4\), see (4.10), (4.18) and (4.22). Its value at \(z=1\) decreases with \(K\) and approaches \(-\frac{7}{2}-\frac{\pi^{2}}{3}\approx-6.7899\) in the limit \(K\to\infty\), see (4.29).
We have found closed-form expressions for the functions \(G_{m}(z,t)\). They are given by linear combinations of classical polylogarithms of weights up to three, decorated with rational terms. More precisely, \(G_{1}\), which resulted from the summation of the polynomials (4.20), is a rational function; \(G_{2}\), \(G_{3}\), \(G_{4}\) contain logarithms and rational terms; \(G_{5}\), \(G_{6}\) contain dilogarithms, logarithms and rational terms; and \(G_{7}\) is the most complicated function containing tri-logarithms, dilogarithms, logarithms and rational terms. The arguments of the polylogarithms depend both on \(z\) and \(t\) in such a way that the generating function \(G(z,t)\) is given by 2dHPL functions [28]. We provide an explicit expression for \(G(z,t)\) in the ancillary file.
We can apply the generating function (4.25) to show that \(\mathcal{F}_{K}^{(1)}(z)\) behaves as \(O(1/K)\) at large \(K\),
\[\mathcal{F}_{K}^{(1)}(z)=\frac{1}{K}\varphi^{(1)}(z)+O(1/K^{2})\,. \tag{4.26}\]
Combining this relation with (4.23), we expect that \(G(z,t)\) should scale as \(-\log(1-t)\varphi^{(1)}(z)\) for \(t\to 1\). Indeed, examining the generating function \(G(z,t)\) for \(t\to 1\) we reproduce the expected behavior and identify the function
\[\varphi^{(1)}(z)=\frac{3}{z}+2\text{Li}_{2}(1-z)-6\log(z)-2\zeta_{2}-\frac{13 }{2}\,. \tag{4.27}\]
We therefore conclude that, as announced earlier, the one-loop correction to the energy correlation (3.8) is suppressed at large \(K\) by a factor of \(1/K\), as compared with the Born contribution (3.9).
For \(z\to 0\) the relation (4.26) takes the expected form (4.12) with \(a_{K}=3/K+O(1/K^{2})\) and \(b_{K}=-6/K+O(1/K^{2})\),
\[\mathcal{F}_{K}^{(1)}(z)=\frac{1}{K}\bigg{(}\frac{3}{z}-6\log z+O(z^{0}) \bigg{)}\,. \tag{4.28}\]
It also agrees with (4.21). For \(z\to 1\) we have, in agreement with (4.22),
\[\mathcal{F}_{K}^{(1)}(z)=\frac{1}{K}\bigg{(}-\frac{7}{2}-2\zeta_{2}+O\left(z- 1\right)\bigg{)}. \tag{4.29}\]
In the previous relations we keep only the leading term at large \(K\).
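These limits are straightforward to confirm numerically; the short sketch below (Python with mpmath, for illustration only) evaluates \(\varphi^{(1)}(z)\) of (4.27) at the end points and compares with (4.28) and (4.29).

```python
# Quick numerical sanity check of the leading large-K function phi^(1)(z) in (4.27):
# its value at z=1 reproduces -7/2 - 2*zeta(2), cf. (4.29), and for small z it
# behaves as 3/z - 6*log(z) + const, cf. (4.28).
from mpmath import mp, mpf, polylog, log, zeta

mp.dps = 25

def phi1(z):
    # equation (4.27)
    return 3/z + 2*polylog(2, 1 - z) - 6*log(z) - 2*zeta(2) - mpf(13)/2

print(phi1(mpf(1)), -mpf(7)/2 - 2*zeta(2))   # both ~ -6.7899
z = mpf('1e-6')
print(phi1(z) - 3/z + 6*log(z))              # approaches a constant (-13/2) as z -> 0
```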
### Two-loop correction
The two-loop corrections \(\mathcal{F}_{K}^{(2)}(z)\) satisfy the recurrence differential equations (4.8). These equations are valid for \(K\geq 4\) and they allow us to express \(\mathcal{F}_{K}^{(2)}(z)\) for arbitrary \(K\geq 4\) in terms of \(\mathcal{F}_{K=3}^{(2)}(z)\). The latter can be determined by extending the relation (4.8) to the case \(K=3\) with a properly defined inhomogeneous term \(\mathcal{F}_{\text{aux}}^{(2)}(z)\).
#### Two-loop solution of the recurrence relation
Below we show that \(\mathcal{F}_{K}^{(2)}(z)\) can be expanded over a basis of polylogarithm functions \(\mathcal{L}_{i}^{(w)}(\sqrt{z})\), namely
\[z^{K+2}\mathcal{F}_{K}^{(2)}(z)= \sum_{w=0}^{5}\sum_{i=1}^{l_{w}}a_{i,K}^{(w)}(z)\,\mathcal{L}_{i}^{ (w)}(\sqrt{z})+\sqrt{z}\,b_{K}(z)\,\mathcal{L}_{1}^{(2)}(\sqrt{z})\] \[+\delta_{K,3}\;z^{2}\left[-6z\mathcal{L}_{3}^{(3)}(\sqrt{z})+6z \mathcal{L}_{3}^{(2)}(\sqrt{z})+6\sqrt{z}\,\mathcal{L}_{1}^{(2)}(\sqrt{z}) \right]\,. \tag{114}\]
Here \(a_{i,K}^{(w)}(z)\) and \(b_{K}(z)\) are polynomials in \(z\) of degree \((K-1)\) and \((K-2)\), respectively, with rational \(K\)-dependent coefficients. The second line in (114) is relevant only for \(K=3\).
The functions \(\mathcal{L}_{i}^{(w)}(\sqrt{z})\) are pure polylogarithms of transcendental weight \(w\). They are given by multi-linear combinations (with rational coefficients) of harmonic polylogarithms (HPL) [29] with argument \(\sqrt{z}\) and zeta-values \(\zeta_{2}\), \(\zeta_{3}\), \(\zeta_{5}\), which are graded by the transcendental weight. The functions \(\mathcal{L}_{i}^{(w)}(\sqrt{z})\) do not depend on \(K\). At weight \(w\) we employ \(l_{w}\) linearly independent polylogarithmic combinations, \(\mathcal{L}_{i}^{(w)}\) with \(i=1,\ldots,l_{w}\), in the expression (114). The counting is as follows:
\[l_{0}=1\,,\qquad l_{1}=2\,,\qquad l_{2}=5\,,\qquad l_{3}=8\,,\qquad l_{4}=5\,,\qquad l_{5}=3\,. \tag{115}\]
In total, the ansatz (114) contains \(\sum_{w=0}^{5}l_{w}=24\) polylogarithmic functions \(\mathcal{L}_{i}^{(w)}(\sqrt{z})\). They take the following form
* Weight-zero function \(\mathcal{L}_{1}^{(0)}\equiv 1\);
* Weight-one functions \[\{\mathcal{L}_{i=1,2}^{(1)}\}=\{H_{-1}-H_{1},\,2H_{0}\}=\{\log(1-z),\log(z) \}\,;\] (116)
* Weight-two functions \[\{\mathcal{L}_{i=1,\ldots,5}^{(2)}\}=\{2H_{-1,0}+2H_{1,0},\,-H_{- 1,-1}+H_{-1,1}+H_{1,-1}-4H_{1,0}-H_{1,1},\] \[\qquad\qquad\qquad\qquad\qquad H_{1,1}-H_{1,-1}+2H_{0,1}-2H_{0,- 1}-H_{-1,1}+H_{-1,-1}+\zeta_{2},\] \[\qquad\qquad\qquad\qquad H_{0,0}+H_{1,0},\,\zeta_{2}\}\,.\] (117)
Here we omit the arguments of the HPLs for the sake of brevity, \(H_{a,b,\ldots}\equiv H_{a,b,\ldots}(\sqrt{z})\)[29]. The remaining higher-weight combinations \(\mathcal{L}_{i}^{(w)}\) have a similar form. They are defined in the ancillary file.
The polylogarithmic combinations of HPLs of transcendental weights up to four, \(\mathcal{L}_{i}^{(w)}\) with \(0\leq w\leq 4\), can be expressed as classical polylogarithms. However, this does not apply to the weight-five combinations \(\mathcal{L}_{i}^{(5)}\), so we prefer to present all weights using the same HPL notations.
#### Solution procedure
In this subsection we present some details of the derivation of (4.30). Namely, we first compute \(\mathcal{F}^{(2)}_{K=3}(z)\) and then apply (4.8) for \(\ell=2\) and \(K\geq 4\) to obtain \(\mathcal{F}^{(2)}_{K}(z)\).
Before we turn to the computation of \(\mathcal{F}^{(2)}_{K=3}(z)\), let us examine the function \(\mathcal{F}^{(2)}_{K=2}(z)\) which defines the two-loop correction to the energy correlation (3.4) for \(K=2\). It was computed in [18] using the Mellin approach summarized in Section 5.1 and Appendix C. As shown there, the function \(\mathcal{F}^{(2)}_{K=2}(z)\) admits a Mellin integral representation (B.16). The result of the Mellin integration in (B.16) can be expanded over the basis of special functions \(\mathcal{L}^{(w)}_{i}\equiv\mathcal{L}^{(w)}_{i}(\sqrt{z})\) defined above,
\[z^{3}\mathcal{F}^{(2)}_{K=2}(z)=2z(1+\sqrt{z})\,\mathcal{L}^{(2 )}_{1}+2z\mathcal{L}^{(2)}_{2}+2z^{2}\mathcal{L}^{(2)}_{3}\] \[+\frac{z}{(1-z)}\left[\left(4+\frac{13z}{2}\right)\mathcal{L}^{( 3)}_{1}-z\mathcal{L}^{(3)}_{2}+z(1+2z)\mathcal{L}^{(3)}_{3}+\mathcal{L}^{(3) }_{4}+z\mathcal{L}^{(3)}_{5}\right]. \tag{4.34}\]
To find the function \(\mathcal{F}^{(2)}_{K=3}\), we have to solve the differential equation (4.8) for \(K=3\). This may seem contradictory, since earlier we declared that at two loops equation (4.8) is valid only for \(K\geq 4\). In reality, we can extend its validity to \(K=3\) by making the following observation. As explained in Appendix B, the correlator (B.5) takes a factorized form for all \(K\geq 3\) with the common function \(F^{(2)}_{K\geq 3}\) given in (B.9). As compared with the function \(F^{(2)}_{K=2}\) that defines the correlator for \(K=2\), it is given by a different linear combination of the same two-loop conformal integrals. We recall that the factorized form (B.5) is at the origin of the differential recursion (4.8) (see (4.4) and (4.5)). However, in the case at hand the starting point of the recursion is not the 'physical' \(\mathcal{F}^{(2)}_{K=2}(z)\) from (4.34) but a different 'auxiliary' function \(\mathcal{F}^{(2)}_{\rm aux}(z)\) that is defined below in (5.19).
In this way we arrive at the differential equation
\[(\mathbb{D}-6)\,\mathcal{F}^{(2)}_{K=3}(z)+3\mathcal{F}^{(2)}_{\rm aux}(z)=0\,. \tag{4.35}\]
This is a particular case of the general recursion (4.8), specified to \(\ell=2\) and \(K=3\), but with a different inhomogeneous term \(\mathcal{F}^{(2)}_{\rm aux}(z)\), not to be confused with \(\mathcal{F}^{(2)}_{K=2}(z)\). In other words, we have extended the relation (4.8), initially derived for \(K\geq 4\), to the case \(K=3\).
To find the explicit expression for \(\mathcal{F}^{(2)}_{\rm aux}(z)\) we need to repeat the Mellin integration in (B.16) with a different linear combination of the same Mellin amplitudes, see (B.18). The result has an expansion in the basis of polylogarithmic functions \(\mathcal{L}^{(w)}_{i}(\sqrt{z})\), similar to (4.34):
\[z^{3}\mathcal{F}^{(2)}_{\rm aux}(z)= (1+\sqrt{z})\,\mathcal{L}^{(2)}_{1}+\mathcal{L}^{(2)}_{2}+z \mathcal{L}^{(2)}_{3}+\frac{1}{(1-z)}\Big{[}\mathcal{L}^{(3)}_{1}+z\mathcal{ L}^{(3)}_{2}+z^{2}\mathcal{L}^{(3)}_{3}\Big{]}. \tag{4.36}\]
At two loops, the differential equations (4.35) and (4.8) can be solved recursively for \(K=3,4,\dots\) in the same manner as in the one-loop case. The general solution takes the form (4.30). The polynomials \(a^{(w)}_{i,K}(z)\) and \(b_{K}(z)\) in (4.30) can be found by substituting the ansatz (4.30) into (4.35) and (4.8) and by taking into account that the polylogarithmic functions \(\mathcal{L}^{(w)}_{i}(\sqrt{z})\)
satisfy the differential equations
\[\frac{d}{dz}\mathcal{L}_{i}^{(w)}(\sqrt{z})=\sum_{j=1}^{l_{w-1}} \left[\frac{c_{i,j}^{(w)0^{2}}}{z}+\frac{c_{i,j}^{(w)0}}{\sqrt{z}}+\frac{c_{i,j}^ {(w)-}}{\sqrt{z}-1}+\frac{c_{i,j}^{(w)+}}{\sqrt{z}+1}\right]\mathcal{L}_{j}^{(w -1)}(\sqrt{z}), \tag{110}\]
where \(c_{i,j}^{(w)}\) are rational coefficients. Since the functions \(\mathcal{L}_{i}^{(w)}(\sqrt{z})\) are linearly independent, the relations (110) lead to recurrence relations for the coefficients of the polynomials \(a_{i,K}^{(w)}(z)\) and \(b_{K}(z)\).
The two boundary conditions needed to fix the solution of the second-order differential equations are the absence of \(z^{-2-K}\) singularities at \(z\to 0\), and the finiteness of \(\mathcal{F}_{K\geq 4}^{(2)}(z)\) at \(z\to 1\). 9 For \(K=3\), the function \(\mathcal{F}_{K=3}^{(2)}(z)\) has a logarithmic singularity at \(z=1\) (see (109) below). As before, we use the first sum rule in (104) to fix the remaining freedom in the solution for \(\mathcal{F}_{K=3}^{(2)}(z)\). The sum rule requires the evaluation of integrals involving HPLs. In Section 6.3, we describe a semi-numerical implementation of the sum rules for the two-loop energy correlation (108) that yields exact values of the remaining unknown.
Footnote 9: Let us note that in order to impose the boundary conditions it is sufficient to work out only the leading terms in the expansions \(z\sim 0\) and \(z\sim 1\) of the HPL combinations \(\mathcal{L}_{i}^{(w)}\), and they do not depend on \(K\). Then fixing the boundary conditions is completely algebraic.
We have solved explicitly the recurrence relations for the polynomials \(a_{i,K}^{(w)}(z)\) and \(b_{K}(z)\), up to \(K=100\). In the ancillary file we provide a Mathematica code that constructs the solutions recursively. The procedure is completely algebraic and it only requires solving a system of linear equations with rational coefficients at the \(K\)-th step. Its solution is then fed into the linear system of the \((K+1)\)-th step, etc.
Unlike the one-loop case, we have not attempted to find a closed-form expression for \(\mathcal{F}_{K}^{(2)}(z)\) for arbitrary \(K\). Instead, we used the obtained solutions for several small values of \(K\) to deduce the asymptotic behavior of \(\mathcal{F}_{K}^{(2)}(z)\) as \(z\to 0\) for any \(K\),
\[\mathcal{F}_{K}^{(2)}(z)=\frac{3}{K+1}\frac{\log(z)}{z}+\frac{c_{1 /z,K}^{(2)}}{K+1}\frac{1}{z}+O(\log z)\,, \tag{111}\]
where
\[c_{1/z,K}^{(2)}=-\frac{3}{2}\zeta_{3}+3\zeta_{2}-\frac{7}{2}- \frac{3}{K-1}-\frac{3}{K}-\frac{3}{K+1}\,. \tag{112}\]
The expression on the right-hand side of (111) contains a pole at \(z=0\). To make \(\mathcal{F}_{K}^{(2)}(z)\) integrable at the origin, one has to add to (111) the contact terms localized at \(z=0\). These terms are derived in Section 6.3 for any \(K\). The two-loop EEC \(\mathcal{F}_{K=2}^{(2)}\), calculated in [18], has a similar asymptotic behavior at \(z\to 0\), see (103).
As compared with the one-loop result (104), the two-loop correction (111) is enhanced at small \(z\) by a factor of \(\log z\). Adding together (104) and (111), we keep the most singular terms at each loop order to get
\[a\mathcal{F}_{K}^{(1)}(z)+a^{2}\mathcal{F}_{K}^{(2)}(z)+\dots= \frac{3}{K+1}\left[\frac{a}{z}+\frac{a^{2}}{z}\log z+\dots\right], \tag{113}\]
where \(a=\lambda/(4\pi^{2})\). This relation is in agreement with the expected small \(z\) behavior of the energy correlation
\[\mathcal{F}_{K}(z)\sim z^{-1+\gamma(\lambda)/2}\,, \tag{108}\]
which follows from the operator product expansion on the celestial sphere (see (103) below). Here \(\gamma(\lambda)=\lambda/(2\pi^{2})+O(\lambda^{2})\) does not depend on \(K\) and coincides with the anomalous dimension of the twist-two operators of spin 3.
For \(z\to 1\), the two-loop correction to the energy correlation \(\mathcal{F}_{K}^{(2)}(z)\) approaches a finite value for \(K\geq 4\). At large \(K\) it decreases as \(O(1/K)\), see (104) and (119) below. For \(K=3\), the function \(\mathcal{F}_{K=3}^{(2)}(z)\) grows for \(z\to 1\) as a power of \(\log(1-z)\),
\[\mathcal{F}_{K=3}^{(2)}(z)= -\frac{3}{160}\log^{5}(1-z)-\frac{\zeta_{2}}{2}\log^{3}(1-z)+ \frac{3\zeta_{3}}{2}\log^{2}(1-z)-\frac{33}{4}\zeta_{4}\log(1-z)\] \[+57\zeta_{5}-72\zeta_{2}\zeta_{3}-\frac{513}{4}\zeta_{4}-\frac{216}{2}\zeta_{3}+270\zeta_{2}\log(2)-45\zeta_{2}+3+O(1-z)\,. \tag{109}\]
For \(K=2\), the function \(\mathcal{F}_{K=2}^{(2)}(z)\) has a stronger singularity at \(z\to 1\), namely it grows as \(O(\log^{3}(1-z)/(1-z))\) at the end point, see (117).
The functions \((K+1)\,z\,\mathcal{F}_{K}^{(2)}(z)\) are plotted in Figure 6 for several values of \(K\). The additional factor of \((K+1)z\) is inserted to soften the singularity at \(z=0\), see (102), and to ensure that the function stays finite as \(K\to\infty\). As follows from (101), the integral of \((K+1)z\mathcal{F}_{K}^{(2)}(z)\) over the interval \(0<z<1\) has to vanish. This explains why the functions shown in Figure 6 change sign at some value of \(z\) for \(K\geq 3\). For \(K=2\) the function takes negative values for \(0<z<1\) and its integral vanishes after one takes into account the contact term proportional to \(\delta(1-z)\). We recall that such contact terms are absent for \(K\geq 3\).
We observe that, in agreement with (108), the curves in Figure 6 look alike for small \(z\). The situation is different at finite \(z\): the functions \((K+1)\mathcal{F}_{K}^{(2)}(z)\) become flatter and flatter as \(K\) increases. We have observed the same pattern at one loop. We recall that the \(z-\)dependence describes the angular distribution of the energy on the celestial sphere. The flattening of the \(z-\)dependence of the energy correlation at large \(K\) implies that the energy distribution becomes more homogeneous.
## 5 Heavy sources at weak coupling
In the previous section, we computed the energy correlations at weak coupling for an arbitrary weight \(K\) of the source operators and then examined their asymptotic behavior at large \(K\). In this section, we present another approach that allows us to obtain this asymptotic behavior directly, without going through the details at finite \(K\). It is based on the method developed in [16; 17; 18; 20] for the calculation of event shapes with sources of weight \(K=2\). The main advantage of this method is that it is applicable to the energy correlations (100) for an arbitrary coupling constant.
### Mellin approach
The energy flow operator \(\mathcal{E}(n)\) in (3.4) is given by the stress-energy tensor integrated along the light-ray defined by the null vector \(n^{\mu}\) (see (A.1)). As a consequence, the energy correlation (3.4) is related to the four-point function \(\langle O_{K}(1)T(2)T(3)O_{K}(4)\rangle\) involving two heavy source operators and two stress-energy tensors. In the maximally supersymmetric \(\mathcal{N}=4\) SYM theory, the stress-energy tensor belongs to the same supermultiplet as the simplest half-BPS operator (3.1) of weight \(K=2\).
The \(\mathcal{N}=4\) superconformal Ward identities relate the above mentioned four-point correlation function to that of four half-BPS operators \(\langle O_{K}(1)O_{2}(2)O_{2}(3)O_{K}(4)\rangle\). The properties of these correlators are summarized in Appendix B. In particular, for an arbitrary coupling constant they are expressed in terms of a single function \(\Phi_{K}(u,v)\) depending on the two conformal
cross-ratios10
Figure 6: The two-loop correction to the energy correlation \((K+1)\,z\,\mathcal{F}_{K}^{(2)}(z)\) for several values of the weight \(K\) of the source. The limiting curve for \(K\to\infty\) is labeled ‘asympt’. Owing to the normalization prefactor, all plots scale as \(3\log z+O(z^{0})\) at \(z\to 0\), see (4.38). For \(z\to 1\) the two-loop correction develops a pole at \(K=2\), scales as a power of \(\log(1-z)\) at \(K=3\) (see (4.42)) and approaches a finite value for \(K\geq 4\). The latter decreases with \(K\) and is given by \(\frac{49}{144}+\frac{181}{180}\zeta_{2}+\frac{85}{6}\zeta_{3}+\frac{7}{4}\zeta_{4}\approx 20.9176\) in the limit \(K\to\infty\), see (5.24) below.
Footnote 10: This choice of cross-ratios is convenient for the Mellin procedure described in Appendix C.
\[u=\frac{x_{12}^{2}x_{34}^{2}}{x_{14}^{2}x_{23}^{2}}\,,\qquad\qquad v =\frac{x_{13}^{2}x_{24}^{2}}{x_{14}^{2}x_{23}^{2}}\,. \tag{116}\]
It proves convenient to use the Mellin representation for this function,
\[\Phi_{K}(u,v)=\int\frac{dj_{1}dj_{2}}{(2\pi i)^{2}}M_{K}(j_{1},j_ {2})\,u^{j_{1}}v^{j_{2}}\,, \tag{117}\]
where the integration contours run parallel to the imaginary axis to the left of the origin, \(\operatorname{Re}j_{i}=-\delta\) with \(0<\delta<1\). The Mellin amplitude \(M_{K}(j_{1},j_{2})\) depends on the coupling constant. The expansion coefficients \(M_{K}^{(\ell)}\) of \(M_{K}(j_{1},j_{2})\) at weak coupling in the planar limit,
\[M_{K}(j_{1},j_{2})=\sum_{\ell\geq 0}\left(\frac{ \lambda}{4\pi^{2}}\right)^{\ell}M_{K}^{(\ell)}(j_{1},j_{2})\,, \tag{118}\]
are not all independent. As we have already discussed in the beginning of Section 4, and provide more details in Appendix B (see (101)), the \(\ell\)-loop perturbative corrections with \(K\geq\ell+1\) are all the same,
\[M_{K=\ell+1}^{(\ell)}=M_{K=\ell+i}^{(\ell)}\,\quad i\geq 2. \tag{119}\]
Applying the \(\mathcal{N}=4\) superconformal Ward identities, one can express the correlation function in (11) in terms of the function (117) or equivalently the Mellin amplitude \(M_{K}(j_{1},j_{2})\). The result is (see [19; 25] for details)
\[\langle O_{K}(x)\mathcal{E}(n_{1})\mathcal{E}(n_{2})\bar{O}_{K}( 0)\rangle=\frac{8}{(n_{1}n_{2})^{3}}\frac{1}{(x^{2})^{K+1}}\frac{d^{2}}{d \gamma^{2}}(1-\gamma)^{2}\gamma^{2}\frac{d^{2}}{d\gamma^{2}}\mathcal{G}_{K}( \gamma)\,, \tag{120}\]
where the function \(\mathcal{G}_{K}(\gamma)\) is given by a Mellin integral involving the same amplitude \(M_{K}(j_{1},j_{2})\) as in (117),
\[\mathcal{G}_{K}(\gamma)=-\frac{1}{16\pi^{3}}\int\frac{dj_{1}dj_{ 2}}{(2\pi i)^{2}}\left[\frac{\Gamma(1-j_{1}-j_{2})}{\Gamma(1-j_{1})\Gamma(1-j _{2})}\right]^{2}M_{K}(j_{1},j_{2})\,\gamma^{j_{1}+j_{2}-1}\,, \tag{121}\]
and \(\gamma\) is a dimensionless Lorentz scalar variable invariant under the independent rescaling of the null vectors \(n_{i}\),
\[\gamma=\frac{2(xn_{1})(xn_{2})}{x^{2}(n_{1}n_{2})}\,. \tag{122}\]
The differential operator acting on \(\mathcal{G}_{K}(\gamma)\) in (120) stems from the relation between the correlators with different spins mentioned in the beginning of this subsection.
Substituting (104) into (105) and going through the calculation we can obtain the expression for the energy correlation \(\langle\mathcal{E}(n_{1})\mathcal{E}(n_{2})\rangle_{K}\). It takes the expected form (104) with the function \(\mathcal{F}_{K}(z)\) given by the double Mellin integral
\[\mathcal{F}_{K}(z)=\int\frac{dj_{1}dj_{2}}{(2\pi i)^{2}}\left[\frac{\Gamma(1-j _{1}-j_{2})}{\Gamma(1-j_{1})\Gamma(1-j_{2})}\right]^{2}M_{K}(j_{1},j_{2})\, \mathcal{K}_{K}(j_{1}+j_{2},z)\,. \tag{106}\]
Here the Mellin amplitude \(M_{K}(j_{1},j_{2})\) defines the four-point correlation function (103) and depends on the coupling constant. The function \(\mathcal{K}_{K}(j_{1}+j_{2},z)\) is called the detector kernel. It is independent of the coupling constant and is given by
\[\mathcal{K}_{K}(j,z)=\sum_{k=0,1,2}z^{-j-k}\,(-1)^{k}\binom{2}{k}\,\frac{\Gamma(j)\Gamma(j+k)}{\Gamma(j-2)\Gamma(j+k-2)}\] \[\times\frac{\Gamma(K-1)\Gamma(K+1)}{\Gamma(K-2+j+k)\Gamma(K+3-j-k)}\,{}_{2}F_{1}\left(\left.\begin{matrix}3-j-k,3-j-k\\ K+3-j-k\end{matrix}\right|\!z\right). \tag{107}\]
The derivation of this relation can be found in Appendix C, see (106) for \(s_{1}=s_{2}=2\). We see that the dependence on the angular variable \(z\) in (106) comes only through the detector kernel. We remark that the relation (106) holds for any value of the coupling constant.
One can check that the function \(\mathcal{F}_{K}(z)\) defined in (106) satisfies the differential equation (107), irrespective of the choice of the Mellin amplitude \(M_{K}(j_{1},j_{2})\), provided that the latter does not depend on \(K\). For example, this is the case at weak coupling in the planar limit for sufficiently large \(K\), namely \(K\geq\ell+1\), see (105).
In the special case \(K=2\) the kernel simplifies drastically and it is given by (118) (see [16]-[19]). In the next subsection, we discuss the asymptotic behavior of the kernel (107) at large \(K\). As we show below, the main advantage of the Mellin representation (106) is that it allows us to demonstrate directly that the corrections to the energy correlation scale as \(1/K\) for a finite coupling constant. In particular, at weak coupling
\[\mathcal{F}_{K}^{(\ell)}(z)=\frac{1}{K}\varphi^{(\ell)}(z)+O(1/K^{2})\,. \tag{108}\]
This relation generalizes (107) to any loop order.
### Leading term of the large \(K\) expansion
For arbitrary \(K\) the detector kernel (107) is given by a sum of three terms, each containing a hypergeometric function. At large \(K\) these functions can be replaced by 1. In this way, we obtain that, to the leading order in \(1/K\), the detector kernel (107) takes the form
\[\mathcal{K}_{K}(j,z)=\frac{1}{K}\kappa(j,z)+O(K^{-2})\,, \tag{109}\]
where the leading coefficient function \(\kappa(j,z)\) is given by
\[\kappa(j,z)=(j-2)(j-1)\,z^{-2-j}\,\left[j(1+j)+2j(1-j)z+(j-1)(j-2)z^{2}\right]\,. \tag{110}\]
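As an illustration of this limit, the sketch below (Python with mpmath, added only for illustration; the sample values of \(j\) and \(z\) are arbitrary) evaluates the kernel (107) for increasing \(K\) and compares \(K\,\mathcal{K}_{K}(j,z)\) with \(\kappa(j,z)\) of (110).

```python
# Numerical illustration of (109): the detector kernel (107), multiplied by K,
# approaches the coefficient function kappa(j, z) of (110) at large K.
from mpmath import mp, mpf, gamma, hyp2f1, binomial

mp.dps = 25

def kernel(K, j, z):
    # detector kernel, equation (107)
    total = mpf(0)
    for k in range(3):
        total += (z**(-j - k)*(-1)**k*binomial(2, k)
                  *gamma(j)*gamma(j + k)/(gamma(j - 2)*gamma(j + k - 2))
                  *gamma(K - 1)*gamma(K + 1)/(gamma(K - 2 + j + k)*gamma(K + 3 - j - k))
                  *hyp2f1(3 - j - k, 3 - j - k, K + 3 - j - k, z))
    return total

def kappa(j, z):
    # leading coefficient function, equation (110)
    return (j - 2)*(j - 1)*z**(-2 - j)*(j*(1 + j) + 2*j*(1 - j)*z + (j - 1)*(j - 2)*z**2)

j, z = mpf('-0.4'), mpf('0.3')
for K in [10, 100, 1000]:
    print(K, K*kernel(K, j, z), kappa(j, z))   # the first column approaches the second
```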
The relation (110) is conveniently rewritten in terms of a fourth-order differential operator,
\[\kappa(j,z)=\mathcal{D}z^{-j+1}\,,\qquad\mathcal{D}:=\frac{1}{z}\frac{d}{dz}z^{2} \frac{d}{dz}\frac{(1-z)^{2}}{z^{2}}\frac{d}{dz}z^{2}\frac{d}{dz}\,. \tag{111}\]
It coincides with the analogous operator in (100) after replacing \(\gamma\) with \(1/z\). Inserting (111) into (101) and changing the integration variable \(j_{2}\) to \(j=j_{1}+j_{2}\), we arrive at the relation (100) with the function \(\varphi^{(\ell)}(z)\) given by
\[\varphi^{(\ell)}(z)=\mathcal{D}\int\frac{djdj_{1}}{(2\pi i)^{2}}\left[\frac{ \Gamma(1-j)}{\Gamma(1-j_{1})\Gamma(1-j+j_{1})}\right]^{2}M^{(\ell)}(j_{1},j-j_ {1})\,z^{-j+1}\,, \tag{112}\]
where \(M^{(\ell)}(j_{1},j-j_{1})\) is the \(O(\lambda^{\ell})\) correction to the Mellin amplitude \(M_{K}(j_{1},j-j_{1})\).
To illustrate the relation (112), we repeat the calculation of the function \(\varphi^{(\ell)}(z)\) at one loop, i.e. for \(\ell=1\). The one-loop Mellin amplitude is given by (110). After its substitution in (112) the integration over \(j_{1}\) can be done immediately leading to
\[\varphi^{(1)}(z)=\mathcal{D}\int\frac{dj}{2\pi i}\frac{1}{2j}\left[\frac{\pi} {\sin(\pi j)}\right]^{2}z^{-j+1}=\mathcal{D}\left[\frac{z}{2}\left(\text{Li}_{ 2}(1-z)-\zeta_{2}\right)\right]. \tag{113}\]
Here in the second relation we closed the integration contour in the left half-plane and picked the residues at the poles \(j=-1,-2,\dots\). Applying the differential operator (111) we recover the expression (109), obtained in the previous section from the generating function.
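The last step of (113) can also be reproduced symbolically. The sketch below (Python with sympy, for illustration only) applies the operator \(\mathcal{D}\) of (111) to \(\tfrac{z}{2}\left(\text{Li}_{2}(1-z)-\zeta_{2}\right)\) and checks numerically that the result coincides with \(\varphi^{(1)}(z)\) in (4.27).

```python
# Symbolic check of (113): acting with the operator D of (111) on
# z/2*(Li2(1-z) - zeta(2)) reproduces the one-loop function phi^(1)(z) of (4.27).
import sympy as sp

z = sp.symbols('z', positive=True)

def D_op(expr):
    # D = (1/z) d/dz z^2 d/dz (1-z)^2/z^2 d/dz z^2 d/dz, cf. (111)
    e = sp.diff(expr, z)
    e = sp.diff(z**2*e, z)
    e = sp.diff((1 - z)**2/z**2*e, z)
    e = sp.diff(z**2*e, z)
    return e/z

lhs = D_op(z/2*(sp.polylog(2, 1 - z) - sp.zeta(2)))
phi1 = 3/z + 2*sp.polylog(2, 1 - z) - 6*sp.log(z) - 2*sp.zeta(2) - sp.Rational(13, 2)

for zval in [sp.Rational(1, 7), sp.Rational(1, 3), sp.Rational(4, 5)]:
    assert abs(sp.N((lhs - phi1).subs(z, zval), 30)) < 1e-20

print("D applied to the Mellin integral reproduces phi^(1)(z)")
```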
### Integral relation between the kernels at large \(K\)
The differential operator in (111) allows us to establish a remarkably simple relation between the detector kernels (100) for \(K=2\) and for \(K\to\infty\).
We recall that the kernel \(\mathcal{K}_{K=2}(j,z)\) is given by (111). Applying the identity
\[z^{-j+1} =-z^{2}\frac{\sin(\pi j)}{\pi}\int_{0}^{1}d\tau\,\frac{(1-\tau)^ {j}\tau^{-j-1}}{z+\tau(1-z)}\] \[=-\frac{z^{2}}{2}\int_{0}^{1}d\tau\,\frac{\tau(1-\tau)}{z+\tau(1 -z)}\,\,\mathcal{K}_{K=2}(j,\tau)\,, \tag{114}\]
we get from (111)
\[\kappa(j,z)=-\frac{1}{2}\,\mathcal{D}\left[z^{2}\int_{0}^{1}d\tau\,\frac{ \tau(1-\tau)}{z+\tau(1-z)}\,\,\mathcal{K}_{K=2}(j,\tau)\right]. \tag{115}\]
Notice that the Mellin parameter \(j\) enters the right-hand side of (115) through the argument of the \(K=2\) detector kernel. Then, substituting (109) and (115) into (101) we can swap the Mellin integration with that in (115) to get
\[\varphi^{(\ell)}(z)=\mathcal{D}\left[-\frac{z^{2}}{2}\int_{0}^{1}d\tau\,\frac{ \tau(1-\tau)}{z+\tau(1-z)}\mathcal{F}^{(\ell)}_{\text{aux}}(\tau)\right], \tag{116}\]
where \({\cal F}^{(\ell)}_{\rm aux}(\tau)\) is the convolution of the Mellin amplitude of the correlation function (5.2) and the \(K=2\) detector kernel,
\[{\cal F}^{(\ell)}_{\rm aux}(z)=\int\frac{djdj_{1}}{(2\pi i)^{2}}\left[\frac{ \Gamma(1-j)}{\Gamma(1-j_{1})\Gamma(1-j+j_{1})}\right]^{2}M^{(\ell)}_{K}(j_{1}, j-j_{1}){\cal K}_{K=2}(j,z)\,. \tag{5.19}\]
At one loop, the Mellin amplitude \(M^{(1)}_{K}\) is independent of \(K\) and is shown in (B.14). As a consequence, the function \({\cal F}^{(1)}_{\rm aux}(z)\) coincides with the one-loop correction to the energy correlation \({\cal F}^{(1)}_{K=2}(z)\). Recalling the one-loop result (4.10), we apply the relation (5.18) and reproduce \(\varphi^{(1)}(z)\) in (4.27).
In summary, we have calculated the leading term in (5.10) in three different ways: (i) extracting it from the generating function (4.23); (ii) doing the Mellin integrations in (5.14); (iii) doing the one-fold integration in (5.18).
At two loops, the Mellin amplitude \(M^{(2)}_{K}\) takes different forms for \(K=2\) and \(K\geq 3\), see (B.15). In the former case, the function \({\cal F}^{(2)}_{\rm aux}(z)\) coincides with the two-loop correction to the energy correlation \({\cal F}^{(2)}_{K=2}(z)\) in (B.16). In the latter case, \({\cal F}^{(2)}_{\rm aux}(z)\) is independent of \(K\) and it coincides with the auxiliary function in (B.18).
At higher loops, a similar transition happens at \(K=\ell+1\). The underlying reason for this was explained in the beginning of Section 4. For \(K\geq\ell+1\) the Mellin amplitude \(M^{(\ell)}_{K}\) ceases to depend on \(K\) and the same is true for the function \({\cal F}^{(\ell)}_{\rm aux}(z)\). Then, it follows from (5.18) and (5.10) that the leading \(O(1/K)\) correction to the energy correlation \(\varphi^{(\ell)}(z)\) is independent of \(K\).
### Two-loop corrections to the large \(K\) asymptotics
In this subsection, we use the relation (5.18) at \(\ell=2\) and compute the function \(\varphi^{(2)}(z)\) defining the leading term of the large \(K\) asymptotics of the energy correlation at two loops. The evaluation of the integral in (5.18) is much simpler than the double Mellin integral (5.14). As explained above, the function \({\cal F}^{(2)}_{\rm aux}(z)\) in (4.36) should not be confused with the two-loop energy correlation \({\cal F}^{(2)}_{K=2}(z)\) defined in (4.34).
According to (4.36), the function \({\cal F}^{(2)}_{\rm aux}(z)\) is a combination of classical polylogarithms of transcendental weight up to three. It can also be expressed in terms of harmonic polylogarithms [29], whose arguments define an alphabet of three letters,
\[z\,,\quad 1-\sqrt{z}\,,\quad 1+\sqrt{z}\,. \tag{5.20}\]
Note that the two-loop correction (4.30) involves the same HPL alphabet.
The integration of \({\cal F}^{(2)}_{\rm aux}(z)\) in (5.18) is done with the help of HyperInt[30]. The result can be expressed in terms of classical polylogarithms of weight up to four but the HPL alphabet (5.20) is not sufficient anymore, the new letter \(\sqrt{1-z}+i\sqrt{z}\) has to be added.11 Schematically
we can write
\[z^{2}\int_{0}^{1}d\tau\frac{\tau(1-\tau)}{z+\tau-z\tau}\mathcal{F}^{(2)}_{\rm aux}(\tau)=\frac{z}{(1-z)^{2}}\Big{[} g_{1}^{(3)}+zg_{2}^{(3)}+g_{3}^{(4)}+zg_{4}^{(4)}+z^{2}g_{5}^{(4)}\] \[+zh_{1}^{(4)}+i\sqrt{z(1-z)}h_{2}^{(3)}\Big{]}, \tag{101}\]
where \(g_{i}^{(w)}\) and \(h_{i}^{(w)}\) are multi-linear combinations (with rational coefficients) of classical polylogarithms and zeta-values of homogeneous weight \(w\). The \(g-\)functions are spanned over products of
\[\log(z)\,,\quad\text{Li}_{2}(z)\,,\quad\text{Li}_{3}(z)\,,\quad\text{Li}_{3}(1 -z)\,,\quad\text{Li}_{4}(z)\,,\quad\text{Li}_{4}(1-z)\,,\quad\text{Li}_{4} \left(-\frac{1-z}{z}\right), \tag{102}\]
whereas the \(h-\)functions are spanned over products of polylogarithms that depend on the new letter and their arguments are naturally expressed in terms of \(x:=i\sqrt{\frac{z}{1-z}}\),
\[\log(x),\ \log(1\pm x),\ \text{Li}_{2}(x),\ \text{Li}_{3}\left(\frac{x}{1+x} \right),\ \text{Li}_{4}\left(\frac{x}{1+x}\right). \tag{103}\]
Note that (101) is invariant under the reflection \(\sqrt{z}\to-\sqrt{z}\), which is equivalent to the complex conjugation \(x\to x^{*}\).
Substituting (101) into (5.18), we apply the differential operator \(\mathcal{D}\) and obtain a closed-form expression for the function \(\varphi^{(2)}(z)\), to be found in the ancillary file. We recall that this function defines the leading large \(K\) behavior (100) of the energy correlation at two loops. It is interesting to note that, due to the appearance of the new letter in the HPL alphabet, its expression involves a larger set of functions than the two-loop energy correlation \(\mathcal{F}^{(2)}_{K}(z)\) at finite \(K\). Also, the maximal transcendental weight of these functions varies with \(K\), as summarized in Table 1.
The dependence of \(z\varphi^{(2)}(z)\) on the angular variable \(0<z<1\) is shown in Figure 6 (see the curve labelled 'asympt'). For \(z\to 1\) it approaches a finite value,
\[\varphi^{(2)}(z)=\frac{7\zeta_{4}}{4}+\frac{85\zeta_{3}}{6}+\frac{181\zeta_{2 }}{180}+\frac{49}{144}+O\left(1-z\right). \tag{104}\]
\begin{table}
\begin{tabular}{c|c c c} \hline \hline loop order \(\ell\) & \(K=2\) & \(K\geq 3\) & \(K\to\infty\) \\ \hline
0 & 0 & 1 & 0 \\
1 & 1 & 3 & 2 \\
2 & 3 & 5 & 4 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Maximal transcendental weight of the polylogarithms in the expression for the energy correlation \(\mathcal{F}^{(\ell)}_{K}(z)\) at \(\ell\) loops for finite \(K\) and in the limit \(K\to\infty\).
For \(z\to 0\) it grows as \(\log z/z\),
\[\varphi^{(2)}(z)=\frac{3\log z}{z}+\left(-\frac{3}{2}\zeta_{3}+3\zeta_{2}-\frac{7 }{2}\right)\frac{1}{z}+O\left(\log^{2}z\right). \tag{102}\]
We verify that this relation is in agreement with the large \(K\) limit of (101).
### Subleading corrections
So far we have discussed only the leading term in the large \(K\) expansion of the energy correlation (100),
\[\mathcal{F}_{K}^{(\ell)}(z)=\frac{1}{K}\varphi^{(\ell)}(z)+\sum_{n=2}^{\infty }\frac{1}{K^{n}}\varphi_{n}^{(\ell)}(z)\,. \tag{103}\]
The differential equation (100) allows us to find the subleading coefficient functions \(\varphi_{n}^{(\ell)}(z)\) in terms of the leading function \(\varphi^{(\ell)}(z)\equiv\varphi_{n=1}^{(\ell)}(z)\).
Let us rewrite (100) as
\[\mathbb{D}\mathcal{F}_{K}^{(\ell)}(z)=K\left[(K-1)\mathcal{F}_{K}^{(\ell)}(z )-(K-2)\mathcal{F}_{K-1}^{(\ell)}(z)\right]\,, \tag{104}\]
where the differential operator \(\mathbb{D}\) is defined in (101). Substituting the expansion (103) into this equation and comparing the coefficients on both sides, we derive a recursion relation for \(\varphi_{n}^{(\ell)}(z)\) with \(n\geq 2\). The solution is
\[\varphi_{2}^{(\ell)}(z) =(1-\mathbb{D})\,\varphi^{(\ell)}(z)\,,\] \[\varphi_{3}^{(\ell)}(z) =\left(1-\mathbb{D}+\frac{1}{2}\mathbb{D}^{2}\right)\varphi^{( \ell)}(z)\,,\] \[\varphi_{4}^{(\ell)}(z) =\left(1-\mathbb{D}+\frac{1}{3}\mathbb{D}^{2}-\frac{1}{6} \mathbb{D}^{3}\right)\varphi^{(\ell)}(z)\,,\quad\ldots \tag{105}\]
For \(z\to 1\), the functions (103) approach a finite value. At small \(z\), replacing \(\varphi^{(\ell)}(z)\sim(\log z)^{\ell-1}/z\) (see (102)) in (105), it is straightforward to verify that the ratio \(\varphi_{n}^{(\ell)}(z)/\varphi^{(\ell)}(z)\) goes to \((-1)^{n-1}\) for \(z\to 0\). Therefore, the subleading terms in (103) effectively modify the coefficient in front of the leading term to \(1/(K+1)\)
\[\mathcal{F}_{K}(z)=\frac{3}{K+1}z^{-1+\frac{\lambda}{4\pi^{2}}}+\ldots\,, \tag{106}\]
where the dots denote the subleading corrections suppressed by powers of \(\lambda\) and \(z\). At two loops, the relation (106) is in agreement with (102).
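The resummation quoted above is elementary arithmetic; a minimal sympy sketch (purely illustrative) displays the alternating \(1/K^{n}\) pattern behind it:

```python
# The alternating pattern phi_n/phi -> (-1)^(n-1) at small z turns the 1/K prefactor
# into 1/(K+1), since the large-K expansion of 1/(K+1) is 1/K - 1/K^2 + 1/K^3 - ... .
import sympy as sp

K = sp.symbols('K', positive=True)
print(sp.series(1/(K + 1), K, sp.oo, 6))
```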
## 6 Contact terms
The expressions for the energy correlations derived in the previous sections are valid only for \(0<z<1\). To extend them to the end points \(z=0\) and \(z=1\), we have to add contact terms. As was explained above, such terms are needed to ensure that the sum rules (3.1) are satisfied. Furthermore, the obtained expressions for the energy correlations have non-integrable singularities at the end points \(z=0\) (for \(K\geq 2\) and \(K\to\infty\)) and \(z=1\) (for \(K=2\)) and require a careful treatment.
In this section we present two complementary approaches to computing the contact terms. We start with the simplest example of the one-loop correction to the energy correlation \(\mathcal{F}^{(1)}_{K=2}(z)\) given by (4.1) and proceed to the case of arbitrary \(K\) at one loop. In Section 6.3 we compute the contact terms at two loops.
### Sum rule approach
At \(K=2\), the one-loop correction to the EEC (4.1) is singular at the end points \(z=0\) and \(z=1\). Following Section 4 in Ref. [7] (see also [6; 8]), we define the regular (i.e. integrable) part by subtracting the non-integrable singularities from (4.1),
\[\mathcal{F}^{(1)\,\text{reg}}_{K=2}(z)=-\frac{\log(1-z)}{z^{2}(1-z)}-\frac{1} {z}+\frac{\log(1-z)}{1-z}=-\frac{1}{z^{2}}\left(z+(z+1)\log(1-z)\right). \tag{6.1}\]
The resulting function \(\mathcal{F}^{(1)\,\text{reg}}_{K=2}(z)\) is integrable in the interval \(0\leq z\leq 1\). Next, we compensate for the subtracted singular terms by adding them back, now interpreted as (integrable) plus-distributions (for the definitions see (D.1) and (D.2)). In addition, we allow for contact terms at the end points with arbitrary coefficients,
\[\mathcal{F}^{(1)}_{K=2}(z)=\mathcal{F}^{(1)\,\text{reg}}_{K=2}(z)+\left[\frac {1}{z}\right]_{+}-\left[\frac{\log(1-z)}{1-z}\right]_{+}+C_{1}\delta(z)+C_{2} \delta(1-z)\,. \tag{6.2}\]
The above procedure is the most general regularization of the energy correlation at the end points consistent with the sum rules (3.1). Indeed, subtracting the poles makes the function integrable. The subsequent addition of the subtracted terms in the form of plus-distributions does not modify the energy correlation (4.1) for \(0<z<1\), while maintaining integrability. At this stage the regularized expression (6.2) still contains the arbitrary coefficients \(C_{1}\) and \(C_{2}\), which is the usual ambiguity in singular distributions. They can be determined from the sum rules (3.1).
Substituting (6.2) into the sum rules (3.1) we get for \(K=2\)
\[(1+\zeta_{2})+C_{1}+C_{2}=\zeta_{2}+C_{2}=0\,, \tag{6.3}\]
leading to \(C_{1}=-1\) and \(C_{2}=-\zeta_{2}\). Finally, the complete one-loop expression for the energy correlation at weight \(K=2\) is
\[\mathcal{F}^{(1)}_{K=2}(z)=\mathcal{F}^{(1)\,\text{reg}}_{K=2}(z)+\left[\frac {1}{z}\right]_{+}-\left[\frac{\log(1-z)}{1-z}\right]_{+}-\delta(z)-\zeta_{2} \delta(1-z)\,, \tag{6.4}\]
where the regular part \({\cal F}^{(1){\rm reg}}_{K=2}(z)\) is given by (6.1).
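For illustration, the values \(C_{1}=-1\) and \(C_{2}=-\zeta_{2}\) can be reproduced numerically. The short Python/mpmath sketch below assumes the standard plus-distribution prescription of (D.1), (D.2), i.e. \(\int_0^1 dz\,[f(z)]_{+}\,g(z)=\int_0^1 dz\,f(z)\,(g(z)-g(z_{0}))\) with \(z_{0}\) the singular end point, and that the one-loop sum rules are homogeneous, as (6.3) indicates:

```python
# Numerical determination of the contact-term coefficients C1 and C2 in (6.2)/(6.4).
from mpmath import mp, mpf, log, zeta, quad

mp.dps = 30

def f_reg(z):
    # regular part (6.1)
    return -(z + (z + 1) * log(1 - z)) / z**2

def plus_part(g, g_at_0, g_at_1):
    # contribution of  [1/z]_+  -  [log(1-z)/(1-z)]_+  integrated against g(z)
    t1 = quad(lambda z: (g(z) - g_at_0) / z, [0, 1])
    t2 = quad(lambda z: log(1 - z) * (g(z) - g_at_1) / (1 - z), [0, 1])
    return t1 - t2

# sum rule with weight 1:   int F_reg + plus part + C1 + C2 = 0
I0 = quad(f_reg, [0, 1]) + plus_part(lambda z: mpf(1), mpf(1), mpf(1))
# sum rule with weight z:   int z*F_reg + plus part + C2 = 0
I1 = quad(lambda z: z * f_reg(z), [0, 1]) + plus_part(lambda z: z, mpf(0), mpf(1))

C2 = -I1
C1 = -I0 - C2
print(C1, C2, -zeta(2))   # C1 -> -1,  C2 -> -zeta_2
```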
For higher weights \(K\geq 3\), the contact terms for the energy correlation \({\cal F}^{(1)}_{K}(z)\) can be found in the same way. We recall that the function \({\cal F}^{(1)}_{K\geq 3}(z)\) is singular for \(z\to 0\) (see (4.21)) but it is finite at \(z\to 1\) for \(K\geq 4\) (see (4.22)). Then we define the regular function \({\cal F}^{(1){\rm reg}}_{K}(z)\) by subtracting the non-integrable singularity at \(z=0\),
\[{\cal F}^{(1){\rm reg}}_{K}(z)=\frac{1}{K!}(\partial_{t})^{K}G(z, t)\Big{|}_{t=0}-\frac{3}{K+1}\frac{1}{z}\,, \tag{6.5}\]
where the first term on the right-hand side involves the generating function (4.23). To get the complete one-loop result, we add to \({\cal F}^{(1){\rm reg}}_{K}(z)\) the plus-distribution term \([1/z]_{+}\) and calculate the coefficient of the contact term \(\delta(z)\) with the help of the sum rules (3.13). In this way, we find for \(K\geq 3\)
\[{\cal F}^{(1)}_{K}(z)={\cal F}^{(1){\rm reg}}_{K}(z)+\frac{3}{K+1 }\left[\frac{1}{z}\right]_{+}+\frac{c^{(1)}_{\delta,K}}{K+1}\,\delta(z)\,, \tag{6.6}\]
where
\[c^{(1)}_{\delta,K}=\frac{5}{2}-\frac{3}{K-1}-\frac{3}{K}-\frac{3} {K+1}\,. \tag{6.7}\]
Let us emphasize that only the contact term at the end point \(z=0\) is required, and its coefficient is fixed by one of the sum rules in (3.13). The other sum rule is not sensitive to the contact term in (6.6), and it should be automatically satisfied. This is a useful cross-check of our EEC calculation,
\[\frac{3}{K+1}+\int_{0}^{1}dz\,z\,{\cal F}^{(1){\rm reg}}_{K}(z)=0\,. \tag{6.8}\]
At large \(K\), we apply (4.27) to find the regular part of \(\varphi^{(1)}(z)\),
\[\varphi^{(1)}_{\rm reg}(z)=2{\rm Li}_{2}(1-z)-6\log(z)-2\zeta_{2} -\frac{13}{2}\,. \tag{6.9}\]
The sum rule (3.13) enables us to calculate the contact terms,
\[\varphi^{(1)}(z)=\varphi^{(1)}_{\rm reg}(z)+\left[\frac{3}{z} \right]_{+}+\frac{5}{2}\delta(z)\,. \tag{6.10}\]
We see that the distribution terms in (6.10) agree with those in (6.6) at large \(K\).
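As a further numerical cross-check (a sketch, assuming the large-\(K\) analogue of (6.8), obtained by multiplying (6.8) by \(K\) and using \(\varphi^{(1)}=\lim_{K\to\infty}K\,\mathcal{F}^{(1)}_{K}\)), one can verify that \(3+\int_0^1 dz\,z\,\varphi^{(1)}_{\rm reg}(z)=0\) and that the coefficient (6.7) indeed tends to the \(5/2\) of (6.10):

```python
# Checks of (6.9), (6.7): z-weighted integral of the regular part and the K -> infinity
# limit of the one-loop contact-term coefficient.
from mpmath import mp, mpf, polylog, log, zeta, quad

mp.dps = 25

phi_reg = lambda z: 2*polylog(2, 1 - z) - 6*log(z) - 2*zeta(2) - mpf(13)/2
print(3 + quad(lambda z: z * phi_reg(z), [0, 1]))                  # ~ 0

c1_delta = lambda K: mpf(5)/2 - mpf(3)/(K - 1) - mpf(3)/K - mpf(3)/(K + 1)
print([c1_delta(K) for K in (10, 100, 1000)])                      # -> 5/2
```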
### Mellin approach
In this subsection, we follow Refs. [26; 27] to compute the contact terms for the function \(\varphi^{(1)}(z)\) which is given by the Mellin integral (5.15).
We computed the integral in (5.15) by closing the contour from the left and picking the poles at \(j=-1,-2,-3,\ldots\). The integrand in (5.15) involves \({\cal D}(z^{-j+1})\). It is given by the sum
of three terms of the form \(z^{\alpha}\) with \(\alpha=-j-2,-j-1,-j\) which we now treat as distributions \(z^{\alpha}_{+}\), see (114). The key observation is that this distribution has a contact term in its Laurent expansion, see (115). This creates an extra pole under the Mellin integral at \(\alpha=-1\), whose contribution accounts for the contact term in the energy correlation. Of the three values of \(\alpha\) listed above, only \(\alpha=-j-2\) is inside the integration contour. Replacing \(z_{+}^{-j-2}=-\frac{1}{j+1}\delta(z)+\ldots\), we compute the residue at \(j=-1\),
\[-\delta(z)\underset{j=-1}{\mathrm{Res}}\left(\frac{1}{2}(j-2)(j-1)\left[\frac {\pi}{\sin(\pi j)}\right]^{2}\right)=\frac{5}{2}\delta(z)\,, \tag{121}\]
in agreement with (120). The energy correlation \(\mathcal{F}^{(1)}_{K=2}\) with its two contact terms is treated similarly.
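The residue quoted above is straightforward to verify; a short sympy sketch (illustrative only):

```python
# Residue of (1/2)(j-2)(j-1)[pi/sin(pi j)]^2 at j = -1; together with the overall minus
# sign this reproduces the contact term (5/2) delta(z).
from sympy import symbols, residue, sin, pi, Rational

j = symbols('j')
expr = Rational(1, 2)*(j - 2)*(j - 1)*(pi/sin(pi*j))**2
print(residue(expr, j, -1))   # -5/2
```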
We see that the two approaches - the sum rule method of fixing the contact terms and the approach based on the careful calculation of the Mellin integral - give identical results for the one-loop energy correlation. However, technically the Mellin approach may be more difficult to implement beyond one loop. Indeed, there the function \(M^{(\ell)}(j_{1},j-j_{1})\) in (116) is given by a \(2(\ell-1)-\)fold Mellin integral. The sum rule approach is more efficient, provided that the function \(\mathcal{F}^{(\ell)}(z)\) is known explicitly for \(0<z<1\). We apply it at the two-loop level in the next subsection.
### Two-loop contact terms from the sum rules
Let us start with \(K=2\). The two-loop function \(\mathcal{F}^{(2)}_{K=2}(z)\) (117) was calculated in [18] for \(0<z<1\). Like the one-loop case, this function has non-integrable singularities at \(z=0\) and \(z=1\). Promoting them to plus-distributions, adding the contact terms \(\delta(z)\) and \(\delta(1-z)\) and using the sum rules (10) to fix their coefficients, we obtain
\[\mathcal{F}^{(2)}_{K=2}(z) =\mathcal{F}^{(2)\,\mathrm{reg}}_{K=2}(z)+\left(-3+\zeta_{2}- \frac{\zeta_{3}}{2}\right)\left[\frac{1}{z}\right]_{+}+\left[\frac{\log(z)}{ z}\right]_{+}\] \[+\frac{\zeta_{3}}{2}\left[\frac{1}{1-z}\right]_{+}+\frac{3\zeta_ {2}}{2}\left[\frac{\log(1-z)}{1-z}\right]_{+}+\frac{1}{2}\left[\frac{\log^{3} (1-z)}{1-z}\right]_{+}\] \[+\left(7-3\zeta_{2}+\frac{11}{4}\zeta_{4}\right)\delta(z)+5\zeta _{4}\delta(1-z)\,. \tag{122}\]
Here the regular function \(\mathcal{F}^{(2)\,\mathrm{reg}}_{K=2}(z)\) is obtained from \(\mathcal{F}^{(2)}_{K=2}(z)\) by subtracting the poles at \(z=0\) and \(z=1\).
For \(K\geq 3\), the two-loop function \(\mathcal{F}^{(2)}_{K}(z)\) contains a non-integrable singularity at \(z=0\) calculated in (116). For \(z\to 1\), it is regular for \(K\geq 4\) and has an integrable logarithmic singularity for \(K=3\). As a result, only the contact term \(\delta(z)\) is required. As before, we promote the poles at \(z=0\) to plus-distributions to get
\[\mathcal{F}^{(2)}_{K}(z)=\mathcal{F}^{(2)\mathrm{reg}}_{K}(z)+\frac{3}{K+1} \left[\frac{\log(z)}{z}\right]_{+}+\frac{c_{1/z,K}^{(2)}}{K+1}\left[\frac{1}{ z}\right]_{+}+\frac{c_{\delta,K}^{(2)}}{K+1}\,\delta(z)\,, \tag{123}\]
where \(c^{(2)}_{1/z,K}\) is given in (111).
To find the coefficient of the contact term \(c^{(2)}_{\delta,K}\), we substitute (108) into the sum rules (109). We implemented the integration in (109) numerically and carried out the calculation for \(K\leq 15\). We interpolated the HPL expressions (110) by their generalized series expansions in the vicinity of \(z=0\) and \(z=1\), and integrated the interpolation formulae numerically, achieving a precision of at least 40 digits. This level of precision was sufficient to reconstruct \(c^{(2)}_{\delta,K}\) with \(K\leq 15\) as rational linear combinations of the numbers \(\{1,\zeta_{2},\zeta_{3},\zeta_{4}\}\) using the PSLQ algorithm. Exploiting these results, we arrived at an expression for \(c^{(2)}_{\delta,K}\) valid for any weight \(K\),
\[c^{(2)}_{\delta,K} =\frac{111}{16}\zeta_{4}+\frac{1}{2}\left(-\frac{23}{2}+\frac{3}{K -1}+\frac{3}{K}+\frac{3}{K+1}\right)\zeta_{3}+3H^{(2)}_{K+1}+\frac{15}{2}\] \[\quad-\left(\frac{7}{2}+\frac{3}{K-1}+\frac{3}{K}+\frac{3}{K+1} \right)\zeta_{2}+\frac{8}{K-1}+\frac{7}{2K}-\frac{1}{K+1}\,, \tag{112}\]
where \(H^{(2)}_{n}\) is the \(n\)-th generalized harmonic number of degree two. We have also checked numerically that the second sum rule in (109) is satisfied for \(K\leq 15\),
\[-\frac{3}{K+1}+\frac{c^{(2)}_{1/z}(K)}{K+1}+\int_{0}^{1}dz\,z\,{\cal F}^{(2) \text{reg}}_{K}(z)=0\,. \tag{113}\]
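To illustrate the PSLQ step described above, a minimal mpmath sketch is shown below. The target value is taken to be the known large-\(K\) combination appearing in the consistency relation further down; in the actual calculation the input is the 40-digit result of the numerical integration:

```python
# PSLQ reconstruction of a number as a rational combination of {1, zeta_2, zeta_3, zeta_4}.
from mpmath import mp, mpf, zeta, pslq

mp.dps = 50

# example target (known in closed form here, treated as a purely numerical value in practice)
target = mpf(15)/2 - zeta(2)/2 - mpf(23)/4*zeta(3) + mpf(111)/16*zeta(4)

rel = pslq([target, mpf(1), zeta(2), zeta(3), zeta(4)], maxcoeff=10**6)
print(rel)   # an integer relation, e.g. [16, -120, 8, 92, -111], i.e.
             # 16*target = 120 - 8*zeta_2 - 92*zeta_3 + 111*zeta_4
```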
Similarly, we use (111) supplemented with the sum rule (109) to calculate the contact terms for the two-loop function \(\varphi^{(2)}(z)\) defining the leading large \(K\) asymptotics (107) of the energy correlation,
\[\varphi^{(2)}(z) =\varphi^{(2)}_{\text{reg}}(z)+3\left[\frac{\log(z)}{z}\right]_{ +}+\left(-\frac{7}{2}+3\zeta_{2}-\frac{3}{2}\zeta_{3}\right)\left[\frac{1}{z }\right]_{+}\] \[+\left(\frac{15}{2}-\frac{\zeta_{2}}{2}-\frac{23}{4}\zeta_{3}+ \frac{111}{16}\zeta_{4}\right)\delta(z)\,. \tag{114}\]
We would like to emphasize that, unlike (108), this relation was derived analytically.
In the large \(K\) limit the function (108) takes the expected form (107). Comparing the coefficients of the contact terms in (109) and (114) we obtain the consistency relation
\[\lim_{K\to\infty}c^{(2)}_{\delta,K}=\frac{111}{16}\zeta_{4}-\frac{23}{4}\zeta _{3}-\frac{\zeta_{2}}{2}+\frac{15}{2}\,. \tag{115}\]
We verified using (112) that it is indeed satisfied. Analogous consistency relations hold for \(c^{(2)}_{1/z,K}\) in (111) at large \(K\).
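The limit can also be monitored numerically; a short mpmath sketch evaluating the closed-form coefficient above at increasing \(K\), with \(H^{(2)}_{K+1}=\zeta_{2}-\psi^{(1)}(K+2)\):

```python
# Convergence of c^(2)_{delta,K} to its K -> infinity value quoted above.
from mpmath import mp, mpf, zeta, psi

mp.dps = 30

def c_delta2(K):
    K = mpf(K)
    inv = 3/(K - 1) + 3/K + 3/(K + 1)
    H2 = zeta(2) - psi(1, K + 2)          # H^(2)_{K+1}
    return (mpf(111)/16*zeta(4) + (-mpf(23)/2 + inv)/2*zeta(3) + 3*H2 + mpf(15)/2
            - (mpf(7)/2 + inv)*zeta(2) + 8/(K - 1) + mpf(7)/(2*K) - 1/(K + 1))

limit = mpf(111)/16*zeta(4) - mpf(23)/4*zeta(3) - zeta(2)/2 + mpf(15)/2
for K in (10, 100, 1000, 10000):
    print(K, c_delta2(K) - limit)         # difference falls off as ~ 1/K
```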
## 7 Event shapes at strong coupling
After exploring the dependence of the energy correlations on \(K\) at weak coupling \(\lambda\ll 1\), in this section we perform the same analysis at strong coupling \(\lambda\gg 1\). We also consider event shapes in \({\cal N}=4\) SYM with detectors other than energy calorimeters.
The energy correlations at strong coupling were analyzed in [5], where their computation was mapped to the propagation of a probe particle (dual to the source operator) on a shock wave background (dual to the energy calorimeters). In particular, it was found that in theories which are dual to matter minimally coupled to gravity in AdS, the energy correlations do not depend on the angle,
\[\langle\widehat{\mathcal{E}}(n_{1})\ldots\widehat{\mathcal{E}}(n_{ k})\rangle_{\text{GR}}=1. \tag{110}\]
This universality is the expression of the equivalence principle in the bulk since the energy correlations in this approximation effectively measure the Shapiro time delay, which in general relativity does not depend on the type of particle in question.
The leading stringy correction to the EEC was computed in [5],
\[\langle\widehat{\mathcal{E}}(n_{1})\widehat{\mathcal{E}}(n_{2} )\rangle_{c}=\frac{4\pi^{2}}{\lambda}\left(1-6z(1-z)\right). \tag{111}\]
It was also found there that the connected energy correlations obey
\[\langle\widehat{\mathcal{E}}(n_{1})\widehat{\mathcal{E}}(n_{2})...\widehat{\mathcal{E}}(n_{k})\rangle_{c}\sim\frac{1}{\lambda^{k/2}}\,. \tag{112}\]
This relation is analogous to (4) where the role of \(\Delta_{H}\) is played by the coupling constant \(\lambda\).
In this section we rederive and generalize these results starting from the Mellin space representation of the four-point functions \(\langle O_{K}O_{2}O_{2}\bar{O}_{K}\rangle\) at strong coupling. We consider different event shapes measuring the correlations of various conserved charges (not just energy, see also footnote 20). In close analogy with (109) we introduce
\[\langle\mathcal{J}_{s_{1}}(n_{1})\mathcal{J}_{s_{2}}(n_{2}) \rangle_{K}=\frac{(q^{2})^{s_{1}+s_{2}}}{2(4\pi)^{2}(qn_{1})^{s_{1}+1}(qn_{2}) ^{s_{2}+1}}\mathcal{F}_{K}^{s_{1},s_{2}}(z)\,, \tag{113}\]
where the flow operator \(\mathcal{J}_{s}(n)\) is built out of the conserved current of spin \(s\). The energy-energy correlation studied in the previous section corresponds to \(\mathcal{F}_{K}^{2,2}\). For \(s=1\) we get the charge detector. The corresponding flow operator \(\mathcal{J}_{s=1}(n)\) is given by the light-ray transform of the spin one conserved current (see (110)). For \(s=0\) we get the scalar detector \(\mathcal{J}_{s=0}(n)\), which is given by the light-ray transform of the half-BPS operator of dimension \(\Delta=2\) (see (110)). More details about the correlations (113) can be found in appendix C.
In this section we compute \(\mathcal{F}_{K}^{s_{1},s_{2}}(z)\) in the supergravity approximation and derive the leading stringy correction to it,
\[\mathcal{F}_{K}^{s_{1},s_{2}}(z)=\mathcal{F}_{K,\text{sugra}}^{s_ {1},s_{2}}(z)+\frac{4\pi^{2}}{\lambda}\mathcal{F}_{K,\text{stringy}}^{s_{1},s _{2}}(z)+\ldots\,, \tag{114}\]
where the dots denote subleading corrections. We find that in the \(K\to\infty\) limit the event shapes (113) do not depend on the angle and are given by the product of the one-point functions \(\langle\mathcal{J}_{s_{1}}(n_{1})\rangle_{K}\langle\mathcal{J}_{s_{2}}(n_{2}) \rangle_{K}\). We also analyze the event shapes away from the large \(K\) limit, and we find that for event shapes other than the energy-energy correlation the dependence on \(K\) is
nontrivial. For the EEC, in contrast to the weak coupling results where the suppression factor is \(\sim 1/K\), now the leading correction is suppressed by \(1/\lambda\).
In our analysis we consider \(K\)'s that _do not_ scale parametrically with \(\lambda\), see e.g. [32]. Taking \(K\sim\lambda^{1/4}\), which corresponds to a source dual to short massive string modes, we do not expect to observe any change to (101) and (102) above. For \(K\gtrsim\sqrt{\lambda}\), in which case the source is described by a big classical string [33], we expect to get back the \(1/K\) suppression of the leading correction to the energy correlation [34]. It would be interesting to check this explicitly.
### Supergravity approximation
The four-point function of the half-BPS operators \(\langle O_{K}O_{2}O_{2}\bar{O}_{K}\rangle\) has been actively studied at strong coupling recently, starting from [35; 36]. We will be only interested in stringy corrections here, leaving quantum gravity corrections aside.
The stringy corrections to the \(\langle O_{K}O_{2}O_{2}\bar{O}_{K}\rangle\) Mellin amplitude were considered in [37]. To utilize their results, let us notice that the Mellin variables \((s,t)\) used in that paper are related to \((j_{1},j_{2})\) used in the present paper as follows12
Footnote 12: The weight \(K\) is denoted by \(p\) in [37].
\[2j_{1}=K-s-t,\quad 2j_{2}=t-K. \tag{106}\]
The Mellin amplitude \(\mathcal{M}_{K}(s,t)\) in [37] is related to our \(M_{K}(j_{1},j_{2})\) as follows,
\[\frac{M_{K}(j_{1},j_{2})}{[\Gamma(1-j_{1})\Gamma(1-j_{2})]^{2} \Gamma(2+j_{1}+j_{2})}=\frac{1}{K}\Gamma(j_{1}+j_{2}+K)\mathcal{M}_{K}(-2(j_{1 }+j_{2}),2j_{2}+K)\,. \tag{107}\]
Let us notice that while the normalization of the scalar correlator \(\langle O_{K}O_{2}O_{2}\bar{O}_{K}\rangle\) does not have an intrinsic meaning, it is important for us because we use superconformal Ward identities to relate it to the correlators with conserved currents which have canonical normalization, see appendix C for details. This explains the factor \(\frac{1}{K}\) in the formula above.
The supergravity result for the Mellin amplitude \(\mathcal{M}_{K}(s,t)\) takes the form
\[\mathcal{M}_{K}^{\rm sugra}(s,t)=\frac{4K}{\Gamma(K-1)}\frac{1}{(s-2)(t-K)(K- s-t)}\,. \tag{108}\]
It is then easy to compute various event shapes by plugging this Mellin amplitude into the master formula (107). We get the following results
\[\mathcal{F}^{0,0}_{K,{\rm sugra}}(z) =2z^{2}+K(K-2)z+\frac{1}{2}K(K-1)^{2}(K-2)\,,\] \[\mathcal{F}^{1,1}_{K,{\rm sugra}}(z) =2z+\frac{1}{2}(K-2)(K+1)\,,\] \[\mathcal{F}^{2,1}_{K,{\rm sugra}}(z) =-K\,,\] \[\mathcal{F}^{2,2}_{K,{\rm sugra}}(z) =2. \tag{109}\]
It is interesting to notice that while the energy correlation does not depend on \(K\), in agreement with the arguments of [5], the other event shapes receive only simple polynomial corrections in \(K\). We can also consider the large \(K\to\infty\) limit. According to the general clustering arguments at the beginning of this paper, all the event shapes effectively cluster, in other words they become angle-independent,
\[\lim_{K\to\infty}{\cal F}^{0,0}_{K,{\rm sugra}}(z) \sim K^{4}\,\] \[\lim_{K\to\infty}{\cal F}^{1,1}_{K,{\rm sugra}}(z) \sim K^{2}\, \tag{111}\]
with the correction to clustering being of order \(1/K\).
Let us comment on the power of \(K\) that appears in (111). As the two-point function clusters, the power is dictated by the square of the corresponding one-point event shape. This in turn is given by the light transform of the corresponding three-point function which in our case scales as \(\sim K\). The light transform of a detector operator with quantum numbers \((\Delta,J)\) produces an extra factor \(\Delta_{H}^{1-J}\). This is why starting from the three-point function which scales as \(K\), we get \(K^{4}\) for scalar-scalar correlations (detectors of spin \(J=0\)), \(K^{2}\) for charge-charge correlations (spin \(J=1\)), and \(K^{0}\) for energy-energy correlations (spin \(J=2\)).
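This power counting can be read off directly from the explicit supergravity answers above; a small sympy sketch of the degrees in \(K\):

```python
# Leading powers of K in the supergravity event shapes: K^4 (J=0), K^2 (J=1), K^0 (J=2).
import sympy as sp

K, z = sp.symbols('K z', positive=True)

F00 = 2*z**2 + K*(K - 2)*z + sp.Rational(1, 2)*K*(K - 1)**2*(K - 2)
F11 = 2*z + sp.Rational(1, 2)*(K - 2)*(K + 1)
F22 = sp.Integer(2)

for J, F in [(0, F00), (1, F11), (2, F22)]:
    print(J, sp.Poly(F, K).degree())   # 4, 2, 0
```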
### Stringy correction
The computation of the stringy corrections to event shapes is subtle because the event shapes are sensitive to the high-energy limit of the scattering amplitudes in AdS. In particular, computing the higher derivative stringy corrections to the Mellin amplitude and plugging them into the formula (109) produces divergent results.
To solve this problem we adopt the approach used in [24] based on the Borel re-summation of the leading high-energy corrections to the Mellin amplitude. The basic observation is that these are controlled by the scattering of strings in flat space, see also [5]. One can then use the formulas that relate the Mellin amplitude to its flat space limit to predict the form of the relevant series [38].
The leading stringy correction to the Mellin amplitude takes the form [37]
\[{\cal M}^{\rm stringy}_{K}(s,t)=\frac{4K}{\Gamma(K-1)}\frac{(K+1)(K+2)(K+3)}{4 \lambda^{3/2}}\zeta(3). \tag{112}\]
If we try to plug this formula into the generating function for the event shapes, we find that the \(t\) integral takes the form \(\int\frac{dt}{2\pi i}{\cal M}^{\rm stringy}_{K}(s,t)\) and is divergent. Therefore we need to re-sum the stringy corrections. More precisely, rescaling \(t\to\sqrt{\lambda}t\) we notice that infinitely many terms in the stringy expansion of the Mellin amplitude become of the same order \(\int dt\frac{t^{2n}}{\lambda^{3/2+n}}\sim\frac{1}{\sqrt{\lambda}}\), which is indeed the expected leading stringy correction to the event shape [5].
In [24], using the flat space limit of the amplitude, it was found that for \(K=2\) the relevant series takes the form
\[\sum_{n=0,n-{\rm even}}c_{n}x^{n},\qquad\qquad c_{n}=\frac{1}{4}\frac{\Gamma(6 +n)\zeta(3+n)}{2^{n+1}}. \tag{113}\]
For general \(K\), using [37], we get instead
\[\sum_{n=0,n-\text{even}}c_{n}x^{n},\qquad\qquad c_{n}=\frac{1}{4}\frac{\Gamma(4+n +K)\zeta(3+n)}{2^{n}\Gamma(1+K)}\,. \tag{111}\]
To perform the Borel sum, we change the summand \(c_{n}x^{n}\to\frac{c_{n}(\sigma x)^{n}}{\Gamma(n+1)}\). We then substitute \(\zeta(3+n)=\sum_{k=0}^{\infty}\frac{1}{(k+1)^{3+n}}\) and perform the sum over \(n\). As a result, we get the following integral when evaluating \(\int\frac{dt}{2\pi i}\mathcal{M}_{K}^{\text{string}}(s,t)\)
\[\int_{0}^{\infty}\frac{dt}{\pi}\int_{0}^{\infty}d\sigma e^{-\sigma}\sum_{k=0}^ {\infty}\frac{1}{(k+1)^{3}}\left[\frac{1}{(1-\frac{i\sigma t}{2(1+k)})^{4+K}} +\frac{1}{(1+\frac{i\sigma t}{2(1+k)})^{4+K}}\right]=\frac{\pi^{2}}{3}\frac{1 }{K+3}\,, \tag{112}\]
where \(\sigma\) is the integration variable of the Borel transform. To evaluate this integral we notice that if we rescale \(x\to\frac{x}{\sigma}\), which assumes \(\sigma\neq 0\), we get zero. Therefore we can limit the integral over \(\sigma\) to an infinitesimal interval around the origin and drop \(e^{-\sigma}\) from the integrand.
In this way we get for the integral of the re-summed Mellin amplitude
\[\int\frac{dt}{2\pi i}\mathcal{M}_{K}^{\text{string}}(s,t)=\frac{4K}{\Gamma(K- 1)}\frac{(K+1)(K+2)\pi^{2}}{24\lambda}\,. \tag{113}\]
Using this result we can compute the stringy corrections to the event shapes:
\[\mathcal{F}_{\text{stringy}}^{0,0}(z) =2z^{2}\left(1-6z(1-z)+\frac{K(K-2)}{24}(18z+K(K-2)-11)\right)\,\] \[\mathcal{F}_{\text{stringy}}^{1,1}(z) =2z\left(1-6z(1-z)+\frac{(K+1)(K-2)}{8}(3z-2)\right)\,\] \[\mathcal{F}_{\text{stringy}}^{2,1}(z) =-\frac{K+2}{2}\left(1-6z(1-z)\right)\,\] \[\mathcal{F}_{\text{stringy}}^{2,2}(z) =2\Big{(}1-6z(1-z)\Big{)}. \tag{114}\]
As an extra consistency check, we verify that the charge-charge correlation \(\mathcal{F}_{\text{stringy}}^{1,1}(z)\) satisfies the relation \(\int_{0}^{1}dz\,\mathcal{F}_{\text{stringy}}^{1,1}=0\), which follows from the conservation of the total charge. Similarly, \(\int_{0}^{1}dz\mathcal{F}_{\text{stringy}}^{2,2}=\int_{0}^{1}dzz\mathcal{F}_{ \text{stringy}}^{2,2}=0\) due to the energy-momentum sum rules (105). Putting \(K=2\) in (114), we recover the results of [17] that, due to the superconformal Ward identities, all \(\mathcal{F}_{\text{stringy}}^{s_{1},s_{2}}(z)\) are proportional to each other up to a power of \(z\). This simple relationship between event shapes does not hold for \(K>2\).
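These sum-rule checks are immediate to reproduce; a short sympy sketch (illustrative only):

```python
# Charge and energy-momentum sum rules for the stringy corrections quoted above.
import sympy as sp

z, K = sp.symbols('z K')

F11 = 2*z*(1 - 6*z*(1 - z) + sp.Rational(1, 8)*(K + 1)*(K - 2)*(3*z - 2))
F22 = 2*(1 - 6*z*(1 - z))

print(sp.simplify(sp.integrate(F11, (z, 0, 1))))                        # 0
print(sp.integrate(F22, (z, 0, 1)), sp.integrate(z*F22, (z, 0, 1)))     # 0 0
```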
## 8 Clustering in CFT
In this section we present some arguments in favor of (4) in a general CFT. We start by discussing the clustering of correlation functions involving the stress-energy tensor in heavy states. It is analogous to familiar clustering of correlation functions in QFT as the spatial separation between operators becomes large. We then discuss the leading non-universal correction
to the disconnected result, which depends on the details of the theory and the heavy operator in question. We then proceed to event shapes, which are the main topic of this paper, and we conjecture that the clustered structure of the stress-energy tensor correlators is not affected by the light-ray transform. Finally, we discuss the special case of heavy states in planar theories \(c_{T}\gg\Delta_{H}\gg 1\), where \(c_{T}\) is defined via the two-point function of stress tensors \(\langle T_{\mu\nu}T_{\rho\sigma}\rangle\sim c_{T}\).13 The statements in this section are based on some basic physics intuition and we do not attempt to prove them rigorously starting from the CFT axioms [40].
Footnote 13: See, for example, [39] for the precise definition.
### Clustering of local operators in a heavy state
In QFT, the correlation functions of local operators factorize when the mutual separation between the local operators goes to infinity [41]. In CFT a closely related property is expected to hold for the correlation functions of local operators in heavy states. We define
\[\langle\!\langle O_{H}(\infty)\prod_{i}O_{L}(x_{i})O_{H}(0)\rangle\!\rangle\equiv\lim_{x\to\infty}\frac{\langle O_{H}(x)\prod_{i}O_{L}(x_{i})O_{H}(0)\rangle}{\langle O_{H}(x)O_{H}(0)\rangle}. \tag{110}\]
Let us also introduce the notation \(T(x)\equiv T_{\mu\nu}(x)z^{\mu}z^{\nu}\) for the stress-energy tensor operator contracted with a null polarization vector \(z^{\mu}\). We keep the dependence on the polarization implicit to avoid cluttering. We can now formulate the clustering of the stress-energy tensor operators in a heavy state as follows
\[\lim_{\Delta_{H}\to\infty}\frac{\langle\!\langle O_{H}(\infty)\prod_{i}T(x_{i})O_{H}(0)\rangle\!\rangle}{\prod_{i}\langle\!\langle O_{H}(\infty)T(x_{i})O_{H}(0)\rangle\!\rangle}=1. \tag{111}\]
Let us motivate why (111) might be true. We consider a CFT on \(\mathbb{R}\times S^{d-1}\) with the heavy operator defining a state of energy \(E=\frac{\Delta_{H}}{R}\) and the physical distances between the operators being \(L_{ij}=R|x_{i}-x_{j}|\). The key point is that we can naturally associate to the limit \(\Delta_{H}\to\infty\) an infinite volume or thermodynamic limit by sending at the same time \(R\to\infty\), see e.g. [42]. Such a limit is not uniquely defined because it requires specification of which quantities are kept fixed as we take the limit.
It is natural to keep certain local densities fixed as we take the limit. For our purposes we can keep the energy density \(\epsilon=\frac{E}{R^{d-1}}=\frac{\Delta_{H}}{R^{d}}\) fixed. It is also natural to keep the physical distances between the operators \(L_{ij}\) fixed. Notice that it corresponds to the multi-OPE limit \(|x_{ij}|\to 0\) in the space of cross-ratios. On general grounds, we then expect to get nontrivial correlation functions in the infinite volume \(\mathbb{R}^{d-1}\) in an excited state characterized by a characteristic scale \(L_{\epsilon}\). In CFT, due to the absence of dimensionful parameters, such a correlator depends on the dimensionless ratios \(\frac{L_{ij}}{L_{\epsilon}}\). We expect that for \(\frac{L_{ij}}{L_{\epsilon}}\ll 1\) we get nontrivial correlations in the excited state, whereas for \(\frac{L_{ij}}{L_{\epsilon}}\gg 1\) the correlator clusters as the relative spatial separation between the operators goes to infinity.14 Because the stress-energy tensor measures the energy density, the resulting one-point function \(\langle T\rangle_{\epsilon}\neq 0\), and the disconnected piece indeed provides the leading answer to the correlator at large distances.
There is an obvious generalization of the argument above for CFTs with global symmetries. Let us assume for simplicity a CFT with \(U(1)\) global symmetry and denote the corresponding conserved current \(J(x)\equiv J_{\mu}(x)z^{\mu}\). We can then consider a source of large charge and apply the same argument to argue that
\[\lim_{Q\rightarrow\infty}\frac{\langle\!\langle O_{Q}(\infty)\prod_{i}J(x_{i})O_ {Q}(0)\rangle\!\rangle}{\prod_{i}\langle\!\langle O_{Q}(\infty)J(x_{i})O_{Q}( 0)\rangle\!\rangle}=1, \tag{110}\]
where this time we keep the charge density \(q=\frac{Q}{R^{d-1}}\) fixed as we take the limit and we use the fact that \(\langle J\rangle_{q}\neq 0\) because the conserved current measures charge density.
We expect similar arguments to hold for any local operators and not just conserved currents. However, in this case the non-vanishing property of the one-point function is not guaranteed.
Let us next discuss the leading correction to the disconnected result discussed above. For this purpose it is convenient to consider the four-point function. At the level of the original correlation function on the plane the thermodynamic limit discussed above corresponds to the following limit
\[\lim_{\Delta_{H}\rightarrow\infty}\Delta_{H}^{-2}\langle\!\langle O_{H}( \infty)T_{\mu\nu}\left(1-\frac{w}{\Delta_{H}^{1/d}},1-\frac{\bar{w}}{\Delta_{ H}^{1/d}}\right)T_{\rho\sigma}(1)O_{H}(0)\rangle\!\rangle=\langle T_{\mu\nu}(w, \bar{w})T_{\rho\sigma}(0,0)\rangle_{\epsilon}, \tag{111}\]
where we have put all operators on a 2d plane with complex coordinates \((z,\bar{z})\) and we set them to \(\left(1-w/\Delta_{H}^{1/d},1-\bar{w}/\Delta_{H}^{1/d}\right)\) as we take \(\Delta_{H}\rightarrow\infty\). The separation between the operators on the right-hand side after taking the limit is \(x^{2}=w\bar{w}\). The extra factor \(\Delta_{H}^{-2}\) has a kinematic origin and comes from mapping the correlator on the plane to the one on the cylinder, see [42] for details. The two-point function \(\langle T_{\mu\nu}(w,\bar{w})T_{\rho\sigma}(0,0)\rangle_{\epsilon}\) stands for the two-point function in the microcanonical ensemble with energy density \(\epsilon\) for the theory on \(\mathbb{R}^{1,d-1}\). Locally, it is the same as the finite temperature correlator where the temperature is fixed to correctly reproduce the energy density \(\epsilon\).
As we take the operator insertions apart, in an interacting theory the leading correction to the disconnected result is expected to decay exponentially, see e.g. [43],
\[\langle T_{\mu\nu}(w,\bar{w})T_{\rho\sigma}(0,0)\rangle_{\epsilon}=\langle T _{\mu\nu}\rangle_{\epsilon}\langle T_{\rho\sigma}\rangle_{\epsilon}+O(e^{-m_{ \rm th}|w|}), \tag{112}\]
where \(m_{\rm th}\) is the so-called thermal mass. It could also happen that the effective theory that emerges in the limit \(\Delta_{H}\rightarrow\infty\) is gapless, in which case the leading correction to the disconnected piece falls off as a power. In the thermal case this happens, for example, for the free scalar theory, and in the large charge case for the three-dimensional \(O(2)\) model [44; 45]. In addition to the connected corrections in the macroscopic state we can imagine having corrections related to the finite curvature of the sphere, e.g. terms that go to zero as \(\frac{L_{\epsilon}}{R}\).
To summarize, while the leading result (109) is universal, the corrections to it depend on the details of the theory and the nature of the heavy state. As such, they cannot be computed on general grounds and require a detailed case-by-case analysis.
### Event shapes
Going from the formulas for correlation functions of local operators to the calculation of event shapes includes the extra step of integrating over the detector times, or, equivalently, performing the light transform. When we do that, the separation between the detector operators \(|x_{12}|\) becomes arbitrarily small [46]. The small separation regime includes the light-cone limit and the Regge limit. It could then happen that while the clustering holds for local operators at fixed separation, it fails for the light-ray operators. As a simple example, consider the charge-charge correlation related to (8.3). As we perform the light-transform, the one-point charge correlation produces a finite result, whereas the two-point (or higher-point) function in general will diverge in the Regge limit.15 Therefore, the analog of (8.3), in general will not hold for charge correlations.
Footnote 15: It could be that due to an improved Regge behavior, the charge-charge correlation is finite. For example, this is expected to happen in the \(O(2)\) Wilson-Fischer model [47], see also [48].
The situation is better for the energy correlations which are finite, or IR-safe, observables. In this case, the contribution of the Regge limit to the light transform produces a finite result. The question that remains then is whether this contribution is suppressed, compared to the factorized result in the limit of the heavy source \(\Delta_{H}\to\infty\). We do not know how to show it in general, but based on the explicit example analyzed in the paper and the intuition that the Regge limit is naturally suppressed for energy correlations, we conjecture that in a general CFT
\[\text{{Celestial clustering:}}\quad\lim_{\Delta_{H}\to\infty}\langle \mathcal{E}(n_{1})\ldots\mathcal{E}(n_{k})\rangle=\prod_{i=1}^{k}\frac{(q^{2 })^{2}}{4\pi(qn_{i})^{3}}. \tag{8.6}\]
Intuitively, (8.6) is consistent with the physical picture of a state created by a heavy operator on the celestial sphere having an effective angular correlation length that goes to zero as \(\Delta_{H}\to\infty\). The same picture suggests that the cumulant expansion of the energy correlations (1.3) organizes itself in the hierarchical fashion (1.4).
To understand this better in a non-perturbative setting it is interesting to consider the light-ray OPE, which captures the behavior of the energy correlations at small angles. For simplicity, let us start with the two-point energy correlation \(\langle\mathcal{E}(n_{1})\mathcal{E}(n_{2})\rangle\). The leading OPE contribution at small angles \(z=\sin^{2}(\theta/2)\ll 1\) takes the form (including the 1 from the disconnected contribution)
\[\langle\mathcal{E}(n_{1})\mathcal{E}(n_{2})\rangle\sim 1+\frac{\langle \mathbb{O}_{3}^{+}\rangle_{H}}{z^{1-\gamma_{3}^{+}/2}}\,,\qquad z\ll 1, \tag{8.7}\]
where \(\mathbb{O}_{3}^{+}\) is the spin three light-ray operator of positive signature belonging to the stress-energy tensor Regge trajectory and \(\gamma_{3}^{+}\) is its anomalous dimension.
As follows from (8.7), the contribution of \(\mathbb{O}_{3}^{+}\) to the energy-energy correlation diverges at small angles for \(\gamma_{3}^{+}<2\). Assuming that this relation is satisfied, a necessary condition for (1.4)
to hold is
\[\langle\mathbb{O}_{3}^{+}\rangle_{H} \lesssim\frac{1}{\Delta_{H}^{h_{2}}}\,, \Delta_{H}\to\infty\,. \tag{112}\]
We then see from (110) that there is an emerging characteristic angle defined as \(\frac{1}{\Delta_{H}^{h_{2}}}\frac{1}{\theta_{*}^{2-\gamma_{3}^{+}}}\simeq 1\). It plays the same role as \(\theta_{0}\) in QCD. Namely, for \(\theta\lesssim\theta_{*}\) the second term on the right-hand side of (110) dominates and the energy-energy correlation is in the OPE regime. For \(\theta\gtrsim\theta_{*}\) this term is subdominant and the energy-energy correlation is in the flat regime. Notice that as the angle between the detectors decreases, the transition in QCD is OPE-to-flat, whereas for CFT energy correlations in heavy states it is flat-to-OPE.
Similarly, considering the multi-collinear limit of energy correlations, see e.g. [8; 49], we get a spin \(k+1\) light-ray operator
\[\langle\mathcal{E}(n_{1})\mathcal{E}(n_{2})...\mathcal{E}(n_{k}) \rangle_{c} \sim\langle\mathbb{O}_{k+1}^{+}\rangle_{H}, \tag{113}\]
and the conjecture (4) implies that
\[\langle\mathbb{O}_{k+1}^{+}\rangle_{H} \lesssim\frac{1}{\Delta_{H}^{h_{k}}}, \Delta_{H}\to\infty. \tag{114}\]
Let us now see how this comes about in a simple multi-particle model of the final state. Imagine that the source carries total energy \(E\) which is shared between \(K\) particles. In this case the operator \(\mathbb{O}_{J}^{+}\) schematically measures the \((J-1)\)-th moment of the energy distribution \(\langle E^{J-1}\rangle=\sum_{i=1}^{K}E_{i}^{J-1}\). Taking \(E_{i}=\frac{E}{K}\), we get that \(\langle\mathbb{O}_{J}^{+}\rangle\sim\frac{E^{J-1}}{K^{J-2}}\). In the heavy limit we expect that \(K(\Delta_{H})\to\infty\), which creates the hierarchy between the different connected contributions. For example, in the free scalar theory the source \(\phi^{K}\) creates a \(K\)-particle state, so that \(h_{k}=k-1\).
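A minimal numerical sketch of this counting (equal-energy final state with \(E=1\), purely illustrative):

```python
# (J-1)-th moment of the energy distribution for K particles of equal energy E/K:
# sum_i E_i^(J-1) = E^(J-1) / K^(J-2), i.e. suppressed by 1/K^(J-2) for J >= 3.
from fractions import Fraction

def moment(E, K, J):
    return K * Fraction(E, K)**(J - 1)

E = 1
for J in (2, 3, 4):
    print(J, [moment(E, K, J) for K in (10, 100, 1000)])
# J=2 gives E for any K (energy conservation); J=3,4 fall off as 1/K, 1/K^2.
```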
### Planar theories
In the discussion above we kept \(c_{T}\) fixed as we took \(\Delta_{H}\to\infty\). We expect that the suppression of connected energy correlations takes place in planar theories as well, where \(c_{T}\gg\Delta_{H}\gg 1\). In this case, however, the argument about clustering in the thermodynamic limit does not apply and a different argument is needed. As we have explicitly shown in this paper using the example of half-BPS operators in \(\mathcal{N}=4\) SYM at large charge \(K\), the suppression of connected correlations is different at weak (5) and at strong (6) coupling.
At weak coupling, perturbatively in \(\lambda\), the connected energy correlations are suppressed by \(\frac{1}{K}\). The mechanism is precisely the one described above, where the heavy operator creates a multi-particle state of weakly interacting particles with the number of particles \(\sim K\). We expect this picture to be qualitatively correct for \(\lambda\ll 1\ll K\).
At strong coupling, perturbatively in \(\frac{1}{\lambda}\), the connected energy correlations are suppressed by \(\frac{1}{\sqrt{\lambda}}\). In this case, the strongly coupled dynamics leads to a copious production of particles [50], with the characteristic energy controlled by \(\lambda\) and not by \(K\). We expect this picture to be correct for \(1\ll K\ll\lambda\).
More generally, it would be interesting to explore different scalings between \(K\) and \(\lambda\)[32] (in particular, see Figure 1 in that paper): \(K\sim 1\) (SUGRA); \(K\sim\lambda^{1/4}\) (short massive strings); \(K\sim\lambda^{1/2}\) (big classical strings). For example, the latter case is related to the so-called Frolov-Tseytlin limit [33; 34], where the heavy state describes a classical "fat" string in AdS and to leading order the correlators are again simple, given by the one-point functions on this classical solution.
## Acknowledgements
We thank Benjamin Basso, Joao Caetano, Hao Chen, Vasco Goncalves, Jack Holguin, Shota Komatsu, Cyrille Marquet, Sasha Monin, Ian Moult, Baur Mukhametzhanov, Kyriakos Papadodimas, Riccardo Rattazzi, Kai Yan and HuaXing Zhu for useful discussions. This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement number 949077). DC is supported by the French National Research Agency in the framework of the "Investissements d'avenir" program (ANR-15-IDEX-02).
## Appendix A Event shapes from correlation functions
In this appendix we recall the definition of an energy detector (and other flow operators) from [16; 17]. We illustrate the procedure of calculating energy correlations from space-time correlation functions on the simple example of a single energy detector at Born level.
The energy flow operator is defined by the integral
\[\mathcal{E}(n)=(n\bar{n})\int_{-\infty}^{\infty}dx_{-}\lim_{x_{+}\to\infty}x_{ +}^{2}\,T_{++}(x_{+}n+x_{-}\bar{n})\,,\] (A.1)
in terms of the covariant light-cone component of the energy-momentum tensor
\[T_{++}\equiv\bar{n}^{\mu}\bar{n}^{\nu}T_{\mu\nu}(x)/(n\bar{n})^{2}\,.\] (A.2)
Here the lightlike vectors \(n^{\mu},\ \bar{n}^{\mu}\) (with \(n^{2}=\bar{n}^{2}=0\)) span a basis for the Lorentz covariant decomposition of a vector into light-cone components, e.g.
\[x^{\mu}=x_{+}n^{\mu}+x_{-}\bar{n}^{\mu}\,,\qquad\quad x_{+}=\frac{(x\bar{n})}{ (n\bar{n})}\,,\qquad x_{-}=\frac{(xn)}{(n\bar{n})}\,.\] (A.3)
Lorentz covariance is maintained by requiring homogeneity under the rescaling of each light-like vector by an arbitrary positive scale, \(n^{\mu}\to\rho\,n^{\mu}\,,\ \ \bar{n}^{\mu}\to\rho^{\prime}\,\bar{n}^{\mu}\). For example, the operator (A.1) scales as \(\rho^{-3}\mathcal{E}(n)\). We can fix a Lorentz frame in which \(n^{\mu}=(1,\vec{n})\) and \(\bar{n}^{\mu}=(1,-\vec{n})\), where the unit vector \(\vec{n}\) (with \(\vec{n}^{2}=1\)) defines the direction of the energy detector on the celestial sphere. The vector \(\bar{n}\) is auxiliary and it drops out from the expressions for the energy correlations.
We recall that in the maximally supersymmetric \(\mathcal{N}=4\) Yang-Mills theory the energy-momentum tensor \(T_{\mu\nu}\) together with the R-symmetry current \(J_{\mu}\) and the half-BPS operator
of weight two \(O_{2}\) are members of the same supermultiplet. By analogy with the energy flow (114) one can define the R-charge and scalar flow operators (the R-symmetry indices are not displayed)
\[{\cal Q}(n) =(n\bar{n})\int_{-\infty}^{\infty}dx_{-}\lim_{x_{+}\to\infty}x_{+}^ {2}\,J_{+}(x_{+}n+x_{-}\bar{n})\,,\] \[{\cal O}(n) =(n\bar{n})\int_{-\infty}^{\infty}dx_{-}\lim_{x_{+}\to\infty}x_{+} ^{2}\,O_{2}(x_{+}n+x_{-}\bar{n})\,, \tag{116}\]
where \(J_{+}\equiv\bar{n}^{\mu}J_{\mu}(x)/(n\bar{n})\). We denote collectively these three flow operators as \({\cal J}_{s}(n)\) where the spin \(s\) takes the values \(s=0,1,2\).
The definition (114) of the flow operator involves two steps: (i) we send the detector, i.e. the projection \(T_{++}\) of the energy-momentum tensor, weighted with the factor \(x_{+}^{2}\), to spatial infinity; (ii) we integrate over the entire working time of the detector \(-\infty<x_{-}<\infty\). These manipulations are done with each insertion of the energy-momentum tensor into the Wightman correlation function of the half-BPS scalar source and sink of weight \(K\),
\[\langle{\cal E}(n_{1})\ldots{\cal E}(n_{k})\rangle=\sigma_{\rm tot}^{-1}\int d ^{4}x\ {\rm e}^{iqx}\langle 0|O_{K}(x,Y)\,{\cal E}_{1}(n_{1})\ldots{\cal E}_{k}(n_{ k})\,O_{K}(0,\overline{Y})|0\rangle_{W}\,. \tag{117}\]
In close analogy with the \(k-\)point correlation functions, the correlations (117) can be decomposed into connected pieces (cumulants) \(\langle{\cal E}(n_{1})\ldots{\cal E}(n_{k})\rangle_{c}\). It is convenient to do it by introducing the energy correlations via a generating functional,
\[\langle{\cal E}(n_{1})\ldots{\cal E}(n_{k})\rangle=\partial_{J_{1}}\ldots \partial_{J_{k}}Z(J)\Big{|}_{J_{i}=0}\,. \tag{118}\]
Here \(J_{i}\) are sources coupled to the energy flow operators \({\cal E}(n_{i})\) and the generating functional \(Z(J)\) is defined by
\[Z(J)=e^{\sum_{i}J_{i}\langle{\cal E}(n_{i})\rangle_{c}+\sum_{i<j}J_{i}J_{j} \langle{\cal E}(n_{i}){\cal E}(n_{j})\rangle_{c}+\sum_{i<j<m}J_{i}J_{j}J_{m} \langle{\cal E}(n_{i}){\cal E}(n_{j}){\cal E}(n_{m})\rangle_{c}+\ldots} \tag{119}\]
In particular,
\[\langle{\cal E}(n_{i})\rangle_{c}=\langle{\cal E}(n_{i})\rangle\,,\quad\langle {\cal E}(n_{i}){\cal E}(n_{j})\rangle_{c}=\langle{\cal E}(n_{i}){\cal E}(n_{j} )\rangle-\langle{\cal E}(n_{i})\rangle\langle{\cal E}(n_{j})\rangle\,,\quad\ldots \tag{120}\]
Note that the expressions for the connected correlations of four or more operators involve non-linear terms. Introducing the normalized energy flow operators \(\widehat{\cal E}(n_{i})={\cal E}(n_{i})/\langle{\cal E}(n_{i})\rangle\), one finds that (118) is equivalent to (3).
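For completeness, the first two of these relations can be checked directly by differentiating the generating functional above; a small sympy sketch, with the symbols \(c_{i}\), \(c_{ij}\), \(c_{ijk}\) standing for the connected correlations:

```python
# Differentiating Z(J) = exp(sum_i J_i c_i + sum_{i<j} J_i J_j c_ij + J1 J2 J3 c_123)
# reproduces the cumulant decomposition of the two- and three-point energy correlations.
import sympy as sp

J1, J2, J3 = sp.symbols('J1 J2 J3')
c1, c2, c3, c12, c13, c23, c123 = sp.symbols('c1 c2 c3 c12 c13 c23 c123')

Z = sp.exp(c1*J1 + c2*J2 + c3*J3
           + c12*J1*J2 + c13*J1*J3 + c23*J2*J3
           + c123*J1*J2*J3)

zero = {J1: 0, J2: 0, J3: 0}
print(sp.expand(sp.diff(Z, J1, J2).subs(zero)))       # c12 + c1*c2
print(sp.expand(sp.diff(Z, J1, J2, J3).subs(zero)))   # c123 + c1*c23 + c2*c13 + c3*c12 + c1*c2*c3
```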
In this paper we are mainly interested in the case of two energy flow operators, the so-called energy-energy correlation (EEC).16 For illustration purposes, in this appendix we show the details of the calculation of a single energy insertion at Born level [5],17
Footnote 16: In Section 7 we also discuss the correlations of other flow operators.
Footnote 17: In \({\cal N}=4\) SYM the three-point functions of the members of the stress-tensor multiplet are protected from quantum corrections. This is however not so for the four-point functions, the main subject of interest in this paper.
\[\langle{\cal E}(n)\rangle=\sigma_{\rm tot}^{-1}\int d^{4}x\ {\rm e}^{iqx} \langle 0|O_{K}(x,Y)\,{\cal E}(n)\,O_{K}(0,\overline{Y})|0\rangle_{W}^{(0)}\,. \tag{121}\]
The starting point is the three-point function of two half-BPS operators of weight \(K\) and one energy-momentum tensor. In Euclidean space it is given by
\[(G_{E})^{\mu\nu}(1,2,3) =\langle 0|O_{K}(x_{1},Y)T^{\mu\nu}(x_{2})O_{K}(x_{3},\overline{Y})|0 \rangle_{E}^{(0)}\] \[\sim\frac{(Y\overline{Y})^{K}}{x_{12}^{2}x_{2}^{2}(x_{1}^{2})^{ K-1}}\left(\frac{x_{12}^{\mu}}{x_{12}^{2}}+\frac{x_{2}^{\mu}}{x_{2}^{2}} \right)\left(\frac{x_{12}^{\nu}}{x_{12}^{2}}+\frac{x_{2}^{\nu}}{x_{2}^{2}} \right)-\frac{\delta^{\mu\nu}}{4}(\text{trace})\,, \tag{111}\]
where we have set \(x_{3}=0\), as indicated in (110). The last term on the right-hand side of (111) ensures that \(\delta_{\mu\nu}(G_{E})^{\mu\nu}(1,2,3)=0\).
We perform a Wick rotation to the Wightman function \((G_{W})^{\mu\nu}\) in Minkowski space-time by inserting the prescription \(x_{ij}^{2}\to x_{ij}^{2}-i0x_{ij}^{0}\) for \(i<j\). Next we project the free vector indices with \(\bar{n}_{\mu}\), according to the definition (109) (this removes the trace part). Using the decomposition (107) in the limit \(x_{2+}\to\infty\) we obtain \(x_{12}^{2}-i0x_{12}^{0}\to 2x_{2+}(x_{2-}(n\bar{n})-(x_{1}n)+i0)\) and \(x_{2}^{2}-i0x_{2}^{0}\to 2x_{2+}(x_{2-}(n\bar{n})-i0)\). With this we take the limit
\[\lim_{x_{2+}\to\infty}x_{2+}^{2}G_{W}{}^{\mu\nu}\bar{n}_{\mu}\bar{n}_{\nu}\sim \frac{(x_{1}n)^{2}}{((x_{1}n)-x_{2-}(n\bar{n})-i0)^{3}(x_{2-}(n\bar{n})-i0)^{3 }(x_{1}^{2}-i0x_{1}^{0})^{K-1}}\,. \tag{112}\]
We observe two poles in the variable \(x_{2-}\) located on the opposite sides of the real axis. Closing the integration contour in, say, the upper half-plane, we get
\[(n\bar{n})\int_{-\infty}^{\infty}dx_{2-}\lim_{x_{2+}\to\infty}x_{2+}^{2}G_{W}{ }^{\mu\nu}\bar{n}_{\mu}\bar{n}_{\nu}\sim\frac{(Y\overline{Y})^{K}}{((x_{1}n)- i0)^{3}(x_{1}^{2}-i0x_{1}^{0})^{K-1}}\,. \tag{113}\]
Notice that the auxiliary light-like vector \(\bar{n}\) has dropped out of the right-hand side. The last step in the procedure is the Fourier transform in (110) which gives
\[\langle\mathcal{E}(n)\rangle=\frac{1}{4\pi}\frac{(q^{2})^{2}}{(qn)^{3}}\,. \tag{114}\]
In Section 3.3 we deal with the two-point correlation \(\langle\mathcal{E}(n_{1})\mathcal{E}(n_{2})\rangle\) at Born level. The main contribution comes from the first diagram in Figure 2. It factorizes into two one-point expressions of the type (113), with two different direction vectors \(n_{1,2}\):
\[(n_{1}\bar{n}_{1})(n_{2}\bar{n}_{2})\int_{-\infty}^{\infty}dx_{2 -}\,dx_{3-}\lim_{x_{2+},x_{3+}\to\infty}x_{2+}^{2}x_{3+}^{2}\ G_{W}{}^{\mu\nu; \lambda\rho}(1,2,3,4)\bar{n}_{1\mu}\bar{n}_{1\nu}\bar{n}_{2\lambda}\bar{n}_{2\rho}\] \[\sim\frac{(Y\overline{Y})^{K}}{((x_{1}n_{1})-i0)^{3}((x_{1}n_{2}) -i0)^{3}(x_{1}^{2}-i0x_{1}^{0})^{K-1}}\,. \tag{115}\]
Introducing Schwinger parameters for the Fourier integral of the function in the second line, we arrive at Eq. (104) in the main text. There exists another diagram where the two detectors exchange one propagator, the other two legs being attached to the source and sink. In this case the detector time integration gives zero (naively).18
Footnote 18: A more careful treatment shows that this diagram contributes a contact term.
## Appendix B Four-point correlation functions of half-BPS scalar operators
In this appendix, we summarize the properties of the four-point correlation functions in \(\mathcal{N}=4\) SYM that we use in computing the energy correlations.
The four-point correlation functions of interest are
\[G_{K_{1}K_{2}K_{3}K_{4}}=\langle O_{K_{1}}(x_{1},Y_{1})\,O_{K_{2}}(x_{2},Y_{2}) \,O_{K_{3}}(x_{3},Y_{3})\,O_{K_{4}}(x_{4},Y_{4})\rangle\,, \tag{114}\]
where \(O_{K}(x,Y)\) are half-BPS operators defined in (10). They are annihilated by half of the \(\mathcal{N}=4\) supersymmetry charges and their scaling dimension \(\Delta=K\) is protected.
The half-BPS scalar operator (10) is the lowest component of a short \(\mathcal{N}=4\) supermultiplet. Among the operators (10) with different weights \(K\), the one with \(K=2\) plays a special role. The corresponding \(\mathcal{N}=4\) supermultiplet becomes ultrashort [51]. It contains all the conserved currents of the theory, including the \(SU(4)\) R-symmetry current of spin one and the energy-momentum tensor of spin two, hence the name "stress-energy supermultiplet". By virtue of \(\mathcal{N}=4\) superconformal symmetry, the four-point correlation functions of the operators belonging to the same supermultiplet are related to each other by Ward identities.
In application to the energy correlations, we encounter the four-point correlation function \(\langle O_{K}(1)T_{\mu_{1}\nu_{1}}(2)T_{\mu_{2}\nu_{2}}(3)O_{K}(4)\rangle\) where the half-BPS operators define the source/sink and the stress-energy tensors describe the calorimeters. Applying the \(\mathcal{N}=4\) superconformal Ward identities, this correlator can be expressed in terms of the four-point function of scalar operators \(\langle O_{K}(1)O_{2}(2)O_{2}(3)O_{K}(4)\rangle\). Below we summarize the properties of the latter.
The correlator \(G_{K22K}\) splits into the sum of two terms
\[G_{K22K}=\langle O_{K}(1)O_{2}(2)O_{2}(3)O_{K}(4)\rangle=G_{K22K}^{0}+G_{K22K} ^{\rm loop}\,. \tag{115}\]
The first, Born-level part is a rational function of the space-time coordinates and a polynomial in the auxiliary variables \(Y\), independent of the coupling. The second, coupling dependent correction part involves non-trivial functions originating from Feynman integrals.
The expression for the Born term \(G_{K22K}^{0}\) can be obtained by Wick contracting the scalar fields belonging to the four half-BPS operators. It is a polynomial of degree \((K+2)\) in the free scalar propagators
\[d_{ij}=d_{ji}\equiv\langle\phi(x_{i},Y_{i})\phi(x_{j},Y_{j}) \rangle=\frac{Y_{i}\cdot Y_{j}}{x_{ij}^{2}}\,, \tag{116}\]
where \(x_{ij}^{2}=(x_{i}-x_{j})^{2}\). The coefficients of this polynomial depend on the rank of the gauge group \(N_{c}\). In the planar limit, for \(N_{c}\to\infty\), the connected part of \(G_{K22K}^{0}\) is given by
\[G_{K22K}^{0}=\frac{1}{2}K^{2}(K-1)\,N_{c}^{K}\,d_{14}^{K-2}d_{12}d_{13}d_{24}d_{34}\,, \tag{117}\]
where the \(K-\)dependent combinatorial factor counts the number of Wick contractions.
The interacting (coupling dependent) part of (114) takes the following form in the planar limit (see [52] and references therein)
\[G_{K22K}^{\rm loop}=2\,\left(\frac{N_{c}}{2}\right)^{K-2}\,K^{2}\,R_{1234}\times d _{14}^{K-2}F_{K}(x,\lambda)\,. \tag{119}\]
Here \(R\) is a universal rational prefactor carrying \(SU(4)\) weight \(2\) and conformal weight \(1\) at each point,
\[R_{1234}=d_{13}^{2}d_{24}^{2}x_{13}^{2}x_{24}^{2}+d_{12}d_{13}d_{24}d_{34}\left( x_{14}^{2}x_{23}^{2}-x_{13}^{2}x_{24}^{2}-x_{12}^{2}x_{34}^{2}\right)+(1 \leftrightarrow 2)\,+\,(1\leftrightarrow 4)\,, \tag{120}\]
where the last two terms are obtained by exchanging the pair of coordinates \((x_{i},Y_{i})\) of the points indicated in the parentheses. According to its definition, \(R_{1234}\) is completely symmetric under the exchange of any pair of points. For the special choice \(Y_{2}=Y_{3}\) the relation (120) simplifies as
\[R_{1234}\Big{|}_{Y_{2}=Y_{3}}=d_{12}d_{13}d_{24}d_{34}x_{14}^{2}x_{23}^{2}\,. \tag{121}\]
As explained in [17], putting \(Y_{2}=Y_{3}\) is equivalent to projecting the product of the two operators \(O_{2}(2)O_{2}(3)\) in (114) onto the irreducible representation \({\bf 105}=[0,4,0]\) of \(SO(6)\sim SU(4)\). The importance of this choice is that the scalar-scalar correlation \(\left\langle{\cal O}{\cal O}\right\rangle_{K=2}\) in this channel coincides with the EEC. This property is however lost if the sources have \(K>2\).
The coupling-dependent function \(F_{K}(x,\lambda)\) is an \(SU(4)\) singlet, i.e. it does not depend on the \(Y-\)coordinates, and has conformal weight \(1\) at all four points \(x_{i}\) (for \(i=1,\ldots,4\)). At weak coupling, it admits the perturbative expansion
\[F_{K}=\sum_{\ell=1}^{\infty}\left(\frac{\lambda}{4\pi^{2}}\right)^{\ell}F_{K} ^{(\ell)}\,, \tag{122}\]
where the functions \(F_{K}^{(\ell)}\) can be expanded over a basis of conformal \(\ell-\)loop four-point integrals. At \(K=2\) the expansion (122) was derived in [53; 54; 55; 56] up to ten loops. For \(K\geq 3\) the analogous expansion is known up to five loops [57; 52]. 19 The conformal integrals are independent of \(K\) but they appear in the expression for \(F_{K}(x,\lambda)\) with \(K-\)dependent coefficients.
Footnote 19: An interesting interpretation of these results, together with a possible extension to any loop order, was proposed in [58].
The explicit expressions of the function in (122) for \(\ell=1,2\) are
\[F_{K}^{(1)} =g_{1234}\,,\] \[F_{K=2}^{(2)} =2h_{12;34}+2h_{13;24}+2h_{14;23}+\frac{1}{2}\big{(}x_{12}^{2}x_{ 34}^{2}+x_{13}^{2}x_{24}^{2}+x_{14}^{2}x_{23}^{2}\big{)}[g_{1234}]^{2}\,,\] \[F_{K>2}^{(2)} =2h_{12;34}+2h_{13;24}+h_{14;23}+\frac{1}{2}\big{(}x_{13}^{2}x_{2 4}^{2}+x_{12}^{2}x_{43}^{2}\big{)}[g_{1234}]^{2}\,, \tag{123}\]
where the notation was introduced for the one- and two-loop conformal integrals
\[g_{1234}=-\int\frac{d^{4}x_{5}}{x_{15}^{2}x_{25}^{2}x_{35}^{2}x_{45} ^{2}}\,,\] \[h_{12;34}=x_{34}^{2}\int\frac{d^{4}x_{5}\,d^{4}x_{6}}{(x_{15}^{2} x_{35}^{2}x_{45}^{2})x_{56}^{2}(x_{26}^{2}x_{36}^{2}x_{46}^{2})}. \tag{111}\]
Combining together (110) and (111), one can verify that the function \(F_{K=2}\) is invariant under the exchange of any pair of points in \((1,2,3,4)\), whereas \(F_{K>2}\) is symmetric in the two pairs of points \((1,4)\) and \((2,3)\) separately, in agreement with the Bose properties of the correlation function (109).
As follows from (111), the one-loop correction in (110) is independent of \(K\). At two loops, the functions \(F_{K=2}^{(2)}\) and \(F_{K>2}^{(2)}\) are given by different linear combinations of the same conformal integrals. The three-loop results of [52] show that the functions \(F_{K}^{(3)}\) are different for \(K=2\) and \(K=3\), and again become the same for all \(K\geq 4\). The proposed generalization of [58] indicates that a similar pattern continues at higher loops. Namely, the function \(F_{K}^{(\ell)}\) ceases to depend on \(K\) for \(K\geq\ell+1\). This feature allows us to study the event shapes recursively, see Section 4.3.
#### Mellin representation
The product \(\Phi_{K}=x_{14}^{2}x_{23}^{2}F_{K}(x,\lambda)\) is invariant under the conformal transformations and, therefore, it depends on the two cross-ratios (106). For our purposes, it is convenient to use the Mellin representation
\[F_{K}(x,\lambda)=\frac{1}{x_{14}^{2}x_{23}^{2}}\int\frac{dj_{1}dj_{2}}{(2\pi i )^{2}}M_{K}(j_{1},j_{2})\bigg{(}\frac{x_{12}^{2}x_{34}^{2}}{x_{14}^{2}x_{23}^ {2}}\bigg{)}^{j_{1}}\bigg{(}\frac{x_{13}^{2}x_{24}^{2}}{x_{14}^{2}x_{23}^{2}} \bigg{)}^{j_{2}}\,, \tag{112}\]
or equivalently \(\mathcal{M}[F_{K}]=M_{K}(j_{1},j_{2})\). Here the integration contours are the same as in (107). The symmetry of \(F_{K}(x,\lambda)\) under the exchange of \(x_{1}\) and \(x_{4}\) translates into \(M_{K}(j_{1},j_{2})=M_{K}(j_{2},j_{1})\).
The Mellin amplitude \(M_{K}(j_{1},j_{2})\) admits a weak-coupling expansion analogous to (110), see (108). To find the corresponding functions \(M_{K}^{(\ell)}(j_{1},j_{2})\), it is sufficient to know the Mellin transforms of the various conformal integrals in (111). Let us denote the Mellin amplitudes for the integrals (111) as
\[\mathcal{M}[g_{1234}]=M^{(1)}(j_{1},j_{2})\,,\] \[\mathcal{M}[h_{14;23}]=M^{(2)}(j_{1},j_{2})\,. \tag{113}\]
Their explicit expressions are given below, see (109). Then, the Mellin amplitudes of the
remaining conformal integrals in (B.9) are given by
\[\mathcal{M}[h_{12;34}]=M^{(2)}(-1-j_{1}-j_{2},j_{2})\,,\] \[\mathcal{M}[h_{13;24}]=M^{(2)}(-1-j_{1}-j_{2},j_{1})\,,\] \[\mathcal{M}[x_{23}^{2}x_{14}^{2}g_{1234}^{2}]=\widetilde{M}^{(2)} (j_{1},j_{2})\,,\] \[\mathcal{M}[x_{13}^{2}x_{24}^{2}g_{1234}^{2}]=\widetilde{M}^{(2)} (j_{1},j_{2}-1)\,,\] \[\mathcal{M}[x_{12}^{2}x_{34}^{2}g_{1234}^{2}]=\widetilde{M}^{(2)} (j_{1}-1,j_{2})\,.\] (B.13)
The functions \(M^{(1)}\), \(M^{(2)}\) and \(\widetilde{M}^{(2)}\) introduced above are
\[M^{(1)}(j_{1},j_{2}) =-\frac{1}{4}\left[\Gamma(-j_{1})\Gamma(-j_{2})\Gamma(1+j_{1}+j_{2})\right]^{2}\,,\] \[M^{(2)}(j_{1},j_{2}) =-\frac{1}{4}\Gamma(-j_{1})\Gamma(-j_{2})\Gamma(1+j_{1}+j_{2})\] \[\times\int\frac{dj_{1}^{\prime}\,dj_{2}^{\prime}}{(2\pi i)^{2}}\frac{\Gamma(j_{1}^{\prime}-j_{1})\Gamma(j_{2}^{\prime}-j_{2})\Gamma(1+j_{1}+j_{2}-j_{1}^{\prime}-j_{2}^{\prime})}{\Gamma(1-j_{1}^{\prime})\Gamma(1-j_{2}^{\prime})\Gamma(1+j_{1}^{\prime}+j_{2}^{\prime})}M^{(1)}(j_{1}^{\prime},j_{2}^{\prime})\,,\] \[\widetilde{M}^{(2)}(j_{1},j_{2}) =\int\frac{dj_{1}^{\prime}\,dj_{2}^{\prime}}{(2\pi i)^{2}}M^{(1)}(j_{1}-j_{1}^{\prime},j_{2}-j_{2}^{\prime})M^{(1)}(j_{1}^{\prime},j_{2}^{\prime})\,.\] (B.14)
We apply the above relations to find from (B.9) the Mellin amplitudes of the one- and two-loop \(F_{K}\),
\[M_{K}^{(1)}(j_{1},j_{2}) =M^{(1)}(j_{1},j_{2})\,,\] \[M_{K=2}^{(2)}(j_{1},j_{2}) =2\left[M^{(2)}(j_{1},j_{2})+M^{(2)}(-1-j_{1}-j_{2},j_{2})+M^{(2) }(-1-j_{1}-j_{2},j_{1})\right]\] \[+\frac{1}{2}\left[\widetilde{M}^{(2)}(j_{1},j_{2})+\widetilde{M}^ {(2)}(j_{1},j_{2}-1)+\widetilde{M}^{(2)}(j_{1}-1,j_{2})\right]\,,\] \[M_{K>2}^{(2)}(j_{1},j_{2}) =M^{(2)}(j_{1},j_{2})+2\left[M^{(2)}(-1-j_{1}-j_{2},j_{2})+M^{(2) }(-1-j_{1}-j_{2},j_{1})\right]\] \[+\frac{1}{2}\left[\widetilde{M}^{(2)}(j_{1},j_{2}-1)+\widetilde{M }^{(2)}(j_{1}-1,j_{2})\right]\,.\] (B.15)
#### Energy correlations in the Mellin representation
According to (5.8), the EEC is given by the convolution of the Mellin amplitude \(M_{K}(j_{1},j_{2})\) and the detector kernel \(\mathcal{K}_{K}(j_{1}+j_{2},z)\) from (5.9). Taking into account the symmetry of the integrand of (5.8) under the exchange \(j_{1}\leftrightarrow j_{2}\), we get from (B.15)
\[\mathcal{F}_{K=2}^{(2)}(z)=\int\frac{dj_{1}dj_{2}}{\left(2\pi i \right)^{2}}\left[\frac{\Gamma(1-j_{1}-j_{2})}{\Gamma(1-j_{1})\Gamma(1-j_{2})} \right]^{2}\mathcal{K}_{K=2}(j_{1}+j_{2},z)\] \[\times\left[2M^{(2)}(j_{1},j_{2})+4M^{(2)}(-1-j_{1}-j_{2},j_{1})+ \frac{1}{2}\widetilde{M}^{(2)}(j_{1},j_{2})+\widetilde{M}^{(2)}(j_{1},j_{2}-1) \right].\] (B.16)
Here \(M^{(2)}(j_{1},j_{2})\) and \(\widetilde{M}^{(2)}(j_{1},j_{2})\) are the Mellin amplitudes (B.14) and the detector kernel is
\[\mathcal{K}_{K=2}(j,z)=\frac{2}{\pi}\sin(\pi j)z^{-2-j}(1-z)^{j-1}\,.\] (B.17)
The auxiliary function \({\cal F}^{(2)}_{\rm aux}(z)\) introduced in (5.19) is given by the convolution of the Mellin amplitude \(M^{(2)}_{K>2}(j_{1},j_{2})\) and the detector kernel \({\cal K}_{K=2}(j_{1}+j_{2},z)\),
\[{\cal F}^{(2)}_{\rm aux}(z) =\int\frac{dj_{1}dj_{2}}{(2\pi i)^{2}}\left[\frac{\Gamma(1-j_{1}-j _{2})}{\Gamma(1-j_{1})\Gamma(1-j_{2})}\right]^{2}{\cal K}_{K=2}(j_{1}+j_{2},z)\] \[\times\left[M^{(2)}(j_{1},j_{2})+4M^{(2)}(-1-j_{1}-j_{2},j_{1})+ \widetilde{M}^{(2)}(j_{1},j_{2}-1)\right].\] (B.18)
Evaluating the Mellin integrals (B.16) and (B.18), one arrives at (4.34) and (4.36), respectively.
## Appendix C Detector kernel of the Mellin representation
In this appendix, we present some details of the derivation of the Mellin representation (5.8) and (5.9) of the energy correlation. We recall that the operator \({\cal E}(n)\) in (3.4) measures the flux of the energy in the direction specified by the null vector \(n^{\mu}\) and it is built out of the energy-momentum tensor.
If the underlying theory contains a conserved current of spin \(s\),\({}^{20}\) we can define the flow operator \({\cal J}_{s}(n)\) in an analogous manner. For spin \(s=2\) it coincides with \({\cal E}(n)\), whereas for arbitrary spin \(s\) it measures the flow of the corresponding conserved charge.\({}^{21}\) We generalize (3.4) and define the correlation of charges with different spins
Footnote 20: If \(s=0\) the term ‘conserved current’, strictly speaking, does not apply. We justify this name by the fact that in \({\cal N}=4\) SYM the operator \(O_{2}\) of spin \(0\) belongs to the supermultiplet of the conserved energy-momentum tensor, supersymmetry and R-symmetry currents.
Footnote 21: For the definition of the flow operators with spin \(s=0,1,2\) in \({\cal N}=4\) SYM see (A.1) and (A.4). Namely, \({\cal J}_{s=0}={\cal O}\), \({\cal J}_{s=1}={\cal Q}\), \({\cal J}_{s=2}={\cal E}\).
\[\langle{\cal J}_{s_{1}}(n_{1}){\cal J}_{s_{2}}(n_{2})\rangle_{K}=\sigma_{\rm tot }^{-1}(q)\int d^{4}x\ {\rm e}^{iqx}\langle O_{K}(x){\cal J}_{s_{1}}(n_{1}){\cal J}_{s_{2}}(n_{2}) \bar{O}_{K}(0)\rangle\,,\] (C.1)
where the total cross-section \(\sigma_{\rm tot}(q)\) is given by (3.3).
In \({\cal N}=4\) SYM the charges \({\cal J}_{s}\) are members of the stress-energy multiplet. As a consequence, the correlation functions \(\langle O_{K}(x){\cal J}_{s_{1}}(n_{1}){\cal J}_{s_{2}}(n_{2})\bar{O}_{K}(0)\rangle\) with different spins \(s_{1}\) and \(s_{2}\) are related to each other by \({\cal N}=4\) superconformal Ward identities. The general solution to these identities was derived in [19],
\[\langle O_{K}(x){\cal J}_{s_{1}}(n_{1}){\cal J}_{s_{2}}(n_{2})\bar{O}_{K}(0) \rangle=\frac{2^{s_{1}+1}i^{s_{1}+s_{2}}}{(n_{1}n_{2})^{s_{1}+1}}\frac{(xn_{2} )^{s_{1}-s_{2}}}{(x^{2})^{s_{1}+K-1}}\frac{d}{d\gamma^{s_{1}}}(1-\gamma)^{s_{1 }}\gamma^{s_{2}}\frac{d}{d\gamma^{s_{2}}}{\cal G}_{K}(\gamma)\,,\] (C.2)
where \(\gamma\) is defined in (5.7) and the function \({\cal G}_{K}(\gamma)\) is independent of the spins \(s_{1}\geq s_{2}\).
The relation (C.2) was derived in Ref. [19] for sources of weight \(K=2\). As discussed around (B.5), the generalization to arbitrary weight \(K\) can be achieved by simply attaching \((K-2)\) additional scalar propagators stretched between the source/sink operators, thus introducing the dependence of \({\cal G}_{K}(\gamma)\) on the total weight. Going through the analysis in [19] we see that the origin of the relation (C.2) has to do with the supermultiplet buildup at the detector points. At
the source/sink points we stick to the lowest weight state of the supermultiplet, so the result is modified in an obvious way by the additional weight \((K-2)\) at these points.
Because the function \({\cal G}_{K}(\gamma)\) is independent of the spins, we can obtain it by setting \(s_{1}=s_{2}=0\) in (C.2). In this case, the flow operators \({\cal J}_{s_{i}=0}(n_{i})\) are related to the half-BPS scalar operators \(O_{K=2}(x_{i})\), see (111), and the left-hand side of (C.2) is given by the correlation function \(\langle O_{K}(x)O_{2}(x_{2})O_{2}(x_{3})\bar{O}_{K}(0)\rangle\)\({}^{22}\) integrated over \(x_{2}\) and \(x_{3}\). It is convenient to use the Mellin representation of this correlation function
Footnote 22: For reasons explained in [17; 19] (see also (112)) one restricts the four-point scalar correlator to the channel \({\bf 105}\) in the R-symmetry decomposition.
\[\langle O_{K}(x_{1})O_{2}(x_{2})O_{2}(x_{3})\bar{O}_{K}(x_{4})\rangle =\frac{1}{(x_{14}^{2})^{K-2}x_{12}^{2}x_{13}^{2}x_{24}^{2}x_{34}^ {2}}\] \[\times\int\frac{dj_{1}dj_{2}}{(2\pi i)^{2}}M_{K}(j_{1},j_{2}) \bigg{(}\frac{x_{12}^{2}x_{34}^{2}}{x_{14}^{2}x_{23}^{2}}\bigg{)}^{j_{1}} \bigg{(}\frac{x_{13}^{2}x_{24}^{2}}{x_{14}^{2}x_{23}^{2}}\bigg{)}^{j_{2}} \tag{115}\]
with \(x_{1}=x\) and \(x_{4}=0\). This relation has been obtained in Euclidean signature. Before we can proceed with the detector limit and time integration from the detector definitions (110) and (111), we have to perform the analytic continuation to Minkowski space-time. As explained in [16; 17; 59], this amounts to replacing \(x^{2}\to x^{2}-i0x^{0}\) and \((xn_{i})\to(xn_{i})-i0\) in (C.2) and (115). The sign of the '\(i0\)' prescription is determined by the order of the operators inside the correlation function (C.2). Now we can repeat the calculation of [19] to get
\[{\cal G}_{K}(\gamma)=-\frac{1}{16\pi^{3}}\int\frac{dj_{1}dj_{2}}{(2\pi i)^{2} }\left[\frac{\Gamma(1-j_{1}-j_{2})}{\Gamma(1-j_{1})\Gamma(1-j_{2})}\right]^{ 2}M_{K}(j_{1},j_{2})\gamma^{j_{1}+j_{2}-1}\,, \tag{116}\]
where \(\gamma\) is defined in (5.7).
The last step is the Fourier transform in (C.1). Expanding the derivatives in (C.2), we find that it is given by a linear combination of integrals of the form
\[I(a,b,c) =\int d^{4}x\,{\rm e}^{iqx}\,\frac{(xn_{2})^{a}}{(x^{2}-i0x^{0})^ {b}}\gamma^{c}\] \[=((n_{1}n_{2})/2)^{-c}\int d^{4}x\,{\rm e}^{iqx}\,\frac{(xn_{1}-i 0)^{c}(xn_{2}-i0)^{a+c}}{(x^{2}-i0x^{0})^{b+c}}\,. \tag{117}\]
Using the Schwinger parameterization they can be evaluated as
\[I(a,b,c)=((n_{1}n_{2})/2)^{-c}\frac{(-1)^{b}i^{-a}}{\Gamma(-c) \Gamma(-a-c)}2\pi^{3}\frac{(q^{2}/4)^{b+c-2}}{\Gamma(b+c)\Gamma(b+c-1)}\] \[\times\bigg{(}\frac{q^{2}}{2(qn_{1})}\bigg{)}^{-c}\bigg{(}\frac{ q^{2}}{2(qn_{2})}\bigg{)}^{-a-c}\int_{0}^{1}d\tau_{1}d\tau_{2}\,\tau_{1}^{-c-1} \tau_{2}^{-a-c-1}(1-\tau_{1}-\tau_{2}+z\tau_{1}\tau_{2})^{b+c-2}\,, \tag{118}\]
where the integration goes over the region \(1-\tau_{1}-\tau_{2}+z\tau_{1}\tau_{2}\geq 0\).
Combining all factors together we find from (108)
\[\langle\mathcal{J}_{s_{1}}(n_{1})\mathcal{J}_{s_{2}}(n_{2})\rangle_{K} =\frac{(q^{2})^{s_{1}+s_{2}}}{2(4\pi)^{2}(qn_{1})^{s_{1}+1}(qn_{2}) ^{s_{2}+1}}\] \[\times\int\frac{dj_{1}dj_{2}}{(2\pi i)^{2}}\left[\frac{\Gamma(1-j _{1}-j_{2})}{\Gamma(1-j_{1})\Gamma(1-j_{2})}\right]^{2}M_{K}(j_{1},j_{2}) \mathcal{K}_{K}^{(s_{1},s_{2})}(j_{1}+j_{2},z)\,. \tag{109}\]
Here the dependence on the coupling constant resides in the Mellin amplitude \(M_{K}(j_{1},j_{2})\). The dependence on the variable \(z\) and the spins \(s_{1},s_{2}\) comes from the detector kernel
\[\mathcal{K}_{K}^{(s_{1},s_{2})}(j,z)=\sum_{k=0}^{s_{1}}(-1)^{k} \binom{s_{1}}{k}\frac{\Gamma(j)\Gamma(j+k)}{\Gamma(j-s_{2})\Gamma(j+k-s_{1})}\] \[\times\frac{z^{1-j-k}\,\Gamma(K+1)\Gamma(K-1)}{\Gamma(K-2+j+k) \Gamma(K+s_{1}+s_{2}-1-j-k)}\,_{2}F_{1}\left(\genfrac{}{}{0.0pt}{}{s_{1}+1-j-k,s_{2}+1-j-k}{K+s_{1}+s_{2}-1-j-k}\Big{|}z\right), \tag{110}\]
where the hypergeometric function arises from the \(\tau_{1},\tau_{2}\) integration in (118).
For \(s_{1}=s_{2}=2\) the relation (109) coincides with (5.8) and (5.9), and the kernel (110) reduces to (5.9). At large \(K\) the relation (110) simplifies as
\[\mathcal{K}_{K}^{(s_{1},s_{2})}(j,z)=K^{3-s_{1}-s_{2}}\sum_{k=0}^{s_{1}}\binom {s_{1}}{k}\frac{(-1)^{k}z^{1-j-k}\Gamma(j)\Gamma(j+k)}{\Gamma(j-s_{2})\Gamma( j+k-s_{1})}\,. \tag{111}\]
Notice that it scales as \(1/K\) for \(s_{1}=s_{2}=2\).
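As a numerical sanity check of this limit, the following minimal sketch (assuming Python with mpmath; the values of \(j\), \(z\), \(s_{1}\), \(s_{2}\) are arbitrary sample points) evaluates the full kernel and its large-\(K\) form and confirms that their ratio approaches one as \(K\) grows.

```python
from mpmath import mp, mpf, gamma, hyp2f1, binomial

mp.dps = 50  # high precision to handle the large gamma functions safely

def kernel_full(K, s1, s2, j, z):
    """Detector kernel for general weight K and integer spins s1 >= s2."""
    total = mpf(0)
    for k in range(s1 + 1):
        pref = (-1)**k * binomial(s1, k) \
            * gamma(j) * gamma(j + k) / (gamma(j - s2) * gamma(j + k - s1))
        weight = z**(1 - j - k) * gamma(K + 1) * gamma(K - 1) \
            / (gamma(K - 2 + j + k) * gamma(K + s1 + s2 - 1 - j - k))
        total += pref * weight * hyp2f1(s1 + 1 - j - k, s2 + 1 - j - k,
                                        K + s1 + s2 - 1 - j - k, z)
    return total

def kernel_large_K(K, s1, s2, j, z):
    """Leading large-K behaviour of the same kernel."""
    total = mpf(0)
    for k in range(s1 + 1):
        total += (-1)**k * binomial(s1, k) * z**(1 - j - k) \
            * gamma(j) * gamma(j + k) / (gamma(j - s2) * gamma(j + k - s1))
    return K**(3 - s1 - s2) * total

j, z, s1, s2 = mpf('0.37'), mpf('0.6'), 2, 2
for K in (50, 500, 5000):
    ratio = kernel_full(K, s1, s2, j, z) / kernel_large_K(K, s1, s2, j, z)
    print(K, ratio)   # approaches 1 with O(1/K) corrections
```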
## Appendix D Plus-distributions
In this appendix, we recall the definition of the plus-distributions that appear in the expressions for the contact terms discussed in Section 6.
Following [60], we define
\[\int_{0}^{1}dz\,\left[\frac{1}{z}\right]_{+}\phi(z)=\int_{0}^{1} \frac{dz}{z}[\phi(z)-\phi(0)]\,, \tag{112}\] \[\int_{0}^{1}dz\,\left[\frac{\log^{k}(z)}{z}\right]_{+}\phi(z)= \int_{0}^{1}\frac{dz}{z}\,\log^{k}(z)[\phi(z)-\phi(0)]\,, \tag{113}\]
where \(\phi(z)\) is a test function. The distributions \([1/(1-z)]_{+}\) and \([\log^{k}(1-z)/(1-z)]_{+}\) are defined in the same manner.
In general, for \(\operatorname{Re}\alpha>-n-1\) and \(\alpha\neq-1,-2,\ldots,-n\) the distribution \(z_{+}^{\alpha}\) is defined as [60]
\[\int_{0}^{1}dz\,z_{+}^{\alpha}\,\phi(z)=\int_{0}^{1}dz\,z^{\alpha}\left[\phi( z)-\sum_{k=0}^{n-1}\frac{z^{k}}{k!}\phi^{(k)}(0)\right]+\sum_{k=1}^{n}\frac{ \phi^{(k-1)}(0)}{(k-1)!\,(\alpha+k)}\,, \tag{114}\]
where \(\phi^{(k)}(0)\) denotes the \(k\)-th derivative, and similarly for the distribution \((1-z)_{+}^{\alpha}\). Note that \(z_{+}^{-1}\) is not the value of \(z_{+}^{\alpha}\) at \(\alpha=-1\). The distribution \(z_{+}^{\alpha}\) admits the following Laurent series expansion in the vicinity of \(\alpha=-1\):
\[z_{+}^{\alpha}=\frac{\delta(z)}{\alpha+1}+z_{+}^{-1}+(\alpha+1)[z^{-1}\log z]_{+}+\ldots+\frac{(\alpha+1)^{k}}{k!}[z^{-1}\log^{k}z]_{+}+\ldots \tag{115}\]
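As an illustration of how this expansion works in practice, here is a minimal numerical sketch (assuming Python with mpmath; the test function \(\phi(z)=\cos z\) and the value of \(\alpha\) are arbitrary choices). It compares the ordinary integral of \(z^{\alpha}\phi(z)\), which is finite for \(\alpha>-1\), with the series above truncated after the \(\log^{2}\) term.

```python
from mpmath import mp, mpf, quad, cos, log

mp.dps = 25
phi = lambda z: cos(z)        # sample test function
alpha = mpf('-0.98')          # close to -1, but still with Re(alpha) > -1

# Left-hand side: the ordinary integral of z^alpha * phi(z) over [0, 1]
lhs = quad(lambda z: z**alpha * phi(z), [0, 1])

# Right-hand side: delta-function term plus the first few plus-distribution terms,
# each evaluated directly from the subtracted integrands in the definitions above
plus_term = lambda k: quad(lambda z: log(z)**k * (phi(z) - phi(0)) / z, [0, 1])
rhs = phi(0) / (alpha + 1) + plus_term(0) \
    + (alpha + 1) * plus_term(1) \
    + (alpha + 1)**2 / 2 * plus_term(2)

print(lhs, rhs)   # the two values agree up to O((alpha+1)^3) corrections
```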